Higher-order modes of vacuum-clad ultrathin optical fibers
Fam Le Kien
Thomas Busch
Viet Giang Truong
Síle Nic Chormaic
Quantum Systems Unit, Okinawa Institute of Science and Technology Graduate University, Onna, Okinawa 904-0495, Japan
Light-Matter Interactions Unit, Okinawa Institute of Science and Technology Graduate University, Onna, Okinawa 904-0495, Japan
School of Chemistry and Physics, University of KwaZulu-Natal, Durban 4001, South Africa
arXiv:1703.00109v1 [physics.optics] 1 Mar 2017 (Dated: July 25, 2018)
We present a systematic treatment of higher-order modes of vacuum-clad ultrathin optical fibers. We show that, for a given fiber, the higher-order modes have larger penetration lengths, larger effective mode radii, and larger fractional powers outside the fiber than the fundamental mode. We calculate, both analytically and numerically, the Poynting vector, propagating power, energy, angular momentum, and helicity (or chirality) of the guided light. The axial and azimuthal components of the Poynting vector can be negative with respect to the direction of propagation and the direction of phase circulation, respectively, depending on the position, the mode type, and the fiber parameters. The orbital and spin parts of the Poynting vector may also have opposite signs in some regions of space. We show that the angular momentum per photon decreases with increasing fiber radius and increases with increasing azimuthal mode order. The orbital part of angular momentum of guided light depends not only on the phase gradient but also on the field polarization, and is positive with respect to the direction of the phase circulation axis. Meanwhile, depending on the mode type, the spin and surface parts of angular momentum and the helicity of the field can be negative with respect to the direction of the phase circulation axis.
I. INTRODUCTION
Near-field optics using optical fibers is currently a highly active and productive area of research that has implications for optical communication, sensing, computing, and even quantum information. Its main tools are so-called nanofibers, which are optical fibers that are tapered to a diameter comparable to or smaller than the wavelength of light [1][2][3]. The essence of the tapering technique is to heat and pull a single-mode optical fiber to a very small thickness, while maintaining the taper condition adiabatically [1][2][3][4]. Due to the tapering, the original core almost vanishes and the refractive indices that determine the guiding properties of the tapered fiber are those of the original silica cladding and the surrounding vacuum. Thus, these fibers can be treated as very thin vacuum-clad silica-core fibers.
In a vacuum-clad nanofiber, the guided field penetrates an appreciable distance into the surrounding medium and appears as an evanescent wave carrying a significant fraction of the power and having a complex polarization pattern [5][6][7]. These fibers offer high transmission and strong confinement of guided light in the transverse plane of the fiber. This confinement allows one to efficiently couple guided light to emitters placed on or near the fiber surface. Such fibers are therefore versatile tools for coupling light and matter and have a wide range of potential practical applications [8,9]. For example, they have been used for trapping atoms [10][11][12], for probing atoms [13][14][15][16][17][18][19], molecules [20], quantum dots [21], and color centers in nanodiamonds [22,23], and for mechanical manipulation of small particles [24][25][26]. Due to the lack of cutoff as well as the possession of a small mode area and a simple mode structure, the fundamental HE 11 mode has been exploited in most studies to date.
However, tapered fibers can also be fabricated with slightly larger diameters and/or larger refractive indices so that they can support not only the fundamental HE 11 mode but also several higher-order modes. Compared to the HE 11 mode, the higher-order modes have larger cutoff size parameters and more complex intensity, phase, and polarization distributions. In addition, the higher-order modes can have larger angular momentum compared to the HE 11 mode. For ease of reference, the micro- and nanofibers that can support the fundamental mode and several higher-order modes are called ultrathin fibers in this paper.
Theoretical studies have shown that ultrathin fibers with higher-order modes can be used to trap, probe, and manipulate atoms, molecules, and particles [27][28][29][30][31][32][33]. The excitation of higher-order modes has been studied [34,35], and the production of ultrathin fibers for higherorder mode propagation with high transmission has been demonstrated [36][37][38]. First experimental studies on the interaction between higher-order modes and atoms [39] or particles [40,41] have also been reported.
Despite increased interest in higher-order modes of ultrathin fibers, systematic treatments for the basic properties of light fields in such modes do not exist. Although the full and exact fiber theory [42] is also applicable to ultrathin fibers, deep understanding can only be reached by combining a systematic and comprehensive analysis with detailed numerical calculations for fibers with parameters in the range of experimental interest. The purpose of this work is to present such a systematic treatment. We show that, for a given fiber, the higher-order modes have larger penetration lengths, larger effective mode radii, and larger fractional powers outside the fiber than the fundamental mode. We calculate analytically and numerically the Poynting vector, propagating power, energy, angular momentum, and helicity (or chirality) of guided light.
The paper is organized as follows. In Sec. II we review the theory of guided modes of optical fibers and present the results of numerical calculations for the propagation constants and penetration lengths of the modes of fibers with the parameters in the range of experimental interest. Section III is devoted to the study of the electric intensity distribution and the effective mode radius. In Sec. IV we calculate the Poynting vector, propagating power, and energy per unit length, and examine the orbital and spin parts of the Poynting vector. Section V is devoted to the study of angular momentum of guided light and its orbital, spin, and surface parts. In Sec. VI we calculate the helicity and the associated chirality of guided light. Our conclusions are given in Sec. VII.
II. GUIDED MODES OF OPTICAL FIBERS
In this section, we first briefly review the theory of guided modes of optical fibers and then calculate the propagation constants and evanescent-wave penetration lengths of the fundamental mode and higher-order modes of ultrathin fibers with parameters in the range of experimental interest.
For this we consider the model of a step-index fiber that is a dielectric cylinder of radius a and refractive index n 1 , surrounded by an infinite background medium of refractive index n 2 , where n 2 < n 1 . We use Cartesian coordinates {x, y, z}, where z is the coordinate along the fiber axis, and also cylindrical coordinates {r, ϕ, z}, where r and ϕ are the polar coordinates in the fiber transverse plane xy.
For a guided light field of frequency ω (free-space wavelength λ = 2πc/ω and free-space wave number k = ω/c), the propagation constant β is determined by the fiber eigenvalue equation [42]
$$\left[\frac{J'_l(ha)}{haJ_l(ha)}+\frac{K'_l(qa)}{qaK_l(qa)}\right]\left[\frac{n_1^2 J'_l(ha)}{haJ_l(ha)}+\frac{n_2^2 K'_l(qa)}{qaK_l(qa)}\right]=l^2\left(\frac{1}{h^2a^2}+\frac{1}{q^2a^2}\right)^2\frac{\beta^2}{k^2}. \tag{1}$$
Here, we have introduced the parameters $h=(n_1^2k^2-\beta^2)^{1/2}$ and $q=(\beta^2-n_2^2k^2)^{1/2}$, which characterize the scales of the spatial variations of the field inside and outside the fiber, respectively. The integer index l = 0, 1, 2, . . . is the azimuthal mode order, which determines the helical phasefront and the associated phase gradient in the fiber transverse plane. The notations $J_l$ and $K_l$ stand for the Bessel functions of the first kind and the modified Bessel functions of the second kind, respectively. The notations $J'_l(x)$ and $K'_l(x)$ stand for the derivatives of $J_l(x)$ and $K_l(x)$ with respect to the argument x.
For l ≥ 1, the eigenvalue equation (1) leads to hybrid HE and EH modes [42], for which the eigenvalue equations are given by Eqs. (A1) and (A2) in Appendix A. We label these modes as HE lm and EH lm , where l = 1, 2, . . . is the azimuthal and m = 1, 2, . . . the radial mode orders. The radial mode order m implies that the HE lm or EH lm mode is the m-th solution to the corresponding eigenvalue equation.
For l = 0, the eigenvalue equation (1) leads to TE and TM modes [42], for which the eigenvalue equations are given by Eqs. (A4) and (A5) in Appendix A. We label these modes as TE 0m and TM 0m, where again m = 1, 2, . . . is the radial mode order and the subscript 0 indicates that the azimuthal mode order of each mode is l = 0.

We are interested in vacuum-clad ultrathin fibers, which can support not only the fundamental HE 11 mode but also several higher-order modes in the optical region. For this we plot in Fig. 1 the propagation constant β for the HE 11 mode and several higher-order modes as a function of the fiber radius a for a light wavelength chosen to be λ = 780 nm. The fiber is assumed to be made of silica, with a refractive index n 1 = 1.4537, and the surrounding medium is air or vacuum, with a refractive index n 2 = 1. One can see that the first two higher-order modes, TE 01 and TM 01, appear when a ≃ 283 nm, and the next higher-order mode, HE 21, appears when a ≃ 325 nm. It is clear from the numerical results presented in Fig. 1 that the number of modes supported by the fiber increases with increasing fiber radius a.

Outside the fiber, the guided modes are evanescent waves in the radial direction r. The penetration depth is characterized by the parameter Λ = 1/q, which we show for the HE 11 mode and several higher-order modes as a function of the fiber radius a in Fig. 2. One can see that, near the cutoffs, the penetration length Λ is large, that is, the field is not tightly confined inside the fiber. Furthermore, when the fiber radius a increases, the penetration length decreases to the limiting value $\Lambda_{\min}=1/(k\sqrt{n_1^2-n_2^2})$. In general, for a given fiber, the higher-order modes have larger penetration lengths than the HE 11 mode.
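To make the mode-finding procedure concrete, the following minimal Python sketch (the helper name, grid resolution, and pole-rejection threshold are our own choices, not from the paper) locates the propagation constant of the TE 01 mode from the eigenvalue equation (A4) by scanning for sign changes of the characteristic function on the guiding interval n 2 k < β < n 1 k and refining with a root finder; it also reports the penetration length Λ = 1/q:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import j0, j1, k0, k1

# Fiber parameters used in the paper's numerical examples
wavelength = 780e-9          # free-space wavelength (m)
a = 400e-9                   # fiber radius (m)
n1, n2 = 1.4537, 1.0         # silica core, vacuum cladding
k = 2 * np.pi / wavelength   # free-space wave number

def char_te(beta):
    """Characteristic function for TE modes, Eq. (A4):
    J1(ha)/(ha J0(ha)) + K1(qa)/(qa K0(qa)) = 0."""
    h = np.sqrt(n1**2 * k**2 - beta**2)
    q = np.sqrt(beta**2 - n2**2 * k**2)
    return j1(h * a) / (h * a * j0(h * a)) + k1(q * a) / (q * a * k0(q * a))

# Scan for sign changes between the guiding limits n2*k < beta < n1*k,
# then refine each bracket; poles of the characteristic function (where
# J0(ha) = 0) also flip the sign, so keep only brackets whose refined
# root actually makes the function small.
betas = np.linspace(n2 * k * 1.0001, n1 * k * 0.9999, 20000)
vals = char_te(betas)
roots = []
for i in range(len(betas) - 1):
    if np.sign(vals[i]) != np.sign(vals[i + 1]):
        b = brentq(char_te, betas[i], betas[i + 1])
        if abs(char_te(b)) < 1e-6:   # reject pole crossings
            roots.append(b)

for m, b in enumerate(sorted(roots, reverse=True), start=1):
    q = np.sqrt(b**2 - n2**2 * k**2)
    print(f"TE0{m}: beta/k = {b / k:.4f}, penetration length = {1e9 / q:.1f} nm")
```

The hybrid HE/EH eigenvalue equations (A1) and (A2) can be solved with the same scan-and-refine strategy by swapping in the corresponding characteristic function.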
We will next discuss the mode functions [42]. For this we will write the electric and magnetic components of the field in the form $\mathcal{E} = (Ee^{-i\omega t} + \text{c.c.})/2$ and $\mathcal{H} = (He^{-i\omega t} + \text{c.c.})/2$, where E and H are spatial envelope functions, which obey the Helmholtz equation. They are the mode functions we are interested in and, for a guided mode with a propagation constant β and an azimuthal mode order l, we can write them as $E = \mathbf{e}\,e^{i\beta z+il\varphi}$ and $H = \mathbf{h}\,e^{i\beta z+il\varphi}$. Here, e and h are the reduced mode profile functions of the electric and magnetic components of the field, respectively, and β and l can take not only positive but also negative values. In the following we will consider hybrid modes, TE modes, and TM modes separately.
A. Hybrid modes
Quasicircularly polarized hybrid modes
In this section, we consider quasicircularly polarized hybrid HE and EH modes. For convenience, we use the notations β > 0 and l > 0 for the propagation constant and the azimuthal mode order, respectively. We introduce the index f = +1 or −1 (or simply f = + or −) for the positive (+ẑ) or negative (−ẑ) propagation direction, which leads to the corresponding propagation phase factor of $e^{i\beta z}$ or $e^{-i\beta z}$. We also introduce the index p = +1 or −1 (or simply p = + or −) for the counterclockwise or clockwise phase circulation, corresponding to the azimuthal phase factor of $e^{il\varphi}$ or $e^{-il\varphi}$. The index p = + or − also indicates that the central phase circulation axis is +ẑ or −ẑ. We can label quasicircularly polarized hybrid modes by the mode index µ = (f lp), which can also be extended to include the mode type, HE or EH, and the radial mode order, m, when necessary.
We choose a notation in which we decompose an arbitrary vector $\mathbf{V} = \hat{\mathbf{r}}V_r + \hat{\boldsymbol{\varphi}}V_\varphi + \hat{\mathbf{z}}V_z$ into the radial, azimuthal, and axial components denoted by the subscripts r, ϕ, and z. The notations $\hat{\mathbf{r}} = \hat{\mathbf{x}}\cos\varphi + \hat{\mathbf{y}}\sin\varphi$, $\hat{\boldsymbol{\varphi}} = -\hat{\mathbf{x}}\sin\varphi + \hat{\mathbf{y}}\cos\varphi$, and $\hat{\mathbf{z}}$ stand for the unit basis vectors of the cylindrical coordinate system {r, ϕ, z}, with $\hat{\mathbf{x}}$ and $\hat{\mathbf{y}}$ being the unit basis vectors of the Cartesian coordinate system for the fiber transverse plane xy. The position vector in the fiber transverse plane is given by $\mathbf{r} = r\hat{\mathbf{r}} = x\hat{\mathbf{x}} + y\hat{\mathbf{y}}$.
In the cylindrical coordinates, the reduced mode profile functions e (f lp) (r) and h (f lp) (r) of the electric and magnetic components of a quasicircularly polarized hybrid mode with the propagation direction f , the azimuthal mode order l, and the phase circulation direction p are then given by
$$\mathbf{e}^{(flp)} = \hat{\mathbf{r}}\,e_r + p\hat{\boldsymbol{\varphi}}\,e_\varphi + f\hat{\mathbf{z}}\,e_z, \qquad \mathbf{h}^{(flp)} = fp\hat{\mathbf{r}}\,h_r + f\hat{\boldsymbol{\varphi}}\,h_\varphi + p\hat{\mathbf{z}}\,h_z, \tag{2}$$
where the electric mode function components e r , e ϕ , and e z and the magnetic mode function components h r , h ϕ , and h z are given by Eqs. (A9)-(A12) for β > 0 and l > 0 in Appendix A. These mode function components depend explicitly on the azimuthal mode order l and are implicitly dependent on the radial mode order m. An important property of the mode functions is that the longitudinal components e z and h z are nonvanishing and in quadrature (π/2 out of phase) with the radial components e r and h r , respectively. In addition, the azimuthal components e ϕ and h ϕ are also nonvanishing and in quadrature with the radial components e r and h r , respectively. The electric and magnetic polarizations of hybrid modes are not of the TE and TM types. Note that the full mode functions for quasicircularly polarized hybrid modes are given by
$$E^{(flp)}_{\rm circ} = \mathbf{e}^{(flp)}\,e^{if\beta z+ipl\varphi}, \qquad H^{(flp)}_{\rm circ} = \mathbf{h}^{(flp)}\,e^{if\beta z+ipl\varphi}. \tag{3}$$
Quasilinearly polarized hybrid modes
Quasilinearly polarized hybrid modes are linear superpositions of counterclockwise and clockwise quasicircularly polarized hybrid modes. The full mode functions of the electric and magnetic components of the guided field in a quasilinearly polarized hybrid mode (f, l, ϕ pol ) are given by [42]
$$E^{(fl\varphi_{\rm pol})}_{\rm lin} = \frac{1}{\sqrt{2}}\left(E^{(fl+)}_{\rm circ}e^{-i\varphi_{\rm pol}} + E^{(fl-)}_{\rm circ}e^{i\varphi_{\rm pol}}\right), \qquad H^{(fl\varphi_{\rm pol})}_{\rm lin} = \frac{1}{\sqrt{2}}\left(H^{(fl+)}_{\rm circ}e^{-i\varphi_{\rm pol}} + H^{(fl-)}_{\rm circ}e^{i\varphi_{\rm pol}}\right). \tag{4}$$
Here, the phase angle ϕ pol determines the orientation of the symmetry axes of the mode profile in the fiber transverse plane. In particular, the specific phase angle values ϕ pol = 0 and π/2 define two orthogonal polarization profiles, one being symmetric with respect to the x axis and the other being the result of the rotation of the first one by an angle of π/2l in the fiber transverse plane xy. We can write
$$E^{(fl\varphi_{\rm pol})}_{\rm lin} = \mathbf{e}^{(fl\varphi_{\rm pol})}\,e^{if\beta z}, \qquad H^{(fl\varphi_{\rm pol})}_{\rm lin} = \mathbf{h}^{(fl\varphi_{\rm pol})}\,e^{if\beta z}, \tag{5}$$
where e (f lϕ pol ) and h (f lϕ pol ) are the reduced mode profile functions of quasilinearly polarized hybrid modes and are given as
$$\mathbf{e}^{(fl\varphi_{\rm pol})} = \frac{1}{\sqrt{2}}\left(\mathbf{e}^{(fl+)}e^{i(l\varphi-\varphi_{\rm pol})} + \mathbf{e}^{(fl-)}e^{-i(l\varphi-\varphi_{\rm pol})}\right), \qquad \mathbf{h}^{(fl\varphi_{\rm pol})} = \frac{1}{\sqrt{2}}\left(\mathbf{h}^{(fl+)}e^{i(l\varphi-\varphi_{\rm pol})} + \mathbf{h}^{(fl-)}e^{-i(l\varphi-\varphi_{\rm pol})}\right). \tag{6}$$
Inserting Eqs. (2) into Eqs. (6) yields
$$\mathbf{e}^{(fl\varphi_{\rm pol})} = \sqrt{2}\left[\hat{\mathbf{r}}\,e_r\cos(l\varphi-\varphi_{\rm pol}) + i\hat{\boldsymbol{\varphi}}\,e_\varphi\sin(l\varphi-\varphi_{\rm pol}) + f\hat{\mathbf{z}}\,e_z\cos(l\varphi-\varphi_{\rm pol})\right],$$
$$\mathbf{h}^{(fl\varphi_{\rm pol})} = \sqrt{2}\left[if\hat{\mathbf{r}}\,h_r\sin(l\varphi-\varphi_{\rm pol}) + f\hat{\boldsymbol{\varphi}}\,h_\varphi\cos(l\varphi-\varphi_{\rm pol}) + i\hat{\mathbf{z}}\,h_z\sin(l\varphi-\varphi_{\rm pol})\right]. \tag{7}$$
In particular, we find, for $\varphi_{\rm pol}=0$,
$$\mathbf{e}^{(fl,0)} = \sqrt{2}\left(\hat{\mathbf{r}}\,e_r\cos l\varphi + i\hat{\boldsymbol{\varphi}}\,e_\varphi\sin l\varphi + f\hat{\mathbf{z}}\,e_z\cos l\varphi\right), \qquad \mathbf{h}^{(fl,0)} = \sqrt{2}\left(if\hat{\mathbf{r}}\,h_r\sin l\varphi + f\hat{\boldsymbol{\varphi}}\,h_\varphi\cos l\varphi + i\hat{\mathbf{z}}\,h_z\sin l\varphi\right), \tag{8}$$
and, for $\varphi_{\rm pol}=\pi/2$,
$$\mathbf{e}^{(fl,\pi/2)} = \sqrt{2}\left(\hat{\mathbf{r}}\,e_r\sin l\varphi - i\hat{\boldsymbol{\varphi}}\,e_\varphi\cos l\varphi + f\hat{\mathbf{z}}\,e_z\sin l\varphi\right), \qquad \mathbf{h}^{(fl,\pi/2)} = \sqrt{2}\left(-if\hat{\mathbf{r}}\,h_r\cos l\varphi + f\hat{\boldsymbol{\varphi}}\,h_\varphi\sin l\varphi - i\hat{\mathbf{z}}\,h_z\cos l\varphi\right). \tag{9}$$
B. TE modes
We again label the propagation directions of TE modes by the index f = + or −. The reduced mode profile functions of the electric and magnetic components of TE modes with the propagation directions f can be written as
$$\mathbf{e}^{(f)} = \hat{\boldsymbol{\varphi}}\,e_\varphi, \qquad \mathbf{h}^{(f)} = f\hat{\mathbf{r}}\,h_r + \hat{\mathbf{z}}\,h_z, \tag{10}$$
where the mode function components $e_\varphi$, $h_r$, and $h_z$ are given by Eqs. (A13)-(A16) for β > 0 in Appendix A. They depend implicitly on the radial mode order m. It is clear from Eqs. (10) that, for TE modes, we have $e^{(f)}_r = e^{(f)}_z = h^{(f)}_\varphi = 0$. The electric polarization of a TE mode is therefore linear and aligned along the azimuthal direction. Meanwhile, since $h_r$ is π/2 out of phase with respect to $h_z$, the magnetic polarization of the mode is elliptical in the meridional rz plane, which contains the radial r axis and the fiber z axis. The full mode functions of TE modes are given by $E^{(f)} = \mathbf{e}^{(f)}e^{if\beta z}$ and $H^{(f)} = \mathbf{h}^{(f)}e^{if\beta z}$.
C. TM modes
We also label the propagation directions of TM modes by the index f = + or −. The reduced mode profile functions of the electric and magnetic components of TM modes with the propagation directions f can be written as
$$\mathbf{e}^{(f)} = \hat{\mathbf{r}}\,e_r + f\hat{\mathbf{z}}\,e_z, \qquad \mathbf{h}^{(f)} = f\hat{\boldsymbol{\varphi}}\,h_\varphi, \tag{11}$$
where the mode function components $e_r$, $e_z$, and $h_\varphi$ are given by Eqs. (A17)-(A20) for β > 0 in Appendix A. They depend implicitly on the radial mode order m. It is clear from Eqs. (11) that, for TM modes, we have $e^{(f)}_\varphi = h^{(f)}_r = h^{(f)}_z = 0$. The magnetic polarization of a TM mode is therefore linear and aligned along the azimuthal direction. Meanwhile, since $e_r$ is π/2 out of phase with respect to $e_z$, the electric polarization of the mode is elliptical in the meridional rz plane. The full mode functions of TM modes are given by $E^{(f)} = \mathbf{e}^{(f)}e^{if\beta z}$ and $H^{(f)} = \mathbf{h}^{(f)}e^{if\beta z}$.
III. SPATIAL INTENSITY DISTRIBUTIONS
In this section, we study the electric intensity distributions $|\mathbf{e}|^2 = |e_r|^2 + |e_\varphi|^2 + |e_z|^2$ of the fields in the fundamental HE 11 and several higher-order modes, namely the TE 01 , TM 01 , HE 21 , and EH 11 modes. In the cases of the hybrid HE 11 , HE 21 , and EH 11 modes, we examine both quasicircular and quasilinear polarizations.

[Fig. 3 caption: The inner (r/a < 1) and outer (r/a > 1) parts of the distributions are distinguished by the blue and cyan colors, respectively. In all four cases the distributions are normalized to the same power. The fiber radius is chosen to be a = 400 nm. All other parameters are as for Fig. 1.]
The cross-sectional profiles of the electric intensity distributions |e| 2 of the fields in the quasicircularly polarized HE 11 mode, the TE 01 mode, the TM 01 mode, and the quasicircularly polarized HE 21 mode are shown in Fig. 3. One can note that all of them are azimuthally symmetric. To show the spatial dependencies of these distributions more clearly, we display in Fig. 4 cuts in the radial direction. We note that the group of the TE 01 , TM 01 , and HE 21 modes corresponds to the first higher-order LP 11 mode of weakly guiding fibers [42]. Meanwhile, the fundamental HE 11 mode corresponds to the lowest LP 01 mode of weakly guiding fibers [42].
For hybrid modes, we can use quasilinear polarization instead of quasicircular polarization. In order to illustrate fields in hybrid modes with quasilinear polarization, we display in Fig. 5 the cross-sectional profiles of the electric intensity distributions |e| 2 of the fields in the quasilinearly polarized HE 11 mode and the quasilinearly polarized HE 21 mode. To show the spatial dependencies more clearly, radial cuts for both modes in two different directions are shown in Fig. 6.
In addition to the TE, TM, and HE modes, there is another type of guided modes, namely the EH modes. The cross-sectional profiles of the electric intensity distributions |e| 2 of the fields in the quasicircularly and quasilinearly polarized EH 11 modes are shown in Fig. 7. Similarly, Fig. 8 depicts cuts in the radial direction. Inside the fiber, the field intensity is not a rapidly decreasing function of the radial distance r. A discontinuity of the field intensity is observed at the position of the fiber surface. This discontinuity is due to the boundary condition for the normal (radial) component e r of the electric field. Since the difference between the refractive indices of the silica core and the vacuum cladding is large, the discontinuity of the field at the position of the fiber surface is dramatic.
Outside the fiber, the field intensity decreases monotonically and quickly with increasing radial distance r. This behavior is a consequence of the evanescent-wave nature of guided fields, which do not propagate along the radial direction. Comparison between the figures shows that, for the parameters used, the fraction of the field intensity distribution outside the fiber is larger for the higher-order modes than for the HE 11 mode.
We observe from Figs. 3 and 7(a) that, for quasicircularly polarized hybrid HE and EH modes, TE modes, and TM modes, the spatial distribution of the field intensity is cylindrically symmetric. In these cases, the outer parts of the electric intensity distributions of the different modes look very similar to each other as they exhibit the evanescent wave behavior. Meanwhile, the inner parts of the electric intensity distributions of different modes look very different from each other. Indeed, the inner parts of the electric intensity profiles have the shape of a cone in Figs. 3(a) and 3(c), the shape of a doughnut in Figs. 3(b) and 3(d), and the shape of a combination of a cone and a doughnut in Fig. 7(a).
We observe from Figs. 5 and 7(b) that, for quasilinearly polarized hybrid modes, the spatial distribution of the field intensity is not cylindrically symmetric. In the inner and outer vicinities of the fiber surface, the field intensity strongly varies with varying azimuthal angle.
Finally, from Figs. 4(b), 4(d), and 6(b) one can see that, in the cases of the TE 01 and HE 21 modes, the electric field intensity is exactly equal to zero at the center of the fiber. Figure 8(b) shows that, for the quasilinearly polarized EH 11 mode, the electric field intensity is exactly equal to zero at two centrally symmetric off-center positions along the y axis inside the fiber.
The spatial profiles of the fields presented in Figs. 3-8 can be characterized quantitatively by an effective mode area, defined as $A_{\rm eff} = \big(\int |\mathbf{e}|^2\, d\mathbf{r}\big)^2 / \int |\mathbf{e}|^4\, d\mathbf{r}$, where we use the notation $\int d\mathbf{r} = \int_0^{2\pi} d\varphi \int_0^\infty r\, dr$. This allows us to define an effective mode radius as $r_{\rm eff} = \sqrt{A_{\rm eff}/\pi}$. The parameters $A_{\rm eff}$ and $r_{\rm eff}$ characterize the confinement of the field mode in the fiber transverse plane. We show in Fig. 9 the effective mode radius r eff as a function of the fiber radius a for the fundamental mode and several higher-order modes. It is clear that the effective radii of the higher-order modes are larger than that of the fundamental mode. In addition, different modes have different minimum effective radii, achieved at different values of a. For the light wavelength λ = 780 nm used in our numerical calculations, the smallest value of r eff is about 353 nm and is achieved for the fundamental HE 11 mode of a fiber with the radius a = 275 nm.
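As an illustration of these definitions, the sketch below (our own helper; the sample profile is a stand-in, not a real mode function) evaluates $A_{\rm eff}$ and $r_{\rm eff}$ by radial quadrature for a cylindrically symmetric intensity distribution, for which $\int |\mathbf{e}|^2\, d\mathbf{r} = 2\pi\int_0^\infty |\mathbf{e}(r)|^2 r\, dr$:

```python
import numpy as np
from scipy.integrate import trapezoid

def effective_mode_radius(r, intensity):
    """Effective mode area A_eff = (int |e|^2 dr)^2 / int |e|^4 dr and
    effective radius r_eff = sqrt(A_eff/pi) for a cylindrically symmetric
    intensity |e(r)|^2 sampled on a radial grid (dA = 2*pi*r*dr)."""
    num = trapezoid(intensity * 2 * np.pi * r, r) ** 2
    den = trapezoid(intensity ** 2 * 2 * np.pi * r, r)
    a_eff = num / den
    return a_eff, np.sqrt(a_eff / np.pi)

# Toy usage with a placeholder evanescent-like profile (not a real mode):
r = np.linspace(0.0, 5e-6, 5000)
profile = np.exp(-r / 0.35e-6)  # stand-in for |e(r)|^2
a_eff, r_eff = effective_mode_radius(r, profile)
print(f"A_eff = {a_eff * 1e12:.3f} um^2, r_eff = {r_eff * 1e9:.0f} nm")
```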
IV. POYNTING VECTOR, POWER, AND ENERGY PER UNIT LENGTH
Next, we calculate the Poynting vector, propagating power, and energy per unit length. We show that the axial and azimuthal components of the Poynting vector can be negative with respect to the direction of propagation and the direction of phase circulation, respectively. In order to get deeper insight into the connection between linear and angular momenta of light, we also study the decomposition of the Poynting vector into the orbital and spin parts. We show that the orbital and spin parts of the Poynting vector can have opposite signs in some regions of space.
A. Poynting vector
An important characteristic of light propagation is the cycle-averaged Poynting vector
$$\mathbf{S} = \frac{1}{2}\mathrm{Re}(E\times H^*). \tag{12}$$
We introduce the notations S z , S ϕ , and S r for the axial, azimuthal, and radial components of the vector S in the cylindrical coordinates. For guided modes of fibers, we have S r = 0 and
$$S_z = \frac{1}{2}\mathrm{Re}\left(E_r H_\varphi^* - E_\varphi H_r^*\right), \qquad S_\varphi = \frac{1}{2}\mathrm{Re}\left(E_z H_r^* - E_r H_z^*\right). \tag{13}$$
The explicit expressions for S z and S ϕ are given by Eqs. (B1)-(B8) in Appendix B. We note that the existence of a nonzero azimuthal component S ϕ of the Poynting vector for guided fields leads to a force transverse to the direction of propagation. This is similar to the situation for light beams with a transverse phase gradient, for which transverse optical forces have been experimentally observed [43]. It is worth noting that, due to the interference between different terms associated with different Bessel functions, the sign of the azimuthal component S ϕ of the Poynting vector of a quasicircularly polarized hybrid mode and the sign of the axial component S z of the Poynting vector of a quasilinearly polarized hybrid mode can vary in space. The details are given in Appendix B.
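The decomposition in Eq. (13) maps directly onto code. As a minimal sketch (the helper name is ours, not from the paper), the following function evaluates S z and S ϕ from complex field components sampled on any grid:

```python
import numpy as np

def poynting_axial_azimuthal(E_r, E_phi, E_z, H_r, H_phi, H_z):
    """Cycle-averaged Poynting components of Eq. (13):
    S_z   = Re(E_r H_phi* - E_phi H_r*) / 2,
    S_phi = Re(E_z H_r*   - E_r  H_z*) / 2.
    Inputs are complex field amplitudes (scalars or arrays on a grid);
    the radial component S_r vanishes for guided modes."""
    S_z = 0.5 * np.real(E_r * np.conj(H_phi) - E_phi * np.conj(H_r))
    S_phi = 0.5 * np.real(E_z * np.conj(H_r) - E_r * np.conj(H_z))
    return S_z, S_phi
```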
For the quasicircularly polarized HE 11 mode, the TE 01 mode, the TM 01 mode, and the quasicircularly polarized HE 21 mode, the radial dependencies of the axial component S z and the azimuthal component S ϕ of the Poynting vector are shown in Fig. 10. The axial component S z and the azimuthal component S ϕ of the Poynting vector for the quasicircularly polarized HE 12 and EH 11 modes are displayed in Fig. 11. The dashed blue curves of the figure show that S ϕ is negative in a localized region of space. For the HE 12 mode [Fig. 11(a)], this region is inside the fiber. However, for the EH 11 mode [Fig. 11(b)], part of this region is outside and part is inside the fiber.
Figures 10(a), 10(d), 11(a), and 11(b) and additional numerical calculations, which are not shown here, confirm that, outside the fiber, the azimuthal component S ϕ of the Poynting vector is positive for quasicircularly polarized HE modes but negative for quasicircularly polarized EH modes.
It is not surprising that a component of the Poynting vector can have different signs in different regions of space [44,45]. Similar results have been obtained for the axial component of the Poynting vector of a guided mode [44] and for the axial and azimuthal components of the Poynting vector of a Bessel beam [45]. In fact, we have confirmed that the axial component S z of the Poynting vector of the quasilinearly polarized HE 11 mode can become negative when the refractive index n 1 of the fiber is large enough (n 1 /n 2 > 2.71 for the HE 11 mode) [44]. We show in Fig. 12 a similar result for the quasilinearly polarized higher-order HE 21 mode. One can see that in this case the Poynting vector is negative in four regions around the fiber surface at azimuthal angles around the values ϕ = 0, π/2, π, and 3π/2.
B. Propagating power
The optical power carried by the fiber is given by
$$P = \int S_z\, d\mathbf{r}. \tag{14}$$
It can be split as P = P in + P out , where P in and P out are the propagating powers inside and outside the fiber, and explicit expressions for both are given by Eqs. (C1)-(C6) in Appendix C. The fractional power outside the fiber is defined as η P = P out /P . We display η P in Fig. 13 as a function of the fiber radius a for the HE 11 mode and several higher-order modes. We observe from the figure that η P reduces with increasing a and that the fractional powers outside the fiber for higher-order modes are larger than that for the fundamental mode. It is interesting to note that, near the cutoffs for the EH 11 and HE 31 modes, the factor η P is significantly smaller than unity, unlike the cases of the HE 11 and HE 12 modes. We show in Appendix C that, for the EH lm modes with l = 1, 2, . . . and the HE lm modes with l = 3, 4, . . . , the limiting values of the factor η P in the cutoff regions are smaller than unity, in agreement with the aforementioned numerical results. We also show in Appendix C that, for the TE 0m and TM 0m modes and the HE lm modes with l = 1 or 2, the limiting values of the factor η P in the cutoff regions are equal to unity. Despite this prediction, we observe from Fig. 13 that the computed values of η P near the cutoffs of the TE 01 , TM 01 , and HE 21 modes are slightly smaller than unity. These numerical deviations are due to the steep slopes of the curves, which make it difficult to approach the cutoffs.
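For the TE modes, whose Poynting components (B5) and (B6) involve only $J_1$ and $K_1$, the fractional power can be obtained by direct radial integration. The sketch below is a minimal illustration (the function name is our own), assuming β has already been found with a mode solver such as the one sketched in Sec. II:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0, j1, k0, k1

def fractional_power_te(beta, a, k, n1, n2):
    """eta_P = P_out / (P_in + P_out) for a TE mode, obtained by radially
    integrating the axial Poynting component of Eqs. (B5) and (B6); the
    common prefactor f |A|^2 omega mu0 beta / 2 cancels in the ratio."""
    h = np.sqrt(n1 ** 2 * k ** 2 - beta ** 2)
    q = np.sqrt(beta ** 2 - n2 ** 2 * k ** 2)
    s_in = lambda r: (j1(h * r) / h) ** 2 * r
    s_out = lambda r: (j0(h * a) * k1(q * r) / (k0(q * a) * q)) ** 2 * r
    p_in, _ = quad(s_in, 0.0, a)
    p_out, _ = quad(s_out, a, np.inf)  # evanescent tail decays exponentially
    return p_out / (p_in + p_out)
```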
Note that the numerical results presented in Fig. 13 are in agreement with the results presented in Ref. [46].
C. Energy per unit length
The cycle-averaged energy per unit length is given by
$$U = \frac{\epsilon_0}{4}\int n^2|E|^2\, d\mathbf{r} + \frac{\mu_0}{4}\int |H|^2\, d\mathbf{r}, \tag{15}$$
where n(r) = n 1 for r < a and n 2 for r > a. The first and second terms on the right-hand side of expression (15) correspond to the electric and magnetic parts, respectively, of the energy of the field. For guided modes, these parts are equal to each other. We can split U as U = U in + U out , where U in and U out are the energies per unit length inside and outside the fiber, and their explicit expressions are given by Eqs. (D1)-(D6) in Appendix D. The fractional energy outside the fiber η U = U out /U is shown as a function of the fiber radius a for the HE 11 mode and several higher-order modes in Fig. 14. One can see that the behavior of η U is very similar, but not identical, to that of η P .
D. Orbital and spin parts of the Poynting vector
It is known that the Poynting vector of the field can be decomposed into two parts, the orbital part and the spin part [47][48][49][50][51][52][53]. In the dual-symmetric formalism, the decomposition takes the form [47][48][49][50][51][52][53]
$$\mathbf{S} = \mathbf{S}^{\rm orb} + \mathbf{S}^{\rm spin}, \tag{16}$$
where S orb = S e-orb + S h-orb is the orbital part, with its electric and magnetic components
$$\mathbf{S}^{\text{e-orb}} = \frac{c\epsilon_0}{4k}\,\mathrm{Im}[E^*\cdot(\nabla)E], \qquad \mathbf{S}^{\text{h-orb}} = \frac{c\mu_0}{4kn^2}\,\mathrm{Im}[H^*\cdot(\nabla)H], \tag{17}$$
and S spin = S e-spin + S h-spin is the spin part, with its electric and magnetic components
$$\mathbf{S}^{\text{e-spin}} = \frac{c\epsilon_0}{8k}\,\nabla\times\mathrm{Im}(E^*\times E), \qquad \mathbf{S}^{\text{h-spin}} = \frac{c\mu_0}{8kn^2}\,\nabla\times\mathrm{Im}(H^*\times H). \tag{18}$$
In Eq. (17), the dot product applies to the field vectors, that is, $A\cdot(\nabla)B \equiv \sum_{i=x,y,z} A_i\nabla B_i$ for arbitrary field vectors A and B.
In general, we have the equality $\mathbf{S}^{\rm e} = \mathbf{S}^{\rm h}$, where $\mathbf{S}^{\rm e} = \mathbf{S}^{\text{e-orb}} + \mathbf{S}^{\text{e-spin}}$ and $\mathbf{S}^{\rm h} = \mathbf{S}^{\text{h-orb}} + \mathbf{S}^{\text{h-spin}}$ are the electric and magnetic components of the Poynting vector. However, we may observe the inequalities $\mathbf{S}^{\text{e-orb}} \neq \mathbf{S}^{\text{h-orb}}$ and $\mathbf{S}^{\text{e-spin}} \neq \mathbf{S}^{\text{h-spin}}$. The explicit expressions for the electric and magnetic components of the orbital and spin parts of the Poynting vector of guided light are given in Appendix E. It is worth noting that the orbital part S orb of the Poynting vector is proportional to the canonical momentum of light, which determines the radiation pressure force upon a small dipole Rayleigh particle [49][50][51][52][53].
We show in Appendix E that the orbital parts $S^{\rm orb}_z$ and $S^{\rm orb}_\varphi$ of the axial and azimuthal components of the Poynting vector are positive with respect to the direction of propagation and the direction of phase circulation, respectively. Meanwhile, the signs of the spin parts $S^{\rm spin}_z$ and $S^{\rm spin}_\varphi$ can vary in the fiber transverse plane and can hence be negative with respect to the direction of propagation and the direction of phase circulation, respectively, in some regions of space. Thus, the orbital and spin parts of the Poynting vector can have opposite signs in certain regions of space. We show numerical results confirming this in Figs. 15-17. We show in Appendix E that the orbital part $S^{\rm orb}_z$ of the axial component S z of the Poynting vector is determined by the local density of energy. Meanwhile, the orbital part $S^{\rm orb}_\varphi$ of the azimuthal component S ϕ of the Poynting vector of a quasicircularly polarized hybrid mode depends not only on the local phase gradient but also on the local polarization, unlike the case of uniformly polarized paraxial beams [53].
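As a one-line check of the statement that $S^{\rm orb}_z$ is determined by the local energy density, consider a guided mode $E = \mathbf{e}(r)\,e^{i(f\beta z + pl\varphi)}$; the axial component of $\mathrm{Im}[E^*\cdot(\nabla)E]$ is $f\beta|\mathbf{e}|^2$, and likewise for H, so Eq. (17) gives

$$S^{\rm orb}_z = \frac{c\epsilon_0}{4k}\,f\beta|\mathbf{e}|^2 + \frac{c\mu_0}{4kn^2}\,f\beta|\mathbf{h}|^2 = \frac{f\beta c}{kn^2}\,\frac{\epsilon_0 n^2|\mathbf{e}|^2 + \mu_0|\mathbf{h}|^2}{4},$$

which is proportional to the cycle-averaged energy density in the integrand of Eq. (15) and always points along the propagation direction f.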
The radial dependencies of the orbital part S orb z and the spin part S spin z of the axial component S z of the Poynting vector for the quasicircularly polarized HE 11 mode, the TE 01 mode, the TM 01 mode, and the quasicircularly polarized HE 21 mode are shown in Fig. 15. The radial dependencies of the orbital part S orb ϕ and the spin part S spin ϕ of the azimuthal component S ϕ of the Poynting vector for the quasicircularly polarized HE 11 and HE 21 modes are shown in Fig. 16. Additionally, the radial dependencies of the orbital and spin parts of the axial and azimuthal components of the Poynting vector for the quasicircularly polarized EH 11 mode are displayed in Fig. 17.
These figures show that the orbital parts $S^{\rm orb}_z$ and $S^{\rm orb}_\varphi$ are positive, whereas the spin parts $S^{\rm spin}_z$ and $S^{\rm spin}_\varphi$ can change sign in the fiber transverse plane. Figures 16 and 17(b) show that, outside the fiber, the spin part $S^{\rm spin}_\varphi$ of the azimuthal component S ϕ of the Poynting vector is positive for HE modes and is negative for EH modes. These features are also observed for S ϕ (see Figs. 10 and 11).
V. ANGULAR MOMENTUM OF GUIDED LIGHT
In this section, we calculate the angular momentum of guided light and also study its orbital, spin, and surface parts. We show that the orbital part of angular momentum depends not only on the phase gradient, but also on the field polarization, and is always positive with respect to the direction of the phase circulation axis. Meanwhile, the spin and surface parts of angular momentum and the helicity (chirality) of light can be negative with respect to the direction of the phase circulation axis. We find that the signs of the spin and surface parts of the transverse angular momentum density of the fundamental and higher-order modes depend on the direction of propagation.
A. Angular momentum of guided light
For the electromagnetic field in free space, the linear momentum density is given by $\mathbf{p}_{\rm local} = \mathbf{S}/c^2$ [54]. For the field in a dielectric medium, several formulations for the linear momentum density can be found in the literature [55]. The Abraham formulation [56] takes $\mathbf{p}_{\rm local} = [\mathbf{E}\times\mathbf{H}]/c^2$, which is sometimes interpreted as the field-only contribution to the momentum of light. On the other hand, the Minkowski formulation [57] takes $\mathbf{p}_{\rm local} = [\mathbf{D}\times\mathbf{B}]$. While the appropriate form remains contentious because the debate has not been settled by experiments, the Abraham formulation is generally accepted [54,58]. Therefore, in our basic calculations, we adopt the Abraham formulation for the field linear momentum density inside and outside the fiber.
With the above definition of the linear momentum density, the angular momentum density of the electromagnetic field is given by $\mathbf{j}_{\rm local} \equiv \mathbf{R}\times\mathbf{p}_{\rm local} = (\mathbf{R}\times\mathbf{S})/c^2$. Here, $\mathbf{R} = x\hat{\mathbf{x}} + y\hat{\mathbf{y}} + z\hat{\mathbf{z}}$ is the position vector in the three-dimensional space. Integrating $\mathbf{j}_{\rm local}$ over the cross-sectional plane of the fiber then yields the angular momentum per unit length
$$\mathbf{J} \equiv \int \mathbf{j}_{\rm local}\, d\mathbf{r} = \frac{1}{c^2}\int (\mathbf{R}\times\mathbf{S})\, d\mathbf{r}. \tag{19}$$
Note that TE and TM modes and quasilinearly polarized HE and EH modes have no angular momentum. We consider the cycle-averaged angular momentum per unit length J of quasicircularly polarized HE and EH modes. The only nonzero component of J is aligned along the fiber axis and is given by
$$J_z = \frac{1}{c^2}\int rS_\varphi\, d\mathbf{r}. \tag{20}$$
Thus, the axial angular momentum per unit length J z is determined by the azimuthal component S ϕ of the Poynting vector. We can write J z = J in z + J out z , where J in z and J out z are the parts of the angular momentum of light inside and outside the fiber. The explicit analytical expressions for J in z and J out z are given by Eqs. (F1) and (F2) in Appendix F. According to these expressions, the axial angular momentum per unit length J z depends on the direction of phase circulation, specified by the index p, but does not depend on the direction of propagation, specified by the index f .
The angular momentum per photon $j_z = \hbar\omega J_z/U$ for quasicircularly polarized hybrid modes with the positive (counterclockwise) phase circulation direction p = + is shown as a function of the fiber radius a in Fig. 18. One can see that j z decreases with increasing a and increases with increasing l. Comparison between HE and EH modes shows that, for a given set of l and m, the angular momentum per photon j z for an EH lm mode is smaller than that for the corresponding HE lm mode. This feature is related to the fact that, outside the fiber, the azimuthal component S ϕ of the Poynting vector is positive for HE modes and is negative for EH modes (see Fig. 11). Figure 18 also shows that the EH 11 mode has the lowest angular momentum per photon. It is clear that the angular momentum per photon in a higher-order hybrid mode is large when the azimuthal mode order l is large.
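As an illustration of Eq. (20) and the normalization $j_z = \hbar\omega J_z/U$, the following minimal sketch (our own helper, with tabulated radial profiles as assumed inputs) evaluates the angular momentum per photon for an azimuthally symmetric quasicircularly polarized mode:

```python
import numpy as np
from scipy.constants import c, epsilon_0, hbar, mu_0
from scipy.integrate import trapezoid

def angular_momentum_per_photon(r, S_phi, e2, h2, n_of_r, omega):
    """j_z = hbar * omega * J_z / U for an azimuthally symmetric mode, with
    J_z = (1/c^2) int r S_phi dr   [Eq. (20)]  and
    U   = int [eps0 n^2 |e|^2 + mu0 |h|^2] / 4 dr   [Eq. (15)],
    where int ... dr = 2 pi int ... r dr. Inputs are radial samples of
    S_phi(r), |e(r)|^2, |h(r)|^2, and the index profile n(r)."""
    dA = 2 * np.pi * r
    J_z = trapezoid(r * S_phi * dA, r) / c ** 2
    U = trapezoid((epsilon_0 * n_of_r ** 2 * e2 + mu_0 * h2) / 4 * dA, r)
    return hbar * omega * J_z / U
```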
B. Orbital and spin parts of angular momentum
The angular momentum per unit length J of a light beam can be decomposed into orbital, spin, and surface parts as J = J orb + J spin + J surf [59][60][61][62][63]. However, the identification of terms as orbital, spin, and surface components is not unique [64,65].
In the dual-symmetric formalism, the orbital and spin parts of angular momentum per unit length are given as [50-53,66]
$$\mathbf{J}^{\rm orb} = \frac{\epsilon_0}{4\omega}\int\mathrm{Im}[E^*\cdot(\mathbf{R}\times\nabla)E]\, d\mathbf{r} + \frac{\mu_0}{4\omega}\int\frac{1}{n^2}\mathrm{Im}[H^*\cdot(\mathbf{R}\times\nabla)H]\, d\mathbf{r} \tag{21}$$
and
$$\mathbf{J}^{\rm spin} = \frac{\epsilon_0}{4\omega}\int\mathrm{Im}(E^*\times E)\, d\mathbf{r} + \frac{\mu_0}{4\omega}\int\frac{1}{n^2}\mathrm{Im}(H^*\times H)\, d\mathbf{r}. \tag{22}$$
In Eq. (21), the dot product applies to the field vectors, that is, $A\cdot(\mathbf{R}\times\nabla)B \equiv \sum_{i=x,y,z} A_i(\mathbf{R}\times\nabla)B_i$ for arbitrary field vectors A and B. Detailed discussions of various aspects of optical orbital angular momentum can be found in Ref. [67].
Meanwhile, the surface part of angular momentum per unit length is, in the dual-symmetric formalism, given as [59-63]
$$\mathbf{J}^{\rm surf} = -\frac{\epsilon_0}{4\omega}\int\mathrm{Im}[\nabla\cdot E^*(\mathbf{R}\times E)]\, d\mathbf{r} - \frac{\mu_0}{4\omega}\int\frac{1}{n^2}\mathrm{Im}[\nabla\cdot H^*(\mathbf{R}\times H)]\, d\mathbf{r}, \tag{23}$$
where we have used the notation $\nabla\cdot A(\mathbf{R}\times B) = \sum_{i=x,y,z}\nabla_i\left[A_i(\mathbf{R}\times B)\right]$.
The orbital part of angular momentum per unit length is related to the orbital part $\mathbf{S}^{\rm orb}$ of the Poynting vector via the formula $\mathbf{J}^{\rm orb} = (1/c^2)\int(\mathbf{R}\times\mathbf{S}^{\rm orb})\, d\mathbf{r}$. The spin and surface parts of angular momentum per unit length are related to the spin part $\mathbf{S}^{\rm spin}$ of the Poynting vector via the formula $\mathbf{J}^{\rm spin} + \mathbf{J}^{\rm surf} = (1/c^2)\int(\mathbf{R}\times\mathbf{S}^{\rm spin})\, d\mathbf{r}$.
The surface part of angular momentum is usually omitted in the literature [63]. The reason is that, when the field vanishes sufficiently quickly in the limit of large distances, the surface part is, due to the Gaussian theorem, identical to zero. For example, for a bullet-like light wave packet with a finite transverse and longitudinal extent, the surface part of angular momentum can be neglected. However, for a pencil-like light beam whose span along the direction of propagation is virtually infinite, the surface part is not vanishing [63].
For TE and TM modes, we have J orb = J spin = J surf = 0. For quasicircularly polarized HE and EH modes, the only nonzero components of the vectors J orb , J spin , and J surf are the axial components
$$J^{\rm orb}_z = p\frac{\epsilon_0}{4\omega}\int\left[l|\mathbf{e}|^2 - 2\,\mathrm{Im}(e_r^* e_\varphi)\right] d\mathbf{r} + p\frac{\mu_0}{4\omega}\int\frac{1}{n^2}\left[l|\mathbf{h}|^2 - 2\,\mathrm{Im}(h_r^* h_\varphi)\right] d\mathbf{r}, \tag{24}$$
$$J^{\rm spin}_z = p\frac{\epsilon_0}{2\omega}\int\mathrm{Im}(e_r^* e_\varphi)\, d\mathbf{r} + p\frac{\mu_0}{2\omega}\int\frac{1}{n^2}\mathrm{Im}(h_r^* h_\varphi)\, d\mathbf{r}, \tag{25}$$
and
$$J^{\rm surf}_z = p\frac{\pi a^2\epsilon_0}{2\omega}\,\mathrm{Im}\!\left(e_r^* e_\varphi\big|_{r=a+0} - e_r^* e_\varphi\big|_{r=a-0}\right) + p\frac{\pi a^2\mu_0}{2\omega}\left(\frac{1}{n_2^2}-\frac{1}{n_1^2}\right)\mathrm{Im}(h_r^* h_\varphi)\big|_{r=a}. \tag{26}$$
Here, we have introduced the notations $a \pm 0 = \lim_{\varepsilon\to 0}(a \pm \varepsilon)$. According to Eqs. (24)-(26), all three parts are proportional to the phase circulation direction index p and do not depend on the propagation direction index f. An important point to note here is that the above results, derived for angular momentum of light in guided modes, are different from the results for angular momentum of light in scalar Laguerre-Gaussian beams [68,69]. The main reason is that a guided light beam is a vector beam [70], whose polarization is not uniform in the cross-sectional plane. Another important reason is that the guided mode has two parts: one inside the fiber, where the medium is a dielectric, and the other outside the fiber, where the medium is the vacuum. In addition, the discontinuity of the refractive index at the fiber surface leads to the appearance of the surface part of the angular momentum of light.
It follows from Eq. (24) that the orbital part $J^{\rm orb}_z$ of angular momentum of light in an arbitrary hybrid mode is positive or negative when the phase circulation direction index p is positive or negative, respectively. Note that p = + or − means that the phase circulation direction in the xy plane is counterclockwise or clockwise, that is, the phase circulation axis is +ẑ or −ẑ, respectively. Thus, the orbital part $J^{\rm orb}_z$ of angular momentum of light in a hybrid mode is positive with respect to the direction of the phase circulation axis. Meanwhile, the expression on the right-hand side of Eq. (24) contains not only the terms $l|\mathbf{e}|^2$ and $l|\mathbf{h}|^2$, which result from the local phase gradient, but also the terms $\mathrm{Im}(e_r^* e_\varphi)$ and $\mathrm{Im}(h_r^* h_\varphi)$, which result from the local polarization. Thus, the orbital part $J^{\rm orb}_z$ of angular momentum depends not only on the phase gradient but also on the field polarization.
Equation (25) shows that the spin part $J^{\rm spin}_z$ of angular momentum is determined by the polarization of the field, whereas, according to Eq. (26), the surface part $J^{\rm surf}_z$ of angular momentum is associated with the discontinuity of the spin density at the fiber surface. It is clear that the discontinuity of the spin density is induced by the discontinuity of the refractive index of the medium at the fiber surface. Unlike the orbital part $J^{\rm orb}_z$ of angular momentum, both the spin part $J^{\rm spin}_z$ and the surface part $J^{\rm surf}_z$ can be negative with respect to the direction of the phase circulation axis. It is interesting to note that the sum $J^{\rm orb}_z + J^{\rm spin}_z$ of the orbital and spin parts is always positive with respect to the direction of the phase circulation axis and is determined by the phase gradient. We can write $J^{\rm orb}_z = J^{\text{e-orb}}_z + J^{\text{h-orb}}_z$, $J^{\rm spin}_z = J^{\text{e-spin}}_z + J^{\text{h-spin}}_z$, and $J^{\rm surf}_z = J^{\text{e-surf}}_z + J^{\text{h-surf}}_z$, where the electric (e) and magnetic (h) contributions correspond to the first and second terms in Eqs. (24)-(26). The orbital part $j^{\rm orb}_z$ of angular momentum per photon is always positive with respect to the direction of the phase circulation axis, which is in agreement with Eq. (24). Note that $j^{\rm orb}_z$ is substantially smaller than ℏ in the cases of the HE 11 and HE 12 modes but is comparable to or larger than ℏ in the cases of higher-order HE and EH modes. From Fig. 20 one can see that the spin part $j^{\rm spin}_z$ of angular momentum per photon is positive with respect to the direction of the phase circulation axis for the HE modes and negative for the EH modes. Furthermore, for the HE lm modes with the azimuthal mode order l = 1, the spin part $j^{\rm spin}_z$ is dominant over the orbital part $j^{\rm orb}_z$.
However, for the HE lm modes with l ≥ 2 and the EH lm modes, the orbital part $j^{\rm orb}_z$ is dominant over the spin part $j^{\rm spin}_z$. The above result is in agreement with the results of Ref. [69], where it has been shown for Laguerre-Gaussian beams that the local spin density can be positive in some regions and negative in others.
Although the transverse component of angular momentum of guided light is zero, the local density of this component is not zero. Indeed, the local density of the azimuthal component of angular momentum is given by $\rho_{J_\varphi} = -rS_z/c^2$. It can be decomposed as $\rho_{J_\varphi} = \rho_{J_\varphi^{\rm orb}} + \rho_{J_\varphi^{\rm spin}} + \rho_{J_\varphi^{\rm surf}}$, where
$$\rho_{J_\varphi^{\rm orb}} = -f\frac{\epsilon_0\beta}{4\omega}r|\mathbf{e}|^2 - f\frac{\mu_0\beta}{4\omega n^2}r|\mathbf{h}|^2,$$
$$\rho_{J_\varphi^{\rm spin}} = f\frac{\epsilon_0}{2\omega}\mathrm{Im}(e_r e_z^*) + f\frac{\mu_0}{2\omega n^2}\mathrm{Im}(h_r h_z^*),$$
$$\rho_{J_\varphi^{\rm surf}} = -f\frac{\epsilon_0}{4\omega}\left[r\frac{\partial}{\partial r}\mathrm{Im}(e_r e_z^*) + 3\,\mathrm{Im}(e_r e_z^*)\right] - f\frac{\mu_0}{4\omega n^2}\left[r\frac{\partial}{\partial r}\mathrm{Im}(h_r h_z^*) + 3\,\mathrm{Im}(h_r h_z^*)\right]. \tag{27}$$
The azimuthal orbital angular momentum density ρ J orb ϕ originates from the orbital part S orb z of the axial component of the Poynting vector. The azimuthal spin and surface angular momentum densities ρ J spin ϕ and ρ J surf ϕ result from the spin part S spin z of the axial component of the Poynting vector. Note that the first and second terms in the expressions on the right-hand side of Eqs. (27) correspond to the electric and magnetic parts, respectively. Equations (27) can be used not only for quasicircularly polarized hybrid modes but also for TE and TM modes.
According to Eq. (27), the signs of ρ J orb ϕ , ρ J spin ϕ , and ρ J surf ϕ depend on the direction of propagation f . The dependence of the local transverse spin density ρ J spin ϕ on the direction of propagation is a signature of spin-orbit coupling of light [51][52][53][72][73][74][75]. Note that both ρ J spin ϕ and ρ J surf ϕ appear as a result of the facts that the longitudinal field components e z and h z are nonvanishing and in quadrature with the radial field components e r and h r , respectively. It has been shown that, due to spin-orbit coupling of light, spontaneous emission and scattering from an atom with a circular dipole near a nanofiber can be asymmetric with respect to the opposite axial propagation directions [76][77][78][79][80][81].
VI. HELICITY AND CHIRALITY OF LIGHT
The cycle-averaged optical helicity density of a monochromatic light field is given by [66,82-86]
$$\rho_{\rm hlcy} = \frac{1}{2c\omega}\mathrm{Im}(E\cdot H^*). \tag{28}$$
The helicity of a light beam is closely related to its chirality. Indeed, according to [87], the cycle-averaged optical chirality density of a monochromatic light field can be characterized by the quantity [82,87-91]
$$\rho_{\rm chir} = \frac{n^2}{2c}\mathrm{Im}(E\cdot H^*), \tag{29}$$
so that $\rho_{\rm chir} = n^2\omega\,\rho_{\rm hlcy}$. Thus, in the frequency domain, the chirality density is proportional to the helicity density and the proportionality factor is $n^2\omega$. Note that the optical chirality density (29) can be measured from the asymmetry in the rates of excitation between a small chiral molecule and its mirror image [87,90]. However, according to [92], there is no single measure of chirality. The optical chirality measure (29) is appropriate for chiral effects arising from interference between electric and magnetic dipole transitions [87,90], whereas for chiral effects in spontaneous emission and scattering from atoms with rotating electric dipoles, an appropriate measure of optical chirality is the ellipticity vector of the field polarization [76-81,93].
For quasicircularly polarized hybrid modes, we find the following expression for the helicity density:
$$\rho_{\rm hlcy} = fp\,\frac{1}{2c\omega}\mathrm{Im}\!\left(e_r h_r^* + e_\varphi h_\varphi^* + e_z h_z^*\right). \tag{30}$$
The optical helicity per unit length is
$$J_{\rm hlcy} = \int \rho_{\rm hlcy}\, d\mathbf{r}. \tag{31}$$
It is clear from Eqs. (30) and (31) that when we reverse the propagation direction f or the phase circulation direction p, the sign of the helicity per unit length is reversed. The explicit expression for the helicity per unit length J hlcy in terms of the fiber parameters is given by Eq. (G3) in Appendix G. Note that the optical helicity of TE and TM guided modes is zero. The helicity per photon $j_{\rm hlcy} = \hbar\omega J_{\rm hlcy}/U$ is shown as a function of the fiber radius a in Fig. 22. One can see that, for the propagation direction f = + and the phase circulation direction p = +, the helicity per photon is positive for the HE modes and negative for the EH modes. We note that the magnitude of the helicity per photon in a guided mode does not exceed the value ℏ, which is the value of the helicity per photon of circularly polarized light in free space.
VII. SUMMARY
In this work, we have presented a systematic treatment of higher-order modes of vacuum-clad ultrathin optical fibers. We have shown that, for a given fiber, the higher-order modes have larger penetration lengths, larger effective mode radii, and larger fractional powers outside the fiber than the fundamental mode. We have calculated analytically and numerically the Poynting vector, propagating power, energy, angular momentum, and helicity of the field. In doing so we have shown that the axial component S z and the azimuthal component S ϕ of the Poynting vector can be negative with respect to the direction of propagation and the direction of phase circulation, respectively, depending on the position, the mode type, and the fiber parameters. The occurrence of such a negative axial or azimuthal component of the Poynting vector indicates the possibility of a negative force upon an atom or a small particle. We have also found that the orbital and spin parts of the Poynting vector may have opposite signs in some regions of space. We have shown that, for the EH lm modes with l = 1, 2, . . . and the HE lm modes with l = 3, 4, . . . , the limiting values of the fractional power outside the fiber in the cutoff regions are significantly smaller than unity. Meanwhile, for the TE 0m and TM 0m modes and the HE lm modes with l = 1 or 2, the limiting values of the fractional power outside the fiber in the cutoff regions are equal to unity. Our calculations have shown that the angular momentum per photon decreases with increasing fiber radius and increases with increasing azimuthal mode order, and that the angular momentum per photon of an EH lm mode is smaller than that of the corresponding HE lm mode. We have found that the orbital part of angular momentum of guided light depends not only on the phase gradient but also on the field polarization, and is positive with respect to the direction of the phase circulation axis. Meanwhile, the spin and surface parts of angular momentum and the helicity (chirality) of light in an EH mode are negative with respect to the direction of the phase circulation axis. We have shown that the signs of the spin and surface parts of the transverse angular momentum density of the fundamental and higher-order modes depend on the direction of propagation. The directional dependence of the local transverse spin and surface angular momentum densities is a signature of spin-orbit coupling of light and appears as a result of the fact that the longitudinal field components are nonvanishing and in quadrature with the radial field components. Our results lay the foundations for future research on manipulating and controlling the motion of atoms, molecules, and dielectric particles using higher-order modes of ultrathin fibers.
ACKNOWLEDGMENTS
We acknowledge support for this work from the Okinawa Institute of Science and Technology Graduate University. S.N.C. and T.B. are grateful to JSPS for partial support from a Grant-in-Aid for Scientific Research (Grant No. 26400422).
Appendix A: Guided modes of a step-index fiber
For l ≥ 1, the eigenvalue equation (1) leads to hybrid HE and EH modes [42]. For the HE modes, the respective eigenvalue equation is given as
$$\frac{J_{l-1}(ha)}{haJ_l(ha)} = -\frac{n_1^2+n_2^2}{2n_1^2}\frac{K'_l(qa)}{qaK_l(qa)} + \frac{l}{h^2a^2} - R \tag{A1}$$
and, for the EH modes, as
$$\frac{J_{l-1}(ha)}{haJ_l(ha)} = -\frac{n_1^2+n_2^2}{2n_1^2}\frac{K'_l(qa)}{qaK_l(qa)} + \frac{l}{h^2a^2} + R. \tag{A2}$$
Here, we have introduced the notation
$$R = \left[\left(\frac{n_1^2-n_2^2}{2n_1^2}\right)^2\left(\frac{K'_l(qa)}{qaK_l(qa)}\right)^2 + \left(\frac{l\beta}{n_1 k}\right)^2\left(\frac{1}{q^2a^2}+\frac{1}{h^2a^2}\right)^2\right]^{1/2}. \tag{A3}$$
For l = 0, the eigenvalue equation (1) leads to TE and TM modes [42], with the eigenvalue equation for the TE modes given as
$$\frac{J_1(ha)}{haJ_0(ha)} = -\frac{K_1(qa)}{qaK_0(qa)} \tag{A4}$$
and for the TM modes as
$$\frac{J_1(ha)}{haJ_0(ha)} = -\frac{n_2^2}{n_1^2}\frac{K_1(qa)}{qaK_0(qa)}. \tag{A5}$$
According to [42], the fiber size parameter V is defined as $V = ka\sqrt{n_1^2-n_2^2}$. The cutoff values V c for HE 1m modes are determined as solutions to the equation $J_1(V_c) = 0$. For HE lm modes with l = 2, 3, . . . , the cutoff values are obtained as nonzero solutions to the equation $(n_1^2/n_2^2+1)(l-1)J_{l-1}(V_c) = V_c J_l(V_c)$. The cutoff values V c for EH lm modes, where l = 1, 2, . . . , are determined as nonzero solutions to the equation $J_l(V_c) = 0$. For TE 0m and TM 0m modes, the cutoff values V c are obtained as solutions to the equation $J_0(V_c) = 0$.
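These cutoff conditions are easy to evaluate numerically. In the minimal sketch below (names and grid choices are ours), the TE 0m /TM 0m, EH lm, and HE 1m cutoffs come from Bessel-function zeros, and the HE lm cutoffs for l ≥ 2 follow from a bracketed root search:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import jn_zeros, jv

n1, n2 = 1.4537, 1.0

# TE_0m, TM_0m cutoffs: zeros of J0; EH_lm cutoffs: nonzero zeros of J_l;
# HE_1m cutoffs: zeros of J1 (V_c = 0 for the fundamental HE_11 mode)
print("TE/TM cutoffs V_c:", jn_zeros(0, 3))
print("EH_1m cutoffs V_c:", jn_zeros(1, 2))
print("HE_1m cutoffs V_c:", np.concatenate(([0.0], jn_zeros(1, 2))))

# HE_lm cutoffs for l >= 2: nonzero solutions of
# (n1^2/n2^2 + 1) (l - 1) J_{l-1}(V) = V J_l(V)
def he_cutoff_fn(V, l):
    return (n1 ** 2 / n2 ** 2 + 1) * (l - 1) * jv(l - 1, V) - V * jv(l, V)

l = 2
grid = np.linspace(0.1, 15.0, 3000)
vals = he_cutoff_fn(grid, l)
vc = [brentq(he_cutoff_fn, grid[i], grid[i + 1], args=(l,))
      for i in range(len(grid) - 1)
      if np.sign(vals[i]) != np.sign(vals[i + 1])]
print(f"HE_{l}m cutoffs V_c:", np.round(vc, 4))
```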
The electric and magnetic components of the field can be presented in the form
$$\begin{pmatrix}\boldsymbol{\mathcal{E}}\\ \boldsymbol{\mathcal{H}}\end{pmatrix} = \frac{1}{2}\begin{pmatrix}E\\ H\end{pmatrix} e^{-i\omega t} + \text{c.c.}, \tag{A6}$$
where E and H are the envelope functions. For a guided mode with a propagation constant β and an azimuthal mode order l, we can write
$$\begin{pmatrix}E\\ H\end{pmatrix} = \begin{pmatrix}\mathbf{e}\\ \mathbf{h}\end{pmatrix} e^{i\beta z+il\varphi}, \tag{A7}$$
where e and h are the reduced mode profile functions, and β and l can take not only positive but also negative values. The mode profile functions e and h can be decomposed into radial, azimuthal, and axial components denoted by the subscripts r, ϕ, and z, respectively. We summarize the expressions for the mode functions of hybrid modes, TE modes, and TM modes below [42].
Hybrid modes
It is convenient to introduce the parameters
$$s = l\left(\frac{1}{h^2a^2}+\frac{1}{q^2a^2}\right)\left[\frac{J'_l(ha)}{haJ_l(ha)}+\frac{K'_l(qa)}{qaK_l(qa)}\right]^{-1}, \qquad s_1 = \frac{\beta^2}{k^2n_1^2}\,s, \qquad s_2 = \frac{\beta^2}{k^2n_2^2}\,s. \tag{A8}$$
Then, we find, for r < a,
$$e_r = iA\frac{\beta}{2h}\left[(1-s)J_{l-1}(hr) - (1+s)J_{l+1}(hr)\right], \quad e_\varphi = -A\frac{\beta}{2h}\left[(1-s)J_{l-1}(hr) + (1+s)J_{l+1}(hr)\right], \quad e_z = AJ_l(hr), \tag{A9}$$
and
$$h_r = A\frac{\omega\epsilon_0 n_1^2}{2h}\left[(1-s_1)J_{l-1}(hr) + (1+s_1)J_{l+1}(hr)\right], \quad h_\varphi = iA\frac{\omega\epsilon_0 n_1^2}{2h}\left[(1-s_1)J_{l-1}(hr) - (1+s_1)J_{l+1}(hr)\right], \quad h_z = iA\frac{\beta s}{\omega\mu_0}J_l(hr), \tag{A10}$$
and, for r > a,
$$e_r = iA\frac{\beta}{2q}\frac{J_l(ha)}{K_l(qa)}\left[(1-s)K_{l-1}(qr) + (1+s)K_{l+1}(qr)\right], \quad e_\varphi = -A\frac{\beta}{2q}\frac{J_l(ha)}{K_l(qa)}\left[(1-s)K_{l-1}(qr) - (1+s)K_{l+1}(qr)\right], \quad e_z = A\frac{J_l(ha)}{K_l(qa)}K_l(qr), \tag{A11}$$
and
$$h_r = A\frac{\omega\epsilon_0 n_2^2}{2q}\frac{J_l(ha)}{K_l(qa)}\left[(1-s_2)K_{l-1}(qr) - (1+s_2)K_{l+1}(qr)\right], \quad h_\varphi = iA\frac{\omega\epsilon_0 n_2^2}{2q}\frac{J_l(ha)}{K_l(qa)}\left[(1-s_2)K_{l-1}(qr) + (1+s_2)K_{l+1}(qr)\right], \quad h_z = iA\frac{\beta s}{\omega\mu_0}\frac{J_l(ha)}{K_l(qa)}K_l(qr). \tag{A12}$$
Here, the parameter A is a constant that can be determined from the propagating power of the field.
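For numerical work, Eqs. (A8), (A9), and (A11) translate directly into code. The following minimal sketch (our own helper name; β is assumed to already solve the relevant eigenvalue equation, and r is a numpy array) evaluates the electric mode-profile components on a radial grid; the magnetic components of Eqs. (A10) and (A12) can be coded analogously:

```python
import numpy as np
from scipy.special import jv, kv

def hybrid_mode_profile(r, beta, a, k, n1, n2, l, A=1.0):
    """Electric mode-profile components (e_r, e_phi, e_z) of a hybrid mode
    from Eqs. (A8), (A9), and (A11), on a radial grid r."""
    h = np.sqrt(n1 ** 2 * k ** 2 - beta ** 2)
    q = np.sqrt(beta ** 2 - n2 ** 2 * k ** 2)
    # derivatives via recurrences: J' = (J_{l-1} - J_{l+1})/2, K' = -(K_{l-1} + K_{l+1})/2
    Jp = 0.5 * (jv(l - 1, h * a) - jv(l + 1, h * a))
    Kp = -0.5 * (kv(l - 1, q * a) + kv(l + 1, q * a))
    s = l * (1 / (h * a) ** 2 + 1 / (q * a) ** 2) / (
        Jp / (h * a * jv(l, h * a)) + Kp / (q * a * kv(l, q * a)))
    ratio = jv(l, h * a) / kv(l, q * a)   # continuity factor for r > a
    e_r = np.empty(r.shape, complex)
    e_phi = np.empty(r.shape, complex)
    e_z = np.empty(r.shape, complex)
    m = r < a                              # inside-the-core mask
    e_r[m] = 1j * A * beta / (2 * h) * ((1 - s) * jv(l - 1, h * r[m])
                                        - (1 + s) * jv(l + 1, h * r[m]))
    e_phi[m] = -A * beta / (2 * h) * ((1 - s) * jv(l - 1, h * r[m])
                                      + (1 + s) * jv(l + 1, h * r[m]))
    e_z[m] = A * jv(l, h * r[m])
    o = ~m                                 # evanescent region
    e_r[o] = 1j * A * beta / (2 * q) * ratio * ((1 - s) * kv(l - 1, q * r[o])
                                                + (1 + s) * kv(l + 1, q * r[o]))
    e_phi[o] = -A * beta / (2 * q) * ratio * ((1 - s) * kv(l - 1, q * r[o])
                                              - (1 + s) * kv(l + 1, q * r[o]))
    e_z[o] = A * ratio * kv(l, q * r[o])
    return e_r, e_phi, e_z
```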
TE modes
For r < a, we have
$$e_r = 0, \quad e_\varphi = i\frac{\omega\mu_0}{h}AJ_1(hr), \quad e_z = 0, \tag{A13}$$
and
$$h_r = -i\frac{\beta}{h}AJ_1(hr), \quad h_\varphi = 0, \quad h_z = AJ_0(hr). \tag{A14}$$
For r > a, we have
$$e_r = 0, \quad e_\varphi = -i\frac{\omega\mu_0}{q}\frac{J_0(ha)}{K_0(qa)}AK_1(qr), \quad e_z = 0, \tag{A15}$$
and
$$h_r = i\frac{\beta}{q}\frac{J_0(ha)}{K_0(qa)}AK_1(qr), \quad h_\varphi = 0, \quad h_z = \frac{J_0(ha)}{K_0(qa)}AK_0(qr). \tag{A16}$$
TM modes
For r < a, we have
$$e_r = -i\frac{\beta}{h}AJ_1(hr), \quad e_\varphi = 0, \quad e_z = AJ_0(hr), \tag{A17}$$
and
$$h_r = 0, \quad h_\varphi = -i\frac{\omega\epsilon_0 n_1^2}{h}AJ_1(hr), \quad h_z = 0. \tag{A18}$$
For r > a, we have
$$e_r = i\frac{\beta}{q}\frac{J_0(ha)}{K_0(qa)}AK_1(qr), \quad e_\varphi = 0, \quad e_z = \frac{J_0(ha)}{K_0(qa)}AK_0(qr), \tag{A19}$$
and
$$h_r = 0, \quad h_\varphi = i\frac{\omega\epsilon_0 n_2^2}{q}\frac{J_0(ha)}{K_0(qa)}AK_1(qr), \quad h_z = 0. \tag{A20}$$
Appendix B: Poynting vector
In this appendix, we calculate the Poynting vector S for the different mode families. First we note that, for guided modes of fibers, the radial component of the Poynting vector is always zero, that is, S r = 0.
Hybrid modes
For quasicircularly polarized hybrid modes, we find that the axial and azimuthal components of the Poynting vector are given, for r < a, by
S_z = f|A|^2\,\frac{\omega\epsilon_0 n_1^2\beta}{4h^2}\,[(1-s)(1-s_1)J_{l-1}^2(hr) + (1+s)(1+s_1)J_{l+1}^2(hr)],
S_\varphi = p|A|^2\,\frac{\omega\epsilon_0 n_1^2}{4h}\,[(1-2s_1+ss_1)J_{l-1}(hr)J_l(hr) + (1+2s_1+ss_1)J_{l+1}(hr)J_l(hr)],   (B1)
and, for r > a, by
S_z = f|A|^2\,\frac{\omega\epsilon_0 n_2^2\beta}{4q^2}\,\frac{J_l^2(ha)}{K_l^2(qa)}\,[(1-s)(1-s_2)K_{l-1}^2(qr) + (1+s)(1+s_2)K_{l+1}^2(qr)],
S_\varphi = p|A|^2\,\frac{\omega\epsilon_0 n_2^2}{4q}\,\frac{J_l^2(ha)}{K_l^2(qa)}\,[(1-2s_2+ss_2)K_{l-1}(qr)K_l(qr) - (1+2s_2+ss_2)K_{l+1}(qr)K_l(qr)].   (B2)
Note that the expressions for S ϕ in Eqs. (B1) and (B2) contain cross terms of the types J l±1 (hr)J l (hr) and K l±1 (qr)K l (qr). These terms appear as a result of the interference between different terms associated with different Bessel functions. Due to the interference, the azimuthal component S ϕ of the Poynting vector of a quasicircularly polarized hybrid mode may have different signs in different regions of space.
For quasilinearly polarized hybrid modes, we find that the axial component of the Poynting vector is given, for r < a, by
S_z = f|A|^2\,\frac{\omega\epsilon_0 n_1^2\beta}{4h^2}\,\{(1-s)(1-s_1)J_{l-1}^2(hr) + (1+s)(1+s_1)J_{l+1}^2(hr) - 2(1-ss_1)J_{l-1}(hr)J_{l+1}(hr)\cos[2(l\varphi-\varphi_{\mathrm{pol}})]\},   (B3)
and, for r > a, by
S_z = f|A|^2\,\frac{\omega\epsilon_0 n_2^2\beta}{4q^2}\,\frac{J_l^2(ha)}{K_l^2(qa)}\,\{(1-s)(1-s_2)K_{l-1}^2(qr) + (1+s)(1+s_2)K_{l+1}^2(qr) + 2(1-ss_2)K_{l-1}(qr)K_{l+1}(qr)\cos[2(l\varphi-\varphi_{\mathrm{pol}})]\}.   (B4)
In both regions, we have S ϕ = 0.
It is worth noting that expressions (B3) and (B4) for S z contain cross terms of the types J l−1 (hr)J l+1 (hr) and K l−1 (qr)K l+1 (qr). A similar argument as the one above confirms that the axial component S z of the Poynting vector of a quasilinearly polarized hybrid mode can have different signs in different regions of space.
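A minimal numerical sketch of Eqs. (B1) and (B2) for the quasicircularly polarized case is given below. As in the previous sketches, h, q, s, s1, s2, and β are assumed to be known from the eigenvalue problem, and the interface is illustrative.

```python
# Minimal sketch: axial and azimuthal Poynting components, Eqs. (B1)-(B2).
import numpy as np
from scipy.special import jv, kv

eps0 = 8.8541878128e-12  # vacuum permittivity

def hybrid_S(r, a, omega, beta, h, q, s, s1, s2, n1, n2,
             A=1.0, l=1, f=1, p=1):
    r = np.atleast_1d(np.asarray(r, dtype=float))
    Sz = np.empty(r.shape)
    Sphi = np.empty(r.shape)
    i = r < a
    ri, ro = r[i], r[~i]
    pref_in = abs(A)**2 * omega * eps0 * n1**2
    Sz[i] = f * pref_in * beta / (4 * h**2) * (
        (1 - s) * (1 - s1) * jv(l - 1, h * ri)**2
        + (1 + s) * (1 + s1) * jv(l + 1, h * ri)**2)
    Sphi[i] = p * pref_in / (4 * h) * (
        (1 - 2 * s1 + s * s1) * jv(l - 1, h * ri) * jv(l, h * ri)
        + (1 + 2 * s1 + s * s1) * jv(l + 1, h * ri) * jv(l, h * ri))
    c2 = (jv(l, h * a) / kv(l, q * a))**2
    pref_out = abs(A)**2 * omega * eps0 * n2**2 * c2
    Sz[~i] = f * pref_out * beta / (4 * q**2) * (
        (1 - s) * (1 - s2) * kv(l - 1, q * ro)**2
        + (1 + s) * (1 + s2) * kv(l + 1, q * ro)**2)
    Sphi[~i] = p * pref_out / (4 * q) * (
        (1 - 2 * s2 + s * s2) * kv(l - 1, q * ro) * kv(l, q * ro)
        - (1 + 2 * s2 + s * s2) * kv(l + 1, q * ro) * kv(l, q * ro))
    return Sz, Sphi
```

Together with a numerical eigenvalue solution, plotting Sz and Sϕ against r reproduces the qualitative behavior shown in Figs. 10 and 11.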
TE modes
For TE modes, we find that the axial component of the Poynting vector is given, for r < a, by
S_z = f|A|^2\,\frac{\omega\mu_0\beta}{2h^2}\,J_1^2(hr)   (B5)
and, for r > a, by
S_z = f|A|^2\,\frac{\omega\mu_0\beta}{2q^2}\,\frac{J_0^2(ha)}{K_0^2(qa)}\,K_1^2(qr).   (B6)
The azimuthal component is zero, that is, S ϕ = 0. It is clear from Eqs. (B5) and (B6) that, for TE modes, the axial component S z of the Poynting vector is positive with respect to the direction of propagation, in agreement with the results of Ref. [44].
TM modes
For TM modes, we find that the axial component of the Poynting vector is given, for r < a, by

S_z = f|A|^2\,\frac{\omega\epsilon_0 n_1^2\beta}{2h^2}\,J_1^2(hr)   (B7)

and, for r > a, by

S_z = f|A|^2\,\frac{\omega\epsilon_0 n_2^2\beta}{2q^2}\,\frac{J_0^2(ha)}{K_0^2(qa)}\,K_1^2(qr).   (B8)
The azimuthal component is zero, that is, S ϕ = 0. It is clear from Eqs. (B7) and (B8) that, for TM modes, the axial component S z of the Poynting vector is positive with respect to the direction of propagation, in agreement with the results of Ref. [44].
Appendix C: Power
The propagating power of a guided mode is P = P in +P out , where P in and P out are the propagating powers inside and outside the fiber.
Hybrid modes
For hybrid modes, the explicit expressions for the powers inside and outside the fiber are [42,71]

P_{\mathrm{in}} = f|A|^2\pi a^2\,\frac{\omega\epsilon_0 n_1^2\beta}{4h^2}\,\{2(1-s)(1-s_1)[J_{l-1}^2(ha) - J_{l-2}(ha)J_l(ha)] + (1+s)(1+s_1)[J_{l+1}^2(ha) - J_{l+2}(ha)J_l(ha)]\}   (C1)

and

P_{\mathrm{out}} = f|A|^2\pi a^2\,\frac{\omega\epsilon_0 n_2^2\beta}{4q^2}\,\frac{J_l^2(ha)}{K_l^2(qa)}\,\{(1-s)(1-s_2)[K_{l-2}(qa)K_l(qa) - K_{l-1}^2(qa)] + (1+s)(1+s_2)[K_{l+2}(qa)K_l(qa) - K_{l+1}^2(qa)]\}.   (C2)

We consider the asymptotic behavior of P_out in the case where the fiber size parameter V is near the cutoff value V_c for a hybrid mode. In the limit V → V_c, we have qa → 0 and ha → V_c.
For EH lm modes, the cutoff value V c is determined as a nonzero solution to the equation J l (V c ) = 0. In the limit V → V c for EH lm modes, the parameters s and s 2 tend to a limiting value s c , where s c = −1. Consequently, the term in the last line of Eq. (C2) is dominant. On the other hand, in the limit qa → 0, we have K l (qa) ≃ (1/2)(l − 1)!(2/qa) l for l ≥ 1 [94]. With the use of the boundary conditions for the field at the fiber surface, we can show that J l (ha) = O(q 2 a 2 ). When we use the aforementioned asymptotic expressions, we can show that P out tends to a finite value. Meanwhile, P in tends to a nonzero finite value. Consequently, in the limit V → V c for EH lm modes, the fractional power outside the fiber η P tends to a limiting value that is smaller than unity.
For HE lm modes, the cutoff value V c is not a solution to the equation J l (V c ) = 0 except for the case of l = 1. In the limit V → V c for HE lm modes, we have s = −1 + O(q 2 a 2 ) and s 2 = −1 + O(q 2 a 2 ). When we use these asymptotic expressions and the approximate expression K l (qa) ≃ (1/2)(l − 1)!(2/qa) l for l ≥ 1 [94], we can show that, for l ≥ 3, the power outside the fiber P out tends to a finite value. Meanwhile, P in tends to a nonzero finite value. Consequently, in the limit V → V c for HE lm modes with l ≥ 3, the fractional power outside the fiber η P tends to a limiting value that is smaller than unity.
The analysis in the above paragraph is not valid for the HE lm modes with l = 1, 2. Indeed, for l = 1, 2, the expression on the right-hand side of Eq. (C2) contains the modified Bessel function K 0 (qa). The asymptotic expression for this function with a small argument qa is K 0 (qa) ≃ − ln(qa/2) − γ, where γ ≃ 0.5772 is the Euler-Mascheroni constant [94]. In addition, for l = 1, the cutoff value V c is a solution to the equation J 1 (V c ) = 0, and the corresponding magnitude of J 1 (ha) in the limit V → V c is found to be on the order of 1/| ln qa|. Then, we can show that, in the limit V → V c for the HE lm modes with l = 1, 2, we have P out → ∞. Meanwhile, P in tends to a finite value. Consequently, in the limit V → V c for the HE lm modes with l = 1, 2, the fractional power outside the fiber η P tends to unity.
TE modes
For TE modes, the explicit expressions for the powers inside and outside the fiber are [42]

P_{\mathrm{in}} = f|A|^2\pi a^2\,\frac{\omega\mu_0\beta}{2h^2}\,[J_1^2(ha) - J_0(ha)J_2(ha)]   (C3)

and

P_{\mathrm{out}} = f|A|^2\pi a^2\,\frac{\omega\mu_0\beta}{2q^2}\,\frac{J_0^2(ha)}{K_0^2(qa)}\,[K_0(qa)K_2(qa) - K_1^2(qa)].   (C4)

We calculate the fractional power outside the fiber η_P for a TE mode in the limit where the fiber size parameter V tends to the cutoff value V_c. This cutoff value V_c is determined as a solution to the equation J_0(V_c) = 0. In the limit V → V_c, we have qa → 0 and ha → V_c. When we use the eigenvalue equation (A4) for TE modes and the asymptotic expressions for the modified Bessel functions K_0(qa) and K_1(qa) with a small argument qa, we find from Eq. (C4) that P_out → ∞. Meanwhile, P_in tends to a finite value. Consequently, in the limit V → V_c for TE modes, the fractional power outside the fiber η_P tends to unity.
TM modes
For TM modes, the explicit expressions for the powers inside and outside the fiber are [42]

P_{\mathrm{in}} = f|A|^2\pi a^2\,\frac{\omega\epsilon_0 n_1^2\beta}{2h^2}\,[J_1^2(ha) - J_0(ha)J_2(ha)]   (C5)

and

P_{\mathrm{out}} = f|A|^2\pi a^2\,\frac{\omega\epsilon_0 n_2^2\beta}{2q^2}\,\frac{J_0^2(ha)}{K_0^2(qa)}\,[K_0(qa)K_2(qa) - K_1^2(qa)].   (C6)

We calculate the fractional power outside the fiber η_P for a TM mode in the limit where the fiber size parameter V tends to the cutoff value V_c. The cutoff value V_c is determined as a solution to the equation J_0(V_c) = 0. In the limit V → V_c, we have qa → 0 and ha → V_c. When we use the eigenvalue equation (A5) for TM modes and the asymptotic expressions for the modified Bessel functions K_0(qa) and K_1(qa) with a small argument qa, we find from Eq. (C6) that P_out → ∞. Meanwhile, P_in tends to a finite value. Consequently, in the limit V → V_c for TM modes, the fractional power outside the fiber η_P tends to unity.
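As a worked example of this cutoff behavior, the sketch below evaluates the fractional power η_P = P_out/(P_in + P_out) for the TE01 mode from Eqs. (A4), (C3), and (C4); the TM case is analogous with Eqs. (A5), (C5), and (C6). The root bracketing is our choice and assumes V is above the TE01 cutoff 2.405; the logarithmic divergence of P_out makes η_P approach unity only slowly.

```python
# Minimal sketch: eta_P = P_out/(P_in + P_out) for the TE01 mode.
import numpy as np
from scipy.special import jv, kv
from scipy.optimize import brentq

def te_eta_P(V):
    # Solve the TE eigenvalue equation (A4) for ha, with qa = sqrt(V^2 - ha^2)
    def eig(ha):
        qa = np.sqrt(V**2 - ha**2)
        return jv(1, ha) / (ha * jv(0, ha)) + kv(1, qa) / (qa * kv(0, qa))
    lo = 2.40483 + 1e-4          # just above the first zero of J0
    ha = brentq(eig, lo, V - 1e-9)
    qa = np.sqrt(V**2 - ha**2)
    # Eqs. (C3)-(C4), keeping only the factors that do not cancel in the ratio
    P_in = (jv(1, ha)**2 - jv(0, ha) * jv(2, ha)) / ha**2
    P_out = (jv(0, ha)**2 / kv(0, qa)**2) * (
        kv(0, qa) * kv(2, qa) - kv(1, qa)**2) / qa**2
    return P_out / (P_in + P_out)

# eta_P grows toward unity as V approaches the TE01 cutoff Vc = 2.405
for V in (2.41, 2.6, 3.5):
    print(V, te_eta_P(V))
```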
Appendix D: Energy per unit length

The energy per unit length of a guided mode is U = U_in + U_out, where U_in and U_out are the energies per unit length inside and outside the fiber.

Hybrid modes

For hybrid modes, the explicit expressions for the energies per unit length inside and outside the fiber are found to be lengthy combinations of Bessel functions [42,71]; U_in contains combinations of the type J_{l-1}^2(ha) - J_{l-2}(ha)J_l(ha), and U_out the corresponding combinations of the modified Bessel functions K_l.

TE modes

For TE modes, the explicit expressions for the energies per unit length inside and outside the fiber are found to be

U_{\mathrm{in}} = |A|^2\pi a^2\,\frac{\mu_0}{4}\left[J_0^2(ha) + \frac{2n_1^2k^2}{h^2}J_1^2(ha) + \left(1-\frac{2n_1^2k^2}{h^2}\right)J_0(ha)J_2(ha)\right]   (D3)

and

U_{\mathrm{out}} = |A|^2\pi a^2\,\frac{\mu_0}{4}\,\frac{J_0^2(ha)}{K_0^2(qa)}\left[\left(1+\frac{2n_2^2k^2}{q^2}\right)K_0(qa)K_2(qa) - \frac{2n_2^2k^2}{q^2}K_1^2(qa) - K_0^2(qa)\right].   (D4)

TM modes

For TM modes, the explicit expressions for the energies per unit length inside and outside the fiber are found to be

U_{\mathrm{in}} = |A|^2\pi a^2\,\frac{\epsilon_0 n_1^2}{4}\left[J_0^2(ha) + \frac{2n_1^2k^2}{h^2}J_1^2(ha) + \left(1-\frac{2n_1^2k^2}{h^2}\right)J_0(ha)J_2(ha)\right]   (D5)

and

U_{\mathrm{out}} = |A|^2\pi a^2\,\frac{\epsilon_0 n_2^2}{4}\,\frac{J_0^2(ha)}{K_0^2(qa)}\left[\left(1+\frac{2n_2^2k^2}{q^2}\right)K_0(qa)K_2(qa) - \frac{2n_2^2k^2}{q^2}K_1^2(qa) - K_0^2(qa)\right].   (D6)
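A sketch analogous to the one above evaluates the fractional energy η_U = U_out/(U_in + U_out) for TE modes from Eqs. (D3) and (D4); the common prefactors cancel in the ratio, and the relation ka = V/√(n1² − n2²) converts the size parameter V to ka. The eigenvalue bracketing again assumes V > 2.405.

```python
# Minimal sketch: eta_U = U_out/(U_in + U_out) for the TE01 mode.
import numpy as np
from scipy.special import jv, kv
from scipy.optimize import brentq

n1, n2 = 1.4537, 1.0

def te_eta_U(V):
    def eig(ha):  # TE eigenvalue equation (A4)
        qa = np.sqrt(V**2 - ha**2)
        return jv(1, ha) / (ha * jv(0, ha)) + kv(1, qa) / (qa * kv(0, qa))
    ha = brentq(eig, 2.40483 + 1e-4, V - 1e-9)   # TE01 branch
    qa = np.sqrt(V**2 - ha**2)
    ka = V / np.sqrt(n1**2 - n2**2)              # k*a from the definition of V
    cin = 2 * n1**2 * ka**2 / ha**2
    cout = 2 * n2**2 * ka**2 / qa**2
    U_in = jv(0, ha)**2 + cin * jv(1, ha)**2 + (1 - cin) * jv(0, ha) * jv(2, ha)
    U_out = (jv(0, ha)**2 / kv(0, qa)**2) * (
        (1 + cout) * kv(0, qa) * kv(2, qa) - cout * kv(1, qa)**2 - kv(0, qa)**2)
    return U_out / (U_in + U_out)

print(te_eta_U(3.0))
```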
FIG. 1. (Color online) Propagation constant β, normalized to the free-space wave number k, as a function of the fiber radius a. The wavelength of the light is chosen to be λ = 780 nm. The refractive index of the fiber is n1 = 1.4537 and of the surrounding medium is n2 = 1.

FIG. 2. (Color online) Penetration length Λ = 1/q as a function of the fiber radius a. The parameters used are the same as for Fig. 1.

FIG. 3. (Color online) Cross-sectional profiles of the electric intensity distributions |e|² of the fields in (a) the quasicircularly polarized HE11 mode, (b) the TE01 mode, (c) the TM01 mode, and (d) the quasicircularly polarized HE21 mode.

FIG. 4. Electric intensities |e|² of the fields in (a) the quasicircularly polarized HE11 mode, (b) the TE01 mode, (c) the TM01 mode, and (d) the quasicircularly polarized HE21 mode as functions of the radial distance r. The parameters used are the same as for Fig. 3. The vertical dotted lines indicate the position of the fiber surface.

FIG. 5. (Color online) Cross-sectional profiles of the electric intensity distributions |e|² of the fields in (a) the quasilinearly polarized HE11 mode and (b) the quasilinearly polarized HE21 mode. The inner (r/a < 1) and outer (r/a > 1) parts are distinguished by the blue and cyan colors, respectively. The symmetry axis orientation angle is ϕ_pol = 0. Other parameters are as for Figs. 1 and 3.

FIG. 6. (Color online) Electric intensities |e|² of the fields in the quasilinearly polarized (a) HE11 and (b) HE21 modes as functions of the radial distance r for two different angles ϕ. The parameters used are as for Fig. 5. The vertical dotted lines indicate the position of the fiber surface.

FIG. 7. (Color online) Cross-sectional profiles of the electric intensity distributions |e|² of the fields in (a) the quasicircularly and (b) the quasilinearly polarized EH11 modes. The inner (r/a < 1) and outer (r/a > 1) parts are distinguished by the blue and cyan colors, respectively. The fiber radius is a = 600 nm. In (b), the symmetry axis orientation angle is ϕ_pol = 0. Other parameters are as for Fig. 1.

Figures 3-8 show that the shapes of the profiles inside and outside the fiber are very different from each other.

FIG. 8. (Color online) Electric intensities |e|² of the fields in (a) the quasicircularly and (b) the quasilinearly polarized EH11 modes as functions of the radial distance r. The parameters used are the same as for Fig. 7. The vertical dotted lines indicate the position of the fiber surface.

FIG. 9. (Color online) Effective mode radius r_eff as a function of the fiber radius a. The parameters used are the same as for Fig. 1.

The axial component S_z and the azimuthal component S_ϕ of the Poynting vector are shown in Fig. 10. The dashed blue curves in Figs. 10(a) and 10(d) show that the azimuthal component S_ϕ is nonzero for the quasicircularly polarized hybrid modes. In these cases, outside the fiber, the azimuthal component S_ϕ is comparable to [see Fig. 10(a)] and may even be slightly larger than [see Fig. 10(d)] the axial component S_z. For the parameters used, both components S_z and S_ϕ are positive. According to Figs. 10(b) and 10(c) and Appendix B, the azimuthal component S_ϕ of the Poynting vector vanishes for TE and TM modes. The figure indicates that the fractional power outside the fiber for higher-order modes is larger than that for the fundamental HE11 mode.

FIG. 10. (Color online) Components S_z (solid red curves) and S_ϕ (dashed blue curves) of the Poynting vectors of the fields in (a) the quasicircularly polarized HE11 mode, (b) the TE01 mode, (c) the TM01 mode, and (d) the quasicircularly polarized HE21 mode as functions of the radial distance r. The fiber radius is a = 400 nm and the same power is used for calculations in all four cases. The propagation direction index is f = + and the phase circulation direction index for the HE modes is p = +. All other parameters are as for Fig. 1. The vertical dotted lines indicate the position of the fiber surface.

FIG. 11. (Color online) Components S_z (solid red curves) and S_ϕ (dashed blue curves) of the Poynting vectors of the fields in the quasicircularly polarized (a) HE12 and (b) EH11 modes as functions of the radial distance r. The fiber radius is a = 600 nm and all other parameters are as for Figs. 1 and 10. The vertical dotted lines indicate the position of the fiber surface.

FIG. 12. (Color online) (a) Cross-sectional profile and (b) radial dependence of the axial Poynting vector component S_z of the quasilinearly polarized HE21 mode. The refractive indices of the fiber and the surrounding medium are n1 = 3.25 and n2 = 1. The fiber radius is a = 270 nm and the wavelength of light is λ = 1500 nm.

FIG. 13. (Color online) Fractional power outside the fiber η_P as a function of the fiber radius a. The parameters used are as for Fig. 1. The vertical dotted lines indicate the positions of the cutoffs for higher-order modes.

FIG. 14. (Color online) Fractional energy outside the fiber η_U as a function of the fiber radius a. The parameters used are as for Fig. 1. The vertical dotted lines indicate the positions of the cutoffs for higher-order modes.

The orbital parts S_z^orb and S_ϕ^orb of the axial and azimuthal components of the Poynting vector are positive with respect to the direction of propagation and the direction of phase circulation, respectively. However, the signs of the spin parts S_z^spin and S_ϕ^spin of the axial and azimuthal components can vary inside the fiber. We observe from Figs. 15 and 17(a) that, outside the fiber, the spin part S_z^spin of the axial component S_z of the Poynting vector is negative.

FIG. 15. (Color online) Orbital (solid red curves) and spin (dashed blue curves) parts of the axial component S_z of the Poynting vector as functions of the radial distance r. The field is in (a) the quasicircularly polarized HE11 mode, (b) the TE01 mode, (c) the TM01 mode, and (d) the quasicircularly polarized HE21 mode. The fiber radius is a = 400 nm and the same power is used for calculations in all four cases. The propagation direction index is f = + and the phase circulation direction index for the HE modes in parts (a) and (d) is p = +. All other parameters are as for Fig. 1. The vertical dotted lines indicate the position of the fiber surface.

FIG. 16. (Color online) Orbital (solid red curves) and spin (dashed blue curves) parts of the azimuthal component S_ϕ of the Poynting vector as functions of the radial distance r. The field is in (a) the quasicircularly polarized HE11 mode and (b) the quasicircularly polarized HE21 mode. The fiber radius is a = 400 nm and the same power is used for calculations in both cases. The propagation direction index is f = + and the phase circulation direction index is p = +. All other parameters are as for Fig. 1. The vertical dotted lines indicate the position of the fiber surface.

FIG. 17. (Color online) Orbital (solid red curves) and spin (dashed blue curves) parts of (a) the axial component and (b) the azimuthal component of the Poynting vector of the quasicircularly polarized EH11 mode as functions of the radial distance r. The fiber radius is a = 600 nm. The propagation direction index is f = + and the phase circulation direction index is p = +. All other parameters are as for Fig. 1. The vertical dotted lines indicate the position of the fiber surface.

FIG. 18. (Color online) Angular momentum per photon j_z as a function of the fiber radius a for quasicircularly polarized hybrid modes with the positive phase circulation direction p = +. The parameters used are as for Fig. 1. The vertical dotted lines indicate the positions of the cutoffs for higher-order modes.

According to Eqs. (24)-(26), the orbital, spin, and surface parts of the axial angular momentum per unit length J_z depend on the direction of phase circulation p, but not on the direction of propagation f. Equations (24)-(26) are in agreement with the relations J_z^orb = (1/c²) ∫ r S_ϕ^orb dr and J_z^spin + J_z^surf = (1/c²) ∫ r S_ϕ^spin dr. Here, S_ϕ^orb and S_ϕ^spin are the orbital and spin parts of the azimuthal component S_ϕ of the Poynting vector and are given in Appendix E.

The orbital part j_z^orb increases with increasing azimuthal mode order l.

FIG. 19. (Color online) Orbital angular momentum per photon j_z^orb as a function of the fiber radius a for quasicircularly polarized hybrid modes with the positive phase circulation direction p = +. The parameters used are as for Fig. 1. The vertical dotted lines indicate the positions of the cutoffs for higher-order modes.

FIG. 20. (Color online) Spin angular momentum per photon j_z^spin as a function of the fiber radius a for quasicircularly polarized hybrid modes with the positive phase circulation direction p = +. The parameters used are the same as for Fig. 1. The vertical dotted lines indicate the positions of the cutoffs for higher-order modes. The horizontal dotted line separates the positive and negative sides of the vertical axis.

FIG. 21. (Color online) Surface angular momentum per photon j_z^surf as a function of the fiber radius a for quasicircularly polarized hybrid modes with the positive phase circulation direction p = +. The parameters used are as for Fig. 1. The vertical dotted lines indicate the positions of the cutoffs for higher-order modes.

From Fig. 21 one can see that, like the spin part j_z^spin, the surface part j_z^surf is positive with respect to the direction of the phase circulation axis for the HE modes and negative for the EH modes. Furthermore, the surface part j_z^surf is always smaller than ħ.

FIG. 22. (Color online) Helicity per photon j^hlcy as a function of the fiber radius a for quasicircularly polarized hybrid modes with the positive propagation direction f = + and the positive phase circulation direction p = +. The parameters used are as for Fig. 1. The vertical dotted lines indicate the positions of the cutoffs for higher-order modes. The horizontal dotted line separates the positive and negative sides of the vertical axis.
Appendix E: Decomposition of the Poynting vector

We decompose the Poynting vector into the orbital and spin parts.

Hybrid modes

We consider quasicircularly polarized hybrid modes. For the electric components of the orbital and spin parts of the Poynting vector, we find the expressions (E1) and (E2). We note that, in the case where l = 1, Eqs. (E1) and (E2) reduce to the results of Ref. [26] for the orbital and spin parts of the Poynting vector of the fundamental HE11 mode. For the magnetic components of the orbital and spin parts of the Poynting vector, we find the expressions (E3) and (E4).

Equations (E1) and (E3) show that the orbital parts of the axial and azimuthal components of the Poynting vector are positive with respect to the direction of propagation and the direction of phase circulation, respectively. Equations (E2) and (E4) show that the signs of the spin parts of the axial and azimuthal components of the Poynting vector can vary in the fiber transverse plane. Thus, the spin parts of the axial and azimuthal components of the Poynting vector may be negative with respect to the direction of propagation and the direction of phase circulation, respectively. Consequently, the orbital and spin parts of the Poynting vector may have opposite signs.

The first expressions in Eqs. (E1) and (E3) indicate that the orbital part of the axial component of the Poynting vector is determined by the local density of energy. Meanwhile, the second expressions in Eqs. (E1) and (E3) contain not only the terms l|e|² and l|h|², which result from the local phase gradient, but also the terms Im(e_r e_ϕ*) and Im(h_r h_ϕ*), which result from the local polarization. This indicates that the orbital part of the azimuthal component of the Poynting vector of a hybrid mode depends not only on the local phase gradient but also on the local polarization, unlike the case of uniformly polarized paraxial beams [53].

TE modes

We consider TE modes. For the fields in these modes, the Poynting vector is aligned along the z axis. For the electric components of the orbital and spin parts of the Poynting vector, we find explicit expressions with S_ϕ^{e-orb} = S_ϕ^{e-spin} = 0. For the magnetic components of the orbital and spin parts of the Poynting vector, we find explicit expressions with S_ϕ^{h-orb} = S_ϕ^{h-spin} = 0. It is clear that the orbital part of the axial component of the Poynting vector of a TE mode is determined by the local density of energy and is positive with respect to the direction of propagation.

TM modes

We consider TM modes. For the fields in these modes, the Poynting vector is aligned along the z axis. For the electric and magnetic components of the orbital and spin parts of the Poynting vector, we find explicit expressions analogous to those for the TE modes.

Appendix F: Angular momentum of guided light

For quasicircularly polarized hybrid modes, the angular momentum per unit length is given by the explicit expressions (F1) and (F2), and we note that, for l = 1, Eqs. (F1) and (F2) reduce to the results of Ref. [71] for the angular momentum of the fundamental HE11 mode. The angular momentum per unit length J_z can be decomposed as J_z = J_z^orb + J_z^spin + J_z^surf. Here, J_z^orb, J_z^spin, and J_z^surf are given by Eqs. (24), (25), and (26), respectively, and are interpreted as the orbital, spin, and surface parts [59-61,63].
In the dual-symmetric formalism, we have J_z^orb = J_z^{e-orb} + J_z^{h-orb}, J_z^spin = J_z^{e-spin} + J_z^{h-spin}, and J_z^surf = J_z^{e-surf} + J_z^{h-surf}. Here, the terms with the letters e and h in the superscripts correspond to the first and second terms, respectively, in Eqs. (24)-(26), and are called the electric and magnetic components. For the electric and magnetic components of the orbital angular momentum, we find explicit expressions in terms of the mode functions.

Appendix G: Helicity and chirality of guided light

We calculate the helicity and chirality of guided light. The cycle-averaged optical helicity density of a monochromatic field is given in terms of the positive-frequency components A and C of the magnetic and electric vector potentials [66,82-86]. With the help of the relations A = E/iω and n²C = μ₀cH/iω, we can then obtain Eq. (28) (see [66,82-86]). The helicity of a light beam is closely related to its chirality. Indeed, according to [87], the cycle-averaged optical chirality density of a monochromatic field is characterized by the quantity given in Eq. (G2) [82,87-91]. When we use the equations ∇ × E = iωμ₀H and ∇ × H = −iωε₀n²E, we obtain Eq. (29) (see [89]). The optical helicity per unit length J^hlcy is given by Eq. (31). For quasicircularly polarized hybrid modes, the explicit expression for the helicity per unit length follows from Eq. (31).
References

[1] L. Tong, R. R. Gattass, J. B. Ashcom, S. He, J. Lou, M. Shen, I. Maxwell, and E. Mazur, Nature (London) 426, 816 (2003).
[2] T. A. Birks, W. J. Wadsworth, and P. St. J. Russell, Opt. Lett. 25, 1415 (2000); S. G. Leon-Saval, T. A. Birks, W. J. Wadsworth, P. St. J. Russell, and M. W. Mason, in Conference on Lasers and Electro-Optics (CLEO), Technical Digest, postconference ed. (Optical Society of America, Washington, D.C., 2004), paper CPDA6.
[3] J. C. Knight, G. Cheung, F. Jacques, and T. A. Birks, Opt. Lett. 22, 1129 (1997); M. Cai and K. Vahala, ibid. 26, 884 (2001).
[4] J. M. Ward, A. Maimaiti, V. H. Le, and S. Nic Chormaic, Rev. Sci. Instrum. 85, 111501 (2014).
[5] J. Bures and R. Ghosh, J. Opt. Soc. Am. A 16, 1992 (1999).
[6] L. Tong, J. Lou, and E. Mazur, Opt. Express 12, 1025 (2004).
[7] Fam Le Kien, J. Q. Liang, K. Hakuta, and V. I. Balykin, Opt. Commun. 242, 445 (2004).
[8] M. J. Morrissey, K. Deasy, M. Frawley, R. Kumar, E. Prel, L. Russell, V. G. Truong, and S. Nic Chormaic, Sensors 13, 10449 (2013).
[9] T. Nieddu, V. Gokhroo, and S. Nic Chormaic, J. Opt. 18, 053001 (2016).
[10] V. I. Balykin, K. Hakuta, Fam Le Kien, J. Q. Liang, and M. Morinaga, Phys. Rev. A 70, 011401(R) (2004); Fam Le Kien, V. I. Balykin, and K. Hakuta, ibid. 70, 063403 (2004).
[11] E. Vetsch, D. Reitz, G. Sagué, R. Schmidt, S. T. Dawkins, and A. Rauschenbeutel, Phys. Rev. Lett. 104, 203603 (2010).
[12] A. Goban, K. S. Choi, D. J. Alton, D. Ding, C. Lacroûte, M. Pototschnig, T. Thiele, N. P. Stern, and H. J. Kimble, Phys. Rev. Lett. 109, 033603 (2012).
[13] P. Domokos, P. Horak, and H. Ritsch, Phys. Rev. A 65, 033832 (2002).
[14] Fam Le Kien, V. I. Balykin, and K. Hakuta, Phys. Rev. A 73, 013819 (2006).
[15] K. P. Nayak, P. N. Melentiev, M. Morinaga, Fam Le Kien, V. I. Balykin, and K. Hakuta, Opt. Express 15, 5431 (2007).
[16] K. P. Nayak, Fam Le Kien, M. Morinaga, and K. Hakuta, Phys. Rev. A 79, 021801(R) (2009).
[17] S. T. Dawkins, R. Mitsch, D. Reitz, E. Vetsch, and A. Rauschenbeutel, Phys. Rev. Lett. 107, 243601 (2011).
[18] D. Reitz, C. Sayrin, R. Mitsch, P. Schneeweiss, and A. Rauschenbeutel, Phys. Rev. Lett. 110, 243603 (2013).
[19] L. Russell, R. Kumar, V. B. Tiwari, and S. Nic Chormaic, Opt. Commun. 309, 313 (2013).
[20] A. Stiebeiner, O. Rehband, R. Garcia-Fernandez, and A. Rauschenbeutel, Opt. Express 17, 21704 (2009).
[21] R. Yalla, Fam Le Kien, M. Morinaga, and K. Hakuta, Phys. Rev. Lett. 109, 063602 (2012).
[22] T. Schröder, M. Fujiwara, T. Noda, H.-Q. Zhao, O. Benson, and S. Takeuchi, Opt. Express 20, 10490 (2012).
[23] L. Liebermeister, F. Petersen, A. V. Münchow, D. Burchardt, J. Hermelbracht, T. Tashima, A. W. Schell, O. Benson, T. Meinhardt, A. Krueger, A. Stiebeiner, A. Rauschenbeutel, H. Weinfurter, and M. Weber, Appl. Phys. Lett. 104, 031101 (2014).
[24] G. Brambilla, G. S. Murugan, J. S. Wilkinson, and D. J. Richardson, Opt. Lett. 32, 3041 (2007).
[25] S. E. Skelton, M. Sergides, R. Patel, E. Karczewska, O. M. Maragó, and P. H. Jones, J. Quant. Spectrosc. Radiat. Transfer 113, 2512 (2012).
[26] Fam Le Kien and A. Rauschenbeutel, Phys. Rev. A 88, 063845 (2013).
[27] J. Fu, X. Yin, and L. Tong, J. Phys. B: At. Mol. Opt. Phys. 40, 4195 (2007).
[28] G. Sagué, A. Baade, and A. Rauschenbeutel, New J. Phys. 10, 113008 (2008).
[29] J. Fu, X. Yin, N. Li, and L. Tong, Chin. Opt. Lett. 112 (2008).
[30] A. V. Masalov and V. G. Minogin, Laser Phys. Lett. 10, 075203 (2013).
[31] C. F. Phelan, T. Hennessy, and Th. Busch, Opt. Express 21, 27093 (2013).
[32] M. Sadgrove, S. Wimberger, and S. Nic Chormaic, Sci. Rep. 6, 28905 (2016).
[33] M. H. Alizadeh and B. M. Reinhard, Opt. Lett. 41, 4735 (2016).
[34] G. Volpe and D. Petrov, Opt. Commun. 237, 89 (2004).
[35] A. Petcu-Colan, M. Frawley, and S. Nic Chormaic, J. Nonlinear Opt. Phys. Mat. 20, 293 (2011).
[36] M. C. Frawley, A. Petcu-Colan, V. G. Truong, and S. Nic Chormaic, Opt. Commun. 285, 4648 (2012).
[37] S. Ravets, J. E. Hoffman, L. A. Orozco, S. L. Rolston, G. Beadie, and F. K. Fatemi, Opt. Express 21, 18325 (2013).
[38] J. M. Ward, A. Maimaiti, V. H. Le, and S. Nic Chormaic, Rev. Sci. Instrum. 85, 111501 (2014).
[39] R. Kumar, V. Gokhroo, K. Deasy, A. Maimaiti, M. C. Frawley, C. Phelan, and S. Nic Chormaic, New J. Phys. 17, 013026 (2015).
[40] A. Maimaiti, Viet Giang Truong, M. Sergides, I. Gusachenko, and S. Nic Chormaic, Sci. Rep. 5, 09077 (2015).
[41] A. Maimaiti, D. Holzmann, Viet Giang Truong, H. Ritsch, and S. Nic Chormaic, Sci. Rep. 6, 30131 (2016).
[42] See, for example, D. Marcuse, Light Transmission Optics (Krieger, Malabar, 1989); A. W. Snyder and J. D. Love, Optical Waveguide Theory (Chapman and Hall, New York, 1983); K. Okamoto, Fundamentals of Optical Waveguides (Elsevier, New York, 2006).
[43] Y. Roichman, B. Sun, Y. Roichman, J. Amato-Grill, and D. G. Grier, Phys. Rev. Lett. 100, 013602 (2008).
[44] S. Mokhov, R. El-Ganainy, and D. N. Christodoulides, Opt. Express 14, 3255 (2006).
[45] A. V. Novitsky and D. V. Novitsky, J. Opt. Soc. Am. A 24, 2844 (2007).
[46] A. Maimaiti, Ph.D. thesis, National University of Ireland, 2016.
[47] M. V. Berry, J. Opt. A: Pure Appl. Opt. 11, 094001 (2009).
[48] A. Bekshaev, K. Y. Bliokh, and M. Soskin, J. Opt. 13, 053001 (2011).
[49] K. Y. Bliokh and F. Nori, Phys. Rev. A 85, 061801(R) (2012).
[50] K. Y. Bliokh, A. Y. Bekshaev, and F. Nori, New J. Phys. 15, 033026 (2013).
[51] K. Y. Bliokh, J. Dressel, and F. Nori, New J. Phys. 16, 093037 (2014).
[52] K. Y. Bliokh, A. Y. Bekshaev, and F. Nori, Nature Commun. 5, 3300 (2014).
[53] K. Y. Bliokh and F. Nori, Phys. Rep. 592, 1 (2015).
[54] See, for example, J. D. Jackson, Classical Electrodynamics, 3rd ed. (Wiley, New York, 1999).
[55] I. Brevik, Phys. Rep. 52, 133 (1979).
[56] M. Abraham, Rend. Circ. Mat. Palermo 28, 1 (1909).
[57] H. Minkowski, Nachr. Ges. Wiss. Göttingen 53 (1908); Math. Ann. 68, 472 (1910).
[58] M. Padgett, S. M. Barnett, and R. Loudon, J. Mod. Opt. 50, 1555 (2003).
[59] C. Cohen-Tannoudji, J. Dupont-Roc, and G. Grynberg, Photons and Atoms (Wiley, New York, 2004), Chap. I.
[60] J. Humblet, Physica 10, 585 (1943).
[61] S. M. Barnett and L. Allen, Opt. Commun. 110, 670 (1994).
[62] J. B. Götte and S. M. Barnett, in The Angular Momentum of Light, edited by D. L. Andrews and M. Babiker (Cambridge University Press, New York, 2012), p. 1.
[63] M. Ornigotti and A. Aiello, Opt. Express 22, 6586 (2014).
[64] L. Allen, M. J. Padgett, and M. Babiker, Prog. Opt. 39, 291 (1999), and references therein.
[65] L. Mandel and E. Wolf, Optical Coherence and Quantum Optics (Cambridge University Press, New York, 1995), p. 488.
[66] R. P. Cameron, S. M. Barnett, and A. M. Yao, New J. Phys. 14, 053050 (2012).
[67] Optical Orbital Angular Momentum, edited by S. M. Barnett, M. Babiker, and M. J. Padgett, Phil. Trans. R. Soc. A 375, issue 2087 (2017).
[68] L. Allen, M. W. Beijersbergen, R. J. C. Spreeuw, and J. P. Woerdman, Phys. Rev. A 45, 8185 (1992).
[69] L. Allen and M. J. Padgett, Opt. Commun. 184, 67 (2000).
[70] C. Maurer, A. Jesacher, S. Fürhapter, S. Bernet, and M. Ritsch-Marte, New J. Phys. 9, 78 (2007).
[71] Fam Le Kien, V. I. Balykin, and K. Hakuta, Phys. Rev. A 73, 053823 (2006).
[72] A. V. Dooghin, N. D. Kundikova, V. S. Liberman, and B. Y. Zeldovich, Phys. Rev. A 45, 8204 (1992); V. S. Liberman and B. Y. Zeldovich, Phys. Rev. A 46, 5199 (1992); M. Y. Darsht, B. Y. Zeldovich, I. V. Kataevskaya, and N. D. Kundikova, JETP 80, 817 (1995) [Zh. Eksp. Theor. Phys. 107, 1464 (1995)].
[73] K. Y. Bliokh, A. Aiello, and M. A. Alonso, in The Angular Momentum of Light, edited by D. L. Andrews and M. Babiker (Cambridge University Press, New York, 2012), p. 174.
[74] A. Aiello, P. Banzer, M. Neugebauer, and G. Leuchs, Nature Photon. 9, 789 (2015).
[75] K. Y. Bliokh, F. J. Rodriguez-Fortuño, F. Nori, and A. V. Zayats, Nature Photon. 9, 796 (2015).
[76] Fam Le Kien and A. Rauschenbeutel, Phys. Rev. A 90, 023805 (2014).
[77] J. Petersen, J. Volz, and A. Rauschenbeutel, Science 346, 67 (2014).
[78] R. Mitsch, C. Sayrin, B. Albrecht, P. Schneeweiss, and A. Rauschenbeutel, Nature Commun. 5, 5713 (2014).
[79] Fam Le Kien and A. Rauschenbeutel, Phys. Rev. A 90, 063816 (2014).
[80] C. Sayrin, C. Junge, R. Mitsch, B. Albrecht, D. O'Shea, P. Schneeweiss, J. Volz, and A. Rauschenbeutel, Phys. Rev. X 5, 041036 (2015).
[81] P. Lodahl, S. Mahmoodian, S. Stobbe, P. Schneeweiss, J. Volz, A. Rauschenbeutel, H. Pichler, and P. Zoller, Nature (London) 541, 473 (2017).
[82] G. Nienhuis, Phys. Rev. A 93, 023840 (2016).
[83] D. J. Candlin, Nuovo Cimento 37, 1390 (1965).
[84] J. L. Trueba and A. F. Ranada, Eur. J. Phys. 17, 141 (1996).
[85] G. N. Afanasiev and Y. P. Stepanovsky, Nuovo Cimento A 109, 271 (1996).
[86] T. G. Philbin, Phys. Rev. A 87, 043843 (2013).
[87] Y. Tang and A. E. Cohen, Phys. Rev. Lett. 104, 163901 (2010).
[88] D. Lipkin, J. Math. Phys. 5, 696 (1964).
[89] K. Y. Bliokh and F. Nori, Phys. Rev. A 83, 021803 (2011).
[90] Y. Tang and A. E. Cohen, Science 332, 333 (2011).
[91] M. M. Coles and D. L. Andrews, Phys. Rev. A 85, 063810 (2012).
[92] A. B. Harris, R. D. Kamien, and T. C. Lubensky, Rev. Mod. Phys. 71, 1745 (1999).
[93] Fam Le Kien and A. Rauschenbeutel, Phys. Rev. A 93, 043828 (2016).
[94] Handbook of Mathematical Functions, edited by M. Abramowitz and I. A. Stegun (Dover, New York, 2013).
dPRO: A Generic Performance Diagnosis and Optimization Toolkit for Expediting Distributed DNN Training

Hanpeng Hu, Chenyu Jiang, Yuchen Zhong, Yanghua Peng, Chuan Wu, Yibo Zhu, Haibin Lin, Chuanxiong Guo

* Equal contribution. Department of Computer Science, University of Hong Kong, Hong Kong, China; ByteDance Inc., Beijing, China. Correspondence to: Hanpeng Hu <[email protected]>.

Abstract

Distributed training using multiple devices (e.g., GPUs) has been widely adopted for learning DNN models over large datasets. However, the performance of large-scale distributed training tends to be far from linear speed-up in practice. Given the complexity of distributed systems, it is challenging to identify the root cause(s) of inefficiency and exercise effective performance optimizations when unexpected low training speed occurs. To date, there exists no software tool which diagnoses performance issues and helps expedite distributed DNN training, while the training can be run using different deep learning frameworks. This paper proposes dPRO, a toolkit that includes: (1) an efficient profiler that collects runtime traces of distributed DNN training across multiple frameworks, especially fine-grained communication traces, and constructs global data flow graphs including detailed communication operations for accurate replay; (2) an optimizer that effectively identifies performance bottlenecks and explores optimization strategies (from computation, communication, and memory aspects) for training acceleration. We implement dPRO on multiple deep learning frameworks (TensorFlow, MXNet) and representative communication schemes (AllReduce and Parameter Server). Extensive experiments show that dPRO predicts the performance of distributed training in various settings with < 5% errors in most cases and finds optimization strategies with up to 3.48× speed-up over the baselines.
INTRODUCTION
Distributed training on large datasets has been widely adopted to learn modern machine learning (ML) models such as deep neural networks (DNNs), to power various AI-driven applications. As compared to single-node training, distributed training using devices on multiple servers substantially alleviates the time cost (Lepikhin et al., 2021), but often fails to scale well, i.e., far from linear speed-up according to the number of devices in use, even with state-of-the-art communication methods (Sergeev & Del Balso, 2018; Jiang et al., 2020).
The causes of low distributed training efficiency are diverse: stragglers, computation bottlenecks (Hu et al., 2020; Ho et al., 2013), prolonged parameter synchronization due to sub-optimal tensor granularity (Peng et al., 2019), large idling time due to poor communication-computation overlap (Narayanan et al., 2019), etc. Effectively diagnosing performance bottlenecks and boosting distributed training throughput have been critical for the productivity of AI systems.

Diagnosing and improving distributed training efficiency are challenging as: (i) causes of unexpected performance are complex and it requires substantial time and effort from domain experts to manually inspect runtime traces in order to figure out the real culprit; (ii) often, traces collected from different ML frameworks (e.g., TensorFlow (Abadi et al., 2016), PyTorch (Paszke et al., 2019)) are insufficient for obtaining an exact global view of a distributed training system, due to less accurate communication profiling; (iii) various optimization strategies exist that can be applied to tackle performance issues, incurring a large strategy space.
Optimization techniques for DNN training acceleration can be divided into three categories: 1) computation-optimization strategies, such as operator fusion (Jia et al., 2019b;a) and mixed-precision training (Micikevicius et al., 2018; nvi, 2020); 2) communication-oriented techniques, i.e., tensor fusion (hor, 2020), tensor partition (Peng et al., 2019; Jiang et al., 2020), gradient compression (Alistarh et al., 2017; Zheng et al., 2019), and transmission scheduling to improve communication-computation overlap (Peng et al., 2019; Cho et al., 2019); 3) memory-optimization methods, e.g., gradient accumulation and re-computation. Even within a single optimization technique, multiple possible configurations can be applied, e.g., various combinations of fusing two or more operators (ops) (Jia et al., 2019b;a). Besides, the effects of different optimizations interact when applied together, and the combined effects have not been clearly explored so far.
A common approach for training performance inspection and improvement (Curtsinger & Berger, 2015;Ousterhout et al., 2015) is to: (i) profile a training job during runtime, collecting traces which record timestamps of specific tasks and resource utilization; (ii) break down collected training time into computation, communication, etc., and seek insights from the summaries; (iii) apply a particular optimization technique accordingly and tune its configurations based on simulated performance or by experiments. However, existing endeavors are limited in distributed DNN training analysis, as follows:
No global data flow graph (DFG) is readily available. Though local DFGs can be constructed from each worker's traces, building a global DFG that includes all computation and communication ops, the detailed inter-worker communication topology, and op dependencies is non-trivial. Traces independently collected from workers must be carefully aligned in execution time, and the cross-worker communication topology, order, and send-recv mapping have to be correctly figured out from the disparate traces.
No automatic and systematic analytical tool to identify performance bottlenecks and evaluate proper optimizations. Inefficiencies happen in different aspects when training different DNN models under various configurations, requiring different optimization strategies. To date, the combined effects of different optimizations and related configurations have not been carefully investigated.

This paper proposes dPRO, an automatic diagnosis and optimization system for distributed DNN training expedition. We make the following contributions with dPRO.
• dPRO's profiler automatically collects global traces and constructs an accurate global DFG for distributed training analysis. Computation/communication op dependencies are carefully obtained, and fine-grained communication ops are exploited to model each tensor transmission. To tackle clock difference among different servers and inaccurate RECV op timestamp, we carefully design a method to align trace timestamps for distributed training, exploiting dependencies between communication ops and time similarity of transmitting tensors of the same size.
• dPRO's optimization framework automatically discerns performance bottlenecks and identifies suitable strategies for uplifting training performance. We theoretically analyze the interactions between optimization techniques, especially op fusion and tensor fusion, and propose a new abstraction of the global DFG, i.e., the Coarsened View, to reduce the strategy search space. The algorithm framework further exploits partial replay, the critical path and the symmetry of the global DFG to accelerate strategy search.
• We build the dPRO toolkit and release it on GitHub 1 . dPRO can be easily enabled by setting an environment variable, and its profiling incurs little overhead. We evaluate dPRO on TensorFlow and MXNet, with PS or AllReduce gradient synchronization, using RDMA or TCP transports and show that dPRO accurately simulates distributed DNN training with a < 5% average error (10× better than Daydream). Compared to representative baselines, dPRO effectively explores good optimization strategies, increases the training throughput by up to 3.48× and achieves the best scalability. Finally, dPRO's search acceleration techniques can reduce the search time by orders of magnitude compared to the baselines.
BACKGROUND AND MOTIVATION
2.1 DNN Training Profilers

1) Hardware profiling tools. NVIDIA provides NVProf (NVIDIA, 2021d) to collect start/end time of each kernel, GPU core and memory usage, as well as many other hardware counters. NVIDIA CUPTI (NVIDIA, 2021a) enables collecting profiling information at runtime. The vendor-provided tools are hardware-dependent and do not capture dependencies among ops in a DNN model, making it challenging to parse kernel-level traces.
2) Built-in profilers of ML frameworks. State-of-the-art ML frameworks, such as TensorFlow (Abadi et al., 2016), PyTorch (Paszke et al., 2019) and MXNet (Chen et al., 2015), provide their own built-in profilers. These profilers collect DNN op-level traces, including time and memory consumption when executing each op. TensorFlow and MXNet profilers also gather coarse-grained communication traces for their distributed APIs, including the start time and end time of communication ops. We cannot obtain the real transmission time, as the profiling does not exclude queuing time in communication libraries.
3) Communication library profilers. Two parameter synchronization (aka communication) schemes are widely adopted for data-parallel training: (1) AllReduce (hor, 2020), where all workers are connected in a tree or ring topology (Patarasuk & Yuan, 2007), synchronously aggregate gradients using collective primitives and then update parameters independently; (2) parameter server (PS) architecture (Jiang et al., 2020), where workers PUSH the local gradients to PSs and then PULL the aggregated gradients from the PSs. Horovod (Foundation, 2019), a popular AllReduce communication library for DNN training, regards an entire NCCL AllReduce task for a tensor as a single op and collects start time and duration for such AllReduce tasks on a single GPU (called the 'coordinator' in Horovod). BytePS (ByteDance, 2020), a PS-based communication library that allows tensor partition, profiles the time spent on the PUSH/PULL operation of each tensor. Their profilers do not capture computation traces. KungFu's profiler (Mai et al., 2020) is able to monitor gradient noise scale (GNS) by inserting a monitoring op to the DFG and uses a collective communication layer to aggregate GNS across workers; however, it does not track execution time of computation/communication ops and the dependencies between them.
2.2 Challenges in Building an Accurate Global Timeline
First, existing studies (e.g., Daydream, FlexFlow (Jia et al., 2019c)) predict training time of distributed DNN training jobs based on coarse-grained communication traces from the above existing profilers, estimating communication time based on bandwidth and tensor size. Such coarse-grained traces regard synchronization of one tensor as a black box without differentiating queueing time and transmission time, and are insufficient for accurately predicting distributed runtime. Fig. 1 shows the per-iteration time estimated using Daydream's simulator and obtained using testbed experiments by training ResNet50 (He et al., 2016) under four different configurations (using Horovod or BytePS for gradient synchronization over RDMA or TCP). Daydream's results remain similar across four cases, while real execution time is vastly different (due to communication protocol, topology and scale).
Next, existing profilers do not support timestamp alignment among workers/PSs. The error of trace timestamps of distributed training jobs is incurred by two factors: 1) there exist millisecond-level or sub-millisecond-level clock drifts among workers/PSs even with clock synchronization tools, such as NTP (Mills, 1991) or more precise tools (e.g., HUYGENS (Geng et al., 2018)), leading to some cross-worker event dependency conflicts; 2) profiling tools can only capture the launch time of a RECV op instead of the exact time of receiving data. It is important to correct the start timestamps of RECV ops for faithful trace replay. Without trace timestamp alignment, the error of communication timestamps will accumulate and increase the error of end-to-end performance estimation.
We seek to design a generic distributed training profiler, collecting both computation and fine-grained communication traces, and aligning traces timestamps from different devices to provide an accurate global timeline of distributed DNN training.
2.3 DNN Training Optimization
Computation acceleration. Op fusion (ten, 2020; Jia et al., 2019b;a) allows a compilation of multiple computation ops into a single fused op.

Communication optimization. Tensor fusion (hor, 2020) fuses multiple small tensors to reduce communication overhead in gradient synchronization. Tensor partitioning (Jayarajan et al., 2019; Peng et al., 2019) slices a large tensor into multiple pieces to better overlap push and pull in a PS architecture. Gradient compression (Alistarh et al., 2017; Bernstein et al., 2018) reduces gradient size with various compression methods. A number of studies (Peng et al., 2019; Bao et al., 2020; Zhang et al., 2017) have proposed algorithms for tensor transmission scheduling to better overlap computation and communication.
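To make the tensor-fusion idea concrete, the toy sketch below greedily packs gradients into fused buckets before launching one communication call per bucket; the threshold value and all names are illustrative and not taken from any specific library.

```python
# Toy illustration of tensor fusion: greedily pack gradient tensors into
# fused buckets no larger than a threshold, one communication call each.
def fuse_tensors(grads, threshold_bytes=25 * 2**20):
    buckets, cur, cur_size = [], [], 0
    for name, nbytes in grads:          # grads: [(tensor_name, size_in_bytes)]
        if cur and cur_size + nbytes > threshold_bytes:
            buckets.append(cur)         # flush the current bucket
            cur, cur_size = [], 0
        cur.append(name)
        cur_size += nbytes
    if cur:
        buckets.append(cur)
    return buckets  # each bucket -> one fused AllReduce / PUSH-PULL

print(fuse_tensors([("fc.w", 30 * 2**20), ("conv1.w", 2 * 2**20),
                    ("conv2.w", 2 * 2**20)], threshold_bytes=4 * 2**20))
```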
Memory optimization. Re-computation reduces memory footprint by deleting intermediate results and re-computing them when necessary. Gradient accumulation accumulates gradients over consecutive training iterations and reduces each iteration's batch size to achieve the same overall batch size.
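The following PyTorch-style sketch illustrates gradient accumulation; the tiny model and data are placeholders, and scaling the loss by the number of accumulation steps is one common convention so that the accumulated gradients average over the larger effective batch.

```python
import torch

model = torch.nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = torch.nn.MSELoss()
data = [(torch.randn(4, 8), torch.randn(4, 1)) for _ in range(8)]

accum_steps = 4                     # emulate batch 16 with micro-batches of 4
optimizer.zero_grad()
for step, (x, y) in enumerate(data):
    loss = criterion(model(x), y) / accum_steps  # scale so gradients average
    loss.backward()                              # gradients accumulate in .grad
    if (step + 1) % accum_steps == 0:
        optimizer.step()                         # one update per effective batch
        optimizer.zero_grad()
```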
Choosing appropriate optimizations for a particular distributed training job is challenging, due mainly to:

• Intertwined effects among optimization techniques. Compilers (e.g., XLA) adopt op fusion to reduce GPU memory access and may fuse all back-propagation ops (i.e., as with the XLA auto-clustering algorithm), which delays communication of tensors produced by these ops. As shown in Fig. 2(a), although the computation time of the fused op becomes shorter, end-to-end training time increases due to less overlapping between computation and communication. Memory optimizations also interact with computation and communication. Fig. 2(b) shows that re-computation of intermediate results sacrifices training speed to reduce memory footprint. It may also delay tensor communication.

• Very large combined strategy space. The global DFG in distributed training of a DNN model is large, making it time-consuming to find the optimal set of strategies.
We design a search-based automatic optimizer framework, that effectively explores trade-offs among multiple optimizations to identify proper strategies for training acceleration.
OVERVIEW
dPRO includes three modules, as shown in Fig. 3.
Profiler is a cross-framework distributed profiler, supporting three representative ML frameworks (TensorFlow, PyTorch and MXNet) and two parameter synchronization schemes (AllReduce and PS). The profiler collects timestamps (namely gTrace) of both computation ops and fine-grained communication ops. It also tracks dependencies among fine-grained communication ops and constructs the global data flow graph (global DFG). Our profiler uses op-level traces for computation ops, achieving similarly high simulation accuracy as Daydream (which uses kernel-level traces).
Replayer simulates distributed training of a DNN model and estimates per-iteration training time of the global DFG.
Optimizer takes as input a given DNN model and resource configurations (i.e., GPUs and their inter-connections), evaluates training performance of various strategy combinations using the replayer, and produces the best set of strategies found. We also provide an interface for developers to register custom optimization strategies.
We detail the design of these key modules in the next sections.
PROFILER AND REPLAYER
Global DFG Construction
The profiler automatically constructs a global DFG of the distributed training job, in which vertices are computation and fine-grained communication ops and edges denote their dependencies. As shown in Fig. 4, the global DFG consists of local data flow graphs and communication topologies.
Local DFGs. Most DL frameworks have the concept of data flow graphs for individual workers (Abadi et al., 2016; Paszke et al., 2019; Chen et al., 2015). Our profiler extracts dependency information from each framework's built-in graph definition and constructs a data flow graph for each worker accordingly. We further insert a pair of In/Out virtual ops for each tensor into each local DFG, indicating where the tensor transmission occurs.
Fine-grained Communication Topology describes how each tensor is transferred between two devices. It contains two types of vertices (communication ops): (1) producer, which sends a tensor (partition) to another device; (2) consumer, which receives a tensor (partition) from another device. The profiler labels every transmission of a tensor (partition) between two devices with a unique transaction ID. A Middleman connects producers to the corresponding consumers that share the same transaction IDs. The communication topology for each tensor also contains a pair of In/Out virtual ops labeled with the tensor name, indicating the start/end of tensor transmission.
We connect local DFGs with the communication topology through In/Out virtual ops, and the global DFG is hence constructed. By decoupling global DFG construction into local DFGs and communication topology, we enable dPRO to support various ML frameworks and communication architectures. In a PS architecture, each PUSH (PULL) is regarded as a pair of SEND and RECV ops at a worker (PS) and a PS (worker), respectively. The unique transaction ID of each transmission can be produced using the sender/receiver IPs, the tensor name and whether it is a PUSH or a PULL. For AllReduce, we use a pair of SEND and RECV ops to represent the transmission of a chunk of the tensor to the next hop (e.g., along the ring in ring AllReduce). The transaction ID generation further uses the chunk ID of the tensor partition and the step ID as in ring AllReduce (Baidu, 2017).
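To make this concrete, the sketch below illustrates one way such transaction IDs could be generated; the field choices and string format are our own assumptions for illustration, not dPRO's actual implementation.

# A hedged sketch of unique transaction-ID generation as described above;
# the exact fields and formatting are assumptions, not dPRO's implementation.
def ps_txn_id(sender_ip, receiver_ip, tensor_name, op_kind):
    # op_kind is "PUSH" or "PULL" in a PS architecture
    return f"{sender_ip}->{receiver_ip}:{tensor_name}:{op_kind}"

def allreduce_txn_id(sender_ip, receiver_ip, tensor_name, chunk_id, step_id):
    # ring AllReduce: one SEND/RECV pair per tensor chunk per ring step
    return f"{sender_ip}->{receiver_ip}:{tensor_name}:chunk{chunk_id}:step{step_id}"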
Trace Time Alignment
To combine traces collected on disparate workers/PSs and produce a correct global DFG, one major obstacle is the time shift among the machines where workers run. Rather than relying on accurate clock synchronization among machines, our profiler corrects the start times of RECV ops and obtains a more accurate communication duration for each tensor.
Let W and P denote the sets of workers and PSs in a training job, respectively. For AllReduce, P is empty.
Let T̃^i_op[st] and T̃^i_op[ed] be an op's measured start and end timestamps on node i ∈ P ∪ W, and let T^i_op[st] and T^i_op[ed] denote the respective adjusted times. Let node 0 be a reference for other nodes to align their time to. We compute a time offset/drift of node i, θ_i, as the difference in measured time between node i and node 0, i.e., T^0_op = T̃^0_op and T^i_op = θ_i + T̃^i_op. We leverage two observations for time alignment.
First, RECV ops on the same node receiving (even partitions of) the same tensor from the same sender in different training iterations, denoted as a RECV op family f_recv, should have similar execution time. Consider a pair of SEND and RECV from node i to node j. Because RECV never happens before SEND, RECV's true start time, T̃^j_recv[st] + θ_j, should be clipped by the start time of SEND, T̃^i_send[st] + θ_i, which changes the communication time from (T̃^j_recv[ed] + θ_j) − (T̃^j_recv[st] + θ_j) to T̃^j_recv[ed] + θ_j − max(T̃^j_recv[st] + θ_j, T̃^i_send[st] + θ_i).
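For concreteness, the clipped communication duration above can be transcribed directly (a toy helper; the names are ours, and all inputs are measured timestamps):

# Clipped RECV duration under offsets theta_i/theta_j; a direct
# transcription of the expression above for a single SEND/RECV pair.
def recv_duration(recv_st, recv_ed, send_st, theta_j, theta_i):
    return (recv_ed + theta_j) - max(recv_st + theta_j, send_st + theta_i)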
With time adjustment, we should minimize the variance of execution time of RECV ops in the same RECV op family:
O_1 = Σ_{f_recv ∈ F_recv} Var_{recv_j ∈ f_recv} [ T̃^j_recv[ed] + θ_j − max( T̃^j_recv[st] + θ_j, T̃^i_send[st] + θ_i ) ]
where F recv is the set of all RECV op families and i denotes the node where the sender of the tensor of f recv resides.
Second, nodes on the same physical machine should have the same time offset because they share the same physical clock. Let M be the set of all physical machines and g_m be the set of nodes on machine m. We should ensure that the time offsets of nodes on the same machine are as close as possible:
O_2 = Σ_{m ∈ M} Var_{i ∈ g_m}( θ_i )
We compute time offsets θ i 's for time alignment among distributed nodes, by solving the following optimization:
min_{θ_i : i ∈ P ∪ W}   a_1 O_1 + a_2 O_2
s.t.   θ_0 = 0,
       θ_i − θ_j ≤ T̃^j_o2 − T̃^i_o1,   ∀(i, j) ∈ (W × P) ∪ (P × W), i ≠ j, (o_1, o_2) ∈ E
Here a_1 ≥ 0 and a_2 ≥ 0 are two coefficients gauging the weights of the two objectives, and E is the set of inter-op dependencies. The constraints ensure inter-op dependencies for time alignment, i.e., the adjusted time of an op o_2 on node j (T̃^j_o2 + θ_j) that depends on op o_1 on node i is not earlier than the adjusted time of o_1 (T̃^i_o1 + θ_i). The optimization problem can be solved in a few seconds using state-of-the-art optimization libraries (CVXPY, 2020).
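As a concrete illustration, the sketch below sets up a simplified version of this program in CVXPY. It encodes only the dependency constraints and the machine-offset objective O_2; the RECV-family variance term O_1 involves a max(·) clipping and is omitted here, and all inputs are hypothetical.

# A minimal sketch of the time-alignment program, assuming a simplified
# convex surrogate: only the machine-offset objective O_2 and the inter-op
# dependency constraints are encoded; the full formulation also includes
# the RECV-family variance term O_1. All inputs below are made up.
import cvxpy as cp
import numpy as np

n_nodes = 4                      # node 0 is the reference (theta_0 = 0)
theta = cp.Variable(n_nodes)     # per-node time offsets

# (i, j, T_o1_on_i, T_o2_on_j): op o2 on node j depends on op o1 on node i.
deps = [(0, 1, 10.0, 12.5), (1, 2, 13.0, 14.2), (2, 3, 15.1, 18.0)]
machines = [[0, 1], [2, 3]]      # nodes sharing a physical machine/clock

constraints = [theta[0] == 0]
for i, j, t_o1, t_o2 in deps:
    # Adjusted time of o2 must not be earlier than the adjusted time of o1.
    constraints.append(theta[i] - theta[j] <= t_o2 - t_o1)

# O_2: offsets of nodes on the same machine should be as close as possible.
o2 = sum(cp.sum_squares(theta[g] - cp.sum(theta[g]) / len(g)) for g in machines)

prob = cp.Problem(cp.Minimize(o2), constraints)
prob.solve()
print("offsets:", np.round(theta.value, 3))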
Replayer
The replayer simulates the execution of the global DFG based on a modified Kahn's algorithm (Kahn, 1962) of topological sorting. Instead of using a global ready queue (as in Daydream or Kahn's algorithm), for a distributed training job we regard each worker/PS and each communication link as one device, and the replayer maintains a queue and a device time (the end time of the last op executed on the device) for each device. An op is enqueued to the queue of the corresponding device once it is ready, i.e., all its predecessor ops have been executed. The replayer iteratively picks the device with the smallest device time, dequeues an op from the head of this queue and updates the corresponding device time with the op's execution time. After all ops in the global DFG are run, we take the largest device time as the iteration time. Although there might be multiple feasible topological sortings, dPRO's replayer can generate the most likely one by averaging op execution time over 10 training iterations and imitating the FIFO queues in ML frameworks' engines.
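A simplified sketch of this simulation loop is given below. It treats each worker/PS and link as a device with its own FIFO ready-queue and additionally tracks per-op ready times so that an op never starts before its predecessors finish; the data structures and names are our own assumptions, not dPRO's code.

# A simplified sketch of the replayer's device-queue simulation; the real
# replayer also averages op times over 10 iterations and imitates the
# framework engines' FIFO semantics. Data structures here are hypothetical.
from collections import defaultdict, deque

def replay(ops, deps):
    """ops: {op: (device, exec_time)}; deps: {op: set of predecessor ops}."""
    indeg = {op: len(deps.get(op, ())) for op in ops}
    succ = defaultdict(list)
    for op, preds in deps.items():
        for p in preds:
            succ[p].append(op)

    queues = defaultdict(deque)        # one FIFO ready-queue per device
    device_time = defaultdict(float)   # end time of the last op per device
    ready = defaultdict(float)         # earliest start allowed by deps
    for op in ops:
        if indeg[op] == 0:
            queues[ops[op][0]].append(op)

    done = 0
    while done < len(ops):
        # Pick the non-empty device whose clock is smallest.
        dev = min((d for d in queues if queues[d]), key=lambda d: device_time[d])
        op = queues[dev].popleft()
        end = max(device_time[dev], ready[op]) + ops[op][1]
        device_time[dev] = end
        done += 1
        for s in succ[op]:             # release successors that become ready
            ready[s] = max(ready[s], end)
            indeg[s] -= 1
            if indeg[s] == 0:
                queues[ops[s][0]].append(s)
    return max(device_time.values())   # estimated iteration time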
The replayer also produces an execution graph by adding additional edges into the global DFG, indicating the execution ordering between ops running on the same device, and computes the critical path on the execution graph for bottleneck identification by the optimizer.
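For intuition, critical-path extraction on such an execution graph amounts to a longest-path computation over a DAG with node weights. A minimal sketch follows, assuming a networkx DiGraph whose nodes carry an 'exec_time' attribute (our own modeling choice).

# A minimal sketch of critical-path extraction on the execution graph,
# assuming a networkx DiGraph with an 'exec_time' node attribute.
import networkx as nx

def critical_path(G):
    dist, pred = {}, {}
    for n in nx.topological_sort(G):
        d_in, p_in = 0.0, None
        for p in G.predecessors(n):
            if dist[p] > d_in:
                d_in, p_in = dist[p], p
        dist[n] = d_in + G.nodes[n]["exec_time"]
        pred[n] = p_in
    n = max(dist, key=dist.get)       # the op that finishes last
    path = []
    while n is not None:              # walk back along predecessors
        path.append(n)
        n = pred[n]
    return path[::-1]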
OPTIMIZER
Given a global DFG G of a distributed DNN training job and a set of optimization strategies S, the optimizer identifies the bottlenecks in the global graph through training replay and produces a subset of optimization strategies, S* ⊆ S, minimizing the per-iteration training time (referred to as the iteration time):
min_{S′ ⊆ S} ITERATIONTIME( f(G, S′) )

where G′ = f(G, S′) is the modified global DFG after applying the strategies in S′ to the original global DFG G.

Figure 5. Illustration of Critical Path.
Theory Foundation
The main idea of our optimizer is to iteratively check and optimize the critical path of the global execution graph.
The critical path C contains a sequence of computation and communication ops: C = [p_0, p_1, ..., p_i, q_i, q_{i+1}, ..., q_{|C|−1}], where p_0, p_1, ..., p_i are computation ops and q_i, q_{i+1}, ..., q_{|C|−1} are communication ops. Here, we group the fine-grained communication ops in the global DFG for the transmission of tensor n (e.g., SEND and RECV) into one communication op q_n. Fig. 5 depicts an example of the critical path and the correspondence between each pair p_n/q_n, n = 0, 1, ..., |C|−1. Since gradient tensors are dependent on computation ops, the critical path always starts from a sequence of computation ops and ends with a sequence of communication ops.
Note that each computation op p_n, n = 1, ..., i, on the critical path may correspond to a communication op q_n (each backward computation op has a corresponding tensor communication op, while we treat q_n for a forward computation op p_n as null); the corresponding communication ops do not lie on the critical path before p_i, because computation ops are the bottleneck in this phase. On the other hand, each communication op q_n, n = i, ..., |C|−1, on the critical path corresponds to a computation op p_n which may not lie on the critical path.
The optimizer inspects the critical path from p_0 to q_{|C|−1}. For each op p_n, n = 1, ..., i, or q_n, n = i, ..., |C|−1, a decision d_n is made on whether op fusion and/or tensor fusion should be applied: 1) d_n = opfs, fusing the two computation ops p_{n−1} and p_n; 2) d_n = tsfs, fusing the two tensors q_{n−1} and q_n (those corresponding to computation ops p_{n−1} and p_n); 3) d_n = opfs_tsfs, fusing p_{n−1} and p_n and fusing tensors q_{n−1} and q_n; 4) d_n = null, no fusion. We have d_0 = null. When tensor partition is enabled, the optimizer also decides an optimal partition number k_n for the tensor of op p_n or q_n on the critical path. Unlike op and tensor fusion, tensor partition does not hurt computation-communication overlapping, but only affects tensor synchronization time, which inspires us to adopt the optimal partition size for each possible choice of d_n before comparing their performance.
Let T_n denote the duration from the start of the global DFG execution till the completion of computation op p_n and the corresponding communication op q_n, i.e., T_n = max(p^e_n, q^e_n) (see Table 1 for notation definitions). The optimizer seeks the optimal decisions D = [d_0, d_1, ..., d_{|C|−1}] and K = [k_0, k_1, ..., k_{|C|−1}] that minimize the duration of the critical path: min_{D,K} T_{|C|−1}.

We analyze conditions for applying the optimization techniques, deriving insights to tailor a strategy search algorithm. Let opfs_time(p_{n−1}, p_n) be the execution time of the fused op after fusing p_{n−1} and p_n.
Theorem 1 (Op Fusion) If q^d_{n−1} ≤ p^d_n + p^d_{n−1} − opfs_time(p_{n−1}, p_n), then T_n achieved with this op fusion is no larger than without fusion, i.e., T_n(d_n = opfs) ≤ T_n(d_n = null) (fusing p_{n−1} and p_n is better than not); otherwise, T_n(d_n = opfs) ≥ T_n(d_n = null), i.e., fusing p_{n−1} and p_n leads to worse performance.
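As a toy numeric illustration of this condition (all numbers below are made up): fusion pays off only when the computation time it saves exceeds the communication that could have been hidden behind the unfused ops.

# Toy check of the Theorem 1 condition; all numbers are made up.
q_d_prev = 4.0               # q^d_{n-1}: communication hidden behind p_n
p_d, p_d_prev = 3.0, 2.0     # p^d_n and p^d_{n-1}
fused = 4.2                  # opfs_time(p_{n-1}, p_n): fusion saves 0.8
saving = p_d + p_d_prev - fused
print("fuse" if q_d_prev <= saving else "do not fuse")  # -> "do not fuse"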
Let t_sync(s, k) be the time to synchronize a tensor of size s, divided into k partitions, i.e., the execution time of the complete synchronization operation on this tensor (using either PS or AllReduce). Given a tensor of size s, we use k*[s] to denote the optimal partition number that minimizes t_sync.
Theorem 2 (Tensor Fusion/Partition) If q^e_{n−1} > p^e_n + t_sync(q^s_{n−1} + q^s_n, k*[q^s_{n−1} + q^s_n]) − t_sync(q^s_n, k*[q^s_n]), then T_n achieved by fusing tensors q_{n−1} and q_n is smaller than without fusion, i.e., T_n(d_n = tsfs, k_n = k*[q^s_{n−1} + q^s_n]) < T_n(d_n = null, k_n = k*[q^s_n]) (fusing q_{n−1} and q_n is better than not); otherwise, q_{n−1} and q_n should not be fused.
When considering op fusion and tensor fusion/partition together, if fusing two computation (communication) ops is better than not, their corresponding communication (computation) ops, if any, should also be fused, without sacrificing computation-communication overlapping.
Theorem 3 (Op Fusion and Tensor Fusion/Partition) T_n(d_n = opfs_tsfs, k_n = k*[q^s_{n−1} + q^s_n]) ≤ T_n(d_n = tsfs, k_n = k*[q^s_{n−1} + q^s_n]), and T_n(d_n = opfs_tsfs, k_n = k*[q^s_{n−1} + q^s_n]) ≤ T_n(d_n = opfs, k_n = k*[q^s_n]).
See the appendix (hu2, 2022) for proofs of Theorems 1, 2 and 3.

Figure 6. Illustration of the Coarsened View, where p_1 has no learnable parameter but p_3 has two.
Diagnosis and Optimization Algorithm
An overview of dPRO's optimizer module is given in Fig. 3. The optimizer maintains a Graph Pass Registry including various optimization techniques. Each Graph Pass in the Registry corresponds to an optimization technique, such as op fusion, tensor fusion, tensor partition, etc.
The optimizer analyzes the bottlenecks in the global DFG and optimizes them in an iterative manner. The optimizer algorithm is given in Alg. 1. At the beginning, the optimizer evaluates the iteration time and memory usage of the original global DFG G by replaying it with the replayer. If the memory usage exceeds the memory budget (specified by the user), the memory-optimization passes are invoked to reduce the memory footprint (see the appendix (hu2, 2022) for details).
Then the optimizer proceeds with throughput optimization, which minimizes the makespan of the critical path in the global execution graph. Given a critical path C = [p_0, p_1, ..., p_i, q_i, q_{i+1}, ..., q_{|C|−1}], the optimizer first examines the computation ops p_n, n = 1, ..., i, in order. Because the performance of this segment of the critical path is computation-bound, the optimizer first evaluates whether op fusion should be applied on p_{n−1} and p_n according to Theorem 1. If so, it invokes the op fusion pass to fuse p_{n−1} and p_n, as well as the tensor fusion pass to fuse the corresponding two tensors q_{n−1} and q_n (Theorem 3). An optimal partition number k* is then computed and applied (if k* > 1) on the fused or non-fused tensor q_n.
Next, the optimizer inspects the communication ops q_n, n = i, ..., |C|−1, on the critical path. In this segment, the performance is communication-bound. The optimizer first computes the optimal partition number k* on the fused and non-fused tensor q_n and checks whether tensor fusion should be applied to q_{n−1} and q_n according to Theorem 2. If so, the optimizer invokes the tensor fusion pass to fuse q_{n−1} and q_n, as well as the corresponding computation ops p_{n−1} and p_n (Theorem 3). Then the optimal partition number k* on the fused or non-fused tensor is applied accordingly.
After applying the optimizations, the global DFG G is updated. The optimizer uses the replayer to update the critical path C and then repeats the search on the new critical path.
The fused op execution time, opfs_time(p_{n−1}, p_n), can be obtained by profiling the fused op in an offline manner (as we will do in the experiments) or from a cost model (Kaufman et al., 2019). The time to synchronize a tensor, t_sync(s, k), is estimated with a partial replay of the subgraph including all relevant communication ops (Sec. 5.3). The optimal partition number of a tensor is obtained through grid search.
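A minimal sketch of this grid search follows; t_sync is assumed to be a callable backed by partial replay, and the search bound max_k is our own choice for illustration.

# A minimal sketch of OPTPARTNUM's grid search; t_sync(s, k) is assumed to
# be backed by partial replay of the communication subgraph (Sec. 5.3),
# and max_k is an illustrative bound, not dPRO's actual setting.
def opt_part_num(tensor_size, t_sync, max_k=32):
    return min(range(1, max_k + 1), key=lambda k: t_sync(tensor_size, k))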
Search Speed-up
It is time-consuming to exhaustively explore the large strategy space, e.g., it takes more than 24 hours to search for the optimal optimization strategies for BERT Base. We propose several techniques to expedite the search process.
Coarsened View. Inspired by Theorem 3, we coarsen the global DFG during the search process, namely constructing the Coarsened View: we put computation ops that do not produce tensors but are close to one tensor-producing computation op into one group (together with the latter), and communication ops connected to the same computation op into one group. Ops or tensors in the same group are fused, and we search for optimization strategies based on such a coarsened view of the global DFG (line 2 of Alg. 1). Fig. 6 gives an illustration. p_1 produces no tensor (q_1 = null) and is connected to p_2, which generates a tensor q_2; p_1 and p_2 are fused into one group. The rationale is that we can view tensor q_2 as a fusion of q_1 = null and q_2; then, according to Theorem 3, fusing p_1 and p_2 leads to better performance. q_3 and q_4 are both tensors produced by p_3 (e.g., the BatchNorm layer has two learnable parameters (Ioffe & Szegedy, 2015)), and they are put into one group. This is because we can regard p_3 as a fusion of p_3 and p_4 = null; then fusing q_3 and q_4 is better than not, based on Theorem 3. We construct the Coarsened View using a backtracking algorithm detailed in the appendix (hu2, 2022).
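The sketch below illustrates the computation-op half of this grouping on a topologically ordered op list; the data structures are our own assumptions, and the communication-op half (grouping tensors produced by the same op) would proceed analogously.

# A rough sketch of Coarsened-View grouping for computation ops; 'produces'
# maps each op to the tensors it emits (hypothetical structures). Ops and
# tensors in the same group are fused before the strategy search starts.
def coarsen(comp_ops, produces):
    groups, current = [], []
    for op in comp_ops:              # comp_ops is topologically ordered
        current.append(op)
        if produces.get(op):         # a tensor-producing op closes the group
            groups.append(current)
            current = []
    if current:                      # trailing ops that produce no tensor
        if groups:
            groups[-1].extend(current)
        else:
            groups.append(current)
    return groups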
Partial Replay. To avoid frequently replaying the entire global DFG during strategy search (for estimating t_sync(s, k)), the replayer supports partial simulation. Given the current global DFG G, the replayer identifies all communication ops S_p related to the tensor and generates a subgraph G′, which contains the ops in S_p and the edges between those ops. Execution of the subgraph is simulated to produce the tensor synchronization time.
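For instance, assuming the global DFG is held as a networkx DiGraph whose communication-op nodes carry a 'tensor' attribute (our modeling choice, not dPRO's representation), the subgraph extraction could look as follows; simulating only this subgraph with the replayer then yields t_sync for that tensor.

# A minimal sketch of partial replay's subgraph extraction; the graph
# representation and the 'tensor' node attribute are our own assumptions.
import networkx as nx

def comm_subgraph(G, tensor_name):
    nodes = [n for n, d in G.nodes(data=True) if d.get("tensor") == tensor_name]
    return G.subgraph(nodes).copy()   # ops in S_p plus the edges among them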
Exploiting Symmetry. We further leverage the symmetry in the global DFG of state-of-the-art DNNs to accelerate strategy search. For example, BERT (Devlin et al., 2019) includes multiple transformer blocks; the strategies applied inside one block can be used in other blocks as well. For data parallel training with homogeneous workers, the optimizations applied on one critical path can actually be applied to multiple workers.
IMPLEMENTATION
Profiler. To profile computation ops, we use the native profiler of each ML framework with minor modifications.
• TensorFlow. We implemented dPRO with TensorFlow 2.4 in the graph execution mode. We modify tf.profiler to collect computation traces with absolute time and extract dependencies from tf.RunMetadata.partition_graphs.
• MXNet. We collect computation traces using mxnet.profiler, with some modifications to ensure a unique trace name for each op. We extract op dependencies through the mxnet.Symbol.debug_str() API.
• Communication. We use Horovod (Sergeev & Del Balso, 2018) as the AllReduce communication library, which uses NCCL (NVIDIA, 2021c) for collective communication across GPUs. We dive into NCCL (adding 318 LoCs) to collect timestamps of SEND/RECV of the tensor chunks. We adopt BytePS (Jiang et al., 2020) for PS-based communication and add around 400 LoCs to its communication library, ps-lite (Li et al., 2014), for recording timestamps of PUSH and PULL of each tensor.
Replayer. We implement it using Python with 3653 LoCs.
Optimizer. We implement the optimizer and all optimization passes in Python with 5745 LoCs. We implement op fusion on TensorFlow using XLA (TensorFlow, 2021), and modify it to allow fine-grained manual control over which ops are fused, in order to apply our identified strategies. We modify Horovod and BytePS to enable customized tensor fusion patterns and tensor partition sizes.
APIs and Commands. dPRO provides a simple programming interface for trace collection. The following shows the APIs for trace collection on TensorFlow, with only 2 additional LoCs: a recorder is defined, with which a wrapper decides when to start/finish profiling and output traces.
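Since the original snippet is not reproduced here, the sketch below illustrates what such a 2-LoC integration could look like; the module, class and argument names are assumptions for illustration, not dPRO's documented API.

# A hedged sketch of the 2-LoC trace-collection API described above; the
# names below are illustrative assumptions, not dPRO's documented API.
import dpro

recorder = dpro.Recorder(trace_dir="./traces")   # 1) define a recorder
train_step = recorder.wrap(train_step)           # 2) wrapper starts/stops profiling
                                                 #    (train_step is the user's training function)

After profiling, a user can call the following commands to invoke the profiler, replayer and optimizer, respectively.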
$ dpro profile <python program> -o <trace path>
$ dpro replay <trace path>
$ dpro optimize <trace path>

EVALUATION

Experiment Setup

Testbed. We evaluate dPRO in a production ML cluster and use up to 128 Tesla V100 32GB GPUs (on 16 servers). The GPU servers are equipped with NVLinks and inter-connected with 100Gbps bandwidth using Mellanox CX-5 single-port NICs. We use CUDA v10.2 (NVIDIA, 2020) and cuDNN v7.6.5 (NVIDIA, 2021b) in our experiments. By default, we use 16 GPUs (on 2 servers).
Benchmarks. We train 4 DNN models: BERT Base (Devlin et al., 2019) for natural language processing, and ResNet50 (He et al., 2016), VGG16 (Simonyan & Zisserman, 2015) and InceptionV3 (Szegedy et al., 2016) for image classification. Each model is trained using various combinations of ML framework, communication library and inter-server connectivity: Horovod TensorFlow (HVD+TF), BytePS TensorFlow (BPS+TF), Horovod MXNet (HVD+MX), BytePS MXNet (BPS+MX), each with TCP or RDMA inter-connectivity.
Baselines. We compare dPRO with the state of the art in various aspects: 1) Daydream for replay accuracy: Daydream estimates the iteration time of distributed training using a local DFG and inserts one coarse-grained communication op for each tensor, whose communication time is calculated as tensor size divided by bandwidth. 2) XLA's default op fusion, which fuses as many computation ops as possible. 3) Horovod's default tensor fusion, which fuses tensors in intervals of 5ms with the fused tensor size upper-bounded by 64MB, as well as 4) Horovod Autotune, which automatically tunes the interval and upper bound. 5) BytePS's default setting, in which tensors are partitioned with a partition size of 4MB.
By default, the batch size is 32 per GPU. Iteration time is averaged over 10 training steps after the warm-up phase.
More experimental results can be found in the appendix (hu2, 2022). We also note that the overhead introduced by dPRO (5.86% in our experiments) is almost the same as that of ML frameworks' built-in profilers, indicating that our detailed communication profiling introduces little extra overhead.
Replay Accuracy

Fig. 7 compares the iteration time predicted by dPRO and by Daydream's simulator against the real time measured in actual training. dPRO's replay error is less than 5% in most cases, while Daydream's error rate is up to 70.2%. Table 2 shows detailed estimated durations of forward and backward propagation. We observe that both dPRO and Daydream predict forward and backward execution time accurately, so the error mainly arises from the estimation of communication time. Daydream's simple approach to communication time prediction does not capture the effects of message queuing, network and communication protocols, resulting in larger errors in training simulation.

Trace Time Alignment
To evaluate the effect of our trace time alignment, we collect traces by running distributed training jobs on different clusters, where NTP is enabled. Fig. 8 shows the errors of iteration time estimated by our replayer (as compared to the ground truth) with and without time alignment. Although workers have been synchronized with NTP or have no clock drift (as in the 8-GPU jobs), inaccuracy of communication traces (e.g., due to RECV ops) still leads to large replay errors (up to 36.7%), while errors are reduced to < 5% with time alignment. Since larger clusters are more likely to experience large clock drifts and queuing delays, the simulation error gap between dPRO with and without trace time alignment grows significantly as the cluster size grows.
Optimizer Performance
Computation op fusion. We first evaluate the performance of op fusion strategies found by the optimizer, temporarily excluding other optimization techniques from the search space. Fig. 9 shows the actual training throughput when applying the respective strategies in real distributed training. dPRO's op fusion (dPRO_OPFS) yields up to 51.843% speed-up as compared to XLA's default strategies. Although XLA is widely used to accelerate training on a single machine, the results show that in distributed training, simply fusing many ops as with XLA may even slow down the training progress, due mainly to reduced overlap between computation and communication.

Figure 9. Performance of op fusion and tensor fusion: training different DNNs on TensorFlow.
Tensor Fusion and Tensor Partition. We next evaluate the effect of tensor fusion strategies found by dPRO's optimizer (dPRO_TSFS). Note that both tensor fusion and tensor partition (BytePS's default partition strategy) are enabled when we use BytePS as the communication library. As Fig. 9 shows, our strategies achieve up to 19.1% speed-up compared with default Horovod and BytePS.
Interaction between op fusion and tensor fusion. We further evaluate combined op fusion and tensor fusion/partition strategies identified by our optimizer (dPRO_OPFS_TSFS). In Fig. 9, we observe that our combined strategies perform the best in most cases: up to 62.95% acceleration as compared to XLA and up to 26.44% speed-up compared to default Horovod and BytePS.
Integrating memory optimization. In this experiment, we evaluate the accuracy of peak memory estimation with our replayer. Table 3 shows the actual memory consumption and estimated peak memory with our replayer, when training different DNN models on TensorFlow with a batch size of 32 per GPU. The relative errors across different models are at most 5.25%.
We further investigate the effects of memory optimization (gradient accumulation and re-computation to reduce memory footprint) performed at the start of our search algorithm. Specifically, the algorithm evaluates the iteration time and memory incurred by the two memory optimizations and selects the one achieving the shorter time under the memory budget. Table 4 presents the results when training BERT Base using a batch size of 64 per GPU on 16GB V100 GPUs (an OOM error occurs without memory optimization). Re-computation performs better than gradient accumulation in both time and memory consumption in this setting. Moreover, prediction results with our replayer match well the measured data collected in actual training.
Search speedup. We evaluate the effects of our techniques used to accelerate the strategy search process. Table 5 gives the algorithm running time, where the search ends when the per-iteration training time evaluated with the identified strategies changes little over 5 search iterations. The strawman case is Alg. 1 without any search speed-up methods, which takes tens of hours. The last three columns show the search time when the respective speed-up method is in place (in addition to the previous one(s)). With more speed-up methods applied, the strategy search time decreases significantly and we can finish the search within a short time. Note that all experimental results presented earlier are based on strategies found with the speeded-up search.
Large-Scale Distributed Training
We further evaluate dPRO on replaying and optimizing distributed DNN training using large-scale clusters. In Fig. 10, we observe the following: (1) as the cluster size grows, DNN training becomes slower since the communication overhead is larger for synchronizing among more GPUs.
(2) Our replayer can still simulate such distributed training accurately, with an error lower than 5% in most cases (up to 5.6%); Daydream's prediction error increases substantially (up to 73.8%) as the cluster size grows.
(3) dPRO's combined strategies reach the best scalability and yield up to 3.48× speed-up compared to XLA's default strategies.
CONCLUSION
dPRO is a diagnosis and optimization toolkit for distributed DNN training. It can simulate distributed training with low error under different configurations and efficiently explore the collaborative effects among multiple DNN optimizations. The key design points include: 1) building an accurate global DFG by collecting fine-grained traces and aligning timestamps across different devices; 2) designing an optimization framework to search for combined optimization strategies efficiently.
dPRO, along with the time alignment method, can also be applied to model- or pipeline-parallel training, where each local DFG only contains part of the ops in the DNN model and the communication topology models the transmission of activations between workers. We also provide an interface for developers to register custom optimization strategies (e.g., mixed precision), and the optimizer can automatically explore all registered optimizations (see the appendix (hu2, 2022) for more details).
ACKNOWLEDGEMENT

This work was supported by Hong Kong Innovation and Technology Commission's Innovation and Technology Fund (Partnership Research Programme with ByteDance Limited, Award No. PRP/082/20FX), and grants from Hong Kong RGC under the contracts HKU 17204619, 17208920 and 17207621.
Figure 1. Training ResNet50 in a 100Gbps network, batch size 32 per GPU (see Sec. 7.1 for testbed details).

Figure 2. (a) Effect of op fusion: Comp. - computation op, Comm. - gradient synchronization; A/B is the gradient produced by op a/b; c is the op fused from a and b. (b) Effect of re-computation: FW - forward propagation; BW - backward propagation; re-computation inserts a FW.b before BW.b since the output of the first FW.b is not cached.
Figure 3. dPRO architecture.

Figure 4. An illustration of the global DFG.
Figure 7. Replay Accuracy: dPRO replayer vs. Daydream simulator.
Figure 8. Effects of trace timeline alignment. Workers in the 8-GPU jobs are located on the same physical machine.

Figure 10. Performance when training DNN models using up to 128 V100 GPUs on TensorFlow + Horovod.
Table 1. Notation

Symbol           Description
p^d_n (q^d_n)    execution time of computation op p_n (communication op q_n)
p^e_n (q^e_n)    completion time of p_n (q_n)
q^s_n            size of the tensor synchronized by q_n
Algorithm 1 Diagnosis and Optimization Algorithm
1:  If OOM, invoke the Memory Optimization Pass
2:  Construct the Coarsened View and fuse ops/tensors in the same group.
3:  The replayer computes an initial critical path C = [p_0, p_1, ..., p_i, q_i, q_{i+1}, ..., q_{|C|−1}]
4:  while search time budget not exhausted and speed-up not converged do
5:    for n = 0 → i do
6:      if q^d_{n−1} < p^d_n + p^d_{n−1} − opfs_time(p_{n−1}, p_n) then
7:        OPFUSION(p_{n−1}, p_n)   [Theorem 1]
8:        TENSORFUSION(q_{n−1}, q_n)   [Theorem 3]
9:        k* ← OPTPARTNUM(q_{n−1} + q_n)
10:       Partition fused tensor q^s_{n−1} + q^s_n evenly by k*
11:     else
12:       // tensor partition only
13:       k* ← OPTPARTNUM(q_n)
14:       Partition tensor q^s_n evenly by k*
15:     end if
16:   end for
17:   for n = i → |C| − 1 do
18:     if q^e_{n−1} > p^e_n + t_sync(q^s_{n−1} + q^s_n, k*[q^s_{n−1} + q^s_n]) − t_sync(q^s_n, k*[q^s_n]) then
19:       TENSORFUSION(q_{n−1}, q_n)   [Theorem 2]
20:       OPFUSION(p_{n−1}, p_n)   [Theorem 3]
21:       Partition fused tensor q_{n−1} + q_n evenly by k* = OPTPARTNUM(q_{n−1} + q_n)
22:     else
23:       Partition tensor q^s_n evenly by k* = OPTPARTNUM(q_n)
24:     end if
25:   end for
26:   Execute the updated global DFG using the replayer and obtain a new critical path C = [p_0, p_1, ..., p_i, q_i, q_{i+1}, ..., q_{|C|−1}]
27: end while
Table 2. Deep Dive of Simulation Error Comparison for TensorFlow Horovod RDMA

Model       Experiment      Iteration (ms)   FW (ms)   BW (ms)
ResNet50    Ground truth    138.64           34.78     71.34
            dPRO            142.31           34.20     70.32
            Daydream        109.19           34.69     70.04
BERT Base   Ground truth    459.62           107.49    185.66
            dPRO            453.83           106.58    187.05
            Daydream        345.68           106.80    185.79
Table 3. Memory estimation accuracy

Model        Real (GB)   Est. (GB)   Relative Error (%)
BERT Base    9.96        10.25       2.83
ResNet50     5.41        5.71        5.25
InceptionV3  3.91        4.05        3.46
VGG16        5.91        5.83        1.37
Table 4. Iteration time and memory usage: BERT Base (TensorFlow Horovod RDMA) on 16 GPUs with batch size 64 per GPU.

Optimization Method      Time (ms)          Memory (GB)
                         Real     Est.      Real    Est.
w/o optimization         622.12   613.87    16.42   16.97
Re-computation           696.13   672.60    7.43    7.20
Gradient Accumulation    714.22   708.57    9.96    10.25

Table 5. Time to search optimal op fusion and tensor fusion/partition strategies on TensorFlow BytePS (in hours).

Model        Strawman   +Coarsened View   +Partial Replay   +Symmetry
ResNet50     14.60      5.35              0.91              0.29
VGG16        11.97      3.74              0.71              0.04
InceptionV3  16.75      6.13              1.04              0.47
BERT Base    >24        22.01             3.25              0.49
https://github.com/joapolarbear/dpro
Horovod Tensor Fusion, 2020. https://

Training With Mixed Precision, 2020. https://docs.nvidia.com/deeplearning/performance/mixed-precision-training/index.html.

TensorFlow Operation Fusion, 2020. https://www.tensorflow.org/lite/convert/operation_fusion.

dPRO FigShare Software, 2022. https://doi.org/10.6084/m9.figshare.19165622.v3.

Abadi, M., Barham, P., Chen, J., Chen, Z., Davis, A., Dean, J., Devin, M., Ghemawat, S., Irving, G., Isard, M., et al. TensorFlow: A System for Large-scale Machine Learning. In Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation, 2016.

Alistarh, D., Grubic, D., Li, J., Tomioka, R., and Vojnovic, M. QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding. In Proceedings of Advances in Neural Information Processing Systems, 2017.

Baidu. Ring AllReduce, 2017. https://github.com/baidu-research/baidu-allreduce.

Bao, Y., Peng, Y., Chen, Y., and Wu, C. Preemptive AllReduce Scheduling for Expediting Distributed DNN Training. In Proceedings of IEEE International Conference on Computer Communications, 2020.

Bernstein, J., Wang, Y.-X., Azizzadenesheli, K., and Anandkumar, A. SignSGD: Compressed Optimisation for Non-Convex Problems. In Proceedings of International Conference on Machine Learning, 2018.

ByteDance. BytePS Timeline, 2020. https:

Chen, T., Li, M., Li, Y., Lin, M., Wang, N., Wang, M., Xiao, T., Xu, B., Zhang, C., and Zhang, Z. MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems. arXiv preprint arXiv:1512.01274, 2015.

Chen, T., Xu, B., Zhang, C., and Guestrin, C. Training Deep Nets with Sublinear Memory Cost. arXiv preprint arXiv:1604.06174, 2016.

Chen, Y., Peng, Y., Bao, Y., Wu, C., Zhu, Y., and Guo, C. Elastic Parameter Server Load Distribution in Deep Learning Clusters. In Proceedings of the 11th ACM Symposium on Cloud Computing, 2020.

Cho, M., Finkler, U., Serrano, M., Kung, D., and Hunter, H. BlueConnect: Decomposing All-reduce for Deep Learning on Heterogeneous Network Hierarchy. IBM Journal of Research and Development, 2019.

Curtsinger, C. and Berger, E. D. Coz: Finding Code that Counts with Causal Profiling. In Proceedings of the 25th Symposium on Operating Systems Principles, 2015.

CVXPY. CVXPY, 2020. https://www.cvxpy.org/tutorial/advanced/index.html.

Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019.

Foundation, L. A. D. Horovod: Analyze Performance, 2019. https://horovod.readthedocs.io/en/stable/timeline_include.html.

Geng, Y., Liu, S., Yin, Z., Naik, A., Prabhakar, B., Rosenblum, M., and Vahdat, A. Exploiting a Natural Network Effect for Scalable, Fine-grained Clock Synchronization. In Proceedings of the 15th USENIX Symposium on Networked Systems Design and Implementation, 2018.

He, K., Zhang, X., Ren, S., and Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.

Ho, Q., Cipar, J., Cui, H., Lee, S., Kim, J. K., Gibbons, P. B., Gibson, G. A., Ganger, G., and Xing, E. P. More Effective Distributed ML via a Stale Synchronous Parallel Parameter Server. In Proceedings of Advances in Neural Information Processing Systems, 2013.

Hu, H., Wang, D., and Wu, C. Distributed Machine Learning through Heterogeneous Edge Systems. In Proceedings of the AAAI Conference on Artificial Intelligence, 2020.

Huang, Y., Cheng, Y., Bapna, A., Firat, O., Chen, D., Chen, M., Lee, H., Ngiam, J., Le, Q. V., Wu, Y., et al. GPipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism. In Proceedings of Advances in Neural Information Processing Systems, 2019.

Ioffe, S. and Szegedy, C. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. In Proceedings of International Conference on Machine Learning, 2015.

Jayarajan, A., Wei, J., Gibson, G., Fedorova, A., and Pekhimenko, G. Priority-based Parameter Propagation for Distributed DNN Training. In Proceedings of Machine Learning and Systems, 2019.

Jia, Z., Padon, O., Thomas, J., Warszawski, T., Zaharia, M., and Aiken, A. TASO: Optimizing Deep Learning Computation with Automatic Generation of Graph Substitutions. In Proceedings of the 27th ACM Symposium on Operating Systems Principles, 2019a.

Jia, Z., Thomas, J., Warszawski, T., Gao, M., Zaharia, M., and Aiken, A. Optimizing DNN Computation with Relaxed Graph Substitutions. In Proceedings of Machine Learning and Systems, 2019b.

Jia, Z., Zaharia, M., and Aiken, A. Beyond Data and Model Parallelism for Deep Neural Networks. In Proceedings of Machine Learning and Systems, 2019c.

Jiang, Y., Zhu, Y., Lan, C., Yi, B., Cui, Y., and Guo, C. A Unified Architecture for Accelerating Distributed DNN Training in Heterogeneous GPU/CPU Clusters. In Proceedings of the 14th USENIX Symposium on Operating Systems Design and Implementation, 2020.

Kahn, A. B. Topological Sorting of Large Networks. Communications of the ACM, 1962.

Kaufman, S., Phothilimtha, P., and Burrows, M. Learned TPU Cost Model for XLA Tensor Programs. In Proceedings of the Workshop on ML for Systems at NeurIPS 2019, 2019.

Lepikhin, D., Lee, H., Xu, Y., Chen, D., Firat, O., Huang, Y., Krikun, M., Shazeer, N., and Chen, Z. GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding. In Proceedings of International Conference on Learning Representations, 2021.

Li, M., Andersen, D. G., Park, J. W., Smola, A. J., Ahmed, A., Josifovski, V., Long, J., Shekita, E. J., and Su, B.-Y. Scaling Distributed Machine Learning with the Parameter Server. In Proceedings of the 11th USENIX Symposium on Operating Systems Design and Implementation, 2014.

Mai, L., Li, G., Wagenländer, M., Fertakis, K., Brabete, A.-O., and Pietzuch, P. KungFu: Making Training in Distributed Machine Learning Adaptive. In Proceedings of the 14th USENIX Symposium on Operating Systems Design and Implementation, 2020.

Micikevicius, P., Narang, S., Alben, J., Diamos, G., Elsen, E., Garcia, D., Ginsburg, B., Houston, M., Kuchaiev, O., Venkatesh, G., and Wu, H. Mixed Precision Training. In Proceedings of International Conference on Learning Representations, 2018.

Mills, D. L. Internet Time Synchronization: the Network Time Protocol. IEEE Transactions on Communications, 1991.

Narayanan, D., Harlap, A., Phanishayee, A., Seshadri, V., Devanur, N. R., Ganger, G. R., Gibbons, P. B., and Zaharia, M. PipeDream: Generalized Pipeline Parallelism for DNN Training. In Proceedings of the 27th ACM Symposium on Operating Systems Principles, 2019.

NVIDIA. CUDA Toolkit Release Notes, 2020. https://docs.nvidia.com/cuda/archive/

NVIDIA. CUPTI, 2021a. https://docs.nvidia.com/cuda/cupti/.

NVIDIA. NCCL, 2021c. https://developer.nvidia.com/nccl.

NVIDIA. NVProf, 2021d. https://docs.nvidia.com/cuda/profiler-users-guide/index.html.

Ousterhout, K., Rasti, R., Ratnasamy, S., Shenker, S., and Chun, B.-G. Making Sense of Performance in Data Analytics Frameworks. In Proceedings of the 12th USENIX Symposium on Networked Systems Design and Implementation, 2015.

Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al. PyTorch: An Imperative Style, High-performance Deep Learning Library. In Proceedings of Advances in Neural Information Processing Systems, 2019.

Patarasuk, P. and Yuan, X. Bandwidth Efficient All-reduce Operation on Tree Topologies. In Proceedings of the IEEE International Parallel and Distributed Processing Symposium, 2007.

Peng, Y., Zhu, Y., Chen, Y., Bao, Y., Yi, B., Lan, C., Wu, C., and Guo, C. A Generic Communication Scheduler for Distributed DNN Training Acceleration. In Proceedings of the 27th ACM Symposium on Operating Systems Principles, 2019.

Sergeev, A. and Del Balso, M. Horovod: Fast and Easy Distributed Deep Learning in TensorFlow. arXiv preprint arXiv:1802.05799, 2018.

Simonyan, K. and Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Proceedings of International Conference on Learning Representations, 2015.

Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. Rethinking the Inception Architecture for Computer Vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016.

TensorFlow. XLA: Optimizing Compiler for Machine Learning, 2021. https://www.tensorflow.org/xla.

Zhang, H., Zheng, Z., Xu, S., Dai, W., Ho, Q., Liang, X., Hu, Z., Wei, J., Xie, P., and Xing, E. P. Poseidon: An Efficient Communication Architecture for Distributed Deep Learning on GPU Clusters. In Proceedings of USENIX Annual Technical Conference, 2017.

Zheng, S., Huang, Z., and Kwok, J. Communication-Efficient Distributed Blockwise Momentum SGD with Error-Feedback. In Proceedings of Advances in Neural Information Processing Systems, 2019.

Zhu, H., Phanishayee, A., and Pekhimenko, G. Daydream: Accurately Estimating the Efficacy of Optimizations for DNN Training. In Proceedings of USENIX Annual Technical Conference, 2020.
| [
"https://github.com/joapolarbear/dpro"
] |
[
"Drell-Yan Tails Beyond the Standard Model",
"Drell-Yan Tails Beyond the Standard Model"
] | [
"L Allwicher \nPhysik-Institut\nUniversität Zürich\nCH-8057ZürichSwitzerland\n\nIJCLab\nCNRS\nPôle Théorie (Bat. 210)\n\nIN2P3 et Université\n91405Paris-Saclay, OrsayFrance\n",
"D A Faroughy \nPhysik-Institut\nUniversität Zürich\nCH-8057ZürichSwitzerland\n\nIJCLab\nCNRS\nPôle Théorie (Bat. 210)\n\nIN2P3 et Université\n91405Paris-Saclay, OrsayFrance\n",
"F Jaffredo •• \nPhysik-Institut\nUniversität Zürich\nCH-8057ZürichSwitzerland\n\nIJCLab\nCNRS\nPôle Théorie (Bat. 210)\n\nIN2P3 et Université\n91405Paris-Saclay, OrsayFrance\n",
"O Sumensari •• \nPhysik-Institut\nUniversität Zürich\nCH-8057ZürichSwitzerland\n\nIJCLab\nCNRS\nPôle Théorie (Bat. 210)\n\nIN2P3 et Université\n91405Paris-Saclay, OrsayFrance\n",
"F Wilsch \nPhysik-Institut\nUniversität Zürich\nCH-8057ZürichSwitzerland\n\nIJCLab\nCNRS\nPôle Théorie (Bat. 210)\n\nIN2P3 et Université\n91405Paris-Saclay, OrsayFrance\n"
] | [
"Physik-Institut\nUniversität Zürich\nCH-8057ZürichSwitzerland",
"IJCLab\nCNRS\nPôle Théorie (Bat. 210)",
"IN2P3 et Université\n91405Paris-Saclay, OrsayFrance",
"Physik-Institut\nUniversität Zürich\nCH-8057ZürichSwitzerland",
"IJCLab\nCNRS\nPôle Théorie (Bat. 210)",
"IN2P3 et Université\n91405Paris-Saclay, OrsayFrance",
"Physik-Institut\nUniversität Zürich\nCH-8057ZürichSwitzerland",
"IJCLab\nCNRS\nPôle Théorie (Bat. 210)",
"IN2P3 et Université\n91405Paris-Saclay, OrsayFrance",
"Physik-Institut\nUniversität Zürich\nCH-8057ZürichSwitzerland",
"IJCLab\nCNRS\nPôle Théorie (Bat. 210)",
"IN2P3 et Université\n91405Paris-Saclay, OrsayFrance",
"Physik-Institut\nUniversität Zürich\nCH-8057ZürichSwitzerland",
"IJCLab\nCNRS\nPôle Théorie (Bat. 210)",
"IN2P3 et Université\n91405Paris-Saclay, OrsayFrance"
] | [] | We investigate the high-p T tails of the pp → ν and pp → Drell-Yan processes as probes of New Physics in semileptonic interactions with an arbitrary flavor structure. For this purpose, we provide a general decomposition of the 2 → 2 scattering amplitudes in terms of form-factors that we match to specific scenarios, such as the Standard Model Effective Field Theory (SMEFT), including all relevant operators up to dimension-8, as well as ultraviolet scenarios giving rise to tree-level exchange of new bosonic mediators with masses at the TeV scale. By using the latest LHC run-II data in the monolepton (eν, µν, τ ν) and dilepton (ee, µµ, τ τ , eµ, eτ , µτ ) production channels, we derive constraints on the SMEFT Wilson coefficients for semileptonic four-fermion and dipole operators with the most general flavor structure, as well as on all possible leptoquark models. For the SMEFT, we discuss the range of validity of the EFT description, the relevance of O(1/Λ 2 ) and O(1/Λ 4 ) truncations, the impact of d = 8 operators and the effects of different quark-flavor alignments. Finally, as a highlight, we extract for several New Physics scenarios the combined limits from high-p T processes, electroweak pole measurements and lowenergy flavor data for the b → cτ ν transition, showing the complementarity between these different observables. Our results are compiled in HighPT , a package in Mathematica which provides a simple way for users to extract the Drell-Yan tails likelihoods for semileptonic effective operators and for leptoquark models. | 10.1007/jhep03(2023)064 | [
"https://export.arxiv.org/pdf/2207.10714v2.pdf"
] | 251,018,727 | 2207.10714 | 4085c467db53988c2438db6382ecbbb970d7f26f |
Drell-Yan Tails Beyond the Standard Model
17 Mar 2023
L Allwicher
Physik-Institut
Universität Zürich
CH-8057ZürichSwitzerland
IJCLab
CNRS
Pôle Théorie (Bat. 210)
IN2P3 et Université
91405Paris-Saclay, OrsayFrance
D A Faroughy
Physik-Institut
Universität Zürich
CH-8057ZürichSwitzerland
IJCLab
CNRS
Pôle Théorie (Bat. 210)
IN2P3 et Université
91405Paris-Saclay, OrsayFrance
F Jaffredo ••
Physik-Institut
Universität Zürich
CH-8057ZürichSwitzerland
IJCLab
CNRS
Pôle Théorie (Bat. 210)
IN2P3 et Université
91405Paris-Saclay, OrsayFrance
O Sumensari ••
Physik-Institut
Universität Zürich
CH-8057ZürichSwitzerland
IJCLab
CNRS
Pôle Théorie (Bat. 210)
IN2P3 et Université
91405Paris-Saclay, OrsayFrance
F Wilsch
Physik-Institut
Universität Zürich
CH-8057ZürichSwitzerland
IJCLab
CNRS
Pôle Théorie (Bat. 210)
IN2P3 et Université
91405Paris-Saclay, OrsayFrance
Drell-Yan Tails Beyond the Standard Model
17 Mar 2023
We investigate the high-p T tails of the pp → ν and pp → Drell-Yan processes as probes of New Physics in semileptonic interactions with an arbitrary flavor structure. For this purpose, we provide a general decomposition of the 2 → 2 scattering amplitudes in terms of form-factors that we match to specific scenarios, such as the Standard Model Effective Field Theory (SMEFT), including all relevant operators up to dimension-8, as well as ultraviolet scenarios giving rise to tree-level exchange of new bosonic mediators with masses at the TeV scale. By using the latest LHC run-II data in the monolepton (eν, µν, τ ν) and dilepton (ee, µµ, τ τ , eµ, eτ , µτ ) production channels, we derive constraints on the SMEFT Wilson coefficients for semileptonic four-fermion and dipole operators with the most general flavor structure, as well as on all possible leptoquark models. For the SMEFT, we discuss the range of validity of the EFT description, the relevance of O(1/Λ 2 ) and O(1/Λ 4 ) truncations, the impact of d = 8 operators and the effects of different quark-flavor alignments. Finally, as a highlight, we extract for several New Physics scenarios the combined limits from high-p T processes, electroweak pole measurements and lowenergy flavor data for the b → cτ ν transition, showing the complementarity between these different observables. Our results are compiled in HighPT , a package in Mathematica which provides a simple way for users to extract the Drell-Yan tails likelihoods for semileptonic effective operators and for leptoquark models.
Introduction
Semileptonic transitions are powerful probes of physics beyond the Standard Model (SM) which can indirectly access new phenomena arising at scales well beyond the reach of the direct searches at particle colliders. The most sensitive observables are those suppressed in the SM, such as Flavor Changing Neutral Currents (FCNC), which allow us to indirectly probe New Physics scales ($\Lambda$) up to $\mathcal O(10^5~{\rm TeV})$ with the current precision [1]. The physics of low-energy semileptonic transitions has recently attracted a lot of attention thanks to the rich program of experiments studying $K$- [2,3], $D$- [4] and $B$-meson [5,6] decays, which will offer many opportunities to probe New Physics in the near future. In particular, current data with loop-level [7-11] and tree-level [12-19] induced $B$-meson decays already shows intriguing patterns of deviations from the SM which are under scrutiny at LHCb and Belle-II.
Although low-energy flavor observables provide the most stringent constraints on semileptonic transitions, their sensitivity depends fundamentally on the assumption regarding the (unknown) flavor structure of physics Beyond the SM (BSM). New Physics scenarios based on Minimal Flavor Violation [20] or the U (2) symmetry [21] can be fully compatible with current flavor data for much lower values of Λ, in the O(1 TeV) range, due to the symmetry protection of quark-flavor violation. These scales are currently being directly probed at the LHC, which can provide useful complementary probes to flavor observables. In particular, this is the case for operators that are unconstrained or that can only be weakly constrained by low-energy processes, see e.g. Ref. [22,23].
Measurements in the tails of momentum-dependent distributions in $pp\to\ell\nu$ and $pp\to\ell\ell$ processes at the LHC have proven to be powerful probes of flavor physics at hadron colliders. Effective Field Theory (EFT) contributions to the Drell-Yan cross-sections can be energy enhanced, as long as the EFT approach is valid, being potentially larger than the SM background in the tails of the distributions [24]. Furthermore, the parton content of the proton includes five different quark flavors that can be exploited to indirectly probe various semileptonic transitions in high-energy proton collisions. A notable example concerns the discrepancies observed in $b\to c\tau\nu$ [12-19], for which LHC data from $pp\to\tau\tau$ at high-$p_T$ was used to discard several putative BSM explanations of these anomalies, see e.g. Ref. [25] and following works. Furthermore, Drell-Yan measurements at the LHC are important probes for leptoquark states with masses in the TeV range, and in particular, for those coupling to third-generation fermions. Indeed, it was pointed out long ago in Ref. [26] that leptoquarks can contribute non-resonantly to dilepton production $pp\to\ell\ell$ at hadron colliders when exchanged in the t- or u-channels. These indirect processes provide an additional experimental handle that can complement the existing leptoquark pair-production searches, in some cases more efficiently than the leptoquark single-production processes, see e.g. Refs. [27,28].
There have been many studies that derive constraints on flavor-physics scenarios by using the processes $pp\to\ell\ell$ [23,25,29-33], $pp\to\ell\nu$ [23,32,34-41] and $pp\to\ell\ell'$ (with $\ell\neq\ell'$) [22] at the LHC. However, these studies typically consider either specific types of processes, or they impose a given ansatz for the flavor pattern of the New Physics couplings. The complete combination of the LHC constraints on semileptonic interactions into a single framework was not available thus far. In this paper, we aim to fill this gap by combining the most recent LHC data from all possible monolepton and dilepton production channels, without any assumption on the flavor of the colliding quarks. This combination will be done for the Standard Model EFT (SMEFT) [42,43], with a consistent EFT expansion up to $\mathcal O(1/\Lambda^4)$ including $d\leq8$ contributions at the cross-section level (see Refs. [44-46] for similar analyses), as well as for models with concrete mediators, which should be used if the experimental precision in a given channel is not sufficient to justify the EFT approach. In particular, implementing both EFT and concrete ultraviolet models will allow us to assess the range of EFT validity within a few selected examples. Furthermore, we will verify the EFT truncation by using a clipping procedure, which imposes a maximal energy cut ($M_{\rm cut}$) on the data considered to extract the limits [47].
As an important by-product of our work, we introduce the Mathematica package HighPT [48] that provides the complete SMEFT likelihood for semileptonic operators in Drell-Yan processes at the LHC. This package will complement the ongoing effort to provide tools for the SMEFT phenomenology of low-energy flavor observables [49-52], as well as electroweak and Higgs data [52-54]. HighPT also provides the likelihood for specific models of interest such as leptoquarks [55,56], including their propagation effects at the LHC. The comparison of the constraints derived for both EFT and concrete models will allow the users to directly assess the validity of the EFT description for a given high-$p_T$ process.
The remainder of this paper is organized as follows. In Sec. 2, we provide a general description in terms of form-factors of the neutral and charged Drell-Yan processes at hadron colliders. In Sec. 3, we use this framework to describe two specific New Physics scenarios with arbitrary flavor structures: (i) the SMEFT with operators up to $d=8$, and (ii) the most relevant simplified models for new bosonic mediators exchanged at tree level. In Sec. 4, we recast the most recent LHC searches in the monolepton and dilepton channels for all possible final states and use these to set the most stringent LHC limits on $d=6$ dipoles and semileptonic operators, as well as for all leptoquark mediators. We also discuss the validity of the $\mathcal O(1/\Lambda^2)$ and $\mathcal O(1/\Lambda^4)$ EFT truncations and assess the impact of dimension-8 operators for different initial quark flavors. In Sec. 5, we apply our results to the New Physics interpretations of the ongoing discrepancies observed in $b\to c\tau\nu$ low-energy data, combining our LHC bounds with the relevant flavor and electroweak constraints. We summarize our findings and discuss the outlook for our study in Sec. 6.
Drell-Yan Production at Hadron Colliders
In this Section, we provide a general description of the processes $pp\to\ell^-_\alpha\ell^+_\beta$ and $pp\to\ell^\pm_\alpha\nu_\beta$, in terms of form-factors, where $\alpha$, $\beta$ are generic lepton-flavor indices. This description has the advantage of covering both the EFT case, as well as scenarios containing new bosonic mediators that propagate at tree level.
Amplitude decomposition
First, we consider the scattering amplitude for the neutral Drell-Yan process $\bar q_i q_j\to\ell^-_\alpha\ell^+_\beta$ given by the first two diagrams in Fig. 2.1, with $q_i=\{u_i,d_i\}$, where quark and lepton flavor indices are denoted by Latin letters ($i,j=1,2,3$) and Greek letters ($\alpha,\beta=1,2,3$), respectively, unless stated otherwise. The most general decomposition of the four-point scattering amplitude that is Lorentz invariant and consistent with the $SU(3)_c\times U(1)_{\rm em}$ gauge symmetry reads
$$\mathcal A(\bar q_i q_j\to\ell^-_\alpha\ell^+_\beta) = \frac{1}{v^2}\sum_{XY}\Big\{\,(\bar\ell_\alpha\gamma^\mu P_X\ell_\beta)(\bar q_i\gamma_\mu P_Y q_j)\,\big[\mathcal F^{XY,\,qq}_V(\hat s,\hat t)\big]_{\alpha\beta ij} + (\bar\ell_\alpha P_X\ell_\beta)(\bar q_i P_Y q_j)\,\big[\mathcal F^{XY,\,qq}_S(\hat s,\hat t)\big]_{\alpha\beta ij} + (\bar\ell_\alpha\sigma^{\mu\nu}P_X\ell_\beta)(\bar q_i\sigma_{\mu\nu}P_Y q_j)\,\delta_{XY}\,\big[\mathcal F^{XY,\,qq}_T(\hat s,\hat t)\big]_{\alpha\beta ij} + (\bar\ell_\alpha\gamma_\mu P_X\ell_\beta)(\bar q_i\sigma^{\mu\nu}P_Y q_j)\,\frac{ik_\nu}{v}\,\big[\mathcal F^{XY,\,qq}_{D_q}(\hat s,\hat t)\big]_{\alpha\beta ij} + (\bar\ell_\alpha\sigma^{\mu\nu}P_X\ell_\beta)(\bar q_i\gamma_\mu P_Y q_j)\,\frac{ik_\nu}{v}\,\big[\mathcal F^{XY,\,qq}_{D_\ell}(\hat s,\hat t)\big]_{\alpha\beta ij}\,\Big\}\,, \qquad (2.1)$$
where $X,Y\in\{L,R\}$ are the chiralities of the anti-lepton and anti-quark fields, $P_{R,L}=(1\pm\gamma_5)/2$ are the chirality projectors, $v=(\sqrt2\,G_F)^{-1/2}$ stands for the electroweak vacuum expectation value (vev), and fermion masses have been neglected. Here, it is understood that $q$ ($\bar q$) and $\ell$ ($\bar\ell$) denote the Dirac spinors of the incoming quark (anti-quark) and outgoing anti-lepton (lepton) fields, respectively. The four-momentum of the dilepton system is defined by $k=p_q+p_{\bar q}$, and we take the Mandelstam variables to be $\hat s=k^2=(p_q+p_{\bar q})^2$, $\hat t=(p_q-p_{\ell^-})^2$ and $\hat u=(p_q-p_{\ell^+})^2=-\hat s-\hat t$. For each of the five components in Eq. (2.1) we define the neutral-current form-factor $\mathcal F^{XY,\,qq}_I(\hat s,\hat t)$, where $I\in\{V,S,T,D_\ell,D_q\}$ labels the corresponding vector, scalar, tensor, lepton-dipole and quark-dipole Lorentz structures, respectively. These form-factors are dimensionless functions of the Mandelstam variables that describe the underlying local and non-local semileptonic interactions between fermions with fixed flavors and chiralities. Note, in particular, that the tensor form-factor is non-vanishing only for $X=Y$. Similarly, the most general scattering amplitude for the charged-current Drell-Yan process can be written as
$$\mathcal A(\bar u_i d_j\to\ell^-_\alpha\bar\nu_\beta) = \frac{1}{v^2}\sum_{XY}\Big\{\,(\bar\ell_\alpha\gamma^\mu P_X\nu_\beta)(\bar u_i\gamma_\mu P_Y d_j)\,\big[\mathcal F^{XY,\,ud}_V(\hat s,\hat t)\big]_{\alpha\beta ij} + (\bar\ell_\alpha P_X\nu_\beta)(\bar u_i P_Y d_j)\,\big[\mathcal F^{XY,\,ud}_S(\hat s,\hat t)\big]_{\alpha\beta ij} + (\bar\ell_\alpha\sigma^{\mu\nu}P_X\nu_\beta)(\bar u_i\sigma_{\mu\nu}P_Y d_j)\,\delta_{XY}\,\big[\mathcal F^{XY,\,ud}_T(\hat s,\hat t)\big]_{\alpha\beta ij} + (\bar\ell_\alpha\gamma_\mu P_X\nu_\beta)(\bar u_i\sigma^{\mu\nu}P_Y d_j)\,\frac{ik_\nu}{v}\,\big[\mathcal F^{XY,\,ud}_{D_q}(\hat s,\hat t)\big]_{\alpha\beta ij} + (\bar\ell_\alpha\sigma^{\mu\nu}P_X\nu_\beta)(\bar u_i\gamma_\mu P_Y d_j)\,\frac{ik_\nu}{v}\,\big[\mathcal F^{XY,\,ud}_{D_\ell}(\hat s,\hat t)\big]_{\alpha\beta ij}\,\Big\}\,, \qquad (2.2)$$
where the dilepton four-momentum is defined in a similar way by $k=p_d+p_{\bar u}$, and where we take the Mandelstam variables to be $\hat s=k^2=(p_d+p_{\bar u})^2$, $\hat t=(p_d-p_{\ell^-})^2$ and $\hat u=(p_d-p_\nu)^2$.
The charged-current form-factors are denoted by $\mathcal F^{XY,\,ud}_I(\hat s,\hat t)$, with the same possible Lorentz structures as in the previous case. The above equation is also valid for $X=R$ in the presence of a light right-handed neutrino field $N\sim(1,1,0)$ that is a singlet under the SM gauge group. The amplitudes in Eqs. (2.1) and (2.2) are written in the mass basis. Similar expressions in the weak-interaction basis can be recovered by rotating the quark fields accordingly, as described in Appendix B.2. From now on, we thus take all flavor indices in the basis of weak interactions if not mentioned otherwise.
Related processes
We briefly comment on two other types of semileptonic processes at hadron colliders that are closely related to Drell-Yan production. The first of these are the quark-lepton fusion processes $q_i\ell_\alpha\to q_j\ell_\beta$ and $d_i\ell_\alpha\to u_j\nu_\beta$. These probe the same semileptonic transitions entering Drell-Yan production. In this case, the initial lepton is taken as a partonic constituent of the proton, with a parton distribution function (PDF) that is suppressed by $\alpha_{\rm em}$ [57]. By using crossing symmetry, it is straightforward to express the amplitudes in terms of the Drell-Yan form-factors described above,
$$\mathcal A(u_j\,\ell^+_\alpha\to u_i\,\ell^+_\beta) = \mathcal A(\bar u_iu_j\to\ell^-_\alpha\ell^+_\beta)\big|_{\hat s\to-\hat t,\ \hat t\to-\hat s}\,, \qquad (2.3)$$
$$\mathcal A(d_j\,\ell^+_\alpha\to d_i\,\ell^+_\beta) = \mathcal A(\bar d_id_j\to\ell^-_\alpha\ell^+_\beta)\big|_{\hat s\to-\hat t,\ \hat t\to-\hat s}\,, \qquad (2.4)$$
$$\mathcal A(d_j\,\ell^+_\alpha\to u_i\,\bar\nu_\beta) = \mathcal A(\bar u_id_j\to\ell^-_\alpha\bar\nu_\beta)\big|_{\hat s\to-\hat t,\ \hat t\to-\hat s}\,. \qquad (2.5)$$
Another relevant probe of semileptonic transitions, also related to Drell-Yan production, are the quark-gluon fusion processes $q_jg\to q_i\,\ell^-_\alpha\ell^+_\beta$ and $q_jg\to q_i\,\ell^\mp_\alpha\nu_\beta$. Since these are $2\to3$ scattering processes, they suffer from an additional phase-space suppression when compared to the $2\to2$ Drell-Yan processes. In the following, given that both the quark-lepton and quark-gluon fusions are generically less efficient probes of New Physics, we will focus exclusively on the Drell-Yan production modes, as they are currently the most relevant ones for phenomenology.
Form-factor parametrization
In the following, we introduce a general parametrization of the Drell-Yan form-factors that is useful for describing tree-level contributions from generic New Physics. We perform an analytic continuation of the scattering amplitudes to the complex $\hat s$ and $\hat t$ Mandelstam variables. Furthermore, we assume that the form-factors are analytic functions within some radius $\Lambda^2$, except for a finite set of simple poles in the $\hat s$, $\hat t$ and $\hat u$ complex planes. This assumption captures all possible tree-level physics entering Drell-Yan production at collider energies below the scale $\Lambda$, i.e. for $\Lambda^2\gg|\hat s|,|\hat t|$. We decompose each form-factor into a regular term and a pole term,
$$\mathcal F_I(\hat s,\hat t) = \mathcal F_{I,\rm Reg}(\hat s,\hat t) + \mathcal F_{I,\rm Poles}(\hat s,\hat t)\,, \qquad (2.6)$$
encoding underlying local and non-local semileptonic interactions, respectively. To simplify the notation we drop the $XY$ and $qq$ superscripts wherever the equations hold true for any form-factor, and only keep the dependence on $I\in\{V,S,T,D_\ell,D_q\}$. The regular form-factor $\mathcal F_{I,\rm Reg}$ is an analytic function that describes local interactions, e.g. four-point contact interactions, that arise from heavy unresolved degrees of freedom living at the scale $\Lambda$ beyond the characteristic energy of the scattering process. Within the radius $\Lambda^2$, this function admits a power series expansion of the form

$$\mathcal F_{I,\rm Reg}(\hat s,\hat t) = \sum_{n,m=0}^{\infty}\mathcal F_{I\,(n,m)}\left(\frac{\hat s}{v^2}\right)^n\left(\frac{\hat t}{v^2}\right)^m\,, \qquad (2.7)$$
where the $\mathcal F_{I\,(n,m)}$ are dimensionless expansion coefficients. This expression provides a convenient separation of contributions with different scalings in $\hat s$ and $\hat t$ and, in particular, of those that become dominant in the tails of the Drell-Yan distributions. The power series in Eq. (2.7) is not to be confused with the EFT expansion in $1/\Lambda$, since each coefficient $\mathcal F_{I\,(n,m)}$ receives contributions from an infinite tower of non-renormalizable operators, as will be discussed for the SMEFT in Sec. 3. The pole form-factor $\mathcal F_{I,\rm Poles}$ is a non-analytic function with a finite number of simple poles describing non-local tree-level interactions. We adopt the following parametrization,
$$\mathcal F_{I,\rm Poles}(\hat s,\hat t) = \sum_a\frac{v^2\,S^{(a)}_I}{\hat s-\Omega_a} + \sum_b\frac{v^2\,T^{(b)}_I}{\hat t-\Omega_b} - \sum_c\frac{v^2\,U^{(c)}_I}{\hat s+\hat t+\Omega_c}\,, \qquad (2.8)$$
where the poles $\Omega_k=m_k^2-im_k\Gamma_k$ belong to each of the corresponding complex Mandelstam planes, with the last term representing the poles in the u-channel. The pole residues $S^{(a)}_I$, $T^{(b)}_I$ and $U^{(c)}_I$ are taken to be dimensionless parameters. Each term in Eq. (2.8) describes the tree-level exchange of degrees of freedom in the s-, t- and u-channels, respectively, i.e. these are the propagators for various bosons $a$, $b$, $c$ with masses $m_{a,b,c}$ and widths $\Gamma_{a,b,c}$ that can be resolved at the energy scales involved in the scattering.

In principle, the simple-pole assumption for the form-factor singularities allows the numerators in Eq. (2.8) to be general analytic functions of the form $S^{(a)}_I(\hat s)$, $T^{(b)}_I(\hat t)$ and $U^{(c)}_I(\hat u)$, each describing the product of two local three-point interactions. However, the dependence of these residues on the Mandelstam variables can be completely removed from each pole by applying the identity
$$\frac{Z_I(\hat z)}{\hat z-\Omega} = \frac{Z_I(\Omega)}{\hat z-\Omega} + h(\hat z,\Omega)\,, \qquad (2.9)$$
where $h(\hat z,\Omega)$ is an analytic function of $\hat z\in\{\hat s,\hat t,\hat u\}$ that can be reabsorbed into the regular form-factor by a redefinition of $\mathcal F_{I,\rm Reg}$ (see footnote 8).
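This absorption can be verified symbolically; the sketch below (illustrative only, not part of the paper's tooling) checks with sympy that for the monomial $Z_I(\hat z)=\hat z^n$ the remainder $h(\hat z,\Omega)$ is exactly the finite geometric sum quoted in footnote 8.

```python
# Symbolic check (sketch) of the residue identity in Eq. (2.9): for
# Z_I(z) = z^n, the remainder h(z, Omega) obtained after isolating the
# pole residue Z_I(Omega) is a polynomial in z, matching Eq. (2.10).
import sympy as sp

z, Omega = sp.symbols("z Omega", nonzero=True)

for n in range(1, 6):
    # h(z, Omega) = (z^n - Omega^n) / (z - Omega): a polynomial after cancellation
    h = sp.cancel((z**n - Omega**n) / (z - Omega))
    geometric = Omega**(n - 1) * sum((z / Omega)**k for k in range(n))
    assert sp.simplify(h - geometric) == 0
    print(f"n = {n}:  h =", sp.expand(h))
```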
Form-factors in the SM
In the SM, the gauge bosons contribute to the Drell-Yan amplitudes in Eqs. (2.1) and (2.2) through the s-channel poles of the vector form-factors. It is therefore convenient to separate the effects of the SM from potential BSM effects by defining the s-channel vector residues in Eq. (2.8) as
$$S^{(a)}_V = S_{(a,\rm SM)} + \delta S_{(a)}\,, \qquad (2.11)$$
with $a\in\{\gamma,W,Z\}$, and where the $\delta S_{(a)}$ parametrize potential modifications of the SM gauge couplings to fermions. The SM pole residues at leading order read
$$S^{XY,\,qq}_{(\gamma,\rm SM)} = 4\pi\alpha_{\rm em}\,Q_\ell\,Q_q\,\mathbb 1_\ell\,\mathbb 1_q\,, \qquad (2.12)$$
$$S^{XY,\,qq}_{(Z,\rm SM)} = \frac{4\pi\alpha_{\rm em}}{c_W^2\,s_W^2}\,g^\ell_X\,g^q_Y\,, \qquad (2.13)$$
$$S^{LL,\,ud}_{(W,\rm SM)} = \frac{1}{2}\,g^2\,\mathbb 1_\ell\,V\,, \qquad (2.14)$$
where $g^\psi_X\equiv(T^3_{\psi_X}-s_W^2\,Q_\psi)\,\mathbb 1_\psi$ denote the $Z$-boson couplings to a fermion $\psi$ with electric charge $Q_\psi$ and weak isospin $T^3_{\psi_X}$, $g$ is the $SU(2)_L$ gauge coupling, and $c_W\equiv\cos\theta_W$ and $s_W\equiv\sin\theta_W$, where $\theta_W$ denotes the Weinberg angle. The $3\times3$ Cabibbo-Kobayashi-Maskawa (CKM) matrix is labeled $V$, and $\mathbb 1_{\ell\,(q)}$ correspond to the $3\times3$ unit matrices in lepton (quark) flavor space with components $\delta_{\alpha\beta}$ ($\delta_{ij}$). See Appendix B for our conventions. New Physics contributions to the form-factors $S_{(a)}$, $T_{(a)}$ and $U_{(a)}$ will be discussed in Sec. 3.

[Footnote 8: The identity in Eq. (2.9) can be shown by power expanding the numerator $Z_I(\hat z)$ and decomposing each of the resulting terms into partial fractions as
$$\frac{\hat z^n}{\hat z-\Omega} = \frac{\Omega^n}{\hat z-\Omega} + \Omega^{n-1}\sum_{k=0}^{n-1}\left(\frac{\hat z}{\Omega}\right)^k\,. \qquad (2.10)$$]
Cross-sections
The general amplitudes given in Eqs. (2.1) and (2.2) can be used to compute the neutral- and charged-current cross-sections, respectively. After integrating over the azimuthal angle, the differential partonic cross-section for the Drell-Yan process is given by

$$\frac{d\hat\sigma}{d\hat t}(\bar q_iq'_j\to\ell_\alpha\bar\ell'_\beta) = \frac{1}{48\pi v^4}\sum_{XY}\sum_{IJ}M^{XY}_{IJ}(\hat s,\hat t)\,\big[\mathcal F^{XY,\,qq'}_I(\hat s,\hat t)\big]_{\alpha\beta ij}\big[\mathcal F^{XY,\,qq'}_J(\hat s,\hat t)\big]^*_{\alpha\beta ij}\,, \qquad (2.15)$$

where neutral and charged currents are described by the same expression, with $q^{(\prime)}\in\{u,d\}$ being either a down- or up-type quark, and $\ell'\in\{\ell,\nu\}$ denoting either a charged lepton or a neutrino, depending on the specific process. The indices $I,J\in\{V,S,T,D_\ell,D_q\}$ account for the different contributions, and $M^{XY}$ are $5\times5$ symmetric matrices that take the form

$$M^{XY}(\hat s,\hat t) = \begin{pmatrix} M^{XY}_{VV}(\hat t/\hat s) & 0 & 0 & 0 & 0\\ 0 & M^{XY}_{SS}(\hat t/\hat s) & M^{XY}_{ST}(\hat t/\hat s) & 0 & 0\\ 0 & M^{XY}_{ST}(\hat t/\hat s) & M^{XY}_{TT}(\hat t/\hat s) & 0 & 0\\ 0 & 0 & 0 & \frac{\hat s}{v^2}M^{XY}_{DD}(\hat t/\hat s) & 0\\ 0 & 0 & 0 & 0 & \frac{\hat s}{v^2}M^{XY}_{DD}(\hat t/\hat s) \end{pmatrix}, \qquad (2.16)$$
where the different $M^{XY}_{IJ}$ entries are polynomials in the angular variable $\omega\equiv\hat t/\hat s$, defined by

$$M^{XY}_{VV}(\omega) = (1+2\omega)\,\delta_{XY} + \omega^2\,, \qquad (2.17)$$
$$M^{XY}_{SS}(\omega) = 1/4\,, \qquad (2.18)$$
$$M^{XY}_{TT}(\omega) = 4\,(1+2\omega)^2\,\delta_{XY}\,, \qquad (2.19)$$
$$M^{XY}_{ST}(\omega) = -(1+2\omega)\,\delta_{XY}\,, \qquad (2.20)$$
$$M^{XY}_{DD}(\omega) = -\omega\,(1+\omega)\,. \qquad (2.21)$$
The quantity $\omega=-(1-\cos\theta_\ell)/2$ is a function of the emission angle $\theta_\ell$ of the lepton $\ell^-$ with respect to the incoming quark in the center-of-mass frame. At the differential level, there is only an interference term between the scalar and tensor structures, and it vanishes for any observable that is symmetric in $\theta_\ell$ with respect to $\pi/2$, including e.g. the full cross-section.
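This last statement can be checked numerically; the following sketch (with $\delta_{XY}=1$ for illustration) integrates the angular coefficients of Eqs. (2.17)-(2.21) over the full emission angle and confirms that the scalar-tensor interference drops out.

```python
# Numeric sketch: under theta_l -> pi - theta_l the variable
# omega = -(1 - cos(theta_l))/2 maps to -1 - omega, and M_ST in Eq. (2.20)
# is odd under this reflection, so it drops out of the angle-integrated
# cross-section, while the vector term survives.
from scipy.integrate import quad

M_VV = lambda w: (1.0 + 2.0 * w) + w**2      # Eq. (2.17) with X = Y
M_ST = lambda w: -(1.0 + 2.0 * w)            # Eq. (2.20) with delta_XY = 1

int_VV, _ = quad(M_VV, -1.0, 0.0)
int_ST, _ = quad(M_ST, -1.0, 0.0)
print(f"angle-integrated VV term: {int_VV:.4f}")   # -> 1/3
print(f"angle-integrated ST term: {int_ST:.1e}")   # -> 0
```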
Hadronic cross-sections
The hadronic cross-section $\sigma$ at a proton-proton collider can be written, following the conventions of Ref. [61], as the convolution of the partonic cross-section $\hat\sigma(\bar q_iq_j\to\ell_\alpha\bar\ell'_\beta)$ with the PDFs $f_{\bar q_i}(x,\mu_F)$ and $f_{q_j}(x,\mu_F)$, summed over all possible incoming quark-flavor combinations,

$$\sigma(pp\to\ell_\alpha\bar\ell'_\beta) = \sum_{ij}\int_0^1 dx_1\,dx_2\,f_{\bar q_i}(x_1,\mu)\,f_{q_j}(x_2,\mu)\,\hat\sigma(\bar q_iq_j\to\ell_\alpha\bar\ell'_\beta) + (\bar q_i\leftrightarrow q_j)\,, \qquad (2.22)$$
where $x_{1,2}$ are the fractions of momenta that the scattering quarks carry relative to the momenta of the corresponding protons. We set the factorization and renormalization scales equal to the scale of the hard scattering, $\mu=\sqrt{\hat s}$. The hadronic cross-section can be more conveniently expressed as

$$\sigma(pp\to\ell_\alpha\bar\ell'_\beta) = \sum_{ij}\int\frac{d\hat s}{s}\,\mathcal L_{ij}(\hat s)\,\hat\sigma(\bar q_iq_j\to\ell_\alpha\bar\ell'_\beta)\,, \qquad (2.23)$$
where $\sqrt s=13$ TeV is the proton-proton center-of-mass energy for the LHC searches considered in this paper, and where the $\mathcal L_{ij}(\hat s)$ are the dimensionless parton-parton luminosity functions [61,62] defined as

$$\mathcal L_{ij}(\hat s) \equiv \int_{\hat s/s}^{1}\frac{dx}{x}\,f_{\bar q_i}(x,\mu)\,f_{q_j}\!\left(\frac{\hat s}{sx},\mu\right) + (\bar q_i\leftrightarrow q_j)\,. \qquad (2.24)$$
In Sec. 4, we will confront the Drell-Yan predictions of different BSM models with the LHC run-II measurements in the high-$p_T$ tails of various momentum-dependent distributions. For the neutral Drell-Yan process, we compute the particle-level distribution of the invariant mass $m_{\ell\ell}$ of the dilepton system in terms of the form-factors introduced in Eq. (2.1). Combining the previous results, we find that the hadronic cross-section restricted to a specific invariant-mass bin $\mathcal B\equiv[m_0,m_1]$ is given by
$$\sigma_{\mathcal B}(pp\to\ell^-_\alpha\ell^+_\beta) = \frac{1}{48\pi v^2}\sum_{XY,\,IJ}\sum_{ij}\int_{m_0^2}^{m_1^2}\frac{d\hat s}{s}\int_{-\hat s}^{0}\frac{d\hat t}{v^2}\;M^{XY}_{IJ}\,\mathcal L_{ij}\,\big[\mathcal F^{XY,\,qq}_I\big]_{\alpha\beta ij}\big[\mathcal F^{XY,\,qq}_J\big]^*_{\alpha\beta ij}\,, \qquad (2.25)$$
where summing over up- and down-type quarks $q\in\{u,d\}$ is implied. Similarly, for the charged Drell-Yan process we compute the particle-level distribution of the transverse momentum of the charged lepton, $p_T(\ell^\pm)$. In this case the cross-section $\sigma_{\mathcal B}(pp\to\ell^\pm_\alpha\nu_\beta)$ restricted to a specific high-$p_T$ bin $\mathcal B\equiv[p_{T0},p_{T1}]$ takes the same form as in Eq. (2.25), but with the integration boundaries changed to (see footnote 9)

$$\int_{m_0^2}^{m_1^2}d\hat s \;\longrightarrow\; \int_{4p_{T0}^2}^{s}d\hat s \qquad \text{and} \qquad \int_{-\hat s}^{0}d\hat t \;\longrightarrow\; \int_{\hat t^+_0}^{\hat t^+_1}d\hat t + \int_{\hat t^-_1}^{\hat t^-_0}d\hat t\,, \qquad (2.26)$$

$$\text{where} \qquad \hat t^\pm_i(\hat s) = -\frac{\hat s}{2}\left[1\pm\sqrt{1-\min\!\left(1,\frac{4p_{Ti}^2}{\hat s}\right)}\right]. \qquad (2.27)$$

[Footnote 9: Notice that for $\hat s<4p_{T1}^2$ we find $\hat t^-_1=\hat t^+_1$, whereas for $\hat s<4p_{T0}^2$ the cross-section vanishes. Taking the limits $p_{T0}\to0$ and $p_{T1}\to\infty$ yields again the integration boundaries for the full angular integration.]
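The boundaries of Eq. (2.27) can be encoded in a short helper; the sketch below also illustrates the remark in footnote 9, namely that the $\hat t$-integration window closes when $\hat s<4p_T^2$.

```python
# Sketch of the monolepton t-integration boundaries of Eq. (2.27).
import math

def t_hat_bounds(shat, pT):
    """Return (t^-, t^+) for a given shat and transverse-momentum edge pT."""
    disc = math.sqrt(1.0 - min(1.0, 4.0 * pT**2 / shat))
    return -0.5 * shat * (1.0 + disc), -0.5 * shat * (1.0 - disc)

print(t_hat_bounds(2000.0**2, 400.0))   # open window: t^- < t^+
print(t_hat_bounds(700.0**2, 400.0))    # shat < 4 pT^2: window closes, t^- = t^+
```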
For the sake of presentation, we have not explicitly expanded the form-factors in Eqs. (2.15) and (2.25) in terms of the various regular and pole form-factors defined in (2.7) and (2.8).
Complete expressions for the hadronic cross-sections in terms of these parameters can be easily extracted for any bin using the Mathematica package HighPT.
High-$p_T$ Tails
As mentioned above, the high-energy regime of the dilepton invariant mass or the monolepton transverse mass is known to be a very sensitive probe of a variety of New Physics models affecting semileptonic transitions. In the SM, the partonic cross-section scales as $\sim1/E^2$ at high energies, leading to a smoothly falling tail in the kinematic distributions of momentum-dependent observables. The presence of new semileptonic interactions can substantially impact the shape of these distributions at high energies, either via resonant or non-resonant effects. The most pronounced New Physics effect is the appearance of a resonant feature on top of the smoothly falling SM background, i.e. a peak in the dilepton invariant-mass spectrum, or an edge in the monolepton transverse-mass spectrum. This observation would indicate that a heavy colorless particle was produced on-shell in the s-channel. Non-resonant effects from contact interactions, or from leptoquark states exchanged in the t/u-channels, on the other hand, lead to more subtle non-localized features that appear in the tails of the distributions. Indeed, energy-enhanced interactions coming from non-renormalizable operators modify the energy scaling of the cross-section, leading to an apparent violation of unitarity at high energies. The effects from leptoquarks exchanged in the t/u-channels lead to a similar behavior [26,63,64]. After convolution with the quark PDFs, the non-resonant features are more difficult to uncover than a resonance peak, but they are the only potentially observable effect if the collision energy is not sufficient to produce the New Physics particles on-shell.
Finally, we remark that for the quark-lepton fusion process $q_i\bar\ell_\alpha\to q_j\bar\ell_\beta$, leptoquarks are exchanged in the s-channel, leading to a resonance peak in the jet-lepton invariant-mass distribution [65-68], whereas the colorless mediators, now exchanged in the t/u-channels, produce non-resonant effects in the tails of the distributions. The lepton PDFs have been recently computed in Ref. [66] and could be used to give a robust estimate of the event yields. In this paper we will not provide limits from quark-lepton fusion, focusing only on the Drell-Yan processes. The reason for this is two-fold: (i) given that both processes are related through crossing symmetry, it is enough to focus on the better performing of the two, i.e. Drell-Yan, with a PDF enhancement of order $\mathcal O(\alpha_s/\alpha)^2$ with respect to quark-lepton fusion; (ii) dedicated LHC resonance searches in jet-lepton final states that can be recast for quark-lepton fusion are not yet available.

[Footnote 10: Note, however, that for leptoquark mediators that couple to valence quarks the resonant enhancement can compensate for the lepton PDF suppression and produce competitive limits.]

The SMEFT

If there is a large separation between the scale of New Physics $\Lambda$ and the electroweak scale $v$, extending the SM to the SMEFT gives a general description of physical processes in the infrared without having to specify the details of the ultraviolet completion. Below the cutoff, these interactions can be described in terms of $SU(3)_c\times SU(2)_L\times U(1)_Y$ invariant non-renormalizable operators built out of the SM degrees of freedom. The SMEFT Lagrangian is given by
$$\mathcal L_{\rm SMEFT} = \mathcal L_{\rm SM} + \sum_{d,k}\frac{\mathcal C^{(d)}_k}{\Lambda^{d-4}}\,\mathcal O^{(d)}_k + \sum_{d,k'}\left[\frac{\mathcal C^{(d)}_{k'}}{\Lambda^{d-4}}\,\mathcal O^{(d)}_{k'} + \text{h.c.}\right], \qquad (3.1)$$
where the first term corresponds to the SM Lagrangian, the first sum runs over the Hermitian operators $\mathcal O^{(d)}_k$ of mass dimension $d$, and the second over the non-Hermitian ones, whose conjugates are added explicitly; complete operator bases at $d=6$ and $d=8$ have been constructed in Refs. [42,43] and [69,70], respectively. In this paper, we consider the Warsaw operator basis at $d=6$ from Ref. [43], as well as its extension to $d=8$ from Ref. [70]. Our conventions for the SMEFT operators are given in Appendix B.
To consistently describe a given scattering cross-section at the LHC up to order $\mathcal O(1/\Lambda^4)$ in the EFT expansion, it is necessary to include not only the contributions from dimension-6 operators, but also the interference terms between $d=8$ operators and the SM contributions, since they appear at the same order,

$$\sigma \sim \int[d\Phi]\left\{|\mathcal A_{\rm SM}|^2 + \frac{v^2}{\Lambda^2}\sum_i 2\,\mathrm{Re}\,\mathcal A^{(6)}_i\mathcal A^*_{\rm SM} + \frac{v^4}{\Lambda^4}\left[\sum_{ij}2\,\mathrm{Re}\,\mathcal A^{(6)}_i\mathcal A^{(6)\,*}_j + \sum_i 2\,\mathrm{Re}\,\mathcal A^{(8)}_i\mathcal A^*_{\rm SM}\right] + \ldots\right\}, \qquad (3.2)$$

where $\mathcal A^{(6)}_i$ and $\mathcal A^{(8)}_i$ stand for the New Physics contributions from dimension-6 and dimension-8 operators, respectively. The dependence on the scale $\Lambda$ is explicitly factorized in each term to emphasize their order in the EFT expansion.
In this paper, we are interested in the high-energy tails of the momentum-dependent distributions at the LHC. In this regime, only the energy-enhanced terms in Eq. (3.2) that are proportional to $E/\Lambda$ will be relevant, where $E=\sqrt{\hat s}$, while those scaling as powers of $v/\Lambda$ will be sub-dominant. There are three types of operators that directly contribute to the processes $\bar q_iq_j\to\ell_\alpha\bar\ell_\beta$ and $\bar u_id_j\to\ell_\alpha\bar\nu_\beta$ at tree level up to order $\mathcal O(1/\Lambda^4)$ in the EFT expansion:
• The semileptonic four-fermion operators in the classes $\psi^4$, $\psi^4H^2$ and $\psi^4D^2$;

• The Higgs-current operators in the classes $\psi^2H^2D$, $\psi^2H^4D$ and $\psi^2H^2D^3$;

• The dipole operators in the class $\psi^2XH$.
These operators are defined in Appendix F, with the $d=6$ ones listed in Table F.1 and the $d=8$ operators in Tables F.2 and F.3. The energy scaling of the New Physics amplitude at large $E$ is shown in Table 3.1 for each class of operators listed above, to be compared with the SM amplitude, which becomes constant for $E\gg v$. Up to dimension-6, the semileptonic four-fermion operators $\psi^4$ give the dominant contributions at large $E$, since they scale quadratically with the energy ($\propto E^2/\Lambda^2$). In particular, the chirality-conserving semileptonic operators of this type can also interfere with the SM contributions, giving rise to sizable effects. Dipole operators $\psi^2XH$ also induce energy-enhanced contributions at the amplitude level ($\propto vE/\Lambda^2$), but these are suppressed compared to the previous ones, since they only increase linearly with $E$ and do not interfere with the SM for massless fermions. Moreover, the contributions from Higgs-current operators $\psi^2H^2D$ do not increase with $E$, since they only modify the $W$- and $Z$-couplings, being mostly relevant at the $W$- and $Z$-poles [71,72]. The $d=8$ operators appear in Table 3.1 with an additional factor of either $v^2/\Lambda^2$ or $E^2/\Lambda^2$ with respect to the $d=6$ contributions described above. Since we are interested in the large-$E$ region, we will only keep in our numerical analyses the $d=8$ operators that display an energy enhancement with respect to the SM contributions.

Table 3.1: Scaling with energy $E$ of the New Physics amplitudes induced by each class of operators contributing to Drell-Yan production.

Dimension:          d = 6                                   d = 8
Operator classes:   ψ⁴        ψ²H²D     ψ²XH      |  ψ⁴D²      ψ⁴H²       ψ²H⁴D     ψ²H²D³
Amplitude scaling:  E²/Λ²     v²/Λ²     vE/Λ²     |  E⁴/Λ⁴     v²E²/Λ⁴    v⁴/Λ⁴     v²E²/Λ⁴
Besides the direct contributions to the Drell-Yan cross-sections, there can also be indirect contributions arising from the redefinition of the SM inputs by the SMEFT operators. This redefinition induces $\mathcal O(v^2/\Lambda^2)$ shifts to the SM contributions in Eqs. (2.12)-(2.14), depending on the chosen scheme for the electroweak parameters [73], which we take to be $\{\alpha_{\rm em},G_F,m_Z\}$. Examples of such operators are the Higgs-current operators $\mathcal O_{Hl}$ or the purely leptonic $\mathcal O_{ll}$, which can contribute to the muon decay for specific flavor indices, inducing a finite renormalization of $G_F$. Similar redefinitions are also needed in the flavor sector, since the Higgs-current and the semileptonic operators can induce finite shifts of the CKM parameters that are needed to compute the LHC processes [74]. However, these redefinitions of electroweak and flavor parameters do not lead to energy-enhanced effects at the LHC, and are thus negligible in our present analysis.
Lastly, we count the number of independent SMEFT parameters at mass dimension-6 and dimension-8 in Table 3.1. For this counting it is necessary to separate operators that can contribute to LHC processes including all three quark generations from operators that can contribute only to processes involving the two light quark generations, i.e. operators involving $SU(2)_L$-singlet up-type quarks ($u$), due to the negligible top-quark PDF. We find that there are 549 CP-even and 472 CP-odd parameters that can contribute at $d=6$ to the Drell-Yan processes. An additional 435 CP-even and 141 CP-odd parameters can contribute to these processes when $d=8$ operators are considered.
Form-factors in the SMEFT
In the SMEFT the Drell-Yan amplitude can be written as a perturbative expansion in the small parameters $\hat s/\Lambda^2$, $\hat t/\Lambda^2$ and $v^2/\Lambda^2$. This EFT expansion can be matched to Eq. (2.7) in order to determine the regular form-factor coefficients $\mathcal F_{I\,(n,m)}$. These are given by an infinite perturbative series in the parameter $v^2/\Lambda^2$ of the form

$$\mathcal F_{I\,(n,m)} = \sum_{d\,\geq\,2(n+m+3)}^{\infty} f^{(d)}_I\left(\frac{v}{\Lambda}\right)^{d-4}\,, \qquad (3.3)$$

where the $f^{(d)}_I$ correspond to linear combinations of $d$-dimensional Wilson coefficients. SMEFT operators of fixed dimension $d$ give rise to only a finite number of form-factor coefficients, according to Eq. (3.3). For example, $d=6$ operators only contribute to the leading coefficient $\mathcal F_{I\,(0,0)}$, while $d=8$ operators contribute to $\mathcal F_{I\,(0,0)}$ as well as to the next-order coefficients $\mathcal F_{I\,(1,0)}$ and $\mathcal F_{I\,(0,1)}$, and so on. In order to express the Drell-Yan form-factors in terms of the SMEFT Wilson coefficients, we truncate the power expansion of the regular form-factors $\mathcal F_{I,\rm Reg}$ in Eq. (2.7) at order $n+m\leq1$. The form-factor parametrization given in Sec. 2.2 can then be further simplified as
$$\mathcal F_S = \mathcal F_{S\,(0,0)}\,, \qquad (3.4)$$
$$\mathcal F_T = \mathcal F_{T\,(0,0)}\,, \qquad (3.5)$$
$$\mathcal F_V = \mathcal F_{V\,(0,0)} + \mathcal F_{V\,(1,0)}\,\frac{\hat s}{v^2} + \mathcal F_{V\,(0,1)}\,\frac{\hat t}{v^2} + \sum_a\frac{v^2\big[S_{(a,\rm SM)}+\delta S_{(a)}\big]}{\hat s-m_a^2+im_a\Gamma_a}\,, \qquad (3.6)$$
$$\mathcal F_{D_\ell} = \sum_a\frac{v^2\,S^{(a)}_{D_\ell}}{\hat s-m_a^2+im_a\Gamma_a}\,, \qquad (3.7)$$
$$\mathcal F_{D_q} = \sum_a\frac{v^2\,S^{(a)}_{D_q}}{\hat s-m_a^2+im_a\Gamma_a}\,, \qquad (3.8)$$
where $a\in\{\gamma,Z\}$ when describing the neutral Drell-Yan processes $\bar q_iq_j\to\ell^-_\alpha\ell^+_\beta$, and $a\in\{W^\pm\}$ when describing the charged Drell-Yan processes $\bar d_iu_j\to\ell^-_\alpha\bar\nu_\beta$ ($\bar u_id_j\to\ell^+_\alpha\nu_\beta$), and the $S_{(a,\rm SM)}$ are defined in Eqs. (2.12)-(2.14). This parametrization is enough to capture all possible effects to order $\mathcal O(1/\Lambda^4)$ in semileptonic transitions. Given that the scalar and tensor form-factors are independent of $\hat s$ and $\hat t$, the coefficients $\mathcal F_{S\,(0,0)}$ and $\mathcal F_{T\,(0,0)}$ map directly to the Wilson coefficients of the $d=6$ scalar and tensor operators in the class $\psi^4$, as shown in Appendix C.1. The dipole residues $S^{(a)}_{D_\ell}$ also match trivially to the $d=6$ SMEFT dipole operators in the class $\psi^2XH$, as shown in Appendix C.3.
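For illustration, the truncated parametrization of Eq. (3.6) can be evaluated numerically as in the sketch below; the contact coefficients and pole residues used here are arbitrary placeholders, not matched to any Wilson coefficients of this work.

```python
# Sketch of the truncated vector form-factor of Eq. (3.6) for the neutral
# current: contact terms F(0,0) + F(1,0)*shat/v^2 plus the photon and Z
# s-channel poles. All coefficient values below are placeholder numbers.
V2 = 246.0**2          # electroweak vev squared [GeV^2]
MZ, GZ = 91.19, 2.49   # Z-boson mass and width [GeV]

def FV(shat, F00=1e-3, F10=0.0, S_Z=0.5, S_gamma=0.3):
    pole_Z = V2 * S_Z / (shat - MZ**2 + 1j * MZ * GZ)
    pole_gamma = V2 * S_gamma / shat      # photon pole: massless, zero width
    return F00 + F10 * shat / V2 + pole_Z + pole_gamma

for sqrt_shat in (200.0, 1000.0, 3000.0):       # far above the Z pole the
    print(sqrt_shat, abs(FV(sqrt_shat**2)))     # pole terms fall off as 1/shat
```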
The regular coefficients and the pole residues of the vector form-factors, on the other hand, are generated at $d=6$ and $d=8$. The leading coefficient $\mathcal F_{V\,(0,0)}$ receives contributions from contact operators in the classes $\psi^4$ and $\psi^4H^2$ at $d=6$ and $d=8$, respectively, as well as from modified interactions between fermions and the SM gauge bosons from $d=8$ operators in class $\psi^2H^2D^3$. The higher-order coefficients $\mathcal F_{V\,(1,0)}$ and $\mathcal F_{V\,(0,1)}$ receive contributions from the $d=8$ operators in class $\psi^4D^2$. The pole residues $\delta S_{(a)}$ receive contributions from modified fermion interactions due to dimension-6 operators in class $\psi^2H^2D$ and from dimension-8 operators in class $\psi^2H^2D^3$. Schematically, the matching between SMEFT Wilson coefficients and the form-factors takes the following form:
$$\mathcal F_{V\,(0,0)} = \frac{v^2}{\Lambda^2}\,\mathcal C^{(6)}_{\psi^4} + \frac{v^4}{\Lambda^4}\,\mathcal C^{(8)}_{\psi^4H^2} + \frac{v^2m_a^2}{\Lambda^4}\,\mathcal C^{(8)}_{\psi^2H^2D^3} + \cdots\,, \qquad (3.9)$$
$$\mathcal F_{V\,(1,0)} = \frac{v^4}{\Lambda^4}\,\mathcal C^{(8)}_{\psi^4D^2} + \cdots\,, \qquad (3.10)$$
$$\mathcal F_{V\,(0,1)} = \frac{v^4}{\Lambda^4}\,\mathcal C^{(8)}_{\psi^4D^2} + \cdots\,, \qquad (3.11)$$
$$\delta S_{(a)} = \frac{m_a^2}{\Lambda^2}\,\mathcal C^{(6)}_{\psi^2H^2D} + \frac{v^2m_a^2}{\Lambda^4}\left(\big[\mathcal C^{(6)}_{\psi^2H^2D}\big]^2 + \mathcal C^{(8)}_{\psi^2H^4D}\right) + \frac{m_a^4}{\Lambda^4}\,\mathcal C^{(8)}_{\psi^2H^2D^3} + \cdots\,, \qquad (3.12)$$
where the squared term $\big[\mathcal C^{(6)}_{\psi^2H^2D}\big]^2$ denotes the quadratic contribution of the $d=6$ Higgs-current coefficients. Notice that the operators in the class $\psi^2H^2D^3$ contribute to $\mathcal F_{V\,(0,0)}$ and $\delta S_{(a)}$ simultaneously. This can be understood by analyzing one of the operators in this class. As an example we take $\mathcal O^{(1)}_{q^2H^2D^3} = (\bar q_i\gamma^\mu D^\nu q_j)\,D_{(\mu}D_{\nu)}H^\dagger H$, which, after spontaneous symmetry breaking, gives rise to a modified coupling between the $Z$ boson and quarks that is proportional to $(\hat s\,m_Zv/\Lambda^4)\,Z_\mu(\bar q_i\gamma^\mu q_j)$. This interaction contributes to neutral Drell-Yan production with an amplitude that scales as $\mathcal A(\bar q_iq_j\to\ell^-_\alpha\ell^+_\beta)\propto\hat s/(\hat s-m_Z^2)$. This amplitude can be brought to the form in Eq. (3.6) by using the partial-fraction decomposition in Eq. (2.10), which in diagrammatic form reads:

[Diagram: the $\hat s$-enhanced $Z$-boson exchange splits into a contact interaction plus $m_Z^2$ times the standard $Z$-boson propagator, i.e. $\hat s/(\hat s-m_Z^2) = 1 + m_Z^2/(\hat s-m_Z^2)$.]

The first (contact) diagram on the right-hand side of the equality corresponds to the last term in Eq. (3.9), while the second diagram corresponds to the last term in Eq. (3.12).
Concrete UV Mediators
In this Section, we discuss the effects of new bosonic states mediating Drell-Yan processes at tree level. These states can be classified in terms of their spin and SM quantum numbers, namely $(SU(3)_c,SU(2)_L,U(1)_Y)$ with $Q=Y+T_3$.
The possible semileptonic mediators are displayed in Table 3.2, where we also provide the relevant interaction Lagrangians with generic couplings in the last column. For completeness, we also allow for three right-handed neutrinos, denoted as $N_\alpha\sim(1,1,0)$, with $\alpha=1,2,3$. Furthermore, we assume that the masses of these SM singlets are negligible compared to the collider energies and, if produced, they escape detection as missing energy. The possible mediators fall into two broad categories, each with a different collider phenomenology: (i) color singlets exchanged in the s-channel, and (ii) leptoquarks, i.e. color triplets, exchanged in the t/u-channels. If the masses of these states are at the $\mathcal O({\rm TeV})$ scale, their propagators contribute to the residues $S^{(a)}_I$, $T^{(b)}_I$, $U^{(c)}_I$ of the pole form-factors in Eq. (2.8). Leptoquarks can be further classified using fermion number [56], defined as $F\equiv3B+L$, where $B$ ($L$) stands for baryon (lepton) number. For Drell-Yan production, the leptoquarks with fermion number $F=0$, namely $U_1$, $\tilde U_1$, $R_2$, $\tilde R_2$ and $U_3$, are exchanged in the t-channel, while the remaining leptoquarks $S_1$, $\tilde S_1$, $V_2$, $\tilde V_2$ and $S_3$, carrying fermion number $F=-2$, are exchanged in the u-channel. Note that certain leptoquark representations can also couple to diquark bilinears (not listed in Table 3.2) that pose a potential threat to the stability of the proton [75], unless a stabilizing symmetry is further introduced, see e.g. Ref. [76].
Table 3.2: Possible bosonic mediators contributing at tree level to Drell-Yan production, classified by their SM quantum numbers and spin. In the last column we provide the interaction Lagrangian, where $\epsilon\equiv i\tau_2$, $\psi^c\equiv i\gamma_2\gamma_0\bar\psi^T$, and $\tilde H=i\tau_2H^*$ is the conjugate Higgs doublet, with $\tau_i$ ($i=1,2,3$) the Pauli matrices. The right-handed fermion fields are defined as $u\equiv u_R$, $d\equiv d_R$, $e\equiv\ell_R$ and $N\equiv\nu_R$, and the left-handed fermion fields as $q\equiv(V^\dagger u_L,d_L)^T$ and $l\equiv(\nu_L,\ell_L)^T$. We adopt the notation from Refs. [55,56] for the leptoquark states.

$Z'$ | $(1,1,0)$ | spin 1 | $\mathcal L_{Z'} = \sum_\psi[g^\psi_1]_{ab}\,\bar\psi_a\slashed Z'\psi_b$, with $\psi\in\{u,d,e,q,l\}$
$\tilde Z$ | $(1,1,1)$ | spin 1 | $\mathcal L_{\tilde Z} = [\tilde g^q_1]_{ij}\,\bar u_i\slashed{\tilde Z}d_j + [\tilde g^\ell_1]_{\alpha\beta}\,\bar e_\alpha\slashed{\tilde Z}N_\beta$
$\Phi_{1,2}$ | $(1,2,1/2)$ | spin 0 | $\mathcal L_\Phi = \sum_{a=1,2}\big([y^{(a)}_u]_{ij}\,\bar q_iu_j\tilde H_a + [y^{(a)}_d]_{ij}\,\bar q_id_jH_a + [y^{(a)}_e]_{\alpha\beta}\,\bar l_\alpha e_\beta H_a\big) + \text{h.c.}$
$W'$ | $(1,3,0)$ | spin 1 | $\mathcal L_{W'} = [g^q_3]_{ij}\,\bar q_i(\tau^I\slashed W'^I)q_j + [g^l_3]_{\alpha\beta}\,\bar l_\alpha(\tau^I\slashed W'^I)l_\beta$
$S_1$ | $(3,1,1/3)$ | spin 0 | $\mathcal L_{S_1} = [y^L_1]_{i\alpha}\,S_1\,\bar q^{\,c}_i\epsilon\,l_\alpha + [y^R_1]_{i\alpha}\,S_1\,\bar u^{\,c}_ie_\alpha + [\bar y^R_1]_{i\alpha}\,S_1\,\bar d^{\,c}_iN_\alpha + \text{h.c.}$
$\tilde S_1$ | $(3,1,4/3)$ | spin 0 | $\mathcal L_{\tilde S_1} = [\tilde y^R_1]_{i\alpha}\,\tilde S_1\,\bar d^{\,c}_ie_\alpha + \text{h.c.}$
$U_1$ | $(3,1,2/3)$ | spin 1 | $\mathcal L_{U_1} = [x^L_1]_{i\alpha}\,\bar q_i\slashed U_1l_\alpha + [x^R_1]_{i\alpha}\,\bar d_i\slashed U_1e_\alpha + [\bar x^R_1]_{i\alpha}\,\bar u_i\slashed U_1N_\alpha + \text{h.c.}$
$\tilde U_1$ | $(3,1,5/3)$ | spin 1 | $\mathcal L_{\tilde U_1} = [\tilde x^R_1]_{i\alpha}\,\bar u_i\slashed{\tilde U}_1e_\alpha + \text{h.c.}$
$R_2$ | $(3,2,7/6)$ | spin 0 | $\mathcal L_{R_2} = -[y^L_2]_{i\alpha}\,\bar u_iR_2\,\epsilon\,l_\alpha + [y^R_2]_{i\alpha}\,\bar q_ie_\alpha R_2 + \text{h.c.}$
$\tilde R_2$ | $(3,2,1/6)$ | spin 0 | $\mathcal L_{\tilde R_2} = -[\tilde y^L_2]_{i\alpha}\,\bar d_i\tilde R_2\,\epsilon\,l_\alpha + [\tilde y^R_2]_{i\alpha}\,\bar q_iN_\alpha\tilde R_2 + \text{h.c.}$
$V_2$ | $(3,2,5/6)$ | spin 1 | $\mathcal L_{V_2} = [x^L_2]_{i\alpha}\,\bar d^{\,c}_i\slashed V_2\,\epsilon\,l_\alpha + [x^R_2]_{i\alpha}\,\bar q^{\,c}_i\,\epsilon\slashed V_2\,e_\alpha + \text{h.c.}$
$\tilde V_2$ | $(3,2,-1/6)$ | spin 1 | $\mathcal L_{\tilde V_2} = [\tilde x^L_2]_{i\alpha}\,\bar u^{\,c}_i\slashed{\tilde V}_2\,\epsilon\,l_\alpha + [\tilde x^R_2]_{i\alpha}\,\bar q^{\,c}_i\,\epsilon\slashed{\tilde V}_2\,N_\alpha + \text{h.c.}$
$S_3$ | $(3,3,1/3)$ | spin 0 | $\mathcal L_{S_3} = [y^L_3]_{i\alpha}\,\bar q^{\,c}_i\,\epsilon\,(\tau^IS^I_3)\,l_\alpha + \text{h.c.}$
$U_3$ | $(3,3,2/3)$ | spin 1 | $\mathcal L_{U_3} = [x^L_3]_{i\alpha}\,\bar q_i(\tau^I\slashed U^I_3)\,l_\alpha + \text{h.c.}$

After electroweak symmetry breaking, each of the non-trivial $SU(2)_L$ multiplets decomposes into physical eigenstates,

$$W' = \frac{1}{\sqrt2}\begin{pmatrix}W'^0 & \sqrt2\,W'^+\\ \sqrt2\,W'^- & -W'^0\end{pmatrix}, \qquad (3.13)$$

$$R_2 = \begin{pmatrix}R_2^{(+5/3)}\\ R_2^{(+2/3)}\end{pmatrix}, \quad \tilde R_2 = \begin{pmatrix}\tilde R_2^{(+2/3)}\\ \tilde R_2^{(-1/3)}\end{pmatrix}, \quad V_2 = \begin{pmatrix}V_2^{(+4/3)}\\ V_2^{(+1/3)}\end{pmatrix}, \quad \tilde V_2 = \begin{pmatrix}\tilde V_2^{(+1/3)}\\ \tilde V_2^{(-2/3)}\end{pmatrix}, \qquad (3.14)$$

$$S_3 = \frac{1}{\sqrt2}\begin{pmatrix}S_3^{(-1/3)} & \sqrt2\,S_3^{(+2/3)}\\ \sqrt2\,S_3^{(-4/3)} & -S_3^{(-1/3)}\end{pmatrix}, \quad U_3 = \frac{1}{\sqrt2}\begin{pmatrix}U_3^{(+2/3)} & \sqrt2\,U_3^{(+5/3)}\\ \sqrt2\,U_3^{(-1/3)} & -U_3^{(+2/3)}\end{pmatrix}, \qquad (3.15)$$
[Figure 3.2: Contributions to the dilepton transitions $\bar dd\to\ell^-\ell^+$ (upper row), $\bar uu\to\ell^-\ell^+$ (middle row) and $\bar ud\to\ell^\pm\nu$ (lower row) from the tree-level exchange of the BSM mediators displayed in Table 3.2, via the s-channel (left column), t-channel (middle column) and u-channel (right column).]

where the superscripts denote the electric charge of each component. The case of an additional Higgs doublet is to be treated separately, as both doublets $H_{1,2}$ can acquire a vacuum expectation value, with the identification of the SM-like Higgs boson ($h$) depending on the parameters of the scalar potential. In this case, it is also fundamental to devise a mechanism that suppresses scalar-mediated FCNCs to make these models phenomenologically viable [77], such as a $Z_2$ symmetry [78], or assumptions regarding the alignment of the Yukawa and fermion-mass matrices [79,80]. Here, we consider a parametrization of the Lagrangian in terms of the mass eigenstates,
$$\mathcal L_\Phi \supset -\sum_{f=u,d,\ell}\sum_{i,j}\Big([\xi^f_h]_{ij}\,\bar f_if_j\,h + [\xi^f_H]_{ij}\,\bar f_if_j\,H - i\,[\xi^f_A]_{ij}\,\bar f_i\gamma_5f_j\,A\Big) - \sum_{X=L,R}\sum_{i,j}\Big([\xi^{q_X}_{H^+}]_{ij}\,\bar u_iP_Xd_j + [\xi^{\ell_R}_{H^+}]_{ij}\,\bar\nu_iP_R\ell_j + [\xi^{\ell_L}_{H^+}]_{ij}\,\bar N_iP_L\ell_j\Big)H^+ + \text{h.c.}\,, \qquad (3.16)$$
with flavor indices $i,j$ and generic couplings $\xi^f_\varphi$ that can be easily matched to the models of interest [77]. In the above equation, $H^\pm$ denotes the charged scalar, $h$ is the SM-like Higgs, $A$ the neutral CP-odd scalar and $H$ the heavy neutral CP-even scalar. The contributions of each of these mediators to Drell-Yan production can be found in the Feynman diagrams depicted in Fig. 3.2.
Form-factors in concrete BSM models
The complete matching of the pole form-factors to concrete models is given in Appendix D, where the flavor structure of the residues of the pole form-factors takes the following form:

$$[S^{(a)}_I]_{\alpha\beta ij} = [g^*_a]_{\alpha\beta}\,[g^*_a]_{ij}\,, \qquad (3.17)$$
$$[T^{(b)}_I]_{\alpha\beta ij} = [g^*_b]_{i\alpha}\,[g^*_b]_{j\beta}\,, \qquad (3.18)$$
$$[U^{(c)}_I]_{\alpha\beta ij} = [g^*_c]_{i\beta}\,[g^*_c]_{j\alpha}\,, \qquad (3.19)$$

for $I\in\{V,S,T\}$, where $g^*_{a,b,c}$ denote generic couplings of the mediators to fermions of a given chirality, and each index $a$, $b$, $c$ labels the possible mediators contributing to the s-, t- and u-channels, respectively, as displayed in Fig. 3.2.
Other BSM scenarios
We have already discussed Drell-Yan production both in the SMEFT and in simplified models. These two cases cover the most common tree-level BSM scenarios but leave out some potentially interesting possibilities. For instance, one can extend the SM matter content with light right-handed fermion singlets $N_\alpha\sim(1,1,0)$ and build an effective field theory known as the $\nu$SMEFT [81,82]. At $d=6$, the resulting semileptonic four-fermion operators [83] are given by
$$\mathcal O_{eNud} = (\bar e\gamma^\mu N)(\bar u\gamma_\mu d)\,, \qquad (3.20)$$
$$\mathcal O_{lNuq} = (\bar lN)(\bar uq)\,, \qquad (3.21)$$
$$\mathcal O^{(1)}_{lNqd} = (\bar lN)\,\epsilon\,(\bar qd)\,, \qquad (3.22)$$
$$\mathcal O^{(3)}_{lNqd} = (\bar l\sigma_{\mu\nu}N)\,\epsilon\,(\bar q\sigma^{\mu\nu}d)\,, \qquad (3.23)$$
where flavor indices are omitted. These new local interactions only contribute to the charged-current Drell-Yan process $pp\to\ell^\pm_\alpha N_\beta$, without interfering with the SM. Higher-order operators at dimension-7 or above can be dropped, since these only contribute to the production cross-section starting at order $\mathcal O(1/\Lambda^6)$. The mapping of the Wilson coefficients to the leading regular form-factors is provided in Appendix E. Notice that the operators in Eqs. (3.20)-(3.23) can be generated at tree level by integrating out any of the heavy bosonic mediators coupling to $N_\alpha$ in Table 3.2.
Collider Limits
The detectors at hadron colliders are complex and imperfect environments with a finite experimental resolution and limited acceptance. When dealing with dilepton and monolepton searches at the LHC, differential distributions are measured from the reconstructed four-momenta of high-level objects such as isolated leptons, $\tau$-tagged jets, and missing transverse energy $\slashed E_T$. These objects are meant to approximate the underlying final-state leptons produced in the hard scattering. The theoretical predictions, on the other hand, are typically computed from the experimentally inaccessible final-state leptons. This mismatch between the predicted distribution of a particle-level observable $x$ and the measured distribution of the corresponding observable $x_{\rm obs}$ is described by the convolution
$$\frac{d\sigma}{dx_{\rm obs}} = \int dx\;K(x_{\rm obs}|x)\,\frac{d\sigma}{dx}\,, \qquad (4.1)$$
where $K(x_{\rm obs}|x)$ is a kernel function that parametrizes the detector response [84].

Table 4.1: LHC run-II searches recast in this work, as implemented in HighPT [48]. The last two columns refer to the measured and particle-level observables considered in our analyses. The data used in the present work corresponds to $x_{\rm obs}\geq200$ GeV for all observables.

Process | Experiment | Luminosity | Ref. | $x_{\rm obs}$ | $x$
$pp\to\tau\tau$ | ATLAS | 139 fb$^{-1}$ | [85] | $m_T^{\rm tot}(\tau_h^1,\tau_h^2,\slashed E_T)$ | $m_{\tau\tau}$
$pp\to\mu\mu$ | CMS | 140 fb$^{-1}$ | [86] | $m_{\mu\mu}$ | $m_{\mu\mu}$
$pp\to ee$ | CMS | 137 fb$^{-1}$ | [86] | $m_{ee}$ | $m_{ee}$
$pp\to\tau\nu$ | ATLAS | 139 fb$^{-1}$ | [87] | $m_T(\tau_h,\slashed E_T)$ | $p_T(\tau)$
$pp\to\mu\nu$ | ATLAS | 139 fb$^{-1}$ | [88] | $m_T(\mu,\slashed E_T)$ | $p_T(\mu)$
$pp\to e\nu$ | ATLAS | 139 fb$^{-1}$ | [88] | $m_T(e,\slashed E_T)$ | $p_T(e)$
$pp\to\tau\mu$ | CMS | 138 fb$^{-1}$ | [89] | $m^{\rm col}_{\tau_h\mu}$ | $m_{\tau\mu}$
$pp\to\tau e$ | CMS | 138 fb$^{-1}$ | [89] | $m^{\rm col}_{\tau_he}$ | $m_{\tau e}$
$pp\to\mu e$ | CMS | 138 fb$^{-1}$ | [89] | $m_{\mu e}$ | $m_{\mu e}$

In practice, for a given LHC search, both the measured and the particle-level distributions are binned into histograms, leading to the discretization of Eq. (4.1). For a binning $\mathcal A$ of $x_{\rm obs}$ and $\mathcal B$ of $x$, the expected number of signal events $N_A$ in a bin $A\in\mathcal A$ is given by

$$N_A = \sum_{B\in\mathcal B}\mathcal L_{\rm int}\cdot K_{AB}\cdot\sigma_B\,, \qquad (4.2)$$
where $\mathcal L_{\rm int}$ is the integrated luminosity used in the search, $\sigma_B$ is the particle-level cross-section restricted to a bin $B\in\mathcal B$, and $K$ is an $N\times M$ response matrix, where $N$ and $M$ are the numbers of bins in $\mathcal A$ and $\mathcal B$, respectively. The response matrix represents the probability that an event produced in a bin $B$ of $x$ passes all event selections of the search and is measured in a bin $A$ of $x_{\rm obs}$. When estimating the event yields $N_A$ of a BSM signal, each independent term contributing to the computation of the cross-section $\sigma_B$ (see e.g. Eq. (2.25)) needs to be convoluted with a different $K_{AB}$ matrix, since each term can respond differently to the selection cuts and the detector. Therefore, in full generality, the response matrices entering Eq. (4.2) depend on the chiralities $\{X,Y\}$ and flavors $\{\alpha,\beta,i,j\}$ of the external leptons and quarks, as well as on the shape of the New Physics, i.e. on the regular and pole form-factors that are involved. It is clearly not possible to compute the entries of each response matrix from first principles. These must be estimated numerically for each LHC search using Monte Carlo event generators and detector simulators.
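The folding in Eq. (4.2) is just a matrix-vector product; the toy numpy sketch below illustrates it with placeholder numbers (none of these values are taken from the searches in Table 4.1).

```python
# Minimal numpy sketch of the folding in Eq. (4.2): particle-level bin
# cross-sections sigma_B (in fb) are mapped to expected event yields N_A
# through a response matrix K_AB and the integrated luminosity.
# All numbers here are placeholders, not taken from the paper.
import numpy as np

lumi_fb = 139.0                                 # integrated luminosity [fb^-1]
sigma_B = np.array([1.2e-1, 4.0e-2, 8.0e-3])    # particle-level bins [fb]
K = np.array([[0.55, 0.05, 0.00],               # K[A, B]: prob. that an event
              [0.10, 0.50, 0.04],               # from particle-level bin B is
              [0.00, 0.08, 0.45]])              # reconstructed in bin A
N_A = lumi_fb * K @ sigma_B
print(N_A)
```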
Dilepton and monolepton searches at the LHC
The experimental searches considered in our analysis are collected in Table 4.1. These correspond to data sets from the full run-II ATLAS and CMS searches for heavy resonances in dilepton and monolepton production at the LHC. In the last two columns we display the tail observables measured in each search ($x_{\rm obs}$), which serve as proxies for the particle-level observables ($x$) used to compute the signal cross-sections. Specific details concerning the definition of the measured observables, the selection cuts and any other inputs used in these experimental analyses are available in the respective ATLAS and CMS papers listed in Table 4.1.
Limits on the SMEFT and on mediator models are extracted with the HighPT package [48], where each Drell-Yan search has been repurposed for generic New Physics scenarios. For each signal hypothesis we compute the 95% confidence intervals using Pearson's $\chi^2$ test statistic. In order to obtain reliable limits, we first combine the data of neighbouring experimental bins until $N_{\rm obs}\geq10$ in every bin, where $N_{\rm obs}$ is the number of observed events (background errors are added in quadrature when combining).
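A minimal sketch of this bin-merging rule is shown below (an illustration of the procedure described above, not the HighPT implementation).

```python
# Sketch of the bin-merging rule: neighbouring bins are combined until every
# merged bin has N_obs >= threshold, adding background errors in quadrature.
import numpy as np

def merge_bins(n_obs, n_bkg, d_bkg, threshold=10):
    N, B, E2 = [], [], []
    acc_n = acc_b = acc_e2 = 0.0
    for n, b, e in zip(n_obs, n_bkg, d_bkg):
        acc_n += n; acc_b += b; acc_e2 += e**2
        if acc_n >= threshold:
            N.append(acc_n); B.append(acc_b); E2.append(acc_e2)
            acc_n = acc_b = acc_e2 = 0.0
    if acc_n > 0:
        if N:                 # fold any leftover into the last merged bin
            N[-1] += acc_n; B[-1] += acc_b; E2[-1] += acc_e2
        else:                 # nothing merged yet: keep as a single bin
            N.append(acc_n); B.append(acc_b); E2.append(acc_e2)
    return np.array(N), np.array(B), np.sqrt(E2)

print(merge_bins([30, 8, 4, 2, 1], [28, 7, 5, 2, 1], [3, 1, 1, 0.5, 0.5]))
```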
Internally, for each search in Table 4.1, HighPT extracts the number of signal events $N_A(\theta)$ in a bin $A\in\mathcal A$ of $x_{\rm obs}$ by convoluting the relevant response matrix $K_{AB}$ with the analytical expressions for $\sigma_B$. These are computed with the PDF set PDF4LHC15_nnlo_mc [90]. We denote by $\theta$ the parameters of the New Physics model that we wish to constrain, e.g. form-factors or specific model parameters such as Wilson coefficients, or mediator masses and couplings. The $\chi^2$ is then built from the number of background events $N^b$, the background uncertainties $\delta N^b$ and the observed events $N^{\rm obs}$ provided by the experimental collaborations:
$$\chi^2(\theta) = \sum_{A\in\mathcal A}\left(\frac{N_A(\theta)+N^b_A-N^{\rm obs}_A}{\Delta_A}\right)^2\,, \qquad (4.3)$$
where the uncertainty $\Delta_A$ in bin $A$ is obtained by adding in quadrature the background and observed uncertainties, $\Delta_A^2 = (\delta N^b_A)^2 + N^{\rm obs}_A$, where the last term corresponds to the Poissonian uncertainty of the data. Notice that we consider the pure SM contribution as a background included in $N^b_A$. Thus, $N_A(\theta)$ contains only the New Physics contribution to the event yield, i.e. both the New Physics squared contribution and the interference of the New Physics with the SM. The background predictions $N^b_A$ are taken from the experimental analyses, which include higher-order corrections. Only the New Physics signal $N_A(\theta)$ is computed at tree level using our form-factor decomposition presented in Sec. 2.
The response matrices $K_{AB}$ are provided in the HighPT package for each LHC search. These were obtained from Monte Carlo simulations using the following pipeline: first, all relevant operators in the SMEFT with $d\leq8$ and all mediator Lagrangians in Table 3.2 were implemented in FeynRules [91]. The resulting UFO model files [92] were then imported into MadGraph5 [93] and used to simulate statistically significant event samples for the dilepton and monolepton processes with all possible initial quark flavors. The samples were then showered and hadronized using Pythia8 [94], and the final-state object reconstruction and detector simulation were performed using Delphes3 [95], tuned to match the experimental searches. After applying the same event selections as in each experiment, the events were binned into $x_{\rm obs}$ histograms. The simulation pipeline outlined above was used to produce an $x_{\rm obs}$ histogram for each bin $B\in\mathcal B$ of $x$; the rows of the matrix $K_{AB}$ were then extracted from the resulting histograms. The validation of our recast against the BSM benchmark scenarios provided in the experimental papers can be found in Appendix A.
The $\chi^2$ statistic described above is reliable whenever the numbers of observed, signal and background events in each bin are approximately Gaussian, which is usually the case for the bulk of the dilepton and monolepton differential distributions. Deep in the tails, however, the experimental bins typically contain very low event yields, making these regions much more susceptible to data fluctuations. Indeed, if the tails are plagued by sizeable under-fluctuations, testing a signal hypothesis that relies on these low-sensitivity regions can lead to spuriously strong exclusion limits on the New Physics parameters $\theta$. To remedy this, one can instead use the CLs method [96] based on the profiled Poissonian likelihood ratio [97]. We checked for the particular binnings used in our analyses that the two statistical methods agree very well; for an explicit comparison see Ref. [48]. For this reason, in the following Sections we present all exclusion limits based on the $\chi^2$ statistic.
Single-operator limits on the dimension-6 SMEFT
In this Section we present upper bounds on the dimension-6 SMEFT operators, using LHC run-II data from the $pp\to\ell^-_\alpha\ell^+_\beta$ and $pp\to\ell^\pm_\alpha\nu_\beta$ Drell-Yan searches listed in Table 4.1. Single-parameter limits are extracted for individual Wilson coefficients by assuming them, for simplicity, to be real parameters and by setting all other coefficients to zero. Since we are interested in the tails of the Drell-Yan distributions, we only focus on four-fermion semileptonic operators, as well as the quark and lepton dipole operators, because these are the only ones that lead to an energy growth of the amplitude. Our limits are derived by keeping the $\mathcal O(1/\Lambda^4)$ corrections from the dimension-6 squared pieces. Indeed, these are the lowest-order contributions driving the bounds for the dipole operators and for the four-fermion operators that are chirality-flipping or have a non-diagonal flavor structure, i.e. any operator not interfering with the SM. For the flavor sector we assume flavor alignment in the down sector for the CKM matrix and a unit PMNS matrix. The results are presented in seven figures in Appendix G for all semileptonic operators with fixed leptonic indices, namely $ee$, $\mu\mu$, $\tau\tau$, $e\mu$, $e\tau$ and $\mu\tau$, and for all possible quark-flavor indices that can be probed at the LHC. All limits on the Wilson coefficients ($\mathcal C$) are given at 95% CL at a fixed reference scale $\Lambda=1$ TeV, with no loss of generality, since in the EFT limit the LHC processes probe the combination $\mathcal C/\Lambda^2$.
To simplify the discussion of our results, we have replicated in Fig. 4.1 the LHC constraints for the following representative semileptonic operators,
$$[\mathcal O^{(1)}_{lq}]_{\alpha\beta ij} = (\bar l_\alpha\gamma^\mu l_\beta)(\bar q_i\gamma_\mu q_j)\,, \qquad [\mathcal O^{(3)}_{lq}]_{\alpha\beta ij} = (\bar l_\alpha\gamma^\mu\tau^Il_\beta)(\bar q_i\gamma_\mu\tau^Iq_j)\,, \qquad (4.4)$$
and, similarly, in Fig. 4.2 for the dipole operators,
$$[\mathcal O_{dB}]_{ij} = (\bar q_i\sigma^{\mu\nu}d_j)\,HB_{\mu\nu}\,, \qquad [\mathcal O_{dW}]_{ij} = (\bar q_i\sigma^{\mu\nu}d_j)\,\tau^IHW^I_{\mu\nu}\,,$$
$$[\mathcal O_{uB}]_{ij} = (\bar q_i\sigma^{\mu\nu}u_j)\,\tilde HB_{\mu\nu}\,, \qquad [\mathcal O_{uW}]_{ij} = (\bar q_i\sigma^{\mu\nu}u_j)\,\tau^I\tilde HW^I_{\mu\nu}\,, \qquad (4.5)$$
$$[\mathcal O_{eB}]_{\alpha\beta} = (\bar l_\alpha\sigma^{\mu\nu}e_\beta)\,HB_{\mu\nu}\,, \qquad [\mathcal O_{eW}]_{\alpha\beta} = (\bar l_\alpha\sigma^{\mu\nu}e_\beta)\,\tau^IHW^I_{\mu\nu}\,,$$
where we keep the most general flavor structure. Note, in particular, that the interference of the singlet operator $\mathcal O^{(1)}_{lq}$ with the SM has opposite sign for up-type and down-type quarks, leading to an approximate cancellation of the interference term in this case. This explains why the constraints on the coefficient $\mathcal C^{(3)}_{lq}$ are considerably more stringent than those on $\mathcal C^{(1)}_{lq}$, as can be seen by comparing the plots in the first row of Fig. 4.1 with the third row. For the Lepton Flavor Violating (LFV) couplings, the constraints for both operator types are of the same order of magnitude, as in this case the interference terms vanish and the constraints derive only from the New Physics squared contribution.
Before discussing the remaining results, it is worth mentioning that, while the single-parameter fits provide a useful benchmark leading to the most optimistic limits, these analyses completely neglect potential correlations between different Wilson coefficients that could substantially relax the constraints or even lead to flat directions in the parameter space. For this reason it would be preferable to perform a global fit, or at least a multi-parameter analysis involving the most relevant effective operators. While this lies outside the intended scope of this paper, in Sec. 5 we carry out a two-parameter analysis fitting pairs of ultraviolet-motivated operators to Drell-Yan data, as well as combined fits to low-energy flavor and electroweak pole data. We leave for future work a more thorough multi-parameter analysis of the SMEFT with all three generations and different flavor-structure hypotheses [98]. Note, however, that the full LHC likelihood for the $d=6$ SMEFT truncated at $\mathcal O(1/\Lambda^4)$, including all 1021 flavored Wilson coefficients (see Table 3.1), is promptly available in HighPT, which also allows one to include dimension-8 coefficients.

First, we test the sensitivity of the constraints on the EFT operators to individual bins in the invariant-mass tails. For this purpose, we build the jack-knife function [36,99], $\chi^2_{\rm Jack}(m_{\mu\mu})$, defined as the statistic that results from holding out a single invariant-mass bin at a time from the total $\chi^2$ function. In Fig. 4.3, we show the quantity $R_{\rm Jack}$, defined as the ratio of the expected jack-knife limit over the expected limit extracted using the whole invariant-mass spectrum, for different quark flavors. For the semileptonic operators (upper row), when truncating the EFT at $\mathcal O(1/\Lambda^2)$ (dashed lines), we see that for first-generation valence quarks the last experimental bin, spanning $[2300,7000]$ GeV, is the most sensitive one in the search, while for second- and third-generation quarks the most sensitive bins are found at lower invariant masses, around 1000 GeV and 800 GeV, respectively. On the other hand, when truncating at $\mathcal O(1/\Lambda^4)$ (solid lines), the most sensitive bin is always the highest-mass bin of the search, irrespective of the quark flavor. Furthermore, we note that the last bin is less relevant for second- and third-generation quarks, whereas for valence quarks removing this bin can weaken the limits by as much as $\sim30\%$. From these results, we find that truncating the EFT at order $\mathcal O(1/\Lambda^4)$ for semileptonic vector operators has a significant impact when setting limits on single operators, and that the highest-mass bins are the most sensitive ones in the dimuon search. This is not entirely surprising, given that at high energies the quadratic terms growing as $\hat s^2/\Lambda^4$ in the partonic cross-section can compete with the interference term $\propto\hat s/\Lambda^2$, which is typically more important at lower invariant masses. For the dipole operators, even though these enter purely as New Physics squared terms, the energy enhancement scales as $v^2\hat s/\Lambda^4$. The sensitivity of the search is therefore concentrated at lower invariant-mass bins, similar to the four-fermion interference terms at $\mathcal O(1/\Lambda^2)$, as shown in the second row of Fig. 4.3.
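The jack-knife construction can be sketched in a few lines; the toy below assumes a $\chi^2$ that is quadratic in the Wilson coefficient (the $\mathcal O(1/\Lambda^2)$ truncation) and uses made-up per-bin sensitivities to show how $R_{\rm Jack}$ flags the dominant bins.

```python
# Sketch of the jack-knife statistic behind Fig. 4.3: chi2_Jack is the chi^2
# rebuilt with one invariant-mass bin held out; R_Jack compares the resulting
# expected limit to the one from the full spectrum. The per-bin sensitivities
# S_bins below are placeholder numbers.
import numpy as np

def expected_limit(chi2_per_unit_c2):
    # for chi^2 = c^2 * sum(S), the 95% CL limit is c_lim = sqrt(3.84 / sum(S))
    return np.sqrt(3.84 / np.sum(chi2_per_unit_c2))

S_bins = np.array([0.5, 1.2, 2.0, 0.8, 0.1])   # per-bin sensitivities (made up)
full = expected_limit(S_bins)
for k in range(len(S_bins)):
    jack = expected_limit(np.delete(S_bins, k))
    print(f"hold out bin {k}: R_Jack = {jack / full:.3f}")
```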
Next, we investigate how the limits on single operators are affected by restricting the Drell-Yan data in the tails with a maximal energy cut ($M_{\rm cut}$) [100]. This procedure, known as clipping [101], provides a useful way to obtain more robust EFT constraints. In Fig. 4.4 we show the expected limits for the vector operator (upper row) and the quark-dipole operator as functions of the sliding upper cut $M_{\rm cut}>m_{\mu\mu}$. The results are given for an EFT truncation at $\mathcal O(1/\Lambda^2)$ (dashed lines) and $\mathcal O(1/\Lambda^4)$ (solid lines). Here again we can appreciate the relevance of including $\mathcal O(1/\Lambda^4)$ corrections for the vector operators. Irrespective of the truncation order and the operator, these results also indicate that the clipped limits saturate for maximal values above $M_{\rm cut}\sim1.5$ TeV, leading to upper limits that are comparable to those obtained when using the full kinematic spectrum.
Flavor dependence
We now discuss the constraints on the SMEFT coefficients shown in Fig. 4.1 from the perspective of flavor. For fixed leptonic flavors, as expected, we find that the most constrained coefficients are the ones involving valence quarks, but useful constraints are also obtained for operators involving the heavier $s$-, $c$- and $b$-quarks, despite their PDF suppression. Overall, the upper limits for $\mathcal O^{(1,3)}_{lq}$ and the quark dipoles for different $(i,j)$ indices follow approximately the expected hierarchies between the parton-parton luminosity functions $\mathcal L_{ij}$ given in Eq. (2.24). For fixed quark-flavor indices, we find comparable constraints for the $e$ and $\mu$ channels, with much weaker constraints for $\tau$'s. Similar patterns are also observed for the other semileptonic and quark-dipole coefficients that are collected in Appendix G. For the leptonic dipoles $\mathcal O_{eB}$, the limits for different lepton indices, which are all driven by valence-quark production, are determined by the experimental sensitivities of each LHC search.
The constraints on the vector semileptonic operators with flavor-diagonal indices i = j and α = β are not symmetric due to the interplay of interference and New Physics squared terms. The skewness of the limits towards a specific sign will depend on the relative size between these contributions, as well as on the size and sign of the fluctuations in the observed data in the most sensitive regions of the tails. 16 As can be seen in the first three rows in Fig. 4.1, the upper limits derived for the operators with first generation quarks are much more stringent for negative (positive) values of C On the quark-flavor alignment While in the SM the only source of quark-flavor violation is the CKM matrix, this is generally no longer true in the presence of BSM Physics. In other words, the assumption concerning the flavor basis for quarks is fundamental, as a given operator can simultaneously contribute to several partonic processes depending on this choice, having a significant impact on the fits using LHC observables. The minimalistic approach is to consider right-handed fermions in the mass basis, and to assume the alignment between flavor and mass eigenstates for either left-handed up-or down-type quarks, with the CKM matrix appearing in the down-or up-type sectors, respectively. Scenarios with up-type alignment turn out to be tightly constrained by ∆F = 1 and ∆F = 2 processes in the K-and B-meson sectors, which can be induced in this case via the CKM matrix [1]. The down-type alignment is far less constrained by low-energy observable and it is typically a more convenient choice for phenomenology, as we have considered e.g. in Fig. 4.1 and Appendix G.
We now explore the Drell-Yan processes induced via the CKM matrix for the operators that involve quark doublets, both for the up- and down-type alignment. We expect this effect to be prominent for SMEFT operators with second-generation quark indices.¹⁷ In these scenarios, the contribution of first-generation valence quarks is only mildly Cabibbo suppressed, $O(\lambda)$, and can thus compete with, and even dominate over, the direct contribution from the sea quarks. The effect of the alignment on fits of operators with first- or third-generation quarks is much less pronounced, because the leading contributions already come from the valence quarks, or the Cabibbo suppression is stronger, $O(\lambda^3)$.

¹⁶ The number of signal events, when turning on one (real) Wilson coefficient $C$ at a time, is given by $N(\hat C) = N_{\rm int}\,\hat C + N_{\rm NP}\,\hat C^2$, where $\hat C \equiv C/\Lambda^2$ is the New Physics (NP) parameter, and $N_{\rm int}$ and $N_{\rm NP}$ are the SMEFT signal yields at $|C|/\Lambda^2 = 1~{\rm TeV}^{-2}$ for the SM-NP interference and NP squared terms, respectively. It is convenient to write the $\chi^2$ function for the expected data in a specific bin as $\chi^2(\hat C) = \chi^2_{\rm int}(\hat C)\,\big[1 + (N_{\rm NP}/N_{\rm int})\,\hat C\big]^2$, where $\chi^2_{\rm int} \propto \hat C^2$ is the expected $\chi^2$ function when truncating the EFT expansion at $O(1/\Lambda^2)$ and the bracket contains the effects of $O(1/\Lambda^4)$ corrections from the dimension-6 squared terms. Since $\chi^2_{\rm int}$ is a symmetric function, the bracket above skews the expected upper limits. For observed limits, there is an additional asymmetry caused by the statistical fluctuations in the data.
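The skewing mechanism described in the footnote can be made explicit with a short numerical toy example (all yields hypothetical):

```python
import numpy as np

# Toy one-bin illustration of the chi2 skewing from dimension-6 squared terms.
N_int, N_NP, sigma = 25.0, 60.0, 12.0     # hypothetical yields and uncertainty

def chi2(c):
    # chi2(C) = chi2_int(C) * [1 + (N_NP/N_int) * C]^2 for expected data
    return ((N_int * c + N_NP * c**2) / sigma) ** 2

grid = np.linspace(-1.5, 1.5, 30001)
mask = chi2(grid) < 3.84                  # 95% CL, one parameter
print(f"allowed C range: [{grid[mask].min():.3f}, {grid[mask].max():.3f}]")
# The allowed region is asymmetric (and can even split into two islands):
# for C < 0 the interference partially cancels the squared term, so the
# limit on the negative side is weaker when N_int > 0.
```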
As an example, we investigate the Wilson coefficients $[C^{(1)}_{lq}]_{22ij}$ and $[C^{(3)}_{lq}]_{22ij}$ for all possible combinations of quark-flavor indices $(i,j)$. We checked that similar results are obtained for other choices of lepton-flavor indices. Our results are presented in Fig. 4.5, where we show the 95% confidence regions for three different scenarios:
• Assuming a (fictitious) diagonal CKM matrix $V_{\rm CKM} = \mathbb{1}_{3\times3}$ (black dashed lines), only sea quarks can contribute to operators with second and third generation quarks. The constraints obtained in this scenario are weaker than those obtained considering a non-diagonal $V_{\rm CKM}$, due to the missing valence-quark contributions.

• Considering the CKM matrix and assuming down alignment (blue region), processes with up quarks can also contribute to the constraints on operators with charm quarks. The contribution of the former is suppressed by $O(\lambda)$, but this effect can be compensated by the enhancement of the up PDF with respect to the charm PDF. The constraints obtained this way are therefore stronger than those obtained with a diagonal $V_{\rm CKM}$. Furthermore, the confidence regions can be shifted with respect to the previous scenario, due to the additional interference of the New Physics contribution with the SM contribution involving valence quarks.

• Considering the CKM matrix and assuming up alignment (yellow region), operators with strange quarks receive contributions induced by the down-quark PDF, leading to a behaviour analogous to the down-aligned scenario. However, since the ratio of the down- to strange-quark PDFs is smaller than that of the up- to charm-quark PDFs, the constraints obtained in this scenario are weaker than in the down-aligned case.
For operators with only first-generation quarks the alignment has a negligible effect, since the leading contribution to the constraints is already obtained from the valence quarks, see e.g. the top left plot in Fig. 4.5. For operators involving only third-generation quarks, a diagonal CKM allows $b\bar b$-initiated production alone, which constrains just the combination $C^{(1)}_{lq}+C^{(3)}_{lq}$ of Eq. (4.7) and therefore leaves a flat direction. In the case of down alignment there is a non-vanishing top-quark contribution due to CKM rotations, breaking the flat direction. The bounds obtained this way are, however, quite weak.
In the analysis presented in this Section we restricted ourselves to the choices of up and down alignment or a diagonal CKM. However, our results are derived with HighPT, which also allows one to define an arbitrary alignment of the mass and flavor bases.
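The alignment choices discussed above amount to simple unitary rotations of the Wilson-coefficient flavor blocks (cf. Appendix B.2). A minimal sketch, assuming a crude real CKM approximation and an arbitrary coefficient choice:

```python
import numpy as np

# Quark-flavor alignment rotations applied to one Wilson-coefficient block.
lam = 0.225
V = np.array([[1 - lam**2 / 2,  lam,            0.004],
              [-lam,            1 - lam**2 / 2, 0.042],
              [0.008,          -0.040,          1.0  ]])  # real CKM approximation

C = np.zeros((3, 3))
C[1, 1] = 1.0   # e.g. a coefficient switched on for second-generation quarks

# Down alignment (V_u = V^dagger, V_d = 1): the up-quark block is rotated.
C_uu = V.conj().T @ C @ V
# Up alignment (V_u = 1, V_d = V): the down-quark block is rotated instead.
C_dd = V @ C @ V.conj().T

print("uc entry (down alignment):", C_uu[0, 1])  # O(lambda): valence-u mode
print("uu entry (down alignment):", C_uu[0, 0])  # O(lambda^2)
print("ds entry (up alignment):  ", C_dd[0, 1])  # O(lambda)
```

The $O(\lambda)$ off-diagonal entries generated by the rotation are precisely the mildly Cabibbo-suppressed valence-quark contributions that can dominate over the direct sea-quark ones.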
Impact of dimension-8 effects
Up to now, we have neglected the effect of dimension-8 operators that contribute to the EFT truncation at $O(1/\Lambda^4)$. The impact of these higher-order corrections in Drell-Yan has recently been discussed in Refs. [44-46] for searches with light leptons, without including detector effects. These works have shown that including the $d=8$ contributions can have a significant impact when fitting SMEFT operators to Drell-Yan data at the LHC. In this Section we extend these analyses to generic flavor structures using the latest run-II LHC dilepton and monolepton searches. We present exclusion limits on the leading regular vector form-factor coefficient $F_{V(0,0)}$ in the presence and absence of dimension-8 effects.
For concreteness, we focus on the left-handed form-factor coefficients $F^{LL,qq}_{V(0,0)}$. In the SMEFT, these are generated to lowest order by the $d=6$ operators $O^{(1,3)}_{lq}$:
$$F^{LL,uu}_{V(0,0)} \simeq \frac{v^2}{\Lambda^2}\left(C^{(1)}_{lq} - C^{(3)}_{lq}\right), \qquad (4.6)$$
$$F^{LL,dd}_{V(0,0)} \simeq \frac{v^2}{\Lambda^2}\left(C^{(1)}_{lq} + C^{(3)}_{lq}\right), \qquad (4.7)$$
$$F^{LL,ud}_{V(0,0)} \simeq 2\,\frac{v^2}{\Lambda^2}\,C^{(3)}_{lq}\,. \qquad (4.8)$$
These coefficients also receive contributions of $O(v^4/\Lambda^4)$ from the $d=8$ operators $O^{(k)}_{l^2q^2H^2}$, $O^{(k)}_{l^2H^2D^3}$ and $O^{(k)}_{q^2H^2D^3}$ with $k=1,2,3,4$ in the classes $\psi^4H^2$ and $\psi^2H^2D^3$ defined in Tables F.2 and F.3 (for the exact expressions see Eqs. (C.3), (C.9) and (C.18)). In general, these higher-order corrections will have a small impact when constraining the SMEFT and can be safely ignored in the remainder of this discussion [44].
The contributions from momentum-dependent dimension-8 operators in the class $\psi^4D^2$, however, can be relevant when fitting the SMEFT to LHC data. The four operators $O^{(k)}_{l^2q^2D^2}$ ($k=1,\ldots,4$), defined in Table F.2, match onto the form-factor coefficients $F^{LL}_{V(1,0)}$ and $F^{LL}_{V(0,1)}$. Explicitly, one finds at leading order
$$F^{LL,uu}_{V(1,0)} \simeq \frac{v^4}{\Lambda^4}\,C^{(1+2-3-4)}_{l^2q^2D^2}\,, \qquad F^{LL,uu}_{V(0,1)} \simeq 2\,\frac{v^4}{\Lambda^4}\,C^{(2-4)}_{l^2q^2D^2}\,, \qquad (4.9)$$
$$F^{LL,dd}_{V(1,0)} \simeq \frac{v^4}{\Lambda^4}\,C^{(1+2+3+4)}_{l^2q^2D^2}\,, \qquad F^{LL,dd}_{V(0,1)} \simeq 2\,\frac{v^4}{\Lambda^4}\,C^{(2+4)}_{l^2q^2D^2}\,, \qquad (4.10)$$
$$F^{LL,ud}_{V(1,0)} \simeq 2\,\frac{v^4}{\Lambda^4}\,C^{(3+4)}_{l^2q^2D^2}\,, \qquad F^{LL,ud}_{V(0,1)} \simeq 4\,\frac{v^4}{\Lambda^4}\,C^{(4)}_{l^2q^2D^2}\,, \qquad (4.11)$$
where the notation $C^{(1\pm2\pm\cdots)}$ corresponds to the signed sum of Wilson coefficients defined in Eq. (B.6) in Appendix B. In the amplitude, the energy enhancement associated with these form-factor coefficients can potentially overcome the relative suppression of $O(v^2/\Lambda^2)$ with respect to $F^{LL,qq}_{V(0,0)}$, especially in the high-$p_T$ tails of the differential distributions. In order to examine the importance of these dimension-8 effects, we set bounds on the leading dimension-6 form-factors $F^{LL}_{V(0,0)}$ with different flavor structures for three scenarios:
i) Single-parameter limits on $F^{LL}_{V(0,0)}$, completely neglecting the dimension-8 coefficients $F^{LL}_{V(1,0)}$ and $F^{LL}_{V(0,1)}$.

ii) Single-parameter limits on $F^{LL}_{V(0,0)}$ while turning on $F^{LL}_{V(1,0)}$ such that $F^{LL}_{V(1,0)} = (v^2/\Lambda^2)\,F^{LL}_{V(0,0)}$. This specific correlation between coefficients of different orders can arise from an ultraviolet setting where a heavy $s$-channel vector mediator has been integrated out at the scale $\Lambda$.

iii) Marginalized limits on $F^{LL}_{V(0,0)}$, where the dimension-8 coefficients $F^{LL}_{V(1,0)}$ and $F^{LL}_{V(0,1)}$ are treated as independent parameters that are profiled over.

While here we have focused on left-handed currents, a similar analysis can be performed with other form-factor helicities, leading to similar conclusions. The marginalized limits in item iii) are extracted using the profiled chi-square statistic $\chi^2(\theta, \hat{\hat\nu}(\theta))$, where $\theta$ represents the parameters of interest and $\nu$ the marginalized parameters. The double-hat notation [102] is defined as the values of $\nu$ that minimize the $\chi^2(\theta,\nu)$ function for a given $\theta$.
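To make the profiling in item iii) concrete, the following minimal Python sketch mimics the construction of $\chi^2(\theta,\hat{\hat\nu}(\theta))$ for a single toy bin; the yields, uncertainties and parameter bounds are hypothetical placeholders rather than the actual likelihood implemented in HighPT:

```python
import numpy as np
from scipy.optimize import minimize_scalar

C_star, Lam, v = 4 * np.pi, 2000.0, 246.0      # strong-coupling benchmark [GeV]

def chi2(F0, F1):
    # hypothetical one-bin model: yield linear in the d=6 coefficient F0
    # and in the energy-enhanced d=8 contribution F1
    signal = 40.0 * F0 / (v**2 / Lam**2) + 65.0 * F1 / (v**4 / Lam**4)
    return ((signal - 5.0) / 8.0) ** 2

def chi2_profiled(F0):
    bound = C_star * v**4 / Lam**4             # |F1| <= C* v^4 / Lambda^4
    res = minimize_scalar(lambda F1: chi2(F0, F1),
                          bounds=(-bound, bound), method="bounded")
    return res.fun                              # chi2(F0, F1-double-hat(F0))

for F0 in np.linspace(-0.02, 0.02, 5) * v**2 / Lam**2:
    print(f"F0 = {F0:+.2e}:  profiled chi2 = {chi2_profiled(F0):.3f}")
```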
Furthermore, when minimizing the $\chi^2$ functions we constrain the form-factors to take values in the ranges $|F_{I(0,0)}| \le v^2 C_*/\Lambda^2$, $|F_{I(1,0)}| \le v^4 C_*/\Lambda^4$ and $|F_{I(0,1)}| \le v^4 C_*/\Lambda^4$, where we set $C_* = 4\pi$, i.e. the value for which the Wilson coefficients in Eqs. (4.6)-(4.11) would enter the non-perturbative regime, assuming they arise from a tree-level matching in the ultraviolet. Notice that this particular choice for $C_*$ has minimal impact when extracting the single-parameter limits on $F_{V(0,0)}$. In contrast, when including the effects of dimension-8 corrections via $F_{V(1,0)}$ and/or $F_{V(0,1)}$, the choice of $C_*$, which depends on the details of the UV completion, can substantially affect the fits. In this work we use the strong-coupling limit $C_* = 4\pi$ as a benchmark, since it is the choice that maximizes the effects of the dimension-8 operators.

The LHC limits at 95% CL for the form-factors can be found in Fig. 4.6 for first generation (first row), second generation (second row) and third generation (third row) leptons at three different EFT cutoff choices: $\Lambda = 2$ TeV (left column), $\Lambda = 4$ TeV (middle column) and $\Lambda = 6$ TeV (right column). The limits for valence quarks in the first two rows have been rescaled by a factor of five for visibility, and the gray regions correspond to the strong-coupling regime $C_* \ge 4\pi$ discussed above.
We show in blue the upper limits on $F_{V(0,0)}$ for scenario i), i.e. when all $d=8$ corrections are neglected. As expected, these are mostly independent of the cutoff choice. In yellow we display the limits on $F_{V(0,0)}$ for scenario ii), where the $d=6$ and $d=8$ Wilson coefficients are maximally correlated, e.g. in $Z'$ ($W'$) models for neutral (charged) currents. One can see that these limits are in most cases similar to the blue ones for any quark flavor at $\Lambda = 4$ TeV and 6 TeV, and for sea quarks already at 2 TeV. This shows that for these cases the convergence of the EFT is good, since the $d=8$ corrections have a small effect. This is not entirely surprising for these values of the cutoff, given that the Wilson coefficients arise from expanding the same massive propagator ($s$-channel in this case), where the $d=8$ terms will always be parametrically suppressed with respect to the $d=6$ ones. However, for $uu$, $dd$ and $ud$ initial quarks at $\Lambda = 2$ TeV, the inclusion of $d=8$ corrections can strengthen the limits substantially and also lead to a slightly better fit to the observed data. This indicates that the EFT expansion is not to be trusted for valence quarks at such low cutoff scales. In red, we provide the limits for scenario iii), i.e. marginalizing over $d=8$ contributions that are completely uncorrelated from the $d=6$ ones. Here we find that for valence quarks, marginalizing over $F^{LL}_{V(1,0)}$ and $F^{LL}_{V(0,1)}$ leads to substantial corrections to $F^{LL}_{V(0,0)}$ even at the large cutoff scales $\Lambda = 4$ TeV and 6 TeV. Notice that the main effect is to both weaken and symmetrize the skewed $d=6$ limits in blue. The symmetrizing effect arises since the sign of the $d=8$ interference term is decorrelated from the sign of the $d=6$ term. For sea quarks these effects are completely mitigated due to the limited energy reach caused by the PDF suppression.
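As a back-of-the-envelope cross-check of these observations (not part of the original analysis), one can estimate the relative size of the energy-enhanced $d=8$ term with respect to the $d=6$ one in the maximally correlated scenario ii), where it scales as $\hat s/\Lambda^2$:

```python
# Relative d=8 / d=6 amplitude correction for maximally correlated
# coefficients, F_V(1,0) = (v^2/Lambda^2) F_V(0,0), i.e. ratio ~ s_hat/Lambda^2.
for lam_tev in (2.0, 4.0, 6.0):
    for m_ll_tev in (1.0, 2.0, 3.0):
        ratio = (m_ll_tev / lam_tev) ** 2
        print(f"Lambda = {lam_tev} TeV, m_ll = {m_ll_tev} TeV: d8/d6 ~ {ratio:.2f}")
# The correction is O(1) in the last bins for Lambda = 2 TeV, but stays
# well below unity for Lambda >= 4 TeV, consistent with Fig. 4.6.
```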
In summary, these results show that the effects of $d=8$ operators are only relevant for first generation (valence) quarks, and only if the dimension-6 and $d=8$ Wilson coefficients are uncorrelated. This last condition likely requires a non-trivial cancellation of different contributions to $d=6$ operators from the ultraviolet. For example, in New Physics scenarios where a single heavy tree-level mediator is integrated out, the dimension-6 and dimension-8 Wilson coefficients are expected to be maximally correlated.
Constraints on leptoquark models
In this Section we provide the LHC constraints on the couplings of the leptoquark states listed in Table 3.2. As already discussed, leptoquarks can be exchanged non-resonantly in the $t/u$-channels, as shown in Fig. 3.2. We begin by setting upper limits on one individual coupling at a time, for a fixed leptoquark mass of 2 TeV and a negligible width. For each leptoquark state, we take into account all components contributing to the relevant production channels and assume these to be mass-degenerate.¹⁸ We also include the interference with the SM background if present. Even though we explicitly include in our analysis the full effect of the propagators, it is worth noting that, since leptoquarks are non-resonant, it is possible to set fairly reliable limits on their couplings by using the constraints on the corresponding SMEFT operators for masses above a few TeV, see e.g. Refs. [39,40].
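As an illustration of this EFT-based shortcut, the sketch below translates a hypothetical SMEFT limit into a leptoquark coupling bound using a tree-level matching relation of the type $C/\Lambda^2 = -|x|^2/(2m^2)$ (cf. the $U_1$ entries of Table 5.1); the numerical limit is a placeholder:

```python
import numpy as np

# Hedged example: SMEFT limit -> leptoquark coupling bound.
C_over_Lam2_max = 0.5    # hypothetical 95% CL limit on |C|/Lambda^2 [TeV^-2]
m_lq = 3.0               # leptoquark mass [TeV], heavy enough for the EFT

x_max = np.sqrt(2.0 * C_over_Lam2_max * m_lq**2)
print(f"|x| < {x_max:.2f} for m_LQ = {m_lq} TeV")
# For masses of a few TeV this estimate tracks the full propagator
# calculation reasonably well, as noted in the text.
```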
The 95% confidence intervals for each individual coupling are collected in Fig. 4.7 for all possible scalar and vector leptoquarks coupling to all three leptons and first generation quarks (red intervals), second generation quarks (green intervals) and third generation quarks (blue intervals).¹⁹ Since the amplitude scales as the square of the probed couplings, the limits are fully symmetric. For fixed quark flavors, these results follow a similar pattern as the SMEFT results presented in Sec. 4.2, where the strongest bounds correspond to the lightest quark flavors, as they have the largest PDFs. The only couplings that are not constrained by our observables are the ones that involve right-handed top quarks. Interestingly, we also find that in most cases the bounds on the leptoquark couplings to $\mu$'s are considerably more constraining than the ones to $e$'s or $\tau$'s. Surprisingly, we find that the bounds on the electron couplings turn out to be comparable to the tauonic ones, which is caused by a poor background description of the dielectron data over several high-mass $m_{ee}$ bins [86].

¹⁸ For example, the $S_3 \sim (3,3,1/3)$ state, being an $SU(2)_L$ triplet, will contribute to $\bar d_i d_j \to \ell_\alpha \ell_\beta$ via the $S_3^{(4/3)}$ component.
¹⁹ For the right-handed coupling $[x^R_2]_{11}$ of the $V_2$ vector leptoquark, we observe a mild discrepancy with the SM prediction, with a low significance of $\approx 2\sigma$.
The complete LHC likelihoods for all leptoquarks with all possible flavor couplings are available in HighPT for several benchmark masses.
Leptoquarks and LFV at the LHC

Considering a single leptoquark coupling at a time will only lead to lepton flavor conserving Drell-Yan processes. However, if the leptoquark states couple to two different leptons, then they will also induce LFV modes in Drell-Yan production. In other words, the lepton flavor conserving modes $pp\to\ell_\alpha\ell_\alpha$ and $pp\to\ell_\beta\ell_\beta$, and the LFV process $pp\to\ell_\alpha\ell_\beta$ with $\alpha\neq\beta$, would be correlated. These three production modes are perfectly complementary, since the scattering amplitudes are proportional to different combinations of the leptoquark couplings.
In the following, we set 95% exclusion limits on leptoquark couplings using data from both LFV and lepton flavor conserving tails to highlight their complementarity. For concreteness, we consider the U 1 ∼ (3, 1, 2/3) vector leptoquark with mass m U 1 = 2 TeV and couplings to left-handed currents,
$$\mathcal{L}_{U_1} \supset [x^L_1]_{i\alpha}\, U^\mu_1\, \bar q_i \gamma_\mu l_\alpha + {\rm h.c.}\,. \qquad (4.12)$$
We assume that $U_1$ couples exclusively to the third-generation quarks, with nonzero couplings $[x^L_1]_{3\alpha}$ and $[x^L_1]_{3\beta}$ and $\alpha<\beta$. The results for the two-parameter limits are shown in the coupling planes given in Fig. 4.8. There, the blue and red regions correspond to the exclusions from lepton flavor conserving searches, while the gray region is excluded by the LFV searches. One can see that for this particular example the LFV searches can probe regions of parameter space that are not covered by the lepton flavor conserving modes. This complementarity can be understood from the inequality
$$2\,\big|[x^L_1]_{i\alpha}[x^L_1]^*_{i\beta}\big| \le \big|[x^L_1]_{i\alpha}\big|^2 + \big|[x^L_1]_{i\beta}\big|^2\,, \qquad (4.13)$$
where the left-hand side is the combination of couplings entering $pp\to\ell_\alpha\ell_\beta$, whereas the right-hand side can be bounded by lepton flavor conserving searches. The inequality follows directly from $(|[x^L_1]_{i\alpha}| - |[x^L_1]_{i\beta}|)^2 \ge 0$.
Combining Flavor and LHC Constraints: a Case Study
In this Section, we illustrate the relevance of our results by combining the high-$p_T$ constraints derived in Sec. 4 with the ones obtained from flavor and electroweak observables. As an example, we consider the New Physics scenarios that can accommodate the hints of Lepton Flavor Universality (LFU) violation in charged-current $B$-meson decays. This case study illustrates the potential of high-$p_T$ physics to probe flavor-physics operators, since the explanations of these discrepancies require relatively low values of the New Physics scale, in the few-TeV range, where LHC constraints are known to be useful [25,36-40].

LFU violation in $b \to c\ell\nu$

We start by reminding the reader of the status of LFU tests in the $b\to c\ell\nu$ transition and the EFT description of these observables. The deviations from LFU in charged-current $B$ decays have been observed in the ratios defined by [12-19],²⁰
$$R_{D^{(*)}} = \frac{\mathcal{B}(B\to D^{(*)}\tau\bar\nu)}{\mathcal{B}(B\to D^{(*)}\ell\bar\nu)}\,, \qquad \ell\in\{e,\mu\}\,, \qquad (5.1)$$
where the light leptons $\ell\in\{e,\mu\}$ are averaged in the denominator. The current experimental averages reported by HFLAV [105],
$$R^{\rm exp}_D = 0.34(3)\,, \qquad R^{\rm exp}_{D^*} = 0.295(14)\,, \qquad (5.2)$$
appear to be systematically larger than the SM predictions [105-108],
$$R^{\rm SM}_D = 0.294(4)\,, \qquad R^{\rm SM}_{D^*} = 0.246(9)\,, \qquad (5.3)$$
amounting to a combined discrepancy at the $\approx 3\sigma$ level [105].
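For orientation, a naive significance estimate can be reproduced from the numbers above; note that it neglects the (negative) experimental correlation between $R_D$ and $R_{D^*}$ reported by HFLAV, which is why it differs slightly from the quoted combination:

```python
import numpy as np

# Naive pulls of R_D(*) from the SM, ignoring correlations between the two.
obs = {"R_D": (0.34, 0.03),  "R_D*": (0.295, 0.014)}
sm  = {"R_D": (0.294, 0.004), "R_D*": (0.246, 0.009)}

pulls = []
for key in obs:
    mu_e, s_e = obs[key]
    mu_t, s_t = sm[key]
    pull = (mu_e - mu_t) / np.hypot(s_e, s_t)   # errors added in quadrature
    pulls.append(pull)
    print(f"{key}: pull = {pull:.1f} sigma")

print(f"naive quadrature combination: {np.hypot(*pulls):.1f} sigma")
```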
Low-energy EFT
We assume that New Physics only contributes to $b\to c\tau\bar\nu$, i.e. leaving the channels with electrons and muons unaffected, and we write the most general low-energy effective Lagrangian for this transition with operators up to $d=6$,
$$\mathcal{L}^{b\to c\tau\bar\nu}_{\rm eff} = -2\sqrt{2}\,G_F V_{cb}\Big[(1+C_{V_L})\,(\bar c_L\gamma^\mu b_L)(\bar\tau_L\gamma_\mu\nu_L) + C_{V_R}\,(\bar c_R\gamma^\mu b_R)(\bar\tau_L\gamma_\mu\nu_L) + C_{S_L}\,(\bar c_R b_L)(\bar\tau_R\nu_L) + C_{S_R}\,(\bar c_L b_R)(\bar\tau_R\nu_L) + C_T\,(\bar c_R\sigma^{\mu\nu}b_L)(\bar\tau_R\sigma_{\mu\nu}\nu_L)\Big] + {\rm h.c.}\,, \qquad (5.4)$$
where $C_i \equiv C_i(\mu)$ denote the effective coefficients, defined at the scale $\mu=m_b$, and flavor indices are omitted. Many studies have determined the allowed values of these couplings by using the ratios $R_D$ and $R_{D^*}$, see e.g. [109-112] and references therein. The results from Ref. [112] will be considered in this paper. In addition to the ratios $R_{D^{(*)}}$, an important constraint on $C_P \equiv C_{S_R} - C_{S_L}$ can be derived from the $B_c$-meson lifetime [113,114]. Even though the $B_c\to\tau\bar\nu$ branching fraction has not yet been measured, it is clear that the corresponding partial decay width should not saturate the value of $\Gamma_{B_c}$ determined experimentally [102]. In the following, we conservatively require that $\mathcal{B}(B_c\to\tau\bar\nu) \lesssim 30\%$, which forbids the possibility of addressing both $R_D$ and $R_{D^*}$ with only the $C_{S_{L(R)}}$ coefficients [113].
SMEFT description
In order to consistently explore the implications of the R D ( * ) anomalies at high-p T , the lowenergy effective Lagrangian (5.4) must be replaced by the SMEFT Lagrangian. This comes with many important features, such as the correlations among different transitions that can arise from SU (2) L gauge invariance, which relates e.g. modes with neutrinos and charged leptons [115]. Moreover, the Yukawa and electroweak running effects can induce sizable mixing among operators [116][117][118].
Firstly, the low-energy effective coefficients defined in Eq. (5.4) must be evolved from the scale µ = m b up to µ = µ ew by using the renormalization group equations which are given e.g. in Ref. [119]. The tree-level matching to the SMEFT Lagrangian at µ ew then reads,
$$C_{V_L} = -\frac{v^2}{\Lambda^2}\sum_i \frac{V_{2i}}{V_{23}}\left(\big[C^{(3)}_{lq}\big]_{33i3} + \big[C^{(3)}_{Hq}\big]_{i3} - \delta_{i3}\,\big[C^{(3)}_{Hl}\big]_{33}\right), \qquad (5.5)$$
$$C_{V_R} = \frac{v^2}{2\Lambda^2}\,\frac{1}{V_{23}}\,\big[C_{Hud}\big]_{23}\,, \qquad (5.6)$$
$$C_{S_L} = -\frac{v^2}{2\Lambda^2}\,\frac{1}{V_{23}}\,\big[C^{(1)}_{lequ}\big]^*_{3332}\,, \qquad (5.7)$$
$$C_{S_R} = -\frac{v^2}{2\Lambda^2}\sum_{i=1}^{3} \frac{V^*_{2i}}{V_{23}}\,\big[C_{ledq}\big]^*_{333i}\,, \qquad (5.8)$$
$$C_T = -\frac{v^2}{2\Lambda^2}\,\frac{1}{V_{23}}\,\big[C^{(3)}_{lequ}\big]^*_{3332}\,, \qquad (5.9)$$
where the scale $\mu_{\rm ew}$ is implicit on both sides of the above equations. The operator $O_{Hud}$ gives rise to lepton-flavor universal contributions, thus being irrelevant for $R_{D^{(*)}}$. Moreover, the operator $O^{(3)}_{Hl}$ is tightly constrained at tree level by the $Z$- and $W$-pole observables [71]. From Eqs. (5.5)-(5.9), we conclude that the possible New Physics explanations of $R_{D^{(*)}}$ must involve a combination of the operators $\{O^{(3)}_{lq},\, O_{ledq},\, O^{(1)}_{lequ},\, O^{(3)}_{lequ}\}$. The mediators that can induce these operators via tree-level exchange are known [120]. They can be a scalar $\Phi\sim(1,2,1/2)$ or vector $W'\sim(1,3,0)$ color-singlet, or a scalar/vector leptoquark state [55,56]. The scalar doublet $\Phi$ is excluded by the constraints derived from the $B_c$ lifetime [113], whereas the vector triplet $W'$ is tightly constrained by $pp\to\tau\tau$ at high-$p_T$ and by $\Delta F=2$ processes [121,122]. Therefore, if we assume that a single mediator is behind the $R_{D^{(*)}}$ anomalies, this mediator must necessarily be a leptoquark. Among these mediators, only three are capable of explaining $R^{\rm exp}_{D^{(*)}} > R^{\rm SM}_{D^{(*)}}$ while being consistent with other existing bounds: (i) the vector $U_1\sim(3,1,2/3)$, and the scalars (ii) $S_1\sim(3,1,1/3)$ and (iii) $R_2\sim(3,2,7/6)$, see [112,122,123] and references therein.
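As a rough numerical illustration of Eq. (5.5) (not part of the original text), consider switching on only $[C^{(3)}_{lq}]_{3333}$, for which the CKM sum collapses to the $i=3$ term; for a purely left-handed vector shift, both ratios rescale as $R_{D^{(*)}}/R^{\rm SM}_{D^{(*)}} = |1+C_{V_L}|^2$, neglecting running effects:

```python
# Toy evaluation of C_VL from a single Wilson coefficient; the benchmark
# value of the coefficient is arbitrary.
v, lam = 0.246, 1.0              # TeV
c3_lq_3333 = -0.05               # hypothetical coefficient at Lambda = 1 TeV

c_vl = -(v**2 / lam**2) * c3_lq_3333     # V_23/V_23 = 1 for the i = 3 term
ratio = abs(1.0 + c_vl) ** 2
print(f"C_VL = {c_vl:.4f}  ->  R_D(*)/R_D(*)^SM = {ratio:.3f}")
# A ~10% enhancement of R_D(*) requires C_VL ~ 0.05, pointing to a
# TeV-scale Lambda, which is why LHC constraints are relevant here.
```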
From EFT to concrete scenarios
The viable leptoquark scenarios mentioned above predict specific combinations of effective semileptonic operators, as shown in Table 5.1. In order to successfully explain the $b\to c\tau\bar\nu$ anomalies, the effective coefficients must involve couplings exclusively, or at least predominantly, to the second and third generations of quarks and leptons. The most relevant operators, at the matching scale $\Lambda$, in each of these scenarios read
$$U_1:\quad \big[C^{(1)}_{lq}\big]_{3323} = \big[C^{(3)}_{lq}\big]_{3323}\,,\quad \big[C^{(1)}_{lq}\big]_{3333} = \big[C^{(3)}_{lq}\big]_{3333}\,,\quad \big[C_{ledq}\big]_{3333}\,. \qquad (5.10)$$
$$S_1:\quad \big[C^{(1)}_{lq}\big]_{3333} = -\big[C^{(3)}_{lq}\big]_{3333}\,,\quad \big[C^{(1)}_{lequ}\big]_{3332} = -4\,\big[C^{(3)}_{lequ}\big]_{3332}\,. \qquad (5.11)$$
$$R_2:\quad \big[C^{(1)}_{lequ}\big]_{3332} = 4\,\big[C^{(3)}_{lequ}\big]_{3332}\,. \qquad (5.12)$$
These operators contribute not only to the $b\to c\tau\bar\nu$ transition, but also to many other precision observables that we briefly describe below:
• $B\to K^{(*)}\nu\bar\nu$: The $b\to s\nu\bar\nu$ transition provides stringent constraints on operators with left-handed leptons [124]. The observables based on this transition are particularly relevant to probe couplings to $\tau$-leptons, which are difficult to assess otherwise. The low-energy effective Lagrangian describing the $b\to s\nu\bar\nu$ transition can be written as
$$\mathcal{L}^{b\to s\nu\nu}_{\rm eff} = \frac{4G_F}{\sqrt{2}}\,V_{tb}V^*_{ts}\,\frac{\alpha_{\rm em}}{4\pi}\sum_{\alpha\beta}\Big([C_L]_{\alpha\beta}\,[O_L]_{\alpha\beta} + [C_R]_{\alpha\beta}\,[O_R]_{\alpha\beta}\Big) + {\rm h.c.}\,, \qquad (5.13)$$
with
$$[O_L]_{\alpha\beta} = (\bar s_L\gamma_\mu b_L)(\bar\nu_{L\alpha}\gamma^\mu\nu_{L\beta})\,, \qquad [O_R]_{\alpha\beta} = (\bar s_R\gamma_\mu b_R)(\bar\nu_{L\alpha}\gamma^\mu\nu_{L\beta})\,. \qquad (5.14)$$
The SM contributions are lepton-flavor conserving and given by the coefficient $C^{\rm SM}_L = -13.6(1.2)$, which includes NLO QCD corrections [125-127] and two-loop electroweak contributions [128]. The low-energy Wilson coefficients can be matched to the semileptonic SMEFT operators at $\mu = \mu_{\rm ew}$,
$$[C_L]_{\alpha\beta} = \delta_{\alpha\beta}\,C^{\rm SM}_L + \frac{2\pi}{\alpha_{\rm em}\,V_{tb}V^*_{ts}}\,\frac{v^2}{\Lambda^2}\left(\big[C^{(1-3)}_{lq}\big]_{\alpha\beta23} - \delta_{\alpha\beta}\,\big[C^{(1-3)}_{Hq}\big]_{23}\right), \qquad (5.15)$$
$$[C_R]_{\alpha\beta} = \frac{2\pi}{\alpha_{\rm em}\,V_{tb}V^*_{ts}}\,\frac{v^2}{\Lambda^2}\left(\big[C_{ld}\big]_{\alpha\beta23} - \delta_{\alpha\beta}\,\big[C_{Hd}\big]_{23}\right). \qquad (5.16)$$
These effective coefficients can be evolved up to the scale $\Lambda$ by using the one-loop anomalous dimensions computed in Refs. [116-118]. The $B\to K^{(*)}\nu\bar\nu$ branching fractions can be easily computed in terms of the coefficients defined in Eq. (5.13) [124]. The most stringent experimental limits are given by $\mathcal{B}(B^+\to K^+\nu\bar\nu) < 1.6\times10^{-5}$ and $\mathcal{B}(B^0\to K^{*0}\nu\bar\nu) < 2.7\times10^{-5}$ [129-131], which lie just above the SM predictions, namely $\mathcal{B}(B^+\to K^+\nu\bar\nu)_{\rm SM} = 4.9(4)\times10^{-6}$ and $\mathcal{B}(B^0\to K^{*0}\nu\bar\nu)_{\rm SM} = 1.00(9)\times10^{-6}$ [98].
• $W$- and $Z$-pole observables: The precise determinations of the $W$- and $Z$-couplings at LEP and the LHC can be used to constrain semileptonic interactions at loop level [132-134]. The SMEFT operators describing modifications of the $Z$ and $W$ leptonic couplings up to $d=6$ read
$$\big[O^{(1)}_{Hl}\big]_{\alpha\beta} = (H^\dagger i\overleftrightarrow{D}_\mu H)(\bar l_\alpha\gamma^\mu l_\beta)\,, \quad \big[O^{(3)}_{Hl}\big]_{\alpha\beta} = (H^\dagger i\overleftrightarrow{D}^I_\mu H)(\bar l_\alpha\gamma^\mu\tau^I l_\beta)\,, \quad \big[O_{He}\big]_{\alpha\beta} = (H^\dagger i\overleftrightarrow{D}_\mu H)(\bar e_\alpha\gamma^\mu e_\beta)\,. \qquad (5.17)$$
The semileptonic operators $O^{(1,3)}_{lq}$ mix into these operators at one loop [116-118]. In particular, these effects can be sizable for semileptonic couplings to the top quark. We account for these contributions by using a leading-logarithmic approximation, and we consider the recent fit to the $W$- and $Z$-couplings from Ref. [71].
• $H\to\tau\tau$: Measurements of the Higgs Yukawa coupling to $\tau$-leptons at the LHC can also provide a useful constraint on specific semileptonic operators at one loop. This is the case for the chirality-breaking operators $O^{(1)}_{lequ}$ and $O_{ledq}$, since they mix at one loop with the operator
$$\big[O_{eH}\big]_{\alpha\beta} = (H^\dagger H)(\bar l_\alpha H e_\beta)\,, \qquad (5.19)$$
which induces a shift in the SM value of the $\tau$-lepton Yukawa after electroweak symmetry breaking. This contribution is particularly relevant if the semileptonic operators couple to third-generation quarks, due to the chirality enhancement induced via the Yukawa coupling (i.e. $\propto m_t/m_\tau$) [135]. The latest PDG average for the $H\to\tau\tau$ signal strength reads [102]
$$\mu^{\rm exp}_{\tau\tau} = \frac{\sigma(pp\to h)\cdot\mathcal{B}(H\to\tau\tau)}{\sigma(pp\to h)_{\rm SM}\cdot\mathcal{B}(H\to\tau\tau)_{\rm SM}} = 1.15^{+0.16}_{-0.15}\,, \qquad (5.20)$$
which is used to constrain the relevant operators at one loop, within a leading-logarithm approximation, and assuming that the Higgs production cross-section at the LHC is unaffected by New Physics.

Table 5.1: Tree-level matching of the leptoquarks $S_1$, $R_2$ and $U_1$ onto the semileptonic SMEFT operators:

| Coefficient | $S_1$ | $R_2$ | $U_1$ |
| $[C_{ledq}]_{\alpha\beta ij}$ | - | - | $2\,[x^L_1]^*_{i\alpha}[x^R_1]_{j\beta}$ |
| $[C^{(1)}_{lequ}]_{\alpha\beta ij}$ | $\frac12\,[y^L_1]^*_{i\alpha}[y^R_1]_{j\beta}$ | $-\frac12\,[y^R_2]_{i\beta}[y^L_2]^*_{j\alpha}$ | - |
| $[C^{(3)}_{lequ}]_{\alpha\beta ij}$ | $-\frac18\,[y^L_1]^*_{i\alpha}[y^R_1]_{j\beta}$ | $-\frac18\,[y^R_2]_{i\beta}[y^L_2]^*_{j\alpha}$ | - |
| $[C_{eu}]_{\alpha\beta ij}$ | $\frac12\,[y^R_1]_{j\beta}[y^R_1]^*_{i\alpha}$ | - | - |
| $[C_{ed}]_{\alpha\beta ij}$ | - | - | $-[x^R_1]_{i\beta}[x^R_1]^*_{j\alpha}$ |
| $[C_{lu}]_{\alpha\beta ij}$ | - | $-\frac12\,[y^L_2]_{i\beta}[y^L_2]^*_{j\alpha}$ | - |
| $[C_{qe}]_{ij\alpha\beta}$ | - | $-\frac12\,[y^R_2]_{i\beta}[y^R_2]^*_{j\alpha}$ | - |
| $[C^{(1)}_{lq}]_{\alpha\beta ij}$ | $\frac14\,[y^L_1]^*_{i\alpha}[y^L_1]_{j\beta}$ | - | $-\frac12\,[x^L_1]_{i\beta}[x^L_1]^*_{j\alpha}$ |
| $[C^{(3)}_{lq}]_{\alpha\beta ij}$ | $-\frac14\,[y^L_1]^*_{i\alpha}[y^L_1]_{j\beta}$ | - | $-\frac12\,[x^L_1]_{i\beta}[x^L_1]^*_{j\alpha}$ |
The observables discussed above will be incorporated in a future release of HighPT, along with numerous other low-energy observables, in order to provide a complete likelihood for flavor physics [98].
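To see how such low-energy observables constrain the SMEFT coefficients, a minimal sketch below rescales the SM prediction for $\mathcal{B}(B^+\to K^+\nu\bar\nu)$ under the simplifying assumptions that $C_R=0$ and that only the $\tau\tau$ entry of $[C_L]$ is shifted; the quadratic scaling summed over neutrino flavors is standard for left-handed operators:

```python
import numpy as np

# Toy bound on a tau-flavored shift of C_L from B+ -> K+ nu nu.
CL_SM = -13.6
B_SM, B_limit = 4.9e-6, 1.6e-5

def br(delta_cl_tautau):
    # branching ratio scales as the incoherent sum over neutrino flavors
    weights = np.array([CL_SM, CL_SM, CL_SM + delta_cl_tautau])
    return B_SM * np.sum(np.abs(weights) ** 2) / (3 * CL_SM**2)

grid = np.linspace(-60, 60, 24001)
allowed = grid[np.array([br(x) for x in grid]) < B_limit]
print(f"allowed shift: {allowed.min():.1f} < dC_L^tautau < {allowed.max():.1f}")
# The interval is asymmetric because the shift interferes with the SM term.
```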
Numerical results
In this Section we combine the LHC constraints derived in Sec. 4 with the flavor and electroweak precision observables discussed above. To illustrate the main features of the HighPT package, which provides constraints for both EFT and concrete model scenarios, we perform a two-step analysis. In a first step, we consider the minimal set of SMEFT operators that can accommodate $R_{D^{(*)}}$ in the viable scenarios described in Eqs. (5.10)-(5.12). In a second step, we directly consider the leptoquark models that predict these Wilson coefficients, including their propagation effects in the LHC observables. The comparison of the results obtained for the EFTs and the concrete models will allow us to directly assess the validity of the EFT approach for the high-$p_T$ observables that we have considered. Using the leptoquark models will also allow us to correlate the effective coefficients entering flavor processes in different sectors, as shown in Table 5.1.
EFT approach
Starting with the EFT scenarios inspired by the viable leptoquarks, we consider the effective coefficients $C^{(1)}_{lq} = C^{(3)}_{lq}$, which are predicted at tree level by the vector leptoquark $U_1$ with purely left-handed couplings, see Table 5.1.²¹ In the top left panel of Fig. 5.1, we show the allowed Wilson coefficients with flavor indices that contribute directly to the $b\to c\tau\bar\nu$ transition. The flavor constraints in this case are dominated by $R_{D^{(*)}}$ (blue region), which are combined with electroweak (gray) and LHC constraints (red). In this case, the LHC constraints are dominated by $pp\to\tau\tau$, whereas $pp\to\tau\nu$ gives weaker bounds. From Fig. 5.1, we see that low- and high-energy observables are complementary, and the synergy of the different searches is fundamental to restrict the allowed region of the effective coefficients.
In a similar way, the scenario with $C^{(1)}_{lequ} = -4\,C^{(3)}_{lequ}$ and $C^{(1)}_{lq} = -C^{(3)}_{lq}$ is considered in Fig. 5.1(b). This pattern of effective coefficients is predicted by the $S_1$ leptoquark at tree level [123,138-142]. For simplicity, we assume real couplings and focus on the flavor indices 3332 and 3333 for the scalar/tensor and vector operators, respectively.²² In this case, we find that the most relevant constraints arise from flavor observables, which are once again dominated by $R_{D^{(*)}}$, and from electroweak observables. In particular, the latter prevent an explanation of the $b\to c\tau\bar\nu$ anomalies via only left-handed couplings in this scenario. Note, also, that LHC constraints turn out to be practically irrelevant, at the EFT level, since the contributions to $pp\to\tau\tau$ are CKM suppressed for the scalar/tensor operators, and absent for the particular combination $C^{(1)}_{lq} = -C^{(3)}_{lq}$ of vector operators (cf. Eq. (4.7)).

Finally, we consider the scenario with $C^{(1)}_{lequ} = 4\,C^{(3)}_{lequ}$, which is predicted by the $R_2$ leptoquark. The corresponding constraints are shown in Fig. 5.1(c) for the flavor indices entering the $b\to c\tau\bar\nu$ transition, with the same color code as before. This scenario is peculiar, since purely real effective coefficients would induce contributions to $R_D$ and $R_{D^*}$ with different signs, which are incompatible with current data [123,138,143-146]. In other words, an imaginary part of the scalar/tensor coefficients is needed to simultaneously explain the deviations observed in $R_D$ and $R_{D^*}$, as shown in Fig. 5.1. Electroweak and Higgs constraints are not shown in this plot since they turn out to be weak in comparison to the flavor bounds at the EFT level. LHC constraints are dominated by $pp\to\tau\nu$ and they appear to probe a small portion of the flavor-favored region. However, this conclusion should be taken with caution, since the propagation effects of the leptoquark have non-negligible effects in the LHC observables, as will be shown in the following.
Concrete models
From the EFT examples discussed above, it is clear that the Drell-Yan tails provide complementary information to low-energy observables, being particularly useful to single out the viable solutions of the $R_{D^{(*)}}$ anomalies. However, there are limitations of the EFT approach that must be kept in mind. First of all, there can be non-negligible corrections to the EFT description of the LHC observables if the EFT cutoff $\Lambda$ is not sufficiently larger than the partonic center-of-mass energy. Moreover, there are correlations among low- and high-energy observables that are only manifest within the concrete models.

²¹ Notice that the presence of right-handed $U_1$ couplings is also allowed by current constraints, which would predict a different pattern of low- and high-energy observables [136,137].
²² Note that the effective coefficients $C$ …

The constraints on the concrete models are shown in Fig. 5.2 for the leptoquarks $U_1$ (upper left), $S_1$ (upper right) and $R_2$ (bottom), with the leptoquark masses fixed to 2 TeV, in agreement with current constraints from leptoquark pair-production at the LHC [112]. For each scenario, a minimal set of two Yukawa couplings has been chosen to induce the SMEFT operators needed to explain $R_{D^{(*)}}$ in Fig. 5.1. The leptoquark Lagrangians are defined in Table 3.2, with their tree-level matching to the SMEFT given in Table 5.1.
From Fig. 5.2 we see that the three models are viable explanations of R D ( * ) and we confirm that there is a complementarity of the low-and high-energy constraints. The high-p T constraints turn out to be slightly relaxed in all cases in comparison to the EFT computation, due to the propagation effects of the leptoquarks,
$$\frac{1}{(t-m^2)^2} \simeq \frac{1}{m^4}\left(1 + \frac{2t}{m^2} + \ldots\right), \qquad t\in(-s,0)\,, \qquad (5.21)$$
where $m$ denotes the leptoquark mass and we assume, without loss of generality, that the leptoquark is exchanged in the $t$-channel. Since $t<0$, the first power correction in $t/m^2$ comes with a relative negative sign, which reduces the cross-section with respect to the EFT estimate [39,40].
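A quick numerical check of Eq. (5.21), keeping only the propagator factor and ignoring the $t$-dependence of the numerator, confirms this suppression:

```python
from scipy.integrate import quad

# Full t-channel propagator vs. EFT contact approximation, integrated
# over t in (-s, 0) for a 2 TeV leptoquark.
m = 2000.0                                    # leptoquark mass [GeV]
for sqrt_s in (500.0, 1000.0, 2000.0):
    s = sqrt_s**2
    full, _ = quad(lambda t: 1.0 / (t - m**2) ** 2, -s, 0.0)
    eft = s / m**4                            # contact term over the same range
    print(f"sqrt(s) = {sqrt_s:>6.0f} GeV: full/EFT = {full / eft:.3f}")
# Analytically full/EFT = m^2/(m^2+s): it drops to 0.5 at sqrt(s) = m,
# quantifying how the EFT overestimates the non-resonant cross-section.
```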
Going from the EFT description to concrete models also allows us to obtain additional constraints arising from the correlations among the different SMEFT operators. An example is given by the electroweak constraints for the $R_2$ model in Fig. 5.2(c), which are not present for the minimal set of EFT operators contributing to the charged currents in Fig. 5.1(c). This correlation is also the reason why LHC constraints appear weak in Fig. 5.1(b), but become relevant for the full $S_1$ model in Fig. 5.2(b).
Summary and Outlook
In this paper, we have explored New Physics effects in semileptonic transitions using the high-energy tails of the monolepton and dilepton searches at the LHC. We introduced a general parametrization of the Drell-Yan scattering amplitudes in terms of form-factors describing generic New Physics at tree level, such as EFT contributions or new (bosonic) propagating degrees of freedom with $O(1~{\rm TeV})$ masses. For the SMEFT, this allowed us to systematically include the leading $O(1/\Lambda^4)$ corrections to the $pp\to\ell\ell$ and $pp\to\ell\nu$ differential cross-sections. These corrections come from the New Physics squared contributions generated by the $d=6$ quark/lepton dipoles and semileptonic four-fermion operators, and from the SM-interfering terms generated by the dimension-8 operators with four fermions and two derivatives.
We provided, for the first time, the complete high-$p_T$ Drell-Yan likelihood for the full set of relevant $d\le8$ SMEFT operators with arbitrary flavor indices. This was achieved by recasting the most recent run-II data sets by ATLAS and CMS in the monolepton ($e\nu$, $\mu\nu$, $\tau\nu$) and dilepton ($ee$, $\mu\mu$, $\tau\tau$, $e\mu$, $e\tau$, $\mu\tau$) production channels, as shown in Table 4.1. These results are compiled in the Mathematica package HighPT, developed as a tool to facilitate phenomenological studies of New Physics flavor models at high-$p_T$ [48]. Furthermore, we derived single-parameter limits on the Wilson coefficients for the dipole and semileptonic four-fermion operators with any possible flavor combination. We also extracted two-parameter limits for specific vector operators, where we analyzed the effects of a non-diagonal CKM matrix with different quark-flavor alignments. These results are intended to be the initial steps towards a global analysis of the semileptonic sector of the SMEFT including all three fermion generations, where the goal is to combine LHC data from high-$p_T$ tails with complementary experimental probes such as electroweak precision tests and flavor observables [98].
With the aim of providing a more complete EFT analysis, we also looked into the limitations of the SMEFT description of the Drell-Yan tails at the LHC. For different quark flavors, we assessed for several four-fermion operators the impact of a quadratic truncation of the EFT expansion at $O(1/\Lambda^4)$ and confronted it with the linear truncation at $O(1/\Lambda^2)$. We found in most instances that the New Physics squared corrections are too large to be neglected, and in the case of heavy-flavor initial quarks these corrections completely drive the limits, even when the last experimental bins of the differential distributions are discarded. We explicitly checked this feature using the jack-knife and clipping procedures for specific examples [99,100]. Furthermore, following previous work in the literature [44,46], we estimated the relative contributions at $O(1/\Lambda^4)$ of dimension-6 and (the often neglected) dimension-8 operators. In this case we explicitly showed that the precision of the measurements in the tails of the dilepton and monolepton distributions (for all three lepton generations) is only sensitive to dimension-8 effects if: (i) the initial states are first generation valence quarks, and (ii) the dimension-8 Wilson coefficients are uncorrelated from the dimension-6 ones. This last requirement, however, may only be possible in fairly complicated ultraviolet setups.
In addition to the SMEFT, we also computed the LHC likelihoods for all possible leptoquark mediators contributing non-resonantly to Drell-Yan production via $t/u$-channel exchange. These can also be found in HighPT in full generality for each leptoquark state. We derived single-parameter constraints on each leptoquark with arbitrary couplings, and two-parameter limits to illustrate the interplay between bounds from lepton flavor conserving and LFV Drell-Yan production modes. Finally, as a highlight of our results, we considered the example of LFU tests in $B$-meson decays based on the $b\to c\ell\nu$ transition, showing that the LHC limits are complementary to low-energy data and to electroweak precision measurements for the main scenarios aiming to accommodate the observed discrepancies in low-energy data. This analysis has been performed within the SMEFT, but also within viable leptoquark scenarios, including the leptoquark propagation effects in the LHC observables, which is more accurate than naively recasting the EFT results.
Our study leaves room for a few improvements needed to fully exploit the potential of LHC data to constrain flavor-physics scenarios. These include the incorporation of electroweak and QCD corrections to the LHC observables for the SMEFT [147,148] and leptoquark models [149], and the joint determination of quark PDFs and the BSM couplings [150], which would increase the accuracy of our constraints. The measurement of double differential distributions is also a direction to be explored, as it could provide a useful handle to increase the sensitivity of Drell-Yan searches to New Physics effects [151]. These improvements and directions are foreseen in the future, in addition to a complete model-independent combination of our Drell-Yan constraints with electroweak and low-energy flavor data [98].

The last column should be interpreted as the acceptance times efficiency ($A\times\epsilon$) obtained in our recasts divided by the values provided in the CMS and ATLAS papers for the specific benchmark New Physics models listed in the third column. We find an overall good agreement, except for the $e\tau$, $\mu\tau$ and $\tau\tau$ channels, in which deviations are up to $O(1)$. Note, in particular, that for these channels the main limitation of our recasts is the $\tau$-tagging efficiencies, which are flat in $p_T$ and $\eta$. This could be improved in the future by going beyond the default settings of Delphes.
B Notation and Conventions
We adopt the same notation as Refs. [116-118] for the operators in the Warsaw basis. Quark and lepton doublets are denoted by $q$ and $l$, while up-quark, down-quark and lepton singlets are denoted by $u$, $d$ and $e$, respectively. Our convention for the covariant derivative is given by
$$D_\mu = \partial_\mu + i g' Y B_\mu + i g\,\frac{\tau^I}{2}\,W^I_\mu + i g_3\,T^A G^A_\mu\,, \qquad ({\rm B.1})$$
where $T^A = \lambda^A/2$ are the $SU(3)_c$ generators, $\tau^I$ are the Pauli matrices and $Y$ denotes the hypercharge. The Yukawa couplings are defined in the flavor basis as
$$-\mathcal{L}_{\rm yuk} = H^\dagger\,\bar d\,Y_d\,q + \widetilde H^\dagger\,\bar u\,Y_u\,q + H^\dagger\,\bar e\,Y_e\,l + {\rm h.c.}\,, \qquad ({\rm B.2})$$
where $Y_f$ (with $f\in\{u,d,e\}$) denote the Yukawa matrices in flavor space.
B.1 SMEFT conventions
For Hermitian semileptonic operators, we define the Hermitian conjugate $C^\dagger$ of a Wilson coefficient $C$, which can be a two-tensor or a four-tensor in quark/lepton flavor space, as:
$$[C^\dagger]_{\alpha\beta} \equiv [C^*]_{\beta\alpha}\,, \qquad ({\rm B.3})$$
$$[C^\dagger]_{ij} \equiv [C^*]_{ji}\,, \qquad ({\rm B.4})$$
$$[C^\dagger]_{\alpha\beta ij} \equiv [C^*]_{\beta\alpha ji}\,. \qquad ({\rm B.5})$$
These coefficients have a redundancy under the flavor-index swaps $\alpha\leftrightarrow\beta$ and/or $i\leftrightarrow j$. One can remove this redundancy by adopting the following convention: for Hermitian Wilson coefficients with four flavor indices, we fix the lepton indices to $\alpha\le\beta$, which also determines the ordering of the quark-flavor indices of the semileptonic four-fermion operators if $\alpha<\beta$. For the case $\alpha=\beta$ we adopt the ordering $i\le j$. For operators with two indices, we also adopt the convention $\alpha\le\beta$ and $i\le j$.²³ In order to keep expressions compact, it is useful to introduce the following notation for the signed sums of Wilson coefficients associated to operators with the same field content, but different gauge/Lorentz structure:
$$C^{(i\pm j\pm k\pm\ldots)} \equiv C^{(i)} \pm C^{(j)} \pm C^{(k)} \pm \ldots \qquad ({\rm B.6})$$
B.2 Form-factor rotations from the weak basis to the mass basis
In this Appendix, we provide the rotations between the weak and mass bases for the form-factors. Our expressions are given in terms of the matrices $V_u$ and $V_d$, which are the left-handed rotations to the mass basis for up-type and down-type quarks, respectively. The CKM matrix can then be expressed as $V = V_u^\dagger V_d$. These rotations read:
$$F^{XL,ud}_{V} \to V_u\,F^{XL,ud}_{V}\,V_d^\dagger\,, \qquad F^{XL,ud}_{I\neq V} \to F^{XL,ud}_{I\neq V}\,V_d^\dagger\,, \qquad F^{XR,ud}_{I\neq V} \to V_u\,F^{XR,ud}_{I\neq V}\,,$$
$$F^{XL,uu}_{V} \to V_u\,F^{XL,uu}_{V}\,V_u^\dagger\,, \qquad F^{XL,uu}_{I\neq V} \to F^{XL,uu}_{I\neq V}\,V_u^\dagger\,, \qquad ({\rm B.7})$$
$$F^{XL,dd}_{V} \to V_d\,F^{XL,dd}_{V}\,V_d^\dagger\,, \qquad F^{XL,dd}_{I\neq V} \to F^{XL,dd}_{I\neq V}\,V_d^\dagger\,.$$
The matching relations of these form-factors to the SMEFT and to models with concrete mediators are provided in the weak basis in Appendices C and D, respectively. The rotation to the down-aligned basis can be obtained from the above by setting $V_u = V^\dagger$ and $V_d = \mathbb{1}$, while the up-aligned one corresponds to $V_u = \mathbb{1}$ and $V_d = V$. The rotations for all the remaining form-factors can be obtained analogously.

C Form-factors in the SMEFT
C.1 Scalar and tensor form-factors
In this Appendix, we provide the full matching between the form-factor coefficients and the SMEFT Wilson coefficients. Notice that contributions coming from the redefinition of input parameters are not included in the matching conditions below.
Neutral currents
The matching to the SMEFT Wilson coefficients depends on whether the process is up-quark or down-quark initiated, $\bar u_i u_j \to \ell^-_\alpha \ell^+_\beta$ or $\bar d_i d_j \to \ell^-_\alpha \ell^+_\beta$. It is given by:
$$F^{RR,uu}_{S(0,0)} = -\frac{v^2}{\Lambda^2}\,C^{(1)}_{lequ}\,, \qquad F^{RL,dd}_{S(0,0)} = \frac{v^2}{\Lambda^2}\,C_{ledq}\,, \qquad F^{RR,uu}_{T(0,0)} = -\frac{v^2}{\Lambda^2}\,C^{(3)}_{lequ}\,. \qquad ({\rm C.1})$$
Charged currents
For the monolepton processes $\bar u_i d_j \to \ell^-_\alpha \bar\nu_\beta$ (and their conjugates), the matching reads:
$$F^{RR,du}_{S(0,0)} = \frac{v^2}{\Lambda^2}\,C^{(1)}_{lequ}\,, \qquad F^{RL,du}_{S(0,0)} = \frac{v^2}{\Lambda^2}\,C_{ledq}\,, \qquad F^{RR,du}_{T(0,0)} = \frac{v^2}{\Lambda^2}\,C^{(3)}_{lequ}\,. \qquad ({\rm C.2})$$
C.2 Vector form-factors
Neutral currents: $\bar u_i u_j \to \ell^-_\alpha \ell^+_\beta$

The matching of the coefficients $F^{XY}_{V(0,0)}$ to the SMEFT for up-quark initiated processes is given by:
$$F^{LL,uu}_{V(0,0)} = \frac{v^2}{\Lambda^2}\,C^{(1-3)}_{lq} + \frac{v^4}{2\Lambda^4}\,C^{(1+2-3-4)}_{l^2q^2H^2} + \frac{v^2 m_Z^2}{2\Lambda^4}\left(g^L_l\,C^{(1-2-3+4)}_{q^2H^2D^3} + g^L_u\,C^{(1-2+3-4)}_{l^2H^2D^3}\right),$$
$$F^{LR,uu}_{V(0,0)} = \frac{v^2}{\Lambda^2}\,C_{lu} + \frac{v^4}{2\Lambda^4}\,C^{(1+2)}_{l^2u^2H^2} + \frac{v^2 m_Z^2}{2\Lambda^4}\left(g^L_l\,C^{(1-2)}_{u^2H^2D^3} + g^R_u\,C^{(1-2+3-4)}_{l^2H^2D^3}\right),$$
$$F^{RL,uu}_{V(0,0)} = \frac{v^2}{\Lambda^2}\,C_{qe} + \frac{v^4}{2\Lambda^4}\,C^{(1-2)}_{e^2q^2H^2} + \frac{v^2 m_Z^2}{2\Lambda^4}\left(g^R_l\,C^{(1-2-3+4)}_{q^2H^2D^3} + g^L_u\,C^{(1-2)}_{e^2H^2D^3}\right),$$
$$F^{RR,uu}_{V(0,0)} = \frac{v^2}{\Lambda^2}\,C_{eu} + \frac{v^4}{2\Lambda^4}\,C_{e^2u^2H^2} + \frac{v^2 m_Z^2}{2\Lambda^4}\left(g^R_l\,C^{(1-2)}_{u^2H^2D^3} + g^R_u\,C^{(1-2)}_{e^2H^2D^3}\right). \qquad ({\rm C.3})$$
The higher-order coefficients $F^{XY}_{V(1,0)}$ and $F^{XY}_{V(0,1)}$ are generated in the SMEFT at $d=8$ from momentum-dependent contact operators in the class $\psi^4D^2$. These read:
$$F^{LL,uu}_{V(1,0)} = \frac{v^4}{\Lambda^4}\,C^{(1+2-3-4)}_{l^2q^2D^2}\,, \qquad F^{LL,uu}_{V(0,1)} = 2\,\frac{v^4}{\Lambda^4}\,C^{(2-4)}_{l^2q^2D^2}\,, \qquad ({\rm C.4})$$
$$F^{LR,uu}_{V(1,0)} = \frac{v^4}{\Lambda^4}\,C^{(1+2)}_{l^2u^2D^2}\,, \qquad F^{LR,uu}_{V(0,1)} = 2\,\frac{v^4}{\Lambda^4}\,C^{(2)}_{l^2u^2D^2}\,, \qquad ({\rm C.5})$$
$$F^{RL,uu}_{V(1,0)} = \frac{v^4}{\Lambda^4}\,C^{(1+2)}_{e^2q^2D^2}\,, \qquad F^{RL,uu}_{V(0,1)} = 2\,\frac{v^4}{\Lambda^4}\,C^{(2)}_{e^2q^2D^2}\,, \qquad ({\rm C.6})$$
$$F^{RR,uu}_{V(1,0)} = \frac{v^4}{\Lambda^4}\,C^{(1+2)}_{e^2u^2D^2}\,, \qquad F^{RR,uu}_{V(0,1)} = 2\,\frac{v^4}{\Lambda^4}\,C^{(2)}_{e^2u^2D^2}\,. \qquad ({\rm C.7})$$
The matching of the pole residues to the SMEFT is given by:
$$\delta S^{LL,uu}_{(Z)} = -\frac{2m_Z^2}{\Lambda^2}\left(g^L_l\,C^{(1-3)}_{Hq} + g^L_u\,C^{(1+3)}_{Hl}\right) + \frac{v^2 m_Z^2}{\Lambda^4}\,C^{(1+3)}_{Hl}C^{(1-3)}_{Hq} - \frac{v^2 m_Z^2}{\Lambda^4}\left[g^L_l\left(C^{(1)}_{q^2H^4D} - 2C^{(2)}_{q^2H^4D}\right) + g^L_u\left(C^{(1)}_{l^2H^4D} + 2C^{(2)}_{l^2H^4D}\right)\right] + \frac{m_Z^4}{2\Lambda^4}\left(g^L_l\,C^{(1-2-3+4)}_{q^2H^2D^3} + g^L_u\,C^{(1-2+3-4)}_{l^2H^2D^3}\right),$$
$$\delta S^{LR,uu}_{(Z)} = -\frac{2m_Z^2}{\Lambda^2}\left(g^L_l\,C_{Hu} + g^R_u\,C^{(1+3)}_{Hl}\right) + \frac{v^2 m_Z^2}{\Lambda^4}\,C^{(1+3)}_{Hl}C_{Hu} - \frac{v^2 m_Z^2}{\Lambda^4}\left[g^L_l\,C_{u^2H^4D} + g^R_u\left(C^{(1)}_{l^2H^4D} + 2C^{(2)}_{l^2H^4D}\right)\right] + \frac{m_Z^4}{2\Lambda^4}\left(g^L_l\,C^{(1-2)}_{u^2H^2D^3} + g^R_u\,C^{(1-2+3-4)}_{l^2H^2D^3}\right),$$
$$\delta S^{RL,uu}_{(Z)} = -\frac{2m_Z^2}{\Lambda^2}\left(g^R_l\,C^{(1-3)}_{Hq} + g^L_u\,C_{He}\right) + \frac{v^2 m_Z^2}{\Lambda^4}\,C_{He}C^{(1-3)}_{Hq} - \frac{v^2 m_Z^2}{\Lambda^4}\left[g^R_l\left(C^{(1)}_{q^2H^4D} - 2C^{(2)}_{q^2H^4D}\right) + g^L_u\,C_{e^2H^4D}\right] + \frac{m_Z^4}{2\Lambda^4}\left(g^R_l\,C^{(1-2-3+4)}_{q^2H^2D^3} + g^L_u\,C^{(1-2)}_{e^2H^2D^3}\right),$$
$$\delta S^{RR,uu}_{(Z)} = -\frac{2m_Z^2}{\Lambda^2}\left(g^R_l\,C_{Hu} + g^R_u\,C_{He}\right) + \frac{v^2 m_Z^2}{\Lambda^4}\left(C_{He}C_{Hu} - g^R_l\,C_{u^2H^4D} - g^R_u\,C_{e^2H^4D}\right) + \frac{m_Z^4}{2\Lambda^4}\left(g^R_l\,C^{(1-2)}_{u^2H^2D^3} + g^R_u\,C^{(1-2)}_{e^2H^2D^3}\right). \qquad ({\rm C.8})$$

Neutral currents: $\bar d_i d_j \to \ell^-_\alpha \ell^+_\beta$
For down-quark initiated processes the matching of the leading coefficients $F^{XY}_{V(0,0)}$ is given by:
$$F^{LL,dd}_{V(0,0)} = \frac{v^2}{\Lambda^2}\,C^{(1+3)}_{lq} + \frac{v^4}{2\Lambda^4}\,C^{(1+2+3+4)}_{l^2q^2H^2} + \frac{v^2 m_Z^2}{2\Lambda^4}\left(g^L_l\,C^{(1-2+3-4)\dagger}_{q^2H^2D^3} + g^L_d\,C^{(1-2+3-4)}_{l^2H^2D^3}\right), \qquad ({\rm C.9})$$
$$F^{LR,dd}_{V(0,0)} = \frac{v^2}{\Lambda^2}\,C_{ld} + \frac{v^4}{2\Lambda^4}\,C^{(1+2)}_{l^2d^2H^2} + \frac{v^2 m_Z^2}{2\Lambda^4}\left(g^L_l\,C^{(1-2)\dagger}_{d^2H^2D^3} + g^R_d\,C^{(1-2+3-4)}_{l^2H^2D^3}\right), \qquad ({\rm C.10})$$
$$F^{RL,dd}_{V(0,0)} = \frac{v^2}{\Lambda^2}\,C_{qe} + \frac{v^4}{2\Lambda^4}\,C^{(1+2)}_{e^2q^2H^2} + \frac{v^2 m_Z^2}{2\Lambda^4}\left(g^R_l\,C^{(1-2+3-4)\dagger}_{q^2H^2D^3} + g^L_d\,C^{(1-2)}_{e^2H^2D^3}\right), \qquad ({\rm C.11})$$
$$F^{RR,dd}_{V(0,0)} = \frac{v^2}{\Lambda^2}\,C_{ed} + \frac{v^4}{2\Lambda^4}\,C_{e^2d^2H^2} + \frac{v^2 m_Z^2}{2\Lambda^4}\left(g^R_l\,C^{(1-2)\dagger}_{d^2H^2D^3} + g^R_d\,C^{(1-2)}_{e^2H^2D^3}\right). \qquad ({\rm C.12})$$
The higher-order coefficients $F^{XY}_{V(1,0)}$ and $F^{XY}_{V(0,1)}$ read:
$$F^{LL,dd}_{V(1,0)} = \frac{v^4}{\Lambda^4}\,C^{(1+2+3+4)}_{l^2q^2D^2}\,, \qquad F^{LL,dd}_{V(0,1)} = 2\,\frac{v^4}{\Lambda^4}\,C^{(2+4)}_{l^2q^2D^2}\,, \qquad ({\rm C.13})$$
$$F^{LR,dd}_{V(1,0)} = \frac{v^4}{\Lambda^4}\,C^{(1+2)}_{l^2d^2D^2}\,, \qquad F^{LR,dd}_{V(0,1)} = 2\,\frac{v^4}{\Lambda^4}\,C^{(2)}_{l^2d^2D^2}\,, \qquad ({\rm C.14})$$
$$F^{RL,dd}_{V(1,0)} = \frac{v^4}{\Lambda^4}\,C^{(1+2)}_{e^2q^2D^2}\,, \qquad F^{RL,dd}_{V(0,1)} = 2\,\frac{v^4}{\Lambda^4}\,C^{(2)}_{e^2q^2D^2}\,, \qquad ({\rm C.15})$$
$$F^{RR,dd}_{V(1,0)} = \frac{v^4}{\Lambda^4}\,C^{(1+2)}_{e^2d^2D^2}\,, \qquad F^{RR,dd}_{V(0,1)} = 2\,\frac{v^4}{\Lambda^4}\,C^{(2)}_{e^2d^2D^2}\,, \qquad ({\rm C.16})$$
and the pole residues are given by:
$$\delta S^{LL,dd}_{(Z)} = -\frac{2m_Z^2}{\Lambda^2}\left(g^L_l\,C^{(1+3)}_{Hq} + g^L_d\,C^{(1+3)}_{Hl}\right) + \frac{v^2 m_Z^2}{\Lambda^4}\,C^{(1+3)}_{Hl}C^{(1+3)}_{Hq} - \frac{v^2 m_Z^2}{\Lambda^4}\left[g^L_l\left(C^{(1)}_{q^2H^4D} + 2C^{(2)}_{q^2H^4D}\right) + g^L_d\left(C^{(1)}_{l^2H^4D} + 2C^{(2)}_{l^2H^4D}\right)\right] + \frac{m_Z^4}{2\Lambda^4}\left(g^L_l\,C^{(1-2+3-4)}_{q^2H^2D^3} + g^L_d\,C^{(1-2+3-4)}_{l^2H^2D^3}\right),$$
$$\delta S^{LR,dd}_{(Z)} = -\frac{2m_Z^2}{\Lambda^2}\left(g^L_l\,C_{Hd} + g^R_d\,C^{(1+3)}_{Hl}\right) + \frac{v^2 m_Z^2}{\Lambda^4}\,C^{(1+3)}_{Hl}C_{Hd} - \frac{v^2 m_Z^2}{\Lambda^4}\left[g^L_l\,C_{d^2H^4D} + g^R_d\left(C^{(1)}_{l^2H^4D} + 2C^{(2)}_{l^2H^4D}\right)\right] + \frac{m_Z^4}{2\Lambda^4}\left(g^L_l\,C^{(1-2)}_{d^2H^2D^3} + g^R_d\,C^{(1-2+3-4)}_{l^2H^2D^3}\right),$$
$$\delta S^{RL,dd}_{(Z)} = -\frac{2m_Z^2}{\Lambda^2}\left(g^R_l\,C^{(1+3)}_{Hq} + g^L_d\,C_{He}\right) + \frac{v^2 m_Z^2}{\Lambda^4}\,C_{He}C^{(1+3)}_{Hq} - \frac{v^2 m_Z^2}{\Lambda^4}\left[g^R_l\left(C^{(1)}_{q^2H^4D} + 2C^{(2)}_{q^2H^4D}\right) + g^L_d\,C_{e^2H^4D}\right] + \frac{m_Z^4}{2\Lambda^4}\left(g^R_l\,C^{(1-2+3-4)}_{q^2H^2D^3} + g^L_d\,C^{(1-2)}_{e^2H^2D^3}\right),$$
$$\delta S^{RR,dd}_{(Z)} = -\frac{2m_Z^2}{\Lambda^2}\left(g^R_l\,C_{Hd} + g^R_d\,C_{He}\right) + \frac{v^2 m_Z^2}{\Lambda^4}\left(C_{He}C_{Hd} - g^R_l\,C_{d^2H^4D} - g^R_d\,C_{e^2H^4D}\right) + \frac{m_Z^4}{2\Lambda^4}\left(g^R_l\,C^{(1-2)}_{d^2H^2D^3} + g^R_d\,C^{(1-2)}_{e^2H^2D^3}\right). \qquad ({\rm C.17})$$

Charged currents: $\bar u_i d_j \to \ell^-_\alpha \bar\nu_\beta$
The matching of the leading form-factor coefficients $F^{LL(LR)}_{V(0,0)}$ is given by
$$\big[F^{LL,ud}_{V(0,0)}\big]_{\alpha\beta ij} = 2\,\frac{v^2}{\Lambda^2}\,\big[C^{(3)}_{lq}\big]_{\alpha\beta ij} + \frac{v^4}{\Lambda^4}\left(\big[C^{(3)}_{l^2q^2H^2}\big]_{\alpha\beta ij} + i\,(1-\delta_{ij})\,\big[C^{(5)}_{l^2q^2H^2}\big]_{\alpha\beta ij}\right) - \frac{g_2^2\,v^4}{2\Lambda^4}\left[\left(C^{(3)}_{l^2H^2D^3} - C^{(4)\dagger}_{l^2H^2D^3}\right)\mathbb{1}_q + \left(C^{(3)\dagger}_{q^2H^2D^3} - C^{(4)}_{q^2H^2D^3}\right)\mathbb{1}_l\right]_{\alpha\beta ij}. \qquad ({\rm C.18})$$
Notice that the $d=8$ operator
$$O^{(5)}_{l^2q^2H^2} = \epsilon^{IJK}(\bar l\gamma^\mu\tau^I l)(\bar q\gamma_\mu\tau^J q)(H^\dagger\tau^K H)$$
only contributes to flavor-violating processes and therefore does not enter neutral currents, but does affect charged currents such as $u\bar s\to\ell^\pm\nu$ at order $O(1/\Lambda^4)$. The effects of this operator are small because they only interfere with CKM-suppressed transitions in the SM. For the higher-order regular coefficients we obtain the following matching to the SMEFT:
$$F^{LL,ud}_{V(1,0)} = 2\,\frac{v^4}{\Lambda^4}\,C^{(3+4)}_{l^2q^2D^2}\,, \qquad F^{LL,ud}_{V(0,1)} = 4\,\frac{v^4}{\Lambda^4}\,C^{(4)}_{l^2q^2D^2}\,. \qquad ({\rm C.19})$$
The matching of the pole residues is given by:
$$\delta S^{LL,ud}_{(W)} = \frac{g_2^2\,v^2}{\Lambda^2}\left(C^{(3)\dagger}_{Hl}\,\mathbb{1}_q + C^{(3)}_{Hq}\,\mathbb{1}_l\right) + \frac{g_2^2\,v^4}{\Lambda^4}\,C^{(3)\dagger}_{Hl}C^{(3)}_{Hq} + \frac{g_2^2\,v^4}{2\Lambda^4}\left(C^{(2^\dagger-3^\dagger+4)}_{l^2H^4D}\,\mathbb{1}_q + C^{(2-3+4^\dagger)}_{q^2H^4D}\,\mathbb{1}_l\right) - \frac{g_2^2\,v^2 m_W^2}{2\Lambda^4}\left[\left(C^{(3)}_{l^2H^2D^3} - C^{(4)\dagger}_{l^2H^2D^3}\right)\mathbb{1}_q + \left(C^{(3)\dagger}_{q^2H^2D^3} - C^{(4)}_{q^2H^2D^3}\right)\mathbb{1}_l\right], \qquad ({\rm C.20})$$
$$\delta S^{LR,ud}_{(W)} = \frac{g_2^2\,v^2}{4\Lambda^2}\,C_{Hud}\,\mathbb{1}_l\,,$$
with $\delta S^{LL,du}_{(W)} = \delta S^{LL,ud\,\dagger}_{(W)}$ and $\delta S^{LR,du}_{(W)} = \delta S^{LR,ud\,\dagger}_{(W)}$.
C.3 Dipole form-factors
Neutral currents
The matching conditions for the $Z$ boson and photon pole coefficients are given by:
$$S^{RR,qq}_{D_l(\gamma)} = S^{RL,qq}_{D_l(\gamma)} = -\sqrt{2}\,e\,Q_q\,\frac{v^2}{\Lambda^2}\,(s_w C_{eW} - c_w C_{eB})\,\mathbb{1}_q\,, \qquad ({\rm C.21})$$
$$S^{LR,qq}_{D_l(\gamma)} = S^{LL,qq}_{D_l(\gamma)} = \sqrt{2}\,e\,Q_q\,\frac{v^2}{\Lambda^2}\,\big(s_w C^\dagger_{eW} - c_w C^\dagger_{eB}\big)\,\mathbb{1}_q\,, \qquad ({\rm C.22})$$
$$S^{RR,qq}_{D_l(Z)} = S^{RL,qq}_{D_l(Z)} = -\sqrt{2}\,g^{R/L}_q\,\frac{v^2}{\Lambda^2}\,(c_w C_{eW} + s_w C_{eB})\,\mathbb{1}_q\,, \qquad ({\rm C.23})$$
$$S^{LR,qq}_{D_l(Z)} = S^{LL,qq}_{D_l(Z)} = \sqrt{2}\,g^{R/L}_q\,\frac{v^2}{\Lambda^2}\,\big(c_w C^\dagger_{eW} + s_w C^\dagger_{eB}\big)\,\mathbb{1}_q\,, \qquad ({\rm C.24})$$
$$S^{RR,dd}_{D_q(\gamma)} = S^{LR,dd}_{D_q(\gamma)} = \sqrt{2}\,e\,Q_e\,\frac{v^2}{\Lambda^2}\,(s_w C_{dW} - c_w C_{dB})\,\mathbb{1}_l\,, \qquad ({\rm C.25})$$
$$S^{RL,dd}_{D_q(\gamma)} = S^{LL,dd}_{D_q(\gamma)} = -\sqrt{2}\,e\,Q_e\,\frac{v^2}{\Lambda^2}\,\big(s_w C^\dagger_{dW} - c_w C^\dagger_{dB}\big)\,\mathbb{1}_l\,, \qquad ({\rm C.26})$$
$$S^{RR,dd}_{D_q(Z)} = S^{LR,dd}_{D_q(Z)} = \sqrt{2}\,g^{R/L}_l\,\frac{v^2}{\Lambda^2}\,(c_w C_{dW} + s_w C_{dB})\,\mathbb{1}_l\,, \qquad ({\rm C.27})$$
$$S^{RL,dd}_{D_q(Z)} = S^{LL,dd}_{D_q(Z)} = -\sqrt{2}\,g^{R/L}_l\,\frac{v^2}{\Lambda^2}\,\big(c_w C^\dagger_{dW} + s_w C^\dagger_{dB}\big)\,\mathbb{1}_l\,, \qquad ({\rm C.28})$$
$$S^{RR,uu}_{D_q(\gamma)} = S^{LR,uu}_{D_q(\gamma)} = -\sqrt{2}\,e\,Q_e\,\frac{v^2}{\Lambda^2}\,(s_w C_{uW} + c_w C_{uB})\,\mathbb{1}_l\,, \qquad ({\rm C.29})$$
$$S^{RL,uu}_{D_q(\gamma)} = S^{LL,uu}_{D_q(\gamma)} = \sqrt{2}\,e\,Q_e\,\frac{v^2}{\Lambda^2}\,\big(s_w C^\dagger_{uW} + c_w C^\dagger_{uB}\big)\,\mathbb{1}_l\,, \qquad ({\rm C.30})$$
$$S^{RR,uu}_{D_q(Z)} = S^{LR,uu}_{D_q(Z)} = -\sqrt{2}\,g^{R/L}_l\,\frac{v^2}{\Lambda^2}\,(c_w C_{uW} - s_w C_{uB})\,\mathbb{1}_l\,, \qquad ({\rm C.31})$$
$$S^{RL,uu}_{D_q(Z)} = S^{LL,uu}_{D_q(Z)} = \sqrt{2}\,g^{R/L}_l\,\frac{v^2}{\Lambda^2}\,\big(c_w C^\dagger_{uW} - s_w C^\dagger_{uB}\big)\,\mathbb{1}_l\,. \qquad ({\rm C.32})$$
Charged currents
The W boson pole coefficients read:
$$S^{RL,ud}_{D_l(W)} = \sqrt{2}\,g\,\frac{v^2}{\Lambda^2}\,C_{eW}\,\mathbb{1}_q\,, \qquad S^{LL,du}_{D_l(W)} = -\sqrt{2}\,g\,\frac{v^2}{\Lambda^2}\,C^\dagger_{eW}\,\mathbb{1}_q\,, \qquad ({\rm C.33})$$
$$S^{LR,ud}_{D_q(W)} = -\sqrt{2}\,g\,\frac{v^2}{\Lambda^2}\,C_{dW}\,\mathbb{1}_l\,, \qquad S^{LL,du}_{D_q(W)} = \sqrt{2}\,g\,\frac{v^2}{\Lambda^2}\,C^\dagger_{dW}\,\mathbb{1}_l\,, \qquad ({\rm C.34})$$
$$S^{LR,du}_{D_q(W)} = -\sqrt{2}\,g\,\frac{v^2}{\Lambda^2}\,C_{uW}\,\mathbb{1}_l\,, \qquad S^{LL,ud}_{D_q(W)} = \sqrt{2}\,g\,\frac{v^2}{\Lambda^2}\,C^\dagger_{uW}\,\mathbb{1}_l\,. \qquad ({\rm C.35})$$
D Form-factors in Concrete UV Models
We now give the matching relations to the pole form-factors for the tree-level mediators collected in Table 3.2. The poles below are defined in terms of the mass and width of each mediator as
$$\Omega_i \equiv m_i^2 - i\,m_i\Gamma_i\,.$$
D.1 Scalar form-factors

D.2 Vector form-factors

Neutral currents: $\bar u_i u_j \to \ell^-_\alpha \ell^+_\beta$
$$\frac{1}{v^2}\big[F^{LR,uu}_{V,\,{\rm Poles}}\big]_{\alpha\beta ij} = \frac{[g^l_1]_{\alpha\beta}\,[g^u_1]_{ij}}{\hat s-\Omega_{Z'}} + \frac12\,\frac{[y^L_2]_{i\beta}[y^L_2]^*_{j\alpha}}{\hat t-\Omega_{R^{(5/3)}_2}} + \frac{[\tilde x^L_2]_{i\beta}[\tilde x^L_2]^*_{j\alpha}}{\hat u-\Omega_{\tilde V_2}}\,,$$
$$\frac{1}{v^2}\big[F^{RR,uu}_{V,\,{\rm Poles}}\big]_{\alpha\beta ij} = \frac{[g^e_1]_{\alpha\beta}\,[g^u_1]_{ij}}{\hat s-\Omega_{Z'}} + \frac{[\tilde x^R_1]_{i\beta}[\tilde x^R_1]^*_{j\alpha}}{\hat t-\Omega_{\tilde U_1}} - \frac12\,\frac{[y^R_1]_{j\beta}[y^R_1]^*_{i\alpha}}{\hat u-\Omega_{S_1}}\,. \qquad ({\rm D.12})$$

Neutral currents: $\bar d_i d_j \to \ell^-_\alpha \ell^+_\beta$
$$\frac{1}{v^2}\big[F^{LL,dd}_{V,\,{\rm Poles}}\big]_{\alpha\beta ij} = \frac{[g^l_1]_{\alpha\beta}\,[g^q_1]_{ij}}{\hat s-\Omega_{Z'}} + \frac{[g^l_3]_{\alpha\beta}\,[g^q_3]_{ij}}{\hat s-\Omega_{W'}} + \frac{[x^L_1]_{i\beta}[x^L_1]^*_{j\alpha}}{\hat t-\Omega_{U_1}} + \frac{[x^L_3]_{i\beta}[x^L_3]^*_{j\alpha}}{\hat t-\Omega_{U^{(2/3)}_3}} - \frac{[y^L_3]_{j\beta}[y^L_3]^*_{i\alpha}}{\hat u-\Omega_{S^{(4/3)}_3}}\,, \qquad ({\rm D.13})$$
$$\frac{1}{v^2}\big[F^{LR,dd}_{V,\,{\rm Poles}}\big]_{\alpha\beta ij} = \frac{[g^l_1]_{\alpha\beta}\,[g^d_1]_{ij}}{\hat s-\Omega_{Z'}} + \frac12\,\frac{[\tilde y^L_2]_{i\beta}[\tilde y^L_2]^*_{j\alpha}}{\hat t-\Omega_{\tilde R^{(2/3)}_2}} - \frac{[x^L_2]_{j\beta}[x^L_2]^*_{i\alpha}}{\hat u-\Omega_{V^{(4/3)}_2}}\,, \qquad ({\rm D.14})$$
$$\frac{1}{v^2}\big[F^{RL,dd}_{V,\,{\rm Poles}}\big]_{\alpha\beta ij} = \frac{[g^e_1]_{\alpha\beta}\,[g^q_1]_{ij}}{\hat s-\Omega_{Z'}} + \frac12\,\frac{[y^R_2]_{i\beta}[y^R_2]^*_{j\alpha}}{\hat t-\Omega_{R^{(2/3)}_2}} - \frac{[x^R_2]_{j\beta}[x^R_2]^*_{i\alpha}}{\hat u-\Omega_{V^{(4/3)}_2}}\,, \qquad ({\rm D.15})$$
$$\frac{1}{v^2}\big[F^{RR,dd}_{V,\,{\rm Poles}}\big]_{\alpha\beta ij} = \frac{[g^e_1]_{\alpha\beta}\,[g^d_1]_{ij}}{\hat s-\Omega_{Z'}} + \frac{[x^R_1]_{i\beta}[x^R_1]^*_{j\alpha}}{\hat t-\Omega_{U_1}} + \frac12\,\frac{[\tilde y^R_1]_{j\beta}[\tilde y^R_1]^*_{i\alpha}}{\hat u-\Omega_{\tilde S_1}}\,. \qquad ({\rm D.16})$$

Charged currents: $\bar u_i d_j \to \ell^-_\alpha \bar\nu_\beta$
$$\frac{1}{v^2}\big[F^{LL,ud}_{V,\,{\rm Poles}}\big]_{\alpha\beta ij} = 2\,\frac{[g^l_3]_{\alpha\beta}\,[g^q_3]_{ij}}{\hat s-\Omega_{W'}} + \frac{[x^L_1]_{i\beta}[x^L_1]^*_{j\alpha}}{\hat t-\Omega_{U_1}} - \frac{[x^L_3]_{i\beta}[x^L_3]^*_{j\alpha}}{\hat t-\Omega_{U^{(2/3)}_3}} + \frac12\,\frac{[y^L_1]_{j\beta}[y^L_1]^*_{i\alpha}}{\hat u-\Omega_{S_1}} - \frac12\,\frac{[y^L_3]_{j\beta}[y^L_3]^*_{i\alpha}}{\hat u-\Omega_{S^{(1/3)}_3}}\,, \qquad ({\rm D.17})$$
$$\frac{1}{v^2}\big[F^{RR,ud}_{V,\,{\rm Poles}}\big]_{\alpha\beta ij} = \frac{[\tilde g^l_1]_{\alpha\beta}\,[\tilde g^q_1]_{ij}}{\hat s-\Omega_{\tilde W}} + \frac{[x^R_1]^*_{j\alpha}[x^R_1]_{i\beta}}{\hat t-\Omega_{U_1}} - \frac12\,\frac{[\bar y^R_1]_{j\beta}[y^R_1]^*_{i\alpha}}{\hat u-\Omega_{S_1}}\,. \qquad ({\rm D.20})$$
D.3 Tensor form-factors
Neutral currents: $\bar u_i u_j \to \ell^-_\alpha \ell^+_\beta$
E Form-factors in the νSMEFT
The matching of the form-factors to the dimension-6 operators involving light right-handed neutrinos reads:
$$F^{RR,ud}_{V(0,0)} = \frac{v^2}{\Lambda^2}\,C_{eNud}\,, \qquad ({\rm E.1})$$
$$F^{RL,ud}_{S(0,0)} = \frac{v^2}{\Lambda^2}\,C_{lNuq}\,, \qquad ({\rm E.2})$$
$$F^{RR,ud}_{S(0,0)} = -\frac{v^2}{\Lambda^2}\,C^{(1)}_{lNqd}\,, \qquad ({\rm E.3})$$
$$F^{RR,ud}_{T(0,0)} = -\frac{v^2}{\Lambda^2}\,C^{(3)}_{lNqd}\,. \qquad ({\rm E.4})$$
F Semileptonic SMEFT operators
The SMEFT operators of dimension $d=6$ [43] and $d=8$ [70] that are relevant to our study are defined in Tables F.1-F.3.
G LHC Limits on SMEFT operators
We report in this Appendix the high-$p_T$ limits derived on $d=6$ semileptonic operators, with all possible flavor indices, assuming a single coefficient at a time. These results are reported at 95% CL for the $\tau\tau$, $\mu\mu$ and $ee$ channels in the figures collected in this Appendix.

Table F.1: SMEFT $d=6$ operators contributing to the processes $pp\to\ell\ell$ and $pp\to\ell\nu$.

$d=6$, $\psi^4$ (operator | definition | $pp\to\ell\ell$ | $pp\to\ell\nu$):
| $O^{(1)}_{lq}$ | $(\bar l_\alpha\gamma_\mu l_\beta)(\bar q_i\gamma^\mu q_j)$ | ✓ | - |
| $O^{(3)}_{lq}$ | $(\bar l_\alpha\gamma_\mu\tau^I l_\beta)(\bar q_i\gamma^\mu\tau^I q_j)$ | ✓ | ✓ |
| $O_{lu}$ | $(\bar l_\alpha\gamma_\mu l_\beta)(\bar u_i\gamma^\mu u_j)$ | ✓ | - |
| $O_{ld}$ | $(\bar l_\alpha\gamma_\mu l_\beta)(\bar d_i\gamma^\mu d_j)$ | ✓ | - |
| $O_{qe}$ | $(\bar e_\alpha\gamma_\mu e_\beta)(\bar q_i\gamma^\mu q_j)$ | ✓ | - |
| $O_{eu}$ | $(\bar e_\alpha\gamma_\mu e_\beta)(\bar u_i\gamma^\mu u_j)$ | ✓ | - |
| $O_{ed}$ | $(\bar e_\alpha\gamma_\mu e_\beta)(\bar d_i\gamma^\mu d_j)$ | ✓ | - |
| $O_{ledq}+{\rm h.c.}$ | $(\bar l_\alpha e_\beta)(\bar d_i q_j)$ | ✓ | ✓ |
| $O^{(1)}_{lequ}+{\rm h.c.}$ | $(\bar l_\alpha e_\beta)\,\varepsilon\,(\bar q_i u_j)$ | ✓ | ✓ |
| $O^{(3)}_{lequ}+{\rm h.c.}$ | $(\bar l_\alpha\sigma_{\mu\nu}e_\beta)\,\varepsilon\,(\bar q_i\sigma^{\mu\nu}u_j)$ | ✓ | ✓ |

$d=6$, $\psi^2H^2D$:
| $O^{(1)}_{Hl}$ | $(\bar l_\alpha\gamma^\mu l_\beta)(H^\dagger i\overleftrightarrow{D}_\mu H)$ | ✓ | - |
| $O^{(3)}_{Hl}$ | $(\bar l_\alpha\gamma^\mu\tau^I l_\beta)(H^\dagger i\overleftrightarrow{D}^I_\mu H)$ | ✓ | ✓ |
| $O^{(1)}_{Hq}$ | $(\bar q_i\gamma^\mu q_j)(H^\dagger i\overleftrightarrow{D}_\mu H)$ | ✓ | - |
| $O^{(3)}_{Hq}$ | $(\bar q_i\gamma^\mu\tau^I q_j)(H^\dagger i\overleftrightarrow{D}^I_\mu H)$ | ✓ | ✓ |
| $O_{He}$ | $(\bar e_\alpha\gamma^\mu e_\beta)(H^\dagger i\overleftrightarrow{D}_\mu H)$ | ✓ | - |
| $O_{Hu}$ | $(\bar u_i\gamma^\mu u_j)(H^\dagger i\overleftrightarrow{D}_\mu H)$ | ✓ | - |
| $O_{Hd}$ | $(\bar d_i\gamma^\mu d_j)(H^\dagger i\overleftrightarrow{D}_\mu H)$ | ✓ | - |
| $O_{Hud}+{\rm h.c.}$ | $(\bar u_i\gamma^\mu d_j)(\widetilde H^\dagger iD_\mu H)$ | - | ✓ |

$d=6$, $\psi^2XH+{\rm h.c.}$:
| $O_{eW}$ | $(\bar l_\alpha\sigma^{\mu\nu}e_\beta)\,\tau^I H\,W^I_{\mu\nu}$ | ✓ | ✓ |
| $O_{eB}$ | $(\bar l_\alpha\sigma^{\mu\nu}e_\beta)\,H\,B_{\mu\nu}$ | ✓ | - |
| $O_{uW}$ | $(\bar q_i\sigma^{\mu\nu}u_j)\,\tau^I\widetilde H\,W^I_{\mu\nu}$ | ✓ | ✓ |
| $O_{uB}$ | $(\bar q_i\sigma^{\mu\nu}u_j)\,\widetilde H\,B_{\mu\nu}$ | ✓ | - |
| $O_{dW}$ | $(\bar q_i\sigma^{\mu\nu}d_j)\,\tau^I H\,W^I_{\mu\nu}$ | ✓ | ✓ |
| $O_{dB}$ | $(\bar q_i\sigma^{\mu\nu}d_j)\,H\,B_{\mu\nu}$ | ✓ | - |

Table F.2: SMEFT $d=8$ four-fermion operators that contribute to the processes $pp\to\ell\ell$ and $pp\to\ell\nu$.

$d=8$, $\psi^4H^2$:
| $O^{(1)}_{l^2q^2H^2}$ | $(\bar l_\alpha\gamma_\mu l_\beta)(\bar q_i\gamma^\mu q_j)(H^\dagger H)$ | ✓ | - |
| $O^{(2)}_{l^2q^2H^2}$ | $(\bar l_\alpha\gamma_\mu\tau^I l_\beta)(\bar q_i\gamma^\mu q_j)(H^\dagger\tau^I H)$ | ✓ | - |
| $O^{(3)}_{l^2q^2H^2}$ | $(\bar l_\alpha\gamma_\mu\tau^I l_\beta)(\bar q_i\gamma^\mu\tau^I q_j)(H^\dagger H)$ | ✓ | ✓ |
| $O^{(4)}_{l^2q^2H^2}$ | $(\bar l_\alpha\gamma_\mu l_\beta)(\bar q_i\gamma^\mu\tau^I q_j)(H^\dagger\tau^I H)$ | ✓ | - |
| $O^{(5)}_{l^2q^2H^2}$ | $\epsilon^{IJK}(\bar l_\alpha\gamma_\mu\tau^I l_\beta)(\bar q_i\gamma^\mu\tau^J q_j)(H^\dagger\tau^K H)$ | - | ✓ |
| $O^{(1)}_{l^2u^2H^2}$ | $(\bar l_\alpha\gamma_\mu l_\beta)(\bar u_i\gamma^\mu u_j)(H^\dagger H)$ | ✓ | - |
| $O^{(2)}_{l^2u^2H^2}$ | $(\bar l_\alpha\gamma_\mu\tau^I l_\beta)(\bar u_i\gamma^\mu u_j)(H^\dagger\tau^I H)$ | ✓ | - |
| $O^{(1,2)}_{l^2d^2H^2}$ | analogous, with $\bar d_i\gamma^\mu d_j$ | ✓ | - |
| $O^{(1)}_{e^2q^2H^2}$ | $(\bar e_\alpha\gamma_\mu e_\beta)(\bar q_i\gamma^\mu q_j)(H^\dagger H)$ | ✓ | - |
| $O^{(2)}_{e^2q^2H^2}$ | $(\bar e_\alpha\gamma_\mu e_\beta)(\bar q_i\gamma^\mu\tau^I q_j)(H^\dagger\tau^I H)$ | ✓ | - |
| $O_{e^2u^2H^2}$ | $(\bar e_\alpha\gamma_\mu e_\beta)(\bar u_i\gamma^\mu u_j)(H^\dagger H)$ | ✓ | - |
| $O_{e^2d^2H^2}$ | $(\bar e_\alpha\gamma_\mu e_\beta)(\bar d_i\gamma^\mu d_j)(H^\dagger H)$ | ✓ | - |

$d=8$, $\psi^4D^2$:
| $O^{(1)}_{l^2q^2D^2}$ | $D^\nu(\bar l_\alpha\gamma_\mu l_\beta)\,D_\nu(\bar q_i\gamma^\mu q_j)$ | ✓ | - |
| $O^{(2)}_{l^2q^2D^2}$ | $(\bar l_\alpha\gamma_\mu\overleftrightarrow{D}_\nu l_\beta)(\bar q_i\gamma^\mu\overleftrightarrow{D}^\nu q_j)$ | ✓ | - |
| $O^{(3)}_{l^2q^2D^2}$ | $D^\nu(\bar l_\alpha\gamma_\mu\tau^I l_\beta)\,D_\nu(\bar q_i\gamma^\mu\tau^I q_j)$ | ✓ | ✓ |
| $O^{(4)}_{l^2q^2D^2}$ | $(\bar l_\alpha\gamma_\mu\overleftrightarrow{D}^{I\nu} l_\beta)(\bar q_i\gamma^\mu\overleftrightarrow{D}^I_\nu q_j)$ | ✓ | ✓ |
| $O^{(1,2)}_{l^2u^2D^2}$, $O^{(1,2)}_{l^2d^2D^2}$, $O^{(1,2)}_{e^2q^2D^2}$, $O^{(1,2)}_{e^2u^2D^2}$, $O^{(1,2)}_{e^2d^2D^2}$ | analogous, with the corresponding fermion currents | ✓ | - |

Table F.3: SMEFT $d=8$ two-fermion operators, in the basis of Ref. [70], that contribute to the processes $pp\to\ell\ell$ and $pp\to\ell\nu$.

$d=8$, $\psi^2H^4D$:
| $O^{(1)}_{l^2H^4D}$ | $i(\bar l_\alpha\gamma^\mu l_\beta)(H^\dagger\overleftrightarrow{D}_\mu H)(H^\dagger H)$ | ✓ | - |
| $O^{(2)}_{l^2H^4D}$ | $i(\bar l_\alpha\gamma^\mu\tau^I l_\beta)\big[(H^\dagger\overleftrightarrow{D}^I_\mu H)(H^\dagger H) + (H^\dagger\overleftrightarrow{D}_\mu H)(H^\dagger\tau^I H)\big]$ | ✓ | ✓ |
| $O^{(3)}_{l^2H^4D}$ | $\epsilon^{IJK}(\bar l_\alpha\gamma^\mu\tau^I l_\beta)(H^\dagger\overleftrightarrow{D}^J_\mu H)(H^\dagger\tau^K H)$ | ✓ | - |
| $O^{(4)}_{l^2H^4D}$ | $\epsilon^{IJK}(\bar l_\alpha\gamma^\mu\tau^I l_\beta)(H^\dagger\tau^J H)(D_\mu H)^\dagger\tau^K H$ | ✓ | - |
| $O^{(1..4)}_{q^2H^4D}$ | analogous, with quark doublets $\bar q_i\ldots q_j$ (marks as for $l^2H^4D$) | | |
| $O_{e^2H^4D}$ | $i(\bar e_\alpha\gamma^\mu e_\beta)(H^\dagger\overleftrightarrow{D}_\mu H)(H^\dagger H)$ | ✓ | - |
| $O_{u^2H^4D}$, $O_{d^2H^4D}$ | analogous, with $\bar u_i\ldots u_j$ and $\bar d_i\ldots d_j$ | ✓ | - |

$d=8$, $\psi^2H^2D^3$:
| $O^{(1)}_{l^2H^2D^3}$ | $i(\bar l_\alpha\gamma^\mu D^\nu l_\beta)\,(D_{(\mu}D_{\nu)}H)^\dagger H$ | ✓ | - |
| $O^{(2)}_{l^2H^2D^3}$ | $i(\bar l_\alpha\gamma^\mu D^\nu l_\beta)\,H^\dagger(D_{(\mu}D_{\nu)}H)$ | ✓ | - |
| $O^{(3)}_{l^2H^2D^3}$ | $i(\bar l_\alpha\gamma^\mu\tau^I D^\nu l_\beta)\,(D_{(\mu}D_{\nu)}H)^\dagger\tau^I H$ | ✓ | ✓ |
| $O^{(4)}_{l^2H^2D^3}$ | $i(\bar l_\alpha\gamma^\mu\tau^I D^\nu l_\beta)\,H^\dagger\tau^I(D_{(\mu}D_{\nu)}H)$ | ✓ | ✓ |
| $O^{(1)}_{q^2H^2D^3}$ | $i(\bar q_i\gamma^\mu D^\nu q_j)\,(D_{(\mu}D_{\nu)}H)^\dagger H$ | ✓ | - |
| $O^{(2)}_{q^2H^2D^3}$ | $i(\bar q_i\gamma^\mu D^\nu q_j)\,H^\dagger(D_{(\mu}D_{\nu)}H)$ | ✓ | - |
| $O^{(3)}_{q^2H^2D^3}$ | $i(\bar q_i\gamma^\mu\tau^I D^\nu q_j)\,(D_{(\mu}D_{\nu)}H)^\dagger\tau^I H$ | ✓ | ✓ |
| $O^{(4)}_{q^2H^2D^3}$ | $i(\bar q_i\gamma^\mu\tau^I D^\nu q_j)\,H^\dagger\tau^I(D_{(\mu}D_{\nu)}H)$ | ✓ | ✓ |
| $O^{(1)}_{u^2H^2D^3}$ | $i(\bar u_i\gamma^\mu D^\nu u_j)\,(D_{(\mu}D_{\nu)}H)^\dagger H$ | ✓ | - |
| $O^{(2)}_{u^2H^2D^3}$ | $i(\bar u_i\gamma^\mu D^\nu u_j)\,H^\dagger(D_{(\mu}D_{\nu)}H)$ | ✓ | - |
| $O^{(1)}_{d^2H^2D^3}$ | $i(\bar d_i\gamma^\mu D^\nu d_j)\,(D_{(\mu}D_{\nu)}H)^\dagger H$ | ✓ | - |
| $O^{(2)}_{d^2H^2D^3}$ | $i(\bar d_i\gamma^\mu D^\nu d_j)\,H^\dagger(D_{(\mu}D_{\nu)}H)$ | ✓ | - |
Figure 2.1: Neutral and charged Drell-Yan production processes at proton-proton colliders.
where $O^{(k)}_d$ ($O^{(k)}_d + {\rm h.c.}$) denote Hermitian (non-Hermitian) operators of dimension $d>4$, and the ultraviolet physics is encoded in the Wilson coefficients $C^{(k)}_d$. The complete classification of SMEFT operators for $d=6$ and $d=8$ can be found in Refs. [43] and [70], respectively.
Figure 3.1: Feynman diagrams of the leading contributions in the SM (a) and in the SMEFT (b)-(e) to the partonic processes $q\bar q\to\ell\ell$ and $u\bar d\to\ell\nu$, assuming an EFT expansion up to $O(1/\Lambda^4)$. The green mediator represents the exchange of the SM gauge bosons $V\in\{\gamma,Z,W\}$ and the black vertices are insertions of the SMEFT effective interaction.
The term $[C_{\psi^2H^2D}]^2$ in Eq. (3.12) corresponds to double vertex insertions of the corresponding dimension-6 operators, as depicted in diagram (e) of Fig. 3.1. The ellipses indicate contributions from the neglected higher-dimensional operators. The precise matching of the SMEFT to the vector form-factors can be found in Appendix C.2.
Figure 4.1: LHC constraints on the SMEFT Wilson coefficients $C^{(1,3)}_{lq}$ with different flavor indices at 95% CL, where a single coefficient is turned on at a time. Quark-flavor indices are denoted by $ij$ and are specified on the left-hand side of each plot. All coefficients are assumed to be real and contributions to the cross-section up to and including $O(1/\Lambda^4)$ are considered. The scale $\Lambda$ is fixed to 1 TeV.
Figure 4.2: LHC constraints on the SMEFT dipole Wilson coefficients with different flavor indices at 95% CL, where a single coefficient is turned on at a time. Quark (lepton) flavor indices are denoted by $ij$ ($\alpha\beta$) and are specified on the left-hand side of each plot. All coefficients are assumed to be real and contributions to the cross-section at order $O(1/\Lambda^4)$ are considered. The scale $\Lambda$ is fixed to 1 TeV.
Figure 4.3: The expected sensitivity of individual $m_{\mu\mu}$ bins to the dimension-6 flavor-conserving operators $O^{(1)}_{lq}$ and $O_{dW}$, using the jackknife ratio $R_{\rm Jack}$ (see text for details). The dashed and solid lines correspond to the EFT truncation at O(1/Λ²) and O(1/Λ⁴), respectively.
SMEFT truncations at O(1/Λ²) and O(1/Λ⁴)

To assess the sensitivity of the LHC searches to the SMEFT truncation at O(1/Λ²) or O(1/Λ⁴), we investigate the impact on the upper limits on dimension-6 operators when specific portions of the Drell-Yan data are removed from the analysis. For definiteness, we focus on the dimuon searches and on single-parameter constraints for the vector operators $[O^{(1)}_{lq}]_{22ii}$ and the quark dipole $[O_{dW}]_{ii}$ for i = 1, 2, 3.
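A minimal sketch of such a leave-one-bin-out (jackknife) study is given below; it illustrates the general idea rather than the exact definition of $R_{\rm Jack}$ used in Fig. 4.3. The list `chi2_per_bin` of per-bin χ²(C) functions and the coefficient array `grid` are hypothetical inputs.

```python
import numpy as np

def limit_95(chi2_fn, grid):
    """Smallest |C| excluded at 95% CL on a 1D grid of coefficient values."""
    chi2 = np.array([chi2_fn(c) for c in grid])
    excluded = grid[chi2 - chi2.min() > 3.84]  # Δχ² > 3.84 <-> 95% CL for 1 dof
    return np.abs(excluded).min() if excluded.size else np.inf

def jackknife_ratios(chi2_per_bin, grid):
    """Leave-one-bin-out sensitivity: ratio of the jackknifed limit to the full one."""
    full = limit_95(lambda c: sum(f(c) for f in chi2_per_bin), grid)
    ratios = []
    for i in range(len(chi2_per_bin)):
        reduced = [f for j, f in enumerate(chi2_per_bin) if j != i]
        ratios.append(limit_95(lambda c: sum(f(c) for f in reduced), grid) / full)
    return np.array(ratios)
```

Bins whose removal changes the limit the most (ratios far from 1) are the ones driving the bound.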
Figure 4.4: Clipped expected limits from LHC dimuon searches for the flavor-conserving operators $O^{(1)}_{lq}$ and $O_{dW}$ as a function of the sliding maximal scale $M_{\rm cut}$. The dashed and solid contours correspond to the EFT truncation at O(1/Λ²) and O(1/Λ⁴), respectively.
The asymmetry is much less pronounced for operators involving second- or third-generation quarks, where the New Physics squared components dominate. For the operators with flavor-changing quark indices i < j the limits are pretty much symmetric, except for (i, j) = (1, 2). When turning on one of these operators, the cross-section receives a positive-definite contribution from the New Physics squared piece coming from the flavor-changing mode $\bar q_i q_j \to \ell^-_\alpha \ell^+_\alpha$, as well as subleading contributions from the CKM-suppressed flavor-conserving modes $\bar q_i q_i \to \ell^-_\alpha \ell^+_\alpha$. These last contributions can potentially skew the upper limits since they interfere with the SM. For (i, j) = (1, 2) the bounds are skewed because the $\bar u u \to \ell^-_\alpha \ell^+_\alpha$ modes can compete with the flavor-changing modes, since these have a mild Cabibbo suppression of O(λ) that can be compensated by the up-quark PDF. The effects of quark-flavor mixing through the CKM matrix will be discussed in more detail below. For (i, j) = (1, 3) and (2, 3), on the other hand, the limits appear symmetric because the flavor-conserving modes have a CKM suppression of order O(λ²) and O(λ³), respectively, making these subleading with respect to the flavor-changing modes. Finally, the bounds on the LFV vector operators with α < β and the dipole operators shown in the fourth and last rows of Fig. 4.1 do not interfere with the SM, leading to perfectly symmetric bounds.
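The pattern described here follows from the schematic decomposition of the hadronic cross-section in a single real coefficient C:

$\sigma(C) = \sigma_{\rm SM} + \frac{C}{\Lambda^{2}}\,\sigma_{\rm int} + \frac{C^{2}}{\Lambda^{4}}\,\sigma_{\rm NP^2}\,.$

When the interference $\sigma_{\rm int}$ is sizable, the two signs of C are constrained differently; when the positive-definite $\sigma_{\rm NP^2}$ term dominates, the bounds become symmetric.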
Figure 4.5: Two-parameter fit of the Wilson coefficients $[C^{(1)}_{lq}]_{22ij}$ and $[C^{(3)}_{lq}]_{22ij}$ for all allowed quark-flavor indices. We show the 95% confidence regions for these coefficients assuming $V_{\rm CKM} = 1$ (black dashed line), a non-diagonal $V_{\rm CKM}$ with down alignment (blue region), and a non-diagonal $V_{\rm CKM}$ with up alignment (yellow region). Both dimuon and monomuon searches are considered for the constraints presented here. The coefficients are normalized by choosing Λ = 1 TeV.
These corrections involve the d = 8 operators $O^{(k)}_{l^2q^2H^2}$ and $O^{(k)}_{q^2H^2D^3}$ with k = 1, 2, 3, 4 in the classes ψ⁴H² and ψ²H²D³ defined in Tables F.2 and F.3 (for the exact expressions see Eqs. (C.3), (C.9) and (C.…)), as well as the operators $O^{(k)}_{l^2q^2D^2}$ of the class ψ⁴D².
Figure 4.6: 95% CL limits on the form-factor coefficients $F^{LL}_{V(0,0)}$ for first-generation (top panel), second-generation (middle panel) and third-generation (bottom panel) leptons using LHC run-II searches in the dilepton and monolepton channels. Blue intervals are single-parameter limits, the red ones are limits marginalizing over momentum-dependent effects from d = 8 operators, and the yellow ones are the expected limits from d = 6 and d = 8 operators assuming a specific UV scenario. For the first two rows, the limits for uu, dd, ud have been rescaled by a factor of 5 for visibility. The gray bands, given by $|F^{LL,qq}_{I(0,0)}| \geq 4\pi\, v^2/\Lambda^2$, correspond to the region where perturbative unitarity is expected to break down. See text for more details.
iii) Marginalized limits on $F^{LL}_{V(0,0)}$ while profiling over $F^{LL}_{V(1,0)}$ and $F^{LL}_{V(0,1)}$. To enforce the correlations between dimension-8 form-factors that arise in the SMEFT from SU(2)_L invariance, we marginalize over the quantities $2F^{LL}_{V(1,0)} - F^{LL}_{V(0,1)}$ and $F^{LL}_{V(0,1)}$. Notice that these two parameters map at O(1/Λ⁴) to independent combinations of the d = 8 coefficients $C^{(k)}_{l^2q^2D^2}$ for neutral (charged) currents, respectively.
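Operationally, such marginalized limits can be obtained by profiling the likelihood over the two momentum-dependent directions at each fixed value of $F^{LL}_{V(0,0)}$; a minimal sketch, where the function `chi2(f00, f10, f01)` is a hypothetical likelihood:

```python
import numpy as np
from scipy.optimize import minimize

def profiled_chi2(chi2, f00_values):
    """Profile chi2(f00, f10, f01) over the two momentum-dependent directions."""
    out = []
    for f00 in f00_values:
        res = minimize(lambda x: chi2(f00, x[0], x[1]), x0=np.zeros(2), method="Nelder-Mead")
        out.append(res.fun)
    return np.array(out)

def interval_95(f00_values, prof):
    """95% CL interval: profiled Δχ² below 3.84 (single parameter of interest)."""
    keep = prof - prof.min() < 3.84
    return f00_values[keep].min(), f00_values[keep].max()
```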
and to $\bar u_i u_j \to \ell^-_\alpha \ell^+_\beta$ and $\bar d_i u_j \to \ell^+_\alpha \nu_\beta$ via the $S^{(1/3)}_3$ component.
Figure 4.7: LHC constraints at 95% CL on the coupling constants of all leptoquarks, where a single coupling is turned on at a time. The masses of all leptoquarks are fixed to 2 TeV. The numbers on the left-hand side of each plot correspond to the respective quark and lepton flavor indices iα.
Figure 4.8: Two-dimensional exclusion regions on the $U_1 \sim (3, 1, 2/3)$ leptoquark left-handed couplings derived from pp → ℓℓ and pp → ℓν (red and blue contours), and pp → ℓℓ′ (gray contours). All contours are depicted at 2σ.
{…, $O_{ledq}$} with appropriate flavor indices.
$[O^{(3)}_{Hl}]_{\alpha\beta} = (H^\dagger i\overleftrightarrow{D}{}^I_\mu H)(\bar l_\alpha \gamma^\mu \tau^I l_\beta)\,.$
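After electroweak symmetry breaking this operator shifts the left-handed W coupling to leptons; schematically (the overall sign and normalization are convention-dependent):

$\mathcal{L} \supset \frac{g}{\sqrt{2}}\left(\delta_{\alpha\beta} + \frac{v^{2}}{\Lambda^{2}}\,[C^{(3)}_{Hl}]_{\alpha\beta}\right)\bar{\nu}_{\alpha}\gamma^{\mu}P_{L}\,e_{\beta}\,W^{+}_{\mu} + \mathrm{h.c.}$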
Coefficients with flavor indices 3323 also contribute to $R_{D^{(*)}}$, but these effective coefficients are subject to stringent constraints from $B \to K\nu\bar\nu$.
Figure 5.1: Constraints on the SMEFT coefficients from flavor-physics (blue region), electroweak-precision (gray) and high-$p_T$ LHC observables (red). The combined fit is shown in green. For each type of observables, we show the 1σ (2σ) regions with darker (lighter) colors. The dashed (dotted) red line indicates the projection of the 1σ (2σ) region for an integrated luminosity of 3 ab⁻¹. Three effective scenarios are considered which are motivated by different leptoquark models, as explained in the text. The EFT cutoff is set to Λ = 2 TeV and the minimum values for the combined χ² in our fits are given by χ²_min = 28.3 (a), 29.5 (b), 78.0 (c).
Figure 5.2: Bounds on the leptoquark couplings from low-energy (blue), electroweak pole (gray) and high-$p_T$ LHC (red) observables. The combined fit is shown in green. For every bound we show the 1σ and 2σ regions. The dashed (dotted) red line indicates the projection of the 1σ (2σ) region for an integrated luminosity of 3 ab⁻¹. The leptoquark masses are set to Λ = 2 TeV. The minimum values for the combined χ² in our fits are given by χ²_min = 26.4 (a), 30.0 (b), 38.6 (c).
The limits on the form-factor coefficients $F^{XY}$ for the flavor-diagonal channels are shown in Figs. G.1-G.3, and for the τµ, τe and µe ones in Figs. G.4-G.6. Similar limits for the d = 6 quark- and lepton-dipole operators are reported in Fig. 4.2.
ψ²H²D³ (remaining leptonic operators of Table F.3) [columns: pp → ℓℓ | pp → ℓν]:
  $O^{(3)}_{l^2H^2D^3} = i(\bar l_\alpha \gamma^\mu \tau^I D^\nu l_\beta)(D_{(\mu}D_{\nu)}H)^\dagger \tau^I H$   [✓ | ✓]
  $O^{(4)}_{l^2H^2D^3} = i(\bar l_\alpha \gamma^\mu \tau^I D^\nu l_\beta)\, H^\dagger \tau^I (D_{(\mu}D_{\nu)}H)$   [✓ | ✓]
  $O^{(1)}_{e^2H^2D^3} = i(\bar e_\alpha \gamma^\mu D^\nu e_\beta)(D_{(\mu}D_{\nu)}H)^\dagger H$   [✓ | –]
  $O^{(2)}_{e^2H^2D^3} = i(\bar e_\alpha \gamma^\mu D^\nu e_\beta)\, H^\dagger (D_{(\mu}D_{\nu)}H)$   [✓ | –]
Figure G.1: LHC constraints on semileptonic d = 6 Wilson coefficients with ττ flavor indices at 95% CL, where a single coefficient is turned on at a time. Quark-flavor indices are denoted by ij and are specified on the left-hand side of each plot. All coefficients are assumed to be real and contributions to the cross-section up to and including O(1/Λ⁴) are considered. The New Physics scale is chosen as Λ = 1 TeV.
Figure G.2: LHC constraints on semileptonic d = 6 Wilson coefficients with µµ flavor indices at 95% CL, where a single coefficient is turned on at a time. See caption of Fig. G.1.
Figure G.3: LHC constraints on semileptonic d = 6 Wilson coefficients with ee flavor indices at 95% CL, where a single coefficient is turned on at a time. See caption of Fig. G.1. The largest deviation from the SM observed is ∼ 2σ.
Figure G.4: LHC constraints on ψ⁴ semileptonic d = 6 Wilson coefficients with µτ flavor indices, where a single coefficient is turned on at a time. See caption of Fig. G.1.
Figure G.5: LHC constraints on ψ⁴ semileptonic d = 6 Wilson coefficients with eτ flavor indices, where a single coefficient is turned on at a time. See caption of Fig. G.1.
Figure G.6: LHC constraints on semileptonic d = 6 Wilson coefficients with eµ flavor indices, where a single coefficient is turned on at a time. See caption of Fig. G.1.
Table 3.1: Counting of SMEFT parameters relevant to the high-$p_T$ observables and the corresponding energy scaling of the amplitude for each class of operators. The number of real and imaginary free parameters that contribute to the Drell-Yan cross-sections at order O(1/Λ⁴) are listed for each operator class. In total we find 549 (472) real (imaginary) parameters at d = 6 and an additional 435 (141) real (imaginary) parameters at d = 8, where for the latter we only consider those parameters that affect the interference of these operators with the SM.
Table 4.1: Experimental searches by the ATLAS and CMS collaborations that have been recast in the HighPT package.
Table 5.1: Matching of the leptoquarks to the semileptonic operators in the Warsaw basis [43]. In the matching conditions we have set Λ = m_LQ.
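As an illustration of such tree-level matching, integrating out a leptoquark of mass m_LQ with Yukawa-like couplings y generates semileptonic coefficients of the schematic size (the O(1) factor κ encodes representation-dependent group theory and signs; see the table for the exact results):

$\frac{[C]_{\alpha\beta ij}}{\Lambda^{2}} \sim \kappa\,\frac{y_{i\alpha}\,y_{j\beta}^{*}}{m_{\rm LQ}^{2}}\,, \qquad \kappa = \mathcal{O}(1)\,, \qquad \Lambda \equiv m_{\rm LQ}\,.$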
Table A.1: Validation of our simulation of the New Physics signal used as a benchmark in the experimental analysis. In the fourth column, we compare against the acceptance × efficiency (A × ε) provided by the experimental collaborations.
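In practice, the A × ε entries compared in this table reduce to a simple event-level ratio over the simulated signal sample; a minimal sketch, where the boolean mask `selected` (marking generated events that pass the full recast selection) is a hypothetical input:

```python
import numpy as np

def acceptance_times_efficiency(selected):
    """A x eps estimate: fraction of generated signal events passing all cuts."""
    selected = np.asarray(selected, dtype=bool)
    axe = selected.mean()
    # binomial uncertainty on the efficiency estimate
    err = np.sqrt(axe * (1.0 - axe) / selected.size)
    return axe, err
```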
Here $Y_f$ (f ∈ {u, d, e}) denote the Yukawa matrices and flavor indices have been omitted; H corresponds to the SM Higgs doublet, with the conjugate field defined as $\widetilde H \equiv \varepsilon H^*$ and the SU(2) antisymmetric tensor defined as $\varepsilon \equiv i\tau^2$. We work in the basis where $Y_e$ and $Y_d$ are diagonal matrices, while $Y_u$ contains the CKM matrix, $V \equiv V_{\rm CKM}$.
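The down- versus up-alignment choice referred to in Fig. 4.5 then amounts to rotating the left-handed quark indices of the Wilson coefficients with $V_{\rm CKM}$; a minimal sketch under one common sign/conjugation convention (the array `c_down` is a hypothetical coefficient matrix in quark-doublet flavor space):

```python
import numpy as np

def to_up_aligned(c_down, vckm):
    """
    Rotate a Wilson coefficient [C]_{ij} with two left-handed quark-doublet
    indices from the down-aligned to the up-aligned basis, C -> V C V†.
    The direction/conjugation depends on the chosen CKM convention.
    """
    vckm = np.asarray(vckm)
    return vckm @ c_down @ vckm.conj().T
```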
The relevant d = 6 and d = 8 operators are collected in Table F.1, and Tables F.2 and F.3, respectively.
Table F.1: SMEFT d = 6 operators that contribute to the processes pp → ℓℓ and pp → ℓν. Semileptonic operators (ψ⁴) are collected in the upper table, whereas Higgs-current (ψ²H²D) and dipole operators (ψ²XH) appear in the middle and bottom tables, respectively. We use the operators in the Warsaw basis [43], where we renamed the operator $O_{qe}$ to $O_{eq}$ to conveniently have lepton- before quark-flavor indices.
Table F.2: SMEFT d = 8 four-fermion operators that contribute to the processes pp → ℓℓ and pp → ℓν. We use the basis defined in Ref. [70], where we relabeled the operators $O^{(k)}_{q^2l^2D^2}$ and $O^{(k)}_{q^2e^2D^2}$ to $O^{(k)}_{l^2q^2D^2}$ and $O^{(k)}_{e^2q^2D^2}$, respectively, to conveniently have lepton- before quark-flavor indices.
Footnotes:
3. For up-type quarks the indices run as i, j = 1, 2 because of the negligible top-quark content of the proton at the LHC.
4. This can be shown, e.g., using the identity $\sigma^{\mu\nu}\gamma_5 = \frac{i}{2}\,\varepsilon^{\mu\nu\alpha\beta}\sigma_{\alpha\beta}$.
5. Note that the charge-conjugate process can be described by a similar expression to Eq. (2.2). The relations between the $\bar d_j u_i \to \ell^+_\alpha \nu_\beta$ and the $d_j \bar u_i \to \ell^-_\alpha \bar\nu_\beta$ form-factors are spelled out in Appendix B.
6. A notable exception are the quark-gluon fusion processes $gb \to \ell\ell\, b$ and $gc \to b\,\ell\nu$. The enhancement of the gluon over the bottom PDF and the background reduction from the additional b-tagged jet make these processes important probes for New Physics entering third-generation semileptonic transitions [58-60].
7. This assumption leaves out scenarios with loop-level contributions from light degrees of freedom, where branch cuts can appear.
8. Several searches for SUSY in ℓℓ + nj final states with n large are available in the literature and could be recast to give limits on semileptonic New Physics [68]. However, these are far from being optimized and give just a marginal improvement in some regions of parameter space when compared to Drell-Yan.
9. Notice that we only count d = 8 operators that interfere with the SM, and for which we consider a non-diagonal CKM matrix.
10. Note that the regular pieces of the scalar and tensor form-factors were truncated to order n = m = 0 because, when squaring the amplitude, the d = 8 terms with n + m = 1 do not interfere with the SM. For the dipoles we can set $F^{\rm Reg}_{D_\ell}$ and $F^{\rm Reg}_{D_q}$ to zero since these arise non-locally via SM gauge-boson exchange. For the pole form-factors $F^{\rm Poles}_I$, we only need to consider the vector poles and dipoles arising from the s-channel exchange of the SM gauge bosons.
11. For simplicity we assume exactly three right-handed neutrinos, but this need not be the case.
12. For example, the particle-level observable relevant for resonance searches in ditau production at the LHC [85] is the invariant mass $x = m_{\tau\tau}$ of the ditau system. Given that τ-leptons always decay into neutrinos, a precise experimental reconstruction of $m_{\tau\tau}$ is challenging. Therefore, what is actually measured is the quantity $x_{\rm obs} = m_T^{\rm tot}$, known as the total transverse mass, which serves as a proxy for $m_{\tau\tau}$. This observable is computed from the two leading τ-tagged jets coming from the visible part of the hadronic decay of each underlying tau-lepton ($\tau_h$) and the missing transverse energy of the event, which accounts for the undetected neutrinos.
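The total transverse mass described in the footnote above is built from pairwise transverse masses of the two $\tau_h$ candidates and the missing transverse momentum; a minimal sketch, using the pairwise definition $m_T(a,b) = \sqrt{2\,p_T^a\,p_T^b\,(1-\cos\Delta\phi)}$ commonly employed in these searches:

```python
import math

def mT(pt1, phi1, pt2, phi2):
    """Pairwise transverse mass of two (massless) transverse objects."""
    return math.sqrt(2.0 * pt1 * pt2 * (1.0 - math.cos(phi1 - phi2)))

def mT_tot(pt_tau1, phi_tau1, pt_tau2, phi_tau2, met, phi_met):
    """Total transverse mass built from the two tau candidates and the MET."""
    return math.sqrt(
        mT(pt_tau1, phi_tau1, pt_tau2, phi_tau2) ** 2
        + mT(pt_tau1, phi_tau1, met, phi_met) ** 2
        + mT(pt_tau2, phi_tau2, met, phi_met) ** 2
    )
```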
13. The effects of dimension-8 operators will be discussed in the following Section.
14. Similar effects are expected for fits in explicit New Physics models.
15. Similar LFU tests have been performed by LHCb in the $B_c \to J/\psi\,\ell\nu$ [103] and $\Lambda_b \to \Lambda_c\,\ell\nu$ [104] channels, but with sizable experimental uncertainties. Currently, these observables have a minor impact in the $b \to c\tau\nu$ fit.
16. This choice agrees with the conventions adopted by WCxf [152], with the exception of the $[C_{qe}]_{ij\alpha\beta}$ operator in the original Warsaw basis, which we dub $[C_{eq}]_{\alpha\beta ij}$.
Acknowledgements

We are grateful to Gino Isidori for encouraging us to pursue this project. We also thank Adam Falkowski for discussions regarding Ref. [71]. This project has received support from the Euro…

A Validation of LHC Recast

The constraints presented in this work are based on recasts of the ATLAS and CMS searches listed in Table 4.1. To validate our recasts with the experimental searches, we simulated the Drell-Yan process for the New Physics models considered using our simulation pipeline discussed in Sec. 4.1. We provide the information on the quality of our recast in Table A.1.

D.2 Vector form-factors
References

[1] G. Isidori, Y. Nir and G. Perez, Flavor Physics Constraints for Physics Beyond the Standard Model, Ann. Rev. Nucl. Part. Sci. 60 (2010) 355, [1002.0900].
[2] NA62 collaboration, B. Döbrich, Dark Sectors at fixed targets: The example of NA62, Frascati Phys. Ser. 66 (2018) 312-327, [1807.10170].
[3] KOTO collaboration, T. Yamanaka, The J-PARC KOTO experiment, PTEP 2012 (2012) 02B006.
[4] BESIII collaboration, M. Ablikim et al., Future Physics Programme of BESIII, Chin. Phys. C 44 (2020) 040001, [1912.05983].
[5] LHCb collaboration, R. Aaij et al., Physics case for an LHCb Upgrade II - Opportunities in flavour physics, and beyond, in the HL-LHC era, [1808.08865].
[6] Belle-II collaboration, W. Altmannshofer et al., The Belle II Physics Book, PTEP 2019 (2019) 123C01, [1808.10567].
[7] LHCb collaboration, R. Aaij et al., Test of lepton universality using B+ → K+ℓ+ℓ− decays, Phys. Rev. Lett. 113 (2014) 151601, [1406.6482].
[8] LHCb collaboration, R. Aaij et al., Test of lepton universality with B0 → K*0ℓ+ℓ− decays, JHEP 08 (2017) 055, [1705.05802].
[9] LHCb collaboration, R. Aaij et al., Test of lepton universality with Λb0 → pK−ℓ+ℓ− decays, JHEP 05 (2020) 040, [1912.08139].
[10] LHCb collaboration, R. Aaij et al., Test of lepton universality in beauty-quark decays, Nature Phys. 18 (2022) 277-282, [2103.11769].
[11] LHCb collaboration, R. Aaij et al., Tests of lepton universality using B0 → K0S ℓ+ℓ− and B+ → K*+ℓ+ℓ− decays, Phys. Rev. Lett. 128 (2022) 191802, [2110.09501].
[12] BaBar collaboration, J. P. Lees et al., Measurement of an Excess of B̄ → D(*)τ−ν̄τ Decays and Implications for Charged Higgs Bosons, Phys. Rev. D 88 (2013) 072012, [1303.0571].
[13] Belle collaboration, M. Huschle et al., Measurement of the branching ratio of B̄ → D(*)τ−ν̄τ relative to B̄ → D(*)ℓ−ν̄ℓ decays with hadronic tagging at Belle, Phys. Rev. D 92 (2015) 072014, [1507.03233].
[14] LHCb collaboration, R. Aaij et al., Measurement of the ratio of branching fractions B(B̄0 → D*+τ−ν̄τ)/B(B̄0 → D*+µ−ν̄µ), Phys. Rev. Lett. 115 (2015) 111803, [1506.08614].
[15] Belle collaboration, S. Hirose et al., Measurement of the τ lepton polarization and R(D*) in the decay B̄ → D*τ−ν̄τ, Phys. Rev. Lett. 118 (2017) 211801, [1612.00529].
[16] Belle collaboration, S. Hirose et al., Measurement of the τ lepton polarization and R(D*) in the decay B̄ → D*τ−ν̄τ with one-prong hadronic τ decays at Belle, Phys. Rev. D 97 (2018) 012004, [1709.00129].
[17] LHCb collaboration, R. Aaij et al., Test of Lepton Flavor Universality by the measurement of the B0 → D*−τ+ντ branching fraction using three-prong τ decays, Phys. Rev. D 97 (2018) 072013, [1711.02505].
[18] Belle collaboration, A. Abdesselam et al., Measurement of R(D) and R(D*) with a semileptonic tagging method, [1904.08794].
[19] Belle collaboration, G. Caria et al., Measurement of R(D) and R(D*) with a semileptonic tagging method, Phys. Rev. Lett. 124 (2020) 161803, [1910.05864].
[20] G. D'Ambrosio, G. F. Giudice, G. Isidori and A. Strumia, Minimal flavor violation: An Effective field theory approach, Nucl. Phys. B 645 (2002) 155-187, [hep-ph/0207036].
[21] R. Barbieri, G. Isidori, J. Jones-Perez, P. Lodone and D. M. Straub, U(2) and Minimal Flavour Violation in Supersymmetry, Eur. Phys. J. C 71 (2011) 1725, [1105.2296].
[22] A. Angelescu, D. A. Faroughy and O. Sumensari, Lepton Flavor Violation and Dilepton Tails at the LHC, Eur. Phys. J. C 80 (2020) 641, [2002.05684].
[23] J. Fuentes-Martin, A. Greljo, J. Martin Camalich and J. D. Ruiz-Alvarez, Charm physics confronts high-pT lepton tails, JHEP 11 (2020) 080, [2003.12421].
[24] M. Farina, G. Panico, D. Pappadopulo, J. T. Ruderman, R. Torre and A. Wulzer, Energy helps accuracy: electroweak precision tests at hadron colliders, Phys. Lett. B 772 (2017) 210-215, [1609.08157].
[25] D. A. Faroughy, A. Greljo and J. F. Kamenik, Confronting lepton flavor universality violation in B decays with high-pT tau lepton searches at LHC, Phys. Lett. B 764 (2017) 126-134, [1609.07138].
[26] O. J. P. Eboli and A. V. Olinto, Composite Leptoquarks in Hadronic Colliders, Phys. Rev. D 38 (1988) 3461.
[27] CMS collaboration, The search for a third-generation leptoquark coupling to a τ lepton and a b quark through single, pair and nonresonant production at √s = 13 TeV.
[28] ATLAS collaboration, Search for scalar leptoquarks in the bττ final state in pp collisions at √s = 13 TeV with the ATLAS detector.
[29] A. Greljo and D. Marzocca, High-pT dilepton tails and flavor physics, Eur. Phys. J. C 77 (2017) 548, [1704.09015].
[30] J. de Blas, M. Chala and J. Santiago, Global Constraints on Lepton-Quark Contact Interactions, Phys. Rev. D 88 (2013) 095011, [1307.5068].
[31] S. Dawson, P. P. Giardino and A. Ismail, Standard model EFT and the Drell-Yan process at high energy, Phys. Rev. D 99 (2019) 035044, [1811.12260].
[32] V. Cirigliano, M. Gonzalez-Alonso and M. L. Graesser, Non-standard Charged Current Interactions: beta decays versus the LHC, JHEP 02 (2013) 046, [1210.4553].
[33] A. Crivellin, C. A. Manzari and M. Montull, Correlating nonresonant di-electron searches at the LHC to the Cabibbo-angle anomaly and lepton flavor universality violation, Phys. Rev. D 104 (2021) 115016, [2103.12003].
[34] H.-M. Chang, M. González-Alonso and J. Martin Camalich, Nonstandard Semileptonic Hyperon Decays, Phys. Rev. Lett. 114 (2015) 161802, [1412.8484].
[35] V. Cirigliano, A. Falkowski, M. González-Alonso and A. Rodríguez-Sánchez, Hadronic τ Decays as New Physics Probes in the LHC Era, Phys. Rev. Lett. 122 (2019) 221801, [1809.01161].
[36] A. Greljo, J. Martin Camalich and J. D. Ruiz-Álvarez, Mono-τ Signatures at the LHC Constrain Explanations of B-decay Anomalies, Phys. Rev. Lett. 122 (2019) 131803, [1811.07920].
[37] M. Endo, S. Iguro, T. Kitahara, M. Takeuchi and R. Watanabe, Non-resonant new physics search at the LHC for the b → cτν anomalies, JHEP 02 (2022) 106, [2111.04748].
[38] D. Marzocca, U. Min and M. Son, Bottom-Flavored Mono-Tau Tails at the LHC, JHEP 12 (2020) 035, [2008.07541].
[39] S. Iguro, M. Takeuchi and R. Watanabe, Testing leptoquark/EFT in B̄ → D(*)lν̄ at the LHC, Eur. Phys. J. C 81 (2021) 406, [2011.02486].
[40] F. Jaffredo, Revisiting mono-tau tails at the LHC, Eur. Phys. J. C 82 (2022) 541, [2112.14604].
[41] S. Bressler, F. D. V. Halevy and Y. Nir, b → cτνe contributions to R(D(*)), JHEP 07 (2022) 077, [2201.11393].
[42] W. Buchmuller and D. Wyler, Effective Lagrangian Analysis of New Interactions and Flavor Conservation, Nucl. Phys. B 268 (1986) 621-653.
[43] B. Grzadkowski, M. Iskrzynski, M. Misiak and J. Rosiek, Dimension-Six Terms in the Standard Model Lagrangian, JHEP 10 (2010) 085, [1008.4884].
[44] R. Boughezal, E. Mereghetti and F. Petriello, Dilepton production in the SMEFT at O(1/Λ4), Phys. Rev. D 104 (2021) 095022, [2106.05337].
[45] R. Boughezal, Y. Huang and F. Petriello, Exploring the SMEFT at dimension-8 with Drell-Yan transverse momentum measurements, [2207.01703].
[46] T. Kim and A. Martin, Monolepton production in SMEFT to O(1/Λ4) and beyond, [2203.11976].
[47] I. Brivio et al., Truncation, validity, uncertainties, [2201.04974].
[48] L. Allwicher, D. A. Faroughy, F. Jaffredo, O. Sumensari and F. Wilsch, HighPT: A Tool for high-pT Drell-Yan Tails Beyond the Standard Model, [2207.10756].
[49] D. M. Straub, flavio: a Python package for flavour and precision phenomenology in the Standard Model and beyond, [1810.08132].
[50] J. Aebischer, J. Kumar, P. Stangl and D. M. Straub, A Global Likelihood for Precision Constraints and Flavour Anomalies, Eur. Phys. J. C 79 (2019) 509, [1810.07698].
[51] EOS Authors collaboration, D. van Dyk et al., EOS: a software for flavor physics phenomenology, Eur. Phys. J. C 82 (2022) 569, [2111.15428].
[52] J. De Blas et al., HEPfit: a code for the combination of indirect and direct constraints on high energy physics models, Eur. Phys. J. C 80 (2020) 456, [1910.14012].
[53] A. Falkowski and D. Straub, Flavourful SMEFT likelihood for Higgs and electroweak data, JHEP 04 (2020) 066, [1911.07866].
[54] SMEFiT collaboration, J. J. Ethier, G. Magni, F. Maltoni, L. Mantani, E. R. Nocera, J. Rojo et al., Combined SMEFT interpretation of Higgs, diboson, and top quark data from the LHC, JHEP 11 (2021) 089, [2105.00006].
[55] W. Buchmuller, R. Ruckl and D. Wyler, Leptoquarks in Lepton-Quark Collisions, Phys. Lett. B 191 (1987) 442-448.
[56] I. Doršner, S. Fajfer, A. Greljo, J. F. Kamenik and N. Košnik, Physics of leptoquarks in precision experiments and at particle colliders, Phys. Rept. 641 (2016) 1-68, [1603.04993].
[57] L. Buonocore, P. Nason, F. Tramontano and G. Zanderighi, Photon and leptons induced processes at the LHC, JHEP 12 (2021) 073, [2109.10924].
[58] W. Altmannshofer, P. S. Bhupal Dev and A. Soni, R_D(*) anomaly: A possible hint for natural supersymmetry with R-parity violation, Phys. Rev. D 96 (2017) 095010, [1704.06659].
[59] Y. Afik, S. Bar-Shalom, J. Cohen and Y. Rozen, Searching for New Physics with b̄b ℓ+ℓ− contact interactions, Phys. Lett. B 807 (2020) 135541, [1912.00425].
[60] D. Marzocca, U. Min and M. Son, Bottom-Flavored Mono-Tau Tails at the LHC, JHEP 12 (2020) 035, [2008.07541].
[61] J. M. Campbell, J. W. Huston and W. J. Stirling, Hard Interactions of Quarks and Gluons: A Primer for LHC Physics, Rept. Prog. Phys. 70 (2007) 89, [hep-ph/0611148].
[62] C. Buttar et al., Les houches physics at TeV colliders 2005, standard model and Higgs working group: Summary report, in 4th Les Houches Workshop on Physics at TeV Colliders, 4, 2006, [hep-ph/0604120].
[63] A. Bessaa and S. Davidson, Constraints on t-channel leptoquark exchange from LHC contact interaction searches, Eur. Phys. J. C 75 (2015) 97, [1409.2372].
[64] S. Davidson, S. Descotes-Genon and P. Verdier, Of Contact Interactions and Colliders, Phys. Rev. D 91 (2015) 055031, [1410.4798].
[65] J. Ohnemus, S. Rudaz, T. F. Walsh and P. M. Zerwas, Single leptoquark production at hadron colliders, Phys. Lett. B 334 (1994) 203-207, [hep-ph/9406235].
[66] L. Buonocore, U. Haisch, P. Nason, F. Tramontano and G. Zanderighi, Lepton-Quark Collisions at the Large Hadron Collider, Phys. Rev. Lett. 125 (2020) 231804, [2005.06475].
[67] A. Greljo and N. Selimovic, Lepton-Quark Fusion at Hadron Colliders, precisely, JHEP 03 (2021) 279, [2012.02092].
[68] H. K. Dreiner, V. M. Lozano, S. Nangia and T. Opferkuch, Lepton PDFs and Multipurpose Single Lepton Searches at the LHC, [2112.12755].
[69] H.-L. Li, Z. Ren, J. Shu, M.-L. Xiao, J.-H. Yu and Y.-H. Zheng, Complete set of dimension-eight operators in the standard model effective field theory, Phys. Rev. D 104 (2021) 015026, [2005.00008].
[70] C. W. Murphy, Dimension-8 operators in the Standard Model Effective Field Theory, JHEP 10 (2020) 174, [2005.00059].
[71] V. Bresó-Pla, A. Falkowski and M. González-Alonso, A_FB in the SMEFT: precision Z physics at the LHC, JHEP 08 (2021) 021, [2103.12074].
[72] E. d. S. Almeida, A. Alves, O. J. P. Éboli and M. C. Gonzalez-Garcia, Electroweak legacy of the LHC run II, Phys. Rev. D 105 (2022) 013006, [2108.04828].
[73] I. Brivio, S. Dawson, J. de Blas, G. Durieux, P. Savard, A. Denner et al., Electroweak input parameters, [2111.12515].
[74] S. Descotes-Genon, A. Falkowski, M. Fedele, M. González-Alonso and J. Virto, The CKM parameters in the SMEFT, JHEP 05 (2019) 172, [1812.08163].
[75] N. Assad, B. Fornal and B. Grinstein, Baryon Number and Lepton Universality Violation in Leptoquark and Diquark Models, Phys. Lett. B 777 (2018) 324-331, [1708.06350].
[76] J. Davighi, A. Greljo and A. E. Thomsen, Leptoquarks with Exactly Stable Protons, [2202.05275].
[77] G. C. Branco, P. M. Ferreira, L. Lavoura, M. N. Rebelo, M. Sher and J. P. Silva, Theory and phenomenology of two-Higgs-doublet models, Phys. Rept. 516 (2012) 1-102, [1106.0034].
[78] S. L. Glashow and S. Weinberg, Natural Conservation Laws for Neutral Currents, Phys. Rev. D 15 (1977) 1958.
[79] A. Pich and P. Tuzon, Yukawa Alignment in the Two-Higgs-Doublet Model, Phys. Rev. D 80 (2009) 091702, [0908.1554].
[80] O. Eberhardt, A. P. Martínez and A. Pich, Global fits in the Aligned Two-Higgs-Doublet model, JHEP 05 (2021) 005, [2012.09200].
[81] F. del Aguila, S. Bar-Shalom, A. Soni and J. Wudka, Heavy Majorana Neutrinos in the Effective Lagrangian Description: Application to Hadron Colliders, Phys. Lett. B 670 (2009) 399-402, [0806.0876].
[82] M. Chala and A. Titov, One-loop matching in the SMEFT extended with a sterile neutrino, JHEP 05 (2020) 139, [2001.07732].
[83] D. J. Robinson, B. Shakya and J. Zupan, Right-handed neutrinos and R(D(*)), JHEP 02 (2019) 119, [1807.04753].
[84] V. Blobel, Unfolding Methods in High-energy Physics Experiments, in 1984 CERN School of Computing, 12, 1984.
[85] ATLAS collaboration, G. Aad et al., Search for heavy Higgs bosons decaying into two tau leptons with the ATLAS detector using pp collisions at √s = 13 TeV, Phys. Rev. Lett. 125 (2020) 051801, [2002.12223].
[86] CMS collaboration, A. M. Sirunyan et al., Search for resonant and nonresonant new phenomena in high-mass dilepton final states at √s = 13 TeV, JHEP 07 (2021) 208, [2103.02708].
[87] ATLAS collaboration, Search for high-mass resonances in final states with a tau lepton and missing transverse momentum with the ATLAS detector.
[88] ATLAS collaboration, G. Aad et al., Search for a heavy charged boson in events with a charged lepton and missing transverse momentum from pp collisions at √s = 13 TeV with the ATLAS detector, Phys. Rev. D 100 (2019) 052013, [1906.05609].
[89] CMS collaboration, Search for heavy resonances and quantum black holes in eµ, eτ, and µτ final states in proton-proton collisions at √s = 13 TeV, [2205.06709].
[90] J. Butterworth et al., PDF4LHC recommendations for LHC Run II, J. Phys. G 43 (2016) 023001, [1510.03865].
[91] A. Alloul, N. D. Christensen, C. Degrande, C. Duhr and B. Fuks, FeynRules 2.0 - A complete toolbox for tree-level phenomenology, Comput. Phys. Commun. 185 (2014) 2250-2300, [1310.1921].
[92] C. Degrande, C. Duhr, B. Fuks, D. Grellscheid, O. Mattelaer and T. Reiter, UFO - The Universal FeynRules Output, Comput. Phys. Commun. 183 (2012) 1201-1214, [1108.2040].
[93] J. Alwall, R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, O. Mattelaer et al., The automated computation of tree-level and next-to-leading order differential cross sections, and their matching to parton shower simulations, JHEP 07 (2014) 079, [1405.0301].
[94] T. Sjöstrand, S. Ask, J. R. Christiansen, R. Corke, N. Desai, P. Ilten et al., An introduction to PYTHIA 8.2, Comput. Phys. Commun. 191 (2015) 159-177, [1410.3012].
[95] DELPHES 3 collaboration, J. de Favereau, C. Delaere, P. Demin, A. Giammanco, V. Lemaître, A. Mertens et al., DELPHES 3, A modular framework for fast simulation of a generic collider experiment, JHEP 02 (2014) 057, [1307.6346].
[96] A. L. Read, Modified frequentist analysis of search results (The CL(s) method), in Workshop on Confidence Limits, pp. 81-101, 8, 2000.
[97] G. Cowan, K. Cranmer, E. Gross and O. Vitells, Asymptotic formulae for likelihood-based tests of new physics, Eur. Phys. J. C 71 (2011) 1554, [1007.1727].
[98] L. Allwicher, D. A. Faroughy, F. Jaffredo, O. Sumensari and F. Wilsch, HighPT: A Tool for high-pT Drell-Yan Tails Beyond the Standard Model, [2207.10756].
[99] R. G. Miller, The jackknife - a review, Biometrika 61 (1974) 1-15.
[100] R. Contino, A. Falkowski, F. Goertz, C. Grojean and F. Riva, On the Validity of the Effective Field Theory Approach to SM Precision Tests, JHEP 07 (2016) 144, [1604.06444].
[101] I. Brivio et al., Truncation, validity, uncertainties, [2201.04974].
[102] Particle Data Group collaboration, P. A. Zyla et al., Review of Particle Physics, PTEP 2020 (2020) 083C01.
[103] LHCb collaboration, R. Aaij et al., Measurement of the ratio of branching fractions B(Bc+ → J/ψτ+ντ)/B(Bc+ → J/ψµ+νµ), Phys. Rev. Lett. 120 (2018) 121801, [1711.05623].
[104] LHCb collaboration, R. Aaij et al., Observation of the decay Λb0 → Λc+τ−ν̄τ, Phys. Rev. Lett. 128 (2022) 191803, [2201.03497].
[105] HFLAV collaboration, Y. S. Amhis et al., Averages of b-hadron, c-hadron, and τ-lepton properties as of 2018, Eur. Phys. J. C 81 (2021) 226, [1909.12524].
[106] HPQCD collaboration, H. Na, C. M. Bouchard, G. P. Lepage, C. Monahan and J. Shigemitsu, B → Dlν form factors at nonzero recoil and extraction of |Vcb|, Phys. Rev. D 92 (2015) 054510, [1505.03925].
[107] MILC collaboration, J. A. Bailey et al., B → Dℓν form factors at nonzero recoil and |Vcb| from 2+1-flavor lattice QCD, Phys. Rev. D 92 (2015) 034506, [1503.07237].
[108] L. Allwicher, D. Faroughy, F. Jaffredo, O. Sumensari and F. Wilsch, In preparation, 22XX.XXXXX.
[109] M. Blanke, A. Crivellin, T. Kitahara, M. Moscati, U. Nierste and I. Nišandžić, Addendum to "Impact of polarization observables and Bc → τν on new physics explanations of the b → cτν anomaly", [1905.08253].
[110] R.-X. Shi, L.-S. Geng, B. Grinstein, S. Jäger and J. Martin Camalich, Revisiting the new-physics interpretation of the b → cτν data, JHEP 12 (2019) 065, [1905.08498].
[111] C. Murgui, A. Peñuelas, M. Jung and A. Pich, Global fit to b → cτν transitions, JHEP 09 (2019) 103, [1904.09311].
[112] A. Angelescu, D. Bečirević, D. A. Faroughy, F. Jaffredo and O. Sumensari, Single leptoquark solutions to the B-physics anomalies, Phys. Rev. D 104 (2021) 055017, [2103.12504].
[113] R. Alonso, B. Grinstein and J. Martin Camalich, Lifetime of Bc− Constrains Explanations for Anomalies in B → D(*)τν, Phys. Rev. Lett. 118 (2017) 081802, [1611.06676].
[114] X.-Q. Li, Y.-D. Yang and X. Zhang, Revisiting the one leptoquark solution to the R(D(*)) anomalies and its phenomenological implications, JHEP 08 (2016) 054, [1605.09308].
[115] J. Aebischer, A. Crivellin, M. Fael and C. Greub, Matching of gauge invariant dimension-six operators for b → s and b → c transitions, JHEP 05 (2016) 037, [1512.02830].
[116] E. E. Jenkins, A. V. Manohar and M. Trott, Renormalization Group Evolution of the Standard Model Dimension Six Operators I: Formalism and lambda Dependence, JHEP 10 (2013) 087, [1308.2627].
[117] E. E. Jenkins, A. V. Manohar and M. Trott, Renormalization Group Evolution of the Standard Model Dimension Six Operators II: Yukawa Dependence, JHEP 01 (2014) 035, [1310.4838].
[118] R. Alonso, E. E. Jenkins, A. V. Manohar and M. Trott, Renormalization Group Evolution of the Standard Model Dimension Six Operators III: Gauge Coupling Dependence and Phenomenology, JHEP 04 (2014) 159, [1312.2014].
[119] M. González-Alonso, J. Martin Camalich and K. Mimouni, Renormalization-group evolution of new physics contributions to (semi)leptonic meson decays, Phys. Lett. B 772 (2017) 777-785, [1706.00410].
[120] J. de Blas, J. C. Criado, M. Perez-Victoria and J. Santiago, Effective description of general extensions of the Standard Model: the complete tree-level dictionary, JHEP 03 (2018) 109, [1711.10391].
[121] A. Greljo, G. Isidori and D. Marzocca, On the breaking of Lepton Flavor Universality in B decays, JHEP 07 (2015) 142, [1506.01705].
[122] D. Buttazzo, A. Greljo, G. Isidori and D. Marzocca, B-physics anomalies: a guide to combined explanations, JHEP 11 (2017) 044, [1706.07808].
[123] A. Angelescu, D. Bečirević, D. A. Faroughy and O. Sumensari, Closing the window on single leptoquark solutions to the B-physics anomalies, JHEP 10 (2018) 183, [1808.08179].
[124] A. J. Buras, J. Girrbach-Noe, C. Niehoff and D. M. Straub, B → K(*)νν̄ decays in the Standard Model and beyond, JHEP 02 (2015) 184, [1409.4557].
[125] G. Buchalla and A. J. Buras, QCD corrections to rare K and B decays for arbitrary top quark mass, Nucl. Phys. B 400 (1993) 225-239.
[126] G. Buchalla and A. J. Buras, The rare decays K → πνν̄, B → Xνν̄ and B → l+l−: An Update, Nucl. Phys. B 548 (1999) 309-327, [hep-ph/9901288].
[127] M. Misiak and J. Urban, QCD corrections to FCNC decays mediated by Z penguins and W boxes, Phys. Lett. B 451 (1999) 161-169, [hep-ph/9901278].
[128] J. Brod, M. Gorbahn and E. Stamou, Two-Loop Electroweak Corrections for the K → πνν̄ Decays, Phys. Rev. D 83 (2011) 034030, [1009.0947].
[129] BaBar collaboration, J. P. Lees et al., Search for B → K(*)νν̄ and invisible quarkonium decays, Phys. Rev. D 87 (2013) 112005, [1303.7465].
[130] Belle collaboration, J. Grygier et al., Search for B → hνν̄ decays with semileptonic tagging at Belle, Phys. Rev. D 96 (2017) 091101, [1702.03224].
[131] Belle-II collaboration, F. Abudinén et al., Search for B+ → K+νν̄ Decays Using an Inclusive Tagging Method at Belle II, Phys. Rev. Lett. 127 (2021) 181802, [2104.12624].
[132] F. Feruglio, P. Paradisi and A. Pattori, Revisiting Lepton Flavor Universality in B Decays, Phys. Rev. Lett. 118 (2017) 011801, [1606.00524].
[133] F. Feruglio, P. Paradisi and A. Pattori, On the Importance of Electroweak Corrections for B Anomalies, JHEP 09 (2017) 061, [1705.00929].
[134] C. Cornella, F. Feruglio and P. Paradisi, Low-energy Effects of Lepton Flavour Universality Violation, JHEP 11 (2018) 012, [1803.00945].
[135] F. Feruglio, P. Paradisi and O. Sumensari, Implications of scalar and tensor explanations of R_D(*), JHEP 11 (2018) 191, [1806.10155].
[136] C. Cornella, J. Fuentes-Martin and G. Isidori, Revisiting the vector leptoquark explanation of the B-physics anomalies, JHEP 07 (2019) 168, [1903.11517].
[137] C. Cornella, D. A. Faroughy, J. Fuentes-Martin, G. Isidori and M. Neubert, Reading the footprints of the B-meson flavor anomalies, JHEP 08 (2021) 050, [2103.16558].
[138] Y. Sakaki, M. Tanaka, A. Tayduganov and R. Watanabe, Testing leptoquark models in B̄ → D(*)τν̄, Phys. Rev. D 88 (2013) 094012, [1309.0301].
[139] M. Bauer and M. Neubert, Minimal Leptoquark Explanation for the R_D(*), R_K, and (g−2)µ Anomalies, Phys. Rev. Lett. 116 (2016) 141802, [1511.01900].
[140] D. Bečirević, N. Košnik, O. Sumensari and R. Zukanovich Funchal, Palatable Leptoquark Scenarios for Lepton Flavor Violation in Exclusive b → sℓ1ℓ2 modes, JHEP 11 (2016) 035, [1608.07583].
[141] A. Crivellin, D. Müller and F. Saturnino, Flavor Phenomenology of the Leptoquark Singlet-Triplet Model, JHEP 06 (2020) 020, [1912.04224].
[142] V. Gherardi, D. Marzocca and E. Venturini, Low-energy phenomenology of scalar leptoquarks at one-loop accuracy, JHEP 01 (2021) 138, [2008.09548].
[143] D. Bečirević, B. Panes, O. Sumensari and R. Zukanovich Funchal, Seeking leptoquarks in IceCube, JHEP 06 (2018) 032, [1803.10112].
[144] D. Bečirević, I. Doršner, S. Fajfer, N. Košnik, D. A. Faroughy and O. Sumensari, Scalar leptoquarks from grand unified theories to accommodate the B-physics anomalies, Phys. Rev. D 98 (2018) 055003, [1806.05689].
[145] D. Bečirević, I. Doršner, S. Fajfer, D. A. Faroughy, F. Jaffredo, N. Košnik et al., On a model with two scalar leptoquarks - R2 and S3, [2206.09717].
[146] A. Crivellin, B. Fuks and L. Schnell, Explaining the hints for lepton flavour universality violation with three S2 leptoquark generations, JHEP 06 (2022) 169, [2203.10111].
[147] C. Degrande, G. Durieux, F. Maltoni, K. Mimasu, E. Vryonidou and C. Zhang, Automated one-loop computations in the standard model effective field theory, Phys. Rev. D 103 (2021) 096024, [2008.11743].
[148] S. Dawson and P. P. Giardino, New physics through Drell-Yan standard model EFT measurements at NLO, Phys. Rev. D 104 (2021) 073004, [2105.05852].
[149] U. Haisch, L. Schnell and S. Schulte, On Drell-Yan production of scalar leptoquarks coupling to heavy-quark flavours, [2207.00356].
[150] A. Greljo, S. Iranipour, Z. Kassabov, M. Madigan, J. Moore, J. Rojo et al., Parton distributions in the SMEFT from high-energy Drell-Yan tails, JHEP 07 (2021) 122, [2104.02723].
[151] G. Panico, L. Ricci and A. Wulzer, High-energy EFT probes with fully differential Drell-Yan measurements, JHEP 07 (2021) 086, [2103.10532].
[152] J. Aebischer et al., WCxf: an exchange format for Wilson coefficients beyond the Standard Model, Comput. Phys. Commun. 232 (2018) 71-83, [1712.05298].
From density-matrix renormalization group to matrix product states

19 Sep 2007

Ian P. McCulloch ([email protected])
Institut für Theoretische Physik C, RWTH-Aachen, D-52056 Aachen, Germany

Abstract. In this paper we give an introduction to the numerical density matrix renormalization group (DMRG) algorithm, from the perspective of the more general matrix product state (MPS) formulation. We cover in detail the differences between the original DMRG formulation and the MPS approach, demonstrating the additional flexibility that arises from constructing both the wavefunction and the Hamiltonian in MPS form. We also show how to make use of global symmetries, for both the Abelian and non-Abelian cases.
Introduction
The DMRG algorithm was introduced by Steven White [1], as an algorithm for calculating ground state properties of principally one-dimensional strongly correlated systems in condensed matter physics. The connection between DMRG and matrix product states [2,3,4,5] (also known as finitely correlated states) was first made by Rommer and Östlund [6], who identified the thermodynamic limit of DMRG with a position-independent matrix product wavefunction. Although DMRG had already proven itself to be useful empirically, this was an important step in rigorously establishing the physical basis of the algorithm, due to the concrete and easy-to-manipulate form of matrix product states. Work on the spectra of density matrices [7], later formulated as scaling of the von Neumann entropy [8], has placed the algorithm on a firm footing, showing that the required computational effort (realized via the basis dimension $m$) is essentially a function of the entanglement of the wavefunction [9], which for one-dimensional ground-states scales at worst logarithmically with the system size [10].
Computationally, MPS algorithms came to the fore with the assistance of a quantum information perspective, leading to algorithms for periodic boundary conditions [11], and finite temperature algorithms based on density operators [13,12,14]. At around the same time, methods for simulation of real time evolution were developed in DMRG [15,16], which can also benefit from MPS formulations [17].
The common theme of MPS approaches is to allow algorithms that operate on multiple, distinct wavefunctions at the same time. This is possible in the original formulation of DMRG only by constructing a mixed effective Hilbert space that is weighted appropriately to represent all of the relevant states simultaneously. This is inefficient, as the algorithms typically scale as $O(m^3)$ (or up to $O(m^5)$ for periodic boundary conditions [11]) in the number of basis states $m$, so increasing $m$ so as to represent multiple states in the same basis is typically much slower than performing separate operations on each basis. In addition, the mixed basis approach lacks flexibility. While traditional DMRG programs calculate the wavefunction and a few (often predetermined) expectation values or correlation functions, if instead the wavefunction is calculated in the MPS representation of Eq. (1) it can be saved for later use as an input for many purposes. Perhaps the simplest such operation beyond the scope of traditional DMRG is to calculate the fidelity, or overlap between the wavefunctions obtained from separate calculations. In the MPS formulation, this calculation is straightforward. Nevertheless, the determination of the scaling function for the fidelity of finite-size wavefunctions for different interaction strengths provides a new tool for investigating phase transitions and crossover phenomena [18,19,20]. Indeed, due to the simplicity of the calculation, the fidelity is likely in the coming years to be the first choice for quantitatively determining critical points. Similar measures of entanglement, such as the concurrence and single- and two-site entropy [21,22], are also straightforward to calculate, hence the MPS formalism allows us to apply directly the emerging tools of quantum information to the study of realistic systems in condensed matter physics. An alternative measure, the Loschmidt Echo [23], is important because, unlike many of the quantum information theoretic measures, it is directly accessible in experiments while showing the rich behavior of the simpler measures. The Loschmidt Echo is more time-consuming to measure numerically, as it requires a full time evolution simulation rather than a direct measurement; nevertheless it is well within the current state of the art [24].
In this paper, we focus on the case of open boundary condition matrix product states. This does not preclude the calculation of periodic systems; however, the entanglement of such periodic states is increased such that in the large $L$ limit (where $L$ is the lattice size), the number of states kept tends to the square of that required for open boundary conditions [25]. Algorithms exist for periodic boundary conditions [11] and infinite systems [26] (not to be confused with the 'infinite-size' DMRG algorithm), and the basic formulas introduced here carry over to these cases, but we do not describe the specific algorithms here. In Sec. 2, we introduce the basic formulation of matrix product states, and formulas for the fidelity. Sec. 3 is devoted to a new approach, whereby we construct the Hamiltonian operator itself as an MPS, with many advantages. We cover some remaining details of the DMRG algorithm in Sec. 4, before discussing in detail the use of Abelian and non-Abelian quantum numbers in Sec. 5. We finish with a few concluding remarks in Sec. 6, including some observations on finite temperature states.
Matrix Product States
We denote an MPS on an L-site lattice by the form
$$\sum_{\{s_i\}} \mathrm{Tr}\left[ A^{s_1} A^{s_2} \cdots A^{s_L} \right] \, |s_1\rangle \otimes |s_2\rangle \otimes \cdots \otimes |s_L\rangle , \tag{1}$$
The local index $s_i$ represents an element of a local Hilbert space at site $i$. The two important cases we cover here are when $s_i$ runs over a $d$-dimensional local basis for a wavefunction $|s_i\rangle$, in which case we refer to this as a matrix product wavefunction (MPW), or $s_i$ is a $d^2$-dimensional local basis for all operators acting on a local site, which we refer to as a matrix product operator (MPO). In this paper, we use MPS for a generic state irrespective of the form of the local space, and use MPW or MPO as necessary when the distinction between wavefunctions and operators is important. In general, the matrix product form can also represent periodic [11] and infinite (nonperiodic) states [26], but here we use only the open-boundary form equivalent to the wavefunction obtained by DMRG [6]. To enforce this boundary condition, we require the left-most matrix $A^{s_1}$ to be $1 \times m$ dimensional, and the right-most matrix $A^{s_L}$ to be $m \times 1$. Here we have introduced $m$ as the basis size, or dimension of the matrix basis of the A-matrices. This quantity is often denoted $D$, or sometimes $\chi$, in the quantum information literature, but we emphasize it is exactly the same quantity in all cases. In general $m$ is position dependent, as we do not require the A-matrices to be square even away from the boundary. Because of the 1-dimensional basis at the boundaries we can regard the MPS wavefunction as a sequence of operators attached to left and right (or outgoing and incoming) vacuum states. This makes the operator product in Eq. (1) an ordinary number, so the trace operation can be dropped.
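To make the bookkeeping concrete, the following minimal sketch (Python/NumPy; the (d, m_left, m_right) storage layout and all function names are our own illustrative choices, not prescriptions from the text) builds a random open-boundary MPS and contracts Eq. (1) into the full state vector for a small chain.

```python
import numpy as np

def random_mps(L, d, m):
    """Random open-boundary MPS; A[n] has shape (d, m_left, m_right),
    with 1-dimensional matrix bases at both boundaries."""
    dims = [1] + [m] * (L - 1) + [1]
    return [np.random.randn(d, dims[n], dims[n + 1]) for n in range(L)]

def to_dense(A):
    """Contract the matrix product of Eq. (1) into the d**L state vector
    (exponential cost, so small chains only)."""
    psi = A[0].reshape(A[0].shape[0], -1)        # (d, m): drop the 1-dim left leg
    for An in A[1:]:
        psi = np.einsum('Dm,smk->Dsk', psi, An)  # attach the next site
        psi = psi.reshape(-1, An.shape[2])       # merge the physical legs
    return psi.ravel()

A = random_mps(L=6, d=2, m=4)
print(to_dense(A).shape)                         # -> (64,)
```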
Orthogonality constraints
In practice, an MPS state with no particular constraints on the form of the A-matrices is numerically difficult to handle. We are always free to insert some product of a non-singular $m \times m$ operator $X$ and its inverse $X^{-1}$ in the middle of our MPS, thus we can apply an arbitrary transformation to the matrix basis of an A-matrix, as long as we make the corresponding transformation to its neighbor. Using this freedom, we can transform the A-matrices into a form where they are orthonormalized, that is, we prefer that they satisfy one of two possible constraints,
$$\sum_s A^s A^{s\dagger} = 1 \quad \text{(right-handed)}, \qquad \sum_s A^{s\dagger} A^s = 1 \quad \text{(left-handed)}. \tag{2}$$
States satisfying these conditions are orthonormalized in the sense that if all A-matrices to the left of some matrix $A^{s_n}$ are orthogonalized in the left-handed sense, then the basis on the left-hand side of $A^{s_n}$ is orthonormal (i.e., the identity operator in the effective Hilbert space is trivial). Conversely, if all A-matrices to the right of $A^{s_n}$ are orthogonalized in the right-handed sense, then the basis on the right-hand side of $A^{s_n}$ is orthonormal. Usually, we want both these conditions to be true simultaneously. Note that it is not, in general, possible for all of the A-matrices (including $A^{s_n}$ itself) to be in orthonormal form at the same time. There are several ways of transforming an arbitrary MPS into this normalized form. Two ways that we consider here are the singular value decomposition (SVD), and the related reduced density matrix, as used in DMRG [1]. The simplest, and in principle the fastest, is the SVD, well-known from linear algebra [27]. For example, for the left-handed orthogonality constraint on $A^s_{ij}$, where we have re-inserted the matrix indices $i, j$, we consider $s, i$ to be a single index of dimension $dm$, giving an ordinary $dm \times m$ dimensional matrix, and carry out the singular value decomposition,
$$A^s_{ij} = \sum_{kl} U_{(si)k}\, D_{kl}\, (V^\dagger)_{lj} , \tag{3}$$
where $U$ is column-orthogonal, $U^\dagger U = 1$, and $V^\dagger$ is row-orthogonal, $V^\dagger V = 1$. $D$ is a non-negative diagonal matrix containing the singular values. This form coincides with the Schmidt decomposition, where $D$ gives the coefficients of the wavefunction in the Schmidt basis [21]. The matrix $U$ therefore satisfies the left-handed orthogonality constraint, so we use this as the updated A-matrix, and multiply the A-matrix on the right by $DV^\dagger$. This implies that the A-matrix on the right is no longer orthonormalized (even if it was originally), but we can apply this procedure iteratively, to shift the non-orthogonal A-matrix to the boundary (or even beyond it), at which point the $1 \times 1$ A-matrix coincides with the norm of the wavefunction. An important point here is that we can choose to discard some of the states, typically those that have the smallest singular value. This reduces the matrix dimension $m$, at the expense of introducing an approximation to our wavefunction, such that the squared norm of the difference of our approximate and exact wavefunctions is equal to the sum of the squares of the discarded singular values. Note however that the singular values only correspond to the coefficients of the Schmidt decomposition if all of the remaining A-matrices are orthogonalized according to Eq. (2). If this is not the case, the singular values are not useful for determining which states can be safely discarded. Alternatively, we can construct the reduced density matrix, obtained by tracing over half of the system. This is achieved by
$$\rho^{s's}_{ij} = \sum_k A^{s'*}_{ik} A^{s}_{jk} , \tag{4}$$
which is a $dm \times dm$ matrix, with $m$ eigenvalues coinciding with the values on the diagonal of $D^2$, and the remaining eigenvalues zero. Again, the eigenvalues are only meaningful if the remaining A-matrices are appropriately orthogonalized. The utility of the density matrix approach over the SVD is that we can introduce mixing terms into the density matrix, which can have the effect of stabilizing the algorithm and accelerating the convergence; this is further discussed in Sec. 4.

The overlap of two MPS is an operation that appears in many contexts. For wavefunctions this gives the fidelity of the two states, and for operators this is equivalent to the operator inner product $\langle A|B\rangle = \mathrm{Tr}\, A^\dagger B$, which induces the Frobenius norm. Direct expansion of the MPS form yields,
$$\langle A|B\rangle = \sum_{\{s_i\}} \left( \mathrm{Tr}\, A^{s_1*} A^{s_2*} \cdots \right) \left( \mathrm{Tr}\, B^{s_1} B^{s_2} \cdots \right) = \mathrm{Tr} \sum_{\{s_i\}} \left( A^{s_1*} \otimes B^{s_1} \right) \left( A^{s_2*} \otimes B^{s_2} \right) \cdots . \tag{5}$$
Due to the open boundary conditions, the direct product $E_1 = A^{s_1*} \otimes B^{s_1}$ reduces to an ordinary $m \times m$ matrix, after which we can construct successive terms recursively, via
$$E_n = \sum_{s_n} A^{s_n\dagger}_n E_{n-1} B^{s_n}_n , \tag{6}$$
with an analogous formula if we wish to start at the right hand side of the system and iterate towards the left boundary,
$$F_n = \sum_{s_n} A^{s_n}_n F_{n+1} B^{s_n\dagger}_n . \tag{7}$$
For the purposes of numerical stability, it is advisable for the A- and B-matrices to be orthogonalized in the same pattern, that is, E-matrices are associated exclusively with the left-handed orthogonality constraint and F-matrices are associated with the right-handed orthogonality constraint. If we iterate all the way to the boundary, the E- (or F-) matrix ends up as a $1 \times 1$ matrix that contains the final value of the fidelity. Alternatively, we can iterate from both ends towards the middle and calculate the fidelity as $\mathrm{Tr}\, E F^\dagger$.
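As a concrete illustration of the two procedures of this section (a sketch under the array conventions of the earlier snippet, not an optimized implementation), a left-to-right SVD orthogonalization sweep implementing Eq. (3), and the overlap recursion of Eq. (6), can be written as:

```python
import numpy as np

def left_orthogonalize(A, m_max=None):
    """Sweep left to right, bringing each tensor into the left-handed form
    of Eq. (2) by SVD and optionally truncating to m_max singular values."""
    for n in range(len(A) - 1):
        d, ml, mr = A[n].shape
        mat = A[n].transpose(1, 0, 2).reshape(ml * d, mr)  # rows: combined (i, s)
        U, S, Vh = np.linalg.svd(mat, full_matrices=False)
        if m_max is not None:                    # discard the smallest singular values
            U, S, Vh = U[:, :m_max], S[:m_max], Vh[:m_max]
        k = S.size
        A[n] = U.reshape(ml, d, k).transpose(1, 0, 2)      # now sum_s A^s+ A^s = 1
        A[n + 1] = np.einsum('ij,sjk->sik', np.diag(S) @ Vh, A[n + 1])
    return A

def overlap(A, B):
    """<A|B> by the E-matrix recursion of Eq. (6)."""
    E = np.ones((1, 1))                          # trivial 1 x 1 left boundary
    for An, Bn in zip(A, B):
        E = np.einsum('sij,ik,skl->jl', An.conj(), E, Bn)
    return E[0, 0]
```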
Local updates
The key notion of the matrix product scheme is that of local updates; that is, we modify, typically through some iterative optimization scheme, one (or perhaps a few) A-matrix while keeping the remainder fixed. A useful and flexible alternative is the center matrix formulation, where, instead of modifying an A-matrix directly, we introduce an additional matrix $C$ into the MPS,
$$A^{s_1} A^{s_2} \cdots A^{s_n}\, C\, A^{s_{n+1}} \cdots A^{s_L} . \tag{8}$$
This allows us to preserve orthogonality of the matrices at all times; matrices A s i for i ≤ n are normalized always according to the left-handed constraint, and matrices for i > n are normalized according to the right-handed constraint. We directly modify only the matrix C which simplifies the local optimization problem as C is just an ordinary matrix. To introduce the local degrees of freedom, say for the | s n states, we expand the basis for A sn . That is, we replace the m × m dimensional matrices A sn with m × dm matrices A ′ sn , given by
$$A'^{s_n}_{ij} = \delta_{id+s_n,\, j} , \tag{9}$$
and introduce the dm × m dimensional center matrix
$$C_{jk} = A^{s_n}_{ik} , \tag{10}$$
with j = id + s n running over dm states. This doesn't change the physical wavefunction, as A ′ sn C = A sn . Similarly, we can expand the basis for the A-matrix on the right side of C, to give the effect of modifying either a single A-matrix, or two (or more) at once. In the center matrix formulation, the singular value decomposition required for truncating the basis is simply the ordinary SVD on the matrix C = UDV † , and we multiply (for a right-moving iteration) A ′ sn U, which preserves the left-handed orthogonality constraint, and DV † A s n+1 which is not orthogonal, but becomes so when we again expand the basis to construct the new C matrix. The density matrix in the center matrix formulation is simply ρ = CC † or ρ = C † C for left and right moving iterations respectively. For readers already familiar with DMRG, the center matrix corresponds exactly with the matrix form of the superblock wavefunction [29].
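A single right-moving step of the center-matrix scheme might be sketched as follows (the index grouping matches Eqs. (9)-(10) up to a relabeling; m_max and the returned truncation weight are our own additions):

```python
import numpy as np

def move_right(C, A_next, m_max):
    """Absorb the site to the right of the center matrix, split it off
    again by SVD, and truncate to at most m_max states."""
    d, mp, mpp = A_next.shape
    m = C.shape[0]
    T = np.einsum('ij,sjk->isk', C, A_next).reshape(m * d, mpp)
    U, S, Vh = np.linalg.svd(T, full_matrices=False)
    keep = min(m_max, S.size)
    discarded = float(np.sum(S[keep:] ** 2))     # contribution to the truncation error
    U, S, Vh = U[:, :keep], S[:keep], Vh[:keep]
    A_new = U.reshape(m, d, keep).transpose(1, 0, 2)  # left-orthonormal site tensor
    C_new = np.diag(S) @ Vh                      # new (non-orthogonal) center matrix
    return A_new, C_new, discarded
```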
Matrix product arithmetic
The utility of the MPS approach is realized immediately upon attempting manipulations on the wavefunction Eq. (1). Suppose we have two distinct MPS, defined over the same local Hilbert spaces,
$$|\Psi_A\rangle = A^{s_1} A^{s_2} \cdots A^{s_L}\, |s_1\rangle |s_2\rangle \cdots \qquad |\Psi_B\rangle = B^{s_1} B^{s_2} \cdots B^{s_L}\, |s_1\rangle |s_2\rangle \cdots \tag{11}$$
The superposition $|\Psi_C\rangle = |\Psi_A\rangle + |\Psi_B\rangle$ is formed by taking the sum of the matrix products, $A^{s_1} A^{s_2} \cdots + B^{s_1} B^{s_2} \cdots$, which can be factorized into a new MPS,
$$|\Psi_C\rangle = C^{s_1} C^{s_2} \cdots , \quad \text{with} \quad C^{s_i} = A^{s_i} \oplus B^{s_i} . \tag{12}$$
To preserve the one-dimensional basis at the boundaries, the direct sum is replaced by a concatenation of columns or rows, for the left and right boundary respectively. This procedure results in an MPS of increased dimension, m C = m A + m B . Thus, after constructing these matrices we need to re-orthogonalize the state, and then we can, if necessary, truncate the basis size to a given truncation error, which is well defined here and measures the exact difference between the original and truncated wavefunctions. Alternatively, the normalized and truncated MPS | Ψ C can be constructed directly, by calculating the overlap matrices E between | Ψ A and | Ψ B . From the E-matrices introduced in Eq. (6), we can construct directly the orthogonalized reduced density matrices of | Ψ C and truncate the basis as required, in a single step. This approach has better computational scaling than the two-step procedure of first orthogonalizing and then truncating, especially when the number of MPS in the superposition is large. But in general, iterative optimization approaches, where we use a DMRG-like algorithm to optimize the overlap Ψ C | (| Ψ A + | Ψ B ), have even better performance scaling with large m or more states in the superposition.
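In code, the direct-sum construction of Eq. (12), including the boundary concatenations, reads (a sketch; a real implementation would re-orthogonalize and truncate the result as described above):

```python
import numpy as np

def mps_add(A, B):
    """Superposition |Psi_A> + |Psi_B> via C^s = A^s (+) B^s, Eq. (12);
    at the boundaries the direct sum becomes a row/column concatenation."""
    C, last = [], len(A) - 1
    for n, (An, Bn) in enumerate(zip(A, B)):
        d = An.shape[0]
        if n == 0:                               # 1 x (mA + mB)
            C.append(np.concatenate([An, Bn], axis=2))
        elif n == last:                          # (mA + mB) x 1
            C.append(np.concatenate([An, Bn], axis=1))
        else:                                    # block-diagonal direct sum
            Cn = np.zeros((d, An.shape[1] + Bn.shape[1],
                              An.shape[2] + Bn.shape[2]))
            Cn[:, :An.shape[1], :An.shape[2]] = An
            Cn[:, An.shape[1]:, An.shape[2]:] = Bn
            C.append(Cn)
    return C
```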
Operators
A useful generalization of the MPS structure Eq. (1) is to use it to represent an operator (an MPO) instead of a wavefunction. This has been used before for calculating finite-temperature density matrices [12], but here we instead want to use this structure to represent the Hamiltonian operator itself. All Hamiltonian operators with finite-range interactions have an exact MPS representation with a relatively small matrix dimension $M$. For example, the Ising model in a transverse field has a dimension $M = 3$, and the fermionic Hubbard model has dimension $M = 6$. We use the capital letter to distinguish from the dimension of the wavefunction, $m$. Similarly, the local dimension of the upper index is denoted here by $D$, which is usually just equal to $d^2$, but slightly complicated in the case of non-Abelian quantum numbers (see Sec. 5.2). We denote an MPO by the form
$$\sum_{\{s_i, s'_i\}} M^{s'_1 s_1} M^{s'_2 s_2} \cdots M^{s'_L s_L}\, |s'_1\rangle\langle s_1| \otimes |s'_2\rangle\langle s_2| \otimes \cdots , \tag{13}$$
where again we require that the first and last dimensions are M = 1, for open boundary conditions. The orthogonality constraint used previously for the MPS, Eq. (2), is not appropriate for Hamiltonian operators.
When applied to an operator, the usual orthogonality constraints utilize the (Frobenius) operator norm, which scales exponentially with the dimension of the Hilbert space. With this normalization, components of an MPO Hamiltonian, such as the identity operator or some interaction term, tend to differ in magnitude by some factor that increases exponentially with the lattice size. Arithmetic manipulations on such quantities are a recipe for catastrophic cancellation [27], resulting in loss of precision. Mixing operators with a unitary transformation (for example $O'_1 = O_1 \cos\theta + O_2 \sin\theta$, $O'_2 = -O_1 \sin\theta + O_2 \cos\theta$) will lead to a disaster if $O_1$ and $O_2$ differ by a sufficiently large order of magnitude, $\sim 10^{16}$ for typical double-precision floating point arithmetic. But such rotations are inevitable in the orthogonalization procedure, because in general the operator inner product $\langle O_1 | O_2 \rangle = \mathrm{Tr}\, O_1^\dagger O_2$ will not be zero. Instead we completely avoid mixing different rows/columns of the operator M-matrices, only collapsing a row or column if it is exactly parallel with another row or column. In this case, the actual norm of each component of the A-matrices is irrelevant, as they are never mixed with each other (but see also the discussion of the single-site algorithm in Sec. 4). For physical Hamiltonian operators this remains essentially optimal, with the minimum possible matrix dimension $M$. The only operators for which this orthogonalization scheme does not produce an optimal representation are operators that have a form analogous to an AKLT [28] state where the local basis states of the $S = 1$ chain are replaced by local operators. The resulting operator contains an exponentially large number of $N$-body interactions for all $N \to \infty$. We know of no physical Hamiltonians of this form.
Given a Hamiltonian as a sum of finite-range interactions, it is possible to construct the operator M-matrices such that they are entirely lower (or upper) triangular matrices, thus in principle we can 'normalize' the matrices via some kind of generalized LU or QR decomposition. In practice we don't need to do this, as the Hamiltonian can easily be constructed in lower-triangular form from the beginning. Imposing, again without loss of generality, that the top-left and bottom-right components of the operator M-matrices are equal to the identity operator $I$, we can construct the sum of 1-site local terms $H = \sum_i X_i$ as a position-independent MPS,
$$M = \begin{pmatrix} I & 0 \\ X & I \end{pmatrix} , \tag{14}$$
which we regard as a 2 × 2 matrix, the elements of which are d × d dimensional local operators. For nearest-neighbor terms,
$$H = \sum_i X_i Y_{i+1} , \quad \text{we have} \quad M = \begin{pmatrix} I & 0 & 0 \\ Y & 0 & 0 \\ 0 & X & I \end{pmatrix} , \tag{15}$$
with the obvious generalization to $N$-body terms. The direct sum and direct product of lower triangular matrices is itself lower triangular, thus this form can be preserved throughout all operations. For open boundary conditions, the left (right) boundary $1 \times M$ (or $M \times 1$) matrices are initialized to $(0, \ldots, 0, I)$ and $(I, 0, \ldots, 0)^T$ respectively.

The principal advantage of formulating the Hamiltonian operator (and indeed, all operators needed in the calculation) in this way is that it can be manipulated extremely easily, amounting to a symbolic computation on the operators. This is in contrast to the ad hoc approach used in past DMRG and MPS approaches, where the block transformations required for each operator are encoded manually, with limited scope for arithmetic operations. In particular, the sum of operators is achieved via Eq. (12). Products of operators are achieved by matrix direct product; given MPOs $A$ and $B$, the product $C = AB$ is given by the matrices
$$C^{s's} = \sum_t A^{s't} \otimes B^{ts} . \tag{16}$$
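As an illustration, a bulk M-matrix for the transverse-field Ising chain and the product rule of Eq. (16) can be written as follows (we assume a W[a, b, s', s] storage layout with a, b the matrix indices; the couplings J and h and all names are illustrative, and the boundary rows/columns are taken as described above):

```python
import numpy as np

Id = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def ising_mpo_site(J, h):
    """Bulk M-matrix for H = -J sum_i Z_i Z_{i+1} - h sum_i X_i, in the
    lower-triangular form of Eqs. (14)-(15)."""
    W = np.zeros((3, 3, 2, 2))
    W[0, 0] = Id
    W[1, 0] = Z
    W[2, 0] = -h * X
    W[2, 1] = -J * Z
    W[2, 2] = Id
    return W

def mpo_product(WA, WB):
    """C^{s's} = sum_t A^{s't} (x) B^{ts}, Eq. (16): operator product in
    the physical indices, direct product in the matrix indices."""
    C = np.einsum('abst,cdtu->acbdsu', WA, WB)
    return C.reshape(WA.shape[0] * WB.shape[0],
                     WA.shape[1] * WB.shape[1],
                     WA.shape[2], WB.shape[3])
```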
An implication of this is that the square of an MPO has a matrix dimension of at most $M^2$, which, since $M$ is usually rather small, means that it is quite practical to calculate expectation values of higher-order moments, for example the variance
$$\sigma^2 = \langle H^2 \rangle - \langle H \rangle^2 = \left\langle (H - E)^2 \right\rangle , \tag{17}$$
which has been mentioned previously [31], as it gives a rigorous lower bound on the energy (although with no guarantee that this corresponds to the ground-state). In practice this lower bound is too wide to be useful in all but the simplest cases, but of more interest is the property that the variance is, to first order, proportional to the squared norm of the difference between the exact and numerical wavefunctions, and therefore also proportional to the truncation error [33] (see Sec. 4). Thus, this quantity gives a quantitative estimate of the goodness of the wavefunction even for situations where the truncation error is not available. For our numerical MPS algorithms the variance takes the role of the precision $\epsilon$ in numerical analysis [27], via $\epsilon \sim \sqrt{\sigma^2}$. Of a similar form to the product of two operators, the action of an operator $M$ on a wavefunction $|A\rangle$ gives a wavefunction $|B\rangle$ with matrix elements,
$$B^{s'} = \sum_s M^{s's} \otimes A^{s} . \tag{18}$$
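Under the same conventions as the earlier snippets, Eq. (18) is essentially a one-liner; note that the bond dimension of the result grows to $Mm$:

```python
import numpy as np

def apply_mpo(W_list, A_list):
    """B^{s'} = sum_s M^{s's} (x) A^s, Eq. (18), applied site by site."""
    B = []
    for W, A in zip(W_list, A_list):             # W: (M, M', d, d), A: (d, m, m')
        Bn = np.einsum('abut,tij->uaibj', W, A)  # u = s'; fused bonds (a,i), (b,j)
        B.append(Bn.reshape(A.shape[0], W.shape[0] * A.shape[1],
                            W.shape[1] * A.shape[2]))
    return B
```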
The MPO formulation also gives a natural form for the evaluation of expectation values, similarly to the fidelity of Eq. (5),
$$\langle A \,|\, M \,|\, B \rangle = \sum_\alpha \mathrm{Tr}\, E^\alpha_n F^{\alpha\dagger}_n , \tag{19}$$
where the $E$- and $F$-matrices now have an additional index $\alpha$ that represents the terms in the MPO $M$, with a recursive definition
$$E^{\alpha'}_n = \sum_{s',s,\alpha} M^{s's}_{\alpha'\alpha}\, A^{s'\dagger}_n E^{\alpha}_{n-1} B^{s}_n \qquad F^{\alpha'}_n = \sum_{s',s,\alpha} M^{s's}_{\alpha'\alpha}\, A^{s'}_n F^{\alpha}_{n+1} B^{s\dagger}_n \tag{20}$$
where again we can either iterate all the way to a boundary, at which point the $\alpha$ index collapses down to one dimension and the $E^\alpha$ or $F^\alpha$ are $1 \times 1$ dimensional matrices containing the expectation value, or we can iterate from both boundaries and meet at the middle, where our expectation value is given by the scalar product Eq. (19).

Incidentally, given that the identity operators occur in a fixed location in the operator M-matrix (i.e., at the top-left and bottom-right of the M-matrix), this fixes the index $\alpha$ of the reduced Hamiltonian and identity operators for the left and right partitions of the system. That is, in the application of the Hamiltonian $E^\alpha$, $F^\alpha$ matrices to the wavefunction we are guaranteed that the $\alpha = 1$ component of $E^\alpha \otimes F^{\alpha\dagger}$ corresponds precisely to $H_L \otimes I_R$, and the $\alpha = M$ component corresponds to $I_L \otimes H_R$. Thus, even after an arbitrary series of MPO computations we can still identify exactly which component of the $E$, $F$ matrices corresponds to the block Hamiltonian. This is useful for eigensolver preconditioning schemes [30], for example to change to a basis where the block Hamiltonian is diagonal.
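A minimal transcription of the left recursion in Eq. (20) is shown below (assuming the first and last W tensors already carry the 1-dimensional boundary indices, so the boundary E is the trivial 1 × 1 × 1 tensor):

```python
import numpy as np

def expectation(A, W_list, B):
    """<A|M|B> by iterating Eq. (20) from the left boundary to the right."""
    E = np.ones((1, 1, 1))                       # indices (alpha, i, j)
    for An, W, Bn in zip(A, W_list, B):
        E = np.einsum('abut,uik,aij,tjl->bkl', W, An.conj(), E, Bn)
    return E[0, 0, 0]
```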
As an example of the utility of this approach, Fig. 1 shows the time evolution of the magnetization of the impurity spin in the single impurity Anderson model (SIAM), where the ground-state is obtained with a small magnetic field which is then turned off at time t = 0. The MPS operator approach readily allows the evaluation of the commutators required for a small t expansion of the expectation value of an observable in the Heisenberg picture,
$$A(t) = A + \frac{it}{\hbar}\, [H, A] - \frac{t^2}{2!\,\hbar^2}\, [H, [H, A]] - \frac{i t^3}{3!\,\hbar^3}\, [H, [H, [H, A]]] + \cdots . \tag{21}$$
Since the number of terms in the repeated commutator will, in general, increase exponentially, the accessible time-scales from such an expansion are clearly limited. Nevertheless this is a very fast and accurate way to obtain short-time-scale dynamics, and in this example 12th order is easily enough to determine the $T_1$ relaxation rate. For this calculation, the terms up to $t^8$ took a few seconds to calculate, while the $t^{10}$ term took 6 minutes and the $t^{12}$ term took just over an hour, on a single-processor 2 GHz Athlon64. This time was divided between calculating the MPO matrix elements (the dimension of which was just over 2500 at the impurity site), and calculating the expectation value itself.
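For a system small enough that H and A fit as dense matrices, the expansion of Eq. (21) can be cross-checked directly; in the MPS approach the nested commutators would instead be built from MPO sums and products, Eqs. (12) and (16). A dense-matrix sketch:

```python
import numpy as np
from math import factorial

def heisenberg_expansion(H, A, t, order, hbar=1.0):
    """Truncated expansion of A(t) = exp(iHt/hbar) A exp(-iHt/hbar), Eq. (21)."""
    term = A.astype(complex)                     # current nested commutator
    result = term.copy()
    for n in range(1, order + 1):
        term = H @ term - term @ H               # one more [H, .]
        result = result + (1j * t / hbar) ** n / factorial(n) * term
    return result
```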
DMRG
We now have all of the ingredients necessary to construct the DMRG algorithm for determining the ground-state. Indeed, given the previous formulations, the DMRG itself is rather simple; using the center matrix formulation, we iteratively improve the wavefunction locally by using the center matrix C as input to an eigensolver for the ground-state of the Hamiltonian. The details of this calculation are precisely as for DMRG, already covered elsewhere [29].
An important component of DMRG, which has been neglected in some matrix product approaches, is the truncation error. If only a single site is modified at a time, the maximum number of non-zero singular values is bounded by the matrix dimension m, thus the matrix dimension cannot be incrementally increased as the calculation progresses. Some way of avoiding this limitation is practically essential for a robust algorithm. The original DMRG formulation [1] solved this problem by modifying two A-matrices simultaneously, equivalent to expanding the matrix dimension for both the left and right A-matrices in Eq. (8) so that the center matrix has dimension md × md. A scheme for single-site algorithms that avoids the limit on the singular values was introduced by Steven White [34], which uses a mixed density matrix constructed by a weighted sum of the usual reduced density matrix and a perturbed density matrix formed by applying the E α -matrices (on a right-moving sweep) or F α -matrices (on a left-moving sweep) of the Hamiltonian,
$$\rho' = \rho + c \sum_\alpha E^\alpha \rho E^{\alpha\dagger} , \tag{22}$$
where $c$ is some small factor that fixes the magnitude of the fluctuations. This solves nicely the problem of the bound on the number of singular values, and introduces fluctuations into the density matrix that give the algorithm good convergence properties, often better than the two-site algorithm. A minor problem is that the scaling of the $E^\alpha$-matrices is not well defined, in that we can scale the $E^\alpha$-matrices by an arbitrary $M \times M$ non-singular matrix $X_{\alpha'\alpha}$, while at the same time scaling the $F$-matrices by $X^{-1}$. For one- and two-site terms, there is an 'obvious' scaling factor to use, whereby the scaling factors are chosen such that the (Frobenius) operator norm of the $E$- and $F$-matrices are identical, but I don't know how this would apply more generally. An alternative that appears interesting is to apply the full Hamiltonian to a density operator for the full system, $\rho' = \rho + c\, \mathrm{Tr}_R\, H (\rho \otimes I) H$, constructed from the left (right) reduced density matrix and the right (left) identity operator, followed by a trace over the right (left) partition. In MPS form, this operation is
$$\rho' = \rho + c \sum_{\alpha,\beta} E^\alpha \rho E^{\beta\dagger}\, G_{\alpha\beta} , \tag{23}$$
where $G_{\alpha\beta} = \mathrm{Tr}\, F^{\alpha\dagger} F^\beta$ is an $M \times M$ coefficient matrix. However, this scheme often fails; incorporating the $G_{\alpha\beta}$ matrix reduces the fluctuations such that $\sum_{\alpha\beta} E^\alpha \rho E^{\beta\dagger} G_{\alpha\beta}$ differs little from $\rho$ itself, and the algorithm typically fails to reach the ground-state. The single-site algorithm [34] corresponds to choosing $G_{\alpha\beta} = \delta_{\alpha\beta}$. A useful compromise appears to be using only the diagonal elements, such that $G_{\alpha\beta} = \delta_{\alpha\beta}\, \mathrm{Tr}\, F^{\alpha\dagger} F^\beta$, but this is surely not the last word on this approach.

Both the two-site and mixed single-site algorithms inevitably result in a reduction in the norm of the wavefunction, by truncating the smallest non-zero eigenvalues of the density matrix. The sum of the discarded eigenvalues, summed over all iterations in one sweep, is equal to the truncation error $\tau$, familiar from DMRG [1] (but note that it is common in the literature to quote an average or maximum truncation error per site). This quantity is useful in giving an estimate of the error in the wavefunction, as it is, for a properly converged wavefunction, proportional to the norm of the difference between the exact ground-state and the numerical approximation. The presence of the truncation error explains why the bare single-site algorithm, despite having a slightly better variational wavefunction than the two-site or mixed single-site algorithms [35], converges much slower: the single-site algorithm is a highly constrained optimization within an $m$-dimensional basis, whereas the two-site and mixed single-site algorithms are selecting the optimal $m$ basis states out of a pool of a much larger set of states, namely the discarded states at each iteration (in total $\sim Lm$ states). While the notion of truncation error remains useful in MPS algorithms, for the purposes of error analysis we much prefer the variance Eq. (17), as being a direct measure of the accuracy of the wavefunction, independent of the details of a particular algorithm [33].

Low-lying excited states can be constructed using this algorithm. This has been done in the past in DMRG by targeting multiple eigenstates in the density matrix, but the MPS formulation allows a substantial improvement. Namely, it is easy to incorporate into the eigensolver a constraint that the obtained wavefunction is orthogonal to an arbitrary set of predetermined MPSs. That is, after constructing the MPS approximation to the ground-state, we can, as a separate calculation, determine the first excited state by running the algorithm again with the constraint that our obtained wavefunction is orthogonal to the ground-state. This is achieved by constructing the E-matrices that project the set of states to orthogonalize against onto the local Hilbert space. These matrices are precisely those used in constructing the fidelity, Eq. (5); thus, given the center matrix of some state $C_X$, we project this onto the current Hilbert space, $C'_X = E C_X F^\dagger$, and as part of the eigensolver, orthogonalize our center matrix against this state. This is a very fast operation, much faster than even a single Hamiltonian-wavefunction multiplication. So it is quite practical to orthogonalize against a rather large number of states; the practical limit is rather the numerical limitations in orthogonalizing the Krylov subspace in the eigensolver.
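Returning to the density-matrix mixing of Eqs. (22)-(23): in code it is only a few lines (a sketch: c ~ 1e-4 is an illustrative magnitude to be tuned, and E_list is assumed to hold the m × m blocks E^alpha of the Hamiltonian E-matrices):

```python
import numpy as np

def mixed_density_matrix(C, E_list, c=1e-4):
    """rho' = rho + c * sum_alpha E^alpha rho E^alpha+, Eq. (22)."""
    rho = C @ C.conj().T                         # reduced density matrix from C
    mix = sum(E @ rho @ E.conj().T for E in E_list)
    return rho + c * mix
```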
If this is combined with an eigensolver capable of converging to eigenvalues in the middle of the spectrum (say, the lowest eigenvalue larger than some bound E 0 ), then we need only a small number of states to orthogonalize against, say half a dozen states immediately above E 0 in energy. In our numerical tests it seems to be rather common to skip eigenvalues, which is why we cannot simply orthogonalize against a single state. With a larger number of states to orthogonalize against, skipping eigenvalues is less of a problem as we are likely to recover the missing eigenstate on a later calculation. Using this approach, quantities such as the level spacing statistics can be determined for system sizes far beyond exact diagonalization [33].
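The orthogonalization step itself, once the states have been projected into the current effective basis as C'_X = E C_X F†, amounts to removing their components from the center matrix (a sketch with illustrative names; a production eigensolver would fold this into its Krylov orthogonalization):

```python
import numpy as np

def orthogonalize_center(C, projected):
    """Remove from C its overlap with each projected state C'_X."""
    for P in projected:
        P = P / np.linalg.norm(P)
        C = C - np.vdot(P, C) * P                # vdot conjugates its first argument
    return C
```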
Quantum Numbers
An important feature of matrix product states is that they can easily be constrained by quantum numbers representing the global symmetries of the Hamiltonian, as long as the symmetry is not broken by the spatial geometry of the MPS. For example, internal rotational symmetries such as U(1) and SU(2) [36] can be maintained exactly, but for a real-space lattice we cannot utilize the momentum in the same way because the representation itself violates this symmetry ‡. To achieve this, we impose a symmetry constraint on the form of the A-matrices, so that they are irreducible tensor operators. That is, under a symmetry rotation the matrix A s for each local degree of freedom s transforms according to an irreducible representation D(j s ) of the global symmetry group. This is a very general procedure, that is applicable to essentially all MPS approaches and generalizations thereof.
Abelian symmetries
For Abelian symmetries, the representations are one-dimensional, therefore the set of quantum numbers labeling the irreducible representations also forms a group, which we can write as
$$D(j) \otimes D(k) = D(j + k) , \tag{24}$$
for two representations $D(j)$ and $D(k)$, where $j + k$ denotes the group operation. Thus to incorporate Abelian symmetries into the algorithm we simply attach a quantum number to all of the labels appearing in the MPS, with the constraint that each A-matrix transforms irreducibly, so that the only non-zero matrix elements are
$$A^{k}_{q'q} \qquad q' = q + k , \tag{25}$$
where k, q ′ , q are the quantum numbers attached to the local basis state and left and right matrix basis states respectively. We have suppressed here indices not associated with a quantum number, a convention which will be followed for the remainder of the paper. By convention, for our open boundary condition MPS we choose the right hand vacuum state to have quantum number zero. The symmetry constraint Eq. (25) then implies that the quantum number at the left hand vacuum will denote how the state as a whole transforms (the target state, in DMRG terminology). This is the only real difference between DMRG and MPS algorithms, in that the DMRG convention is to construct both blocks starting from a scalar (quantum number zero) vacuum state, so that the superblock wavefunction is a tensor product of two ket states,
$$|\Psi\rangle = \sum_{uv} \psi_{uv}\, |u\rangle \otimes |v\rangle , \qquad u + v = \text{target} , \tag{26}$$
‡ See also Ref. [37] for a real-space approach to constructing momentum eigenstates.
whereas for the MPS formulation the superblock wavefunction is represented by a scalar operator with a tensor product basis $|u\rangle \otimes \langle v|$ with quantum numbers $u = v$. This means that, in contrast to the usual formulation of DMRG, the target quantum number is not encoded in the superblock but rather in the left vacuum state. A consequence of this is that DMRG is capable of representing simultaneously states with different quantum numbers, but an MPS is not. This is an important detail, for example in the calculation of dynamical correlations, as both the correction vector [38] and the similar DDMRG [39] algorithm require a basis optimized for both the ground-state $|\Psi\rangle$ and the so-called Lanczos-vector $A|\Psi\rangle$, where $A$ is some operator that may not be scalar. However, the MPS formulation allows significant optimizations to these algorithms, whereby the calculation of the ground-state is decoupled from that of the Lanczos vector [31,32] and the two need never appear in the same basis.
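A minimal way to realize the selection rule of Eq. (25) is to store only the allowed blocks, keyed by their quantum numbers; the following sketch (additive U(1) charges; the zero arrays are placeholders for actual block data) enumerates them:

```python
import numpy as np

def allowed_blocks(k_of_s, q_left, q_right):
    """Nonzero blocks of A^k_{q'q}: only q' = q + k survives, Eq. (25).
    q_left / q_right map quantum numbers to matrix-basis block sizes."""
    blocks = {}
    for s, k in enumerate(k_of_s):               # local basis state -> its charge k
        for q, mq in q_right.items():
            qp = q + k
            if qp in q_left:
                blocks[(s, qp, q)] = np.zeros((q_left[qp], mq))
    return blocks

# e.g. spinless fermions, local charges (0, 1) for empty/occupied:
blocks = allowed_blocks([0, 1], {0: 2, 1: 3}, {0: 2, 1: 1})
```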
Non-Abelian symmetries
If the symmetry group is large enough that some elements do not commute with each other, then it is no longer possible to construct a basis that simultaneously diagonalizes all of the generators, hence the approach of the previous section needs some modification. Instead, we label the basis states by quantum numbers that denote the representation, which is no longer simply related to the group elements themselves, as the representations are in general no longer ordinary numbers, but instead are matrices of dimension $> 1$, and Eq. (24) no longer applies. For SU(2), we choose to label the representations by the total spin $s$, being related to the eigenvalue of the spin operator, $S^2 = s(s+1)$.
Assuming that all of the required operations can be formulated in terms of manipulations of these representations, we have a formulation that is manifestly SU(2) invariant; the rotational invariance is preserved at all steps and at no time in the calculation is it necessary to choose an axis of quantization [36]. This supersedes the earlier approach based on the Clebsch-Gordan coefficients [40]. The non-Abelian formulation is an important optimization, because it increases the performance of the algorithm by an order of magnitude or more [36] compared with the Abelian case, while enabling more accurate and detailed information about the ground-state magnetization. The basic ingredient that enables this rotationally invariant construction is the Wigner-Eckart theorem [41], which we can state as: When written in an angular momentum basis, each matrix element of an irreducible tensor operator is a product of two factors, a purely angular momentum dependent factor (the "Clebsch-Gordan coefficient") and a factor that is independent of the projection quantum numbers (the "reduced matrix element"). We formulate the algorithm in such a way that we store and manipulate only the reduced matrix elements, factorizing out completely the Clebsch-Gordan coefficients. The efficiency improvement resulting from the non-Abelian formulation is that the matrix dimensions m and M now refer to the number of irreducible representations in the basis, which is typically much smaller than the total degree of the representation. For a scalar state, this equivalence is precise: a single representation of degree N in the non-Abelian approach results in N degenerate eigenstates when the symmetry is not used, with a corresponding improvement in efficiency. We do not give here a full introduction to the theory of quantum angular momentum, rather we present, in the style of a reference, the important formulas required to manipulate MPS wavefunctions and operators. For a comprehensive introduction see for example references [42,43].
Using the normalization convention of Biedenharn [42], we define the matrix elements of a tensor operator T [k] transforming as a rank k tensor under SU(2) rotations, as
$$\langle j'm' |\, T^{[k]}_M \,| jm \rangle = \langle j' \| T^{[k]} \| j \rangle\, C^{j\,m\,k\,M}_{j'\,m'} , \tag{27}$$
where $C^{\cdots}_{\cdots}$ is the Clebsch-Gordan (CG) coefficient, $j'$, $j$ label the representations of SU(2), and $m = -j, -j+1, \ldots, j$ and $m' = -j', -j'+1, \ldots, j'$ label the projections of the total spin onto the $z$-axis. Using the orthogonality of the Clebsch-Gordan coefficients, this defines the reduced matrix elements,
$$\langle j' \| T^{[k]} \| j \rangle = \sum_{mM} C^{j\,m\,k\,M}_{j'\,m'}\, \langle j'm' |\, T^{[k]}_M \,| jm \rangle , \tag{28}$$
where $m'$ is arbitrary. Note that this normalization is not the same as that used by Varshalovich et al. [43], who instead use an additional factor $\sqrt{2k+1}$ in the reduced matrix elements. This is a tradeoff; some formulas simplify slightly with this normalization, but the normalization used here has the advantage that the reduced matrix elements of scalar operators (with $k = 0$) coincide with the actual matrix elements, as all of the relevant Clebsch-Gordan coefficients are equal to unity. Given the definition of the reduced matrix elements, we formulate the remaining formulas without further reference to the axis of quantization, except as an intermediate step to relate the reduced matrix elements prior to factorizing out the Clebsch-Gordan coefficients.
The coupling of two operators is just as for the coupling of ordinary spins;
$$\left[ S^{[k_1]} \times T^{[k_2]} \right]^{[k]} , \tag{29}$$
which denotes the set of operators with components
$$\left[ S^{[k_1]} \times T^{[k_2]} \right]^{[k]}_\mu = \sum_{\mu_1 \mu_2} C^{k_1\,\mu_1\,k_2\,\mu_2}_{k\,\mu}\, S^{[k_1]}_{\mu_1}\, T^{[k_2]}_{\mu_2} . \tag{30}$$
Applying the Wigner-Eckart theorem gives, after a few lines of algebra,
$$\langle j' \| \left[ S^{[k_1]} \times T^{[k_2]} \right]^{[k]} \| j \rangle = (-1)^{j+j'+k} \sum_{j''} \sqrt{(2j''+1)(2k+1)}\, \begin{Bmatrix} j' & k_1 & j'' \\ k_2 & j & k \end{Bmatrix} \langle j' \| S^{[k_1]} \| j'' \rangle\, \langle j'' \| T^{[k_2]} \| j \rangle , \tag{31}$$
where {· · ·} denotes the 6j coefficient [42,43].
A special case of the coupling law Eq. (31) that we will need is when the operators act on different spaces, such that they have a tensor product form
$$S^{[k_1]}_{\mu_1} = T^{[k_1]}_{\mu_1}(1) \otimes I(2) , \qquad T^{[k_2]}_{\mu_2} = I(1) \otimes T^{[k_2]}_{\mu_2}(2) . \tag{32}$$
Here $I(i)$ denotes the identity operator and $T^{[k_i]}(i)$ is an irreducible tensor operator with respect to the angular momentum $J(i)$ of part $i$ of a two-part physical system ($i = 1, 2$). The total angular momentum of the system is $J = J(1) + J(2)$. In this case, we write the coupling as $[S^{[k_1]} \times T^{[k_2]}]^{[k]} \equiv [T^{[k_1]}(1) \otimes T^{[k_2]}(2)]^{[k]}$. Repeated application of the Wigner-Eckart theorem to these tensor operators gives, after some algebra,
$$\langle j'(j'_1 j'_2 \alpha'_1 \alpha'_2) \|\, [T^{[k_1]}(1) \otimes T^{[k_2]}(2)]^{[k]} \,\| j(j_1 j_2 \alpha_1 \alpha_2) \rangle = \begin{bmatrix} j_1 & j_2 & j \\ k_1 & k_2 & k \\ j'_1 & j'_2 & j' \end{bmatrix} \langle j'_1(\alpha'_1) \| T^{[k_1]}(1) \| j_1(\alpha_1) \rangle\, \langle j'_2(\alpha'_2) \| T^{[k_2]}(2) \| j_2(\alpha_2) \rangle , \tag{33}$$
where
$$\begin{bmatrix} j_1 & j_2 & j \\ k_1 & k_2 & k \\ j'_1 & j'_2 & j' \end{bmatrix} \equiv \left[ (2j'_1+1)(2j'_2+1)(2j+1)(2k+1) \right]^{\frac{1}{2}} \begin{Bmatrix} j_1 & j_2 & j \\ k_1 & k_2 & k \\ j'_1 & j'_2 & j' \end{Bmatrix} , \tag{34}$$
and the term in curly brackets is the Wigner 9j coefficient, which can be defined as a summation over 6j coefficients [42],
$$\begin{Bmatrix} j_1 & j_2 & j \\ k_1 & k_2 & k \\ j'_1 & j'_2 & j' \end{Bmatrix} \equiv (-1)^{j_1+j_2+j+k_1+k_2+k+j'_1+j'_2+j'} \sum_{j''} (-1)^{2j''} (2j''+1)\, \begin{Bmatrix} j' & k_1 & j'' \\ k_2 & j & k \end{Bmatrix} \begin{Bmatrix} j' & j'_2 & j'_1 \\ j_1 & k_1 & j'' \end{Bmatrix} \begin{Bmatrix} j'' & j_1 & j'_2 \\ j_2 & k_2 & j \end{Bmatrix} . \tag{35}$$
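The bracketed coefficient of Eq. (34) is straightforward to evaluate with an existing Wigner-symbol library; for instance, sympy.physics.wigner provides wigner_9j with row-major arguments (a sanity-check snippet, not part of any MPS production code path):

```python
from sympy import sqrt
from sympy.physics.wigner import wigner_9j

def coupling_coefficient(j1, j2, j, k1, k2, k, jp1, jp2, jp):
    """[(2j'_1+1)(2j'_2+1)(2j+1)(2k+1)]^(1/2) * {9j}, as in Eq. (34)."""
    pref = sqrt((2 * jp1 + 1) * (2 * jp2 + 1) * (2 * j + 1) * (2 * k + 1))
    return pref * wigner_9j(j1, j2, j, k1, k2, k, jp1, jp2, jp)

print(coupling_coefficient(1, 1, 2, 1, 1, 2, 2, 2, 2))   # exact sympy number
```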
We can define an operator norm, corresponding to the usual Frobenius norm, such that
$$\| X^{[k]} \|^2_{\mathrm{frob}} = \mathrm{Tr}\, X^{[k]} \cdot X^{\dagger[k]} = \mathrm{Tr}\, X^{\dagger[k]} \cdot X^{[k]} . \tag{36}$$
After some arithmetic, we see that
$$\| X^{[k]} \|^2_{\mathrm{frob}} = \sum_{j'j} (2j'+1)\, \left| \langle j' \| X^{[k]} \| j \rangle \right|^2 . \tag{37}$$
For the center-matrix formalism, we need the transformation
$$A^{[s]}_{ij} \rightarrow \sum_k C_{ik}\, A'^{[s]}_{kj} , \tag{38}$$
where $k$ is a $d \times m$ dimensional index that encapsulates both a $s'$ and a $j'$ index§: $k \simeq (s', j')$. Requiring $A'^{[s]}_{kj}$ to satisfy the right orthogonality constraint, $A' A'^\dagger = 1$, this requires
$$A'^{[s]}_{kj} = \delta_{j'j}\, \delta_{s's} \quad [\text{with } k \simeq (s', j')] \tag{39}$$
with
$$C_{ik} = A^{[s']}_{ij'} . \tag{40}$$
In the other direction, we need
$$A^{[s]}_{ij} \rightarrow \sum_k A'^{[s]}_{ik}\, C_{kj} , \tag{41}$$
where $k \simeq (s', i')$. Requiring $A'^{[s]}_{ik}$ to satisfy the left orthogonality constraint, $A'^\dagger A' = 1$, this requires
$$A'^{[s]}_{ik} = \delta_{s's}\, \delta_{i'i}\, \sqrt{\frac{2k+1}{2i+1}} \quad [\text{with } k \simeq (s', i')] \tag{42}$$
and
$$C_{kj} = \sqrt{\frac{2i+1}{2k+1}}\, A^{[s']}_{i'j} . \tag{43}$$
The most natural definition for a matrix product operator has two lower indices and three upper,
$$M^{[k]\, s'\, i'}_{\phantom{[k]}\, s\, i} , \tag{44}$$
which transforms as the product of two operators of rank $[k]$, with matrix elements
$$\langle s'q';\, j'm' |\, M^{[k]}_r \,| sq;\, jm \rangle = \langle s';\, j' \| M^{[k]} \| s;\, j \rangle\, C^{s\,q\,k\,r}_{s'\,q'}\, C^{j\,m\,k\,r}_{j'\,m'} . \tag{45}$$
Note that the product of an operator and a state requires a contraction of the index $s$, which has the symmetry of a sum over two lower indices, and then shifting the resulting index $s'$ from upper to lower. For SU(2), the required phase factor is $(-1)^{s+k-s'}$, giving the rule
$$B^{[s']} = (MA)^{[s']} = \sum_s (-1)^{s+k-s'}\, M^{[k]\,s's} \otimes A^{[s]} . \tag{46}$$
The action of a matrix-product operator on another matrix product operator is
$$X^{[x]} = M^{[m]} N^{[n]} , \tag{47}$$
which corresponds to the ordinary (contraction) product in the local basis and the tensor product in the matrix basis, and therefore results in the product of a 6j and a 9j coefficient, from Eqs. (31) and (33) respectively. For the evaluation of matrix elements, we need the operation
$$F'^{a'}_{i'j'} = \sum_{s',s,i,j,a} M^{s's}_{a'a}\, A^{*s'}_{i'i}\, B^{s}_{j'j}\, F^{a}_{ij} \tag{48}$$
On expanding out the reduced matrix elements, we see immediately that the coupling coefficient is
$$F'^{[a']}_{i'j'} = \sum_{a,i,j,k,s,s'} \begin{bmatrix} j & s & j' \\ a & k & a' \\ i & s' & i' \end{bmatrix} M^{[k]\,s's}_{a'a}\, A^{[s']*}_{i'i}\, B^{[s]}_{j'j}\, F^{[a]}_{ij} . \tag{49}$$
Conversely, from the left hand side,
$$E'^{a}_{ij} = \sum_{s',s,i',j',a'} E^{a'}_{i'j'}\, M^{s's}_{a'a}\, A^{*s'}_{i'i}\, B^{s}_{j'j} \tag{50}$$
is
$$E'^{[a]}_{ij} = \sum_{a',i',j',k,s',s} \frac{2i'+1}{2i+1}\, \begin{bmatrix} j & s & j' \\ a & k & a' \\ i & s' & i' \end{bmatrix} E^{[a']}_{i'j'}\, M^{[k]\,s's}_{a'a}\, A^{[s']*}_{i'i}\, B^{[s]}_{j'j} . \tag{51}$$
On interchanging $A \leftrightarrow E$, $B \leftrightarrow F$, this becomes the equation for a direct operator-matrix-product multiply. But using the center-matrix formalism, we want instead the operation
$$C'_{i'i} = \sum_{j',j,k} E^{k}_{i'j'}\, C_{j'j}\, F^{k}_{ij} , \tag{52}$$
where $C$ and $C'$ transform as scalars, i.e. the quantum numbers impose $i' = i$, $j' = j$. This is essentially a scalar product $E \cdot F$, and the coupling coefficients drop out.
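For completeness, the scalar product of Eq. (52), i.e. the application of the effective Hamiltonian to the center matrix inside the eigensolver, reads as follows in the array conventions of the earlier snippets (reduced matrix elements only; the quantum-number bookkeeping is left implicit):

```python
import numpy as np

def apply_effective_h(E, C, F):
    """C'_{i'i} = sum_{j',j,k} E^k_{i'j'} C_{j'j} F^k_{ij}, Eq. (52).
    Shapes: E (M, m, m), C (m, m), F (M, m, m)."""
    return np.einsum('kab,bc,kdc->ad', E, C, F)
```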
Conclusions
In this paper, we have presented an introduction to the MPS formulation of the DMRG algorithm for the calculation of ground- and excited states of one-dimensional lattice Hamiltonians. The MPS formulation is extremely flexible, allowing the possibility for algorithms that act on several distinct wavefunctions at once. The simplest such algorithms are for the fidelity and expectation values involving unrelated wavefunctions, $\langle \psi | \phi \rangle$ and $\langle \psi | M | \phi \rangle$, which are difficult to extract from conventional DMRG. This gives access to new tools for the analysis of quantum phase transitions, by measuring the scaling function and exponents for the fidelity between ground-states as a function of the interaction strength. In addition, the MPS formulation allows optimized versions of algorithms for dynamical correlations [31,32] and time evolution [17], which remains a fertile area for continued algorithmic improvements.

Finally, we note that in the simulation of finite temperature states via a density operator or purification [12,14], in the absence of dissipative terms that mix the particle numbers between the real and auxiliary systems, the symmetries of the system are doubled, such that the symmetries of the Hamiltonian are preserved by the real and auxiliary parts independently. For simulations in a canonical ensemble, this leads to a significant efficiency improvement that, as far as we know, has not yet been taken into consideration.
Figure 1. Polynomial expansion for the relaxation of the impurity magnetization in the SIAM, up to order $t^{12}$. The $\times$ symbols denote the impurity magnetization calculated via adaptive time DMRG; the dashed line is a guide to the eye. Parameters of the calculation were (in units of bandwidth) $\Gamma = 0.1$, $U = 0.2$, $h_0 = 0.1$, $\epsilon_{d0} = 0.05$, on a log-discretized Wilson chain with $\Lambda = 1.8$. At time $t = 0$, the Hamiltonian was switched to $h_1 = 0$ and $\epsilon_{d1} = -0.1$.

§ More precisely, $k$ runs over the Clebsch-Gordan expansion of $s' \otimes j'$.
Acknowledgments

Thanks to Ulrich Schollwöck and Thomas Barthel for many stimulating conversations. While preparing this manuscript, we learned that a rotationally invariant formulation using the Clebsch-Gordan coefficients [40] has been applied to the TEBD algorithm for infinite systems [44].
References

[1] S. R. White, Phys. Rev. Lett. 69, 2863 (1992); Phys. Rev. B 48, 10345 (1993).
[2] A. Klümper, A. Schadschneider and J. Zittartz, J. Phys. A 24, L955 (1991).
[3] M. Fannes, B. Nachtergaele and R. F. Werner, Commun. Math. Phys. 144, 443 (1992).
[4] A. Klümper, A. Schadschneider and J. Zittartz, Z. Phys. B 87, 281 (1992).
[5] B. Derrida, M. R. Evans, V. Hakim and V. Pasquier, J. Phys. A 26, 1493 (1993).
[6] S. Östlund and S. Rommer, Phys. Rev. Lett. 75, 3537 (1995).
[7] M. C. Chung and I. Peschel, Phys. Rev. B 64, 064412 (2001).
[8] G. Vidal, J. I. Latorre, E. Rico and A. Kitaev, Phys. Rev. Lett. 90, 227902 (2003).
[9] N. Schuch, M. M. Wolf, F. Verstraete and J. I. Cirac, preprint arXiv:0705.0292.
[10] V. E. Korepin, Phys. Rev. Lett. 92, 096402 (2004).
[11] F. Verstraete, D. Porras and J. I. Cirac, Phys. Rev. Lett. 93, 227205 (2004).
[12] F. Verstraete, J. J. García-Ripoll and J. I. Cirac, Phys. Rev. Lett. 93, 207204 (2004).
[13] M. Zwolak and G. Vidal, Phys. Rev. Lett. 93, 207205 (2004).
[14] A. E. Feiguin and S. R. White, Phys. Rev. B 72, 220401 (2005).
[15] A. J. Daley, C. Kollath, U. Schollwöck and G. Vidal, J. Stat. Mech.: Theor. Exp. P04005 (2004).
[16] S. R. White and A. E. Feiguin, Phys. Rev. Lett. 93, 076401 (2004).
[17] J. J. García-Ripoll, New J. Phys. 8, 305 (2006).
[18] P. Zanardi and N. Paunković, Phys. Rev. E 74, 031123 (2006).
[19] M. Cozzini, R. Ionicioiu and P. Zanardi, preprint cond-mat/0611727.
[20] H.-Q. Zhou and J. P. Barjaktarevic, preprint cond-mat/0701608.
[21] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information, Cambridge University Press, 2000.
[22] Ö. Legeza and J. Sólyom, Phys. Rev. Lett. 96, 116401 (2006).
[23] H. T. Quan, Z. Song, X. F. Liu, P. Zanardi and C. P. Sun, Phys. Rev. Lett. 96, 140604 (2006).
[24] A. Friedrich, PhD Thesis, RWTH-Aachen, 2006.
[25] Ö. Legeza, F. Gebhard and J. Rissler, Phys. Rev. B 74, 195112 (2006).
[26] G. Vidal, Phys. Rev. Lett. 98, 070201 (2007).
[27] N. J. Higham, Accuracy and Stability of Numerical Algorithms, SIAM, Philadelphia, 2002.
[28] I. Affleck, T. Kennedy, E. H. Lieb and H. Tasaki, Phys. Rev. Lett. 59, 799 (1987).
[29] U. Schollwöck, Rev. Mod. Phys. 77, 259 (2005).
[30] I. P. McCulloch, PhD Thesis, Australian National University, 2001.
[31] F. Verstraete, A. Weichselbaum, U. Schollwöck, J. I. Cirac and J. von Delft, preprint cond-mat/0504305.
[32] A. Friedrich, A. K. Kolezhuk, I. P. McCulloch and U. Schollwöck, Phys. Rev. B 75, 094414 (2007).
[33] I. P. McCulloch, in preparation.
[34] S. R. White, Phys. Rev. B 72, 180403(R) (2005).
[35] J. Dukelsky, M. A. Martín-Delgado, T. Nishino and G. Sierra, Europhys. Lett. 43, 457 (1998).
[36] I. P. McCulloch and M. Gulácsi, Europhys. Lett. 57, 852 (2002).
[37] D. Porras, F. Verstraete and J. I. Cirac, Phys. Rev. B 73, 014410 (2006).
[38] T. D. Kühner and S. R. White, Phys. Rev. B 60, 335 (1999).
[39] E. Jeckelmann, Phys. Rev. B 66, 045114 (2002).
[40] I. P. McCulloch and M. Gulácsi, Phil. Mag. Lett. 81, 447 (2001); I. P. McCulloch and M. Gulácsi, Aust. J. Phys. 53, 597 (2000).
[41] E. P. Wigner, Group Theory and Its Applications to the Quantum Mechanics of Atomic Spectra, Academic Press, New York, 1959.
[42] L. C. Biedenharn and J. D. Louck, Angular Momentum in Quantum Physics, Addison-Wesley, Massachusetts, 1981.
[43] D. A. Varshalovich, A. N. Moskalev and V. K. Khersonskii, Quantum Theory of Angular Momentum, World Scientific, Singapore, 1988.
[44] S. Singh, H.-Q. Zhou and G. Vidal, preprint cond-mat/0701427.
| [] |
[
"Mechanical properties of simple computer glasses",
"Mechanical properties of simple computer glasses"
] | [
"Edan Lerner \nInstitute for Theoretical Physics\nUniversity of Amsterdam\nScience Park 9041098 XHAmsterdamThe Netherlands\n"
] | [
"Institute for Theoretical Physics\nUniversity of Amsterdam\nScience Park 9041098 XHAmsterdamThe Netherlands"
] | [] | Recent advances in computational glass physics enable the study of computer glasses featuring a very wide range of mechanical and kinetic stabilities. The current literature, however, lacks a comprehensive data set against which different computer glass models can be quantitatively compared on the same footing. Here we present a broad study of the mechanical properties of several popular computer glass forming models. We examine how various dimensionless numbers that characterize the glasses' elasticity and elasto-plasticity vary under different conditions -in each model and across models -with the aim of disentangling the model-parameter-, external-parameter-and preparation-protocol-dependences of these observables. We expect our data set to be used as an interpretive tool in future computational studies of elasticity and elasto-plasticity of glassy solids. | 10.1016/j.jnoncrysol.2019.119570 | [
"https://arxiv.org/pdf/1902.08991v1.pdf"
] | 118,944,732 | 1902.08991 | a2afb6bffe91598daf40be44579f5f12f66b1394 |
Mechanical properties of simple computer glasses
Edan Lerner
Institute for Theoretical Physics
University of Amsterdam
Science Park 904, 1098 XH Amsterdam, The Netherlands
Mechanical properties of simple computer glasses
Recent advances in computational glass physics enable the study of computer glasses featuring a very wide range of mechanical and kinetic stabilities. The current literature, however, lacks a comprehensive data set against which different computer glass models can be quantitatively compared on the same footing. Here we present a broad study of the mechanical properties of several popular computer glass forming models. We examine how various dimensionless numbers that characterize the glasses' elasticity and elasto-plasticity vary under different conditions -in each model and across models -with the aim of disentangling the model-parameter-, external-parameter-and preparation-protocol-dependences of these observables. We expect our data set to be used as an interpretive tool in future computational studies of elasticity and elasto-plasticity of glassy solids.
I. INTRODUCTION
Computational studies of glass formation and deformation constitute a substantial fraction of the research conducted in relation to these problems. The attention drawn by this line of work has been on the rise recently due to several methodological developments that allow investigators to create computer glasses with a very broad variation in the degree of their mechanical and kinetic stability. These include the ongoing optimization of GPU-based algorithms [1,2] that now offer the possibility to probe several orders of magnitude in structural relaxation rates in the supercooled liquid regime [3]. Various sampling methods based on generalized statistical ensembles have been shown to yield well-annealed states [4]. In the groundbreaking work of Berthier and coworkers [5], inspired by previous advances [6], a glass-forming model was optimized so as to dramatically increase the efficiency of the Swap Monte Carlo algorithm, allowing the equilibration of supercooled liquids down to unprecedentedly low temperatures while remaining robust against crystallization. In [7] a model and an algorithm were put forward that allow one to create extremely stable computer glasses, albeit via a protocol which is not physical. Mechanical annealing by means of oscillatory shear was also recently shown to be an efficient protocol for creating stable glasses [8]. Finally, numerical realizations of experimental vapor-deposition protocols [9] have shown good success in creating well-annealed glasses [10,11].
This recent proliferation of methods for creating stable computer glasses highlights the need for approaches to meaningfully and quantitatively compare the various glasses created by these methods. In particular, it is important to disentangle the effects of parameter choices -both in the interaction potentials that define computer glass formers and in the external control parameters -from the effects of annealing near and below the models' respective computer glass transition temperatures. In addition, in some cases it is useful to quantitatively assess the effective distance a given computer glass is positioned away from the unjamming point -the loss of rigidity seen e.g. upon decompressing assemblies of repulsive soft spheres [12-14].
This work is aimed at establishing how elastic properties and elasto-plastic responses of simple computer glasses depend on various key external and internal control parameters, how they change between different models, and how they are affected by the preparation protocol of the glasses. In order to disentangle annealing effects from model- and external-parameter dependences, we exploit the observation that creating computer glasses by instantaneous quenches of high-energy states to zero temperature defines an ensemble of configurations whose elastic properties can be meaningfully and quantitatively compared between models and across different parameter regimes.
The existence of the aforementioned ensemble is demonstrated in Fig. 1, where we plot measurements of the sample-to-sample mean athermal shear modulus (see definition below) of underlying inherent states of parent equilibrium configurations (labelled by their equilibrium temperature T_0) of a simple glass-forming model (see details in Sect. II A 4 below). This high-temperature saturation of elastic properties of very poorly annealed glassy states appears to be a generic feature of computer glasses [15,16]. We therefore carry out in what follows a comparative study of elastic properties of different computer glass models created by instantaneous quenches from high-energy states. Our analyses of elastic properties of instantaneously-quenched glasses are compared against the behavior of the same key observables measured in a variant of the glass-forming model introduced in [5] that can be annealed very deeply below the conventional computer glass transition temperature. This allows us to compare the relative protocol- and parameter-induced variation in these key observables.
In the same spirit, we also investigate the elasto-plastic steady state seen upon deforming our instantaneously-quenched glasses using an athermal, quasistatic shear protocol, which eliminates the rate effects associated with finite-deformation-rate and finite-temperature protocols. We anticipate our results to constitute a benchmark for the quantitative assessment of other model glasses in future studies of elasticity, elasto-plasticity and glass formation.
This paper is structured as follows: in Sect. II we spell out the models employed in our study, and list the physical observables that were calculated for those models. Sect. III presents various data sets that characterize the elasticity and elasto-plasticity of the computer glasses we have investigated, and discusses various points of interest and connections to related previous work. Our work is summarized in Sect. IV.
II. MODELS, METHODS AND OBSERVABLES
In this Section we provide details about the model glass formers we employed in our study, and explain the methods used to create glassy samples. We then spell out the definitions of all reported observables.
A. Computer glasses
We have studied 4 model glass formers in d = 3 dimensions. We have created ensembles of 1000 configurations of N = 8000 particles for each model system, and for each value of the respective control parameter (see below).
Inverse-power-law
The inverse-power-law (IPL) model is a 50:50 binary mixture of 'large' and 'small' particles of equal mass m. Pairs of particles i, j at distance r_ij from each other interact via the inverse-power-law pairwise potential
$$\phi_{\rm IPL}(r_{ij}) = \varepsilon\left(\frac{\lambda_{ij}}{r_{ij}}\right)^{\beta}, \qquad (1)$$
where ε is a microscopic energy scale. Distances in this model are measured in terms of the interaction lengthscale λ between two 'small' particles, and the rest are chosen to be λ ij = 1.18λ for one 'small' and one 'large' particle, and λ ij = 1.4λ for two 'large' particles. For computational efficiency, we also employ a variant of the IPL model with a finite cutoff in the pairwise interactions, of the form
$$\phi_{\rm IPL}(r_{ij}) = \begin{cases} \varepsilon\left[\left(\frac{\lambda_{ij}}{r_{ij}}\right)^{\beta} + \sum_{\ell=0}^{q} c_{2\ell}\left(\frac{r_{ij}}{\lambda_{ij}}\right)^{2\ell}\right], & \frac{r_{ij}}{\lambda_{ij}} \le x_c \\ 0, & \frac{r_{ij}}{\lambda_{ij}} > x_c, \end{cases} \qquad (2)$$
where x_c is the dimensionless distance at which $\phi_{\rm IPL}$ vanishes continuously up to q derivatives. We chose the parameters q = 2 and x_c = 1.5. The coefficients $c_{2\ell}$, determined by demanding that $\phi_{\rm IPL}$ vanishes continuously up to q derivatives at x_c, are given by
$$c_{2\ell} = \frac{(-1)^{\ell+1}}{(2q-2\ell)!!\,(2\ell)!!}\,\frac{(\beta+2q)!!}{(\beta-2)!!\,(\beta+2\ell)}\;x_c^{-(\beta+2\ell)}. \qquad (3)$$
The control parameter of interest for this system is the exponent β of the inverse-power-law pairwise interaction. Glassy samples were created by placing N = 8000 particles randomly on a cubic lattice and minimizing the potential energy by a conjugate gradient minimization. The finite-cutoff variant was used for β = 12, for which we set the number density ρ ≡ N/V = 2.0; for β > 12 we set ρ = 0.82, and for β = 4 we set ρ = 10.0. In Sect. III A we motivate these number-density choices.
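As a concrete illustration, the following minimal Python sketch evaluates the smoothed pair potential of Eq. (2) with the coefficients of Eq. (3). The function names are ours, the defaults (q = 2, x_c = 1.5) follow the choices quoted above, and this is not the code used to produce the paper's data.

```python
def double_factorial(n):
    # n!! for non-negative integers; 0!! = 1 by convention.
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result

def ipl_coefficients(beta, q=2, xc=1.5):
    """Coefficients c_{2l}, l = 0..q, of Eq. (3), enforcing that the potential
    vanishes continuously up to q derivatives at the dimensionless cutoff xc."""
    return [(-1) ** (l + 1)
            / (double_factorial(2 * q - 2 * l) * double_factorial(2 * l))
            * double_factorial(beta + 2 * q)
            / (double_factorial(beta - 2) * (beta + 2 * l))
            * xc ** -(beta + 2 * l)
            for l in range(q + 1)]

def phi_ipl(x, beta, q=2, xc=1.5, eps=1.0):
    """Smoothed IPL pair energy of Eq. (2), with x = r_ij / lambda_ij."""
    if x > xc:
        return 0.0
    coeffs = ipl_coefficients(beta, q, xc)
    return eps * (x ** -beta + sum(c * x ** (2 * l) for l, c in enumerate(coeffs)))
```

For q = 0 the formula reduces to a simple shift, c_0 = -x_c^{-β}, which makes the vanishing of the potential at the cutoff easy to verify by hand.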
Hertzian spheres
The Hertzian spheres model (HRTZ) we employ is a 50:50 binary mixture of soft spheres with equal mass m and a 1:1.4 ratio of the radii of small and large particles. The units of length λ are chosen to be the diameter of the small particles, and ε denotes the microscopic units of energy. Pairs of particles whose pairwise distance r ij is smaller than the sum of their radii R i +R j interact via the Hertzian pairwise potential
$$\phi_{\rm Hertz}(r_{ij}, R_i, R_j) = \frac{2\varepsilon}{5\lambda^{5/2}}\big[(R_i + R_j) - r_{ij}\big]^{5/2}, \qquad (4)$$
and $\phi_{\rm Hertz} = 0$ otherwise. In this model we control the imposed pressure; glassy samples at target pressures of p = 10^{-1}, 10^{-2}, 10^{-3}, 10^{-4}, 10^{-5} were created by combining a Berendsen barostat [17] with the FIRE minimization algorithm [18]. Initial states at the highest pressure were created by placing particles randomly on a cubic lattice, followed by minimizing the potential energy. Subsequent lower-pressure glasses were created by changing the target pressure and relaunching the minimization algorithm.
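The two ingredients just described can be sketched as follows; the pair energy implements Eq. (4), while `berendsen_rescale` only illustrates the Berendsen-type idea of relaxing the box volume toward a target pressure (the coupling constant and all names here are illustrative assumptions, not taken from the paper).

```python
def phi_hertz(r, R_i, R_j, eps=1.0, lam=1.0):
    """Hertzian pair energy of Eq. (4); zero when particles do not overlap."""
    overlap = (R_i + R_j) - r
    if overlap <= 0.0:
        return 0.0
    return 2.0 * eps / (5.0 * lam ** 2.5) * overlap ** 2.5

def berendsen_rescale(L, p_measured, p_target, coupling=1e-2):
    """One Berendsen-like barostat step: rescale the box length so the
    volume relaxes toward the target pressure; 'coupling' plays the role
    of dt/tau_p in the original scheme."""
    mu = (1.0 - coupling * (p_target - p_measured)) ** (1.0 / 3.0)
    return L * mu
```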
Kob-Andersen binary Lennard-Jones
We employ a slightly modified variant of the well-studied Kob-Andersen binary Lennard-Jones (KABLJ) glass former [19], which is perhaps the most widely studied computer glass model. Our variant of the KABLJ model is a binary mixture of 80% type-A particles and 20% type-B particles that interact via the pairwise potential
$$\phi_{\rm KABLJ}(r_{ij}) = 4\varepsilon_{ij}\left[\left(\frac{r_{ij}}{\lambda_{ij}}\right)^{-12} - \left(\frac{r_{ij}}{\lambda_{ij}}\right)^{-6} + c_4\left(\frac{r_{ij}}{\lambda_{ij}}\right)^{4} + c_2\left(\frac{r_{ij}}{\lambda_{ij}}\right)^{2} + c_0\right], \qquad (5)$$
if r_ij/λ_ij ≤ 2.5, and $\phi_{\rm KABLJ} = 0$ otherwise. Lengths are expressed in terms of λ_AA; then λ_AB = 4/5 and λ_BB = 22/25. Energies are expressed in terms of ε_AA; then ε_AB = 3/2 and ε_BB = 1/2. Both particle species share the same mass m. The coefficients c_4, c_2 and c_0 are chosen such that $\phi_{\rm KABLJ}$ and its first and second derivatives vanish at r_ij/λ_ij = 5/2. In this model we control the density ρ ≡ N/V, with V denoting the volume.
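The smoothing coefficients follow from three linear conditions at the cutoff; a minimal sketch (our own, assuming the form of Eq. (5)) solves the corresponding 3x3 system:

```python
import numpy as np

def kablj_smoothing_coefficients(xc=2.5):
    """c4, c2, c0 of Eq. (5), fixed by phi(xc) = phi'(xc) = phi''(xc) = 0
    (the overall factor 4*eps_ij drops out of the three conditions)."""
    A = np.array([
        [xc ** 4,        xc ** 2,  1.0],   # value at the cutoff
        [4.0 * xc ** 3,  2.0 * xc, 0.0],   # first derivative
        [12.0 * xc ** 2, 2.0,      0.0],   # second derivative
    ])
    b = -np.array([
        xc ** -12 - xc ** -6,
        -12.0 * xc ** -13 + 6.0 * xc ** -7,
        156.0 * xc ** -14 - 42.0 * xc ** -8,
    ])
    c4, c2, c0 = np.linalg.solve(A, b)
    return c4, c2, c0
```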
Polydisperse soft spheres
The computer glass model we employed is a slightly modified variant of the model put forward in [5]. We enclose N particles of equal mass m in a cubic box of volume V = L^3 with periodic boundary conditions, and associate a size parameter λ_i with each particle, drawn from a distribution p(λ) ∼ λ^{-3}. We only allow λ_min ≤ λ_i ≤ λ_max, with λ ≡ λ_min forming our units of length, and λ_max = 2.22λ. The number density N/V = 0.58λ^{-3} is kept fixed. Pairs of particles interact via the same pairwise interaction given by Eq. (2); we chose the parameters x_c = 1.4, β = 10, and q = 3. The pairwise length parameters λ_ij are given by
$$\lambda_{ij} = \tfrac{1}{2}\,(\lambda_i + \lambda_j)\left(1 - n_a\,|\lambda_i - \lambda_j|\right). \qquad (6)$$
Following [5] we set the non-additivity parameter n_a = 0.1. In what follows, energy is expressed in terms of ε, temperature is expressed in terms of ε/k_B with k_B the Boltzmann constant, and stress, pressure, and elastic moduli are expressed in terms of ε/λ^3. This model is referred to in what follows as POLY.
Ensembles of equilibrium states of the POLY model were created using the Swap Monte Carlo method [5,6]; within this method, trial moves include exchanging (swapping) the size parameters of pairs of particles, in addition to the conventional random displacements of particles. For each temperature we have simulated 50 independent systems of N = 8000 particles, and collected 20 configurations for each system, separated by at least the structural relaxation time (here time is understood as Monte Carlo steps) as measured by the stress autocorrelation function, resulting in equilibrium ensembles of 1000 members for each parent temperature T_0. Ensembles of inherent states were created by performing an instantaneous quench of equilibrium states from each parent temperature by means of a conjugate gradient minimization of the potential energy. We note that since particle size parameters are sampled from a rather broad distribution, and our simulated systems contain only N = 8000 particles, very large finite-size, sampling-induced fluctuations of the equilibrium energy of different systems (which are entirely absent in e.g. binary systems such as the KABLJ) can occur; a description of how we reduced these fluctuations -which can affect various fluctuation measures described in what follows -is provided in Appendix A.
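For concreteness, the size-parameter sampling and the non-additive pairwise length of Eq. (6) can be sketched as follows (inverse-CDF sampling of p(λ) ∼ λ^{-3}; the helper names and the use of NumPy are our own assumptions):

```python
import numpy as np

def sample_sizes(N, lam_min=1.0, lam_max=2.22, rng=None):
    """Draw N size parameters from p(lambda) ~ lambda^(-3) on
    [lam_min, lam_max] by inverting the cumulative distribution."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random(N)
    a, b = lam_min ** -2, lam_max ** -2
    return (a - u * (a - b)) ** -0.5

def pair_length(lam_i, lam_j, n_a=0.1):
    """Non-additive pairwise length scale of Eq. (6)."""
    return 0.5 * (lam_i + lam_j) * (1.0 - n_a * abs(lam_i - lam_j))
```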
B. Observables
In what follows we will denote by x_i the 3-dimensional coordinate vector of the i-th particle; then x_ij ≡ x_j − x_i is the vector distance between the i-th and j-th particles, and $r_{ij} \equiv \sqrt{x_{ij}\cdot x_{ij}}$ is the pairwise distance between them. We also omit the explicit mention of dimensional observables' units, for the sake of simplifying our notation; those observables should be understood as expressed in the appropriate microscopic units. In all computer glasses considered in this work, pairs of particles (i, j) interact via a radially-symmetric pairwise potential $\phi_{ij} = \phi_{ij}(r_{ij})$, and the potential energy reads
$$U = \sum_{i<j}\phi_{ij}. \qquad (7)$$
The (simple shear) stress in athermal glasses is given by
$$\sigma = \frac{1}{V}\,\frac{\partial U}{\partial \gamma}. \qquad (8)$$
We also consider the shear and bulk moduli, defined as
$$G = \frac{1}{V}\left[\frac{\partial^2 U}{\partial \gamma^2} - \frac{\partial^2 U}{\partial \gamma\,\partial x}\cdot M^{-1}\cdot\frac{\partial^2 U}{\partial x\,\partial \gamma}\right], \qquad (9)$$
and
$$K = \frac{1}{d^2 V}\left[\frac{\partial^2 U}{\partial \eta^2} - \frac{\partial^2 U}{\partial \eta\,\partial x}\cdot M^{-1}\cdot\frac{\partial^2 U}{\partial x\,\partial \eta}\right] + p, \qquad (10)$$
respectively, where the pressure p is given by

$$p = -\frac{1}{Vd}\,\frac{\partial U}{\partial \eta},$$

$M \equiv \frac{\partial^2 U}{\partial x\,\partial x}$ is the Hessian matrix, and η, γ parametrize the strain tensor

$$\epsilon = \frac{1}{2}\begin{pmatrix} 2\eta+\eta^2 & \gamma+\gamma\eta & 0 \\ \gamma+\gamma\eta & 2\eta+\eta^2+\gamma^2 & 0 \\ 0 & 0 & 2\eta+\eta^2 \end{pmatrix}. \qquad (11)$$

To quantify the effect of nonaffinity on the bulk modulus K, we also consider the nonaffine term alone (cf. Eq. (10)), namely

$$K_{\rm na} \equiv \frac{1}{d^2 V}\,\frac{\partial^2 U}{\partial \eta\,\partial x}\cdot M^{-1}\cdot\frac{\partial^2 U}{\partial x\,\partial \eta}. \qquad (12)$$
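Numerically, the nonaffine corrections in Eqs. (9), (10) and (12) amount to a single linear solve against the Hessian. A minimal sketch, assuming the bare strain derivatives and the Hessian have been assembled elsewhere, and that zero modes from global translations are regularized or projected out before solving:

```python
import numpy as np

def elastic_modulus(d2U_dstrain2, Xi, M, volume, d=3, bulk=False, p=0.0):
    """Eq. (9) for bulk=False (strain = gamma) and Eq. (10) for bulk=True
    (strain = eta); Xi is the mixed derivative d2U/(dstrain dx), M the
    Hessian matrix."""
    nonaffine = Xi @ np.linalg.solve(M, Xi)   # Xi . M^{-1} . Xi
    if bulk:
        return (d2U_dstrain2 - nonaffine) / (d ** 2 * volume) + p
    return (d2U_dstrain2 - nonaffine) / volume
```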
The Poisson's ratio is given by
$$\nu \equiv \frac{3K - 2G}{6K + 2G} = \frac{3 - 2G/K}{6 + 2G/K}. \qquad (13)$$
For every model studied in what follows, we also consider an "unstressed" potential energy

$$\mathcal{U} = \frac{1}{2}\sum_{i<j}\phi''_{ij}\,\big(r_{ij} - r^{(0)}_{ij}\big)^2, \qquad (14)$$

where $\phi''_{ij}$ is the second derivative of the i,j interaction of the original potential, and $r^{(0)}_{ij}$ is the distance between the i-th and j-th particles in the mechanical equilibrium state ∂U/∂x = 0 of the original potential. The potential $\mathcal{U}$ can be understood as obtained by replacing the original interactions by Hookean springs whose stiffnesses are inherited from the original interaction potential, and that reside exactly at their rest lengths $r^{(0)}_{ij}$, so that the springs exert no forces on the particles. The observable we focus on is then the shear modulus $\mathcal{G} \equiv V^{-1}\,d^2\mathcal{U}/d\gamma^2$ of the unstressed potential.
III. RESULTS
Here we present the various data sets of dimensionless observables that describe the mechanical properties of the glasses of different models and control parameters discussed in the previous Section.
A. Poisson's ratio
We begin by presenting data for the Poisson's ratio ν (see the definition in Eq. (13)), which is a conventional dimensionless characterizer of the elastic properties of solids [20,21], whether glassy [22] or crystalline [23]. Fig. 2(a)-(d) shows the sample-to-sample means of the Poisson's ratio measured in our ensembles of the model glasses studied. To gain insight into the behavior of ν, we also plot in panels (e)-(h) the ratio G/K (cf. Eq. (13)) of the sample-to-sample means of the shear and bulk moduli. Fig. 2a shows the Poisson's ratio of the IPL model; we observe an interesting non-monotonic behavior of ν as a function of the exponent β that characterizes the pairwise interaction. A corresponding non-monotonic behavior of the ratio G/K is also observed (Fig. 2e); the decrease of G/K at large β is expected: in previous work [24] it was shown that increasing β is akin to approaching the unjamming point of repulsive soft spheres [12-14]. In [24] it is shown that G/K is expected to vanish as 1/√β, represented in Fig. 2e by the dashed line.
The decrease of G/K at small β seen in Fig. 2e, which in turn leads to an increase in ν for small β, is, however, unexpected. While a careful investigation of the decrease of G/K for small β is left for future work, we postulate its origin to lie in the increasing relative importance of higher-coordination-shell interactions with decreasing β.
FIG. 3. Interaction-cutoff-induced density dependence of the Poisson's ratio ν and the shear-to-bulk-modulus ratio G/K for the IPL model; see Sect. II A 1 for details. The dashed and continuous horizontal lines represent the values of ν and G/K for β = 8 and β = 6, respectively, measured in the IPL model with no interaction cutoff. For β ≥ 12 we see no appreciable dependence on density, while for β < 12 a measurable dependence is observed; see text for discussion.
The importance of higher coordination shells for small β can be appreciated by considering the density dependence of the Poisson's ratio ν and the ratio G/K for the IPL model with a finite interaction cutoff length (see Sect. II A 1), shown in Fig. 3. We note that the inverse-power-law interactions of the IPL model should imply the invariance of any dimensionless numbers to changes of the density [25]. However, upon introducing a cutoff (for computational efficiency) in the pairwise interactions, the invariance becomes only approximate. In particular, for β < 12 a measurable dependence of G/K (and therefore also of ν) is seen over a broad range of densities, indicating the greater importance of higher coordination shells for small β in the IPL model. We note that for β = 4 some 10% of the solids we created without cutting off the potential were unstable (their Hessian matrix possesses at least one negative eigenvalue). We therefore use a finite interaction cutoff and fix ρ = 10.0 to obtain approximations for ν and G/K for β = 4, which are represented by the open symbols in Fig. 2a,e. Fig. 2b shows the Poisson's ratio ν measured in the HRTZ system, plotted against the imposed pressure p. As p → 0, it appears that the incompressible limit ν = 1/2 is approached. As expected, this is a consequence of the aforementioned vanishing of the ratio G/K upon approaching the unjamming point, as indeed seen in Fig. 2f. It is known [12] that in the HRTZ model G ∼ p^{2/3} and K ∼ p^{1/3}, and so one expects G/K ∼ p^{1/3}, represented by the dashed line in Fig. 2f. Interestingly, in both the HRTZ model and the IPL model, it appears that the onset of the scaling regime takes place at G/K ≈ 0.1. Fig. 2c shows the Poisson's ratio ν measured in the KABLJ system, plotted against the density ρ. Here we see that the large-density ν agrees with the IPL results for β ≈ 12; indeed, one expects the repulsive part of the KABLJ pairwise potential to dominate the mechanics at high densities [26]. At lower densities, the attractive part of the pairwise interactions of the KABLJ model starts to play an increasingly important role, leading to a plummet of ν as the density approaches unity. This sharp decrease is echoed by a sharp increase in G/K, seen in Fig. 2g. To better understand these observations in the KABLJ data, we plot in Fig. 4a the ratio of the pressure to the bulk modulus of the KABLJ systems vs. the density. As expected, the pressure decreases with decreasing density, and appears to vanish a bit below ρ = 1.2 [27]. Accompanying the vanishing of the pressure is a substantial increase in the nonaffine nature of displacements under compressive strains, which we quantify via the nonaffine contribution to the bulk modulus, K_na, defined in Eq. (12). Fig. 4b shows that the relative fraction that K_na amounts to in the bulk modulus grows from nearly zero at ρ ≥ 2.0 to about 13% at ρ = 1.15. This increase in the nonaffine contribution to the moduli, together with the contribution of the negative pressure (cf. Eq. (10)), can explain most of the increase of G/K, and the corresponding decrease of the Poisson's ratio at low densities, in the KABLJ model.
Finally, in Fig. 2d we show the Poisson's ratio measured in the POLY system, plotted against the equilibrium parent temperature T_0 from which the ensembles of glasses were quenched. Annealing at the lowest temperature leads to a decrease of slightly more than 8% in ν. In terms of the ratio G/K, we observe an annealing-induced increase of over 55% above the high-T_0 plateau. For comparison, in [28] an increase of nearly 20% in G/K was observed by varying the quench rate of a model Cu_64Zr_36 metallic glass over two orders of magnitude, with an associated increase of ≈ 3.5% in the Poisson's ratio, whose typical values were found around ν = 0.41. We note that typical values of the Poisson's ratio of metallic glasses range between 0.3 and 0.4 [22,28], i.e. mostly lower than what we observe in our simple models, with the exception of the KABLJ model, discussed at length above. We attribute the higher values of ν seen in our models that feature inverse-power-law pairwise interactions (i.e. the IPL model, and the KABLJ model at high densities) to the relative smallness of the nonaffine term in the bulk modulus. This relative smallness results in relatively larger bulk moduli (compared to shear moduli) and, in turn, in higher Poisson's ratios. Laboratory glasses experience a significant degree of annealing upon preparation, which would further reduce their Poisson's ratio, as suggested by our measurements of the POLY system shown in Fig. 2h.
B. Degree of internal stresses
One of the hallmark features of glasses is their structural frustration. How can the degree of structural frustration of different computer glasses be compared? Here we propose to compare different simple computer glasses quantitatively via the following observable: consider a glassy sample comprised of N particles; consider next replacing the fixed-shape box in which the glass is confined by a box that can undergo simple shear deformation, and consider fixing the imposed shear stress (instead of the box shape) at zero. Under these conditions, the internal residual stresses of the glass would lead to some shear deformation δγ of the box, which can be estimated as δγ ≈ σ/G, where σ is the as-cast shear stress of the original glass. Since δγ decays with system size N as 1/√N (it is N^{-1} times a sum of O(N) random contributions; see Appendix B for numerical validation), we thus form a dimensionless characterization of glassy structural frustration by
$$\widetilde{\delta\sigma} \equiv \sqrt{N}\,\delta\gamma = \sqrt{N}\,\delta\sigma/G, \qquad (15)$$
where δσ denotes the sample-to-sample standard deviation of the residual stress, and G the sample-to-sample mean shear modulus.
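In terms of ensemble arrays this measure is a one-liner; a sketch with our own helper names:

```python
import numpy as np

def frustration_measure(stresses, shear_moduli, N):
    """Dimensionless frustration of Eq. (15), from per-sample as-cast
    shear stresses and shear moduli."""
    return np.sqrt(N) * np.std(stresses) / np.mean(shear_moduli)
```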
In Fig. 5 we show $\widetilde{\delta\sigma}$ measured in our ensembles of glasses. Interestingly, in the IPL and HRTZ models we see that $\widetilde{\delta\sigma}$ tends to decrease upon approaching the unjamming point by increasing β (for IPL) or by decreasing the pressure (for HRTZ), respectively. In contrast with our observations for e.g. the Poisson's ratio shown in Fig. 2, no non-monotonic behavior in $\widetilde{\delta\sigma}$ is observed in the IPL model. At β ≳ 16, $\widetilde{\delta\sigma}$ appears to vary logarithmically with β.
The KABLJ and POLY models appear to agree at high densities and high T_0, respectively, showing $\widetilde{\delta\sigma}$ ≈ 0.22 in those regimes. The POLY system exhibits a significant reduction of $\widetilde{\delta\sigma}$ upon annealing (i.e., for lower T_0), of up to roughly 40% below the high-T_0 plateau value.
C. Shear modulus fluctuations
We next turn to characterizing the degree of mechanical disorder of our simple computer glasses. We propose to quantify the mechanical disorder of a given ensemble of computer glasses by first measuring

$$\Delta G \equiv \sqrt{\underset{i}{\rm median}\,\big(G_i - \bar{G}\big)^2}, \qquad (16)$$

where the median is taken over the ensemble of glasses, and $\bar{G}$ denotes the sample-to-sample mean shear modulus. In Appendix B we demonstrate that, as expected for an intensive variable (see also [29]), ∆G ∼ 1/√N. A dimensionless and N-independent quantifier of disorder is therefore given by

$$\widetilde{\Delta G} \equiv \sqrt{N}\,\Delta G/\bar{G}. \qquad (17)$$
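A sketch of this median-based quantifier (again with illustrative names); the median rather than the variance is used because p(G) develops a fat low-G tail at small N, as discussed below:

```python
import numpy as np

def disorder_measure(shear_moduli, N):
    """Dimensionless mechanical-disorder quantifier of Eqs. (16)-(17)."""
    G_mean = np.mean(shear_moduli)
    dG = np.sqrt(np.median((shear_moduli - G_mean) ** 2))
    return np.sqrt(N) * dG / G_mean
```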
In Fig. 6 we plot $\widetilde{\Delta G}$ for our different computer glasses. We find that $\widetilde{\Delta G}$ grows substantially in the IPL and HRTZ models as the respective unjamming points are approached, suggesting that $\widetilde{\Delta G} \to \infty$ upon approaching unjamming.
While $\widetilde{\Delta G}$ remains essentially constant at ≈ 2.5 over the entire density range in the KABLJ model, in the POLY model we find a very substantial decrease of $\widetilde{\Delta G}$ as a function of the parent temperature T_0, by over a factor of 3. The noise in our data is quite substantial; we nevertheless speculate, based on our data, that the variation rate $d\widetilde{\Delta G}/dT_0$ changes nonmonotonically with decreasing T_0, namely that the decrease in $\widetilde{\Delta G}$ slows down at low T_0. An interesting question to address in future studies is a possible relation between this nonmonotonicity with temperature and that reported in [3] for thermal activation barriers in deeply supercooled computer liquids. The reason we choose to measure the median of fluctuations instead of considering the more conventional standard deviation is that for small N the distribution p(G) of the shear modulus can feature a large tail at low values. This is demonstrated in Fig. 7, where we show the distribution of shear moduli measured in the IPL model for glasses of N = 2000 particles that were instantaneously quenched from high-temperature states. Fig. 7b shows that the low-G tail is substantial, leading to a large discrepancy between the full width at half maximum of the distribution of G and its standard deviation. To overcome this discrepancy we opt for a measure which is based on the (square root of the) median of fluctuations rather than their mean. We note, however, that the large tail of p(G) at low values of G is expected to disappear as the system size is increased [29].
D. Effect of internal stresses on shear modulus
We conclude our study of the elastic properties of our computer glasses by presenting and discussing the effect of internal stresses on the shear modulus. To this aim we recall Eq. (14), which defines a modified potential energy $\mathcal{U}$, constructed from the original potential energy U by connecting a relaxed Hookean spring between all pairs of interacting particles, with stiffnesses $\kappa = \phi''|_{r_{ij}}$ adopted from the original pairwise potentials φ. An associated shear modulus $\mathcal{G}$ is then defined as $V^{-1}\,d^2\mathcal{U}/d\gamma^2$. In previous work [30] it has been shown using mean-field calculations that $G/\mathcal{G}$ indicates the distance of a system from an internal-stress-induced elastic instability. It is predicted in [30] that $G/\mathcal{G} \approx 1/2$ in marginally-stable states with harmonic pairwise interactions, and $G/\mathcal{G} > 1/2$ as glass stability increases. The ratio $G/\mathcal{G}$ can also depend on statistical properties of interparticle interactions, as discussed in [31].
In Fig. 8 we show measurements of $G/\mathcal{G}$ in our different computer glasses. In the IPL model we find that $G/\mathcal{G} < 1/2$ over the entire β range, but it approaches 1/2 in the large-β limit at which the system unjams. In the HRTZ system we find $G/\mathcal{G} \approx 1/2$ over most of the investigated pressure range, with a slight decrease at high pressures. The KABLJ system shows that $G/\mathcal{G}$ can attain high values in the low-density regime in which attractive interactions become dominant, and, similarly to what we have seen above, at large densities it agrees well with the β ≈ 12 result for $G/\mathcal{G}$ of the IPL model. Finally, in the POLY system at high T_0, $G/\mathcal{G}$ agrees well with the IPL model for β ≈ 12, as expected. Equilibration deep into the supercooled regime increases $G/\mathcal{G}$ by nearly 50%, bringing it to ≈ 1/2 at the deepest supercooling.
E. Yield stress
Up until this point we have only discussed various dimensionless characterizations of the elastic properties of our computer glasses. In this last Subsection we present results regarding the simple-shear yield stress of a subset of the models we have investigated, measured in athermal quasistatic plastic-flow simulations. In particular, we exclude the POLY model from this analysis; its elastoplastic transient behavior was characterized in detail in [32], and its steady-flow stress (referred to here as the yield stress) is expected to be independent of the key control parameter of the POLY model -the parent temperature T_0.
We employ the standard procedure for driving our glasses under athermal quasistatic deformation: the simulations consist of repeatedly applying a simple-shear deformation transformation (we use strain steps of ∆γ = 10^{-3}), followed by a potential energy minimization under Lees-Edwards boundary conditions [33]. As explained in Sect. II A 2, simulations of the HRTZ model involved embedding a barostat functionality [17] into our minimization algorithm, in order to keep the pressure approximately constant during the deformation simulations; see further discussion in Appendix C.
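Schematically, one AQS run looks as follows; `minimize` and `measure_stress` stand in for an actual minimizer (e.g. FIRE or conjugate gradient, with Lees-Edwards images consistent with the accumulated strain) and for Eq. (8). Both callables are assumptions of this sketch, not the paper's code.

```python
def aqs_run(positions, gamma_max, minimize, measure_stress, dgamma=1e-3):
    """Athermal quasistatic shear: affine increment, then relaxation.
    'positions' is an (N, 3) array; x is sheared along y."""
    gamma, stress_strain = 0.0, []
    while gamma < gamma_max:
        positions[:, 0] += dgamma * positions[:, 1]  # affine simple-shear step
        gamma += dgamma                              # Lees-Edwards offset grows accordingly
        positions = minimize(positions, gamma)       # relax to the nearest minimum
        stress_strain.append((gamma, measure_stress(positions, gamma)))
    return stress_strain
```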
In Fig. 9 we present the average yield stress σ∞, defined here as the average steady-flow stress, taken after the initial elastoplastic transients, rescaled by the average shear modulus G of isotropic states. Each point is obtained by averaging the steady-flow shear stress over 200 independent runs of each computer glass model, and for each control-parameter value.
We find that in the IPL and HRTZ models σ∞/G decreases upon approaching their respective unjamming points β → ∞ and p → 0. In the IPL model, at large β, σ∞/G appears to vary logarithmically with β; understanding this behavior is left for future investigations. In the HRTZ model one expects σ∞ ∼ p and G ∼ p^{1/3} (it should scale with pressure similarly to the bulk modulus K of isotropic, as-cast states, see [34]); then σ∞/G ∼ p^{2/3} is predicted. We cannot, however, confirm this prediction numerically; we postulate that the pressure range explored is not sufficiently close to the unjamming point to observe the asymptotic scaling. Finally, the KABLJ model features σ∞/G ≈ 0.038 over the majority of the explored density range, with a slight increase as attractive forces become more dominant at low densities.
How do these numbers compare to more realistic computer glasses? In [28] values of around σ∞/G ≈ 0.05 were reported for a model Cu_64Zr_36 metallic glass that employs the embedded atom method [35], i.e. some 30% higher than what we find in e.g. the KABLJ model. Similar results were also found in [36] for model Cu_50Zr_50 and Cu_47.5Zr_47.5Al_5 metallic glasses. In [37] a value of σ∞/G ≈ 0.03 was observed using the Stillinger-Weber model for amorphous silicon [38]. A value of σ∞/G ≈ 0.11 can be estimated based on the stress-strain signals reported in [39] for computer models of sodium silicate glasses that employ the van Beest-Kramer-van Santen potential [40]. The spread in these values indicates that the simple computer models investigated in this work represent only a narrow class of amorphous solids.
IV. SUMMARY
The goal of this paper is to offer a comprehensive data set that compares -on the same footing -various dimensionless quantifiers of elastic and elasto-plastic properties of popular computer glass models. We build on the assertion that instantaneously quenching high-energy
configurations to zero temperature defines an ensemble of glassy samples that can be meaningfully compared between different models. We aimed at disentangling the effects on mechanical properties of various features of the interaction potentials that define computer glass models from those induced by varying external control parameters and preparation protocols. We hope that the various data sets and dimensionless observables put forward in this work will be used as a benchmark for future studies, allowing one to meaningfully compare the mechanical properties of different computer glass models. In addition to putting forward our various analyses of mechanical properties of computer glasses, we have also made a few new observations, summarized briefly here: we have identified an interesting nonmonotonicity in the Poisson's ratio in the IPL model (see Fig. 2) as a function of the exponent β of the inverse-power-law interactions. The shear-to-bulk moduli ratio G/K echoes this nonmonotonicity: G/K decreases dramatically as β is made small, in addition to its expected decrease at large β -the limit at which the IPL model experiences an unjamming transition [24]. The decrease of G/K at low β indicates the proximity of an elastic instability which, to the best of our knowledge, has not been addressed in previous literature. The numerical difficulties we encountered in our attempts to create glasses with β = 4 without using a truncated pairwise potential further support that the non-truncated IPL model becomes unstable at low β.
Importantly, we have shown that the KABLJ model features a Poisson's ratio that resembles that of laboratory metallic glasses and, at densities of order unity, is generally lower than that seen for the purely repulsive and isomorph-invariant [25] IPL model; our study indicates that the increased nonaffinity of the bulk modulus at low pressures plays an important role in determining the Poisson's ratio in the KABLJ model.
We offered a dimensionless quantifier of internal glassy frustration, $\widetilde{\delta\sigma}$, shown to decrease by up to 40% in well-annealed glasses compared to poorly annealed glasses.
Even more remarkable is the annealing-induced variation in the sample-to-sample relative fluctuations of the shear modulus, $\widetilde{\Delta G}$ (cf. Eq. (17) and Fig. 6d), which decrease by over a factor of 3 between poorly annealed and well-annealed glasses. Finally, an intriguing nonmonotonic behavior of $d\widetilde{\Delta G}/dT_0$ with the equilibrium parent temperature T_0 was also observed.
An observable inaccessible experimentally but easily measured numerically is the ratio $G/\mathcal{G}$ of the shear modulus G to that obtained by removing the internal forces between particles, denoted here and above by $\mathcal{G}$. A similar procedure was carried out in previous work in the context of the vibrational spectrum of glasses [30,41,42], and for the investigation of the lengthscale associated with the unjamming point [43]. In theoretical work [30,31] some trends are predicted for $G/\mathcal{G}$; however, since it varies both with stability and depends on details of the interaction potential, its usefulness as a characterizer of the stability of a computer glass appears to be limited.

ACKNOWLEDGMENTS

We warmly thank Eran Bouchbinder, Geert Kapteijns and Eric DeGiuli for discussions and for their useful comments on the manuscript. Support from the Netherlands Organisation for Scientific Research (NWO) (Vidi grant no. 680-47-554/3259) is acknowledged.

Appendix A: Sample-to-sample realization fluctuations

The POLY model employed in this work considers soft spheres with polydispersed size parameters, which are drawn from a distribution p(λ) ∼ λ^{-3} sampled between λ_min and λ_max [5], see Sect. II A 4. Following [5], we chose λ_max/λ_min = 2.22; this choice can lead to large fluctuations between the energetic and elastic properties of different finite-size samples. To demonstrate this, we show in Fig. 10a the distribution of the mean energies (per particle), calculated over 1000 independent equilibrium runs at T = 0.6, each run pertaining to a different, independent realization of the particle-size parameters λ_i drawn from the same parent distribution p(λ), and with N = 8000 particles. The mean (over realizations) standard deviation of the energy per particle of individual runs was found to be ≈ 0.01, whereas the standard deviation (over realizations) of the mean energy per particle is ≈ 0.16, i.e. much larger than the characteristic energy-per-particle fluctuations of any given realization of particle-size parameters, for N = 8000.

In order to minimize the effects of these finite-size fluctuations, we selected the particular realizations whose mean equilibrium energy deviated from the mean over realizations (measured here to be ≈ 6.114) by less than 0.5%, and discarded the rest. To test whether this selection protocol has any observable effect on the distribution of particle size parameters, in Fig. 10b we plot the distribution of particle size parameters measured only in the selected states. We find no observable effect on the distribution of particle size parameters of discarding the realizations with too large or too small energies, as described above.

Appendix B: System size scaling of fluctuations

In Sections III B and III C we define two dimensionless measures of elastic properties of glasses: $\widetilde{\delta\sigma} \equiv \sqrt{N}\,\delta\sigma/G$ and $\widetilde{\Delta G} \equiv \sqrt{N}\,\Delta G/G$, respectively, where δσ denotes the standard deviation of the as-cast shear stress σ, and ∆G is a measure of fluctuations that follows the definition given by Eq. (16). To establish that $\widetilde{\delta\sigma}$ and $\widetilde{\Delta G}$ are independent of system size N, in Fig. 11a we plot δσ vs. system size N, and in Fig. 11b we plot ∆G vs. N. The model glass employed is the IPL model with β = 10 [15]. As asserted, both of these observables depend on system size as 1/√N, implying the N-independence of $\widetilde{\delta\sigma}$ and $\widetilde{\Delta G}$.

Appendix C: Athermal quasistatic simulations of the HRTZ model at fixed external pressure

The key control parameter of the HRTZ model is the external pressure p; when creating glassy samples of this model, we incorporated a numerical scheme [17] that allows us to specify the desired target pressure into our potential-energy minimization algorithm. While this scheme does not fix the pressure exactly, it is sufficiently accurate for our purposes. The performance of the fixed-pressure protocol in our quasistatic shear simulations can be gleaned from the example signals shown in Fig. 12.
FIG. 1. (a) Sample-to-sample mean athermal shear modulus G measured in inherent states that underlie liquid states at equilibrium parent temperatures T_0 of the POLY model; see Sect. II A 4 for model details. The vertical line approximates the crossover temperature above which several elastic properties saturate. (b) Sample-to-sample mean inherent-state potential energy per particle U/N. Interestingly, while G saturates above the crossover temperature, U/N does not. In this work we focus on several observables that feature a saturation as seen for G in panel (a).
FIG. 2. (a)-(d) Sample-to-sample mean Poisson's ratio measured in our different models and ensembles of glassy solids. Here and in the following figures, panel (a) shows data for the IPL model, panel (b) for the HRTZ model, panel (c) for the KABLJ model, and panel (d) for the POLY model. The bottom row shows the shear-to-bulk moduli ratio G/K. The dashed lines in panels (e) and (f) represent the scaling laws expected upon approaching the unjamming point, while the open symbols in panels (a) and (e) (as well as in the IPL panels of the following figures) represent approximations obtained with a finite interaction cutoff; see text for discussion.
FIG. 4. (a) Pressure-to-bulk-modulus ratio vs. the density, for the KABLJ system. (b) The relative fraction of the nonaffine term of the bulk modulus; see text for definitions and discussion.
FIG. 5. Sample-to-sample standard deviations of the shear stress δσ, scaled by √N/G, with G the mean shear modulus.
FIG. 6. $\widetilde{\Delta G} \equiv \sqrt{N}\,\Delta G/G$ is an N-independent dimensionless quantifier of mechanical disorder, defined via Eqs. (16) and (17), and motivated in Sect. III C. The p = 10^{-5} data point, for which $\widetilde{\Delta G}$ = 11.43, was omitted for visual purposes.
FIG. 7. (a) The dotted line represents the probability distribution function p(G) of the shear modulus of 20,000 computer glasses of N = 2000 particles of the IPL model with β = 10, made by an instantaneous quench from a high-temperature liquid state. The continuous line is a fit to a Gaussian, which demonstrates the asymmetry of p(G) about its mean. To better quantify the low-value tail of p(G), in panel (b) we plot with a dotted line the cumulative distribution $\int^{G} p(G')\,dG'$; the continuous line represents the cumulative distribution associated with the Gaussian fit of panel (a), shown for comparison.
FIG. 8. The ratio $G/\mathcal{G}$ of the mean shear modulus to the mean unstressed shear modulus (see Sect. II B for definitions). Lower $G/\mathcal{G}$ indicates lower stability and an increasing role played by the interparticle forces in determining shear moduli.
FIG. 9. Yield stress σ∞ rescaled by the mean isotropic, as-cast shear modulus G. We reiterate that the open symbol in panel (a) represents an approximation obtained using the finite-cutoff variant of the IPL pairwise potential; see Sect. II A 1 for details. We further note that data points for p = 10^{-5} in the HRTZ model and β = 256 in the IPL model could not be measured due to numerical convergence difficulties.
FIG. 10. (a) Distribution of mean energy per particle, measured for 1000 independent realizations equilibrated at T = 0.60. (b) Distribution of post-selection particle size parameters; see text for details.
FIG. 11. (a) Sample-to-sample standard deviations δσ of the as-cast shear stress, plotted against system size N. (b) The measure ∆G (cf. Eq. (16)) vs. system size N. Both measures of fluctuations decay as 1/√N.
FIG. 12. (a) Stress vs. strain measured in a quasistatic shear-deformation simulation of the HRTZ model at constant external pressure p = 10^{-2}. (b) Pressure vs. strain in the same run shown in (a). Small fluctuations of less than 1% are still observed; our numerical scheme does not fix the pressure exactly, but rather only approximately.
[1] J. Glaser, T. D. Nguyen, J. A. Anderson, P. Lui, F. Spiga, J. A. Millan, D. C. Morse, and S. C. Glotzer, Comput. Phys. Commun. 192, 97 (2015).
[2] N. P. Bailey, T. S. Ingebrigtsen, J. S. Hansen, A. A. Veldhorst, L. Bøhling, C. A. Lemarchand, A. E. Olsen, A. K. Bacher, L. Costigliola, U. R. Pedersen, H. Larsen, J. C. Dyre, and T. B. Schrøder, SciPost Phys. 3, 038 (2017).
[3] D. Coslovich, M. Ozawa, and W. Kob, Eur. Phys. J. E 41, 62 (2018).
[4] F. Turci, C. P. Royall, and T. Speck, Phys. Rev. X 7, 031028 (2017).
[5] A. Ninarello, L. Berthier, and D. Coslovich, Phys. Rev. X 7, 021039 (2017).
[6] R. Gutiérrez, S. Karmakar, Y. G. Pollack, and I. Procaccia, Europhys. Lett. 111, 56009 (2015).
[7] G. Kapteijns, W. Ji, C. Brito, M. Wyart, and E. Lerner, Phys. Rev. E 99, 012106 (2019).
[8] P. Das, A. D. Parmar, and S. Sastry, arXiv preprint arXiv:1805.12476 (2018).
[9] M. D. Ediger, J. Chem. Phys. 147, 210901 (2017).
[10] S. Singh, M. D. Ediger, and J. J. de Pablo, Nat. Mater. 12, 139 (2013).
[11] L. Berthier, P. Charbonneau, E. Flenner, and F. Zamponi, Phys. Rev. Lett. 119, 188002 (2017).
[12] C. S. O'Hern, L. E. Silbert, A. J. Liu, and S. R. Nagel, Phys. Rev. E 68, 011306 (2003).
[13] M. van Hecke, J. Phys.: Condens. Matter 22, 033101 (2010).
[14] A. J. Liu and S. R. Nagel, Annu. Rev. Condens. Matter Phys. 1, 347 (2010).
[15] E. Lerner and E. Bouchbinder, J. Chem. Phys. 148, 214502 (2018).
[16] L. Wang, A. Ninarello, P. Guan, L. Berthier, G. Szamel, and E. Flenner, Nat. Commun. 10, 26 (2019).
[17] H. J. C. Berendsen, J. P. M. Postma, W. F. van Gunsteren, A. DiNola, and J. R. Haak, J. Chem. Phys. 81, 3684 (1984).
[18] E. Bitzek, P. Koskinen, F. Gähler, M. Moseler, and P. Gumbsch, Phys. Rev. Lett. 97, 170201 (2006).
[19] W. Kob and H. C. Andersen, Phys. Rev. E 51, 4626 (1995).
[20] G. N. Greaves, A. L. Greer, R. S. Lakes, and T. Rouxel, Nat. Mater. 10, 823 (2011).
[21] K. K. Saxena, R. Das, and E. P. Calius, Adv. Eng. Mater. 18, 1847 (2016).
[22] W. H. Wang, J. Appl. Phys. 110, 053521 (2011).
[23] R. H. Baughman, J. M. Shacklette, A. A. Zakhidov, and S. Stafström, Nature 392, 362 (1998).
[24] S. Kooij and E. Lerner, Phys. Rev. E 95, 062141 (2017).
[25] J. C. Dyre, J. Phys.: Condens. Matter 28, 323001 (2016).
[26] T. B. Schrøder, N. Gnan, U. R. Pedersen, N. P. Bailey, and J. C. Dyre, J. Chem. Phys. 134, 164505 (2011).
[27] S. Sastry, Phys. Rev. Lett. 85, 590 (2000).
[28] Y. Cheng, A. Cao, and E. Ma, Acta Mater. 57, 3253 (2009).
[29] H. G. E. Hentschel, S. Karmakar, E. Lerner, and I. Procaccia, Phys. Rev. E 83, 061101 (2011).
[30] E. DeGiuli, A. Laversanne-Finot, G. During, E. Lerner, and M. Wyart, Soft Matter 10, 5628 (2014).
[31] E. DeGiuli, E. Lerner, C. Brito, and M. Wyart, Proc. Natl. Acad. Sci. U.S.A. 111, 17054 (2014).
[32] M. Ozawa, L. Berthier, G. Biroli, A. Rosso, and G. Tarjus, Proc. Natl. Acad. Sci. U.S.A. 115, 6656 (2018).
[33] M. P. Allen and D. J. Tildesley, Computer Simulation of Liquids (Oxford University Press, 1989).
[34] M. Baity-Jesi, C. P. Goodrich, A. J. Liu, S. R. Nagel, and J. P. Sethna, J. Stat. Phys. 167, 735 (2017).
[35] M. S. Daw and M. I. Baskes, Phys. Rev. Lett. 50, 1285 (1983).
[36] B. Wang, L. Luo, E. Guo, Y. Su, M. Wang, R. O. Ritchie, F. Dong, L. Wang, J. Guo, and H. Fu, NPJ Comput. Mater. 4, 41 (2018).
[37] M. J. Demkowicz and A. S. Argon, Phys. Rev. B 72, 245205 (2005).
[38] F. H. Stillinger and T. A. Weber, Phys. Rev. B 31, 5262 (1985).
[39] G. Molnár, P. Ganster, A. Tanguy, E. Barthel, and G. Kermouche, Acta Mater. 111, 129 (2016).
[40] B. W. H. van Beest, G. J. Kramer, and R. A. van Santen, Phys. Rev. Lett. 64, 1955 (1990).
[41] E. Lerner and E. Bouchbinder, Phys. Rev. E 97, 032140 (2018).
[42] H. Mizuno, H. Shiba, and A. Ikeda, Proc. Natl. Acad. Sci. U.S.A. 114, E9767 (2017).
[43] E. Lerner, E. DeGiuli, G. During, and M. Wyart, Soft Matter 10, 5085 (2014).
| [] |
[
"ALL TOTALLY SYMMETRIC COLORED GRAPHS",
"ALL TOTALLY SYMMETRIC COLORED GRAPHS"
] | [
"Mariusz Grech ",
"Andrzej Kisielewicz "
] | [] | [] | In this paper we describe all edge-colored graphs that are fully symmetric with respect to colors and transitive on every set of edges of the same color. They correspond to fully symmetric homogeneous factorizations of complete graphs. Our description completes the work done in our previous paper, where we have shown, in particular, that there are no such graphs with more than 5 colors. Using some recent results, with a help of computer, we settle all the cases that was left open in the previous paper.Date: December 2, 2011. | 10.46298/dmtcs.630 | [
"https://arxiv.org/pdf/1201.4464v1.pdf"
] | 33,651,136 | 1201.4464 | f5f7119548122e854247789cfe0477f5df2997dc |
ALL TOTALLY SYMMETRIC COLORED GRAPHS
Mariusz Grech
Andrzej Kisielewicz
ALL TOTALLY SYMMETRIC COLORED GRAPHS
In this paper we describe all edge-colored graphs that are fully symmetric with respect to colors and transitive on every set of edges of the same color. They correspond to fully symmetric homogeneous factorizations of complete graphs. Our description completes the work done in our previous paper, where we have shown, in particular, that there are no such graphs with more than 5 colors. Using some recent results, with the help of a computer, we settle all the cases that were left open in the previous paper. Date: December 2, 2011.
A k-colored graph G = (V, ψ) on a set of vertices V is a complete graph on V together with a function ψ from the set of edges onto the set of colors {0, 1, . . . , k − 1}. The automorphism group Aut(G) of G is the set of permutations of V preserving the colors of the edges. The extended automorphism group Ext(G) is the set of permutations of V preserving the partition into the colors. Obviously, Aut(G) is a normal subgroup of Ext(G). Moreover, the factor group Ext(G)/Aut(G) may be considered as one acting on the set of colors, and as such is called the symmetry group of colors of G.
A graph G is called edge-transitive if Aut(G) acts transitively on each set of edges of the same color, and is called arc-transitive (or strongly edge-transitive) if Aut(G) acts transitively on each set of ordered pairs of vertices corresponding to a set of edges of the same color. It is called color-symmetric if Ext(G) acts as the symmetric group on the set of colors. If G is both color-symmetric and edge-transitive, it is called totally symmetric, or a TSC-graph for short. In the previous paper [11] we proved that TSC-graphs may occur only for fewer than 6 colors. In this paper we give a complete description of such graphs. For two colors these are the self-complementary symmetric graphs, which have recently been described by Peisert [23]. Our result may be viewed as a natural generalization of Peisert's result.
The problems closely related to the subject of this paper have been studied under various names. In [25], T. Sibley classifies edge-colored graphs with a 2-transitive extended automorphism group Ext(G) (he calls this group the automorphism group, and only mentions Aut(G), calling it the group of isometries). Colored graphs with transitive Aut(G) have been considered by Chen and Teh in [6]; they call them point-color symmetric graphs. Ashbacher [1] uses the name rainbows for such structures. Various highly symmetrical structures in this class have been intensively studied using results based on the classification of finite simple groups (see e.g. [2,4,12,15,19,22,24]).
The most interesting colored graphs arise from factorizations of complete graphs, that is, partitions of the edges into factors that correspond to spanning subgraphs (not necessarily connected). Isomorphic factorizations, coloring the edges of a graph so that the colored subgraphs are isomorphic, have been introduced and studied in a series of papers by Harary, Robinson and Wormald (cf. [13,14]). A factorization of a complete graph is homogeneous if there are subgroups M < G ≤ S_n such that M is vertex-transitive and fixes each factor setwise, and G permutes the factors transitively (in particular, the factors are isomorphic). TSC-graphs correspond to those factorizations where M is edge-transitive and G permutes the factors symmetrically. Recently, in [18], Li, Lim and Praeger have classified all homogeneous factorizations with M edge-transitive. Their result helped us to finish our study and obtain the following complete description of totally symmetric colored graphs.
Theorem. If G is an edge-transitive color-symmetric k-colored graph, then G is arc-transitive and k ≤ 5. Moreover, one of the following cases holds:
(i) k = 5 and G = F_5(4^2) or G = H_5(3^4);
(ii) k = 4 and G = F_4(3^2);
(iii) k = 3 and G belongs to the infinite family of generalized Paley graphs GP_3(q), or G = G_3(5^2) or G = G_3(11^2);
(iv) k = 2 and G belongs to the infinite family of Paley graphs PG(q) or Peisert graphs PG*(q), or else G = G(23^2).
All graphs mentioned in the theorem are defined in the next section. We note that this description is something much stronger than just a classification. In a sense, one may say that our description is contained in the results classifying finite simple groups, rank-3 groups, and homogeneous factorizations. A good example of what makes a difference here is that (as we shall see in the sequel) all classifications admit the possibility of a 5-colored TSC-graph on 2^8 vertices. Only by combining suitable knowledge with heavy computations was it possible to learn that such an object, in fact, does not exist.
The paper is organized as follows. In Section 1 we classify and give definitions of totally symmetric graphs. In Section 2 we summarize and recall the results needed in our proof. Then, in the three following sections, we consider 4-, 5- and 3-colored TSC-graphs, respectively, which are the cases (ii), (i), and (iii) of the theorem above. The 2-colored TSC-graphs, as we have already mentioned, are described completely in [23]. Finally, in Section 6 we present the computations that allowed us to settle the most complicated cases that arose during the considerations in Section 4.
Definitions of TSC-graphs
We assume that the reader is familiar with the general terminology of finite fields, vector spaces and permutation groups (as used, e.g., in [3,8]). We start with simple graphs (k = 2). Whenever we consider a finite field F_q (q = p^r), by ω we denote a fixed primitive root of F_q.
1.1. Simple graphs. Totally symmetric 2-colored graphs are simple graphs that are symmetric and self-complementary. They have been fully described in [23]. Recall first that the Paley graph PG(q), where q ≡ 1 (mod 4), is one whose vertex set is F_q, and two distinct elements are adjacent if their difference is a square in F_q (i.e., if it is of the form ω^{2k}). If, in addition, p ≡ 3 (mod 4) (and consequently r is even), then the Peisert graph PG*(q) is one whose vertex set is F_q, and two elements are adjacent if their difference is of the form ω^j, where j ≡ 0, 1 (mod 4). The constructions above do not depend on the choice of the primitive root. Moreover, PG(q) is isomorphic with PG*(q) only if q = 9. For details see [23].
An exceptional self-complementary symmetric graph G(23 2 ) has a rather complicated definition. It can be found in [23].
1.2. Generalized Paley graphs. Following the definition of Paley graphs, we define now their 3-colored counterparts. Let p = 2(mod 3), q = p r , and q = 1(mod 3) (which is equivalent to r being even). Let GP 3 (q) denote the 3-colored graph whose vertex set is F q , in which the edge between two distinct elements has color i (i = 0, 1, 2) if their difference is of the form ω 3m+i . It is not difficult to see that this definition does not depend on the choice of the primitive root (see [11]), and the graph may be viewed as the orbital graph of the subgroup of the affine group generated by translations and multiplication by ω 3 . In [18] such graphs appear as the cyclotomic partition Cyc(q, k) of K q with k = 3. In general, graphs corresponding to such partitions are color-transitive. The case specified here (k = 3, p = 2(mod 3), r even) is the only one with k > 2 in which they are color-symmetric.
Following [18], we also define generalized Paley graphs with more colors. Namely, for each k > 2 and q = 1(mod k), such that either p = 2 or (q − 1)/k is even, we define GP k (q) as the k-colored graph whose vertex set is F q , in which the edge between two distinct elements has color i (i = 0, . . . , k − 1) if their difference is of the form ω km+i . This corresponds to the cyclotomic partitions Cyc(q, k) in [18]. We note that our usage of the term "generalized Paley graph" is slightly different: we mean the resulting colored graph, while Li et al. [18] mean the simple Cayley graph occurring as a factor of the partition. We needed to write a special computer program to make sure that for k > 3 there is no TSC-graph in this family (see Section 6).
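In the same hedged spirit, the cyclotomic coloring GP k (q) can be sketched for a prime q as a coloring by discrete logarithms modulo k; the brute-force helper primitive_root below is our own device, not part of the paper.

def primitive_root(q):
    # naive search: an element whose powers exhaust F_q^* (q prime)
    for w in range(2, q):
        if len({pow(w, j, q) for j in range(q - 1)}) == q - 1:
            return w

def gp_coloring(q, k):
    assert (q - 1) % k == 0
    w = primitive_root(q)
    log = {pow(w, j, q): j for j in range(q - 1)}   # discrete logarithm table
    return lambda x, y: log[(x - y) % q] % k        # color of the edge {x, y}

color = gp_coloring(13, 3)
# with (q - 1)/k even (or p = 2), as in the text, the coloring is well defined
# on unordered pairs:
assert all(color(x, 0) == color(0, x) for x in range(1, 13))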
1.3. Graphs determined by directions in a vector space. We now consider finite vector spaces F d q constructed from finite fields F q . For d > 1, by F k (q d ) we denote the k-colored graph defined on F d q , with k = (q d − 1)/(q − 1), whose colors are determined naturally by the k independent directions in the space. Scalar multiplication and addition (translations) preserve colors and move the edge (0, v) onto any edge in the direction generated by v. This shows that F k (q d ) is edge-transitive. Linear automorphisms of F d q act transitively on directions, which shows that F k (q d ) is color-transitive.
One may check directly that three of these graphs, F 5 (4 2 ), F 4 (3 2 ), F 3 (2 2 ), are color-symmetric. The last graph is isomorphic to GP 3 (2 2 ), and its automorphism group is abstractly isomorphic to the Klein four-group. This is the only known permutation group that is closed but is not a relational group (see [7,16]; note that Corollary 5.3 in [7] has a wrong proof).
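For illustration, the color of an edge {u, v} of F k (q d ) is just the projective normalisation of u − v; a minimal sketch of this normalisation for prime q (the normalisation convention is ours):

def direction(u, v, q):
    w = [(a - b) % q for a, b in zip(u, v)]
    i = next(j for j, x in enumerate(w) if x)      # position of the leading entry
    inv = pow(w[i], q - 2, q)                      # inverse in F_q (q prime)
    return tuple(x * inv % q for x in w)

# the k = 3 directions of F_3(2^2) are (1,0), (0,1) and (1,1):
assert direction((1, 1), (0, 1), 2) == (1, 0)
assert direction((1, 1), (0, 0), 2) == (1, 1)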
1.4. Exceptional 3-colored TSC-graphs. There are two further 3-colored TSC-graphs defined in a similar way on the vector spaces F 2 5 and F 2 11 . The vertex set of G 3 (5 2 ) is V = F 2 5 . It has six directions, determined by the vectors (1, 0), (0, 1), (1, 1), (2, 1), (3, 1), (4, 1); the lines of (1, 0) and (0, 1) have color 0, the lines of (1, 1) and (2, 1) have color 1, and the remaining have color 2. In other words, the graph is determined by the partition: [(1, 0), (0, 1)], [(1, 1), (2, 1)], [(3, 1), (4, 1)]. Graph G 3 (11 2 ) has an analogous definition. The vertex set of G 3 (11 2 ) is V = F 2 11 . It has 12 directions, determined by vectors starting in (0, 0) and ending in (1, 0) or (i, 1) for i = 0, . . . , 10. The graph is then determined by an analogous partition of these twelve directions into three classes of four; the explicit correspondence is recorded in Table 1 in Section 5. The fact that these graphs are color-symmetric is proved in Section 5, where also the automorphism groups of these graphs are presented.
1.5. Colored Hamming graph. Let us consider the exceptional affine 2-transitive group M ≤ AGL 4 (3) with M 0 = 2 1+4 given by Hering's Theorem in [21, Table 10]. Let H 5 (3 4 ) be the colored graph determined by the orbitals of this group. Then, combining Theorem 1.1 with Proposition 5.9 of [18], we see that H 5 (3 4 ) is a 5-colored TSC-graph. The corresponding factorization is considered in detail in [17]. In particular, it is observed there that the factors are isomorphic with the Hamming graph H(2, 9) (both [17] and [18] use the notation H(9, 2), but this seems to be a mistake).
Proof
We start by recalling the results we apply in the sequel. First, let us recall the results of [11], summarized suitably as follows:
Theorem 2.1. If G is a k-colored TSC-graph, then k ≤ 5 and G is arc transitive. In addition, for k > 2 we have the following.
(i) F 5 (4 2 ) is the unique 5-colored TSC-graph on 16 vertices. Except, possibly, for n = 2 8 , 3 4 or 7 4 , there is no other 5-colored TSC-graph on n vertices.
(ii) F 4 (3 2 ) is the unique 4-colored TSC-graph on 9 vertices. Except, possibly, for n = 3 4 , there is no other 4-colored TSC-graph on n vertices.
(iii) Except for a known infinite family of 3-colored TSC-graphs (generalized Paley graphs), there are only finitely many other 3-colored TSC-graphs, with the number of vertices belonging to the set {2 4 , 2 6 , 5 2 , 11 2 , 17 2 , 23 2 , 89 2 }.
Combining the above with [18, Theorem 1.1] (and carefully inspecting Tables 2 and 3 of [18]), we see that almost all TSC-graphs G have Aut(G) contained in a one-dimensional semilinear affine group.
Corollary 2.2. If G is a k-colored TSC-graph mentioned in the theorem above, then one of the following holds:
(i) k = 5 and G = H 5 (3 4 );
(ii) k = 3 and n = 5 2 or n = 11 2 ; or else
(iii) Aut(G) is an affine group contained in AΓL 1 (n) and there exists a larger group M ≤ AΓL 1 (n) such that Aut(G) ≤ M ≤ Ext(G), and M permutes transitively the colors of G (that is, the orbits of Aut(G)).
Let G be a TSC-graph satisfying (iii). Since Aut(G) contains translations, we may restrict to stabilizers of zero:
A = Aut(G) 0 and M 0 . Then A ≤ M 0 ≤ ΓL 1 (n).
This makes it possible to apply Foulser's description of one-dimensional semilinear groups [9,10]. Let us recall briefly the facts we shall need.
Let ω be a primitive root of the underlying field F q (q = p r ) and, at the same time, let it denote the scalar multiplication by ω, and let α be the generating field automorphism α : x → x p . Then ΓL 1 (p r ) = ω, α is the semidirect product of the normal subgroup ω by the subgroup α . In particular, every element g ∈ ΓL 1 (p r ) has a unique presentation as g = ω e α s for some 0 ≤ e < p r − 1, 0 ≤ s < r. (Alternatively, one may take 0 < s ≤ r, which is more suitable for the lemma below.)

Lemma 2.3. Let H be a subgroup of ω, α . Then H has the form H = ω d , ω e α s , where d, e, s can be chosen to satisfy the following conditions: (i) s > 0 and s|r; (ii) d > 0 and d|(p r − 1); (iii) 0 ≤ e < d and d|e(p r − 1)/(p s − 1).

Moreover, the integers d, e, s satisfying the conditions above are unique, and the presentation H = ω d , ω e α s is called the standard form. In addition, the proof shows that each of d and s < r is in fact the least positive integer such that H = ω d , ω e α s for any d, e, s. Moreover, H is a subdirect product of the normal subgroup ω d by the (another cyclic) subgroup ω e α s . The cardinality |H| = (p r − 1)r/ds. We note also that αω = ω p α.
In our situation, A acts on the set {ω 0 , . . . , ω p r −2 } of nonzero elements of the field F q . It has k equal orbits corresponding to the orbitals of Aut(G), representing the colors of G. Thus we may consider the nonzero elements of F q as colored: the color of ω i ∈ F q is the color of the edge (0, ω i ). We will refer to such an edge as the edge
ω i . By Foulser's description, A = ω d , ω e α s and M 0 = ω d1 , ω e1 α s1 , where d 1 |d, and if s 1 > 0, then s 1 |s. Moreover, if s > 0, then s 1 > 0.
In the following sections we show that these conditions are very strong and leave little room for the existence of suitable objects. The description presented in the main theorem follows directly from the lemmas in the subsequent sections, combined with Theorem 2.1 and Peisert's result in [23].
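Most of the forthcoming case analyses amount to computing the orbits of a subgroup A = ω d , ω e α s on the exponents of ω, where ω d acts as i → i + d and ω e α s acts as i → p s i + e modulo p r − 1. The following small Python sketch (ours, not the authors' program) mechanises this bookkeeping:

def orbits(p, r, d, e, s):
    n, seen, orbs = p**r - 1, set(), []
    gens = [lambda i: (i + d) % n,
            lambda i: (pow(p, s, n) * i + e) % n]
    for start in range(n):
        if start in seen:
            continue
        orb, stack = set(), [start]
        while stack:                      # forward closure = group orbit,
            i = stack.pop()               # since both generators are invertible
            if i not in orb:
                orb.add(i)
                stack.extend(g(i) for g in gens)
        seen |= orb
        orbs.append(sorted(orb))
    return orbs

# e.g. Case 3 of Lemma 3.1 below: A = <w^4, a^2> on F_81 has 4 orbits (4 colors)
assert len(orbits(3, 4, 4, 0, 2)) == 4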
TSC-graphs with 4 colors
We start from the case (ii) of Theorem 2.1. We prove the following.

Lemma 3.1. There is no 4-colored TSC-graph on 3 4 vertices.

Proof. Suppose, to the contrary, that G is a 4-colored TSC-graph on 3 4 vertices. By Corollary 2.2, Aut(G) is an affine group satisfying the conditions given in (iii). We apply the notation given above. In this case k = 4, n = 3 4 , p = 3, r = 4. In particular, α : x → x 3 .
By the cardinality formula, |A| = 80 · 4/ds. Since A is transitive on each color, 80/4 = 20 divides |A|. It follows that ds|16. Since A has four orbits, d ≥ 4. Thus, we have three cases to consider.
Case 1: d = 16 and s = 1.
Here A = ω 16 , ω e α . The permutation ω 16 has 16 orbits, represented by the elements ω i . Since A has 4 equal orbits, the permutation ω e α should "glue together" 4 ω 16 -orbits for each of them; that is, it should act on the ω 16 -orbits as a product of 4 cycles of length 4. This means that, for each i = 0, 1, . . . , 15, the 4 consecutive images of the element ω i under the permutation ω e α should belong to 4 different orbits. We compute two of these images: ω e α(ω i ) = ω 3i+e and (ω e α) 2 (ω i ) = ω 9i+4e . By Lemma 2.3(iii), it follows that e must be divisible by 2, so e = 2f for some f . Thus, (ω e α) 2 (ω i ) = ω 9i+8f . It should belong to a different ω 16 -orbit than ω i itself, which means that 9i + 8f is different from i modulo 16. Applied for i = 0, 1, this means that (modulo 16) 8f is different from 0, and 9 + 8f is different from 1, which is impossible.
Case 2: d = 8 and s ∈ {1, 2}.
Assume first that s = 1. In this case ω 8 has 8 orbits, and permutation ω e α should "glue together" 4 pairs of such orbits. The consecutive images of ω 0 by ω e α are ω e and ω 4e , which implies that e = 2f for some f . Now, the images ω e α(ω i ) = ω 3i+2f should be in a different ω 8 -orbit than ω i , for each i, which means that 3i + 2f is different from i modulo 8. It follows that 2i + 2f is different from 0 modulo 8, that is, i + f is different from 0 modulo 4, for each i. Yet, for each f = 0, 1, 2, 3, there exists i such that i + f = 0 modulo 4, a contradiction.
If s = 2, then by Lemma 2.3(iii), e = 0 or e = 4. Also, observe that α 2 : x → x 9 preserves the orbits of ω 8 , whence α 2 ∈ A. Now, A ≠ ω 8 , α 2 , since the latter has 8 orbits. On the other hand, α 2 ∉ ω 8 , ω 4 α 2 , a contradiction.

Case 3: d = 4. As above, we see that α 2 preserves the orbits of ω 4 . Since no other permutation ω e α s preserves these orbits, A = ω 4 , α 2 . Here we need a deeper argument, exploiting total symmetry. Since G is a 4-colored TSC-graph, by identifying any two pairs of colors we should obtain isomorphic 2-colored TSC-graphs. Yet, identifying color 0 with color 2 and color 1 with color 3, we obtain the Paley graph P G(3 4 ), while identifying color 0 with color 1 and color 2 with color 3, we obtain the Peisert graph P G * (3 4 ). These graphs are not isomorphic, and thus we have a contradiction.
TSC-graphs with 5 colors
In a quite similar way we consider now the case (i) of Theorem 2.1 with k = 5.

Lemma 4.1. There is at most one 5-colored TSC-graph on 7 4 vertices.

Proof. Let G be a 5-colored TSC-graph on 7 4 vertices. By Corollary 2.2, Aut(G) is an affine group satisfying the conditions given in (iii). In this case k = 5, p = 7, and r = 4. The cardinality |A| = (7 4 − 1) · 4/ds. Since Aut(G) is arc transitive, (7 4 − 1)/5 divides |A|. It follows that ds|20. Since A has five orbits, d ≥ 5.
Case 1: d = 20 and s = 1. Here A = ω 20 , ω e α . The permutation ω 20 has 20 orbits, and ω e α should act as a product of 5 cycles of length 4 on the ω 20 -orbits. We compute: ω e α(ω i ) = ω 7i+e and (ω e α) 2 (ω i ) = ω 49i+8e . The value 49i + 8e should be different from i modulo 20, which means that 9i + 8e is different from i, that is, 8i + 8e is different from 0 modulo 20. Consequently, for all i, i + e should be different from 0 modulo 5, which is impossible.
Case 2: d = 10 and s ∈ {1, 2}. Now ω e α should act as a product of 5 transpositions on the ω 10 -orbits. Hence, for s = 1, (ω e α) 2 (ω i ) = ω 49i+8e is the identity on these orbits. Yet, for i = 0, 1 we have 49i + 8e ≡ 8e and 9 + 8e modulo 10, respectively. It follows that 5 divides e and, consequently, 9 + 8 · 5f should be equal to 1 modulo 10 for some f , which is a contradiction.
For s = 2, ω e α 2 (ω i ) = ω 49i+e . For e odd, this yields a required product of transpositions, so we need a deeper argument to get a contradiction in this case.
We make use of the larger group M in Corollary 2.2(iii). Since M 0 permutes the colors of G transitively and there are k = 5 colors, M 0 contains, in particular, a cyclic permutation of the colors. Without loss of generality we may assume that M is an extension of Aut(G) by a single permutation c permuting the colors cyclically. We have M 0 = ω d1 , ω e1 α s1 , where d 1 |10, and the index [M 0 : A] = 5. The only possibility is M 0 = ω 2 , ω e α 2 . It follows that ω 2 is a cyclic permutation of colors of order 5. Now, the images of ω 0 and ω 2 under ω e α 2 are, respectively, in the ω 10 -orbits represented by ω e and ω e+8 . In particular, the corresponding pairs have the same colors. It follows that ω 2 ω e = ω e+2 should have the same color as ω e+8 . This contradicts the fact that ω 2 is a cyclic permutation of colors of order 5.
Case 3: d = 5 and s ∈ {1, 2, 4}. Here the ω 5 -orbits have to correspond to the 5 colors. We check that ω e α s does not preserve the ω 5 -orbits unless e = 0 and s = 4. Thus, A = ω 5 . Note that, in this case, ω permutes the ω 5 -orbits cyclically, so we cannot obtain a contradiction with the methods applied so far. Adding translations to A, we obtain a group whose orbitals form a 5-colored graph. This is just the only exception pointed out in the formulation of the lemma.
We observe that this exceptional graph is the generalized Paley graph GP 5 (7 4 ) defined in Section 1. One may check that the whole group AΓL 1 (7 4 ) preserves the colors of GP 5 (7 4 ), which suggests that it may be a TSC-graph with the automorphism group containing AΓL 1 (7 4 ) (and contained in AΓL 4 (7)). Note that we are easily able to compute this graph and store it in computer memory, but since it has 2401 vertices, computing its automorphism group directly is beyond the capabilities of modern computer technology. Only by suitably combining computational power with the knowledge we possess is it possible to settle the case. This will be done in the next section.
Lemma 4.2. There are at least one and at most two 5-colored TSC-graphs on 3 4 vertices.
Proof. One such graph is H 5 (3 4 ), described in Section 1. By Corollary 2.2, for every other 5-colored TSC-graph G on 3 4 vertices, Aut(G) is an affine group satisfying the conditions given in (iii).
In this case k = 5, p = 3, and r = 4. The cardinality |A| = (3 4 − 1) · 4/ds. Since Aut(G) is arc transitive, (3 4 − 1)/5 divides |A|. It follows that ds|20. Since A has five orbits, d ≥ 5. The remainder of the proof is essentially the same as that for n = 7 4 ; we leave it to the reader. Again, similarly as in the previous case, the only unsettled case is A = ω 5 acting on the nonzero elements of F 81 .
As before, the exceptional graph is the generalized Paley graph GP 5 (3 4 ), and one may check that the whole group AΓL 1 (3 4 ) preserves its colors. So, we need other methods to check this case. Here, GP 5 (3 4 ) has "only" 81 vertices, so one could try to check it using existing computation tools for permutation groups. Yet, we will do it more efficiently, applying the same approach as in the case of GP 5 (7 4 ). This is presented in the next section. The last case of Theorem 2.1(i) to consider is that of n = 2 8 .

Lemma 4.3. There is at most one 5-colored TSC-graph on 2 8 vertices.

Proof. If G is a 5-colored TSC-graph on 2 8 vertices, then by Corollary 2.2, Aut(G) is an affine group satisfying the conditions given in (iii). In this case k = 5, p = 2, and r = 8. Using cardinality arguments we have that |A| = (2 8 − 1) · 8/ds, and by arc transitivity, (2 8 − 1)/5 divides |A|. It follows that ds|40. Since A has five orbits, d ≥ 5, and since (by Lemma 2.3(ii)) d divides 2 8 − 1 = 255 = 5 · 51, it follows that we have only one possibility, d = 5.
Again we check that for A to have 5 orbits, we need to have e = 0 and s = 0 or 4. Since α 4 : x → x 16 preserves ω 5 -orbits, it follows that the only possibility is A = ω 5 , α 4 , and the possible exception mentioned in the formulation of the lemma is, again, the generalized Paley graph GP 5 (2 8 ).
TSC-graphs with 3 colors
Before we deal with the three unsettled cases of the previous section, we complete our investigation by considering the case k = 3. In the three lemmas below we use the fact that, by Corollary 2.2, Aut(G) is an affine group satisfying the conditions given in (iii). As before, we use the notation of Section 2.

Lemma 5.1. There is exactly one 3-colored TSC-graph on 2 4 vertices.

Proof. In this case k = 3, n = 2 4 , p = 2, r = 4. In particular, α : x → x 2 . The cardinality formula yields |A| = (2 4 − 1) · 4/ds. Since Aut(G) is arc transitive, (2 4 − 1)/3 divides |A|. It follows that ds divides 12. By Lemma 2.3(ii), d divides 2 4 − 1 = 15. Hence d = 3. The only possibility for A to have three orbits is e = 0 and s = 2. Since α 2 preserves the ω 3 -orbits, this leads to the generalized Paley graph GP (2 4 ).

Lemma 5.2. There is exactly one 3-colored TSC-graph on 2 6 vertices.

Proof. This case is very similar to the previous one. We have k = 3, p = 2, and r = 6, so that |A| = (2 6 − 1) · 4/ds and (2 6 − 1)/3 divides |A|. Whence ds|24. By Lemma 2.3(ii), d divides 2 6 − 1 = 63, yielding d = 3. The only possibility for A to have three orbits is e = 0 and s ∈ {2, 4}. The latter, s = 4, is excluded by Lemma 2.3(i), since 4 does not divide r = 6. Hence, A = ω 3 , α 2 , which leads to the generalized Paley graph GP (2 6 ).

Lemma 5.3. There is exactly one 3-colored TSC-graph on n = 17 2 , 23 2 , 89 2 vertices.

Proof. Let p ∈ {17, 23, 89}. If s = 2, then A = ω 3 , and the graph is GP (p 2 ). The remaining case is s = 1.
Here, again, we make use of the facts that |A| = 2(p 2 − 1)/d and (p 2 − 1)/3 divides |A|. Hence d|6. Since d ≥ 3, we have two cases d = 6 or d = 3. The proof is the same in each case p = 17, 23, 89, since in each case p = 5 modulo 6 (in particular, p = 2 modulo 3).
Case 1: d = 3. Since A should have exactly 3 orbits, ω e α should preserve the ω 3 -orbits. Yet, the image of ω i under ω e α is ω j , where j = pi + e ≡ 2i + e modulo 3. It follows that i ≡ 2i + e, and consequently, i ≡ −e modulo 3. This should be satisfied for each i and a fixed e, a contradiction.
Case 2. d = 6. Here, we look for e such that ω e α acts as a product of three transpositions on ω 6 -orbits. Now, the image of ω i by ω e α is ω j , where j = e + 5i = e − i modulo 6. It follows that A has exactly three orbits only when e is odd.
Here we need again a deeper argument, analogous to that applied in Case 2 of Lemma 4.1. In the same way we infer that there exists a group M 0 = ω d1 , ω e1 α s1 , where d 1 |6, and the index [M 0 : A] = 3. The only possibility is M 0 = ω 2 , ω e α . It follows that ω 2 is a cyclic permutation of colors of order 3. Now, the images of ω 0 , ω 1 and ω 2 by ω e α are, respectively, in ω 3 -orbits represented by ω e , ω e+2 and ω e+1 . This contradicts the fact that ω 2 is a cyclic permutation of colors of order 3.
It remains to consider the exceptional cases of n = 5 2 and 11 2 . We consider each of these cases separately, but before that, we establish a more general result needed here. By Theorem 2.1 of [11] we know that if G is a k-colored TSC-graph, then Aut(G) is an affine group. It follows that we may speak of the (finite) vector space V associated with G, and consequently, of sets of vertices forming lines in V . We prove that for k > 2 lines are monochromatic in the following sense.
Lemma 5.4. If G is a k-colored TSC-graph with k > 2, and V is a vector space associated with G, then for each one-dimensional subspace L of V , if v, u ∈ L, v, u ≠ 0, then the edges (0, v) and (0, u) have the same color.
Proof. We make use of the fact that by the proof of Theorem 2.1 of [11] not only Aut(G), but also Ext(G) is an affine group (see also [25,Theorem 15]). This means that Ext(G) 0 ≤ GL r (p), where |V | = p r . In particular, permutations in Ext(G) preserve lines.
Let L = {0, x 1 , . . . , x p−1 }, and suppose x 1 (that is, the edge (0, x 1 )) has color 0. Let f ∈ Ext(G) 0 be a permutation of vertices that is a transposition of colors 1 and 2. Since color 0 is fixed, and Aut(G) is transitive on each color, we may assume that f fixes x 1 as well. Consequently, f (L) = L. Now assume that there is a vertex x i in L colored 1. It follows that the number of vertices x i ∈ L colored 1 is the same as that colored 2. Since the choice of colors is arbitrary, it follows that all the colors are represented in L in equal numbers. This contradicts the fact that, by Theorem 2.1, p = 2(mod 3).
Lemma 5.5. There are exactly two nonisomorphic 3-colored TSC-graphs on 5 2 vertices.
Proof. Let G be a 3-colored TSC-graph on 5 2 vertices. We first construct the field F 25 , taking 2 + x 2 as an irreducible polynomial over F 5 and ω = 1 + x as a primitive root. Then we have the natural injection of ΓL 1 (25) into GL 2 (5) given by (we write 2 × 2 matrices with rows separated by semicolons)

ω −→ [ 1 3 ; 1 1 ] ,   α −→ [ 1 0 ; 0 −1 ] .
The associated vector space V = F 2 5 has six lines (one-dimensional subspaces), determined by the vectors (1, 0), (0, 1), (1, 1), (2, 1), (3, 1), (4, 1). Taking powers of ω = 1 + x modulo 2 + x 2 , we check that they correspond to the lines containing 1, ω 3 , ω, ω 2 , ω 4 , and ω 5 , respectively. By Lemma 5.4, each line is monochromatic, and consequently, two lines correspond to each color. Without loss of generality we may assume that the lines of (1, 0) and (0, 1) have the same color. (This is so because changing a basis by conjugation preserves lines, and therefore it is enough to consider only graphs, in a fixed presentation, with the base lines of (1, 0) and (0, 1) having the same color.) Then one of the lines generated by (2, 1), (3, 1), (4, 1) has to have the same color as (1, 1). This implies that we have at most three nonisomorphic 3-colored TSC-graphs on 5 2 vertices.
The second possibility (the lines of (3, 1) and (1, 1) have the same color) leads to a graph G 2 with Aut(G 2 ) 0 = ω 3 . This group is isomorphic to Z 8 , and G 2 = GP 3 (5 2 ). The first possibility (the lines of (2, 1) and (1, 1) have the same color) leads to a graph G 1 . It is not difficult to see that this graph is isomorphic to G 2 : indeed, the permutation of F 2 5 corresponding to the transposition (2, 3) of the underlying field F 5 yields the desired isomorphism.
The last possibility (the lines of (4, 1) and (1, 1) have the same color) leads to another 3-colored TSC-graph G 3 on 5 2 vertices. It is straightforward to check that in this case Aut(G 3 ) 0 is generated by the matrices

[ 2 0 ; 0 2 ] ,   [ 0 1 ; 1 0 ] ,   [ −1 0 ; 0 1 ] ,
which is isomorphic to D 4 × Z 2 , has order 16, and cannot be embedded in ΓL 1 (5 2 ). Obviously, G 3 = G 3 (5 2 ) defined in Section 1.
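As a quick sanity check of the line correspondence used in the proof above (the vectors (1, 0), (0, 1), (1, 1), (2, 1), (3, 1), (4, 1) corresponding to 1, ω 3 , ω, ω 2 , ω 4 , ω 5 ), one may run the following small Python sketch of arithmetic in F 25 (ours, not part of the paper):

def mul(u, v, p=5):                      # multiply modulo x^2 + 2, i.e. x^2 = -2
    a, b = u; c, d = v
    return ((a * c - 2 * b * d) % p, (a * d + b * c) % p)

def line(v, p=5):                        # scale so the last nonzero entry is 1
    a, b = v
    s = pow(b if b else a, p - 2, p)
    return (a * s % p, b * s % p)

w, powers = (1, 1), [(1, 0)]             # elements (a, b) encode a + b*x, w = 1 + x
for _ in range(5):
    powers.append(mul(powers[-1], w))
print([line(v) for v in powers])
# -> [(1, 0), (1, 1), (2, 1), (0, 1), (3, 1), (4, 1)], matching the text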
Lemma 5.6. There are exactly two nonisomorphic 3-colored TSC-graphs on 11 2 vertices.
Proof. In this case the fact that there are at most two 3-colored TSC-graphs on 11 2 vertices has been established in the proof of [11, Theorem 5.1]. The second case in that proof is Aut(G) 0 = ω 3 , α 2 < ΓL 1 (11 2 ) (note that in this case α 2 is the identity). This leads to the generalized Paley graph GP (11 2 ). Below, we will use the fact that, in this case, Aut(G) 0 is cyclic and isomorphic to Z 40 . The other case in the proof of [11, Theorem 5.1] is Aut(G) 0 = ω 6 , ω 3 α (where ω 3 may be replaced by ω or ω 5 , leading to isomorphic graphs). This group is also of order 40, but it is isomorphic with the group a, b | a 20 = e, b 2 = a 2 , ba = a 11 b , which is not abelian and therefore not isomorphic to Z 40 . It follows that the graph G determined by the orbitals of the corresponding group is not isomorphic to GP (11 2 ). We still need to prove that it is totally symmetric.
In order to construct the field F 11 2 we take the polynomial 1 + x 2 , which is irreducible over F 11 . As a primitive root we take ω = 6 + 2x. We have twelve lines, which contain ω 0 , . . . , ω 11 , respectively. For each i, we compute the vector (x, y) belonging to the same line as ω i , and using Aut(G) 0 we determine the lines with the same color. The results are presented in Table 1. As we see, this defines the graph G 3 (11 2 ) introduced in Section 1.

Table 1. Correspondence between lines of F 2 11 and elements ω i .

 i      0      1      2      3       4      5
 vector (1,0)  (3,1)  (5,1)  (10,1)  (9,1)  (4,1)
 i      6      7      8      9       10     11
 vector (0,1)  (7,1)  (2,1)  (1,1)   (6,1)  (8,1)
 color  0      1      1      0       2      2

Using the table we may check that the graph is totally symmetric. To this end it is enough to check that the matrix [ 1 0 ; 0 −1 ] exchanges colors 1 and 2, while the matrix [ 2 1 ; 1 4 ] exchanges colors 0 and 1.
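The two color exchanges claimed at the end of the proof can be verified mechanically; the following Python sketch (ours) recomputes Table 1 and the permutations of the three colors induced by the two matrices:

p = 11
def mul(u, v):                                  # multiplication with x^2 = -1
    a, b = u; c, d = v
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def line(v):                                    # projective normalisation
    a, b = v
    s = pow(b if b else a, p - 2, p)
    return (a * s % p, b * s % p)

col6 = {0: 0, 1: 1, 2: 1, 3: 0, 4: 2, 5: 2}     # color of w^i depends on i mod 6
w, v, color = (6, 2), (1, 0), {}
for i in range(12):
    color[line(v)] = col6[i % 6]
    v = mul(v, w)

def induced(M):                                 # how a matrix permutes the colors
    img = {c: set() for c in range(3)}
    for (a, b), c in color.items():
        Ma, Mb = M[0][0] * a + M[0][1] * b, M[1][0] * a + M[1][1] * b
        img[c].add(color[line((Ma % p, Mb % p))])
    return {c: s.pop() for c, s in img.items() if len(s) == 1}

print(induced(((1, 0), (0, -1))))   # {0: 0, 1: 2, 2: 1}: exchanges colors 1 and 2
print(induced(((2, 1), (1, 4))))    # {0: 1, 1: 0, 2: 2}: exchanges colors 0 and 1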
Computations
In order to check whether the exceptional graphs mentioned in Section 4 are totally symmetric, we wrote a dedicated computer program that made intensive use of the facts about the structure of the graphs established above. Below, we present the result and the details of the computations.

Theorem 6.1. None of the graphs GP 5 (3 4 ), GP 5 (7 4 ), GP 5 (2 8 ) is totally symmetric.
The general idea of the computations is the same in each case p r = 3 4 , 7 4 , 2 8 . First, we use the fact, mentioned in the proof of Lemma 5.4, that the extended automorphism group of a totally symmetric graph is contained in AGL r (p). It follows that we may restrict to stabilizers of 0, and our aim is to prove that GL r (p) does not permute the colors in a symmetric way. The colors of GP 5 (p r ) correspond to the orbits of A = ω 5 . To fix notation, let us assign color i to the orbit of ω i for i = 0, 1, . . . , 4. It is enough to show that, for instance, no permutation in GL r (p) transposes two colors while fixing the remaining ones. We make use of the fact that permutations in GL r (p) can be represented by suitable r × r matrices. We have, respectively, 3 16 , 7 16 , and 2 64 matrices to check, which are still too large numbers. So we need to use further facts in order to reduce these numbers. Since the details of the program differ in each case, we describe them first for n = 7 4 .
First, we construct a concrete field of n = 7 4 elements, using the polynomial x 4 + x 3 + x 2 + 3, irreducible over F 7 . We check that ω = x is a primitive root of F n , generating the multiplicative group. Computing all the powers ω i , we establish the colors of all vectors (a 0 , a 1 , a 2 , a 3 ) = a 0 + a 1 x + a 2 x 2 + a 3 x 3 . In order to prove that the graph determined by the colored vectors is not totally symmetric, it is enough to show that there exists a permutation of colors that is not induced by any linear transformation. For technical reasons we demonstrate this for the transposition (1, 2) of colors 1 and 2. We note that, since the powers ω 0 , ω 1 , ω 2 are the vectors (1, 0, 0, 0), (0, 1, 0, 0) and (0, 0, 1, 0), respectively, their colors are 0, 1 and 2, respectively.
Thus, we are looking for a 4 × 4 matrix B = (a ij ), a ij ∈ F 7 , such that for each vector v ∈ F n , v and Bv have the same colors, except that if v has color 1 then Bv has color 2, and if v has color 2 then Bv has color 1. Note that since A is transitive on each color, if there is a B with the required properties, then there must be one that in addition fixes ω 0 = (1, 0, 0, 0). In other words, we may assume that the first column of B is just (1, 0, 0, 0). A further way to reduce the number of matrices to check is to observe that, since the color of (0, 1, 0, 0) is 1, the second column must be a vector of color 2. Similarly, the third column must be of color 1, while the fourth column must be a vector of color 3. Since there are (7 4 − 1)/5 = 480 vectors of each color, we have 480 3 = 11 × 10 7 matrices to generate and check. We wrote a suitable program in C++, and it took some 20 minutes on our PC with a 2 GHz processor to get the answer that no matrix satisfies these conditions. The analogous program for the case of n = 3 4 obtained the same answer, checking 4096 matrices in an instant. The case n = 2 8 is the hardest one. Here we have (2 8 − 1)/5 = 51 vectors for each color, and consequently, 51 7 = 9 × 10 11 matrices to check. To reduce this number we make use of the fact that the candidates for each column may be further restricted as follows. The sum of the first and the i-th column, i > 1, must be of the same color as the vector (1, 0, . . . , 0, 1, . . . , 0) with the second 1 in the i-th place. The sum in question may be obtained just by switching the first bit of the vector representing the i-th column. Computations in the field F 2 8 may also be sped up due to the fact that the underlying field is the so-called Rijndael field [5], where each vector may be treated as a byte, and consequently bitwise operations may be applied directly. In particular, computing the image of a vector by a matrix reduces to computing the exclusive-OR of the set of column-bytes. We were able to reduce the computation time to a few hours to get the answer that no matrix satisfies the required conditions.
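For the smallest case, GP 5 (3 4 ), the search is easily reproduced. The following condensed Python sketch is ours and not the original C++ program; in particular, the paper does not fix an irreducible polynomial for p = 3, so the choice f(x) = x 4 + x + 2 below is our assumption (the code verifies that x is then a primitive root):

p, r, k = 3, 4, 5
n = p**r - 1                      # the 80 nonzero elements of F_81
red = (1, 2, 0, 0)                # x^4 = 1 + 2x (mod 3) for f(x) = x^4 + x + 2

def mulx(v):                      # multiply a0 + a1 x + a2 x^2 + a3 x^3 by x mod f
    return tuple(([0, v[0], v[1], v[2]][i] + v[3] * red[i]) % p for i in range(r))

powers, v = [], (1, 0, 0, 0)
for _ in range(n):
    powers.append(v)
    v = mulx(v)
assert len(set(powers)) == n      # x is a primitive root for this choice of f
color = {w: i % k for i, w in enumerate(powers)}  # color i = orbit of omega^i

def apply(B, v):                  # B is given by its columns (images of e1..e4)
    return tuple(sum(B[j][i] * v[j] for j in range(r)) % p for i in range(r))

cls = [[w for w in powers if color[w] == c] for c in range(k)]
swap = {0: 0, 1: 2, 2: 1, 3: 3, 4: 4}      # the transposition (1, 2) of colors
hits = 0
for c2 in cls[2]:                 # the image of e2 = omega^1 must have color 2
    for c3 in cls[1]:             # the image of e3 = omega^2 must have color 1
        for c4 in cls[3]:         # the image of e4 = omega^3 must have color 3
            B = ((1, 0, 0, 0), c2, c3, c4)
            if all(color.get(apply(B, w)) == swap[color[w]] for w in powers):
                hits += 1
print(hits)                       # the paper reports that no such matrix exists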
Since the answers in all three cases were negative, we modified the program slightly to make sure it gives correct answers to other related questions. In particular, our program correctly found the matrices permuting the colors in a cyclic way and computed the correct cardinalities of the automorphism groups.
Acknowledgement
We thank Kornel Kisielewicz, who helped us to write suitable programs to compute the unsettled cases n = 3 4 , 7 4 and 2 8 .
References

[1] M. Aschbacher, Chromatic geometry, Quart. J. Math. Oxford Ser. 38 (1987), 277-296.
[2] P. J. Cameron, On groups of degree n and n − 1, and highly-symmetric edge colourings, J. London Math. Soc. 9 (1975), 385-391.
[3] P. J. Cameron, Permutation Groups, Cambridge University Press, 1999.
[4] P. J. Cameron, G. Korchmaros, One-factorizations of complete graphs with a doubly transitive automorphism group, Bull. London Math. Soc. 25 (1993), 1-6.
[5] K. Cartrysse, J. C. van der Lubbe, The Advanced Encryption Standard: Rijndael, chapter in "Basismethoden cryptografie" and "Basic methods of cryptography", 2004.
[6] C. C. Chen, H. H. Teh, Constructions of point-colour-symmetric graphs, J. Combin. Theory Ser. B 27 (1979), 160-167.
[7] F. Dalla Volta, J. Siemons, Orbit equivalence and permutation groups defined by unordered relations, J. Algebr. Combin. (2011), 1-18. doi:10.1007/s10801-011-0313-5.
[8] J. D. Dixon, B. Mortimer, Permutation Groups, Springer-Verlag (GTM 163), 1996.
[9] D. A. Foulser, The flag-transitive collineation groups of the finite Desarguesian affine planes, Canad. J. Math. 16 (1964), 443-472.
[10] D. A. Foulser, M. J. Kallaher, Solvable, flag-transitive collineation groups, Geometriae Dedicata 7 (1978), 111-130.
[11] M. Grech, A. Kisielewicz, Totally symmetric colored graphs, J. Graph Theory 62 (2009), 329-345.
[12] M. Giudici, C. H. Li, P. Potočnik, C. E. Praeger, Homogeneous factorizations of graphs and digraphs, European J. Combin. 27 (2006), 11-37.
[13] F. Harary, R. W. Robinson, N. C. Wormald, Isomorphic factorisations I: Complete graphs, Trans. Amer. Math. Soc. 242 (1978), 243-260.
[14] F. Harary, R. W. Robinson, Isomorphic factorisations X: Unsolved problems, J. Graph Theory 9 (1985), 67-86.
[15] W. M. Kantor, Homogeneous designs and geometric lattices, J. Combin. Theory Ser. A 38 (1985), 66-74.
[16] A. Kisielewicz, Symmetry groups of boolean functions and constructions of permutation groups, J. Algebra 199 (1998), 379-403.
[17] T. K. Lim, Arc-transitive homogeneous factorizations and affine planes, J. Combin. Designs 14 (2006), 290-300.
[18] C. H. Li, T. K. Lim, C. E. Praeger, Homogeneous factorizations of complete graphs with edge-transitive factors, J. Algebr. Combin. 29 (2009), 107-132.
[19] C. H. Li, C. E. Praeger, On partitioning the orbitals of a transitive permutation group, Trans. Amer. Math. Soc. 355 (2003), 637-653.
[20] T. K. Lim, C. E. Praeger, On generalised Paley graphs and their automorphism groups, Michigan Math. J. 58 (2009), 293-308.
[21] M. W. Liebeck, The affine permutation groups of rank three, Proc. London Math. Soc. (3) 54 (1987), 477-516.
[22] M. Muzychuk, On Sylow subgraphs of vertex-transitive self-complementary graphs, Bull. London Math. Soc. 31 (1999), 531-533.
[23] W. Peisert, All self-complementary symmetric graphs, J. Algebra 240 (2001), 209-229.
[24] P. Potočnik, M. Šajna, Brick assignments and homogeneously almost self-complementary graphs, J. Combin. Theory Ser. B 99 (2009), 185-201.
[25] T. Q. Sibley, On classifying finite edge colored graphs with two transitive automorphism groups, J. Combin. Theory Ser. B 90 (2004), 121-138.

Institute of Mathematics, University of Wrocław, pl. Grunwaldzki 2, 50-384 Wrocław, Poland
E-mail address: [email protected], [email protected]
| [] |
[
"Absence of eigenvalues of Dirac and Pauli Hamiltonians via the method of multipliers",
"Absence of eigenvalues of Dirac and Pauli Hamiltonians via the method of multipliers",
"Absence of eigenvalues of Dirac and Pauli Hamiltonians via the method of multipliers",
"Absence of eigenvalues of Dirac and Pauli Hamiltonians via the method of multipliers"
] | [
"Lucrezia Cossetti [email protected] \nFakultät für Mathematik\nInstitut für Analysis\nInstitut für Technologie\nEnglerstraße 276131Karlsruher, KarlsruheGermany\n",
"Luca Fanelli [email protected] \nDipartimento di Matematica\nSAPIENZA Università di Roma\nP. le Aldo Moro 500185RomeItaly\n",
"David Krejčiřík [email protected] \nDepartment of Mathematics\nFaculty of Nuclear Sciences and Physical Engineering\nCzech Technical University\nTrojanova 13, 12000 Prague 2PragueCzechia\n",
"Lucrezia Cossetti [email protected] \nFakultät für Mathematik\nInstitut für Analysis\nInstitut für Technologie\nEnglerstraße 276131Karlsruher, KarlsruheGermany\n",
"Luca Fanelli [email protected] \nDipartimento di Matematica\nSAPIENZA Università di Roma\nP. le Aldo Moro 500185RomeItaly\n",
"David Krejčiřík [email protected] \nDepartment of Mathematics\nFaculty of Nuclear Sciences and Physical Engineering\nCzech Technical University\nTrojanova 13, 12000 Prague 2PragueCzechia\n"
] | [
"Fakultät für Mathematik\nInstitut für Analysis\nInstitut für Technologie\nEnglerstraße 276131Karlsruher, KarlsruheGermany",
"Dipartimento di Matematica\nSAPIENZA Università di Roma\nP. le Aldo Moro 500185RomeItaly",
"Department of Mathematics\nFaculty of Nuclear Sciences and Physical Engineering\nCzech Technical University\nTrojanova 13, 12000 Prague 2PragueCzechia",
"Fakultät für Mathematik\nInstitut für Analysis\nInstitut für Technologie\nEnglerstraße 276131Karlsruher, KarlsruheGermany",
"Dipartimento di Matematica\nSAPIENZA Università di Roma\nP. le Aldo Moro 500185RomeItaly",
"Department of Mathematics\nFaculty of Nuclear Sciences and Physical Engineering\nCzech Technical University\nTrojanova 13, 12000 Prague 2PragueCzechia"
] | [] | By developing the method of multipliers, we establish sufficient conditions on the magnetic field and the complex, matrix-valued electric potential, which guarantee that the corresponding system of Schrödinger operators has no point spectrum. In particular, this allows us to prove analogous results for Pauli operators under the same electromagnetic conditions and, in turn, as a consequence of the supersymmetric structure, also for magnetic Dirac operators. | 10.1007/s00220-020-03853-7 | [
"https://export.arxiv.org/pdf/1912.02443v1.pdf"
] | 208,637,400 | 1912.02443 | ab58c1b36366a3695b0282d6db70af83ecc80883 |
Absence of eigenvalues of Dirac and Pauli Hamiltonians via the method of multipliers
5 December 2019
Lucrezia Cossetti [email protected]
Fakultät für Mathematik
Institut für Analysis
Institut für Technologie
Englerstraße 276131Karlsruher, KarlsruheGermany
Luca Fanelli [email protected]
Dipartimento di Matematica
SAPIENZA Università di Roma
P. le Aldo Moro 500185RomeItaly
David Krejčiřík [email protected]
Department of Mathematics
Faculty of Nuclear Sciences and Physical Engineering
Czech Technical University
Trojanova 13, 12000 Prague 2PragueCzechia
By developing the method of multipliers, we establish sufficient conditions on the magnetic field and the complex, matrix-valued electric potential, which guarantee that the corresponding system of Schrödinger operators has no point spectrum. In particular, this allows us to prove analogous results for Pauli operators under the same electromagnetic conditions and, in turn, as a consequence of the supersymmetric structure, also for magnetic Dirac operators.
Introduction
Objectives and state of the art
Understanding electromagnetic phenomena has played a fundamental role in quantum mechanics. The simplest mathematical model for the Hamiltonian of an electron, subject to an external electric field described by a scalar potential V : R 3 → R and an external magnetic field B = curl A with a vector potential A : R 3 → R 3 , is given by the Schrödinger operator
−∇ 2 A + V in L 2 (R 3 ; C) , (1.1)
where ∇ A := ∇ + iA is the magnetic gradient. Unfortunately, the mathematically elegant model (1.1) is not sufficient to explain finer electromagnetic effects, for it disregards an inner structure of electrons, namely their spin. A partially successful attempt to take the spin into account is to enrich the algebraic structure of the Hilbert space and consider the Pauli operator
H P (A, V ) := −∇ 2 A I C 2 + σ · B + V in L 2 (R 3 ; C 2 ) , (1.2)
where σ := (σ 1 , σ 2 , σ 3 ) are Pauli matrices. Here the term σ · B describes the interaction of the spin with the magnetic field and V := V I C 2 stands for the electric interaction as above.
To get a more realistic description of the electron, subject to an external electromagnetic field, one has to take relativistic effects into account. A highly successful model is given by the Dirac operator
H D (A, V ) := −iα · ∇ A + (1/2) β + V in L 2 (R 3 ; C 4 ) , (1.3)
where α := (α 1 , α 2 , α 3 ) and β are Dirac matrices and V := V I C 4 . The principal objective of this paper is to develop the so-called method of multipliers in order to establish spectral properties of the Pauli and Dirac operators. This technique comes from partial differential equations, but it seems to be much less known in spectral theory. We are primarily interested in physically relevant sufficient conditions, which guarantee the absence of point spectra (including possibly embedded eigenvalues). We proceed in greater generality by allowing V : R 3 → C to be complex-valued in (1.1) and V : R 3 → C 2×2 to be a general matrix-valued potential, possibly non-Hermitian, in (1.2). However, some of our results are new even in the self-adjoint setting. Since the spin-magnetic term σ · B can be included in V , we simultaneously consider matrix electromagnetic Schrödinger operators
H S (A, V ) := −∇ 2 A I C 2 + V in L 2 (R 3 ; C 2 ) . (1.4)
Since the operator acts on spinors, we occasionally call the corresponding spectral problem the spinor Schrödinger equation.
As the last but not least generalisation to mention, in the main body of the paper, we shall consider the Pauli and Dirac operators in the Euclidean space R d of arbitrary dimension d ≥ 1.
The study of spectral properties of scalar Schrödinger operators (1.1) constitutes a traditional domain of mathematical physics and the literature on the subject is enormous. Much less is known in the mathematically challenging and still physically relevant situations where V is allowed to be complex-valued, see [16,15] and references therein. Works concerning non-self-adjoint Pauli operators are much more sparse in the literature, see [26] and references therein. More results are available in the case of non-self-adjoint Dirac operators, see [8,11,6,25,7,12,9,14].
The paper [16] represents a first application of the method of multipliers to spectral theory: the authors established sufficient conditions, which guarantee the total absence of eigenvalues of (1.1). It is remarkable that the conditions are physically relevant in the sense that they involve the magnetic field B rather than the vector potential A. The two-dimensional situation was covered later in [15]. The robustness of the method of multipliers has been demonstrated in its successful application to the half-space instead of the whole Euclidean space in [5] and to Lamé instead of Schrödinger operators in [4]. In the present paper, we push the analysis forward by investigating how the unconventional method provides meaningful and interesting results in the same direction also in the less explored setting of the spinorial Hamiltonians.
The strategy
The main ingredient in our proofs is the method of multipliers as developed in [16] for scalar Schrödinger operators (1.1). In the present paper, however, we carefully revisit the technique and provide all the painful details, which were missing in the previous works. We identify various technical hypotheses about the electromagnetic potentials to justify the otherwise formal manipulations. We believe that this part of the paper will be of independent interest for communities interested in spectral theory as well as partial differential equations.
The next, completely new contribution is the adaptation of the method to the matrix electromagnetic Schrödinger operators (1.4). The Pauli Hamiltonians (1.2) are then covered as a particular case.
The method of multipliers does not seem to apply directly to Dirac operators, because of the lack of positivity of certain commutators. Our strategy is to employ the supersymmetric structure of Dirac operators (cf. [27,Ch. 5]). More specifically, using the standard representation is available, which, in turn, follows as a consequence of the corresponding result for the general Schrödinger operators H S (A, V ) with matrix-valued potentials V . Notice that, in this way, we are not able to treat magnetic Dirac operators with electric perturbations.
α µ = [ 0 σ µ ; σ µ 0 ] , β = [ I C 2 0 ; 0 −I C 2 ] , µ = 1, 2, 3, (1.5)

of the Dirac matrices (we write block matrices with rows separated by semicolons), the purely magnetic Dirac operator takes the supersymmetric block form

H D (A) = [ (1/2) I C 2 D * ; D −(1/2) I C 2 ] , where D := −iσ · ∇ A , (1.6)

cf. (2.5) below. In this way, the total absence of eigenvalues of H D (A) follows as soon as an analogous result for the Pauli operator H P (A, 0) is available, which, in turn, follows as a consequence of the corresponding result for the general Schrödinger operators H S (A, V ) with matrix-valued potentials V . Notice that, in this way, we are not able to treat magnetic Dirac operators with electric perturbations.
The results in three dimensions
As usual, the sums on the right-hand sides of (1.1), (1.2) and (1.4) should be interpreted in a form sense (cf. [18,Ch. VI]). More specifically, the operators are introduced as the Friedrichs extension of the operators initially defined on smooth functions of compact support. The regularity hypotheses and the functional inequalities stated in the theorems below ensure that the operators are well defined as m-sectorial operators. The Dirac operator (1.3) with V = 0 is a closed symmetric operator under the stated assumptions. Henceforth, we use the notation r(x) := |x| for the distance function from the origin of R d and ∂ r f (x) :=
(x/|x|) · ∇f (x) for the radial derivative of a function f : R d → C. We also set f ± (x) := max{±f (x), 0} if f is real-valued. For matrix Schrödinger operators (1.4), we prove the following result.
Theorem 1.1 (Spinor Schrödinger equation). Let A ∈ L 2 loc (R 3 ; R 3 ) be such that B ∈ L 2 loc (R 3 ; R 3 ). Suppose that V ∈ L 1 loc (R 3 ; C 2×2 ) admits the decomposition V = V (1) + V (2) with components V (1) ∈ L 1 loc (R 3 ) and V (2) = V (2) I C 2 , where V (2) ∈ L 1 loc (R 3 ) is such that [∂ r (r Re V (2) )] + ∈ L 1 loc (R 3 ) and rV (1) , r(Re V (2) ) − , r Im V (2) ∈ L 2 loc (R 3 ). Assume that there exist numbers a, b, β, b′, c ∈ [0, 1) satisfying 2(b + β + 2a) < 1 and

2c + 2β + 6a + b′ 2 + √2 (b + a)(√β + √a) < 1 (1.7)

such that, for every two-vector u with components in C ∞ 0 (R 3 ), the inequalities

∫ R 3 r 2 |V (1) | 2 |u| 2 ≤ a 2 ∫ R 3 |∇ A u| 2 ,   ∫ R 3 r 2 |B| 2 |u| 2 ≤ c 2 ∫ R 3 |∇ A u| 2 , (1.8)

and

∫ R 3 r 2 (Re V (2) ) 2 − |u| 2 ≤ b 2 ∫ R 3 |∇ A u| 2 ,   ∫ R 3 r 2 |Im V (2) | 2 |u| 2 ≤ β 2 ∫ R 3 |∇ A u| 2 , (1.9)

∫ R 3 [∂ r (r Re V (2) )] + |u| 2 ≤ b′ 2 ∫ R 3 |∇ A u| 2 (1.10)

hold true. Then H S (A, V ) has no eigenvalues, i.e. σ p (H S (A, V )) = ∅.

As a consequence of the previous result, one has the corresponding theorem for Pauli operators. Due to the supersymmetric structure (1.6) of the Dirac operator, the spectra of the Dirac and Pauli operators are intimately related. In particular, we deduce the following result from the previous theorem.

Theorem 1.3 (Dirac equation). Let A ∈ L 2 loc (R 3 ; R 3 ) be such that B ∈ L 2 loc (R 3 ; R 3 ). Assume that there exists a number c ∈ [0, 1) satisfying

4√3 c < 1   and   2c + 6√3 c + √2 (√3 c) 3/2 < 1 (1.12)

such that, for every four-vector u with components in C ∞ 0 (R 3 ), the inequality

∫ R 3 r 2 |B| 2 |u| 2 ≤ c 2 ∫ R 3 |∇ A u| 2 (1.13)

holds true. If in addition A ∈ W 1,3 loc (R 3 ), then H D (A) has no eigenvalues, i.e. σ p (H D (A)) = ∅.

Remark 1.1. Notice that the conditions in (1.12) are overabundant, in the sense that if c is such that the second inequality of (1.12) holds true, then 4√3 c < 1 is automatically satisfied. Indeed, the second inequality of (1.12) requires c < c * 1 , where c * 1 ≈ 0.075, whereas the first requires c < c * 2 , where c * 2 ≈ 0.14. We decided to keep both conditions anyway in order to have a faster comparison with the corresponding results concerning the other theorems.
Organisation of the paper
Even though so far we have considered only the three-dimensional framework, in this work we shall actually provide variants of the results presented above in any dimension. (We anticipate already now that the two-dimensional framework will be excluded in the settings of Pauli and Dirac operators because of the well-known Aharonov-Casher effect.) In order to state our results in any dimension, however, auxiliary material will be needed in order to introduce the general framework for the Pauli and Dirac Hamiltonians. We therefore postpone the presentation of the general results to Section 3, while Section 2 is devoted to the definition of Dirac and Pauli operators in any dimension (this section can be skipped by an experienced reader).
Notations
Here we summarise specific notations and conventions that we use in this paper.
• We adopt the convention to write matrices in boldface.
• For any dimension d ≥ 2, the physically relevant quantity associated to a given magnetic vector potential
A : R d → R d is the d × d matrix-valued quantity B := (∇A) − (∇A) t .
Here, as usual, (∇A) jk = ∂ j A k and (∇A) t jk = (∇A) kj with j, k = 1, 2 . . . , d. In d = 2 and d = 3 the magnetic tensor B can be identified with the scalar field B 12 = ∂ 1 A 2 − ∂ 2 A 1 or the vector field B = curl A, respectively. More specifically, one has
Bw = B 12 w ⊥ if d = 2 (w ∈ R 2 ),   Bw = −B × w if d = 3 (w ∈ R 3 ),
where for any w = (w 1 , w 2 ) ∈ R 2 , w ⊥ := (w 2 , −w 1 ) and the symbol × denotes the cross product in R 3 .
Notice that we did not comment on the case d = 1. In one dimension, in fact, the addition of a magnetic potential is trivial, in the sense that it is always possible to remove it by a suitable gauge transformation. We refer to [3] for a complete survey on the concept of magnetic field in any dimensions and its definition in terms of differential forms and tensor fields.
• We adopt the standard notation | · | for the Euclidean norm on C d . We use the same symbol | · | for the operator norm: if M is a d × d matrix, we set
|M | := sup { |M v| / |v| : v ∈ C d , v ≠ 0 } .
• For v, w ∈ R d , the centered dot operation v · w designates the scalar product of the two vectors v, w in R d .
• Given two vectors v, w ∈ R d and a d × d matrix M , the double-centered dot operation v · M · w stands for the vector-matrix-vector product which returns the following scalar number
v · M · w := ∑_{j,k=1}^{d} v k M kj w j .
• We use the following definition for the L 2 -norm of a vector-valued function u = (u 1 , u 2 , . . . , u n ) on R d :
‖u‖ [L 2 (R d )] n := ( ∑_{j=1}^{n} ‖u j ‖ 2 L 2 (R d ) ) 1/2 .
Definition of Dirac and Pauli Hamiltonians in any dimension
As already mentioned, our results will be stated in all dimensions d ≥ 1. In particular, this requires a more careful analysis of the Dirac and Pauli operators, as their explicit form changes according to the underlying dimension. Since here we are just interested in identifying the correct action of the operators, we disregard issues with the operator domains for a moment.
The Dirac operator
Generalising the expression (1.3) to arbitrary dimensions requires ensuring existence of d+ 1 Hermitian matrices α := (α 1 , α 2 , . . . , α d ) and β satisfying the anticommutation relations
α µ α ν + α ν α µ = 2δ µν I C n(d) , α µ β + βα µ = 0 C n(d) , β 2 = I C n(d) , (2.1)
for µ, ν ∈ {1, 2, . . . , d}, where δ µν represents the Kronecker symbol. The possibility to find such matrices clearly depends on the dimension n(d) of the matrices themselves. In this regard one can verify that the following distinction is needed:
n(d) := 2^{(d+1)/2} if d is odd, n(d) := 2^{d/2} if d is even. (2.2)
Even though all that really matters is the anticommutation relations that the Dirac matrices satisfy, for the purpose of visualising the supersymmetric structure of the Dirac operator we shall rely on a particular representation of these matrices, the so-called standard representation. According to the standard representation, one defines the d + 1 matrices α = (α 1 , α 2 , . . . , α d ) and β iteratively (with respect to the dimension), distinguishing between odd and even dimensions. For the sake of clarity, in the following the Dirac matrices are written with a superscript (d) to stress that they are constructed at the step corresponding to working in d dimensions; e.g., α = (α (d) 1 , α (d) 2 , . . . , α (d) d ) and β (d) are the d + 1 Dirac matrices constructed in d dimensions. Moreover, for notational convenience, we denote the matrix β (d) as the (d + 1)-th α-matrix, namely β (d) := α (d) d+1 .
Odd dimensions
If d is odd, let us assume that we know the n(d − 1) × n(d − 1) matrices α (d−1) 1 , α (d−1) 2 , . . . , α (d−1) d corresponding to the previous step in the iteration. We then define n(d) × n(d) matrices (where, according to (2.2), n(d) = 2n(d − 1)) in the following way (we write block matrices with rows separated by semicolons):

α (d) µ = [ 0 α (d−1) µ ; α (d−1) µ 0 ] , β (d) := α (d) d+1 = [ I C n(d−1) 0 ; 0 −I C n(d−1) ] , µ = 1, 2, . . . , d.
Even dimensions
If d is even, we define n(d) × n(d) matrices (where, according to (2.2), n(d) = n(d − 1) = 2n(d − 2)) as follows:

α (d) 1 = [ 0 I C n(d−2) ; I C n(d−2) 0 ] , α (d) µ+1 = [ 0 −iα (d−2) µ ; iα (d−2) µ 0 ] , µ = 1, 2, . . . , d − 1,

and

β (d) := α (d) d+1 = [ I C n(d)/2 0 ; 0 −I C n(d)/2 ] .
Notice that we are also using the convention that n(0) = 1 and that the 1 × 1 matrix α (0) 1 = (1). This allows us to use the previous rule to construct the Dirac matrices corresponding to the standard representation also in d = 1 and d = 2.
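The iterative rule is easy to implement and test. The following Python/NumPy sketch (ours, not part of the paper) builds the standard-representation matrices and checks the anticommutation relations (2.1); the function name is our own choice.

import numpy as np

def dirac_matrices(d):
    """Return [alpha_1, ..., alpha_d, beta] in dimension d (beta = alpha_{d+1})."""
    if d == 0:
        return [np.array([[1]], dtype=complex)]    # convention alpha^(0)_1 = (1)
    if d % 2 == 1:                                 # odd step: use the d matrices of d - 1
        prev = dirac_matrices(d - 1)
        alphas = [np.block([[0 * a, a], [a, 0 * a]]) for a in prev]
    else:                                          # even step: use the d - 1 matrices of d - 2
        prev = dirac_matrices(d - 2)
        I = np.eye(prev[0].shape[0], dtype=complex)
        alphas = [np.block([[0 * I, I], [I, 0 * I]])]
        alphas += [np.block([[0 * a, -1j * a], [1j * a, 0 * a]]) for a in prev]
    m = alphas[0].shape[0] // 2
    beta = np.block([[np.eye(m), np.zeros((m, m))],
                     [np.zeros((m, m)), -np.eye(m)]])
    return alphas + [beta]

mats = dirac_matrices(3)                           # here n(3) = 4, cf. (1.5)
for i, a in enumerate(mats):
    for j, b in enumerate(mats):
        assert np.allclose(a @ b + b @ a, 2 * (i == j) * np.eye(a.shape[0]))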
According to the construction above, one recognises that the Dirac matrices, regardless of the dimension, all have the following structure:

α µ = [ 0 a * µ ; a µ 0 ] , β = [ I C n(d)/2 0 ; 0 −I C n(d)/2 ] , µ = 1, 2, . . . , d, (2.3)

where a µ are n(d)/2 × n(d)/2 matrices (Hermitian if d is odd) such that

a µ a * ν + a ν a * µ = 2δ µν I C n(d)/2 , a * µ a ν + a * ν a µ = 2δ µν I C n(d)/2 , (2.4)
for µ, ν ∈ {1, 2, . . . , d}. Here, as usual, a * µ denotes the adjoint to a µ , that is the conjugate transpose of a µ . We set a := (a 1 , . . . , a d ).
Remark 2.1. Notice that, as a consequence of the fact that α µ are Hermitian (in any dimension) and that α 2 µ = I C n(d) , one has |α µ | = 1, µ = 1, 2, . . . , d. Therefore, due to the iterative construction above, one has that also the submatrices a µ and a * µ have norm one, i.e. |a µ | = |a * µ | = 1.
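Continuing the previous sketch, one can also extract the blocks a µ from (2.3) and confirm (2.4) together with |a µ | = 1, say in d = 3:

mats = dirac_matrices(3)
m = mats[0].shape[0] // 2
a = [M[m:, :m] for M in mats[:-1]]                # the lower-left blocks a_mu of (2.3)
for i, ai in enumerate(a):
    for j, aj in enumerate(a):
        lhs1 = ai @ aj.conj().T + aj @ ai.conj().T     # a_mu a_nu^* + a_nu a_mu^*
        lhs2 = ai.conj().T @ aj + aj.conj().T @ ai     # a_mu^* a_nu + a_nu^* a_mu
        assert np.allclose(lhs1, 2 * (i == j) * np.eye(m))
        assert np.allclose(lhs2, 2 * (i == j) * np.eye(m))
    assert np.isclose(np.linalg.norm(ai, 2), 1)        # |a_mu| = 1, cf. Remark 2.1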
In the standard representation, that is using expression (2.3) for the Dirac matrices, the purely magnetic Dirac operator can be defined through the following block-matrix differential expression
H D (A) := [ (1/2) I C n(d)/2 D * ; D −(1/2) I C n(d)/2 ] , (2.5)

where D := −ia · ∇ A , D * := −ia * · ∇ A .
Notice that in odd dimensions, the submatrices a µ being Hermitian, one has D = D * .
The square of the Dirac operator
From representation (2.5), it can be easily seen that H D (A) can be decomposed as a sum of a 2 × 2 diagonal block and a 2 × 2 off-diagonal block operators. More specifically, one has
H D (A) = H diag + H off-diag , where

H diag := [ (1/2) I C n(d)/2 0 ; 0 −(1/2) I C n(d)/2 ] , H off-diag := [ 0 D * ; D 0 ] .
As one may readily check, H diag and H off-diag satisfy the anticommutation relation
H diag H off-diag + H off-diag H diag = 0. (2.6)
This distinguishing feature places the Dirac operator within the class of operators with supersymmetry. It is a consequence of the supersymmetric condition (2.6) that squaring out the Dirac operator gives
H D (A) 2 = (H diag + H off-diag ) 2 = H 2 diag + H 2 off-diag ,
where
H 2 diag = [ (1/4) I C n(d)/2 0 ; 0 (1/4) I C n(d)/2 ] , H 2 off-diag = [ D * D 0 ; 0 DD * ] .
Therefore, H D (A) 2 turns out to have the following favorable form
H D (A) 2 = [ D * D + (1/4) I C n(d)/2 0 ; 0 DD * + (1/4) I C n(d)/2 ] . (2.7)
From property (2.4) of the Dirac submatrices, one can show that
D * D = −∇ 2 A I C n(d)/2 − (i/2) a * · B · a ,   DD * = −∇ 2 A I C n(d)/2 − (i/2) a · B · a * . (2.8)
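The purely algebraic content of (2.6)-(2.7) is easily illustrated numerically by replacing the (unbounded) operator D = −ia · ∇ A with an arbitrary matrix; the following sketch (ours) does just that:

import numpy as np

rng = np.random.default_rng(0)
m = 4
D = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
Z = np.zeros((m, m))
Hdiag = np.block([[np.eye(m) / 2, Z], [Z, -np.eye(m) / 2]])
Hoff = np.block([[Z, D.conj().T], [D, Z]])
HD = Hdiag + Hoff
assert np.allclose(Hdiag @ Hoff + Hoff @ Hdiag, 0)    # the supersymmetry relation (2.6)
assert np.allclose(HD @ HD, np.block(                 # the block-diagonal square, cf. (2.7)
    [[D.conj().T @ D + np.eye(m) / 4, Z],
     [Z, D @ D.conj().T + np.eye(m) / 4]]))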
Low-dimensional illustrations
In order to become more confident with the previous construction, we decided to present explicitly the situations of dimensions d = 1 and d = 2 in the next two subsections. (Dimension d = 3 was already discussed above.)
Dimension one
In the Hilbert space L 2 (R; C 2 ), the 1d Dirac operator reads
H D (0) := −iα∇ + (1/2) β,
where ∇ is just a weird notation for an ordinary derivative. With the notation H D (0) we emphasise that the magnetic potential A has been chosen to be identically equal to zero, since in one dimension it can be always removed by choosing a suitable gauge. One can immediately verify that squaring out the operator H D (0) yields
H D (0) 2 = −∇ 2 I C 2 + (1/4) I C 2 .
According to the rule provided above, in the standard representation one chooses α := σ 1 and β := σ 3 , where σ 1 and σ 3 are two of the three Pauli matrices. Thus, one conveniently writes

H D (0) = [ 1/2 −i∇ ; −i∇ −1/2 ] ,

with D = D * = −i∇, so that H P (0) := D * D = −∇ 2 . Hence, in one dimension, the Pauli operator coincides with the free one-dimensional Schrödinger operator acting in L 2 (R; R).
Dimension two
In the Hilbert space L 2 (R 2 ; C 2 ), the 2d Dirac operator reads
H D (A) := −iα · ∇ A + (1/2) β,
where α := (α 1 , α 2 ) and β are 2 × 2 Hermitian matrices satisfying (2.1). Squaring out H D (A) yields
H D (A) 2 = −∇ 2 A I C 2 − (i/2) [α 1 , α 2 ] B 12 + (1/4) I C 2 .
According to the rule provided above, in the standard representation, one chooses α 1 := σ 1 , α 2 := σ 2 and β := σ 3 . This gives [α 1 , α 2 ] = 2iσ 3 and
H D (A) = 1 2 D * D − 1 2 , where D := −i∂ 1,A + ∂ 2,A , D * := −i∂ 1,A − ∂ 2,A ,
and ∂ j,A := ∂ j + iA j , j = 1, 2. Thus
H D (A) 2 = H P (A) + 1 4 I C 2 with the Pauli operator H P (A) := −∇ 2 A I C 2 + σ 3 B 12
The Pauli operator
After these illustrations, let us come back to the general dimension d ≥ 1. Recall that the Dirac operator H D (A) has been introduced via (2.5) and that its square satisfies (2.7). The following lemma specifies the form of the square according to the parity of the dimension and offers a natural definition for the Pauli operator in any dimension. • If d is odd, then
H odd D (A) 2 = H odd P (A) + 1 4 I C n(d)/2 0 0 H odd P (A) + 1 4 I C n(d)/2 , (2.11)
where we define
H odd P (A) := −∇ 2 A I C n(d)/2 − i 2 a · B · a.
(2.12)
• If d is even, then H even
D (A) 2 = H even P (A) + 1 4 I C n(d) ,(2.
13)
where we define
H even P (A) := −∇ 2 A I C n(d)/2 − i 2 a * · B · a, 0 0 −∇ 2 A I C n(d)/2 − i 2 a · B · a * .
(2.14)
Proof. In odd dimensions one has that D * = D, therefore
D * D = DD * = D 2 = − ia · ∇ A 2 = −∇ 2 A I C n(d)/2 − i 2 a · B · a.
Thus, defining H odd P (A) := D * D and using (2.7) one immediately gets the desired representation in odd dimensions. In even dimensions one defines
H even P (A) := D * D 0 0 D * D .
Hence, from (2.7) and (2.8) one readily has the thesis.
Notice that in even dimensions the Pauli operator is a matrix operator with the same dimension as the Dirac Hamiltonian. In odd dimensions the dimension of the Pauli operator is a half of that of the Dirac operator. Recalling (2.2), we therefore set
n ′ (d) := n(d)/2 if d is odd, n(d) if d is even. (2.15)
Domains of the operators
Finally, we specify the domains of the Dirac and Pauli operators. Notice that the rather formal manipulations of the preceding subsections can be justified when the action of the operators is considered on smooth functions of compact support. Therefore, we shall define each of the operators as an extension of the operator initially defined on such a restricted domain. We always assume that the vector potential A ∈ L 2 loc (R d ; R d ) is such that B ∈ L 1 loc (R d ; R d×d ). We define the Pauli operator H P (A) acting on the Hilbert space L 2 (R d ; R n ′ (d) ) as the self-adjoint Friedrichs extension of the operator initially considered on the domain C ∞ 0 (R d ; R n ′ (d) ); notice that this initial operator is symmetric. Disregarding the spin-magnetic term for a moment, the form domain can be identified with the magnetic Sobolev space (cf. [22,Sec. 7.20])
H 1 A (R d ; R n ′ (d) ) := u ∈ L 2 (R d ; R n ′ (d) ) : ∂ j,A u ∈ L 2 (R d ; R n ′ (d) ) for every j ∈ {1, . . . , d} . (2.16)
The operator domain is the subset of H 1
A (R d ; R n ′ (d) ) consisting of functions ψ such that ∇ 2 A ψ ∈ L 2 (R d ; R n ′ (d) )
. To include the spin-magnetic term, we make the hypothesis that there exist numbers a < 1 and b ∈ R such that, for every ψ ∈ C ∞ 0 (R d ),
1 2 R d |B||ψ| 2 ≤ a R d |∇ A ψ| 2 + b R d |ψ| 2 . (2.17)
Then the spin-magnetic term is a relatively form-bounded perturbation of the already defined operator with the relative bound less than one (recall Remark 2.1), so the Pauli operator H P (A) with the same form domain (2.16) is indeed self-adjoint. For the domain of the Dirac operator (2.5) we take
D(H D (A)) := H 1 A (R d ; R n(d) ) . (2.18) Notice that H D (A) is symmetric. Using Lemma 2.1, for every ψ ∈ C ∞ 0 (R d ; R n(d) )
, which is dense in D(H D (A)), we have the identity (with a slight abuse of notation)
Statement of the main results in any dimension
Now we are in position to state our main results in any dimension. As anticipated, in order to do that, we shall consider separately the three spinorial Hamiltonians.
The spinor Schrödinger equation
Let us start by considering the matrix Schrödinger operator
H S (A, V ) := −∇ 2 A I C n + V in L 2 (R d ; C n ) ,(3.1)
which is an extension of (1.4) to any dimension d ≥ 1 and n ≥ 1. Here V ∈ L 1 loc (R d ; C n×n ) and A ∈ L 2 loc (R d ; R d ). The operator is properly introduced as the Friedrichs extension of the operator initially defined on C ∞ 0 (R d ; C n ). The hypotheses in the theorems below ensure that H S (A, V ) is well defined as an m-sectorial operator. (2) )] + ∈ L 1 loc (R d ) and rV (1) , r(Re V (2) ) − , r Im V (2) ∈ L 2 loc (R d ). Assume that there exist numbers a 1 ,
A general result in any dimension
Theorem 3.1. Given any d, n ≥ 1, let A ∈ L 2 loc (R d ; R d ) be such that B ∈ L 2 loc (R d ; R d×d ). Suppose that V ∈ L 1 loc (R d ; C n×n ) admits the decomposition V = V (1) + V (2) with components V (1) ∈ L 1 loc (R d ) and V (2) = V (2) I C n , where V (2) ∈ L 1 loc (R d ) is such that [∂ r (r Re Va 2 , b 1 , b 2 , b, β 1 , β 2 , c ∈ [0, 1) satisfying b 2 1 + β 2 1 + 2a 2 1 < 1 and 2c + 2β 2 + 2a 2 + (d − 1)a 2 1 + b 2 + (b 2 + a 2 )(β 1 + a 1 ) < 1 (3.2)
such that, for all n-vector u with components in C ∞ 0 (R d ),
R d |V (1) ||u| 2 ≤ a 2 1 R d |∇ A u| 2 , R d r 2 |V (1) | 2 |u| 2 ≤ a 2 2 R d |∇ A u| 2 , (3.3) R d (Re V (2) ) − |u| 2 ≤ b 2 1 R d |∇ A u| 2 , R d r 2 (Re V (2) ) 2 − |u| 2 ≤ b 2 2 R d |∇ A u| 2 , (3.4) R d [∂ r (r Re V (2) )] + |u| 2 ≤ b 2 R d |∇ A u| 2 , (3.5) R d |Im V (2) ||u| 2 ≤ β 2 1 R d |∇ A u| 2 , R d r 2 |Im V (2) | 2 |u| 2 ≤ β 2 2 R d |∇ A u| 2 , (3.6) R d r 2 |B| 2 |u| 2 ≤ c 2 R d |∇ A u| 2 . (3.7)
If d = 2 assume also that the inequality
1 2 R 2 |u| 2 r ≤ R 2 r|∇ A u| 2 + R 2 r(Re V (2) ) + |u| 2 (3.8)
holds true. If, in addition, one has The theorem is commented on in the following subsections.
A ∈ W 1,2p loc (R d ) and Re V (2) ∈ W 1,p loc (R d ) , where p = 1 if d = 1, p > 1 if d = 2, p = d/2 if d ≥ 3,
Criticality of low dimensions
Because of the criticality of the Laplacian in L 2 (R d ) with d = 1, 2, the lower dimensional scenarios are a bit special.
First of all, due to the absence of magnetic phenomena in R 1 , the corresponding assumptions (3.3)-(3.7) in dimension d = 1 come with the classical gradient ∇ as a replacement of the magnetic gradient ∇ A . Consequently, because of the criticality of the Laplacian in L 2 (R), necessarily V (1) = 0, (Re V (2) ) − = 0, [∂ r (r Re V (2) )] + = 0 and Im V (2) = 0. Moreover, (3.7) is always satisfied if d = 1 being B equal to zero. Hence, if d = 1, the theorem essentially says that the scalar Schrödinger operator −∇ 2 + V in L 2 (R) has no eigenvalues, provided that V is non-negative and the radial derivative ∂ r (rV ) is non-positive. The requirements respectively exclude non-positive and positive eigenvalues. The latter is a sort of the classical repulsiveness requirement (cf. [24, Thm. XIII.58]).
Similarly, if d = 2 and there is no magnetic field (i.e. B = 0), the theorem essentially says that the scalar Schrödinger operator −∇ 2 + V in L 2 (R 2 ) has no eigenvalues, provided that V is non-negative and the radial derivative ∂ r (rV ) is non-positive (again, the conditions exclude non-positive and positive eigenvalues, respectively). On the other hand, in two dimensions, the situation becomes interesting if the magnetic field is present. Indeed, the magnetic Laplacian in L 2 (R 2 ) is subcritical due to the existence of magnetic Hardy inequalities (see [20] for the pioneering work and [3] for the most recent developments). The latter guarantee a source of sufficient conditions to make the hypotheses (3.3)-(3.7) non-trivial (cf. [15]).
An alternative statement in dimension two
We want to comment more on the additional condition (3.8) in dimension d = 2. Using the 2d weighted Hardy inequality
R 2 r |∇ A u| 2 ≥ 1 4 R 2 |u| 2 r , (3.10)
it is easy to check that requiring "enough" positivity to Re V (2) will guarantee the validity of (3.8). More specifically, the pointwise bound
[Re V (2) (x)] + ≥ 1 4|x| 2 ,
valid for almost every x ∈ R 2 is sufficient for (3.8) to hold. On the other hand, without the positivity of Re V (2) , condition (3.8) is quite restrictive. Indeed, if one assumes V (2) = 0, then ensuring the validity of (3.8), would require to ensure the existence of vector potentials A for which an improvement of the weighted Hardy inequality (3.10) holds true (for (3.8) with V (2) = 0 is nothing but (3.10) with a better constant). For this reason, following an idea introduced in [15, Sec. 3.2], we provide an alternative result, which avoids condition (3.8), but a stronger hypothesis compared to (3.2) is assumed.
a 1 , a 2 , b 1 , b 2 , b, β 1 , β 2 , c, ǫ ∈ [0, 1) satisfying b 2 1 + β 2 1 + 2a 2 1 < 1 and 2c + 2β 2 + 2a 2 + a 2 1 + b 2 + (b 2 + a 2 )(β 1 + a 1 ) + 4ǫ + 17 ǫ (a 2 1 + β 2 1 ) < 1, (3.11)
such that, for all n-vector u with components in
C ∞ 0 (R 2 ), inequalities (3.3)-(3.7) hold true. If, in addition, one has A ∈ W 1,2p loc (R 2 ) and Re V (2) ∈ W 1,p loc (R 2 ), where p > 1, then H S (A, V ) has no eigenvalues, i.e. σ p (H S (A, V )) = ∅.
A simplification in higher dimensions
In dimensions d ≥ 3, as a consequence of the diamagnetic inequality (see [19] and [22,Thm. 7.21])
|∇|ψ|(x)| ≤ |∇ A ψ(x)| a.e. x ∈ R d ,(3.12)
together with the classical Hardy inequality
R d |ψ(x)| 2 |x| 2 dx ≤ 4 (d − 2) 2 R d |∇ψ| 2 dx, ∀ ψ ∈ C ∞ 0 (R d ), d ≥ 3, (3.13)
applied to |ψ|, one can prove the following magnetic Hardy inequality
R d |ψ(x)| 2 |x| 2 dx ≤ 4 (d − 2) 2 R d |∇ A ψ| 2 dx, ∀ ψ ∈ C ∞ 0 (R d ), d ≥ 3.
(3.14)
Using (3.14), it is easy to check that the first inequalities in
a 2 1 := 2 d − 2 a 2 , b 2 1 := 2 d − 2 b 2 , β 2 1 := 2 d − 2 β 2 ,
and assuming a 2 , b 2 , β 2 < (d − 2)/2. Hence, in the higher dimensions d ≥ 3, conditions in (3.2) simplifies to
2 d − 2 b 2 + β 2 + 2a 2 < 1 and 2c + 2β 2 + 2(2d − 3) d − 2 a 2 + b 2 + √ 2 √ d − 2 (b 2 + a 2 )( β 2 + √ a 2 ) < 1. (3.15)
In particular, this justifies the fact that in Theorem 1.1 which is a special case of Theorem 3.1 for d = 3 (and n = 2) we assume only the validity of (1.8), (1.9) and (1.10), moreover (3.2) is replaced by (1.7) (notice that dropping the subscript · 2 in the constants and fixing d = 3 in (3.15) gives (1.7)).
The Aharonov-Bohm field
Let us come back to dimension two and consider the Aharonov-Bohm magnetic potential
A(x, y) := (− sin θ, cos θ) α(θ) r , (3.16)
where (x, y) = (r cos θ, r sin θ) is the parametrisation via polar coordinates, r ∈ (0, ∞), θ ∈ [0, 2π), and α : [0, 2π) → R is an arbitrary bounded function. In this specific case, there is an explicit magnetic Hardy-type inequality (see [20,Thm. 3])
R 2 |∇ A ψ| 2 ≥ γ 2 R 2 |ψ| 2 r 2 , ∀ ψ ∈ C ∞ 0 (R 2 \ {0}), γ := dist{ᾱ, Z},(3.17)
whereᾱ has the physical meaning of the total magnetic flux:
α := 1 2π 2π 0 α(θ) dθ. (3.18)
Notice that in this case the magnetic field B equals zero everywhere except for x = 0; indeed B = 2πᾱδ (3.19) in the sense of distribution, where δ is the Dirac delta function. The Aharonov-Bohm potential (3.16) is not in L 2 loc (R 2 ), so the matrix Schrödinger operator is not well defined as described below (3.1) and Theorem 3.1 does not apply to it as such. Now the Schrödinger operator H S (A, V ) is introduced as the Friedrichs extension of the operator (1.4) initially defined on C ∞ 0 (R 2 \ {0}; C n ). At the same time, it is possible to adapt the method of multipliers in such a way that it covers this situation as well. The following result can be considered as an extension of [15,Thm. 5] in the scalar case to the spinorial Schrödinger equation.
Theorem 3.3. Let d = 2 and let A be as in (3.16) withᾱ / ∈ Z and V as in Theorem 3.1. Assume that there exist numbers a, b, b, β, ǫ ∈ [0, 1) satisfying 1 γ (b + β + 2a) < 1 and 2β + 2a + a γ + b 2 + 1 √ γ (b + a)( √ a + β) + 1 4 − γ 2 ǫ + (a + β) ǫγ 3 < 1, (3.20)
with γ := dist{ᾱ, Z}, such that, for all n-vector u with component in
C ∞ 0 (R 2 \ {0}), inequalities R 2 r 2 |V (1) | 2 |u| 2 ≤ a 2 R 2 |∇ A u| 2 , (3.21) and R 2 r 2 (Re V (2) ) 2 − |u| 2 ≤ b 2 R 2 |∇ A u| 2 , R 2 r 2 |Im V (2) | 2 |u| 2 ≤ β 2 R 2 |∇ A u| 2 , (3.22) R 2 [∂ r (r Re V (2) )] + |u| 2 ≤ b 2 R 2 |∇ A u| 2 (3.23) hold true. If, in addition, one has Re V (2) ∈ W 1,p loc (R 2 ), p > 1, then H S (A, V ) has no eigenvalues, i.e. σ p (H S (A, V )) = ∅.
3.1.6 On the regularity condition (3.9) and their replacement
As we will see in more details later on (see Section 4.2), the additional local regularity assumptions (3.9) on the potentials are needed in order to justify rigorously the algebraic manipulations that the method of multipliers introduces. A formal proof of Theorem 3.1 would require just the weaker conditions A ∈ L 2 loc (R d ) and V ∈ L 1 loc (R d ). The unpleasant conditions (3.9) can be removed if we consider the situation of potentials V and A with just one singularity at the origin (see Section 4.5). This specific case is worth being investigated as it allows to cover a large class of repulsive potentials, e.g., V (x) = a/|x| α I C n with a > 0 and α > 0, and also the Aharonov-Bohm vector fields (3.16) which otherwise would be ruled out by conditions (3.9).
An alternative general result in the self-adjoint setting
Obviously, Theorem 3.1 above is valid, with clear simplifications, also in the self-adjoint situation, namely considering Hermitian matrix-valued potentials V . In this case, however, we also have an alternative result that we have decided to present because the "repulsivity" condition (3.5) is replaced by a "more classical" assumption in terms of r∂ r V (2) . Furthermore, condition (3.8) is not needed in this context. More precisely we have the following result.
Theorem 3.4. Let d, n ≥ 1 and let A ∈ L 2 loc (R d ; R d ) be such that B ∈ L 2 loc (R d ; R d×d ). Suppose that V ∈ L 1 loc (R d ; R n×n ) admits the decomposition V = V (1) + V (2) with components V (1) ∈ L 1 loc (R d ) and V (2) = V (2) I C n , where V (2) ∈ L 1 loc (R d ) is such that [r∂ r V (2) ] + ∈ L 1 loc (R d ) and rV (1) ∈ L 2 loc (R d ). Assume that there exist numbers a 1 , a 2 , b, b, c ∈ [0, 1) satisfying a 2 1 + b 2 < 1 and 2c + b 2 + da 2 1 + 2a 2 < 2 (3.24)
such that, for all n-vector u with components in C ∞ 0 (R d ), (3.3) and (3.7) hold and, moreover,
R d V (2) − |u| 2 ≤ b 2 R d |∇ A u| 2 , R d [r∂ r V (2) )] + |u| 2 ≤ b 2 R d |∇ A u| 2 . (3.25) If in addition (3.9) holds true, then H S (A, V ) has no eigenvalues, i.e. σ p (H S (A, V )) = ∅.
Remark 3.1. Here, the first condition in (3.24) is not explicitly used in the proof of the theorem, but it is needed to give sense to the Hamiltonian H S (A, V ). We refer to Section 4.1 for details.
The Pauli equation
Recall that the definition of the Pauli operator depends on the parity of the dimension, cf. Lemma 2.1.
Theorem 3.5. Let d ≥ 3 be an integer and let n ′ (d) be as in (2.15).
Let A ∈ L 2 loc (R d ; R d ) be such that B ∈ L 2 loc (R 2 ; R d×d ). Suppose that V ∈ L 1 loc (R d ; C n ′ (d)×n ′ (d) ) admits the decomposition V = V (1) + V (2) with components V (1) ∈ L 1 loc (R d ; C n ′ (d)×n ′ (d) ) and V (2) = V (2) I C n ′ (d) , where V (2) ∈ L 1 loc (R d ) is such that [∂ r (r Re V (2) )] + ∈ L 1 loc (R d ) and rV (1) , r(Re V (2) ) − , r Im V (2) ∈ L 2 loc (R d ).
If d is even, we additionally require
V (1) = V (1) I C n ′ (d) . Assume that there exist numbers a, b, β, b, c ∈ [0, 1) satisfying 2 d − 2 b + β + 2 a + d 2 c < 1, 2c + 2β + 2(2d − 3) d − 2 a + d 2 c + b 2 + √ 2 √ d − 2 b + a + d 2 c β + a + d 2 c < 1, (3.26) such that, for all n ′ (d)-vector u with components in C ∞ 0 (R d ), the inequalities R d r 2 |V (1) | 2 |u| 2 ≤ a 2 R d |∇ A u| 2 , R d r 2 |B| 2 |u| 2 ≤ c 2 R d |∇ A u| 2 , (3.27) and R d r 2 (Re V (2) ) 2 − |u| 2 ≤ b 2 R d |∇ A u| 2 , R d r 2 |Im V (2) | 2 |u| 2 ≤ β 2 R d |∇ A u| 2 , (3.28) R d [∂ r (r Re V (2) )] + |u| 2 ≤ b 2 R d |∇ A u| 2 , (3.29)
hold true. If, in addition, one has
A ∈ W 1,d loc (R d ) and Re V (2) ∈ W 1,d/2 loc (R d ),
then H P (A, V ) has no eigenvalues, i.e. σ p (H P (A, V )) = ∅.
Remark 3.2 (Even parity). Observe that in the even dimensional case we assume also the component V (1) to be diagonal. This is needed in order not to spoil the diagonal form in the definition (2.14) of the free Pauli operator, which will represent a crucial point in the strategy underlying the proof (we refer to Section 6.2 for more details).
The case of low dimensions d = 1, 2 is intentionally not present in Theorem 3.5 for the following reasons.
Remark 3.3 (Dimension one). As discussed in Section 2.3.1, the one-dimensional Pauli operator coincides with the scalar potential-free Schrödinger operator −∇ 2 (i.e. the one-dimensional Laplacian), hence the absence of the point spectrum is trivial in this case. Formally, it is already guaranteed by Theorem 3.1 with d = n = 1 (see also Section 3.1.2).
Remark 3.4 (Dimension two). The two dimensional case is rather special because of the paramagnetism of the Pauli operator. As a matter of fact, the total absence of the point spectrum is no longer guaranteed even in the purely magnetic case (i.e. V = 0). In this case the Pauli operator has the form (see Section 2.3.2)
H P (A, 0) = −∇ 2 A + B 12 0 0 −∇ 2 A − B 12 . (3.30)
For smooth vector potentials, the supersymmetry says that the operators −∇ 2 A ± B 12 have the same spectrum except perhaps at zero (see [10,Thm. 6.4]). Hence the absence of the point spectrum for the two-dimensional Pauli operator is in principle governed by our Theorem 3.1 with d = 2 and n = 1 (or Theorem 3.2) or its selfadjoint counterpart Theorem 3.4 for the special choice V = B 12 I C 2 . Unfortunately, we do not see how to derive any non-trivial condition on B 12 to guarantee the total absence of eigenvalues (cf. Remark 5.1). Physically, it does not come as a big surprise because of the celebrated Aharonov-Casher effect, which states that the number of zero-eigenstates is equal to the integer part of the total magnetic flux (see [10,Sec. 6.4]). On the one hand, the absence of negative eigenvalues does follow as an immediate consequence of the standard lower bound
R 2 |∇ A u| 2 ≥ ± R 2 B 12 |u| 2 , ∀u ∈ C ∞ 0 (R 2 ), (3.31)
which holds with either of the sign ± (see, e.g., [2, Sec. 2.4]).
Notice that when an attractive potential is added to the two-dimensional Pauli operator, it has been proved [28,17] that the perturbed Hamiltonian presents always (i.e. no matter how small is chosen the coupling constant) negative eigenvalues (not only due to the Aharonov-Casher zero modes turning into negative ones, but it is also the essential part of the spectrum that contributes to their appearance). This fact can be seen as a quantification of the aforementioned paramagnetic effect of the Pauli operators in contrast to the diamagnetic effect which holds true for magnetic Schrödinger operators.
The Dirac equation
Finally, we state our results for the purely magnetic Dirac operator (2.5).
Theorem 3.6. Let d ≥ 3 and let n(d) be as in (2.2). Let A ∈ L 2 loc (R d ; R d ) be such that B ∈ L 2 loc (R d ; R d×d ). Assume that there exists a number c ∈ [0, 1) satisfying 2d d − 2 c < 1 and 2c + d(2d − 3) d − 2 c + √ 2 √ d − 2 d 2 c 3/2 < 1 (3.32) such that, for all n(d)-vector u with components in C ∞ 0 (R d ), the inequality R d r 2 |B| 2 |u| 2 ≤ c 2 R d |∇ A u| 2 (3.33) holds true. If in addition A ∈ W 1,d loc (R d ), then H D (A)
has no eigenvalues, i.e. σ p (H D (A)) = ∅. As discussed in Section 2.3.1, the square of the one-dimensional Dirac operator is just the one-dimensional Laplacian shifted by a constant (cf. (2.9)), hence the absence of the point spectrum follows at once in this case. On the other hand, the two-dimensional analogue of Theorem 3.6 is unavailable, because of the absence of a two-dimensional variant of Theorem 3.5 in the Pauli case, cf. Remark 3.4.
Scalar electromagnetic Schrödinger operators revisited
In this section, we leave aside the operators acting on spinor Hilbert spaces and focus on scalar electromagnetic Schrödinger operators (1.1). This will be useful later on when, in the following sections, we reduce our analysis to the level of components. We provide a careful and deep analysis of the method of multipliers, stressing on the major outcomes that the technique provides in this context. Our goal is to represent a reader-friendly overview of the original ideas and main outcomes of [16,15] to tackle the issue of the total absence of eigenvalues of scalar Schrödinger operators. Furthermore, we go through the more technical parts by rigorously establishing some results that were just sketched in the previous works.
Definition of the operators
For the sake of completeness, we start with recalling some basic facts on the rigorous definition of the scalar electromagnetic Schrödinger operators.
Let d ≥ 1 be any natural number. Let A ∈ L 2 loc (R d ; R d ) and V ∈ L 1 loc (R d ; C) be respectively a vector potential and a scalar potential (the latter possibly complex-valued). The quantum Hamiltonian apt to describe the motion of a non-relativistic particle interacting with the electric field −∇V and the magnetic field B := (∇A) − (∇A) t is represented by the scalar electromagnetic Schrödinger operator
H A,V := −∇ 2 A + V in L 2 (R d ). (4.1)
Observe that the magnetic field is absent in R 1 and A can be chosen to be equal to zero without loss of generality. Therefore the two-dimensional framework is the lowest in which the introduction of a magnetic field is non-trivial. As usual, the sum in (4.1) should be understood in the sense of forms after assuming that V is relatively form-bounded with respect to the magnetic Laplacian −∇ 2 A with the relative bound less than one. We shall often proceed more restrictively by assuming the form-subordination condition
R d |V ||u| 2 ≤ a 2 R d |∇ A u| 2 , ∀ u ∈ D A := {u ∈ L 2 (R d ) : ∇ A u ∈ L 2 (R d )}, (4.2)
where a ∈ [0, 1) is a constant independent of u. Assumption (4.2) in particular implies that the quadratic form
h V [u] := R d V |u| 2 , u ∈ D(h V ) := u ∈ L 2 (R d ) : R d |V ||u| 2 < ∞
is relatively bounded with respect to the quadratic form
h A [u] := R d |∇ A u| 2 , u ∈ D(h A ) = D A ,
with the relative bound less than one. Consequently, the sum h A, With the aim of including also potentials which are not necessarily subordinated in the spirit of (4.2), now we present an alternative way to give a meaning to the operator H A,V assuming different conditions on the electric potential V. We introduce the form
V := h A + h V with domain D(h A,V ) := D Ah (1) A,V [u] := R d |∇ A u| 2 + R d (Re V ) + |u| 2 , u ∈ D(h (1) A,V ) := C ∞ 0 (R d ) |||·||| , with |||u||| 2 := R d |∇ A u| 2 + R d (Re V ) + |u| 2 + R d |u| 2 .
The form h
A,V is closed by definition. Now instead of assuming the smallness condition (4.2) for the whole V , we take the advantage of the splitting in real (positive and negative part) and imaginary part of the potential to require the following more natural subordination: There exist b, β ∈ [0, 1) with
b 2 + β 2 < 1 (4.3) such that, for any u ∈ D(h (1) A,V ), R d (Re V ) − |u| 2 ≤ b 2 R d |∇ A u| 2 , R d |Im V ||u| 2 ≤ β 2 R d |∇ A u| 2 . (4.4)
In other words, we require the subordination just for the parts (Re V ) − and Im V of the potential V . Hence, defining
h (2) A,V [u] := − R d (Re V ) − |u| 2 + i R d Im V |u| 2 , the form h (2) A,V is relatively bounded with respect to h(1)
A,V , with the relative bound less than one (see (4.3)). Consequently, as above, the sum
h A,V = h (1) A,V + h (2)
A,V is a closed and sectorial form and
D(h A,V ) = D(h (1) A,V ).
Therefore, also in this more general setting, H A,V is the m-sectorial operator associated with h A,V .
In order to consider simultaneously both these two possible configurations, we introduce the decomposition V = V (1) + V (2) and assume that there exist a, b, β ∈ [0, 1) satisfying
a 2 + b 2 + β 2 < 1 (4.5)
such that, for any u ∈ D A ,
R d |V (1) ||u| 2 ≤ a 2 R d |∇ A u| 2 ,(4.6)
and
R d (Re V (2) ) − |u| 2 ≤ b 2 R d |∇ A u| 2 , R d |Im V (2) ||u| 2 ≤ β 2 R d |∇ A u| 2 . (4.7) Let us define h (1) A,V [u] := R d |∇ A u| 2 + R d (Re V (2) ) + |u| 2 with D(h (1) A,V ) := C ∞ 0 (R d ) |||·||| , where |||u||| 2 := R d |∇ A u| 2 + R d (Re V (2) ) + |u| 2 + R d |u| 2 ,
and h
(2) A,V [u] := R d V (1) |u| 2 − R d (Re V (2) ) − |u| 2 + i R d Im V (2) |u| 2 with D(h (2) A,V ) := D(h(1)
A,V ). By the same reasoning as above, one has that H A,V is the m-sectorial operator associated with the closed and sectorial form h A,V := h (1)
A,V + h (2) A,V with D(h A,V ) := D(h (1)
A,V ). In order to drop the dependance on the form h in the notation of the domain that will not be used explicitly any more, from now on we will denote D A,V := D(h A,V ).
Further hypotheses on the potentials
As we shall see below, in order to justify rigorously the algebraic manipulations that the method of multipliers introduces, we need to assume more regularity on the magnetic potential A and on the electric potential V = V (1) + V (2) than the ones required to give a meaning to the electromagnetic Hamiltonian (4.1).
Further hypotheses on the magnetic potential
We assume
A ∈ W 1,p loc (R d ; R d ) where p = 2 if d = 1, p > 2 if d = 2, p = d if d ≥ 3.
(4.8)
In particular, these assumptions ensure that for any u ∈ D A then
Au ∈ L 2 loc (R d ; R d ) (4.9)
and the same can be said for ∂ l Au, with l = 1, 2, . . . , d. Indeed, from the Hölder inequality, one has that for any k = 1, 2, . . . , d
A k u 2 L 2 loc (R d ) ≤ A k L p loc (R d ) u L q loc (R d ) with 1/p + 1/q = 1/2. (4.10)
Observe that the diamagnetic inequality (3.12) and u ∈ D A guarantee |u| ∈ H 1 (R d ). By the Sobolev embeddings
H 1 (R d ) ֒→ L q (R d ) where q = ∞ if d = 1, 2 ≤ q < ∞ if d = 2, q = 2 * := 2d/(d − 2) if d ≥ 3.
(4.11)
Consequently, if one chooses q as in (4.11), then u L q (R d ) is finite. If, moreover, the Hölder conjugated exponent p is as in our assumption (4.8), then A k L p loc (R d ) is finite and therefore, from (4.10), A k u ∈ L 2 loc (R d ). Notice that, given any function u ∈ D A as soon as Au ∈ L 2 (R d ), then ∇u ∈ L 2 (R d ) and therefore u ∈
H 1 (R d ). In other words {u ∈ D A & Au ∈ L 2 (R d )} ⊆ H 1 (R d ).
(4.12)
Further hypotheses on the electric potential
Recalling the decomposition V = V (1) + V (2) , we assume the following condition on the real part of the second component:
Re V (2) ∈ W 1,p loc (R d ; R) where p = 1 if d = 1, p > 1 if d = 2, p = d/2 if d ≥ 3 (4.13)
By the same reasoning as done above for the magnetic potential, one can observe that assumption (4.13) ensures that for any u ∈ H 1 A (R d ), then Re V (2) |u| 2 ∈ L 1 loc (R d ), and the same can be said for ∂ k Re V (2) , with k = 1, 2, . . . , d.
The method of multipliers: main ingredients
The purpose of this subsection is to provide, in a unified and rigorous way, the proof of the common crucial starting point of the series of works [16,15,4,5] for proving the absence of the point spectrum of the electromagnetic Hamiltonians H A,V in various settings.
Since this section is intended as a review of already known results on scalar Schrödinger Hamiltonians, here we will be concerned almost exclusively with the most interesting and more troublesome case of the spectral parameter λ ∈ C within the sector of the complex plane given by {λ ∈ C : Re λ ≥ |Im λ|}.
(4.14)
On the other hand, how to deal with the complementary sector, i.e., {λ ∈ C : Re λ < |Im λ|} can be seen explicitly in the proof of our original results (see Sections 5 and 6). The proof of the absence of eigenvalues within the sector defined in (4.14) is based on the following crucial result obtained by means of the method of multipliers. It basically provides an integral identity for weak solutions u to the resolvent equation
(H A,V − λ)u = f , where f : R d → C is a suitable function. More specifically, u ∈ D A,V is such that the identity R d ∇ A u · ∇ A v + R d V uv = λ R d uv + R d fv (4.15)
holds for any v ∈ D A,V , where f is any suitable function for which the last integral in (4.15) is finite. The crucial result reads as follows.
Lemma 4.1. Let d ≥ 1, let A ∈ L 2 loc (R d ; R d ) be such that B ∈ L 2 loc (R d ; R d×d ) and (4.8) holds. Suppose that V ∈ L 1 loc (R d ; C) admits the decomposition V = V (1) + V (2)
with Re V (2) satisfying (4.13). Let u ∈ D A,V be a solution to (4.15), with |Im λ| ≤ Re λ and rf ∈ L 2 (R d ), satisfying
r 2 |V (1) | 2 + r 2 (Re V (2) ) 2 − + [∂ r (r Re V (2) )] + + r 2 |Im V 2 | 2 + r 2 |B| 2 |u| 2 ∈ L 1 (R d ).
Then also r|∇ A u − | 2 + r −1 |u| 2 + [∂ r (r Re V (2) )] − |u| 2 + r[Re V (2) ] + |u| 2 ∈ L 1 (R d ) and the identity
R d |∇ A u − | 2 dx + (Re λ) −1/2 |Im λ| R d |x||∇ A u − | 2 dx − (d − 1) 2 (Re λ) −1/2 |Im λ| R d |u| 2 |x| dx + 2 Im R d x · B · u − ∇ A u − dx + (d − 1) R d Re V (1) |u| 2 dx + 2 Re R d x · V (1) u − ∇ A u − dx + (Re λ) −1/2 |Im λ| R d |x| Re V (1) |u| 2 dx − R d ∂ r (|x| Re V (2) )|u| 2 dx − 2 Im R d x Im V (2) u − ∇ A u − dx + (Re λ) −1/2 |Im λ| R d |x| Re V (2) |u| 2 dx = (d − 1) Re R d fū dx + 2 Re R d x · f − ∇ A u − dx + (Re λ) −1/2 |Im λ| Re R d |x|fū dx (4.16) holds true with u − (x) := e −i(Re λ) 1/2 sgn(Im λ)|x| u(x) (4.17)
and f − defined in the analogous way.
Remark 4.1 (Dimension one). Since the addition of a magnetic potential is trivial in R 1 , the corresponding identity (4.16) with d = 1 comes with the classical gradient ∇ as a replacement of the magnetic gradient ∇ A , moreover the term involving B is not present.
The proof of Lemma 4.1 can be found in Subsection 4.3.1, here we just provide its main steps:
•
Step one: Approximation of u with a sequence of compactly supported functions u R (see definition (4.28) below) which satisfy a related problem with small (in a suitable topology) corrections. This first step is necessary in order to justify rigorously the algebraic manipulations that the method of multipliers introduces when the test function v is chosen to be possibly unbounded (so that it is not even a priori clear if this specific choice v belongs to L 2 (R d )).
•
Step two: Development of the method of multipliers for u R (main core of the proof) in order to produce the analogue of identity (4.16) for the approximating sequence. This step will require a further approximation procedure which will ensure that the chosen multiplier v (see (4.51) below) is in D A,V and therefore allowed to be taken as a test function.
•
Step three: Proof of (4.16) by taking the limit as R → ∞ in the previous identity and using the smallness of the corrections which is quantified in Lemma 4.3 below.
As a byproduct of the crucial identity of Lemma 4.1, we get the following inequality. For the sake of completeness, we provide it with a proof.
∇ A u − 2 L 2 (R d ) + (Re λ) −1/2 |Im λ| R d |x||∇ A u − | 2 dx − (d − 1) 2 R d |u − | 2 |x| dx + R d |x|(Re V (2) ) + |u − | 2 dx ≤ 2 |x||B|u − L 2 (R d ) + |x| Im V (2) u − L 2 (R d ) + |x|f L 2 (R d ) ∇ A u − L 2 (R d ) + (d − 1) |f | 1/2 |u − | 1/2 2 L 2 (R d ) + [∂ r (|x| Re V (2) )] 1/2 + u − 2 L 2 (R d ) + |x|(Re V (2) ) − u − L 2 (R d ) + |x|f L 2 (R d ) |Im V (2) | 1/2 u − L 2 (R d ) + |f | 1/2 |u − | 1/2 L 2 (R d ) (4.18)
holds true.
Proof of Lemma 4.2. Let us consider identity (4.16) with V (1) = 0. In passing, notice that requiring V (1) = 0 do not entails any loss of generality. Indeed since, according to our notations, V (1) represents the component of the electric potential V which is fully subordinated to the magnetic Dirichlet form (in the sense given by (4.6)), it can be treated at the same level of the forcing term f. After splitting Re V (2) in its positive and negative parts, namely using Re V (2) = (Re V (2) ) + − (Re V (2) ) − , identity (4.16) with V (1) = 0 reads as follows
R d |∇ A u − | 2 dx + (Re λ) −1/2 |Im λ| R d |x||∇ A u − | 2 dx − (d − 1) 2 R d |u| 2 |x| dx + R d |x|(Re V (2) ) + |u| 2 dx = −2 Im R d x · B · u − ∇ A u − dx + R d ∂ r (|x| Re V (2) )|u| 2 dx + 2 Im R d x Im V (2) u − ∇ A u − dx + (Re λ) −1/2 |Im λ| R d |x|(Re V (2) ) − |u| 2 dx + (d − 1) Re R d fū dx + 2 Re R d xf − ∇ A u − dx + (Re λ) −1/2 |Im λ| Re R d |x|fū dx. (4.19)
We consider first
I := −2 Im R d x · B · u − ∇ A u − dx.
By the Cauchy-Schwarz inequality, it immediately follows that
|I| ≤ 2 |x||B|u − L 2 (R d ) ∇ A u − L 2 (R d ) .
(4.20)
Now we consider the terms in (4.19) involving V (2) , that is
II := R d ∂ r (|x| Re V (2) )|u| 2 dx + 2 Im R d x Im V (2) u − ∇ A u − dx + (Re λ) −1/2 |Im λ| R d |x|(Re V (2) ) − |u| 2 dx = II 1 + II 2 + II 3 .
Using that |u| = |u − |, the term II 1 can be easily estimated in this way:
II 1 ≤ R d [∂ r (|x| Re V (2) )] + |u| 2 dx = [∂ r (|x| Re V (2) )] 1/2 + u − 2 L 2 (R d ) . (4.21)
By the Cauchy-Schwarz inequality one has
II 2 ≤ |II 2 | ≤ 2 |x| Im V (2) u − L 2 (R d ) ∇ A u − L 2 (R d ) . (4.22)
Finally, if Im λ = 0, we also need to estimate II 3 . First notice that choosing v = Im λ |Im λ| u in (4.15) (with V (1) = 0) and taking the imaginary part of the resulting identity, gives the following L 2 -bound
u L 2 (R d ) ≤ |Im λ| −1/2 |Im V (2) | 1/2 u L 2 (R d ) + |f | 1/2 |u| 1/2 L 2 (R d ) .
(4.23)
Using the Cauchy-Schwarz inequality, the L 2 -bound (4.23), the fact that we are working in the sector |Im λ| ≤ Re λ, and again using that |u| = |u − |, we have
II 3 ≤ (Re λ) −1/2 |Im λ| |x|[Re V (2) ] − u L 2 (R d ) u L 2 (R d ) ≤ |x|[Re V (2) ] − u − L 2 (R d ) |Im V (2) | 1/2 u − L 2 (R d ) + |f | 1/2 |u − | 1/2 L 2 (R d ) .
(4.24)
Now we estimate the terms in (4.19) involving f, namely
III := (d − 1) Re R d fū dx + 2 Re R d xf − ∇ A u − dx + (Re λ) −1/2 |Im λ| Re R d |x|fū dx = III 1 + III 2 + III 3 .
In a similar way as done to estimate II 1 , II 2 and II 3 , one gets
III 1 ≤ (d − 1) |f | 1/2 |u − | 1/2 2 L 2 (R d ) , III 2 ≤ 2 |x|f L 2 (R d ) ∇ A u − L 2 (R d )(4.25)
and
III 3 ≤ (Re λ) −1/2 |Im λ| |x|f L 2 (R d ) u L 2 (R d ) ≤ |x|f L 2 (R d ) |Im V (2) | 1/2 u − L 2 (R d ) + |f | 1/2 |u − | 1/2 L 2 (R d ) .µ(r) = 1 if 0 ≤ r ≤ 1, 0 if r ≥ 2.
Given a positive number R, we set µ R (x) := µ(|x|R −1 ). Then µ R : R d → [0, 1] is such that
µ R = 1 in B R (0), µ R = 0 in R d \ B 2R (0), |∇µ R | ≤ cR −1 , |∆µ R | ≤ cR −2 ,(4.27)
where B R (0) stands for the open ball centered at the origin and with radius R > 0 and c > 1 is a suitable constant independent of R. For any function h : R d → C we then define the compactly supported approximating family of functions by setting
h R := µ R h. (4.28) If u ∈ D A,V is a weak solution to −∇ 2 A u + V u = λu + f , it
is not difficult to show that the compactly supported function u R belongs to D A,V and solves in a weak sense the following related problem
− ∇ 2 A u R + V u R = λu R + f R + err(R) in R d ,(4.29)
where err(R) := −2∇ A u · ∇µ R − u∆µ R . The next easy result shows that the extra terms (4.30), which originate from the introduction of the horizontal cut-off µ R , become negligible as R increases.
err(R) L 2 (R d ) R→∞ − −−− → 0, |x|err(R) L 2 (R d ) R→∞ − −−− → 0 hold true.
Proof. By (4.27) we have
err(R) L 2 (R d ) ≤ 2 R d |∇ A u| 2 |∇µ R | 2 1/2 + R d |u| 2 |∆µ R | 2 1/2 ≤ 2c R {R<|x|<2R} |∇ A u| 2 1/2 + c R 2 {R<|x|<2R} |u| 2 1/2 .
Since u ∈ L 2 (R d ) and ∇ A u ∈ L 2 (R d ) d , the right-hand side tends to zero as R goes to infinity.
Similarly,
|x|err(R) L 2 (R d ) ≤ 2 R d |x| 2 |∇ A u| 2 |∇µ R | 2 1/2 + R d |x| 2 |u| 2 |∆µ R | 2 1/2 ≤ 4c {R<|x|<2R} |∇ A u| 2 1/2 + 2c R {R<|x|<2R} |u| 2 1/2
, and again the right-hand side goes to zero as R approaches infinity.
•
Step two. This second step represents the main body of the section, it is here that the method of multipliers is fully developed. Informally speaking the method of multipliers is based on producing integral identities by choosing different test functions v in (4.15) (see Lemma 4.4 below) and later combining them in a refined way to get, for instance in our case, the analogous to (4.16). By virtue of the previous step, we shall develop the method for compactly supported solutions u ∈ D A,V to (4.15), it will be in the next
Step three that we will get the result also for not necessarily compactly supported solutions.
As a starting point we state the aforementioned identities, these are collected in the following lemma. Notice that the lemma is stated for any λ ∈ C and not necessarily just for λ in the sector (4.14).
Lemma 4.4. Let d ≥ 1, let A ∈ L 2 loc (R d ; R d ) be such that B ∈ L 2 loc (R d ; R d×d )
and assume also (4.8), Suppose that V ∈ L 1 loc (R d ; C) admits the decomposition V = V (1) +V (2) with Re V (2) satisfying (4.13). Let u ∈ D A,V be any compactly supported solution of (4.15), with λ any complex constant and |x|f ∈ L 2 loc (R d ), satisfying
|x| 2 |V (1) | 2 + |x| 2 |Im V (2) | 2 |u| 2 ∈ L 1 loc (R d ). (4.31)
Then |x| −1 |u| 2 ∈ L 1 loc (R d ) and the following identities This gives
R d |∇ A u| 2 dx + R d Re V |u| 2 dx = Re λ R d |u| 2 dx + Re R d fū dx. (4.32) − d − 1 2 R d |u| 2 |x| dx + R d |x||∇ A u| 2 dx + R d Re V |x||u| 2 dx = Re λ R d |x||u| 2 dx + Re R d f |x|ū dx. (4.33) R d Im V |u| 2 dx = Im λ R d |u| 2 dx + Im R d fū dx. (4.34) Im R d x |x| ·ū∇ A u dx + R d Im V |x||u| 2 dx = Im λ R d |x||u| 2 dx + Im R d f |x|ū dx. (4.35) 2 R d |∇ A u| 2 dx + 2 Im R d x · B · u∇ A u dx + d R d Re V (1) |u| 2 dx + 2 Re R d x · V (1) u∇ A u dx − R d x · ∇ Re V (2) |u| 2 dx − 2 Im R d x · Im V (2) u∇ A u dx = −2 Im λ Im R d x · u∇ A u dx + d Re R d fū dx + 2 Re R d f x · ∇ A u dx.R d |∇ A u| 2 dx − 2(Re λ) 1/2 sgn(Im λ) Im R d x |x| ·ū ∇ A u dx + Re λ R d |u| 2 dx + 2(Re λ) 1/2 |Im λ| R d |x||u| 2 dx + 2 Im λ Im R d x · u ∇ A u dx + 2 Im R d x · B · u ∇ A u dx + (d − 1) R d Re V (1) |u| 2 dx + 2 Re R d x · V (1) u ∇ A u dx − 2(Re λ) 1/2 sgn(Im λ) R d |x| Im V (1) |u| 2 dx − R d Re V (2) |u| 2 dx − R d x · ∇ Re V (2) |u| 2 dx − 2(Re λ) 1/2 sgn(Im λ) R d |x| Im V (2) |u| 2 dx − 2 Im R d x · Im V (2) u∇ A u dx = (d − 1) Re R d fū dx + 2 Re R d x · f ∇ A u dx − 2(Re λ) 1/2 sgn(Im λ) Im R d |x|fū dx. (4.37)
Recalling definition (4.17) of u − , one observes that
∇ A u − (x) = e −i(Re λ) 1/2 sgn(Im λ)|x| ∇ A u − i(Re λ) 1/2 sgn(Im λ) x |x| u(x) ,(4.38)
and therefore
|∇ A u − | 2 = |∇ A u| 2 + Re λ|u| 2 − 2(Re λ) 1/2 sgn(Im λ) x |x| · Im(ū∇ A u). (4.39) Moreover one has x · B · u∇ A u = x · B · u − ∇ A u − ,(4.40)
where the previous follows from the fact that being B anti-symmetric, then x · B · x = 0.
Reintegrating (4.39) over R d , we obtain
R d |∇ A u| 2 dx − 2(Re λ) 1/2 sgn(Im λ) Im R d x |x| ·ū ∇ A u dx + Re λ R d |u| 2 dx = R d |∇ A u − | 2 dx. (4.41)
Adding equation (4.33) multiplied by (Re λ) −1/2 |Im λ| to (4.37), plugging (4.41), using again (4.39) and (4.40), we get
R d |∇ A u − | 2 dx + (Re λ) −1/2 |Im λ| R d |x||∇ A u − | 2 dx − (d − 1) 2 (Re λ) −1/2 |Im λ| R d |u| 2 |x| dx + 2 Im R d x · B · u − ∇ A u − dx + (d − 1) R d Re V (1) |u| 2 dx + (Re λ) −1/2 |Im λ| R d |x| Re V (1) |u| 2 dx + 2 Re R d x · V (1) u ∇ A u + i(Re λ) 1/2 sgn(Im λ) x |x|ū dx − R d ∂ r (|x| Re V (2) )|u| 2 dx + (Re λ) −1/2 |Im λ| R d |x| Re V (2) |u| 2 dx − 2 Im R d x Im V (2) u ∇ A u + i(Re λ) 1/2 sgn(Im λ) x |x|ū dx =(d − 1) Re R d fū dx + (Re λ) −1/2 |Im λ| Re R d |x|fū dx + 2 Re R d x · f ∇ A u + i(Re λ) 1/2 sgn(Im λ) x |x|ū dx.
(4.42)
Then, using (4.38) in the fourth, last but two and last line of the previous identity, we obtain
R d |∇ A u − | 2 dx + (Re λ) −1/2 |Im λ| R d |x||∇ A u − | 2 dx − (d − 1) 2 (Re λ) −1/2 |Im λ| R d |u| 2 |x| dx + 2 Im R d x · B · u − ∇ A u − dx + (d − 1) R d Re V (1) |u| 2 dx + 2 Re R d x · V (1) u − ∇ A u − dx + (Re λ) −1/2 |Im λ| R d |x| Re V (1) |u| 2 dx − R d ∂ r (|x| Re V (2) )|u| 2 dx − 2 Im R d x Im V (2) u − ∇ A u − dx + (Re λ) −1/2 |Im λ| R d |x| Re V (2) |u| 2 dx = (d − 1) Re R d fū dx + 2 Re R d x · f − ∇ A u − dx + (Re λ) −1/2 |Im λ| Re R d |x|fū dx, (4.43)
where f − (x) := e −i(Re λ) 1/2 sgn(Im λ)|x| f (x).
•
Step three. Now we want to come back to our approximating sequence u R . Recalling that u R is a weak solution to (4.29), identity (4.43), rewritten in terms of u R , f R and err(R) gives Letting R go to infinity, the thesis follows from dominated and monotone convergence theorems and Lemma 4.3.
R d |∇ A u − R | 2 dx + (Re λ) −1/2 |Im λ| R d |x||∇ A u − R | 2 dx − (d − 1) 2 (Re λ) −1/2 |Im λ| R d |u R | 2 |x| dx + 2 Im R d x · B · u − R ∇ A u − R dx + (d − 1) R d Re V (1) |u R | 2 dx + 2 Re R d x · V (1) u − R ∇ A u − R dx + (Re λ) −1/2 |Im λ| R d |x| Re V (1) |u R | 2 dx − R d ∂ r (|x| Re V (2) )|u R | 2 dx − 2 Im R d x Im V (2) u − R ∇ A u − R dx + (Re λ) −1/2 |Im λ| R d |x| Re V (2) |u R | 2 dx = (d − 1) Re R d f R u R dx + 2 Re R d x · f − R ∇ A u − R dx + (Re λ) −1/2 |Im λ| Re R d |x|f R u R dx + (d − 1) Re R d err(R)u R dx + 2 Re R d x · err(R) − ∇ A u − R dx + (Re λ) −1/2 |Im λ| Re R d |x|err(R)u R dx.
The method of multipliers: proof of the crucial Lemma 4.4
This part is entirely devoted to the rigorous proof of the crucial identities contained in Lemma 4.4. Let us start proving (4.32) and (4.33). Choosing in (4.15) v := ϕu, with ϕ : R d → R being a radial function such that v ∈ D A,V (since the support of u is compact, any locally bounded ϕ together with locally bounded partial derivatives of first order is admissible). Using the generalised Leibniz rule for the magnetic gradient, namely
∇ A (gh) = (∇ A g)h + g∇h (4.45) valid for any g, h : R d → C, we get R d ϕ|∇ A u| 2 + R dū ∇ A u · ∇ϕ + R d V ϕ|u| 2 = λ R d ϕ|u| 2 + R d f ϕū.
Taking the real part of the obtained identity, using that being A a real-valued vector field one has one has Re(ū∇ A u) = Re(ū∇u) (4.46) and performing an integration by parts give
− 1 2 R d ∆ϕ|u| 2 + R d ϕ|∇ A u| 2 + R d Re V ϕ|u| 2 = Re λ R d ϕ|u| 2 + Re R d f ϕū.
Taking ϕ := 1 and ϕ(x) := |x|, we get (4.32) and (4.33). Equation (4.34) and (4.35) are obtained as in the previous case choosing in (4.15) v := ψu, with ψ : R d → R being a radial function such that v ∈ D A,V and taking the imaginary part of the resulting identity. Finally, one chooses ψ := 1 and ψ(x) := |x|, respectively. The remaining identity (4.36) is formally obtained by plugging into (4.15) the multiplier
v := [∇ 2 A , φ]u = ∆φu + 2∇φ · ∇ A u with φ(x) := |x| 2 , (4.47)
taking the real part and integrating by parts. However, such v does not need to belong to D A (and therefore neither to D A,V ). Indeed, though on the one hand the unboundedness of the radial function φ does not pose any problems because the support of u is assumed to be compact at this step, on the other hand ∇ A u does not necessarily belong to D A . Following the strategy developed in [5], we replace (4.47) by its regularised version
v := ∆φu + ∇φ · [∇ δ,N A u + ∇ −δ,N A u] = ∆φu + ∂ k φ [∂ δ,N k,A u + ∂ −δ,N k,A u] with φ(x) := |x| 2 , (4.48) where ∇ δ,N A u := (∂ δ,N 1,A , . . . , ∂ δ,N d,A )u with ∂ δ,N k,A u := ∂ δ k u + iT N (A k )u, k = 1, 2, . . . , d, (4.49)
and where
∂ δ k u(x) := τ δ k u(x) − u(x) δ with τ δ k u(x) := u(x + δe k ), k = 1, 2, . . . , d,v := 2du + 2x k [∂ δ,N k,A u + ∂ −δ,N k,A u]. (4.51)
Clearly, being u ∈ D A,V , the first term in v belongs to D A,V and therefore we need to comment further just on the second term of the sum, namely x k ∂ δ,N k,A u (the part involving ∂ −δ,N k,A u is analogous). One can check that
x k ∂ δ,N k,A u := x k (∂ δ k + iT N (A k ))u ∈ L 2 (R d );
this is a consequence of u ∈ L 2 (R d ) being compactly supported and of the boundedness of T N (A k ). It is less trivial to prove that for any l = 1, 2, . . . , d, one has ∂ l,A [x k ∂ δ,N k,A u] ∈ L 2 (R d ). To begin with, it is easy to check that the following commutation relation between the magnetic gradient ∂ l,A and its regularised version ∂ δ,N k,A holds true
∂ l,A , ∂ δ,N k,A := i[(∂ l A k )χ {|A k |≤N } − (∂ δ k A l )τ δ k ], k, l = 1, 2 . . . , d. (4.52)
Here [·, ·] denotes the usual commutator operator, for any given subset S ⊆ R d , the function χ S is the characteristic function of the set S and τ δ k is the translation operator as defined in (4.50). Using (4.45), the fact that, by definition of the commutator operator, ∂ l,A ∂ δ,N k,A = ∂ δ,N k,A ∂ l,A + [∂ l,A ∂ δ,N k,A ] and eventually using (4.52) one has
∂ l,A [x k ∂ δ,N k,A u] = δ l,k ∂ δ,N k,A u + x k ∂ l,A ∂ δ,N k,A u = δ l,k ∂ δ,N k,A u + x k ∂ δ,N k,A ∂ l,A u + x k [∂ l,A , ∂ δ,N k,A ]u = v 1 + v 2 + v 3 , where v 1 := δ l,k ∂ δ,N k,A u, v 2 := x k ∂ δ,N k,A ∂ l,A u, v 3 := x k i[(∂ l A k )χ {|A k |≤N } − (∂ δ k A l )τ δ k ]u.
Here and hence δ l,k for every k, l = 1, 2, . . . , d denotes the Kronecker symbol. Now, being u ∈ D A,V (thus in particular u ∈ L 2 (R d )) and since
T N (A k ) ∈ L ∞ (R d ), then v 1 = δ l,k ∂ δ,N k,A u := δ l,k (∂ δ k + iT N (A k ))u is clearly in L 2 (R d ). Moreover, since u ∈ D A,V (thus in particular ∂ l,A u ∈ L 2 (R d ))
is compactly supported, one can conclude the same for v 2 . With respect to v 3 , since A k ∈ W 1,p loc (R d ) with p as in (4.8), then (∂ l A k )u ∈ L 2 (R d ) (see (4.9)). Similar reasoning allows us to conclude that also (∂ δ k A l )τ δ k u ∈ L 2 (R d ). Therefore v 3 ∈ L 2 (R d ). Now we are left to show just that (Re V (2)
) + [x k ∂ δ,N k,A u] ∈ L 2 (R d ). First let us write (Re V (2) ) + [x k ∂ δ,N k,A u] = v 4 + v 5 , where v 4 := x k (Re V (2) ) + ∂ δ k u, v 5 := ix k (Re V (2) ) + T N (A k )u.
Observe that being u ∈ D A,V (thus in particular (Re V (2) ) + u ∈ L 2 (R d )) and compactly supported and since
T N (A k ) ∈ L ∞ (R d ), one has that v 5 ∈ L 2 (R d ).
Making explicit the difference quotient ∂ δ k u, one can also see that v 4 ∈ L 2 (R d ) by using that (Re V (2) ) + ∈ L p loc (R d ) with p as in (4.13) and the fact that |u| ∈ H 1 (R d ). Gathering these facts together, we guaranteed that our multiplier v as defined in (4.51) belongs to D A,V and hence we have justified its choice as a test function in the weak formulation (4.15). Now we are in a position to prove identity (4.36). For a moment, we proceed in a greater generality by considering φ in (4.48) to be an arbitrary smooth function φ : R d → R. We plug (4.48) in (4.15) and take the real part. Below, for the sake of clarity, we consider each integral of the resulting identity separately.
• Kinetic term
Let us start with the "kinetic" part of (4.15):
K := Re R d ∇ A u · ∇ A v.
(4.53)
Using ∂ l,A v = (∂ l ∆φ)ū + ∆φ∂ l,A u + ∂ lk φ [∂ δ,N k,A u + ∂ −δ,N k,A u] + ∂ k φ [∂ l,A ∂ δ,N k,A u + ∂ l,A ∂ −δ,N k,A u], we write K = K 1 + K 2 + K 3 + K 4 with K 1 := Re R d ∂ l,A u(∂ l ∆φ)ū, K 2 := R d |∇ A u| 2 ∆φ, K 3 := Re R d ∂ lk φ ∂ l,A u [∂ δ,N k,A u + ∂ −δ,N k,A u], K 4 := Re R d ∂ k φ ∂ l,A u [∂ l,A ∂ δ,N k,A u + ∂ l,A ∂ −δ,N k,A u].
(4.54)
Using (4.46) and integrating by parts in K 1 give
K 1 = − 1 2 R d ∆ 2 φ|u| 2 .
Now we consider K 4 . Using simply the definition of the commutator operator, we write
K 4 = K 4,1 + K 4,2 ,
where
K 4,1 := Re R d ∂ k φ ∂ l,A u ∂ δ,N k,A ∂ l,A u + ∂ −δ,N k,A ∂ l,A u , K 4,2 := Re R d ∂ k φ∂ l,A u [∂ l,A , ∂ δ,N k,A ]u + [∂ l,A , ∂ −δ,N k,A ]u .
We start considering K 4,1 . Using an analogous version to (4.46) for the regularised magnetic gradient, namely Re(ū ∂ δ,N k,A u) = Re(ū ∂ δ k u), k = 1, 2, . . . , d (4.55) and the identity 2 Re(ψ∂ δ k ψ) = ∂ δ k |ψ| 2 − δ|∂ δ k ψ| 2 (4.56) valid for every ψ : R d → C, we write K 4,1 = K 4,1,1 + K 4,1,2 with
K 4,1,1 := 1 2 R d ∂ k φ{∂ δ k |∂ l,A u| 2 + ∂ −δ k |∂ l,A u| 2 }, and K 4,1,2 := − δ 2 R d ∂ k φ{|∂ δ k ∂ l,A u| 2 − |∂ −δ k ∂ l,A u| 2 }.
Making use of the integration-by-parts formula for difference quotients (see [13,Sec. 5
.8.2]) R d ϕ ∂ δ k ψ = − R d (∂ −δ k ϕ) ψ (4.57)
which holds true for every ϕ, ψ ∈ L 2 (R d ), one gets
K 4,1,1 = − 1 2 R d ∂ −δ k ∂ k φ + ∂ δ k ∂ k φ |∇ A u| 2 .
At the same time, making explicit the difference quotient and changing variable in K 4,1,2 give (summation both over k and l)
K 4,1,2 = − δ 2 R d {∂ k φ − (τ δ k ∂ k φ)}|∂ δ k ∂ l,A u| 2 .
Now we choose the multiplier φ(x) := |x| 2 and observe that
∂ k φ = 2x k , ∂ lk φ = 2δ k,l , ∂ ±δ k ∂ k φ = 2, ∇∆φ = 0, ∆ 2 φ = 0. (4.58)
Consequently,
K 1 = 0, K 2 = 2d R d |∇ A u| 2 dx, K 3 = 2 Re R d ∂ l,A u [∂ δ,N l,A u + ∂ −δ,N l,A u] dx, and K 4 = −2d R d |∇ A u| 2 dx + R d |τ δ k ∇ A u − ∇ A u| 2 dx +2 Im R d x k ∂ l,A u (∂ l A k )χ {|A k |≤N }ū −(∂ δ k A l )τ δ kū dx+2 Im R d x k ∂ l,A u (∂ l A k )χ {|A k |≤N }ū −(∂ −δ k A l )τ −δ kū dx.
In summary,
K = 2 Re R d ∂ l,A u [∂ δ,N l,A u + ∂ −δ,N l,A u] dx + R d |τ δ k ∇ A u − ∇ A u| 2 dx +2 Im R d x k ∂ l,A u (∂ l A k )χ {|A k |≤N }ū −(∂ δ k A l )τ δ kū dx+2 Im R d x k ∂ l,A u (∂ l A k )χ {|A k |≤N }ū −(∂ −δ k A l )τ −δ kū dx.
Now we want to see what happens when δ goes to zero and N goes to infinity. To do so, we need the following lemma.
∂ δ,N l,A u δ→0 N →∞ − −−− → ∂ l,A u in L 2 (R d ) (4.59)
and
(∂ l A k )χ {|A k |≤N } − (∂ δ k A l )τ δ k u δ→0 N →∞ − −−− → [∂ l A k − ∂ k A l ]u in L 2 (R d ).
(4.60)
Proof. Let us start with (4.59). Using the explicit expression (4.49) for ∂ δ,N l,A u, one easily has
R d |∂ δ,N l,A u − ∂ l,A u| 2 dx ≤ 2 R d |∂ δ l u − ∂ l u| 2 dx + 2 R d |T N (A l )u − A l u| 2 dx.
Now, as a consequence of the L 2 -strong convergence of the difference quotients (which can be used here because u ∈ H 1 (R d ) (see (4.12))), the first integral converges to zero as δ goes to zero. As regards with the second integral we use that, by definition, T N (s) converges to s as N tends to infinity, the bound |T N (s)| ≤ |s| and the fact that by virtue of (4.8) the function A l u ∈ L 2 (R d ), these allow us to conclude that the integral goes to zero as N goes to infinity via the dominated convergence theorem. This concludes the proof of (4.59). Now we prove (4.60). Observe that (4.60) follows as soon as one proves that the limits
(∂ l A k )χ {|A k |≤N } u N →∞ − −−− → ∂ l A k u in L 2 (R d )
and
(∂ δ k A l )τ δ k u δ→0 − −− → ∂ k A l u in L 2 (R d )
hold true. As hypothesis (4.8) implies that ∂ l A k u ∈ L 2 (R d ), the first limit is an immediate consequence of the dominated convergence theorem. With respect to the second one, one has
R d |(∂ δ k A l )τ δ k u − ∂ k A l u| 2 dx ≤ 2 R d |∂ δ k A l | 2 |τ δ k u − u| 2 dx + 2 R d |∂ δ k A l − ∂ k A l | 2 |u| 2 dx
and the two integrals tend to zero as δ goes to zero as a consequence of the L q -continuity of the translations with 1 ≤ q < ∞ and the strong L p -convergence of the difference quotients with 1 ≤ p < ∞ together with assumption (4.8).
With Lemma 4.5 at hand, it follows as a mere consequence of the Cauchy-Schwarz inequality that
K δ→0 N →∞ − −−− → 4 R d |∂ l,A u| 2 dx + 4 Im R d x k ∂ l,A u [∂ l A k − ∂ k A l ]ū dx.
• Source term
Let us now consider simultaneously the "source" and "eigenvalue" parts of (4.15), that is,
F := Re λ R d uv + R d fv . (4.61)
This can be written as F = F 1 + F 2 + F 3 + F 4 with
F 1 := Re λ R d ∆φ|u| 2 , F 2 := Re λ Re R d ∂ k φ u[∂ δ,N k,A u + ∂ −δ,N k,A u], F 3 := − Im λ Im R d ∂ k φ u[∂ δ,N k,A u + ∂ −δ,N k,A u], F 4 := Re R d f {∆φū + ∂ k φ [∂ δ,N k,A u + ∂ −δ,N k,A u]}.
(4.62)
Applying (4.55) and (4.56), we further split F 2 = F 2,1 + F 2,2 , where
F 2,1 := 1 2 Re λ R d ∂ k φ {∂ δ k |u| 2 + ∂ −δ k |u| 2 } and F 2,2 := − δ 2 Re λ R d ∂ k φ {|∂ δ k u| 2 − |∂ −δ k u| 2 }.
Using the integration-by-parts formula (4.57), we get
F 2,1 = − 1 2 Re λ R d {∂ −δ k ∂ k φ + ∂ δ k ∂ k φ}|u| 2 .
Choosing φ(x) := |x| 2 in the previous identities and using (4.58) gives
F 1 = 2d Re λ R d |u| 2 dx, F 2 = −2d Re λ R d |u| 2 dx − δ Re λ R d x k {|∂ δ k u| 2 − |∂ −δ k u| 2 } dx, F 3 = −2 Im λ Im R d x k u [∂ δ,N k,A u + ∂ −δ,N k,A u] dx, F 4 = Re R d f {2dū + 2x k [∂ δ,N k,A u + ∂ −δ,N k,A u]} dx.
Using limit (4.59) in Lemma 4.5, one gets from the Cauchy-Schwarz inequality that
F δ→0 N →∞ − −−− → −4 Im λ Im R d x k u ∂ k,A u dx + Re R d f {2dū + 4x k ∂ k,A u} dx.
• Electric potential term
Let us now consider the contribution of the "potential" part of (4.15), that is,
J := Re R d V uv. (4.63)
Using the decomposition V = V (1) + V (2) , it can be written as J = J 1 + J 2 with
J 1 := Re R d V (1) uv and J 2 := Re R d V (2) uv
First of all,
J 1 = R d Re V (1) ∆φ|u| 2 + Re R d ∂ k φV (1) u [∂ δ,N k,A u + ∂ −δ,N k,A u].
Let us consider now the part involving V (2) . We can write
J 2 = J 2,1 + J 2,2 + J 2,3 , where J 2,1 := R d Re V (2) ∆φ|u| 2 , J 2,2 := R d Re V (2) ∂ k φ Re{u [∂ δ,N k,A u + ∂ −δ,N k,A u]} J 2,3 := − Im R d Im V (2) ∂ k φ u [∂ δ,N k,A u + ∂ −δ,N k,A u]
Let us consider J 2,2 . Using (4.55), (4.56) and integrating by parts we get
J 2,2 = − 1 2 R d {∂ −δ k [∂ k φ Re V (2) ] + ∂ δ k [∂ k φ Re V (2) ]}|u| 2 − δ 2 R d Re V (2) ∂ k φ{|∂ δ k u| 2 − |∂ −δ k u| 2 } = − 1 2 R d {∂ −δ k [∂ k φ Re V (2) ] + ∂ δ k [∂ k φ Re V (2) ]}|u| 2 + 1 2 R d ∂ δ k [∂ k φ Re V (2) ]|τ δ k u − u| 2 .
Choosing φ(x) := |x| 2 in the previous identities and using (4.58) we can write
J 1 = J 1,1 + J 1,2 , where J 1,1 := 2d R d Re V (1) |u| 2 dx and J 1,2 := 2 Re R d x k V (1) u [∂ δ,N k,A u + ∂ −δ,N k,A u] dx.
Moreover
J 2,1 = 2d R d Re V (2) |u| 2 dx, J 2,3 = −2 Im R d x k Im V (2) u[∂ δ,N k,A u + ∂ −δ,N k,A u] dx,J 2,2 = − R d {∂ −δ k [x k Re V (2) ] + ∂ δ k [x k Re V (2) ]}|u| 2 dx + R d ∂ δ k [x k Re V (2) ]|τ δ k u − u| 2 dx.
By virtue of hypothesis (4.31), |x||V (1) ||u| ∈ L 2 loc (R d ) and then, using the Cauchy-Schwarz inequality and limit (4.59) in Lemma 4.5, one has
J 1,2 δ→0 N →∞ − −−− → 4 Re R d x k V (1) u ∂ k,A u dx.
(4.64)
Similarly, using that |x||Im V (2) ||u| ∈ L 2 loc (R d ) (see (4.31)) and again (4.59), via the Cauchy-Schwarz inequality one also has
J 2,3 δ→0 N →∞ − −−− → −4 Im R d x k Im V (2) u ∂ k,A u dx.
Since x k Re V (2) ∈ W 1,p loc (R d ) with p as in (4.13), using the strong L p -convergence of the difference quotients with 1 ≤ p < ∞ and via the Hölder inequality, it is not difficult to see that
J 2,2 δ→0 − −− → − 2 R d ∂ k [x k Re V (2) ]|u| 2 dx = −2d R d Re V (2) |u| 2 dx − 2 R d x k ∂ k Re V (2) |u| 2 dx,
where the last identity follows from the Leibniz rule applied to ∂ k (x k Re V (2) ).
In summary, gathering the previous limits altogether, one gets
J 1 δ→0 N →∞ − −−− → 2d R d Re V (1) |u| 2 dx + 4 Re R d x k V (1) u ∂ k,A u dx. and J 2 δ→0 N →∞ − −−− → −2 R d x k ∂ k Re V (2) |u| 2 dx − 4 Im R d x k Im V (2) u∂ k,A u dx.
Passing to the limit δ → 0 and N → ∞ in (4.15) and multiplying the resulting identity by 1/2, one obtains (4.36).
Potentials with just one singularity: alternative proof of the crucial Lemma 4.4
In this section we consider the case of potentials (both electric and magnetic) with capacity zero set of singularities, in fact with just one singularity at the origin. This will allow us to remove the unpleasant hypotheses (4.8) and (4.13). Since the point has a positive capacity in dimension one, here we exclusively consider d ≥ 2. (As a matter of fact, if d = 1, hypothesis (4.13) is rather natural, while (4.8) is automatically satisfied because of the absence of magnetic fields on the real line.)
To be more specific, in the sequel we consider the following setup. Let A ∈ L 2 loc (R d \ {0}; R d ) and V ∈ L 1 loc (R d \ {0}; C) and assume
Re V ∈ L ∞ loc (R d \ {0}) and A ∈ W 1,∞ loc (R d \ {0}). (4.65)
Notice that assumption (4.65) is satisfied by a large class of potentials, namely V (x) = a/|x| α with a > 0 and α > 0 and the Aharonov-Bohm vector field (3.16).
Observe that since it is no more necessarily true that V ∈ L 1 loc (R d ; C) and A ∈ L 2 loc (R d ; R d ), the procedure developed in Subsection 4.1 in order to rigorously introduced the Hamiltonian H A,V formally defined in (4.1) must be adapted. The modification of the procedure consists merely in taking the Friedrichs extension of the operator initially defined on
C ∞ 0 (R d \ {0}) instead of C ∞ 0 (R d ).
To be more specific, we first introduce the closed quadratic form
h (1) A,V [u] := R d |∇ A u| 2 dx + R d (Re V ) + |u| 2 dx, u ∈ D(h (1) A,V ) := C ∞ 0 (R d \ {0}) |||·||| ,(4.66)
where
|||u||| 2 := h (1) A,V [u] + u 2 L 2 (R d ) . Assume that there exist b, β ∈ [0, 1) with b 2 + β 2 < 1,
such that, for any u ∈ D(h (1)
A,V ), R d (Re V ) − |u| 2 dx ≤ b 2 R d |∇ A u| 2 dx, R d |Im V ||u| 2 dx ≤ β 2 R d |∇ A u| 2 dx. (4.67) Then, defining h (2) A,V [u] := − R d (Re V ) − |u| 2 dx + i R d Im V |u| 2 dx, u ∈ D(h (1) A,V ), the form h (2)
A,V is relatively bounded with respect to h (1) A,V , with the relative bound less than one. Consequently, the sum h A,V := h This subsection is concerned with the proof of Lemma 4.4 in the present alternative framework. More specifically we will provide the proof of identity (4.36) only, which is the one whose changes are significant. For the sake of clarity, we restate it with the alternative hypotheses assumed in this section. (Without loss of generality, we consider just the situation in which V (1) = 0; indeed, the assumption (4.13) that we remove now concerned the component V (2) only.)
Lemma 4.6. Let d ≥ 2. Let A ∈ L 2 loc (R d \ {0}) be such that B ∈ L 2 loc (R d \ {0}) and let V ∈ W 1,1 loc (R d \ {0}
) be potentials satisfying (4.65). Let u ∈ D A,V be any compactly supported solution of (4.15), with λ being any complex constant and |x|f ∈ L 2 loc (R d ), satisfying
|x| 2 |B| 2 + |x| 2 |Im V | 2 + [x · ∇ Re V ] + |u| 2 ∈ L 1 loc (R d ).
Then [x · ∇ Re V ] − |u| 2 ∈ L 1 loc (R d ) and the following identity

2 ∫ R d |∇ A u| 2 dx + 2 Im ∫ R d x · B · u∇ A u dx − ∫ R d x · ∇ Re V (2) |u| 2 dx − 2 Im ∫ R d x · Im V (2) u∇ A u dx = −2 Im λ Im ∫ R d x · u∇ A u dx + d Re ∫ R d f ū dx + 2 Re ∫ R d f x · ∇ A u dx (4.36')

holds true.

Proof. For d ≥ 3 we define ξ : [0, ∞) → [0, 1] to be a smooth function such that ξ(r) := 0 if r ≤ 1 and ξ(r) := 1 if r ≥ 2, and set ξ ε (x) := ξ(|x|/ε). For d = 2, let ξ ∈ C ∞ ([0, 1]) be such that ξ = 0 in a right neighborhood of 0 and ξ = 1 in a left neighborhood of 1; then we define the smooth function

ξ ε (x) := 0 if |x| ≤ ε, ξ(log 2 (|x|/ε)) if ε ≤ |x| ≤ 2ε, 1 if |x| ≥ 2ε.
A straightforward computation shows that, in both cases, there exists a constant c > 0 such that the following control on the first derivatives
|∇ξ ε | ≤ c/ε (4.68)
holds true. We take as the test function in (4.15) a slight modification of the multiplier (4.48) chosen above, namely
v := ∆φu + ξ ε ∂ k φ[∂ δ k,A u + ∂ −δ k,A u] with φ(x) := |x| 2 ,(4.69)
where ∂ δ k,A u := ∂ δ k u + iA k u, k = 1, 2, . . . , d,
with ∂ δ k defined as in (4.50). More specifically,
v = 2du + 2ξ ε x k [∂ δ k,A u + ∂ −δ k,A u]. (4.70)
Observe that in this framework we do not need the truncation of the magnetic potential.
Mimicking the arguments of Section 4.4, one can show that v defined as in (4.70) belongs to D A,V . In fact, one has v ∈ L 2 (R d ), ∂ l,A v := (∂ l + iA l )v ∈ L 2 (R d ) for any l = 1, . . . , d and (Re V ) + v ∈ L 2 (R d ). We comment just on the term ξ ε x k ∂ δ k,A u in (4.70). Since ξ ε is supported off the origin, A k ∈ L ∞ (supp ξ ε ), and therefore
ξ ε x k ∂ δ k,A u := ξ ε x k (∂ δ k + iA k )u ∈ L 2 (R d ). Now we want to show that ∂ l,A [ξ ε x k ∂ δ k,A u] ∈ L 2 (R d ).
First observe that using the chain rule for magnetic derivatives (4.45), one can write
∂ l,A [ξ ε x k ∂ δ k,A u] = v 1 + v 2 , where v 1 := ξ ε ∂ l,A [x k ∂ δ k,A u], and v 2 := ∂ l ξ ε [x k ∂ δ k,A u].
Clearly, exactly as above, v 2 ∈ L 2 (R d ). Using again that A k L ∞ (supp ξε) < ∞ and the fact that
x k ∂ δ k,A u = x k ∂ δ,N k,A u with N = A k L ∞ (supp ξε)
, where ∂ δ,N k,A are defined as in (4.49), one can reason as in Section 4.4 to conclude that v 1 ∈ L 2 (R d ) as well (observe that here the assumption ∂ l A k ∈ L ∞ (R d \ {0}) comes into play, just as in the previous section the assumption ∂ l A k ∈ L p loc (R d ) with p as in (4.8) did). It remains only to prove that (Re V ) + [ξ ε x k ∂ δ k,A u] ∈ L 2 (R d ), but this follows immediately by observing that (Re V ) + is bounded on the support of ξ ε . Now we are in a position to prove identity (4.36'). Also in this section we proceed in greater generality by considering φ in (4.69) to be an arbitrary smooth function φ : R d → R; afterwards we will plug in our choice φ(x) = |x| 2 . We consider identity (4.15) with the test function v as in (4.70) and we take the real part. Each resulting integral is treated separately.
• Kinetic term
Let us start with the "kinetic" part of (4.15), i.e. (4.53). Using
∂ l,A v =(∂ l ∆φ)ū + ∆φ∂ l,A u + ξ ε ∂ lk φ[∂ δ k,A u + ∂ −δ k,A u] + ξ ε ∂ k φ [∂ l,A ∂ δ k,A u + ∂ l,A ∂ −δ k,A u] + ∂ l ξ ε ∂ k φ[∂ δ k,A u + ∂ −δ k,A u],
we write K = K ε 0 + K 1 + K 2 + K ε 3 + K ε 4 with K 1 and K 2 as in (4.54) and
K ε 0 := Re R d ∂ l ξ ε ∂ k φ∂ l,A u [∂ δ k,A u + ∂ −δ k,A u], K ε 3 := Re R d ξ ε ∂ lk φ ∂ l,A u [∂ δ k,A u + ∂ −δ k,A u], K ε 4 := Re R d ξ ε ∂ k φ ∂ l,A u [∂ l,A ∂ δ k,A u + ∂ l,A ∂ −δ k,A u].
As regards K ε 4 , proceeding in the same way as done in Section 4.4 to treat the term K 4 , we end up with
K ε 4 = K ε 4,1,1 + K ε 4,1,2 + K ε 4,2 , where K ε 4,1,1 = − 1 2 R d {∂ −δ k (ξ ε ∂ k φ) + ∂ δ k (ξ ε ∂ k φ)}|∇ A u| 2 , K ε 4,1,2 = − δ 2 R d {ξ ε ∂ k φ − τ δ k (ξ ε ∂ k φ)}|∂ δ k ∂ l,A u| 2 and K ε 4,2 = Im R d ξ ε ∂ k φ∂ l,A u ∂ l A kū − (∂ δ k A l )τ δ kū + Im R d ξ ε ∂ k φ∂ l,A u ∂ l A kū − (∂ −δ k A l )τ −δ kū .
Now we choose φ(x) := |x| 2 . Using (4.58) we get
K ε 0 = 2 Re R d ∂ l ξ ε x k ∂ l,A u [∂ δ k,A u + ∂ −δ k,A u] dx, K 1 = 0, K 2 = 2d R d |∇ A u| 2 dx, K ε 3 = 2 Re R d ξ ε ∂ l,A u [∂ δ l,A u + ∂ −δ l,A u] dx, and K ε 4,1,1 = − R d {∂ −δ k (ξ ε x k ) + ∂ δ k (ξ ε x k )}|∇ A u| 2 dx, K ε 4,1,2 = R d ∂ δ k (ξ ε x k )|τ δ k ∂ l,A u − ∂ l,A u| 2 dx, K ε 4,2 = 2 Im R d ξ ε x k ∂ l,A u ∂ l A kū − (∂ δ k A l )τ δ kū dx + 2 Im R d ξ ε x k ∂ l,A u ∂ l A kū − (∂ −δ k A l )τ −δ kū dx.
Now we need the following analogue of Lemma 4.5.
Lemma 4.7. Under the hypotheses of Lemma 4.6, the limits
∂ δ l,A u δ→0 − −− → ∂ l,A u in L 2 loc (R d \ {0})
and
(∂ δ k A l )τ δ k u δ→0 − −− → ∂ k A l u in L 2 loc (R d \ {0}) hold true.
Using Lemma 4.7 and letting δ go to zero, it is easy to see that
K ε 0 δ→0 − −− → 4 Re R d ∂ l ξ ε x k ∂ l,A u∂ k,A u dx, K ε 3 δ→0 − −− → 4 R d ξ ε |∂ l,A u| 2 dx, K ε 4,1,1 δ→0 − −− → − 2 R d ∂ k (ξ ε x k )|∇ A u| 2 dx = −2 R d ∂ k ξ ε x k |∇ A u| 2 dx − 2d R d ξ ε |∇ A u| 2 dx, K ε 4,1,2 δ→0 − −− → 0, K ε 4,2 δ→0 − −− → 4 Im R d ξ ε x k ∂ l,A u[∂ l A k − ∂ k A l ]ū dx.
(4.71)
Now we want to see what happens in the limit of ε approaching zero. In order to do that we will use the following lemma.
Lemma 4.8. Let g ∈ L 1 (R d ) and let ξ ε be defined as above. Then
R d ξ ε g dx ε→0 − −− → R d g dx and R d ∂ l ξ ε x k g dx ε→0 − −− → 0 k, l = 1, 2 . . . , d.
(4.72)
Proof. The first limit in (4.72) immediately follows from the definition of ξ ε via the dominated convergence theorem. On the other hand, using (4.68), one has
R d |∂ l ξ ε ||x k ||g| dx ≤ 2 c ε<|x|<2ε |g| dx ε→0 − −− → 0,
which yields the second limit in (4.72), again from the dominated convergence theorem.
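Although no numerics are needed for the argument, the following small Python sketch (entirely ours, not part of the proof) illustrates both the scaling (4.68) and the two limits in (4.72) for d = 2, with the sample density g(x) = e^{−|x|²}; for illustration only, the smooth step ξ is replaced by the C¹ polynomial step s(t) = 3t² − 2t³, which exhibits the same 1/ε scaling.

```python
# Numerical illustration (not part of the proof) of (4.68) and (4.72) in d = 2.
# Assumptions: the smooth step xi is replaced by the C^1 polynomial step
# s(t) = 3t^2 - 2t^3, and the sample density is g(x) = exp(-|x|^2) (radial).
import math
from scipy.integrate import quad

s  = lambda t: 3*t**2 - 2*t**3      # step: s(0) = 0, s(1) = 1
ds = lambda t: 6*t - 6*t**2         # derivative, |ds| <= 3/2 on [0, 1]

def xi_eps(r, eps):                 # radial profile of xi_eps
    if r <= eps:
        return 0.0
    if r >= 2*eps:
        return 1.0
    return s(math.log2(r/eps))

def grad_xi_eps(r, eps):            # |grad xi_eps| = |s'(log2(r/eps))|/(r log 2)
    if r <= eps or r >= 2*eps:
        return 0.0
    return abs(ds(math.log2(r/eps)))/(r*math.log(2))

g = lambda r: math.exp(-r**2)       # int_{R^2} g dx = pi (radial integration)

for eps in [0.5, 0.1, 0.02, 0.004]:
    # (4.68): eps * sup|grad xi_eps| stays bounded (here by (3/2)/log 2)
    c_eps = eps*max(grad_xi_eps(eps*(1 + k/1000), eps) for k in range(1, 1000))
    # first limit in (4.72): int xi_eps g dx -> int g dx = pi
    I1, _ = quad(lambda r: 2*math.pi*xi_eps(r, eps)*g(r)*r, 0, 10,
                 points=[eps, 2*eps])
    # second limit in (4.72): the majorant of |int d_l xi_eps x_k g dx| -> 0
    I2, _ = quad(lambda r: 2*math.pi*grad_xi_eps(r, eps)*r*g(r)*r, eps, 2*eps)
    print(f"eps={eps:6.3f}  eps*sup|grad|={c_eps:.3f}  I1={I1:.6f}  I2={I2:.6f}")
```

As ε decreases, I1 approaches π while the majorant I2 decays linearly in ε, exactly as the dominated convergence argument predicts.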
Using Lemma 4.8 and passing to the limit in (4.71), one easily gets
K δ→0 ε→0 − −− → 4 R d |∂ l,A u| 2 dx + 4 Im R d x k ∂ l,A u[∂ l A k − ∂ k A l ]ū dx.
Notice that here we have used that, by hypothesis, |x| 2 |B| 2 |u| 2 ∈ L 1 loc (R d ).
• Source term
Now consider simultaneously the "source" and "eigenvalue" parts of (4.15), i.e. (4.61). Plugging in (4.61) our chosen test function v defined in (4.69), we can write F = F 1 + F ε 2 + F ε 3 + F ε 4 with F 1 as in (4.62) and
F ε 2 := Re λ Re R d ξ ε ∂ k φ u[∂ δ k,A u + ∂ −δ k,A u], F ε 3 := − Im λ Im R d ξ ε ∂ k φ u[∂ δ k,A u + ∂ −δ k,A u], F ε 4 := Re R d f {∆φū + ξ ε ∂ k φ [∂ δ k,A u + ∂ −δ k,A u]}.
As regards F ε 2 , proceeding as in Section 4.4 when we treated F 2 , we end up with
F ε 2 = F ε 2,1 + F ε 2,2 with F ε 2,1 = − 1 2 Re λ R d {∂ −δ k (ξ ε ∂ k φ) + ∂ δ k (ξ ε ∂ k φ)}|u| 2 and F ε 2,2 := − δ 2 Re λ R d ξ ε ∂ k φ {|∂ δ k u| 2 − |∂ −δ k u| 2 }.
Choosing φ(x) := |x| 2 in the previous identities and using (4.58) give
F 1 = 2d Re λ R d |u| 2 dx, F ε 2,1 = − Re λ R d {∂ −δ k (ξ ε x k ) + ∂ δ k (ξ ε x k )}|u| 2 dx, F ε 2,2 = −δ Re λ R d ξ ε x k {|∂ δ k u| 2 − |∂ −δ k u| 2 } dx, F ε 3 = −2 Im λ Im R d ξ ε x k u [∂ δ k,A u + ∂ −δ k,A u] dx, F ε 4 = Re R d f {2dū + 2ξ ε x k [∂ δ k,A u + ∂ −δ k,A u]} dx.
Reasoning as above, one gets
F ε 2,1 δ→0 − −− → −2 Re λ R d ∂ k ξ ε x k |u| 2 dx − 2d Re λ R d ξ ε |u| 2 dx, F ε 2,2 δ→0 − −− → 0, F ε 3 δ→0 − −− → −4 Im λ Im R d ξ ε x k u∂ k,A u dx, F ε 4 δ→0 − −− → Re R d f {2dū + 4ξ ε x k ∂ k,A u} dx.
Using Lemma 4.8, we conclude that
F δ→0 ε→0 − −− → −4 Im λ Im R d x k u∂ k,A u dx + Re R d f {2dū + 4x k ∂ k,A u} dx.
• Electric potential term
Let us now consider the contribution of the "potential" part of (4.15), i.e. (4.63). Plugging v defined as in (4.69) into (4.63), we write J = J 1 + J ε 2 with
J 1 := R d Re V ∆φ|u| 2 and J ε 2 := Re R d V ξ ε ∂ k φu[∂ δ k,A u + ∂ −δ k,A u].
Choosing φ(x) := |x| 2 in the previous identities and using (4.58), we obtain
J 1 = 2d R d Re V |u| 2 dx and J ε 2 = 2 Re R d ξ ε x k V u[∂ δ k,A u + ∂ −δ k,A u] dx. Now we write J ε 2 = J ε 2,1 + J ε 2,2 , where J ε 2,1 := 2 Re R d ξ ε x k Re V u[∂ δ k,A u + ∂ −δ k,A u] dx and J ε 2,2 := −2 Im R d ξ ε x k Im V u[∂ δ k,A u + ∂ −δ k,A u] dx.
Using that Re V is bounded on supp ξ ε and taking the limit as δ goes to zero, it follows from Lemma 4.7 that
J ε 2,1 δ→0 − −− → 4 Re R d ξ ε x k Re V u∂ k,A u dx = −2 R d ∂ k ξ ε x k Re V |u| 2 dx − 2d R d ξ ε Re V |u| 2 dx − 2 R d ξ ε x k ∂ k Re V |u| 2 dx,
where in the last identity we have just integrated by parts. Moreover, using that by hypothesis |x|
2 |Im V | 2 |u| 2 ∈ L 1 loc (R d ), we have J ε 2,2 δ→0 − −− → −4 Im R d ξ ε x k Im V u∂ k,A u dx.
Finally, using that Re V |u| 2 and [x k ∂ k Re V ] + |u| 2 ∈ L 1 (R d ) and again |x| 2 |Im V | 2 |u| 2 ∈ L 1 loc (R d ), then Lemma 4.8 gives
J δ→0 ε→0 − −− → −2 R d [x k ∂ k Re V ] + |u| 2 dx + 2 R d [x k ∂ k Re V ] − |u| 2 dx − 4 Im R d x k Im V u∂ k,A u dx.
Observe that in order to pass to the limit in the integral involving [x k ∂ k Re V ] − , we have used the monotone convergence theorem, since ξ ε ր 1 as ε tends to zero.
In summary, passing to the limit δ → 0 and ε → 0 in (4.15) and multiplying the resulting identity by 1/2, one obtains (4.36'). This concludes the proof of Lemma 4.6.
Absence of eigenvalues of matrix Schrödinger operators
We start our investigation of Schrödinger operators by considering first the most delicate case, represented by the non-self-adjoint results Theorem 3.1 (and its particular case Theorem 1.1) and the alternatives in d = 2 given by Theorem 3.2 and Theorem 3.3. The self-adjoint situation is treated afterwards (Subsection 5.2).
Non self-adjoint case
Proof of Theorem 3.1. Let u be any weak solution to the eigenvalue equation

H S (A, V )u = λu (5.1)

with H S (A, V ) being defined as in (1.4) and λ being any complex constant. More precisely, u satisfies

∫ R d ∇ A u j · ∇ A v j dx + ∫ R d V (2) u j v j dx = λ ∫ R d u j v j dx + ∫ R d f j v j dx (5.2)
for j = 1, 2 . . . , n and for any v j ∈ D A,V .
Here, since we want to use directly the estimate in Lemma 4.2, we have defined f := −V (1) u. In passing, observe that by virtue of our hypothesis (3.3), it is not difficult to check that f , so defined, satisfies

∑ n j=1 ‖|f j | 1/2 |u j | 1/2 ‖ 2 L 2 (R d ) ≤ a 2 1 ‖∇ A u − ‖ 2 [L 2 (R d )] n and ‖|x|f ‖ [L 2 (R d )] n ≤ a 2 ‖∇ A u − ‖ [L 2 (R d )] n , (5.3)

with a 1 and a 2 as in (3.3) and u − as in (4.17). Notice that here we have used that |u| = |u − |.
The strategy of our proof is to show that, under the hypotheses of Theorem 3.1, u is identically zero. In order to do that, as customary, we split the proof into two cases: |Im λ| ≤ Re λ and |Im λ| > Re λ.
• Case |Im λ| ≤ Re λ.
Since u j , for j = 1, 2, . . . , n, is a solution to (5.2), we can use directly Lemma 4.2 to get the estimate
∇ A u − j 2 L 2 (R d ) + (Re λ) −1/2 |Im λ| R d |x||∇ A u − j | 2 dx − (d − 1) 2 R d |u − j | 2 |x| dx + R d |x|(Re V (2) ) + |u − j | 2 dx ≤ 2 |x||B|u − j L 2 (R d ) + |x| Im V (2) u − j L 2 (R d ) + |x|f j L 2 (R d ) ∇ A u − j L 2 (R d ) + (d − 1) |f j | 1/2 |u − j | 1/2 2 L 2 (R d ) + [∂ r (|x| Re V (2) )] 1/2 + u − j 2 L 2 (R d ) + |x|(Re V (2) ) − u − j L 2 (R d ) + |x|f j L 2 (R d ) |Im V (2) | 1/2 u − j L 2 (R d ) + |f j | 1/2 |u − j | 1/2 L 2 (R d ) .
Summing over j = 1, 2, . . . , n and using the Cauchy-Schwarz inequality for discrete measures, we easily obtain
∇ A u − 2 [L 2 (R d )] n + (Re λ) −1/2 |Im λ| R d |x||∇ A u − | 2 dx − (d − 1) 2 R d |u − | 2 |x| dx + R d |x|(Re V (2) ) + |u − | 2 dx ≤ 2 |x||B|u − [L 2 (R d )] n + |x| Im V (2) u − [L 2 (R d )] n + |x|f [L 2 (R d )] n ∇ A u − [L 2 (R d )] n + (d − 1) n j=1 |f j | 1/2 |u − j | 1/2 2 L 2 (R d ) + [∂ r (|x| Re V (2) )] 1/2 + u − 2 [L 2 (R d )] n + |x|(Re V (2) ) − u − [L 2 (R d )] n + |x|f [L 2 (R d )] n |Im V (2) | 1/2 u − [L 2 (R d )] n + n j=1 |f j | 1/2 |u − j | 1/2 2 L 2 (R d ) 1/2 .
Using assumptions (3.4)-(3.7) together with (5.3), one has
{1 − [2c + 2β 2 + 2a 2 + (d − 1)a 2 1 + b 2 + (b 2 + a 2 )(β 1 + a 1 )]} ‖∇ A u − ‖ 2 [L 2 (R d )] n + (Re λ) −1/2 |Im λ| [ ∫ R d |x||∇ A u − | 2 dx − ((d − 1)/2) ∫ R d (|u − | 2 /|x|) dx + ∫ R d |x|(Re V (2) ) + |u − | 2 dx ] ≤ 0. (5.4)
Now we need to estimate the square bracket in the latter inequality, namely
I := ∫ R d |x||∇ A u − | 2 dx − ((d − 1)/2) ∫ R d (|u − | 2 /|x|) dx + ∫ R d |x|(Re V (2) ) + |u − | 2 dx. (5.5)
Notice that, since I appears as a "coefficient" of the positive spectral quantity (Re λ) −1/2 |Im λ|, we would like to get a positive contribution out of it, so as to eventually discard this term in the previous estimate. Only the second term in I could spoil such positivity, and therefore our aim is to control its magnitude by means of the positivity of the other terms in I.
To do so, we proceed by distinguishing the cases d = 1, d = 2 and d ≥ 3. Let us start with the easiest case, d = 1. In this situation the second term in I cancels out and therefore I ≥ 0. We go further by considering the case d ≥ 3. Here we employ the weighted magnetic Hardy inequality
∫ R d |x||∇ A u| 2 dx ≥ ((d − 1) 2 /4) ∫ R d (|u| 2 /|x|) dx. (5.6)
More specifically, using (5.6) we have
I ≥ ((d − 3)/(d − 1)) ∫ R d |x||∇ A u − | 2 dx + ∫ R d |x|(Re V (2) ) + |u − | 2 dx, (5.7)
which again is positive because we are considering d ≥ 3.
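As a side check, the weighted Hardy inequality (5.6) is easy to test numerically; the sketch below (ours, with the free case A = 0, d = 3 and the radial test function ψ(r) = e^{−r²}, so that the constant (d − 1)²/4 equals 1) compares the two sides.

```python
# Numerical spot check (not a proof) of (5.6) for d = 3, A = 0 and psi radial:
# int |x||grad psi|^2 dx >= ((d-1)^2/4) int |psi|^2/|x| dx.
import math
from scipy.integrate import quad

d    = 3
psi  = lambda r: math.exp(-r**2)
dpsi = lambda r: -2*r*math.exp(-r**2)
area = 4*math.pi                           # area of the unit sphere in R^3

lhs, _ = quad(lambda r: area * r * dpsi(r)**2 * r**2, 0, 20)   # approx 2*pi
rhs, _ = quad(lambda r: area * psi(r)**2 * r, 0, 20)           # 1/r weight * r^2
rhs *= (d - 1)**2/4                                            # approx pi

print(lhs, rhs, lhs >= rhs)
```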
Observe that in both cases treated so far, namely d = 1 and d ≥ 3, the positivity of the real part of V (2) , namely the term R d |x|[Re V (2) ] + |u| 2 dx, did not really enter the proof of the positivity of I. The situation is different when considering d = 2. Indeed, although (5.6) is valid also for d = 2, in this case the right-hand side of estimate (5.7) is not necessarily positive. Thus assumption (3.8) comes into play here. Indeed, thanks to (3.8), it is immediate that
I := ∫ R 2 |x||∇ A u − | 2 dx − (1/2) ∫ R 2 (|u − | 2 /|x|) dx + ∫ R 2 |x|(Re V (2) ) + |u − | 2 dx ≥ 0.
Hence we have proved that in any dimension d ≥ 1 we have I ≥ 0. This yields that
{1 − [2c + 2β 2 + 2a 2 + (d − 1)a 2 1 + b 2 + (b 2 + a 2 )(β 1 + a 1 )]} ‖∇ A u − ‖ 2 [L 2 (R d )] n ≤ 0,
which, by virtue of (3.2), implies that u − (and therefore u) is identically equal to zero.
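For orientation, the quantitative content of the last display is just the positivity of the bracket; the toy computation below makes this explicit (the numerical values of the constants are ours, chosen for illustration, and do not come from the theorem).

```python
# Evaluate the coefficient in front of the gradient norm in the final estimate;
# the constants below are illustrative sample values, not data from the theorem.
d = 3
a1 = a2 = b2 = beta1 = beta2 = c = 0.05

bracket = 2*c + 2*beta2 + 2*a2 + (d - 1)*a1**2 + b2 + (b2 + a2)*(beta1 + a1)
print(1 - bracket)   # 0.635 > 0, which is what forces u = 0
```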
• Case |Im λ| > Re λ.
Let u j for j = 1, 2, . . . , n be a solution to (5.2). Choosing the test function v j := u j , taking the real part of the resulting identity, and adding or subtracting its imaginary part, one gets
R d |∇ A u j | 2 dx + R d (Re V (2) ) + |u j | 2 dx − R d (Re V (2) ) − |u j | 2 dx ± R d Im V (2) |u j | 2 dx = (Re λ ± Im λ) R d |u j | 2 dx + Re R d f j u j dx ± Im R d f j u j dx.
Summing over j = 1, 2, . . . , n and discarding the positive term on the left-hand side involving (Re V (2) ) + , one easily gets
∇ A u 2 [L 2 (R d )] n ≤ (Re λ ± Im λ) R d |u| 2 dx + R d (Re V (2) ) − |u| 2 dx + R d |Im V (2) ||u| 2 dx + 2 n j=1 |f j | 1/2 |u j | 1/2 2 L 2 (R d ) .
Using the first inequalities in (3.4), (3.6) and (5.3), we have
1 − (b 2 1 + β 2 1 + 2a 2 1 ) ∇ A u 2 [L 2 (R d )] n ≤ (Re λ ± Im λ) u 2 [L 2 (R d )] n .
Therefore, since by the first inequality in (3.2) we have b 2 1 + β 2 1 + 2a 2 1 < 1, then Re λ ± Im λ ≥ 0 unless u = 0. But since |Im λ| > Re λ we conclude that u = 0.
This concludes the proof of Theorem 3.1.
Now we prove the alternative Theorem 3.2 valid in d = 2.
Proof of Theorem 3.2. Since the proof is analogous to that of Theorem 3.1 presented above, except for the analysis in the sector |Im λ| ≤ Re λ, we shall comment just on this situation. As in the proof of Theorem 3.1, we want to estimate the term I defined in (5.5), which appears multiplied by the spectral coefficient (Re λ) −1/2 |Im λ| in (5.4). A first application of the weighted inequality (5.6) gives
I ≥ − (1/4) ∫ R 2 (|u − | 2 /|x|) dx + ∫ R 2 |x|(Re V (2) ) + |u − | 2 dx ≥ − (1/4) ∫ R 2 (|u − | 2 /|x|) dx, (5.8)
where the last inequality follows by discarding the positive term involving the potential V (2) . Now, we proceed by estimating the term ∫ R 2 (|u − | 2 /|x|) dx. In order to do that we will crucially use the following Hardy–Poincaré-type inequality
∫ B R |∇ψ| 2 dx ≥ (1/(4R)) ∫ B R (|ψ| 2 /|x|) dx, (5.9)
valid for all ψ ∈ W 1,2 0 (B R ), where B R := {x ∈ R 2 : |x| < R} denotes the open disk of radius R > 0 (see [15] for an explicit proof of (5.9)).
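A quick numerical test of (5.9) (ours, not a proof, with the sample boundary-vanishing function ψ(r) = R − r) reads as follows.

```python
# Spot check of the Hardy-Poincare inequality (5.9) on B_R in R^2 with the
# sample test function psi(r) = R - r, which vanishes on |x| = R.
import math
from scipy.integrate import quad

R = 2.0
lhs, _ = quad(lambda r: 2*math.pi * 1.0 * r, 0, R)        # |grad psi| = 1; = pi R^2
rhs, _ = quad(lambda r: 2*math.pi * (R - r)**2, 0, R)     # 1/r weight * r = 1
rhs /= 4*R                                                # = pi R^2 / 6

print(lhs, rhs, lhs >= rhs)
```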
Following the strategy of [15], given two positive numbers R 1 < R 2 , we introduce the function η : [0, ∞) → [0, 1] such that η = 1 on [0, R 1 ], η = 0 on [R 2 , ∞) and η(r) = (R 2 − r)/(R 2 − R 1 ) for r ∈ (R 1 , R 2 ). We denote by the same symbol η the radial function η • r : R 2 → [0, 1]. Now, writing u − = ηu − + (1 − η)u − and using (5.9), we have
R 2 |u − | 2 |x| dx ≤ 2 BR 2 (η|u − |) 2 |x| dx + 2 R 2 (1 − η)|u − | 2 |x| dx ≤ 8R 2 BR 2 |∇(η|u − |)| 2 dx + 2 R 1 R 2 |u − | 2 dx ≤ 16R 2 R 2 |∇|u − || 2 dx + 16 R 2 (R 2 − R 1 ) 2 R 2 |u − | 2 dx + 2 R 1 R 2 |u − | 2 dx.
Choosing R 1 = R 2 /2 and using the diamagnetic inequality (3.12) give
R 2 |u − | 2 |x| dx ≤ 16R 2 R 2 |∇ A u − | 2 dx + 68 R 2 R 2 |u − | 2 dx.
Now we fix conveniently R 2 ; namely, given any positive number ǫ, we set R 2 := ǫ(Re λ) 1/2 /|Im λ| in the previous inequality. Then multiplying the resulting inequality by (Re λ) −1/2 |Im λ| 1 4 , we get (Re λ) −1/2 |Im λ|
(1/4) ∫ R 2 (|u − | 2 /|x|) dx ≤ 4ǫ ∫ R 2 |∇ A u − | 2 dx + (17/ǫ)|Im λ| ∫ R 2 |u − | 2 dx ≤ 4ǫ ∫ R 2 |∇ A u − | 2 dx + (17/ǫ) ∫ R 2 |Im V ||u − | 2 dx ≤ [4ǫ + (17/ǫ)(a 2 1 + β 2 1 )] ∫ R 2 |∇ A u − | 2 dx, (5.10)
where in the first inequality we have used the restriction to the sector |Im λ| ≤ Re λ, the second estimate follows from (4.34) with f = 0 and the third inequality from (3.3) and (3.6). Using that, from (5.8) and (5.10), one has (Re λ) −1/2 |Im λ| I ≥ −(Re λ) −1/2 |Im λ|
(1/4) ∫ R 2 (|u − | 2 /|x|) dx ≥ −[4ǫ + (17/ǫ)(a 2 1 + β 2 1 )] ∫ R 2 |∇ A u − | 2 dx
and plugging this last bound in (5.4), we get

{1 − [2c + 2β 2 + 2a 2 + a 2 1 + b 2 + (b 2 + a 2 )(β 1 + a 1 ) + 4ǫ + (17/ǫ)(a 2 1 + β 2 1 )]} ‖∇ A u − ‖ 2 [L 2 (R 2 )] n ≤ 0.
From hypothesis (3.11), we therefore conclude that u = 0 as above.
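Since the number ǫ above is free, it is natural to ask which choice makes the extra contribution 4ǫ + (17/ǫ)(a₁² + β₁²) to the bracket smallest; the sketch below (with sample constants of our own) computes the minimizer.

```python
# The term 4*eps + (17/eps)*s with s = a1**2 + beta1**2 is minimized at
# eps = sqrt(17*s/4), with minimal value 2*sqrt(68*s).  Sample constants only.
import math

a1, beta1 = 0.05, 0.05
s = a1**2 + beta1**2
eps_opt = math.sqrt(17*s/4)
print(eps_opt, 4*eps_opt + 17*s/eps_opt, 2*math.sqrt(68*s))  # last two coincide
```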
Finally, we prove the two dimensional result in which the magnetic potential is fixed to be the Aharonov-Bohm one.
Proof of Theorem 3.3. As in the proof of Theorem 3.2, we need to estimate the term I defined in (5.5), which appears in (5.4). Notice that in this specific case (due to the triviality of the magnetic field everywhere except at the origin, see (3.19)), the constant c related to the smallness condition assumed for B does not appear in (5.4). In order to estimate I, we will use the following weighted Hardy inequality, which is also an improvement upon (3.10); it reads
∫ R 2 |x||∇ A ψ| 2 dx ≥ (1/4 + γ 2 ) ∫ R 2 (|ψ| 2 /|x|) dx, ∀ ψ ∈ C ∞ 0 (R 2 \ {0}), (5.11)
where γ := dist{ᾱ, Z} and ᾱ is as in (3.18) (see [15, Lem. 3] for a proof of (5.11)). A first application of (5.11) gives (Re λ) −1/2 |Im λ| I ≥ −(Re λ) −1/2 |Im λ|
(1/4 − γ 2 ) ∫ R 2 (|u − | 2 /|x|) dx, (5.12)
where we discarded the positive term in I involving the potential V (2) . Notice that since we are assuming ᾱ ∉ Z, then γ ∈ (0, 1/2], which gives 1/4 − γ 2 ≥ 0. Now, we proceed by estimating the term ∫ R 2 (|u − | 2 /|x|) dx. Given any positive number R, we write
R 2 |u − | 2 |x| dx = BR |u − | 2 |x| dx + R 2 \BR |u − | 2 |x| dx ≤ R BR |u − | 2 |x| 2 dx + 1 R R 2 |u − | 2 dx,
where, also here, B R denotes the open disk of radius R > 0. Choosing in the previous inequality R := ǫγ 2 (Re λ) 1/2 /|Im λ| with any positive constant ǫ, and multiplying the resulting estimate by the quantity (Re λ) −1/2 |Im λ| 1 4 − γ 2 , we get
(Re λ) −1/2 |Im λ| (1/4 − γ 2 ) ∫ R 2 (|u − | 2 /|x|) dx ≤ (1/4 − γ 2 ) [ǫγ 2 ∫ R 2 (|u − | 2 /|x| 2 ) dx + (1/(ǫγ 2 )) ∫ R 2 |Im V ||u − | 2 dx] ≤ (1/4 − γ 2 ) [ǫ + (a + β)/(ǫγ 3 )] ∫ R 2 |∇ A u − | 2 dx.
In the first inequality we have used the restriction to the sector |Im λ| ≤ Re λ, while in the second inequality we have used first the Hardy inequality (3.17) and then the hypotheses on the potential (3.21) together with the second inequality of (3.22). Plugging the last estimate in (5.12) and the resulting estimate in (5.4), and using an analogous reasoning as in Remark 3.1.4, give

{1 − [2β + 2a + a/γ + b 2 + (1/√γ)(b + a)(√a + β) + (1/4 − γ 2 )(ǫ + (a + β)/(ǫγ 3 ))]} ‖∇ A u − ‖ 2 [L 2 (R 2 )] n ≤ 0.
From hypothesis (3.20) we therefore conclude that u = 0 as above.
5.2 Self-adjoint case: Proof of Theorem 3.4
Now we prove the much simpler and less involved analogue of Theorem 3.1 for self-adjoint Schrödinger operators, namely Theorem 3.4.
Proof of Theorem 3.4. Let u be any weak solution to the eigenvalue equation (5.1), with V real-valued. The proof of this theorem is based exclusively on the identity (4.36). More precisely, using that V is real-valued, so necessarily Im λ = 0, from (4.36) (with f = 0) we get

2 ∫ R d |∇ A u j | 2 dx = −2 Im ∫ R d x · B · u j ∇ A u j dx + ∫ R d |x|∂ r V (2) |u j | 2 dx − d ∫ R d V (1) |u| 2 dx − 2 Re ∫ R d x · V (1) u j ∇ A u j dx. (5.13)

Observing that ∫ R d |x|∂ r V (2) |u j | 2 dx ≤ ∫ R d [|x|∂ r V (2) ] + |u j | 2 dx, using the Cauchy–Schwarz inequality and summing over j = 1, 2, . . . , n, one has

2 ‖∇ A u‖ 2 [L 2 (R d )] n ≤ 2 ( ∫ R d |x| 2 |B| 2 |u| 2 dx ) 1/2 ‖∇ A u‖ [L 2 (R d )] n + ∫ R d [|x|∂ r V (2) ] + |u| 2 dx + d ∫ R d |V (1) ||u| 2 dx + 2 ( ∫ R d |x| 2 |V (1) | 2 |u| 2 dx ) 1/2 ‖∇ A u‖ [L 2 (R d )] n .

Now, using (3.3), (3.7) and (3.25), one easily gets

[2 − (2c + b 2 + da 2 1 + 2a 2 )] ‖∇ A u‖ 2 [L 2 (R d )] n ≤ 0.

This immediately gives a contradiction in virtue of (3.24). This concludes the proof.
In passing, observe that here we did not need to split the proof by proving separately the absence of positive and of non-positive eigenvalues. Indeed, we obtained the absence of the whole point spectrum in just one step.
Remark 5.1 (Two-dimensional Pauli operators as a special case). One reason for investigating matrix self-adjoint Schrödinger operators in this work comes from our interest in pointing out a pathological behavior of the two-dimensional purely magnetic (and so self-adjoint) Pauli Hamiltonian. From the explicit expression (3.30) of the two-dimensional Pauli operators, the relation with the scalar Schrödinger operator −∇ 2 A + V (1) with V (1) := ±B 12 is evident.
In this specific situation identity (5.13), which was the crucial identity to prove absence of point spectrum in the self-adjoint situation, reads (after multiplying by 1/2)
∫ R 2 |∇ A u| 2 dx = − Im ∫ R 2 x · B 12 u∇ A u ⊥ dx − ∫ R 2 B 12 |u| 2 dx − Re ∫ R 2 x · B 12 u∇ A u dx.
We stress that, differently from the proof presented above, here the presence of the second term on the right-hand side involving the magnetic field does not allow us to get a contradiction. Indeed, roughly speaking, all the positivity coming from the left-hand side, which is customarily used to get the contradiction under the smallness assumption on the magnetic field, is exploited to control the second term on the right-hand side (due to inequality (3.31)); therefore, using (3.7), one is left with a term of the type −2c ‖∇ A u‖ 2 L 2 (R 2 ) ≤ 0, which leads to no contradiction, however small the constant c is chosen.
Absence of eigenvalues of Pauli and Dirac operators
This section is devoted to the proof of emptiness of the point spectrum of Pauli and Dirac Hamiltonians.
Warm-up in the 3d case
Even though the three-dimensional setting proposed in the introduction is clearly covered by the more general results Theorem 3.5 and Theorem 3.6, we decided to dedicate a separate section to the 3d case. Indeed, due to the physical relevance of this framework, we want to make it easier to spot the conditions which guarantee the absence of the point spectrum in this case, sparing the interested reader from working his/her way through the statements of the theorems in the general setting.
6.1.1 Absence of eigenvalues of Pauli operators: proof of Theorem 1.2

Let u be any weak solution to the eigenvalue equation

H P (A, V )u = λu, (6.1)

with H P (A, V ) defined as in (1.2) and where λ is any complex constant. Using (1.2) and the decomposition V = V (1) + V (2) , problem (6.1) can be written as an eigenvalue problem for matrix Schrödinger operators, namely H S (A, W )u = λu, with W = W (1) + W (2) , where W (2) := V (2) and W (1) collects V (1) together with the magnetic term σ · B, as in (6.2). In light of the assumptions in (1.8) about V (1) and B, which intrinsically are both full-subordination conditions to the magnetic Dirichlet form, it is indeed natural to treat V (1) and B in this unified way.

Assuming the hypotheses of Theorem 1.2 and using that |σ| = √3, due to the fact that the Pauli matrices have norm one, one easily verifies the required bound on W (1) . Hence, the hypotheses of Theorem 1.1 are satisfied (with W instead of V and with a + √3c as a replacement for a in (3.27)). From this we conclude the absence of eigenvalues of H S (A, W ) and, in turn, clearly of H P (A, V ), which is the thesis.

Now we are in a position to prove Theorem 1.3. As we will see, it follows as a consequence of the corresponding result for Pauli operators, namely Theorem 1.2. Let u be any solution to the eigenvalue equation H D (A)u = ku. More explicitly, using expression (1.6) and defining u 1,2 := (u 1 , u 2 ) and u 3,4 := (u 3 , u 4 ), the two-vectors with components respectively the first and the second component of u = (u 1 , u 2 , u 3 , u 4 ), and the third and the fourth, one gets that u 1,2 and u 3,4 satisfy

H P (A)u 1,2 + (1/4) u 1,2 = k 2 u 1,2 , H P (A)u 3,4 + (1/4) u 3,4 = k 2 u 3,4 .

In other words, the two-vectors u 1,2 and u 3,4 are solutions to the eigenvalue problems associated to the shifted Pauli operator H P (A) + 1/4 with eigenvalues k 2 . Notice that since (1.13) holds for any u = (u 1 , u 2 , u 3 , u 4 ), in particular it holds for the four-vectors (u 1 , u 2 , 0, 0) and (0, 0, u 3 , u 4 ). This fact implies that the second condition in (1.8) of Theorem 1.2 holds with the same constant c as in (1.13). This means that we are in the hypotheses of Theorem 1.2 (once we set a purely magnetic framework, namely V = 0), so H P (A) has no eigenvalues. As a consequence, the shifted operator H P (A) + (1/4) I C 2 has no eigenvalues either. Hence u 1,2 and u 3,4 are vanishing and with them u = (u 1,2 , u 3,4 ) itself. This concludes the proof of Theorem 1.3.

Absence of eigenvalues of Pauli operators in any dimension

Now we are in a position to prove the general Theorem 3.5.

Proof of Theorem 3.5. We divide the proof depending on the parity of the space dimension.

Odd dimensions. In odd dimensions, the proof follows the same scheme as the one presented in the three-dimensional case. Looking at expression (2.12) and using the decomposition V = V (1) + V (2) , one defines W = W (1) + W (2) with a suitable W (1) and W (2) = V (2) . It is easy to see that the required subordination bound for W (1) holds, where we have used the validity of (3.27) and the fact that |a| = √d (see Remark 2.1). Thus, the proof follows exactly as the one of Theorem 1.2 using, this time, the general result for Schrödinger operators, Theorem 3.1.

Even dimensions. Let u be any solution to the eigenvalue problem H even P (A, V )u = λu, where H even P (A, V ) is defined in (2.14) and λ is any complex constant. In passing notice that according to (2.15), since d is even, then n ′ (d) = n(d). Defining u up := (u 1 , u 2 , . . . , u n(d)/2 ) and u down := (u n(d)/2+1 , u n(d)/2+2 , . . . , u n(d) ), the n(d)/2-vectors with components respectively the first half and the second half of the components of u = (u 1 , u 2 , . . . , u n(d) ), one gets that u up and u down solve eigenvalue problems for matrix Schrödinger operators with suitable potentials W up and W down . Notice that here we have also used that the components V (1) and V (2) of V = V (1) + V (2) are diagonal by hypothesis. It is easy to see that the corresponding subordination bounds hold, where we have used (3.27) for the vectors (u up , 0) and (0, u down ), respectively, and the fact that |a| = √d. This means that we are in the hypotheses of Theorem 3.1 (once we replace V with W up and W down and with a + (d/2)c instead of a 2 in (3.3)), and therefore H S (A, W up ) and H S (A, W down ) have no eigenvalues. Hence u up and u down are vanishing and with them u = (u up , u down ).

This concludes the proof of Theorem 3.5.

Absence of eigenvalues of Dirac operators in any dimension

Now we can conclude our discussion by proving the absence of eigenvalues of Dirac operators in the general case, namely by proving Theorem 3.6. Let us start by commenting on the odd-dimensional case. Due to expression (2.11) for the squared Dirac operator in odd dimensions and due to the analogy with (1.6) in the three-dimensional case, one can proceed as in the proof of Theorem 1.3, using the validity of the corresponding result for Pauli operators, Theorem 3.5, to get the result.

Turning to the even-dimensional situation, one realises from (2.13) that the squared Dirac operator equals a shifted Pauli operator. Therefore Theorem 3.6 follows as a consequence of Theorem 3.5 for even Pauli operators.

Acknowledgment. The first author (L.C.) gratefully acknowledges financial support by the Deutsche Forschungsgemeinschaft (DFG) through CRC 1173. The research of the second author (D.K.) was partially supported by the GACR grant No. 18-08835S.
References

[1] Y. Aharonov and A. Casher, Ground state of a spin-1/2 charged particle in a two-dimensional magnetic field, Phys. Rev. A (3) 19 (1979), no. 6, 2461-2462.

[2] A. Balinsky, A. Laptev and A. Sobolev, Generalized Hardy inequality for the magnetic Dirichlet forms, J. Stat. Phys. 116 (2004), 507-521.

[3] C. Cazacu and D. Krejčiřík, The Hardy inequality and the heat equation with magnetic field in any dimension, Comm. Partial Differential Equations 41 (2016), no. 7, 1056-1088.

[4] L. Cossetti, Uniform resolvent estimates and absence of eigenvalues for Lamé operators with subordinated complex potentials, J. Math. Anal. Appl. 455 (2017), 336-360.

[5] L. Cossetti and D. Krejčiřík, Absence of eigenvalues of non-self-adjoint Robin Laplacians on the half-space, arXiv:1812.05348 [math.SP] (2018).

[6] J.-C. Cuenin, Estimates on complex eigenvalues for Dirac operators on the half-line, Integral Equ. Oper. Theory 79 (2014), 377-388.

[7] J.-C. Cuenin, Eigenvalue bounds for Dirac and fractional Schrödinger operators with complex potentials, J. Funct. Anal. 272 (2017), 2987-3018.

[8] J.-C. Cuenin, A. Laptev, and Ch. Tretter, Eigenvalue estimates for non-selfadjoint Dirac operators on the real line, Ann. Henri Poincaré 15 (2014), 707-736.

[9] J.-C. Cuenin and P. Siegl, Eigenvalues of one-dimensional non-selfadjoint Dirac operators and applications, Lett. Math. Phys. 108 (2018), 1757-1778.

[10] H. L. Cycon, R. G. Froese, W. Kirsch, and B. Simon, Schrödinger operators, with application to quantum mechanics and global geometry, Springer-Verlag, Berlin, 1987.

[11] C. Dubuisson, On quantitative bounds on eigenvalues of a complex perturbation of a Dirac operator, Integral Equ. Oper. Theory 78 (2014), 249-269.

[12] A. Enblom, Resolvent estimates and bounds on eigenvalues for Dirac operators on the half-line, J. Phys. A: Math. Theor. 51 (2018), 165203.

[13] L. C. Evans, Partial differential equations, Graduate Studies in Mathematics, vol. 19, American Mathematical Society, Providence, RI, 1998.

[14] L. Fanelli and D. Krejčiřík, Location of eigenvalues of three-dimensional non-self-adjoint Dirac operators, Lett. Math. Phys. 109 (2019), 1473-1485.

[15] L. Fanelli, D. Krejčiřík, and L. Vega, Absence of eigenvalues of two-dimensional magnetic Schrödinger operators, J. Funct. Anal. 275 (2018), 2453-2472.

[16] L. Fanelli, D. Krejčiřík, and L. Vega, Spectral stability of Schrödinger operators with subordinated complex potentials, J. Spectr. Theory 8 (2018), 575-604.

[17] R. L. Frank, S. Morozov, and S. Vugalter, Weakly coupled bound states of Pauli operators, Calc. Var. Partial Differential Equations 40 (2011), no. 1-2, 253-271.

[18] T. Kato, Perturbation theory for linear operators, Springer-Verlag, Berlin, 1966.

[19] T. Kato, Schrödinger operators with singular potentials, Israel J. Math. 13 (1972), 135-148.

[20] A. Laptev and T. Weidl, Hardy inequalities for magnetic Dirichlet forms, Oper. Theory Adv. Appl. 108 (1999), 299-305.

[21] G. Leoni, A first course in Sobolev spaces, American Mathematical Society, Providence, RI, 2009.

[22] E. H. Lieb and M. Loss, Analysis, American Mathematical Society, Providence, Rhode Island, 1997.

[23] M. Reed and B. Simon, Methods of modern mathematical physics, I. Functional analysis, Academic Press, New York, 1972.

[24] M. Reed and B. Simon, Methods of modern mathematical physics, IV. Analysis of operators, Academic Press, New York, 1978.

[25] D. Sambou, A criterion for the existence of nonreal eigenvalues for a Dirac operator, New York J. Math. 22 (2016), 469-500.

[26] D. Sambou, A simple criterion for the existence of nonreal eigenvalues for a class of 2D and 3D Pauli operators, Linear Algebra Appl. 529 (2017), 51-88.

[27] B. Thaller, The Dirac equation, Springer-Verlag, Berlin Heidelberg, 1992.

[28] T. Weidl, Remarks on virtual bound states for semi-bounded operators, Comm. Partial Differential Equations 24 (1999), no. 1-2, 25-60.
COVERING CERTAIN WREATH PRODUCTS WITH PROPER SUBGROUPS

Martino Garonzi and Attila Maróti

22 Nov 2012

Abstract. For a non-cyclic finite group X let σ(X) be the least number of proper subgroups of X whose union is X. Precise formulas or estimates are given for σ(S ≀ C m ) for certain nonabelian finite simple groups S where C m is a cyclic group of order m.
Introduction
For a non-cyclic finite group X let σ(X) be the least number of proper subgroups of X whose union is X. Let S be a nonabelian finite simple group, let Σ be a nonempty subset of S, and let m be a positive integer. Let α(m) be the number of distinct prime divisors of m. Let M be a nonempty set of maximal subgroups of S satisfying certain properties, referred to as Conditions (0)-(5) in what follows (provided that such an M exists). Let N denote a covering for S, that is, a set of proper subgroups of S whose union is S. We state and prove two direct consequences of Theorem 1.1.
Let A n be the alternating group of degree n where n is at least 5. The ideas of the proof of Theorem 1.1 together with the ideas in [9] can be used to find a formula and some estimates for σ(A n ≀ C m ) in various cases. In some sense Theorem 1.4 extends a theorem of [9], namely that 2 n−2 ≤ σ(A n ) if n > 9 with equality if and only if n is congruent to 2 modulo 4.
Finally we show the following result using the ideas of Theorem 1.1. Theorem 1.1 and Corollaries 1.2, 1.3 are independent from the Classification of Finite Simple Groups (CFSG). Theorems 1.4 and 1.5 do depend on CFSG, but with more work using [10] instead of [8] one can omit CFSG from the proofs.
There are many papers on the topic of covering groups with proper subgroups. The first of these works [11] appeared in 1926. The systematic study of the invariant σ(X) was initiated in [3]. Since then a lot of papers appeared in this subject including [12], [5], and [7].
A finite group X is called σ-elementary (or σ-primitive) if for any proper, nontrivial normal subgroup N of X we have σ(X) < σ(X/N ). σ-elementary groups play a crucial role in determining when σ(X) can equal a given positive integer n for some finite group X. The groups we consider in this paper are σ-elementary. Giving good lower bounds for σ(X) for σ-elementary groups X will help answer the problem of what the density of those positive integers n is for which there exists a finite group G with n = σ(G).
On subgroups of product type
Let S be a nonabelian finite simple group, and let G = S ≀ C m be the wreath product of S with the cyclic group C m of order m. Denote by γ a generator of C m . If M is a maximal subgroup of S and g 1 , . . . , g m are elements of S, the normalizer in G of M g1 × · · · × M gm ≤ S m = soc(G) is called a subgroup of product type. A subgroup of product type is maximal in G (but we will not use this fact in the paper). In the following let the subscripts of the g i 's and the x i 's be modulo m.
Lemma 2.1. Let M be a maximal subgroup of S, and let k ∈ {1, . . . , m − 1}. Let g 1 , . . . , g m be elements of S with g 1 = 1. Choose γ := (1, 2, . . . , m). The element
(x 1 , . . . , x m )γ k belongs to N G (M × M g2 × · · · × M gm ) if and only if x i−k ∈ g −1 i−k M g i ∀i = 1, . . . , m.
In particular, if t is any positive integer at most m and (x 1 , . . . , x m )γ k belongs to N G (M × M g2 × · · · × M gm ), then
x t x k+t x 2k+t · · · x (l−1)k+t ∈ M gt , where l = m/(m, k). Proof. The element (x 1 , . . . , x m )γ k normalizes M g1 × M g2 × · · · × M gm if and only if (M g1x1 × M g2x2 × . . . × M gmxm ) γ k = M g1 × M g2 × · · · × M gm .
The permutation γ k sends i to i + k modulo m, so the condition becomes the following
M g 1−k x 1−k × M g 2−k x 2−k × · · · × M g m−k x m−k = M g1 × M g2 × . . . × M gm .
That is,
g i−k x i−k g −1 i ∈ M ∀i = 1, . . . , m. Multiplying on the right by g i and on the left by g −1 i−k we obtain
x i−k ∈ g −1 i−k M g i ∀i = 1, . . . , m.
Let t be a positive integer at most m. The line with x t on the left-hand side says that x t ∈ g −1 t M g k+t ; the line with x k+t on the left-hand side says that x k+t ∈ g −1 k+t M g 2k+t , and so on. By multiplying these together in this order we obtain that x t x t+k x t+2k · · · x t+(l−1)k ∈ M gt , where l is the smallest number at most m such that m divides lk, that is, l = m/(m, k).
3. An upper bound for σ(S ≀ C m ) Proposition 3.1. Let S be a nonabelian finite simple group, let N denote a covering for S, let m be a fixed positive integer, and let α(m) denote the number of distinct prime factors of m. Then
σ(S ≀ C m ) ≤ α(m) + min N Σ M ∈N |S : M | m−1 .
Proof. The bound is clearly true for m = 1. Assume that m > 1.
The idea is to construct a covering of S ≀ C m which consists of exactly
α(m) + min N Σ M ∈N |S : M | m−1
proper subgroups.
There are α(m) maximal subgroups of the group S ≀ C m containing its socle. Choose all of these to be in the covering. Then we are left to cover all elements of the form (x 1 , . . . , x m )γ k where the x i 's are elements of S, where C m = γ , and k is coprime to m. It suffices to show that such elements can be covered by the subgroups of the form
N G (M × M g2 × M g3 × · · · × M gm )
where M varies in a fixed cover N of S and the g i 's vary in S, because for each fixed M in N we have |S : M | choices for M gi for each i ∈ {2, . . . , m}.
By Lemma 2.1, (x 1 , . . . , x m )γ k belongs to N G (M × M g2 × · · · × M gm ) if and only if
x i−k ∈ g −1 i−k M g i ∀i = 1, . . . , m, with g 1 = 1. The first condition is x 1−k ∈ g −1 1−k M . Choose g 1−k = x −1 1−k . Then move to the condition x j−k ∈ g −1 j−k M g j with j = 1 − k, i.e. x 1−2k ∈ g −1 1−2k M g 1−k , and rewrite it using the information
g 1−k = x −1 1−k : get x 1−2k x 1−k ∈ g −1 1−2k M . Choose g 1−2k = x −1 1−k x −1 1−2k .
Continue this process for m/(m, k) = m iterations, using Lemma 2.1 (recall that m is coprime to k). Choose
g 1−jk = x −1 1−k x −1 1−2k · · · x −1 1−jk , ∀j = 1, . . . , m − 1.
At the m-th time we get the relation
x 1−mk x 1−(m−1)k · · · x 1−2k x 1−k ∈ g −1 1−mk M.
But g 1−mk = g 1 ∈ M , so to conclude it suffices to choose an M from N which contains the element x 1−mk x 1−(m−1)k · · · x 1−2k x 1−k .
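To see the bound in action, the following sketch (ours) evaluates it for S = A 5 and m = 2, taking for N simply the set of all maximal subgroups of A 5 , which is a covering since every element lies in a maximal subgroup; we use the standard facts that A 5 has 5 conjugates of A 4 (index 5), 6 conjugates of D 10 (index 6) and 10 conjugates of S 3 (index 10).

```python
# Worked instance of Proposition 3.1 for S = A5, m = 2 (a sketch; the covering
# and the conjugacy counts are the standard facts about A5 stated above).
def alpha(m):                       # number of distinct prime divisors of m
    count, p = 0, 2
    while m > 1:
        if m % p == 0:
            count += 1
            while m % p == 0:
                m //= p
        p += 1
    return count

m = 2
indices = [5]*5 + [6]*6 + [10]*10   # |S : M| over all maximal subgroups M
print(alpha(m) + sum(i**(m - 1) for i in indices))   # 1 + 25 + 36 + 100 = 162
```

So σ(A 5 ≀ C 2 ) ≤ 162 with this particular choice of N; a smaller covering would give a smaller bound.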
On subgroups of diagonal type
Let S be a nonabelian finite simple group. Let m be a positive integer at least 2 and let t be a divisor of m which is less than m. For positive integers i and j with 1 ≤ i ≤ t and 2 ≤ j ≤ m/t let ϕ i,j be an automorphism of S. For simplicity, let us denote the matrix (ϕ i,j ) i,j by ϕ. Let ∆ ϕ be the subgroup of soc(G) = S m consisting of all elements of the form (x 1 , . . . , x t , ϕ 1,2 (x 1 ), . . . , ϕ t,2 (x t ), . . . , ϕ 1,m/t (x 1 ), . . . , ϕ t,m/t (x t )) with x 1 , . . . , x t ∈ S. The normalizer N G (∆ ϕ ) is called a subgroup of diagonal type. Consider the restriction to N G (∆ ϕ ) of the natural projection of G onto C m . Any element of C m has preimage of size at most |∆ ϕ | ≤ |S| m/ℓ where ℓ is the smallest prime divisor of m.
Definite unbeatability
The following definition was introduced in [9].

Definition 5.1. Let X be a finite group, let Π be a subset of X, and let H be a set of proper subgroups of X. Suppose that the following four conditions hold: (1) Π ∩ H ≠ ∅ for every H ∈ H; (2) every element of Π is contained in some member of H; (3) Π ∩ H 1 ∩ H 2 = ∅ for any two distinct subgroups H 1 and H 2 in H; (4) |Π ∩ K| ≤ |Π ∩ H| for every H ∈ H and every maximal subgroup K of X with K ∉ H. Then H is said to be definitely unbeatable on Π.
For Π ⊆ X let σ(Π) be the least cardinality of a family of proper subgroups of X whose union contains Π. The next lemma is straightforward so we state it without proof.
Lemma 5.2. If H is definitely unbeatable on Π then σ(Π) = |H|.
It follows that if H is definitely unbeatable on Π then |H| = σ(Π) ≤ σ(X).
Proof of Theorem 1.1
In this section we prove Theorem 1.1.
By Proposition 3.1, it is sufficient to show the lower bound of the statement of Theorem 1.1.
Fix a positive integer m at least 2, let S be a nonabelian finite simple group, and let Σ and M be as in the Introduction (satisfying conditions (0)-(5)). As before,
let G = S ≀ C m .
Let Π 1 be the set consisting of all elements (x 1 , . . . , x m )γ of G with the property that x 1 · · · x m ∈ Σ and let H 1 be the set consisting of all subgroups
N G (M × M g2 × · · · × M gm ) with the property that M ∈ M. For fixed M ∈ M put Σ M = Σ ∩ ∪ s∈S M s .
Note that, by Conditions (0) and (3) of the Introduction, Σ M ∩ Σ K = ∅ if M and K are non-conjugate elements of M. Let Π 2 be the set consisting of all elements (x 1 , . . . , x m )γ r of G with the property that r is a prime divisor of m and that x 1 x r+1 · · · x m−r+1 is in Σ M and x 2 x r+2 · · · x m−r+2 is in Σ K where M and K are not conjugate in S. Finally, let H 2 be the set consisting of all maximal subgroups of G containing the socle of G. Put Π = Π 1 ∪ Π 2 and H = H 1 ∪ H 2 . By Lemma 5.2 and the remark following Lemma 5.2, the following proposition finishes the proof of Theorem 1.1.
Proposition 6.1. The set H of subgroups of G is definitely unbeatable on Π.
Proof. In this paragraph let us prove Condition (1) of Definition 5.1. Let H be an arbitrary subgroup in H 1 . Suppose that H = N G (M × M g2 × · · · × M gm ) for some M ∈ M and g 2 , . . . , g m ∈ S. Let π be an element of Σ ∩ M . (Such an element exists by Condition (1) of the Introduction.) Let
x 1 = g 2 , x 2 = g −1 2 g 3 , . . . , x m−1 = g −1 m−1 g m , and x m = x −1 m−1 · · · x −1 2 x −1 1 π.
Then, by Lemma 2.1, the element (x 1 , . . . , x m )γ is in H (and also in Π 1 ). Let H be an arbitrary subgroup in H 2 . Let the index of H in G be r for some prime divisor r of m. Then H contains every element of Π 2 of the form (x 1 , . . . , x m )γ r .
In this paragraph let us prove Condition (2) of Definition 5.1. Let (x 1 , . . . , x m )γ be an arbitrary element of Π 1 . We will show that there exists an H ∈ H 1 which contains (x 1 , . . . , x m )γ. We know that x 1 x 2 · · · x m ∈ Σ. By Condition (2) of the Introduction, we see that there exists an M ∈ M with the property that
x 1 x 2 · · · x m ∈ M . Now let g 2 = x 1 , g 3 = x 1 x 2 , . . . , g m = x 1 x 2 · · · x m−1 . Then H = N G (M × M g2 × · · · × M gm ) contains (x 1 , . . . , x m )γ by Lemma 2.1. Now let (x 1 , . . . , x m )γ r be an arbitrary element of Π 2 . This is contained in the maximal subgroup H of index r in G containing the socle of G. We see that H belongs to H 2 .
Now we show that Condition (3) of Definition 5.1 is satisfied. Notice that, by construction (by the second half of Lemma 2.1 and by Condition (4) of the Introduction), Π 1 ∩ H 2 = ∅ and Π 2 ∩ H 1 = ∅ for every H 1 ∈ H 1 and H 2 ∈ H 2 . Hence it is sufficient to show that Π 1 ∩ H 1 ∩ H 2 = ∅ for distinct subgroups H 1 and H 2 in H 1 and also that Π 2 ∩ H 1 ∩ H 2 = ∅ for distinct subgroups H 1 and H 2 in H 2 . The latter claim is clear by considering the projection map from G to C m , hence it is sufficient to show the former claim. First notice that if M and K are two distinct elements of M and g 2 , . . . , g m , k 2 , . . . , k m are arbitrary elements of S, then
Π 1 ∩ N G (M × M g2 × · · · × M gm ) ∩ N G (K × K k2 × · · · × K km ) = ∅,
by Lemma 2.1 and by Condition (3) of the Introduction. Finally let M be fixed and suppose that

Π 1 ∩ N G (M × M g2 × · · · × M gm ) ∩ N G (M × M k2 × · · · × M km ) ≠ ∅

for some elements g 2 , . . . , g m , k 2 , . . . , k m of S. Then by Lemma 2.1, for every index i with 2 ≤ i ≤ m, we have M g i = M k i (just consider the products x 1 · · · x j for all positive integers j with 1 ≤ j ≤ m − 1 where (x 1 , . . . , x m )γ is an element of the intersection), hence the two subgroups coincide.

To show that Condition (4) of Definition 5.1 is satisfied, it is necessary to make three easy observations based on the following folklore lemma.
Lemma 6.2. A maximal subgroup of G = S ≀ C m either contains the socle of G, is of product type, or is of diagonal type.
If L is a maximal subgroup of G containing the socle of G then

|Π ∩ L| = Σ |Σ ∩ M 1 ||Σ ∩ M 2 | |S| m−2 ,

where the sum is over all pairs (M 1 , M 2 ) ∈ M 2 such that M 1 is not conjugate to M 2 in S. If L is of product type, then |Π ∩ L| = |Σ ∩ M ||M | m−1 where M is such that L = N G (M × M g2 × · · · × M gm ) for some elements g 2 , . . . , g m of S. Finally if L is of diagonal type, then |Π ∩ L| ≤ (1 + α(m))|S| m/ℓ where ℓ is the smallest prime divisor of m. Hence Condition (4) of Definition 5.1 follows from Condition (5) of the Introduction.

Proof of Corollary 1.2

Corollary 1.2 is clear for m = 1 by [9], so let us assume that m ≥ 2.
Let M be the set of all 11 conjugates of the maximal subgroup M 10 of M 11 together with all 12 conjugates of the maximal subgroup P SL(2, 11) of M 11 . It is easy to check that M is a covering for M 11 , hence, by the upper bound of Theorem
1.1, we have σ(M 11 ≀ C m ) ≤ α(m) + 11 m + 12 m .
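The two conjugacy counts can be double-checked from the subgroup orders alone, since maximal subgroups of a simple group are self-normalizing, so the number of conjugates equals the index; a one-line check:

```python
# |M11| = 7920, |M10| = 720, |PSL(2,11)| = 660; the indices give the numbers
# of conjugates (each maximal subgroup of the simple group M11 is
# self-normalizing).
print(7920 // 720, 7920 // 660)   # 11 and 12
```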
Let Σ be the subset of M 11 consisting of all elements of orders 8 or 11. To prove Corollary 1.2 it is sufficient to show that Σ and M satisfy the six conditions of the statement of Theorem 1.1.
By [6] we know that the maximal subgroups of M 11 are: M 10 , P SL(2, 11), M 9 : 2, S 5 , and M 8 : S 3 , and that for these we have the following: every element of Σ of order 11 lies in a conjugate of P SL(2, 11), every element of Σ of order 8 lies in a conjugate of M 10 , and |Σ ∩ M 10 | = 180, |Σ ∩ P SL(2, 11)| = 120; moreover, among the maximal subgroups H not in M, the quantity |Σ ∩ H||H| m−1 is largest for H = M 9 : 2, with |Σ ∩ (M 9 : 2)| = 36 and |M 9 : 2| = 144. This shows that the first five conditions of the statement of Theorem 1.1 are verified. Now let us compute the four expressions involved in Condition (5).
• (1 + α(m))|S| m/ℓ ≤ (1 + α(m))|S| m/2 = (1 + α(m))( √ 7920) m ; • max H ∈M, H<S |Σ ∩ H||H| m−1 = 36 · 144 m−1 ; • ( |Σ ∩ M 1 ||Σ ∩ M 2 |)|S| m−2 = 2 · 132 · 180 · 120 · 7920 m−2 since we have 2 · 12 · 11 = 2 · 132 choices for the pair (M 1 , M 2 ); • min M∈M |Σ ∩ M ||M | m−1 = 120 · 660 m−1 .
We have then to prove that max((1 + α(m))7920 m/2 , 36 · 144 m−1 ) ≤ ≤ min(2 · 132 · 180 · 120 · 7920 m−2 , 120 · 660 m−1 ).
Clearly the right-hand side is 120 · 660 m−1 and it is bigger than 36 · 144 m−1 , so we have to prove that (1 + α(m))7920 m/2 ≤ 120 · 660 m−1 .
After rearranging, taking roots, and using the fact that (1 + α(m)) 1/m ≤ √ 2 we obtain that it suffices to prove the inequality
√ 2 √ 7920 660 ≤ 120 660 1/m .
Since the right-hand side of the previous inequality is increasing with m, it suffices to assume that m = 2. But then the inequality becomes clear.
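The chain of inequalities above can also be confirmed numerically for small m; the following sketch (ours) tests Condition (5) directly for m = 2, . . . , 12.

```python
# Direct numerical test of max(...) <= min(...) from Condition (5) for M11.
def alpha(m):                        # number of distinct prime divisors of m
    count, p = 0, 2
    while m > 1:
        if m % p == 0:
            count += 1
            while m % p == 0:
                m //= p
        p += 1
    return count

for m in range(2, 13):
    left  = max((1 + alpha(m))*7920**(m/2), 36*144**(m - 1))
    right = min(2*132*180*120*7920**(m - 2), 120*660**(m - 1))
    assert left <= right, m
print("Condition (5) verified for m = 2, ..., 12")
```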
Proof of Corollary 1.3
Note that Corollary 1.3 is clear for m = 1 by [2].
Let p ≥ 11 be a prime and assume that the smallest prime divisor ℓ of m is at least 5.
Let M be the set of all p + 1 conjugates of the maximal subgroup C p ⋊ C (p−1)/2 of P SL(2, p) together with all p(p − 1)/2 conjugates of the maximal subgroup D p+1 of P SL(2, p). It is easy to check that M is a covering for P SL(2, p), hence, by the upper bound of Theorem 1.1, we have σ(P SL(2, p) ≀ C m ) ≤ α(m) + (p + 1) m + (p(p − 1)/2) m .
Let Σ 1 ⊆ P SL(2, p) be a set of p 2 − 1 elements each of order p with the property that every element of Σ 1 fixes a unique point on the projective line and that (Σ 1 ∩ M )∪{1} is a group of order p for every conjugate M of C p ⋊C (p−1)/2 . Let Σ 2 be the set of all irreducible elements of P SL(2, p) of order (p + 1)/2. Put Σ = Σ 1 ∪ Σ 2 . To prove Corollary 1.3 it is sufficient to show that Σ and M satisfy the six conditions of the statement of Theorem 1.1.
By [4] the maximal subgroups of P SL(2, p) are the following.
• C p ⋊ C (p−1)/2 ; • D p−1 if p ≥ 13; • D p+1 ; • A 5 , A 4
, and S 4 for certain infinite families of p.
Since p ≥ 11, no element of Σ is contained in a subgroup of the form A 5 , A 4 , or S 4 . Moreover since (p + 1)/2 and p do not divide p − 1, no element of Σ is contained in a subgroup of the form D p−1 . Similarly, it is easy to see that no element of Σ 1 is contained in a conjugate of D p+1 and no element of Σ 2 is contained in a conjugate of C p ⋊ C (p−1)/2 . By the above and by a bit more, it follows that the first five conditions of the statement of Theorem 1.1 hold. Now let us compute the four expressions involved in Condition (5).
But before we do so, let us note two things. If M is a maximal subgroup of the form D p+1 , then |Σ ∩ M | = ϕ((p + 1)/2) where ϕ is Euler's function. Moreover, if M is conjugate to C p ⋊ C (p−1)/2 , then |Σ ∩ M | = p − 1.
• (1 + α(m))|S| m/ℓ ≤ (1 + α(m))((1/2)p(p 2 − 1)) m/5 ;
• max H ∈M |Σ ∩ H||H| m−1 = 0; • ( |Σ ∩ M 1 ||Σ ∩ M 2 |)|S| m−2 = = 2(p + 1)(p(p − 1)/2)ϕ((p + 1)/2)(p − 1)((1/2)p(p 2 − 1)) m−2 ; • min M∈M |Σ ∩ M ||M | m−1 = = min(ϕ((p + 1)/2)(p + 1) m−1 , (p − 1)(p(p − 1)/2) m−1 ) = = ϕ((p + 1)/2)(p + 1) m−1 .
We are easily reduced to prove the following inequality
(1 + α(m))(p(p 2 − 1)/2) m/5 ≤ ϕ((p + 1)/2)(p + 1) m−1 .
Using the fact that (1 + α(m)) 1/m ≤ √ 2 we obtain that it suffices to show that
√ 2 (p(p 2 − 1)/2) 1/5 p + 1 ≤ ϕ((p + 1)/2) p + 1 1/m .
Since the right-hand side is increasing with m, it suffices to assume that m = 5. By taking 5-th powers of both sides we obtain
2 √ 2p(p 2 − 1) ≤ (p + 1) 4 .
But this is clearly true for p ≥ 11.
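The final inequality is easily confirmed numerically; for large p the left-hand side grows like p³ against p⁴ on the right. A quick check (ours):

```python
# Check 2*sqrt(2)*p*(p^2 - 1) <= (p + 1)^4 for all primes 11 <= p < 1000.
import math

def is_prime(n):
    return n > 1 and all(n % q for q in range(2, int(n**0.5) + 1))

for p in filter(is_prime, range(11, 1000)):
    assert 2*math.sqrt(2)*p*(p**2 - 1) <= (p + 1)**4, p
print("verified")
```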
Alternating groups
From this section on we will deal with the special case when S is the alternating group A n . We will repeat some of the definitions in more elaborate form.
For each positive integer n ≥ 5 which is not a prime we define a subset Π 0 of A n and a set H 0 of maximal subgroups of A n . (These sets Π 0 and H 0 will be close to the sets Σ and M of the Introduction.) Let n be odd (and not a prime). In this case let Π 0 be the set of all n-cycles of A n and let H 0 be the set of all maximal subgroups of A n conjugate to (S n/p ≀ S p ) ∩ A n where p is the smallest prime divisor of n.
Let n be divisible by 4. In this case let Π 0 be the set of all (i, n − i)-cycles of A n (permutations of A n which are products of two disjoint cycles one of length i and one of length n − i) for all odd i with i < n/2 and let H 0 be the set of all maximal subgroups of A n conjugate to some group of the form (S i × S n−i ) ∩ A n for some odd i with i < n/2.
Let n be congruent to 2 modulo 4. In this case let Π 0 be the set of all (i, n − i)cycles of A n for all odd i with i ≤ n/2 and let H 0 be the set of all maximal subgroups of A n conjugate to some group of the form (S i × S n−i ) ∩ A n for some odd i with i < n/2 or conjugate to (S n/2 ≀ S 2 ) ∩ A n .

Theorem 9.1 (Maróti, [9]). With the notations above H 0 is definitely unbeatable on Π 0 provided that n ≥ 16.
Wreath products
Let m be a fixed positive integer (which can be 1). Let G = A n ≀ C m and let γ be a generator of C m . Let Π 1 be the set consisting of all elements (x 1 , . . . , x m )γ of G with the property that x 1 · · · x m ∈ Π 0 and let H 1 be the set consisting of all subgroups N G (M × M g2 × · · · × M gm ) with the property that M ∈ H 0 . If m = 1, then set Π = Π 1 and H = H 1 . From now on, only in the rest of this paragraph, suppose that m > 1. For n odd let Π 2 be the set consisting of all elements (x 1 , . . . , x m )γ r of G with the property that r is a prime divisor of m and that x 1 x r+1 · · · x m−r+1 is an n-cycle and
x 2 x r+2 · · · x m−r+2 is an (n − 2)-cycle. For fixed M ∈ H 0 put Π 0,M = Π 0 ∩ g∈An M g .
(Depending on M (and on the parity of n) Π 0,M is the set of n-cycles or the set of (i, n − i)-cycles with i ≤ n/2 contained in the union of all conjugates of some M in H 0 .) For n even let Π 2 be the set consisting of all elements (x 1 , . . . , x m )γ r of G with the property that r is a prime divisor of m and that x 1 x r+1 · · · x m−r+1 ∈ Π 0,M and x 2 x r+2 · · · x m−r+2 ∈ Π 0,K where M and K are not conjugate in A n . Finally, let H 2 be the set consisting of all maximal subgroups of G containing the socle of G. Put Π = Π 1 ∪ Π 2 and H = H 1 ∪ H 2 .
Proposition 10.1. If m = 1, then H is definitely unbeatable on Π for n ≥ 16. If m > 1, then H is definitely unbeatable on Π for n > 12 provided that n has a prime divisor at most 3 √ n.
For m = 1 there is nothing to show. Suppose that m > 1.
Along the lines of the ideas in Section 6, it is possible (and easy) to show that Π and H satisfy Conditions (1), (2), and (3) of Definition 5.1. (Condition (3) of Definition 5.1 is satisfied since, for example for n odd, no conjugate of (S n/p ≀S p )∩A n contains an (n−2)-cycle where p is the smallest prime divisor of n.) Hence, to prove Proposition 10.1, it is sufficient to verify Condition (4) of Definition 5.1. This will be done in the next three sections.
Some preliminary estimates
Some of the following lemma depends on the fact that a!(n − a)! ≥ b!(n − b)! whenever a and b are integers with a ≤ b ≤ n/2. for H 1 ∈ H 1 . Let n be congruent to 2 modulo 4. Then
|Π ∩ H 1 | = |Π 1 ∩ H 1 | ≥ (1/2 m−1 )(((n/2) − 1)!) 2 ((n/2)!) 2m−2
for H 1 ∈ H 1 . Finally, let n be even. Then
|Π ∩ H 2 | = |Π 2 ∩ H 2 | ≥ 4 3(n − 1)(n − 3) |A n | m for H 2 ∈ H 2 .
Proof. This follows from the above and from the observations made when dealing with Condition (4) Proof.
(1) After rearranging, the inequality becomes n − 2 ≤ |S n : (S n/p ≀ S p )| m which is clearly true.
(2) After rearranging, the inequality becomes 6(n − 1)(n − 3) (n + 2)(n − 2) < 6 ≤ |S n | |S (n/2)−1 × S (n/2)+1 | m which is clearly true.
(3) After rearranging, the inequality becomes 6(n − 1)(n − 3) n 2 < 6 < n n/2 m which is clearly true.
The case when K is a subgroup of diagonal type
Let K be a subgroup of G of diagonal type. Note that K ∈ H. We would like to show that |Π∩K| ≤ |Π∩H| for every H ∈ H. We have |Π∩K| ≤ (1+α(m))|A n | m/2 .
We need Stirling's formula.
Theorem 12.1 (Stirling's formula). For all positive integers n we have √ 2πn(n/e) n e 1/(12n+1) < n! < √ 2πn(n/e) n e 1/(12n) .
The declared aim of proving the inequality |Π ∩ K| ≤ |Π ∩ H| for every H ∈ H is achieved through the next lemma. We also point out that the right-hand sides of the inequalities of the following lemma come from Section 11. (1) Let n be odd with smallest prime divisor p at most 3 √ n. Then
(1 + α(m))(n!/2) m/2 ≤ (1/(2 m−1 n)) (n/p)! p p! m .
(2) Let n be divisible by 4 and larger than 8. Then
(1 + α(m))(n!/2) m/2 ≤ (((n/2) − 2)!)((n/2)!) (((n/2) − 1)!)(((n/2) + 1)!) 2 m−1 .
(3) Let n be congruent to 2 modulo 4 and larger than 10. Then
(1 + α(m))(n!/2) m/2 ≤ (1/2 m−1 )(((n/2) − 1)!) 2 ((n/2)!) 2m−2 .
Proof.
(1) It is sufficient to show the inequality
n 2 (1 + α(m)) 2/m ≤ ((n/p)!) 2p p! 2 2n! .
For this it is sufficient to see that
n(1 + α(m)) 2/m ≤ ((n/p)!) 2p n! .
Substituting Stirling's formula (Theorem 12.1) on the right-hand side, we see that it is sufficient to show that n(1 + α(m)) 2/m ≤ (2π(n/p)) p (n/pe) 2n √ 2πn(n/e) n e 1/(12n) .
Since 3 ≤ p ≤ 3 √ n and e 1/(12n) < 2, it is sufficient to prove
n(1 + α(m)) 2/m ≤ (2πn 2/3 ) 3 (n 2/3 /e) 2n 2 √ 2πn(n/e) n .
Since (1 + α(m)) 2/m ≤ 2 it is sufficient to see that √ 2π
2π 3 ≤ n (1/3)n+(1/2) e n .
But this is true for n ≥ 27.
(2) After rearranging the inequality and taking roots we get
(1 + α(m)) 2/m (n!/2) ≤ 8
n 2 − 4 2/m ((n/2) − 1)!((n/2) + 1)! 2 2 .
Since (1 + α(m)) 2/m ≤ 2 and 8/(n 2 − 4) ≤ (8/(n 2 − 4)) 2/m , it is sufficient to see that n 2 − 4 2 n! ≤ (((n/2) − 1)!((n/2) + 1)!) 2 .
Since n (n/2)−1 ≤ 2 n−1 , it is sufficient to prove (n 2 − 4)2 n−2 ≤ ((n/2) − 1)!((n/2) + 1)!.
But this is true for n ≥ 12.
(3) After rearranging the inequality and taking roots we see that it is sufficient to show 4(1 + α(m)) 2/m (n/2) 4/m (n!/2) ≤ ((n/2)!) 4 .
Since (1 + α(m)) 2/m ≤ 2 and (n/2) 4/m ≤ (n/2) 2 , it is sufficient to see that
n 2 n! ≤ ((n/2)!) 4 .
But this can be seen by induction for n ≥ 14.
13. The case when K is a subgroup of product type
Let K be a subgroup of G of product type such that K ∈ H. We would like to show that |Π ∩ K| ≤ |Π ∩ H| for every H ∈ H.
Suppose that K = N G (M × M g2 × · · · × M gm ) where M is a maximal subgroup of A n . If M is an intransitive subgroup then Π ∩ K = ∅, by construction of Π and H, hence there is nothing to show in this case.
In the next paragraph and in Lemma 13.3 we will make use of the following fact taken from [8]. Now let n be even. In this case a ≥ 3.
Lemma 13.2. Let n be even and let a be the smallest divisor of n larger than 2.
If n > 10, then n((n/a)!) a a! ≤ 2((n/2)!) 2 .
Proof. If n = 2a, then we must consider the inequality 2 a ≤ (a − 1)!. This is clearly true if a satisfies a > 5, hence if n > 10. This means that we may assume that 3 ≤ a ≤ n/4.
The lemma is true for 10 < n ≤ 28 by inspection. From now on we assume that n ≥ 30.
Applying Stirling's formula (see Theorem 12.1), we see that it is sufficient to verify the inequality n( 2π(n/a)) a (n/ae) n e a 2 /(12n) √ 2πa(a/e) a e 1/(12a) ≤ 2πn(n/2e) n e 2/(6n+1) .
After rearranging factors we obtain 2 n (2π(n/a)) a/2 e a 2 /(12n) √ 2πa(a/e) a e 1/(12a) ≤ a n 2πe 2/(6n+1) .
After taking natural logarithms and rearranging terms we obtain
a ln(2π) 2 + ln n 2 + ln a 2 + a 12n −1 + ln a 2 + 1 12a − ln(2π) 2 − 2 6n + 1
≤ n(ln a−ln 2).
By the assumption 3 ≤ a ≤ n/4 and by dividing both sides of the previous inequality by ln n we see that it is sufficient to prove a 1+ ln(2π) 2 ln n + 1 48 ln n − 1 ln n + 1 2 + 1 36 ln n − ln(2π) 2 ln n − 2 (6n + 1) ln n ≤ n ln n (ln a−ln 2).
Since ln(2π) 2 ln n + 1 48 ln n − 1 ln n < 0 and 1 36 ln n − ln(2π) 2 ln n − 2 (6n + 1) ln n < 0, it is sufficient to prove
(1) a + 0.5 ln a − ln 2 ≤ n ln n .
This is true for a = 3, 4, and 5 (provided that n ≥ 30). Hence assume that 7 ≤ a ≤ n/4.
The function x+0.5 ln x−ln 2 increases when x > 6, hence it is sufficient to show inequality (1) in case of the substitution a = n/4. But that holds for n ≥ 30. The proof of the lemma is now complete. (2) If n is congruent to 2 modulo 4, then
(1 + α(m)) ((n/a)!) a a! 2 m ≤ (1/2 m−1 )(((n/2) − 1)!) 2 ((n/2)!) 2m−2 .
Proof. By Lemma 13.2 it is sufficient to show that both displayed inequalities follow from the inequality n((n/a)!) a a! ≤ 2((n/2)!) 2 .
Indeed, the first displayed inequality becomes
(1 + α(m)) ((n/a)!) a a! 2 m ≤ 8 n 2 − 4 (((n/2) − 1)!)(((n/2) + 1)!) 2 m .
Since (1 + α(m)) 1/m ≤ √ 2 and (2 √ 2)/n ≤ (8/(n 2 − 4)) 1/2 ≤ (8/(n 2 − 4)) 1/m , it is sufficient to see that (n/2)((n/a)!) a a! ≤ ((n/2) − 1)!((n/2) + 1)!.
But this proves the first part of the lemma since ((n/2)!) 2 < ((n/2) − 1)!((n/2) + 1)!.
After rearranging the factors in the second displayed inequality of the statement of the lemma, we get
(1 + α(m)) ((n/a)!) a a! m ≤ (8/n 2 )(n/2)! 2m .
By similar considerations as in the previous paragraph, we see that this latter inequality follows from the inequality n((n/a)!) a a! ≤ 2((n/2)!) 2 .
Now let M be a maximal primitive subgroup of A n . We know that |M | < 2.6 n by [8]. The following lemma is necessary for our purposes.
Lemma 13.4. For n > 12 and m ≥ 2 we have the following.
(1) Let n be odd with smallest prime divisor p at most 3 √ n. Then
(1 + α(m))2.6 nm ≤ (1/(2 m−1 n)) (n/p)! p p! m .
(2) If n is divisible by 4, then
(1 + α(m))2.6 nm ≤ (((n/2) − 2)!)((n/2)!) (((n/2) − 1)!)(((n/2) + 1)!) 2 m−1 .
(3) If n is congruent to 2 modulo 4, then
(1 + α(m))2.6 nm ≤ (1/2 m−1 )(((n/2) − 1)!) 2 ((n/2)!) 2m−2 .
Proof. By Lemma 12.2, there is nothing to prove for n ≥ 17 since
(1 + α(m))2.6 nm < (1 + α(m))(n!/2) m/2 holds for n ≥ 17. One can check the validity of the inequalities for n = 16 and n = 14 by hand.
Putting together the results of the previous three sections, the proof of Proposition 10.1 is complete by Lemma 6.2. 14. A lower bound for σ(A n ≀ C m )
In this section we show that if n > 12 and n is not congruent to 2 modulo 4, then
α(m) + 1 2 n i=1 i odd n i m < σ(A n ≀ C m ).
To show this for n divisible by 4 and n > 12, notice that
α(m) + 1 2 n i=1 i odd n i m = σ(Π) ≤ σ(A n ≀ C m ).
Let n > 12 be odd. By [9] we may assume that m > 1. In this case we clearly have
α(m) + 1 2 n i=1 i odd n i m < 2 nm−m−1 .
Hence it is sufficient to show that 2 nm−m−1 ≤ σ(A n ≀ C m ).
We have |Π 1 | = (n − 1)!(n!/2) m−1 . Let H = N G (M × M g2 × · · · × M gm ) for some maximal subgroup M of A n and some elements g 2 , . . . , g m ∈ A n . If M is intransitive, then Π 1 ∩ H = ∅. If M is imprimitive, then, by Lemma 13.1,
|Π 1 ∩ H| ≤ (1/(n2 m−1 ))(n/p)! mp p! m
where p is the smallest prime divisor of n. If M is primitive, then, by the statement just before Lemma 13.4, |Π 1 ∩H| ≤ 2.6 nm . Now let H be a subgroup of G of diagonal type. Then |Π 1 ∩ H| ≤ (n!/2) m/2 . If H is a maximal subgroup of G containing the socle of G, then Π 1 ∩ H = ∅. Let M be a minimal cover (a cover with least number of members) of G containing maximal subgroups of G. Let a be the number of subgroups in M of the form N G (M × M g2 × · · · × M gm ) where M is imprimitive. Let b be the number of subgroups in M of the form N G (M × M g2 × · · · × M gm ) where M is primitive. Let c be the number of subgroups in M of diagonal type. Then a · (1/(n2 m−1 ))(n/p)! mp p! m + b · 2.6 nm + c · (n!/2) m/2 ≥ (n − 1)!(n!/2) m−1 .
From this we see that Let us first show Theorem 1.4. Suppose that n is congruent to 2 modulo 4. If n ≥ 10, then σ(A n ) = 2 n−2 , by [9]. Hence we may assume that m > 1 (and n > 10). In this case, by Proposition 10.1, H is definitely unbeatable on Π and H 0 is a covering for A n . Hence Let the set H 3 of maximal subgroups of A n be defined as follows. If 4 divides n, then let H 3 be the set of all subgroups conjugate (in A n ) to (S n/2 ≀ S 2 ) ∩ A n . If n is odd, then let H 3 be the set of all subgroups conjugate (in A n ) to some subgroup (S k × S n−k ) ∩ A n for some k with k ≤ n/3. Then H 0 ∪ H 3 is a covering for A n . Hence, by Proposition 3.1, this gives us the upper bound This proves Theorem 1.5.
( 0 )
0If M ∈ M then M s ∈ M for any s ∈ S; (1) Σ ∩ M = ∅ for every M ∈ M; (2) Σ ⊆ M∈M M ; (3) Σ ∩ M 1 ∩ M 2 = ∅ for every distinct pair of subgroups M 1 and M 2 of M; (4) M contains at least two subgroups that are not conjugate in S; |Σ ∩ M 1 ||Σ ∩ M 2 | |S| m−2 , min M∈M |Σ ∩ M ||M | m−1 where ℓ is the smallest prime divisor of m and the sum is over all pairs (M 1 , M 2 ) ∈ M 2 with M 1 not conjugate to M 2 .
Theorem 1 . 1 .
11Using the notations and assumptions introduced above we have α(m) + M∈M |S : M | m−1 ≤ σ(S ≀ C m ) ≤ α(m) + min N M∈N |S : M | m−1 . Date: 14th of March, 2010.
Corollary 1. 2 .
2For every positive integer m we have σ(M 11 ≀ C m ) = α(m) + 11 m + 12 m .
Corollary 1 . 3 .
13Let p be a prime at least 11 and m be a positive integer with smallest prime divisor at least 5. Then σ(P SL(2, p) ≀ C m ) = α(m) + (p + 1) m + (p(p − 1)/2) m .
Theorem 1 . 4 ..
14Let us use the notations and assumptions introduced above. Let n be larger than 12. If n is congruent to 2 modulo 4 then σ(A n ≀ C m ) = α(m) + Otherwise, if n is not congruent to 2 modulo 4, then (A n ≀ C m ).
Theorem 1. 5 .
5Let us use the notations and assumptions introduced above. Let n be a positive integer with a prime divisor at most 3 √ n. Then σ(A n ≀ C m ) is asymptotically equal to α(m) + min N M∈N |A n : M | m−1 as n goes to infinity.
∆ ϕ = {(y 1 , . . . , y t , yϕ1,2 1 , . . . , y ϕt,2 t , . . . , y ϕ 1,m/t 1 , . . . , y ϕ t,m/t t )|y 1 , . . . , y t ∈ S} which is a subgroup of S m = soc(G) where G = S ≀ C m . The subgroup N G (∆ ϕ ) is called a subgroup of diagonal type.
Definition 5 . 1 .
51Let X be a finite group. Let H be a set of proper subgroups of X, and let Π ⊆ X. Suppose that the following four conditions hold on H and Π.
( 1 )
1Π ∩ H = ∅ for every H ∈ H; (2) Π ⊆ H∈H H; (3) Π ∩ H 1 ∩ H 2 = ∅for every distinct pair of subgroups H 1 and H 2 of H; (4) |Π ∩ K| ≤ |Π ∩ H| for every H ∈ H and K < X with K ∈ H.
is in the intersection of Π 1 with the two normalizers) from which it follows that M gik −1 i = M . This finishes the proof of Condition (3) of Definition 5.1.
•
M 10 has order 720, it contains 180 elements of order 8 and no element of order 11; no element of order 8 is contained in two distinct conjugates of M 10 ; • P SL(2, 11) has order 660, it contains no element of order 8 and 120 elements of order 11; no element of order 11 is contained in two distinct conjugates of P SL(2, 11); • M 9 : 2 has order 144, it contains 36 elements of order 8 and no element of order 11; • S 5 has order 120, it contains no element of order 8 and no element of order 11; • M 8 : S 3 has order 48, it contains 12 elements of order 8 and no element of order 11.
Lemma 11 . 1 .
111Let n be odd (and not a prime). Then|Π ∩ H 1 | = |Π 1 ∩ H 1 | = (1/(2 m−1 n)) (n/p)! p p!m for H 1 ∈ H 1 where p is the smallest prime divisor of n, and |Π ∩ H 2 | = |Π 2 ∩ H 2 | = (2/(n(n − 2)))|A n | m for H 2 ∈ H 2 . Let n be divisible by 4. Then |Π ∩ H 1 | = |Π 1 ∩ H 1 | ≥ (((n/2) − 2)!)(
( 3 )
3of Definition 5.1 while proving Theorem 1.1. The last statement follows from counting (1, n − 1)-cycles and (3, n − 3)-cycles (twice). Lemma 11.2. Depending on n ≥ 5 we have the following.(1) If n is odd (and not a prime), then(1/(2 m−1 n)) (n/p)! p p! m ≤ (2/(n(n − 2)))|A n | m , hence min H∈H |Π ∩ H| = (1/(2 m−1 n)) (n/p)! p p! m .(2) If n is divisible by 4, then If n is congruent to 2 modulo 4, then |Π ∩ H| ≥ (1/2 m−1 )(((n/2) − 1)!) 2 ((n/2)!) 2m−2 .
Lemma 12 . 2 .
122Let m ≥ 2. The following hold.
Lemma 13 . 1 .
131For a positive integer n at least 8 we have((n/a)!) a a! ≥ ((n/b)!) b b! whenever a and b are divisors of n with a ≤ b.Let M be a maximal imprimitive subgroup of A n conjugate to (S n/a ≀ S a ) ∩ A n for some proper divisor a of n. Let n be odd (and not a prime). Then Π 2 ∩ K = ∅ since M does not contain an (n − 2)-cycle. In this case |Π ∩ K| = |Π 1 ∩ K| = (1/(2 m−1 n)) ((n/a)!) are done by part (1) of Lemma 11.2.
By
Lemma 2.1, we have |Π ∩ K| ≤ (1 + α(m))|M | m . The left-hand sides of Lemmas 13.3 and 13.4 are upper bounds for (1 + α(m))|M | m in various cases. Lemma 13.3. Let n be even and let a be the smallest divisor of n larger than 2. Let m ≥ 2. Then for n > 10 we have the following.(1) If n is divisible by 4, then
n is a prime. Hence to finish the proof of this section, it is sufficient to see Lemma 14.1. For n > 12 odd and for m > 1 we have the following. 15. Proofs of Theorems 1.4 and 1.5
.
n : M | m−1 = |H| = σ(Π) ≤ σ(G) ≤ α(m) + M∈H0 |A n : M | This (and the previous section) proves Theorem 1.4. ¿From now on assume that n is either at least 16 and divisible by 4 or odd with a prime divisor at most 3 √ n. In this case H is definitely unbeatable on Π by Proposition 10.1. This gives us the lower bound α(m) + M∈H0 |A n : M | m−1 ≤ σ(G).
Hence to prove Theorem 1.5, it is sufficient to see that the fraction f (n, m) = M∈H3 |A n : M | tends to 0 as n goes to infinity.Finally, if n is odd with smallest prime divisor p at most3 √ n, then f (n, tends to 0 as n goes to infinity.
The research of the second author was supported by a Marie Curie International Reintegration Grant within the 7th European Community Framework Programme and partially by grants OTKA T049841 and OTKA NK72523.AcknowledgementsThanks are due to Andrea Lucchini for helpful comments and to the anonymous referee for a careful reading of an early version of this paper.(1)where n is not a prime and p is the smallest prime divisor of n.Proof.(1) It is sufficient to prove the inequality 2 n−1 ≤ n! (n/p)! p p! for n ≥ 15. This is true by inspection for 15 ≤ n < 99. Hence assume that n ≥ 99. Applying Stirling's formula (see Theorem 12.1) three times to both sides of the inequalitySince e 1/(12(n/p)) e 1/12p < 2 and e 1/(12n+1) > 1, it is sufficient to prove the inequality 2 n 2π(n/p) p (n/pe) n 2πp(p/e) p ≤ √ 2πn(n/e) n .After rearranging factors and applying the estimate 3 ≤ p ≤ √ n we see that it is sufficient to prove 2 n 2πn/3 √ n 2π √ n( √ n/e) √ n ≤ 3 n √ 2πn.After taking logarithms of both sides of the previous inequality and rearranging terms, we get ( √ n/2) ln(2πn/3) + (1/2) ln(2π √ n) + √ n ln( √ n/e) ≤ n ln(3/2) + (1/2) ln(2πn).After further rearrangements we obtain ( √ n − (1/4)) ln n ≤ ln(3/2)n + √ n(1 − (ln(2π/3)/2)).After dividing both sides of the previous inequality by √ n and evaluating the logarithms we see that it is sufficient to prove ln n ≤ 0.4 √ n + 0.63 for n ≥ 99. But this is clearly true.(2) Rearranging the inequality we get (n/4)2 nm ≤ (n!) m /2.6 nm . Hence it is sufficient to see that ( √ n/2)5.2 n ≤ n!. But this is true for n ≥ 13.
Sets of elements that pairwise generate a linear group. J R Britnell, Evseev, R M Guralnick, P E Holmes, A Maróti, J. Combin. Theory Ser. A. 1153Britnell, J. R.; Evseev, A; Guralnick, R. M.; Holmes, P. E.; Maróti, A. Sets of elements that pairwise generate a linear group. J. Combin. Theory Ser. A. 115 (2008), no. 3, 442-465.
Subgroup coverings of some linear groups. R A Bryce, V Fedri, L Serena, Bull. Austral Math. Soc. 602Bryce, R. A.; Fedri, V.; Serena, L. Subgroup coverings of some linear groups. Bull. Austral Math. Soc. 60, (1999), no. 2, 227-238.
On n-sum groups. J H E Cohn, Math. Scand. 751Cohn, J. H. E. On n-sum groups. Math. Scand. 75, (1994), no. 1, 44-58.
Linear groups: With an exposition of the Galois field theory. L E Dickson, Dover Publication IncNew YorkDickson, L. E. Linear groups: With an exposition of the Galois field theory. Dover Publication Inc. New York, 1958.
On the structure of primitive n-sum groups. E Detomi, A Lucchini, Cubo. 103Detomi, E.; Lucchini, A. On the structure of primitive n-sum groups. Cubo 10 (2008), no. 3, 195-210.
Gap The, Group, GAP -Groups, Algorithms, and Programming. Version 4.4 ;The GAP Group, GAP -Groups, Algorithms, and Programming, Version 4.4 ; 2005, (\protect\vrule width0pt\protect\href{http://www.gap-system.org}{http://www.gap-system.org}).
Finite groups that are the union of at most 25 proper subgroups. M Garonzi, J. Algebra Appl. To appear inGaronzi, M. Finite groups that are the union of at most 25 proper subgroups. To appear in J. Algebra Appl.
On the orders of primitive groups. A Maróti, J. Algebra. 2582Maróti, A. On the orders of primitive groups. J. Algebra 258 (2002), no. 2, 631-640.
Covering the symmetric groups with proper subgroups. A Maróti, J. Combin. Theory Ser. A. 1101Maróti, A. Covering the symmetric groups with proper subgroups. J. Combin. Theory Ser. A 110 (2005), no. 1, 97-111.
On the orders of primitive permutation groups. C Praeger, J Saxl, Bull. London Math. Soc. 12Praeger, C.; Saxl, J. On the orders of primitive permutation groups. Bull. London Math. Soc. 12, (1980), 303-308.
I gruppi che possono pensarsi come somma di tre loro sottogruppi. G Scorza, Boll. Un. Mat. Ital. 5Scorza, G. I gruppi che possono pensarsi come somma di tre loro sottogruppi. Boll. Un. Mat. Ital. 5, (1926), 216-218.
Groups as the union of proper subgroups. M J Tomkinson, Math. Scand. 81Tomkinson, M. J. Groups as the union of proper subgroups. Math. Scand. 81, (1997), 191-198.
| [] |
[
"Church Synthesis on Register Automata over Linearly Ordered Data Domains *",
"Church Synthesis on Register Automata over Linearly Ordered Data Domains *"
] | [
"Léo Exibard \nReykjavik University\nIceland\n",
"Emmanuel Filiot \nUniversité libre de Bruxelles\nBelgium\n",
"Ayrat Khalimov \nUniversité libre de Bruxelles\nBelgium\n"
] | [
"Reykjavik University\nIceland",
"Université libre de Bruxelles\nBelgium",
"Université libre de Bruxelles\nBelgium"
] | [] | In a Church synthesis game, two players, Adam and Eve, alternately picks some element in a finite alphabet, for an infinite number of rounds. The game is won by Eve if the ω-word formed by this infinite interaction belongs to a given language S, called specification. It is well-known that for ωregular specifications, whether Eve as a strategy to enforce the specification no matter what Adam does, is a decidable problem. We study the extension of Church synthesis games to the linearly ordered data domains (Q, ≤) and (N, ≤). In this setting, the infinite interaction between Adam and Eve results in a data ω-word, i.e., an infinite sequence of elements in the domain. We study this problem when specifications are given as register automata. They extend finite automata with a finite set of registers in which they can store data and with which they can compare incoming data with respect to the linear order. Church games over (N, ≤) are however undecidable, even for deterministic register automata. Thus, we introduce one-sided Church games, where Eve instead operates over a finite alphabet, while Adam still manipulates data. We show they are determined, and deciding the existence of a winning strategy is in Exp-Time, both for Q and N. This follows from a study of constraint sequences, which abstract the behaviour of register automata, and allow us to reduce Church games to ω-regular games. We show an application of one-sided Church games to some transducer synthesis problem. In this application, a transducer models a reactive system (Eve) which outputs data stored in its registers, depending on its interaction with an environment (Adam) which inputs data to the system.Church Synthesis on Register Automata with a Linear OrderACM Subject Classification:Theory of computation → Logic and verification Theory of computation → Automata over infinite objects Theory of computation → TransducersIntroductionChurch synthesis. Reactive synthesis is the problem of automatically constructing a reactive system from a specification of correct executions, i.e. a non-terminating system which interacts with an environment, and whose executions all comply with the specification, no matter how the environment behaves. The earliest formulation of synthesis dates back to Church, who proposed to formalize it as a game problem: two players, Adam in the role of the environment and Eve in the role of the system, alternately pick the elements from two finite alphabets I and O respectively. Adam starts with i 0 ∈ I, Eve responds with o 0 ∈ O, ad infinitum. Their interaction results in the ω-word w = i 0 o 0 i 1 o 1 ... ∈ (I·O) ω . The winner is decided by a winning condition, represented as a language S ⊆ (I · O) ω called specification: if w ∈ S, the play is won by Eve, otherwise by Adam. Eve wins the game if she has a strategy λ ∃ : I + → O to pick elements in O, depending on what has been played so far, so that no matter the input sequence i 0 i 1 . . . chosen by Adam, the resulting ω-word i 0 λ(i 0 )i 1 λ(i 0 i 1 ) . . . belongs to S. Similarly, Adam wins the game if he has a strategy λ ∀ : O * → I to win against any Eve's strategy. In the original Church problem, specifications are ω-regular languages, i.e. languages definable in monadic second-order logic with one successor or equivalently, deterministic parity automata. The seminal papers[14,44]have shown that Church games (for ω-regular specification) are determined : either Eve wins or otherwise Adam wins. 
Moreover, given a Church game, the winner of the game is computable. Finally, justifying to use Church games as a formulation of reactive synthesis, finite-memory strategies are sufficient to win (both for Eve and Adam). This implies that if Eve wins a Church game, one can effectively construct a finite-state machine (e.g. a Mealy machine) implementing a winning strategy. Church synthesis and games on graphs have been extensively studied, for specifications given in linear-time temporal logic (LTL)[43], recently supported by a tool competition[49], as well as in many other settings, for example, quantitative, distributed, non-competitive (see[5,13]and the references therein). Yet, those works focus on control, sometimes with complex interactions between the synthesized systems, rather than on data. This is reflected already in the original formulation by Church: Adam and Eve interact via finite alphabets I and O, intended to model control actions rather that proper pieces of data. But real-life systems often operate values from a large to infinite data domain. Examples include data-independent programs[53,35,41], software with integer parameters[11], communication protocols with message parameters[19], and more[10,51,18]. The goal of this paper is to study extensions of reactive synthesis, and its formulation as Church games, to infinite data domains: (Q, ≤) and (N, ≤) in particular. | 10.4230/lipics.stacs.2021.54 | [
"https://export.arxiv.org/pdf/2004.12141v6.pdf"
] | 231,603,451 | 2004.12141 | 179d44be0aa53d17afe8b218b83e2ca38fe092a3 |
Church Synthesis on Register Automata over Linearly Ordered Data Domains *
20 Mar 2023
Léo Exibard
Reykjavik University
Iceland
Emmanuel Filiot
Université libre de Bruxelles
Belgium
Ayrat Khalimov
Université libre de Bruxelles
Belgium
Church Synthesis on Register Automata over Linearly Ordered Data Domains *
20 Mar 2023* This article is an extended version of [25], which features full proofs and incorporates elements of [24, Chapter 7]. 1 2SynthesisChurch GameRegister AutomataRegister TransducersOrdered Data Words
In a Church synthesis game, two players, Adam and Eve, alternately picks some element in a finite alphabet, for an infinite number of rounds. The game is won by Eve if the ω-word formed by this infinite interaction belongs to a given language S, called specification. It is well-known that for ωregular specifications, whether Eve as a strategy to enforce the specification no matter what Adam does, is a decidable problem. We study the extension of Church synthesis games to the linearly ordered data domains (Q, ≤) and (N, ≤). In this setting, the infinite interaction between Adam and Eve results in a data ω-word, i.e., an infinite sequence of elements in the domain. We study this problem when specifications are given as register automata. They extend finite automata with a finite set of registers in which they can store data and with which they can compare incoming data with respect to the linear order. Church games over (N, ≤) are however undecidable, even for deterministic register automata. Thus, we introduce one-sided Church games, where Eve instead operates over a finite alphabet, while Adam still manipulates data. We show they are determined, and deciding the existence of a winning strategy is in Exp-Time, both for Q and N. This follows from a study of constraint sequences, which abstract the behaviour of register automata, and allow us to reduce Church games to ω-regular games. We show an application of one-sided Church games to some transducer synthesis problem. In this application, a transducer models a reactive system (Eve) which outputs data stored in its registers, depending on its interaction with an environment (Adam) which inputs data to the system.Church Synthesis on Register Automata with a Linear OrderACM Subject Classification:Theory of computation → Logic and verification Theory of computation → Automata over infinite objects Theory of computation → TransducersIntroductionChurch synthesis. Reactive synthesis is the problem of automatically constructing a reactive system from a specification of correct executions, i.e. a non-terminating system which interacts with an environment, and whose executions all comply with the specification, no matter how the environment behaves. The earliest formulation of synthesis dates back to Church, who proposed to formalize it as a game problem: two players, Adam in the role of the environment and Eve in the role of the system, alternately pick the elements from two finite alphabets I and O respectively. Adam starts with i 0 ∈ I, Eve responds with o 0 ∈ O, ad infinitum. Their interaction results in the ω-word w = i 0 o 0 i 1 o 1 ... ∈ (I·O) ω . The winner is decided by a winning condition, represented as a language S ⊆ (I · O) ω called specification: if w ∈ S, the play is won by Eve, otherwise by Adam. Eve wins the game if she has a strategy λ ∃ : I + → O to pick elements in O, depending on what has been played so far, so that no matter the input sequence i 0 i 1 . . . chosen by Adam, the resulting ω-word i 0 λ(i 0 )i 1 λ(i 0 i 1 ) . . . belongs to S. Similarly, Adam wins the game if he has a strategy λ ∀ : O * → I to win against any Eve's strategy. In the original Church problem, specifications are ω-regular languages, i.e. languages definable in monadic second-order logic with one successor or equivalently, deterministic parity automata. The seminal papers[14,44]have shown that Church games (for ω-regular specification) are determined : either Eve wins or otherwise Adam wins. 
Moreover, given a Church game, the winner of the game is computable. Finally, justifying to use Church games as a formulation of reactive synthesis, finite-memory strategies are sufficient to win (both for Eve and Adam). This implies that if Eve wins a Church game, one can effectively construct a finite-state machine (e.g. a Mealy machine) implementing a winning strategy. Church synthesis and games on graphs have been extensively studied, for specifications given in linear-time temporal logic (LTL)[43], recently supported by a tool competition[49], as well as in many other settings, for example, quantitative, distributed, non-competitive (see[5,13]and the references therein). Yet, those works focus on control, sometimes with complex interactions between the synthesized systems, rather than on data. This is reflected already in the original formulation by Church: Adam and Eve interact via finite alphabets I and O, intended to model control actions rather that proper pieces of data. But real-life systems often operate values from a large to infinite data domain. Examples include data-independent programs[53,35,41], software with integer parameters[11], communication protocols with message parameters[19], and more[10,51,18]. The goal of this paper is to study extensions of reactive synthesis, and its formulation as Church games, to infinite data domains: (Q, ≤) and (N, ≤) in particular.
Church synthesis over infinite data domains. Church games naturally extend to an infinite data domain D: Adam and Eve alternatively pick data in D, and their infinite interaction results in a data ω-word d 0 d 0 d 1 d 1 · · · ∈ D ω . The game is won by Eve if it belongs to a given specification S ⊆ D ω . Accordingly, strategies for Eve have type D + → D while strategy for Adam have type D * → D. In this paper, we study specifications given by a standard extension of finite-state automata to infinite data domains called register automata [36]: they use a finite set of registers to store data, and a set of predicates over the data domain to test data. In each step, the automaton reads a data value from D, compares it with the values held in its registers using the predicates (and possibly constants). Depending on this comparison, it decides to store the value into some of the registers, and then moves to a successor state. This way it builds a sequence of configurations (pairs of state and register values) representing its run on reading a word from D ω : it is accepted if the visited states satisfy a certain parity condition. In this paper, we study specifications given by deterministic register automata over Q or N, which can use the predicate ≤ and the constant 0 to test data.
Contributions. Our first result is an impossibility result: deciding the winner of a Church game for specifications given by deterministic register automata over (N, ≤) is an undecidable problem (Theorem 1). We introduce the one-sided restriction on Church games: Adam still has the full power of picking data but Eve's behaviour is restricted to pick elements from a finite alphabet only. Despite being asymmetric, onesided Church games are quite expressive. For example, they model synthesis scenarios for runtime data monitors that monitor the input data stream and raise a Boolean flag when a critical trend happens (like oscillations above a certain amplitude), and for systems that need to take control actions depending on sensor measurements (a heating controller for instance). Formally, in one-sided Church games, there is a finite set of elements Σ in which Eve picks her successive choices. Accordingly, specifications are languages S ⊆ (DΣ) ω , in this paper defined by deterministic one-sided register automata (defined naturally by alternating between register automata transitions and finite-state automata transitions). Eve's strategies have type λ ∃ : D + → Σ while Adam's strategies have type λ ∀ : Σ * → D. We prove the following about onesided Church games whose specifications are given by one-sided deterministic register automata over (Q, ≤) and (N, ≤):
1. they are determined: every game is either won by Eve or Adam 2. they are decidable: the winner can be computed in time exponential in the number of registers of the specification, 3. if Eve wins, then she has a winning strategy which can be implemented by a transducer with registers (which can be effectively constructed).
Transducers with registers extend Mealy machines with a finite set of registers: they have finitely many states, and given any state and a test over the input data, deterministically, they assign the current data to some registers (or none), output an element of Σ, and update their state. Therefore, the last result echoes the similar result in the ω-regular setting (finite-memory strategies can be effectively constructed for the winner), and supports the fact that one-sided Church games on register automata are an adequate framework for effective synthesis of machines processing streams of data.
Example 1. Figure 1 illustrates a specification given by a deterministic one-sided register automaton, alternating between square and circle states, depending on whether their outgoing transitions read data or elements in a finite alphabet Σ = {a, b}. It can be seen as a game arena where Adam controls the square states while Eve controls the circle states. To simplify the presentation, two parts of the automaton are not depicted and have been summarized as "Eve wins" and "Eve loses": any run going in the former part is non-accepting and any run going in the latter part is accepting (this can be modeled by a parity condition). So, Eve's objective is to force executions into "Eve wins", whatever input data are issued by Adam. There are two registers, r M and r l . The test (true) means that the transition can be taken irrespective of the data played, the test r l < * < r M means that the data should be between the values of registers r l and r M , and the test 'else' means the opposite. The writing ↓ r means that the data is stored into the register r. At first, Adam provides some data d M , serving as a maximal value stored in r M . Register r l , initially 0, holds the last data d l played by Adam. Consider state C: if Adam provides a data outside of the interval ]d l , d M [, he loses; if it is strictly between d l and d M , it is stored into register r l and the game proceeds to state D. There, Eve can either respond with label b and move to state E, or with a to state C. In state E, Adam wins if he can provide a data strictly between d l and d M , otherwise he loses. Eve wins this game in N: for example, she could always respond with label a, looping in states C-D. After a finite number of steps, Adam is forced to provide a data ≥ d M , losing the game. An alternative Eve winning strategy, that does depend on Adam data, is to loop in C-D until d M − d l = 1 (hence she has to memorise the first Adam value d M ), then move to state E, where Adam will lose. In the dense domain (Q, ≤), however, the game is won by Adam, because he can always provide a value within ]d l , d M [ for any d l < d M , so the game either loops in C-D forever or reaches "Eve loses". Proof overview. We give intuitions about the main ingredients to show decidability.
The key idea used to solve problems about register automata is to forget the precise values of input data and registers, and track instead the constraints (sometimes called types) describing the relations between them. In our example, all registers start in 0 so the initial constraint is r 1 l = r 1 M , where r i abstracts the value of register r at step i. Then, if Adam provides a data above the value of r l , the constraint becomes r 2 l < r 2 M in state B. Otherwise, if Adam had provided a data equal to the value in r l , the constraint would be r 2 l = r 2 M . In this way the constraints evolve during the play, forming an infinite sequence. Looping in states C-D induces the constraint sequence r i l < r i+1 l < r i M = r i+1 M i>2 . It forms an infinite chain r 3 l < r 4 l < ... bounded by constant r 3 M = r 4 M = ... from above. In N, as it is a well-founded order, it is not possible to assign values to the registers at every step to satisfy all constraints, so the sequence is not satisfiable. Before elaborating on how this information can be used to solve Church games, we describe our results on satisfiability of constraint sequences. This topic was inspired by the work [47] which studies, among others, the nonemptiness problem of constraint automata, whose states and transitions are described by constraints. In particular, they show [47, Appendix C] that satisfiability of constraint sequences can be checked by nondeterministic ωB-automata [6]. Nondeterminism however poses a challenge in synthesis, and it is not known whether games with winning objectives given as nondeterministic ωB-automata are decidable. In contrast, we describe a deterministic max-automaton [8] characterising the satisfiable constraint sequences in N. As a consequence of [9], games over such automata are decidable. Then we study two kinds of constraint sequences inspired by Church games with register automata. First, we show that the satisfiable lasso-shaped 1 constraint sequences, of the form uv ω , are recognisable by deterministic parity automata. Second, we show how to assign values to registers on-the-fly in order to satisfy a constraint sequence induced by a play in the Church game.
To solve one-sided Church games with a specification given as a register automaton S for (N, ≤) and (Q, ≤), we reduce them to certain finite-arena zero-sum games, which we call automata games. The states and transitions of the game are those of the specification automaton S. The winning condition requires Eve to satisfy the original objective of S only on feasible plays, i.e. those that induce satisfiable constraint sequences. In our example, the play A · B · (C · D) ω does not satisfy the parity condition, yet it is won by Eve in the automaton game since it is not satisfiable in N, and therefore there is no corresponding play in the Church game. We show that if Eve wins the automaton game, then she wins the Church game, using a strategy that simulates the register automaton S and simply picks one of its transitions. It is also sufficient: if Adam wins the automaton game then he wins the Church game. To prove this, we construct, from a winning strategy of Adam in the automaton game, a winning strategy of Adam (that manipulates data) in the Church game. This step uses the previously mentioned results on satisfiability of constraint sequences. Over (N, ≤), we cannot solve the automaton game directly, as it is not ω-regular, so we reduce it to an ω-regular approximation of it which considers quasi -feasible sequences, a notion which is more liberal than feasibility but coincides with it on lasso-shaped words.
Related works. This paper is an extended version of the conference paper [25]. It follows a line of works about synthesis from register automata specifications [23,37,38,27], which focused on register automata over data domains (D, =) equipped with equality tests only. The synthesis of data systems have also been investigated in [31,40]. They do not rely on register automata and are also limited to equality tests or do not study data comparison. Thus, systems that output the largest value seen so far, grant a resource to a process with the lowest id, or raise an alert when a heart sensor reads values forming a dangerous curve, are out of reach of those synthesis methods. These systems require ≤.
In this paper, we consider specifications given by deterministic register automata. Already in the case of infinite alphabets (D, =), dropping the determinism requirement leads to undecidability: finding a winner of a Church game is undecidable when specifications are given as nondeterministic or universal register automata [23,27]. To recover decidability, in the case of universal register automata, those works restrict Eve strategies to register transducers with an a priori fixed number of registers. This problem is called register-bounded synthesis. Recently in [26], register-bounded synthesis have been extended to various data domains such as (N, ≤), (Z, ≤), or (Σ * , ) where Σ is an arbitrary finite alphabet and is the prefix relation. The results of [26] are orthogonal to the results of this paper, and rely on the study of constraint sequences we conduct here.
The paper [28] studies synthesis from variable automata with arithmetic. Those automata are incomparable with register automata: on the one hand, they allow addition on top of a dense order predicate, but on the other hand they do not allow updating the content of the registers along the run. Note that they do not consider the case of a discrete order. The paper [29] studies strategy synthesis but, again, mainly over a dense domain. A one-sided setting similar to ours was studied in [30] for Church games whose winning condition is given by formulas of the Logic of Repeating Values (a fragment of LTL with the freeze quantifier [20]), but only for (D, =). That work was extended to domain (Z, ≤) in [4]. There, the authors show that the realisability problem in one-sided setting on (Z, ≤) for Constraint LTL and its prompt variant are 2EXPTIME-complete. Deterministic register automata are more expressive than Constraint LTL, so our work subsumes their decidability result, yet smaller expressivity of Constraint LTL enables simpler arguments. We note that our proof ideas -abstracting data words by finite-alphabet words and utilising regularity of abstracted words -are somewhat similar to those in papers on Constraint LTL [21,4]. The work on automata with atoms [39] implies our decidability result for (Q, ≤), even in the two-sided setting, but not the complexity result, and it does not apply to (N, ≤). Our setting in N is loosely related to monotonic games [2]: they both forbid infinite descending behaviours, but the direct conversion is unclear. Games on infinite arenas induced by pushdown automata [52,12,1] or one-counter systems [48,32] are orthogonal to our games.
Outline. In Section 2, we introduce preliminary notions. Section 3 introduces Church synthesis games along with the main tools and results (with proofs postponed). Section 4 presents the postponed proofs for Church synthesis, relying on results about satisfiability of constraint sequences over (N, ≤) described in Section 5.
Preliminaries
In this paper, N = {0, 1, . . . } is the set of natural numbers (including 0). We assume some knowledge of ω-regular languages and ω-automata, and refer to e.g. [15] for an introduction.
Data ω-words. In this paper, an ordered data domain, or simply data domain, D is an infinite countable set of elements called data, linearly ordered by some order denoted <. We consider two data domains, N and Q, with their usual order. A data ω-word over D is an infinite sequence d 0 d 1 . . . of data in D. We denote by D ω the set of data ω-words. Similarly, we denote by D * the set of finite sequences (possibly empty) of elements in D.
Registers. Let R be a finite set of elements called registers, intended to contain data values, i.e. values in D. A register valuation is a mapping ν : R → D (also written ν ∈ D R ). For any data d ∈ D, we write d R to denote the constant valuation ν d (r) = d for all r ∈ R.
A test is a maximally consistent set of atoms of the form * r for r ∈ R and ∈ {=, <, >}. We may represent tests as conjunctions of atoms instead of sets. The symbol ' * ' is used as a placeholder for incoming data. For example, for R = {r 1 , r 2 }, the expression r 1 < * is not a test because it is not maximal, but (r 1 < * ) ∧ ( * < r 2 ) is a test. We denote Tst R the set of all tests and just Tst if R is clear from the context. A register valuation ν ∈ D R and data d ∈ D satisfy a test tst ∈ Tst, written (ν, d) |= tst, if all atoms of tst get satisfied when we replace the placeholder * by d and every register r ∈ R by ν(r). An assignment is a subset asgn ⊆ R. Given an assignment asgn, a data d ∈ D, and a valuation ν, we define update(ν, d, asgn) to be the valuation ν s.t. ∀r ∈ asgn : ν (r) = d and ∀r ∈ asgn : ν (r) = ν(r).
Register automata. A specification deterministic register automaton, or simply deterministic register automaton is a tuple S = (Q, q ι , R, δ, α) where Q = Q A Q E is a set of states partitioned into Adam and Eve states, the state q ι ∈ Q A is initial, R is a set of registers, δ = δ A δ E is a (total and deterministic) transition function where, for P ∈ {A, E}, we have, by setting A = E and E = A: δ P : (Q P ×Tst → Asgn×Q P ); and α : Q → {1, ..., c} is a priority function where c is the priority index.
A configuration of A is a pair (q, ν) ∈ Q × D R , describing the state and register content; the initial configuration is (
q ι , 0 R ). A run of S on a word w = d 0 d 1 ... ∈ D ω is a sequence of configurations ρ = (q 0 , ν 0 )(q 1 , ν 1 )... ∈ ((Q A ×D R )(Q E ×D R )
) ω starting in the initial configuration ((q 0 , ν 0 ) = (q ι , 0 R )) and such that for every i ≥ 0: by letting tst i be a unique test for which (ν i , d i ) |= tst i , we have δ(q i , tst i ) = (asgn i , q i+1 ) for some asgn i and ν i+1 = update(ν i , d i , asgn i ). Because the transition function δ is deterministic and total, every word induces a unique run in S. The run ρ is accepting if the maximal priority visited infinitely often is even. A word is accepted by S if it induces an accepting run. The language L(S) of S is the set of all words it accepts.
Interleavings. Specification register automata are meant to recognise interleavings of inputs (provided by Adam) and output (provided by Eve), hence the partitioning of states. Often, we need to combine them or conversely tell them apart. Thus, given two words u = u 0 u 1 · · · ∈ D ω and v = v 0 v 1 · · · ∈ D ω , we formally define their interleaving u ⊗ v = u 0 v 0 u 1 v 1 · · · ∈ D ω . We note that given a word w = w 0 w 1 · · · ∈ D ω , it can be uniquely decomposed into w = u ⊗ v, where u = w 0 w 2 · · · ∈ D ω and v = w 1 w 3 · · · ∈ D ω .
Games.
A two-player zero-sum game, or simply a game, is a tuple G =
(V ∀ , V ∃ , v 0 , E, W ) where V ∀ and V ∃ are disjoint sets of vertices controlled by Adam and Eve, v 0 ∈ V ∀ is initial, E ⊆ (V ∀ × V ∃ ) ∪ (V ∃ × V ∀ ) is a turn-based transition relation, and W ⊆ (V ∀ ∪ V ∃ ) ω is a winning objective. An Eve strategy is a mapping λ ∃ : (V ∀ V ∃ ) + → V ∀ such that (v ∃ , λ(v 0 ∀ v 0 ∃ ...v k ∀ v k ∃ )) ∈ E for all paths v 0 ∀ v 0 ∃ ...v k ∀ v k ∃ of G starting in v 0 ∀ = v 0 and ending in v k ∃ ∈ V ∃ (where k ≥ 0). Note that λ ∃ only depends on the V ∃ component, since the V ∀ part is determined by the V ∃ part, so we sometimes define it as λ ∃ : V + ∃ → V ∀ .
Adam strategies are defined similarly, by inverting the roles of ∃ and ∀. A strategy is finite-memory if it can be computed by a finite-state machine, and positional if it only depends on the current vertex. A play is a sequence of vertices starting in v 0 and satisfying the edge relation E. It is won by Eve if it belongs to W (otherwise it is won by Adam). An infinite play
π = v 0 v 1 . . . is compatible with an Eve strategy λ when for all i ≥ 0 s.t. v i ∈ V ∃ : v i+1 = λ(v 0 . . . v i ).
An Eve strategy is winning if all infinite plays compatible with it are winning. A game is determined (respectively, finite-memory determined, positionally determined ) if either Adam or Eve has a winning strategy (resp., a finite-memory winning strategy, a positional winning strategy).
A finite-arena game is a game whose arena is finite, i.e. where V ∀ and V ∃ are finite. Among them, we distinguish ω-regular games, where the winning condition is an ωregular language. In particular, a parity game is a game whose winning condition is defined through a parity function α : V ∀ V ∃ → {1, ..., c}, where a play v 0 v 1 . . . is winning for Eve if and only if the maximal priority seen infinitely often is even. It is well-known that ω-regular games are finite-memory determined and reduce to parity games, which are positionally determined and can be solved in n c [33] (see also [16]), where n is the size of the game and c the priority index.
Note that in register automata, Adam is represented as A and Eve as E, while in games he is ∀ and she is ∃. This is to visually distinguish automata from games.
Church Synthesis Games
A Church synthesis game is given as a tuple G = (I, O, S), where I is an input alphabet, O is an output alphabet, and S ⊆ (I · O) ω is a specification. Its semantics is provided by the game
({v 0 } ∪ O, I, v 0 , E, S), where E = (({v 0 } ∪ O) × I) ∪ (I × O),
but we rephrase it to provide a stronger intuition. In particular, it is at first counterintuitive that Adam owns O vertices, and Eve I vertices; this is because both players choose their move by targeting a specific vertex.
Thus, in a Church synthesis game, two players, Adam (the environment, who provides inputs) and Eve (the system, who controls outputs), interact. Their strategies are respectively represented as mappings λ ∀ : v 0 O * → I (often simply represented as λ ∀ : O * → I for symmetry) and λ ∃ :
I + → O. Given λ ∀ and λ ∃ , the outcome λ ∀ λ ∃ is the infinite sequence i 0 o 0 i 1 o 1 ... such that for all j ≥ 0: i j = λ ∀ (o 0 ...o j−1 )
and o j = λ ∃ (i 0 ...i j ). If λ ∀ λ ∃ ∈ S, the outcome is won by Eve, otherwise by Adam. Eve wins the game if she has a strategy λ ∃ such that for every Adam strategy λ ∀ , the outcome λ ∀ λ ∃ is won by Eve. Solving a synthesis game amounts to finding whether Eve has a winning strategy. Synthesis games are parameterised by classes of alphabets and specifications. A game class is determined if every game in the class is either won by Eve or by Adam.
The class of synthesis games where I and O are finite and where S is an ω-regular language is known as Church games; they are decidable and determined. They also enjoy the finite-memoriness property: if Eve wins a game then there is an Eve winning strategy that can be represented as a finite-state machine [14] (see also [50] for a game-theoretic presentation of those results).
We study synthesis games where I = O = D is an ordered date domain and the specifications are described by deterministic register automata. In the following, we let G D S = (D, D, S) be the Church synthesis game with input and output alphabet D and specification S, and simply write G S when D is clear from the context.
Church games on register automata
We start our study with a negative result, that highlights the difficulty of the problem: over the data domain (N, ≤), Church games are undecidable. Indeed, if the two players pick data values, one can simulate a two-counter machine, by asking one player to provide the values of the counters and the other to check that no cheating happens on the increments and decrements, using the fact that c = c + 1 whenever there does not exist d such that c < d < c . Theorem 1. Deciding the existence of a winning strategy for Eve in a Church game whose specification is a deterministic register automaton over (N, ≤) is undecidable.
Proof idea. We reduce the problem from the halting problem of 2-counter machines, which is undecidable [42]. We define a specification with 4 registers r 1 , r 2 , z and t. r 1 and r 2 each store the value of one counter; z stores 0 to conduct zero tests and t is used as a buffer. We now describe how to increment c 1 (see Figure 2a); the case of c 2 and of decrementing are similar. Eve suggests a value d > r 1 , which is stored in t. Then, Adam can check that the increment was done correctly: Eve cheated if and only if Adam can provide a data d such that r 1 < d < d. If he cannot, d is stored in r 1 , thus updating the value of the counter. The acceptance condition is then a reachability one, asking that a halting instruction is eventually met. Now, if M halts, then its run is finite and the values of the counters are bounded by some B. As a consequence, there exists a strategy of Eve which simulates the run by providing the values of the counters along the run. Conversely, if M does not halt, then no halting instruction is reachable by simulating M correctly, and Adam is able to check that Eve does not cheat during its simulation.
Proof. We reduce from the halting problem of deterministic 2-counter machines, which is undecidable [42]. Among multiple formalisations of counter machines, we pick the following one: a 2-counter machine has two counters which contain integers, (resp. ) is a rejecting sink (resp. accepting sink). Non-depicted transitions go to the sink state that is losing for the player that takes them.
k k + 1 * > r 1 , ↓ t r1 < * < t * = t, ↓ r 1 * ≤ r 1 ∨ * > t (a) Gadget for instruction inc 1 . k k k * = r1 ∧ * = z * = r 1 ∧ * > z (b) Gadget for instruction ifz 1 (k , k ).
initially valued 0. It is composed of a finite set of instructions M = (I 1 , . . . , I m ), each instruction being of the form inc j , dec j , ifz j (k , k ) for j = 1, 2 and k , k ∈ {1, . . . , m}, or halt. The semantics are defined as follows: a configuration of M is a triple (k, c 1 , c 2 ), where 1 ≤ k ≤ m and c 1 , c 2 ∈ N. The transition relation (which is actually a function, as M is deterministic) is then, from a configuration (k, c 1 , c 2 ):
• If I k = inc 1 , then the machine increments c 1 and jumps to the next instruction A run of the machine is then a finite or infinite sequence of successive configurations, starting at (1, 0, 0). We say that M halts whenever it admits a finite run which ends in a configuration (k, c 1 , c 2 ) such that I k = halt. Let M = (I 1 , . . . , I m ) be a 2-counter machine. We associate to it the following specification deterministic register automaton: S has states Q = Q A Q E , where, for P ∈ {A, E}, Q P = {0, . . . , m + 1} ∪ ({0, . . . , m + 1}×{y, n}) ∪ { , } × {P }. The letters y and n are used to remember whether an ifz test evaluated to true or false; they are only used by A, but we included them in Q E for symmetry. The initial state of S is (0, A). The automaton has four registers r 1 , r 2 , t, z. The acceptance is defined by the reachability condition F = {( , A)}, while signals rejecting sink states. The transitions of S are defined by the following procedure:
I k+1 : (k, c 1 , c 2 ) → (k + 1, c 1 + 1, c 2 ). Similarly for inc 2 . • If I k = dec 1 and c 1 > 0, then (k, c 1 , c 2 ) → (k + 1, c 1 − 1,
• Initially, there is a transition (0, A) − → (1, E) so that the implementation can start the simulation.
tions (k, A) r1< * <t − −−−− → ( , E), (k, A) * =t,↓r1 − −−−− → (k + 1, E) and (k, A) * ≤r1 − −− → ( , E), (k, A) * >t −−→ ( , E).
-The case I k = dec j for j = 1, 2 is similar: we add output transition (k, E) * <r1,↓t
− −−−− → (k, A) and input transitions (k, A) t< * <r1 − −−−− → ( , E), (k, A) * =t,↓r1 − −−−− → (k + 1, E) and (k, A) * ≥r1 − −− → ( , E), (k, A) * <t −−→ ( , E)
. Note that in our definition, if c j = 0, then the instruction dec j should be blocking, i.e. the computation should fail, which is consistent with the fact that in that case, the implementation cannot provide d < r 1 .
– If I_k = ifz_j(k′, k″), then we add the gadget of Figure 2b.
– If I_k = halt, we add a transition (k, E) −[⊤]→ (⊤, A).
• Finally, we let (⊤, A) −[⊤]→ (⊤, E), (⊤, E) −[⊤]→ (⊤, A), and similarly for ⊥, so that both ⊤ and ⊥ are sink states alternating between the players. In the following, we sometimes write ⊤ for (⊤, P) and ⊥ for (⊥, P), since the owner of the state does not matter.

Now, assume that M admits an accepting run ρ = (k_1, c^1_1, c^2_1) → · · · → (k_n, c^1_n, c^2_n), where n ∈ N, k_1 = 1, c^1_1 = c^2_1 = 0 and I_{k_n} = halt. The values of the counters are bounded by some B ≤ n. Then, let λ_ρ be the strategy of Eve which ignores the input provided by Adam and plays the output w_ρ = c^{j_0}_0 ⋯ c^{j_{n−1}}_{n−1} 0^ω, where for 0 ≤ l < n, j_l is the index of the counter modified or tested at step l (i.e. j_l ∈ {1, 2} is such that I_{k_l} = inc_{j_l}, dec_{j_l} or ifz_{j_l}(k′, k″)). Formally, for all u ∈ N^+ of length l + 1, we let λ_ρ(u) = c^{j_l}_l if l ≤ n − 1 and λ_ρ(u) = 0 otherwise.

Let us show that λ_ρ is a winning strategy for Eve. Let u ∈ N^ω be an input word provided by Adam. We show by induction on l that in S the partial run over (u ⊗ w_ρ)[:2l + 1] either is in state ⊤ or ends in configuration ((k_l, E), τ_l), where τ_l(r_1) = c^1_l and τ_l(r_2) = c^2_l. Initially, S is in configuration ((0, A), 0^R). Then, whatever Adam plays, it transitions to ((1, E), 0^R), so the invariant holds. Now, assume it holds up to step l. If S is in (⊤, E), the only available transition is (⊤, E) −[⊤]→ (⊤, A), and then (⊤, A) −[⊤]→ (⊤, E), so the invariant holds at the next step (⊤ is a sink state). Otherwise, necessarily l < n, S is in configuration ((k_l, E), τ_l) and there are four cases:
• I_{k_l} = inc_j. By definition, j = j_l. We treat the case j = 1; the other case is similar.
Then, Eve plays c^1_l = c^1_{l−1} + 1, which is such that c^1_l > τ_l(r_1). Then, there does not exist d such that τ_l(r_1) < d < τ_l(t), since τ_l(r_1) = c^1_{l−1} and τ_l(t) = c^1_{l−1} + 1, so the play cannot transition to (⊥, E). Now, either Adam plays u_{l+1} = τ_l(t) = c^1_{l−1} + 1, in which case S evolves to configuration ((k_{l+1}, E), τ_{l+1}) with τ_{l+1}(r_1) = c^1_{l+1} and τ_{l+1}(r_2) = c^2_{l+1}, and the invariant holds. Otherwise, u_{l+1} ≠ τ_l(t) and S goes to (⊤, E), and the invariant holds as well.
• The case of I_{k_l} = dec_j is similar. Let us just mention that the computation does not block at this step, otherwise ρ is not a run of M, so the transition ∗ < r_j can indeed be taken by Eve.
• I_{k_l} = ifz_j(k′, k″). Again, j = j_l, and we treat the case j = 1. Eve plays c^1_l; there are two cases. If c^1_l = 0, the transition ∗ = r_1 ∧ ∗ = z is taken in S, since at every step, τ_l(z) = 0 (this register is never modified). If c^1_l ≠ 0, then the transition ∗ = r_1 ∧ ∗ > z is taken. In both cases, whatever Adam plays, S then evolves to ((k_{l+1}, E), τ_{l+1}) (where τ_{l+1} = τ_l) and the invariant holds.
• Finally, if I_{k_l} = halt, then whatever Eve plays, S transitions to (⊤, A), and whatever Adam plays, the automaton transitions to (⊤, E).
As a consequence, ⊤ is eventually reached whatever the input, which means that for all u ∈ N^ω, u ⊗ I(u) ∈ L(S), i.e. the function I induced by λ_ρ is indeed an implementation of S. Conversely, assume that Eve has a winning strategy λ_∃ in G_S. Let ρ be the maximal run of M (i.e. either ρ ends in a configuration with no successor, or it is infinite). It is unique since M is deterministic. Let n = |ρ|, with the convention that n = ∞ if ρ is infinite. Let us build by induction a play of a strategy λ_∀ of Adam such that for all l < n, (λ_∀ ⊗ λ_∃)[2l] = c^{j_l}_l and the configuration reached by S over (λ_∀ ⊗ λ_∃)[:2l] is ((k_l, E), τ_l). Initially, let u_0 = 0. As the initial test is ⊤, S anyway evolves to state (1, E), with τ(r_1) = τ(r_2) = 0. Now, assume we have built such an input u up to l. There are again four cases:
• I_{k_l} = inc_j. Then, Eve provides some output datum d_E > τ_l(r_j). Assume by contradiction that d_E > τ_l(r_j) + 1. Then, λ_∃ is not winning, because if Adam plays d_A = τ_l(r_j) + 1, S goes to state (⊥, E), which is a rejecting sink state, so the play is losing regardless of what both players do after this move. So, necessarily, d_E = τ_l(r_j) + 1 = c^{j_l}_l, and S evolves to configuration ((k_{l+1}, E), τ_{l+1}).
• The case I_{k_l} = dec_j is similar. Necessarily, c^j_l > 0, otherwise Eve cannot provide any output datum and the play is losing for Eve, which contradicts the fact that λ_∃ is winning. Thus, the computation does not block here.
• I_{k_l} = ifz_j(k′, k″). The output transitions of the gadget constrain Eve to output d_E = τ_l(r_j) = c^{j_l}_l, and regardless of what Adam plays, S then evolves to configuration ((k_{l+1}, E), τ_{l+1}).
• I_{k_l} = halt. Then, it means that n < ∞ and l = n, so the invariant vacuously holds.

Now, ρ cannot be infinite, otherwise λ_∀ ⊗ λ_∃ is not accepted by S because ⊤ is never reached, and Eve would not win. It moreover cannot block on some dec_j instruction, as demonstrated in the induction. Thus, a halt instruction is eventually reached, which means that ρ is a halting run of M: M halts.
Church games on one-sided register automata
In light of this undecidability result, we consider one-sided synthesis games, where Adam provides data but Eve reacts with labels from a finite alphabet (a similar restriction was studied in [30] for domain (D, =)). Specifications are now given as a language S ⊆ (D · Σ)^ω, recognised by a one-sided deterministic register automaton.

Definition 1. A one-sided deterministic register automaton, or simply one-sided register automaton, S = (Σ, Q, q_ι, R, δ, α) is a deterministic register automaton that additionally has a finite alphabet Σ of Eve labels. Its states are again partitioned into Adam and Eve states Q = Q_A ⊎ Q_E, and it has an initial state q_ι ∈ Q_A. Its transition function δ = δ_A ⊎ δ_E is again total, but now has δ_E : Q_E × Σ → Q_A. The rest is defined as for deterministic register automata: δ_A : Q_A × Tst → Asgn × Q_E; R is a set of registers; and finally α : Q → {1, ..., c} is a priority function, where c is the priority index.
The notions of configurations and runs are defined analogously, except for the asymmetry between input and output: a configuration of S is a pair (q, ν) ∈ Q × D^R, describing the state and register content; the initial configuration is (q_ι, 0^R). A run of S on a word w = d_0 a_0 d_1 a_1 ... ∈ (DΣ)^ω (note the interleaving of D and Σ) is a sequence of configurations ρ = (q_0, ν_0)(p_0, ν_1)(q_1, ν_1)(p_1, ν_2)... ∈ ((Q_A × D^R)(Q_E × D^R))^ω starting in the initial configuration (i.e. (q_0, ν_0) = (q_ι, 0^R)) and such that for every i ≥ 0:
• (reading an input data value) letting tst_i be the unique test for which (ν_i, d_i) |= tst_i, we have δ(q_i, tst_i) = (asgn_i, p_i) for some asgn_i, and ν_{i+1} = update(ν_i, d_i, asgn_i), as for deterministic register automata;
• (reading an output letter from Σ) δ(p_i, a_i) = q_{i+1}, as for finite-state automata.
Again, because the transition function δ is deterministic and total, every word induces a unique run in S. The run ρ is accepting if the maximal priority visited infinitely often is even. A word is accepted by S if it induces an accepting run. The language L(S) of S is the set of all words it accepts. Figure 1 shows an example of a one-sided automaton: for instance, it rejects the words 3a1b2(ΣD)^ω and accepts the words 3a1a2b(DΣ)^ω.
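To illustrate the semantics, the following sketch (Python; the dictionary encoding of the automaton is our assumption, not the paper's) advances a one-sided deterministic register automaton over a finite prefix d_0 a_0 d_1 a_1 ..., mirroring the two kinds of transitions above.

```python
def run_prefix(aut, word):
    # aut: a hypothetical encoding of a one-sided register automaton:
    #   aut['R']       -- set of register names
    #   aut['init']    -- initial (Adam) state
    #   aut['tests']   -- list of predicates over (valuation, datum); since the
    #                     transition function is total and deterministic,
    #                     exactly one test holds for each (valuation, datum)
    #   aut['delta_A'] -- dict: (Adam state, test index) -> (asgn, Eve state)
    #   aut['delta_E'] -- dict: (Eve state, letter)      -> Adam state
    # word alternates data values (Adam) and finite-alphabet letters (Eve).
    q, nu = aut['init'], {r: 0 for r in aut['R']}   # initial configuration
    for pos, x in enumerate(word):
        if pos % 2 == 0:                            # Adam's move: a datum
            i = next(i for i, t in enumerate(aut['tests']) if t(nu, x))
            asgn, q = aut['delta_A'][(q, i)]
            nu = {r: (x if r in asgn else nu[r]) for r in aut['R']}
        else:                                       # Eve's move: a letter
            q = aut['delta_E'][(q, x)]
    return q, nu                                    # reached configuration
```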
The rest of this paper is dedicated to showing that Church games whose specifications are defined by one-sided register automata over (Q, ≤) or (N, ≤) are decidable in exponential time, and that those games are determined. Formally:

Theorem 2. Let S = (Σ, Q, q_ι, R, δ, α) be a one-sided register automaton over (N, ≤) or (Q, ≤).
1. The problem of determining the winner of the Church synthesis game G = (D, D, S) is decidable in time polynomial in |Q| and exponential in c and |R|.
2. G_S is determined, i.e. either Eve or Adam has a winning strategy in G_S.
The above is a wrapper theorem that aggregates Theorem 9 for (Q, ≤) and Theorem 18 for (N, ≤). We defer the proof to Section 4. The result for (Q, ≤) can be derived from [21] or [39, Section 7], but we include it for pedagogical reasons, as it allows us to introduce the main tools in a simple setting and highlight the difficulties that creep up when we shift to (N, ≤).
In the case of a finite alphabet, the game-theoretic approach to solving Church games whose specification is given by a deterministic finite-state automaton consists in playing on the automaton, in the following sense: the arena consists of the automaton, and Adam and Eve alternately choose an input (respectively, output) letter, or equivalently (since the automaton is deterministic) an input (resp., output) transition of the automaton. Then, Eve wins whenever the word they jointly produced is accepted by the automaton.
Here, we follow the same approach, with the additional difficulty that the players manipulate data values from an infinite alphabet, and it is not immediate to relate the data values they choose with the corresponding transitions of the automaton. We thus study the link between the automaton game (where players pick transitions in the automaton) and the corresponding Church game. This is done through the key notion of feasible action words: a sequence of transition labels is feasible whenever it labels a run over some data word. Adam is then asked to provide feasible action words, otherwise he loses. To show that the automaton game is equivalent with the Church game, it remains to show that a strategy of Adam in the automaton game can be translated to a strategy in the Church game. The key ingredient is to be able to instantiate a given action by a data value on-the-fly, while the play unfolds.
Over (Q, ≤), as we demonstrate, the set of feasible action words is ω-regular, so the automaton game is ω-regular as well. Moreover, from a given configuration, one can locally determine whether an action can be instantiated with a data value, and pick it accordingly, which yields the sought strategy translation. Thus, both games are equivalent, and we get decidability since ω-regular games are decidable. The case of (N, ≤) is much more involved and requires further developments, so we start the presentation with (Q, ≤) to sharpen our tools.
The automaton game
For the rest of this section, fix a one-sided register automaton S = (Σ, Q, q ι , R, δ, α) over an ordered data domain D (it can be either (Q, ≤) or (N, ≤)).
Before introducing the game itself, we define the main technical notion, which relates the syntax and semantics of register automata.
Definition 2. An action word is a sequence (tst_0, asgn_0)(tst_1, asgn_1)... from (Tst × Asgn)^{*,ω}. It is D-feasible (or simply feasible when D is clear from the context) if there exists a sequence ν_0 d_0 ν_1 d_1 ... of register valuations ν_i and data d_i over D such that ν_0 = 0^R and for all i: ν_{i+1} = update(ν_i, d_i, asgn_i) and (ν_i, d_i) |= tst_i. We denote by Feasible_D(R) the set of action words over R feasible in D.
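Feasibility asks for the existence of witnessing valuations and data, and a candidate witness is easy to check. The following sketch (Python; tests encoded as predicates, an assumption of ours) verifies that a data sequence witnesses feasibility of a finite action word, following Definition 2.

```python
def is_witness(action_word, data, R):
    # action_word: list of (tst, asgn) pairs, with tst a predicate over
    # (valuation, datum) and asgn a set of registers; data: list of values.
    nu = {r: 0 for r in R}                                  # nu_0 = 0^R
    for (tst, asgn), d in zip(action_word, data):
        if not tst(nu, d):                                  # (nu_i, d_i) |= tst_i ?
            return False
        nu = {r: (d if r in asgn else nu[r]) for r in R}    # update(nu, d, asgn)
    return True

# For instance, the action ({r < *, s < *}, {s}) storing a datum above r and s:
a0 = (lambda nu, d: nu['r'] < d and nu['s'] < d, {'s'})
assert is_witness([a0], [1], R={'r', 's'})    # 1 exceeds nu_0(r) = nu_0(s) = 0
```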
With the Church game (D, D, S), we associate the following automaton game, which is a finite-arena game G^f_S = (V_∀, V_∃, v_0, E, W^f_S). Essentially, it memorises the transitions taken by the automaton S during the play of Adam and Eve. It has V_∀ = {q_ι} ∪ (Σ × Q_A), V_∃ = Tst × Asgn × Q_E, v_0 = q_ι, and E = E_0 ∪ E_∀ ∪ E_∃, where:
• E_0 = { (v_0, (tst, asgn, u_0)) | δ(v_0, tst) = (asgn, u_0) },
• E_∀ = { ((σ, v), (tst, asgn, u)) | δ(v, tst) = (asgn, u) }, and
• E_∃ = { ((tst, asgn, u), (σ, v)) | δ(u, σ) = v }.
We let:
W^f_S = { v_0 (tst_0, asgn_0, u_0)(σ_0, v_1) ... | (tst_0, asgn_0)(tst_1, asgn_1) ... ∈ Feasible_D(R) ⇒ v_0 u_0 v_1 u_1 ... |= α }.
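For concreteness, the arena of G^f_S can be assembled mechanically from the transition tables of S; here is a sketch (Python, reusing the hypothetical table encoding of the run sketch above; the winning condition W^f_S is not represented).

```python
def arena(aut):
    # Builds (V_A, V_E, v0, E) of the automaton game.  Assignments are
    # frozensets so that vertices are hashable; we assume v0 is not itself
    # a (letter, state) pair.
    v0 = aut['init']
    V_A = {v0} | {(a, v) for (u, a), v in aut['delta_E'].items()}
    V_E = {(t, frozenset(asgn), u)
           for (v, t), (asgn, u) in aut['delta_A'].items()}
    E = set()
    for w in V_A:                                  # E_0 and E_forall
        v = v0 if w == v0 else w[1]                # the underlying Adam state
        for (v2, t), (asgn, u) in aut['delta_A'].items():
            if v2 == v:
                E.add((w, (t, frozenset(asgn), u)))
    for (t, asgn, u) in V_E:                       # E_exists
        for (u2, a), v in aut['delta_E'].items():
            if u2 == u:
                E.add(((t, asgn, u), (a, v)))
    return V_A, V_E, v0, E
```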
The strategies of Adam and Eve in the automaton game are of the form λ^f_∀ : V_∀(V_∃V_∀)^* → V_∃ and λ^f_∃ : (V_∀V_∃)^+ → V_∀. Since the automaton S is deterministic, they can equivalently be expressed as λ^f_∀ : Σ^* → Tst and λ^f_∃ : Tst^+ → Σ. Let us show that G^f_S is a sound abstraction of G_S, in the sense that a winning strategy of Eve in G^f_S can be translated to a winning strategy of Eve in G_S, for both (Q, ≤) and (N, ≤):
Proposition 3. Let S be a deterministic register automaton. If Eve has a winning strategy in G^f_S, then she has a winning strategy in the Church game G_S.
Proof. The main idea of the proof is that in G_S, Eve has more information than in G^f_S, since she knows which data values Adam played, while in G^f_S she can only access the corresponding tests.
Formally, let λ^f_∃ : (V_∀V_∃)^+ → V_∀ be a winning Eve strategy in G^f_S. We construct a winning Eve strategy λ_∃ : Tst^+ → Σ in G_S as follows³. Fix an arbitrary sequence tst_0 ... tst_k; we define λ_∃(tst_0 ... tst_k). First, we inductively define v_0, u_0, v_1, u_1, ..., v_k ∈ Q_A ∪ Q_E, asgn_0, ..., asgn_k, and σ_1, ..., σ_{k+1} ∈ Σ:
• The state v_0 = q_ι is the initial state of S.
• For all 0 ≤ i ≤ k, define u_i ∈ Q_E and asgn_i to be such that (asgn_i, u_i) = δ(v_i, tst_i), then σ_{i+1} = λ^f_∃(v_0 (tst_0, asgn_0, u_0)(σ_1, v_1) ... (tst_i, asgn_i, u_i)), and v_{i+1} = δ(u_i, σ_{i+1}).
We then set λ_∃(tst_0 ... tst_k) = σ_{k+1}. We now show that the constructed Eve strategy λ_∃ is winning in G_S. Consider an arbitrary Adam data strategy λ^D_∀, and let (v_0, ν_0)(u_0, ν_1)(v_1, ν_1)(u_1, ν_2)... be the infinite run of S on reading the outcome λ^D_∀ ⊗ λ_∃; it is enough to show that v_0 u_0 v_1 u_1 ... satisfies the parity condition. Let d_0 d_1 ... be the sequence of data produced by Adam during the play, let σ_0 σ_1 ... be the labels produced by the Eve strategy λ_∃, and let a = (tst_0, asgn_0)(tst_1, asgn_1)... be the tests and assignments performed by the automaton during the run. Then, the sequence v_0 (tst_0, asgn_0, u_0)(σ_0, v_1)(tst_1, asgn_1, u_1)... constitutes a play in G^f_S, which is compatible with λ^f_∃. Moreover, as witnessed by ν_0 d_0 ν_1 d_1 ..., the action word a is feasible. Therefore, since λ^f_∃ is winning, the sequence v_0 u_0 v_1 u_1 ... satisfies the parity condition.
The converse direction of the above proposition is in general harder, as it amounts to showing that the information provided by the tests is enough. For the case of (Q, ≤), the density of the domain allows to instantiate the tests on-the-fly, in a way that does not jeopardise the feasibility of the overall sequence (Section 4.1). The case of (N, ≤) is much more involved, and is the subject of the rest of Section 4.

³ What we really need is a winning Eve strategy of the form λ^D_∃ : D^+ → Σ. The strategy λ_∃ : Tst^+ → Σ that we construct encodes λ^D_∃ as follows: it has the same set R of registers as the automaton S, and performs the same assignment actions as the automaton. Then, on seeing a new datum, it compares the datum with the register values, which induces a test, and passes this test to λ_∃.
Application to transducer synthesis
The Church synthesis game models the reactive synthesis problem: S is a specification, and a winning strategy in G corresponds to a reactive program which implements S, i.e. whose set of behaviours abides by S.
In the finite alphabet case, Church synthesis games are ω-regular. Since those games are finite-memory determined, if a specification admits an implementation, then it admits a finite-state one [14], which can be modelled as a finite-state transducer (a.k.a. a Mealy machine). In this section, we study under which conditions we can get an analogue of this result for specifications defined by input-driven register automata [27], i.e. two-sided automata where the output data is restricted to be the content of some register (in other words, the implementation is not allowed to generate data). Input-driven automata can be simulated by one-sided automata, in that output registers can be seen as finite labels. Correspondingly, we target register transducers, which generalise finite-state transducers to data domains in the same way as register automata generalise finite-state automata. We then show that finite-memory strategies in the automaton game induce register transducer implementations. Indeed, a finite-memory strategy corresponds to a sub-automaton of S, which picks output transitions in S with the help of its memory. This sub-automaton is then a register transducer with set of registers R. This result is reminiscent of Proposition 5 in [27].
We now define input-driven register automata and register transducers, and then define the synthesis problem and study its decidability.
Input-driven register automata. An input-driven deterministic register automaton is a two-sided register automaton whose output data are required to be the content of some registers. Formally, it is a tuple S = (Q, q_ι, R, δ, α) where Q = Q_A ⊎ Q_E, q_ι ∈ Q_A and the transition function is
δ : (Q_A × Tst → Asgn × Q_E) ∪ (Q_E × Tst^= → Asgn_∅ × Q_A),
where Tst^= consists of tests which contain at least one atom of the form ∗ = r for some r ∈ R, i.e. the output data must be equal to some specification register, and Asgn_∅ = {∅}, meaning that output data are never assigned to anything (this is without loss of generality, given that the output data has to be equal to the content of some register).
Correspondence with one-sided register automata. To an input-driven register automaton specification, we associate a one-sided register automaton by treating output registers as finite labels. Formally, let S = (Q, q_ι, R, δ, α) be an input-driven register automaton. Its associated one-sided automaton is S′ = (Tst^=, Q, q_ι, R, δ′, α) (note that the finite output alphabet is Tst^=). Up to remembering equality relations between registers, we can assume that from an output state, all outgoing transitions can be taken, independently of the register configuration, i.e. that from a reachable output configuration (q_E, τ), for all transitions t = q_E −[tst^=, ∅]→ q_A, there exists d such that (q_E, τ) −[d]→_t (q_A, τ).
This however induces a blowup of Q exponential in |R|.
The transition function is given by δ′_A = δ_A, and δ′_E(q_E, tst) = q_A if and only if δ_E(q_E, tst) = (∅, q_A). Overall, the size of S′ is exponential in |R| (because of the assumption we made on output transitions) and polynomial in |Q|.
Register transducers. A register transducer (RT) is a tuple T = (Q, q_ι, R, δ), where Q is a set of states, q_ι ∈ Q is initial, and R is a finite set of registers. The transition function δ is a (total) function δ : Q × Tst → Asgn × R × Q.
The semantics of T are provided by an associated register automaton A_T. It has states Q′ = (Q_A ∪ {⊥_A}) ⊎ (Q_E ∪ {⊥_E}), where Q_A and Q_E are two disjoint copies of Q and ⊥_A, ⊥_E jointly form a rejecting sink. It has initial state q_ι ∈ Q_A and set of registers R. Its transition function is defined as q_A −[tst, asgn]→_{A_T} q′_E −[∗ = r, ∅]→_{A_T} q′_A and q′_E −[∗ ≠ r, ∅]→_{A_T} ⊥_A whenever q −[tst | asgn, r]→_T q′, where q −[tst | asgn, r]→_T q′ stands for δ(q, tst) = (asgn, r, q′) (and →_{A_T} similarly denotes the transitions of A_T). Additionally, we let ⊥_A −[⊤, ∅]→_{A_T} ⊥_E −[⊤, ∅]→_{A_T} ⊥_A. The priority function assigns priority 2 to every state of Q_A ∪ Q_E and priority 1 to ⊥_A and ⊥_E, i.e. all states but the sink are accepting. Then, T recognises the (total) function f_T : d^A_0 d^A_1 ⋯ ↦ d^E_0 d^E_1 ⋯ such that d^A_0 d^E_0 d^A_1 d^E_1 ⋯ ∈ L(A_T). It exists since all states but the sink are accepting, and it is unique since the output transitions that avoid the sink state are determined by the input ones, and they only contain equality tests, so the corresponding output datum is unique.
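The following sketch (Python; the functional rendering of δ is our assumption) runs a register transducer on a finite input and mirrors the semantics of f_T: on each datum the assignment is performed, then the content of the designated register is emitted.

```python
def transduce(T, inputs):
    # T['R']: registers; T['init']: initial state;
    # T['delta'](q, nu, d): the unique transition on datum d from state q
    # under valuation nu, returned as (asgn, r, q'), a hypothetical
    # functional rendering of delta : Q x Tst -> Asgn x R x Q.
    q, nu = T['init'], {r: 0 for r in T['R']}
    out = []
    for d in inputs:
        asgn, r, q = T['delta'](q, nu, d)
        nu = {s: (d if s in asgn else nu[s]) for s in T['R']}  # assignment first
        out.append(nu[r])            # the output datum equals the content of r
    return out
```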
Synthesis for input-driven output specifications
Given a specification S, we say that a function f realises S if they have the same domain and its graph is included in S, i.e. dom(f) = dom(S) and for all inputs x ∈ dom(S), (x, f(x)) ∈ S. We then say that a register transducer T realises the register automaton specification S if f_T does, i.e. L(A_T) ⊆ L(S).
The register transducer synthesis problem then asks to produce T that realises S when such T exists, otherwise output "unrealisable". Note that T and S can have different sets of registers.
Proposition 4. Let S = (Q, q_ι, R, δ, α) be an input-driven register automaton, and S′ its associated one-sided register automaton. If S admits a register transducer implementation, then Eve has a winning strategy in the Church game G_{S′} associated with S′.
Proof. Assume that there exists a register transducer T which realises S. From T, we define a strategy λ_T in G_{S′}, which simulates T and S′ in parallel. Given a history d^i_0 ... d^i_n, let d^o_n be the datum output by T. As S is deterministic, there exists a unique run over the history d^i_0 d^o_0 ... d^i_n d^o_n; let t = q_E −[tst^=, ∅]→ q_A be the transition taken by S on reading d^o_n. Then, define λ_T(d^i_0 ... d^i_n) = tst^=. Now, for a play in G_{S′} consistent with λ_T, consider the associated run in S′. As T is an implementation and the sequence of transitions is feasible (as witnessed by the data given as input), this run is necessarily accepting, so λ_T is indeed a winning strategy in G_{S′}.
Proposition 5. Let S = (Q, q_ι, R, δ, α) be an input-driven register automaton, and S′ its associated one-sided register automaton. If Eve wins G^f_{S′} with a finite-memory strategy, then S admits a register transducer implementation.
Proof. Let S = (Q, q_ι, R, δ, α) be an input-driven register automaton, and S′ its associated one-sided register automaton. Assume that Eve has a finite-memory winning strategy in G^f_{S′} that is computed by a finite-state automaton M with states P, initial memory p_0, transition function μ : P × V_∃ → P and move selection s : P → V_∀. Thus, given a history h = v_0 ... v_n ∈ V^+_∃, λ_∃(h) is defined as s(p), where p_0 −[h]→_M p. Then, consider T = (Q × P, (q_ι, p_0), R, δ′). We define δ′ as follows: assume the transducer is in state (q, p) and receives an input satisfying some test tst. In S, it corresponds to some input transition δ(q, tst) = (asgn, q′). The memory is updated to μ(p, (tst, asgn)) = p′, and s(p′) = tst^=. Let r be such that tst^= ⇒ ∗ = r (such an r necessarily exists by definition of Tst^=). Then, we let δ′((q, p), tst) = (asgn, r, (q′, p′)). Now, let w = d^A_0 d^A_1 ... be an input data word, and T(w) = d^E_0 d^E_1 .... By construction, the run of S over w ⊗ T(w) = d^A_0 d^E_0 d^A_1 d^E_1 ... corresponds to a play consistent with λ_∃, so it is accepting (since it is feasible, as witnessed by w ⊗ T(w)). As a consequence, w ⊗ T(w) ∈ L(S), which means that T is indeed a register transducer implementation of S.
In the proof of Theorem 1, Eve's strategy consists in outputting a finite data word with B ≥ 0 distinct data values, and then only zeroes. Thus, it can be implemented by a register transducer with B registers, provided that its registers can be initialised with non-zero data values (in our setting, we assume all registers are initialised to 0). As a consequence, we get:

Theorem 6. For specifications defined by two-sided deterministic register automata over the data domain (Q, ≤), the register transducer synthesis problem is undecidable, provided that registers can be initialised to an arbitrary valuation.

Remark 1. The decidability status of the synthesis problem for register transducers with initial valuation 0^R is open.
Solving Church Synthesis Games on (N, ≤)
We now have the main tools in hand to solve Church synthesis games over ordered data domains. As an introduction, before the case of (N, ≤), we apply our tools to (Q, ≤).
Warm-up: the case of (Q, ≤)
First, let us observe that in that case, the automaton game is ω-regular:
Proposition 7. Let S be a one-sided register automaton over (Q, ≤). Then G^f_S is an ω-regular game.

Proof. Let S = (Σ, Q, q_ι, R, δ, α) be a one-sided register automaton over (Q, ≤), and let G^f_S = (V_∀, V_∃, v_0, E, W^f_S) be its associated automaton game. G^f_S is a finite-arena game; it remains to show that it is ω-regular, i.e. that W^f_S is ω-regular. Recall that
W^f_S = { v_0 (tst_0, asgn_0, u_0)(σ_0, v_1) ... | (tst_0, asgn_0)(tst_1, asgn_1) ... ∈ Feasible_D(R) ⇒ v_0 u_0 v_1 u_1 ... |= α }.
By Theorem 20 (on page 30), we know that Feasible_D(R) is ω-regular; since α is a parity condition, one can then build an ω-regular automaton recognising W^f_S using standard automata constructions.

From Proposition 3, we already know that for all one-sided register automata S (over (Q, ≤) or (N, ≤)), G^f_S soundly abstracts G_S. We now show the converse for (Q, ≤):
Proposition 8. Let S be a one-sided register automaton over (Q, ≤). If Eve has a winning strategy in the Church game G_S, then she has a winning strategy in G^f_S.
Proof. We show the result by contraposition. Assume that Eve does not win G^f_S. As G^f_S is ω-regular (Proposition 7), it is determined, so Adam has a winning strategy λ^f_∀ : V_∀(V_∃V_∀)^* → V_∃ in G^f_S.
We construct the winning Adam data strategy λ^Q_∀ in G_S step-by-step, by instantiating the tests on-the-fly. When the test is an equality, pick the corresponding datum, and when it is of the form r < ∗ < r′, take some rational number strictly in the interval.
Formally, suppose we are in the middle of a play: d_0 ... d_{k−1} has been played by Adam following λ^Q_∀, and σ_0 ... σ_{k−1} has been played by Eve; both sequences are empty initially. We want to determine the value d_k = λ^Q_∀(σ_0 ... σ_{k−1}). Let (v_0, ν_0)(u_0, ν_1)(v_1, ν_1)(u_1, ν_2)...(v_k, ν_k) be the current run prefix of the register automaton S (initially (v_0, ν_0)). We construct the corresponding play prefix v_0 (tst_0, asgn_0, u_0)(σ_0, v_1)(tst_1, asgn_1, u_1)(σ_1, v_2)...(σ_{k−1}, v_k) of G^f_S (initially v_0). We assume that this play prefix adheres to λ^f_∀ (this holds initially). We now consult λ^f_∀: let (tst_k, asgn_k, u_k) = λ^f_∀(v_0 ... (σ_{k−1}, v_k)).
Using tst_k and ν_k, we construct d_k as follows.
• If tst_k contains ∗ = r for some r ∈ R, we set d_k = ν_k(r).
• If tst_k contains r < ∗ for all r ∈ R, then set d_k = max(ν_k) + 1, i.e. take the largest value held in the registers plus 1.
• Similarly, if tst_k contains ∗ < r for all r ∈ R, then set d_k = min(ν_k) − 1.
• Otherwise, for every r ∈ R, the test tst_k has either r < ∗ or ∗ < r. We then pick two registers r, s such that the test contains r < ∗ and ∗ < s and no register holds a value between ν_k(r) and ν_k(s), and we set d_k = (ν_k(r) + ν_k(s))/2.
It is easy to see that d_k satisfies tst_k, i.e. (ν_k, d_k) |= tst_k. Finally, define ν_{k+1} = update(ν_k, d_k, asgn_k). Thus, the next configuration of the run in the register automaton is (u_k, ν_{k+1}). In G^f_S, the play is extended by (tst_k, asgn_k, u_k); notice that the resulting extended play again adheres to the winning Adam strategy λ^f_∀. Therefore, starting from the empty sequences of Adam data choices and Eve label choices, step-by-step we construct the values for λ^Q_∀.
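The four cases can be phrased as a small data-picking routine; here is a sketch (Python; the per-register encoding of tests is ours) that instantiates a test over (Q, ≤) with a concrete rational.

```python
from fractions import Fraction

def pick_data(tst, nu):
    # tst[r] in {'=', '<', '>'}: the datum is equal to, below, or above nu(r).
    eq = [r for r in nu if tst[r] == '=']
    if eq:                                      # case 1: * = r for some register
        return nu[eq[0]]
    lo = [nu[r] for r in nu if tst[r] == '>']   # registers below the datum
    hi = [nu[r] for r in nu if tst[r] == '<']   # registers above the datum
    if not hi:                                  # case 2: above all registers
        return max(lo) + 1
    if not lo:                                  # case 3: below all registers
        return min(hi) - 1
    return Fraction(max(lo) + min(hi), 2)       # case 4: midpoint of the gap
```

Case 4 is where density is essential: over (N, ≤), the gap between max(lo) and min(hi) may contain no value at all, which is precisely the difficulty addressed in the rest of Section 4.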
Then, each play consistent with this strategy in G_S corresponds to a unique run in S, which is also a play in G^f_S. As λ^f_∀ is winning, such a run is not accepting, so λ^Q_∀ is winning: Eve does not win G_S.
We are now ready to show:

Theorem 9. Let S = (Σ, Q, q_ι, R, δ, α) be a one-sided register automaton over (Q, ≤).
1. The problem of determining if Eve wins the Church synthesis game G = (D, D, S) is decidable in time polynomial in |Q| and exponential in c and |R|.
2. G_S is determined, i.e. either Eve or Adam has a winning strategy in G_S.
Proof of Theorem 9. First, by Propositions 3 and 8, we know that Eve wins G_S iff she wins G^f_S. By analysing the constructions of Proposition 7 and Theorem 20, we get that the automaton game G^f_S is of size polynomial in |Q| and exponential in |R|, and has a number of priorities linear in c, so it can be solved in time O((poly(|Q|) · 2^{poly(|R|)})^c), which yields item 1 of the theorem. Then, determinacy (item 2) follows from the determinacy of G^f_S, since it is equivalent to G_S.
As a consequence of Propositions 4 and 5, we also get:
Proposition 10. Let S be an input-driven register automaton, and S′ its associated one-sided register automaton. The following are equivalent:
• Eve has a winning strategy in G_{S′};
• Eve has a winning strategy in G^f_{S′};
• Eve has a finite-memory winning strategy in G^f_{S′};
• S admits a register transducer implementation;
• S admits an implementation.
Thus, we have:
Theorem 11. For specifications defined by deterministic input-driven output register automata over the data domain (Q, ≤), the register transducer synthesis problem is equivalent with the synthesis problem (for arbitrary implementations) and can be solved in time polynomial in |Q| and exponential in c and |R|.
Remark 2. For the data domain (Q, ≤), the synthesis problem for specifications defined by two-sided register automata is also decidable, if the target implementation is any program, as the Church game again reduces to a parity game: checking feasibility is still doable using a parity automaton. However, in general, register transducers might not suffice; e.g. the environment can ask the system to produce an infinite sequence of data in increasing order. Yet, it can be shown that implementations can be restricted to simple programs, which can be modelled by register transducers that have the additional ability to pick a datum between two others, e.g. by computing (d_1 + d_2)/2: such an ability suffices to translate a finite-memory strategy in the automaton game into an implementation.
We now shift to the main result of the paper, namely that Church synthesis games are decidable over (N, ≤). We start by providing some results on action sequences over (N, ≤) that highlight the difficulties and hint at how to overcome them (Section 4.2), and then use them to define an ω-regular approximation of the automaton game that we show to be sound and complete (Section 4.3).
Action sequences over (N, ≤)
Action sequences over (N, ≤) are not ω-regular. First, contrary to (Q, ≤), one needs a global condition on action sequences to check whether they are feasible. To get an intuition, consider the action sequence (⊤, {r}) · ((r > ∗), {r})^ω, which asks for an initial data value (stored in r), and then repeatedly asks to provide smaller and smaller data values. While feasible in (Q, ≤), such a sequence is not feasible in (N, ≤), as it would require an infinite descending chain in N. Actually, the discreteness of (N, ≤) implies that the set of feasible action sequences is not ω-regular in (N, ≤) (see, e.g., [21, Corollary 6.5] or [47, Appendix C]). We provide an example, for self-containedness.
Example 2. Consider the automaton of Figure 3, which essentially consists of that of Figure 1 (on page 4), where we allow Adam to repeatedly try his luck by taking the transition from C to B. Note that the priorities (written above the states) ensure that if he does so, he loses. Then, consider sequences of states in A(BC(DC)^*)^ω, where Adam initially picks a value, the game transitions to B then C, then Adam and Eve loop between C and D for some time, until at some point Adam transitions back to B, and so on. To check whether such a sequence actually corresponds to a play, one needs to check that there exists a uniform bound (the content of r_M) over the iterations of DC. Formally, plays in A(BC(DC)^*)^ω are of the form A(BC(DC)^{n_0})(BC(DC)^{n_1})... where there exists b ≥ 0 such that for all i ≥ 0, n_i ≤ b. By an elementary pumping argument, one can show that this language is not ω-regular [6]. This implies that Feasible_N(R) is not ω-regular whenever |R| ≥ 2, and neither is the automaton game. We thus consider an ω-regular overapproximation of the automaton game, and show that both games are actually equivalent.
[Figure 3: a one-sided register automaton over (N, ≤) with states A, B, C, D, E, F, F′, G, G′ (priorities 2, 2, 1, 1, 0, 2, 2, 1, 1 written above the states), extending the automaton of Figure 1 with a transition from C back to B; its transitions use the tests ⊤, r_l < ∗ < r_M and else, the assignments ↓r_M and ↓r_l, and the letters a, b.]
Constraint sequences, consistency and satisfiability
To introduce the above game, we first require a further study of Feasible N (R), that we conduct through the notion of constraint sequences. To ease the comparison between (Q, ≤) and (N, ≤), we define them for both domains. Thus, in this section, fix an ordered domain D.
Given a set of registers R (which can also be thought of as variables), we let R′ = {r′ | r ∈ R} be the set of their primed versions. Given a valuation ν ∈ D^R, define ν′ ∈ D^{R′} to be the valuation that maps ν′(r′) = ν(r) for every r ∈ R.
Definition 3. A constraint over R is a total non-strict preorder over R ∪ R′, i.e. a total order with ties allowed. It can be represented as a maximally consistent set of atoms of the form t_1 ⋈ t_2, where t_1, t_2 ∈ R ∪ R′ and the symbol ⋈ denotes one of >, <, or =.

Given a constraint C, the writing C|_R denotes the subset of its atoms r ⋈ s for r, s ∈ R, and C|_{R′} the subset of atoms over primed registers. Given a set S of atoms r′ ⋈ s′ over r′, s′ ∈ R′, let unprime(S) be the set of atoms derived by replacing every r′ ∈ R′ by r.

A state constraint relates registers at the current moment only: it contains atoms over non-primed registers, so it has no atoms over primed registers. Note that both C|_R and unprime(C|_{R′}) are state constraints.

A constraint describes how register values change in one step: their relative order at the beginning (when t_1, t_2 ∈ R), at the end (when t_1, t_2 ∈ R′), and in between (with t_1 ∈ R and t_2 ∈ R′).
Example 3. For instance, the ordering r_1 < r′_1 < r′_2 < r_2 is a constraint over R = {r_1, r_2} and can be represented by {r_1 < r_2, r_1 < r′_1, r_2 > r′_2, r′_1 < r′_2}; it is satisfied e.g. by the two successive valuations ν_a : {r_1 ↦ 1, r_2 ↦ 4} and ν_b : {r_1 ↦ 2, r_2 ↦ 3}. Similarly, r_1 = r′_1 < r_2 = r′_2 is a constraint corresponding to the set {r_1 < r_2, r_1 = r′_1, r_2 = r′_2, r′_1 < r′_2}. Note that the set {r_1 < r_2, r_1 > r′_1, r_2 < r′_2, r′_1 > r′_2} does not represent a constraint: it is not consistent, hence it does not correspond to any total preorder, as r_1 > r′_1 > r′_2 > r_2 > r_1 implies r_1 > r_1, violating irreflexivity. Another counter-example is r ≤ r′ for R = {r}: it is not a constraint since it is not total.
Definition 4.
A constraint sequence is then an infinite sequence of constraints C 0 C 1 . . . (when a sequence is finite, we explicitly state it).
It is consistent if for every i: unprime(C_i|_{R′}) = C_{i+1}|_R, i.e. the register order at the end of step i equals the register order at the beginning of step i + 1.

A valuation w ∈ D^{R∪R′} satisfies a constraint C, written w |= C, if every atom holds when we replace every t ∈ R ∪ R′ by w(t). A constraint sequence is satisfiable if there exists a sequence of valuations ν_0 ν_1 ... ∈ (D^R)^ω such that ν_i ∪ ν′_{i+1} |= C_i for all i ≥ 0. If, additionally, ν_0 = 0^R, then it is 0-satisfiable. Note that satisfiability implies consistency, but not vice versa, as we show below.
Note also that the notions of constraints and constraint sequences over (N, ≤) and over (Q, ≤) syntactically coincide. This is done on purpose, to ease the comparison between the two domains. When this matters, we always make it clear on which domain a constraint sequence is meant to be interpreted.
Finally, remark that consistency also coincides for both domains, while satisfiability does not, as witnessed by the constraint sequence ({r > r′})^ω over R = {r}: it is satisfiable in Q but not in N.
Example 4. We give a richer example. Let R = {r_1, r_2, r_3, r_4}. Let a consistent constraint sequence C_0 C_1 ... start with
{r′_2 < r_1 = r′_1 < r_2 < r_3 = r′_4 < r_4 = r′_3} {r′_1 < r_2 = r′_2 < r_1 < r_4 = r′_3 < r_3 = r′_4}.
Figure 4 visualises C_0 C_1 plus a few more constraints. The black lines represent the evolution of the same register; ignore the colored paths for now. The constraint C_0 describes the transition from moment 0 to 1, and C_1 the transition from moment 1 to 2. This finite constraint sequence is satisfiable in Q and in N. For example, the valuations can start with ν_0 = {r_4 ↦ 6, r_3 ↦ 5, r_2 ↦ 4, r_1 ↦ 3}. In N, no valuations starting with ν_0(r_3) < 5 can satisfy the sequence. Further, since the constraint C_0 requires all registers in R to differ, the sequence is not 0-satisfiable in Q nor in N.
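Since a constraint is a total preorder over R ∪ R′, a convenient machine representation is a rank function. Under this (hypothetical) encoding, the consistency condition of Definition 4 becomes a local check, illustrated below (Python) on the two constraints of Example 4.

```python
def consistent(ranks, R):
    # ranks[i]: dict mapping (register, primed) with primed in {0, 1} to an
    # integer rank realising the total preorder C_i (ties = equal ranks).
    # Consistency: unprime(C_i|R') = C_{i+1}|R, i.e. the relative order of
    # the primed registers in C_i equals that of the registers in C_{i+1}.
    sign = lambda a, b: (a > b) - (a < b)
    return all(sign(ranks[i][(r, 1)], ranks[i][(s, 1)])
               == sign(ranks[i + 1][(r, 0)], ranks[i + 1][(s, 0)])
               for i in range(len(ranks) - 1) for r in R for s in R)

# C0: r2' < r1 = r1' < r2 < r3 = r4' < r4 = r3'   (Example 4)
C0 = {('r2', 1): 0, ('r1', 0): 1, ('r1', 1): 1, ('r2', 0): 2,
      ('r3', 0): 3, ('r4', 1): 3, ('r4', 0): 4, ('r3', 1): 4}
# C1: r1' < r2 = r2' < r1 < r4 = r3' < r3 = r4'
C1 = {('r1', 1): 0, ('r2', 0): 1, ('r2', 1): 1, ('r1', 0): 2,
      ('r4', 0): 3, ('r3', 1): 3, ('r3', 0): 4, ('r4', 1): 4}
assert consistent([C0, C1], R={'r1', 'r2', 'r3', 'r4'})
```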
Chains
This section describes a characterisation of satisfiable constraint sequences that is amenable to being recognised by automata. The proofs are quite technical, so we defer them to Section 5 and for the time being only give an intuition.
Definition 5 (Chains). Fix R and a consistent constraint sequence C_0 C_1 ... over R. A (decreasing) two-way chain is a finite or infinite sequence (r_0, m_0) ⋈_0 (r_1, m_1) ⋈_1 ..., where r_i ∈ R, m_i ∈ N and ⋈_i ∈ {=, >}, satisfying the following (note that m_0 can differ from 0):
• m_{i+1} = m_i, or m_{i+1} = m_i + 1 (time flows forward), or m_{i+1} = m_i − 1 (backwards);
• if m_{i+1} = m_i, then (r_i ⋈_i r_{i+1}) ∈ C_{m_i};
• if m_{i+1} = m_i + 1, then (r_i ⋈_i r′_{i+1}) ∈ C_{m_i};
• if m_{i+1} = m_i − 1, then (r′_i ⋈_i r_{i+1}) ∈ C_{m_i − 1}.
The depth of a chain is the number of >; when it is infinite, the chain is infinitely decreasing. Figure 4 highlights four two-way chains (there are more) with yellow, blue, green and red colors. For instance, the green-colored chain c_3, defined as (r_4, 2) > (r_3, 3) > (r_2, 2) > (r_1, 3) > (r_2, 3), has depth 4. Given a moment i and a register x, a (decreasing) right two-way chain starting in (x, i) (r2w for short) is a two-way chain (x, i) ⋈_1 (r_1, m_1) ⋈_2 (r_2, m_2) ... such that m_j ≥ i and ⋈_j ∈ {=, >} for all j. Thus, all elements appear to the right of the starting moment (x, i).
We define one-way chains similarly, except that time now flows forwards or stays the same, and that they can be either increasing or decreasing:
• m_{i+1} = m_i (time does not flow), or m_{i+1} = m_i + 1 (time flows forward);
• if m_{i+1} = m_i, then (r_i ⋈_i r_{i+1}) ∈ C_{m_i};
• if m_{i+1} = m_i + 1, then (r_i ⋈_i r′_{i+1}) ∈ C_{m_i}.
A one-way chain is decreasing (respectively, increasing) if for all i ≥ 0, ⋈_i ∈ {>, =} (resp., ⋈_i ∈ {<, =}).
In Figure 4, the blue chain c_2, (r_4, 0) > (r_3, 0) > (r_2, 0) > (r_1, 0) > (r_2, 1) > (r_1, 2) > (r_2, 3), is a one-way decreasing chain of depth 6; the same sequence is also a two-way chain. The red chain c_4, (r_2, 3) < (r_1, 4) = (r_1, 5) < (r_2, 5) < (r_4, 5) < (r_3, 5), is one-way increasing of depth 4; if we read the sequence in reverse, it represents a two-way chain (two-way chains are always decreasing). Sometimes we write "chain" omitting whether it is two- or one-way.
A stable chain is an infinite chain (r_0, m) = (r_1, m + 1) = (r_2, m + 2) = ...; it can also be written as (m, r_0 r_1 r_2 ...). In Figure 4, the yellow chain c_1 = (0, (r_4 r_3)^ω) is stable. Given a stable chain χ_r = (m, r_0 r_1 ...) and a chain χ_s = (s_0, n_0) ⋈_0 (s_1, n_1) ⋈_1 ..., where n_i ≥ m for all i, the chain χ_r is above χ_s (equivalently, χ_s is below χ_r) if for all i the constraint C_{n_i} contains r_{n_i − m} > s_i or r_{n_i − m} = s_i; here we use n_i − m because the register at moment n_i in the chain χ_r is r_{n_i − m}. In Figure 4, the yellow chain (0, (r_4 r_3)^ω) is above all colored chains. A stable chain (m, r_0 r_1 ...) is maximal if it is above all other stable chains starting after m. In Figure 4, the yellow chain (0, (r_4 r_3)^ω) is maximal (assuming the sequence evolves in a similar fashion). Notice that if a sequence has a stable chain, then it has a maximal one. A ceiled chain is a chain that is below a maximal stable chain. A constraint sequence can have an infinite number of ceiled chains; it can also have zero, e.g. when there are no stable chains.
Note that in this section, we mostly focus on one-way chains and right two-way chains, while two-way chains are used in Section 5.1 as a technical intermediate. In the latter section, we show:

Lemma 12. A consistent constraint sequence is 0-satisfiable in N iff there exists b ≥ 0 such that:
1. it has no infinitely decreasing one-way chains,
2. its ceiled one-way chains have depth at most b,
3. it starts in C_0 such that C_0|_R = {r = s | r, s ∈ R}, and
4. it has no decreasing one-way chains of depth ≥ 1 from (r, 0) for any r.
In line with Example 2, the above characterisation is not ω-regular; the culprit is item 2. We thus define quasi-feasible constraint sequences, by relaxing this condition to asking that there are no infinite increasing ceiled chains.

Definition 6. A consistent constraint sequence is quasi-feasible whenever:
• it has no infinitely decreasing one-way chains,
• it has no infinitely increasing ceiled one-way chains,
• it starts in C_0 such that C_0|_R = {r = s | r, s ∈ R}, and
• it has no decreasing one-way chains of depth ≥ 1 from (r, 0) for any r.
In Section 5.3 on page 43, we show:
Lemma 26. A lasso-shaped consistent constraint sequence is 0-satisfiable if and only if it is quasi-feasible.
We conclude the section by formally relating action words (see Definition 2) with constraint sequences.
Action words and constraint sequences
Every action word naturally induces a unique constraint sequence. For instance, for registers R = {r, s}, an action word starting with ({r < ∗, s < ∗}, {s}) (test whether the current datum d is above the values of r and s, store it in s) induces a constraint sequence starting with {r = s, r = r′, s < s′, r < s′} (the atom r = s is due to all registers being equal initially). This is formalised in the next lemma, which is notation-heavy but says a simple thing: given an action word, we can construct, on the fly, a constraint sequence that is 0-satisfiable iff the action word is feasible. For technical reasons, we need a new register r_d to remember the last datum provided by Adam. The proof is on page 36, so as not to break the flow of the argument.
Lemma 13. Let R be a set of registers, R_d = R ⊎ {r_d}, and let D be (N, ≤) or (Q, ≤). There exists a mapping constr : Π × Tst × Asgn → C from state constraints Π over R_d and tests-assignments over R to constraints C over R_d, such that for all action words a_0 a_1 a_2 ... ∈ (Tst × Asgn)^ω, a_0 a_1 a_2 ... is feasible iff C_0 C_1 C_2 ... is 0-satisfiable, where for all i ≥ 0: C_i = constr(π_i, a_i), π_{i+1} = unprime(C_i|_{R′_d}), and π_0 = {r = s | r, s ∈ R_d}.
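Under the rank encoding introduced above, the mapping constr admits a direct sketch (Python; the per-register test encoding and the uniform treatment of r_d are our assumptions): the new datum is positioned on the rank line according to the test, and the primed ranks of r_d and of the assigned registers move to that position.

```python
from fractions import Fraction

def constr(pi, tst, asgn):
    # pi: ranks of the registers of R_d = R + {r_d} before the step;
    # tst[r] in {'=', '<', '>'}: the new datum d is equal to, below, or
    # above the value of r (we let tests speak about all of R_d to stay
    # total); asgn: the set of registers the datum is assigned to.
    eqs = [pi[r] for r in pi if tst[r] == '=']
    below = [pi[r] for r in pi if tst[r] == '>']   # d strictly above these
    above = [pi[r] for r in pi if tst[r] == '<']   # d strictly below these
    if eqs:
        pos = Fraction(eqs[0])
    elif not above:
        pos = Fraction(max(below) + 1)
    elif not below:
        pos = Fraction(min(above) - 1)
    else:
        pos = Fraction(max(below) + min(above), 2)  # strictly inside the gap
    C = {(r, 0): Fraction(pi[r]) for r in pi}       # old values, unprimed
    for r in pi:                                    # new values, primed
        C[(r, 1)] = pos if (r in asgn or r == 'r_d') else Fraction(pi[r])
    return C
```

Note that pos may be fractional: ranks only encode the order of R_d ∪ R′_d, and whether the resulting constraint sequence is 0-satisfiable over N is a separate question (Section 5).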
Then, given a set of registers R, we say that an action word a is quasi-feasible whenever its induced constraint sequence (obtained via constr, as in Lemma 13) is quasi-feasible. We correspondingly denote by QFeasible_N(R) the set of quasi-feasible action words over R.
4.3 The ω-regular game G^reg_S

After this long but necessary detour through constraint sequences, we are ready to define the ω-regular game associated with the automaton game. Recall that in Section 3.3, given a one-sided automaton S, we defined G^f_S = (V_∀, V_∃, v_0, E, W^f_S). We now let G^reg_S = (V_∀, V_∃, v_0, E, W^reg_S). Thus, it has the same vertices and edge relation: V_∀ = {q_ι} ∪ (Σ × Q_A), V_∃ = Tst × Asgn × Q_E, v_0 = q_ι, and E = E_0 ∪ E_∀ ∪ E_∃, where:
• E_0 = { (v_0, (tst, asgn, u_0)) | δ(v_0, tst) = (asgn, u_0) },
• E_∀ = { ((σ, v), (tst, asgn, u)) | δ(v, tst) = (asgn, u) }, and
• E_∃ = { ((tst, asgn, u), (σ, v)) | δ(u, σ) = v }.
However, the winning condition is now
W^reg_S = { v_0 (tst_0, asgn_0, u_0)(σ_0, v_1) ... | (tst_0, asgn_0)(tst_1, asgn_1) ... ∈ QFeasible_N(R) ⇒ v_0 u_0 v_1 u_1 ... |= α },
i.e., we replaced Feasible_N(R) with QFeasible_N(R). First, by Proposition 27, we know that QFeasible_N(R) is ω-regular. Thus:

Proposition 14. Let S be a one-sided automaton, and define G^reg_S as above. Then, G^reg_S is an ω-regular game.
We now show that it is equivalent with the Church game G_S.

Proposition 15. Let S be a one-sided automaton, G_S the corresponding Church game, G^f_S its automaton game, and G^reg_S its associated ω-regular game. The following are equivalent:
1. Eve has a winning strategy in G^reg_S;
2. Eve has a finite-memory winning strategy in G^reg_S;
3. Eve has a finite-memory winning strategy in G^f_S;
4. Eve has a winning strategy in G^f_S;
5. Eve has a winning strategy in G_S.
Proof. We start with the chain of implications 1 ⇒ 2 ⇒ 3 ⇒ 4 ⇒ 5.
The implication (1) ⇒ (2) holds because G^reg_S is ω-regular, and we know that those games are finite-memory determined [34].
Then, (2) ⇒ (3) follows from the fact that G^reg_S is actually harder for Eve than G^f_S, i.e. W^reg_S ⊆ W^f_S, because Feasible_N(R) ⊆ QFeasible_N(R). (3) ⇒ (4) is immediate. (4) ⇒ (5) is exactly Proposition 3.

It remains to show that (5) ⇒ (1). We proceed by contraposition. Thus, assume that Eve does not have a winning strategy in G^reg_S. By finite-memory determinacy of games with parity objectives, Adam has a finite-memory winning strategy λ^f_∀ : V_∀(V_∃V_∀)^* → V_∃ (equivalently, λ^f_∀ : Σ^* → Tst) in G^reg_S. We show the following:
Proposition 16. If Adam has a winning strategy in G^reg_S, then he has a winning strategy in G_S.
Proof. At first sight, it is not clear how to instantiate λ^f_∀ into a data strategy λ^N_∀ : Σ^* → N winning in G_S. For instance, if the strategy λ^f_∀ in G^reg_S dictates Adam to pick the test ∗ > r, it is not clear which datum λ^N_∀ should pick (ν(r) + 1? ν(r) + 2? more?), because different Eves may require different values. To construct from λ^f_∀ a strategy λ^N_∀ that beats every Eve, we show that for any finite-memory winning strategy of Adam, there is a uniform bound on the depth of all its r2w chains. This is formalised by the following claim (that we prove afterwards):
Claim 17. Let λ^f_∀ be a finite-memory strategy of Adam that is winning in G^reg_S. There exists a bound b ≥ 0 such that for each play ρ consistent with λ^f_∀, for each right two-way chain γ of the constraint sequence induced by ρ (starting in some (r, i) ∈ R × N), depth(γ) ≤ b.
Thanks to the existence of this uniform bound b, we can construct λ^N_∀ from λ^f_∀ as follows. First, translate the currently played action-word prefix (tst_0, asgn_0)...(tst_m, asgn_m) into a constraint-sequence prefix using Lemma 13. Then apply to it the data-assignment function from Lemma 28. By construction, for each play in G_S consistent with λ^N_∀, the corresponding run in S is a play consistent with λ^f_∀ in G^reg_S. As λ^f_∀ is winning, such a run is not accepting, i.e. the play is winning for Adam in G_S. Therefore, λ^N_∀ is a winning strategy of Adam in G_S. (End of the proof of Proposition 16.)

As a consequence, Eve does not have a winning strategy in G_S, which means that (5) ⇒ (1). (End of the proof of Proposition 15.)

We are left to prove Claim 17.
Boundedness of right two-way chains induced by Adam (Proof of Claim 17)
Proof idea. If Adam plays a finite-memory strategy and a decreasing right two-way chain γ is sufficiently deep, Eve can force Adam to loop in a memory state in a way such that the loop can be iterated while preserving the chain. We can additionally ensure that this chain contains a strictly decreasing or strictly increasing segment. When iterated, this segment makes the chain infeasible. Indeed, if the segment is decreasing, iterating the loop yields an infinite descending chain in N, which is not feasible. The case of an increasing segment happens when γ is decreasing from right to left (recall that it is a two-way chain), so increasing from left to right. When iterated, this yields an infinite increasing chain, which is perfectly fine in N. However, it can be bounded from above with the help of γ: before decreasing from right to left, γ has to go from left to right, since it is a right chain (i.e. it is not allowed to go to the left of its initial position). On the strictly increasing segment, this left-to-right prefix is either constant or decreasing, so when the loop is iterated it provides an upper bound for our increasing chain.
Proof. We now move to the formal proof. We could use a Ramsey argument in the spirit of Lemma 23 to extract an infinite one-way chain that is either increasing or decreasing. However, this amounts to breaking a butterfly upon the wheel, and we prefer to rely on a simpler pumping argument, which also gives a finer-grained perception of what is happening there. In particular, it provides a bound b that does not depend on a Ramsey number.
Thus, let λ^f_∀ be a finite-memory strategy of Adam with memory M that is winning in G^reg_S. Suppose, towards a contradiction, that there exists a play ρ that is consistent with λ^f_∀ and which contains a decreasing right two-way chain of depth D > |M| · 2^{2|R|²}. We denote it γ = (r_0, m_0) ⋈_0 (r_1, m_1) ⋈_1 (r_2, m_2) ⋈_2 ... ⋈_{n−1} (r_n, m_n), where for all i, ⋈_i ∈ {>, =}, r_i ∈ R and m_i ∈ N. Given a two-way chain and a position i ≥ m_0, we define the crossing section at i as the sequence of registers that occur at position i, ordered by their appearance in the chain: γ_i is the maximal subword of γ that contains letters of the form (r, i) for some r ∈ R (see Fig. 5a, where we depict a chain that has two identical crossing sections at positions i and j). This construction is reminiscent of the techniques that are used to study loops in two-way automata or transducers, hence the name. At each position, there are at most |M| distinct memory states for Adam, less than 2^{|R|²} distinct crossing sections and less than 2^{|R|²} possible orderings of the registers. As a consequence, there exist two positions m_0 ≤ i < j such that γ_i = γ_j, the memory state of Adam at position i is the same as at position j, the order between registers at position i is the same as at position j, and there is at least one occurrence of > in the chain segment between them. Since λ^f_∀ is finite-memory, Eve can repeat her actions between positions i and j indefinitely to iterate this fragment of the play ρ. Since the crossing sections match and the order between registers is the same at positions i and j, we can glue the chain fragments together to get an infinite two-way chain (see Fig. 5b), with infinitely many occurrences of >. There are two cases:
• There is a fragment that strictly decreases from left to right (as the chain fragment over register r_4 in Fig. 5b). Then, when Eve repeats her actions indefinitely, this yields an infinite descending chain, which means that the play is not feasible (Lemma 22), so Eve wins. This contradicts the fact that λ^f_∀ is winning.
• All decreasing fragments occur from right to left (as do the fragments over r_2 and r_1 in Fig. 5b). Necessarily, the topmost fragment, i.e. the fragment of the register that appears first in γ_i, is left-to-right, since γ is a right two-way chain. It is not strictly decreasing, otherwise we are back to the first case. Then, the strictly decreasing fragments are bounded from above by this constant fragment. Iterating the loop yields an infinite increasing chain that is bounded from above, which means that the play is again not feasible, so we again obtain a contradiction.
Overall, the depth of the decreasing right two-way chains induced by λ^f_∀ is uniformly bounded by b = |M| · 2^{2|R|²}, where |M| is the size of Adam's memory.
We finally have all the cards in hand to show:

Theorem 18. Let S = (Σ, Q, q_ι, R, δ, α) be a one-sided register automaton over (N, ≤).
1. The problem of determining the winner of the Church synthesis game G = (D, D, S) is decidable in time polynomial in |Q| and exponential in c and |R|.
2. G_S is determined, i.e. either Eve or Adam has a winning strategy in G_S.

As a consequence of Proposition 15, since finite-memory winning strategies of Eve in G^f_S correspond to register transducer implementations (Propositions 4 and 5), we also get:
Theorem 19. For specifications defined by deterministic input-driven output register automata over the data domain (N, ≤), the register transducer synthesis problem is equivalent with the synthesis problem (for arbitrary implementations) and can be solved in time polynomial in |Q| and exponential in c and |R|.
Satisfiability of Constraint Sequences in (N, ≤)
This section studies the problem of checking whether a given infinite sequence of constraints can be satisfied with values from the domain N. Recall that constraints and constraint sequences are respectively defined in Definitions 3 and 4 on page 22. This section's structure is:
• We start with a simple and relatively known result on the satisfiability of constraint sequences in the data domain Q. We then focus entirely on N.
• Section 5.1 describes conditions on chains that characterise satisfiable constraint sequences (in N).
• Section 5.2 describes a "max-automata" characterisation of satisfiable constraint sequences, which checks the conditions on chains introduced in Section 5.1.
• In the study of Church synthesis games on N, lasso-shaped constraint sequences and their satisfiability play a crucial role: we rely on them when proving Proposition 15. The satisfiability of such sequences is the focus of Section 5.3, which shows that the regularity of the sequences allows for a characterisation of satisfiability using classical ω-regular automata instead of max-automata. Thus, in the context of Church synthesis games, the max-automaton characterisation is not used.
• Section 5.4 shows that "depth-bounded" constraint sequences can be mapped to satisfying valuations on-the-fly: such a data-assignment function is used when proving the decidability of Church synthesis games (Proposition 15), namely, to show that winning strategies of Adam in the abstracted finite-alphabet games can be instantiated to winning data strategies of Adam in Church synthesis games.
Satisfiability of constraint sequences in Q
Before proceeding to our main topic of satisfiability of constraint sequences in N, we describe, for completeness, similar results for Q.
The following result is glimpsed in several places (e.g. in [47, Appendix C]): a constraint sequence is satisfiable in Q iff it is consistent. This is a consequence of the following property, which holds because Q is dense: for every constraint C and ν ∈ Q^R such that ν |= C|_R, there exists μ ∈ Q^R such that ν ∪ μ′ |= C. Consistency can be checked by comparing every two consecutive constraints of the sequence. Thus it is not hard to show that consistent (hence satisfiable) constraint sequences in Q are recognisable by deterministic parity automata.
Theorem 20. There is a deterministic parity automaton with two colors and of size exponential in |R| that accepts exactly all constraint sequences satisfiable (or 0-satisfiable) in Q.
To prove the result, we first show that a constraint sequence in Q is satisfiable iff it is consistent, then we construct an automaton checking the consistency.
Lemma 21. Let R be a set of registers and D = Q. A constraint sequence C_0 C_1 ... is satisfiable iff it is consistent. It is 0-satisfiable iff it is consistent and C_0|_R = {r_1 = r_2 | r_1, r_2 ∈ R}.
Proof. Direction ⇒ is simple for both claims, so we only prove direction ⇐.
Consider the first claim, direction ⇐. Assume the sequence is consistent. We construct ν_0 ν_1 ... ∈ (Q^R)^ω such that ν_i ∪ ν′_{i+1} |= C_i for all i. The construction proceeds step-by-step and relies on the following fact (†): for every constraint C and ν ∈ Q^R such that ν |= C|_R, there exists μ ∈ Q^R such that ν ∪ μ′ |= C. Then define ν_0, ν_1, ... as follows: start with an arbitrary ν_0 satisfying ν_0 |= C_0|_R. Given ν_i |= C_i|_R, let ν_{i+1} be any valuation in Q^R that satisfies ν_i ∪ ν′_{i+1} |= C_i (it exists by (†)). Since ν′_{i+1} |= C_i|_{R′}, and unprime(C_i|_{R′}) = C_{i+1}|_R by consistency, we have ν_{i+1} |= C_{i+1}|_R, and we can apply the argument again.
We are left to prove the fact (†). The constraint C completely specifies the order on R ∪ R′, while ν fixes the values for R, and ν |= C|_R. Hence we can uniquely order the registers R′ and the values {ν(r) | r ∈ R} of R on the Q-line. Since Q is dense, it is always possible to choose values for R′ that respect this order; we leave out the details.
Consider the second claim, direction ⇐. Since C_0 C_1 ... is consistent, by the first claim it is satisfiable, hence it has a witnessing valuation sequence ν_0 ν_1 .... The constraint C_0 requires all registers in R to start with the same value, so define d = ν_0(r) for an arbitrary r ∈ R. Let ν̂_0 ν̂_1 ... be the valuations decreased by d: ν̂_i(r) = ν_i(r) − d for every r ∈ R and i ≥ 0. The new valuations satisfy the constraint sequence because constraints in Q are invariant under shifts (this follows from the fact that if r_1 < r_2 holds for some ν ∈ D^R, then it holds for any ν − d where d ∈ D). The equality ν̂_0 = 0^R means that the constraint sequence is 0-satisfiable.
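The shift argument is short enough to state as code; a sketch (Python):

```python
def shift_to_zero(valuations):
    # When C_0 forces all registers of the first valuation to agree,
    # subtracting this common value yields a 0-satisfying sequence
    # (constraints over Q are invariant under shifts).
    d = next(iter(valuations[0].values()))
    return [{r: v - d for r, v in nu.items()} for nu in valuations]
```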
We now prove Theorem 20.
Proof of Theorem 20. The sought automaton has an alphabet consisting of all constraints. By Lemma 21, for satisfiability, it suffices to construct an automaton that checks consistency, namely that every two adjacent constraints C_1 C_2 in the input word satisfy the condition unprime(C_1|_{R′}) = C_2|_R. We only sketch the construction. The automaton memorises the atoms C_1|_{R′} of the last constraint C_1 in its state, and on reading the next constraint C_2, it checks that unprime(C_1|_{R′}) = C_2|_R. If this holds, the automaton transits into the state that remembers C_2|_{R′}; if the check fails, the automaton goes into a rejecting sink state, and so on. The automaton for checking 0-satisfiability additionally checks that C_0|_R = {r = s | r, s ∈ R}. The number of states is exponential in |R|, the number of colors is 2, and in fact the so-called safety (aka looping) acceptance suffices.
For the rest of this section, we focus on domain N.
Chains characterise satisfiability of constraint sequences
In this section we prove the characterisation of satisfiable constraint sequences that we used to ω-regularly approximate the automaton game over (N, ≤) (Section 4.2). Recall that chains are defined in Definition 5 on page 23.
While the target characterisation relies on one-way chains, we start by presenting a characterisation using two-way chains: such chains compare register values forwards and backwards in time. This characterisation is intuitive and easy to prove but difficult to implement using one-way automata. Therefore, we later provide an alternative characterisation using one-way chains, which read constraint sequences in the forward direction only. The lifting from two-way to one-way chains is done using the Ramsey theorem [45]. A similar proof strategy is employed in [47, Appendix C], but our notion of chains is simpler and we describe the previously missing application of the Ramsey theorem. We start with the definitions of two-way chains, then describe the characterisations in Lemmas 22 and 23.
Lemma 22. A consistent constraint sequence is satisfiable in N iff
A2. it has no infinite-depth two-way chains, and B2. every ceiled two-way chain has a bounded depth (i.e., there exists b ∈ N such that the depth of every ceiled two-way chain is ≤ b).
Proof. The direction ⇒ is proven by contradiction: if A2 is not satisfied, then one needs infinitely many values below the maximal initial value of a register to satisfy the sequence, which is impossible in N. Similarly for B2. We now state this formally. Suppose a constraint sequence C_0 C_1 ... is satisfied by some valuations ν_0 ν_1 .... Towards a contradiction, assume that A2 does not hold, i.e. there is an infinite decreasing two-way chain χ = (r_0, m_0)(r_1, m_1).... Let ν_{m_0}(r_0) = d be the data value at the start of the chain. Each decrease (r_i, m_i) > (r_{i+1}, m_{i+1}) in the chain χ requires the data to decrease as well: ν_{m_i}(r_i) > ν_{m_{i+1}}(r_{i+1}), so there must be an infinite number of data values between d and 0, which is impossible in N. Hence A2 must hold. Now consider B2. If there are no ceiled chains, we are done, so assume there is at least one ceiled chain. Then there exists a maximal stable chain, by definition. Let d′ be the value of the registers in the maximal stable chain. All ceiled chains lie below the maximal stable chain, therefore the values of their registers are bounded by d′. Hence the depth of each such chain is bounded by b = d′, so B2 holds.

The direction ⇐. Given a consistent constraint sequence C_0 C_1 ... satisfying A2 and B2, we construct a sequence of register valuations ν_0 ν_1 ... such that ν_i ∪ ν′_{i+1} |= C_i for all i ≥ 0 (recall that ν′ = {r′ ↦ ν(r) | r ∈ R}). For a register r and moment i ∈ N, let d(r, i) be the largest depth of two-way chains from (r, i); such a number exists by assumption B2; it is not ∞ by assumption A2; it can be 0. Then, for every r ∈ R and i ∈ N, set ν_i(r) = d(r, i).
We now prove that for all i the satisfaction ν_i ∪ ν′_{i+1} |= C_i holds, i.e. all atoms of C_i are satisfied. Pick an arbitrary atom t_1 ⋈ t_2 of C_i, where t_1, t_2 ∈ R ∪ R′ and ⋈ ∈ {=, >}. Define m_{t_1} = i + 1 if t_1 is a primed register, else m_{t_1} = i; similarly define m_{t_2}. There are two cases.

• t_1 ⋈ t_2 is t_1 = t_2. Then the deepest chains from (t_1, m_{t_1}) and (t_2, m_{t_2}) have the same depth, d(t_1, m_{t_1}) = d(t_2, m_{t_2}), and hence ν_i ∪ ν′_{i+1} satisfies the atom.

• t_1 ⋈ t_2 is t_1 > t_2. Then any chain (t_2, m_{t_2}) ... from (t_2, m_{t_2}) can be prefixed by (t_1, m_{t_1}) to create the deeper chain (t_1, m_{t_1}) > (t_2, m_{t_2}) .... Hence d(t_1, m_{t_1}) > d(t_2, m_{t_2}), therefore ν_i ∪ ν′_{i+1} satisfies the atom.
This concludes the proof.
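To make the definitions concrete, here is a small Python checker (in the same illustrative encoding) verifying that a sequence of valuations witnesses satisfaction; this is exactly the property ν_i ∪ ν′_{i+1} |= C_i claimed for the constructed valuations.

    def satisfies(constraints, valuations):
        # Check that nu_i together with nu_{i+1} (as the primed copy) realises
        # exactly the order prescribed by each constraint C_i.
        assert len(valuations) == len(constraints) + 1
        for c, nu, nu_next in zip(constraints, valuations, valuations[1:]):
            def val(term):
                reg, primed = term
                return (nu_next if primed else nu)[reg]
            for t1, l1 in c.items():
                for t2, l2 in c.items():
                    # agreement on all '<' comparisons also forces '=' agreement
                    if (l1 < l2) != (val(t1) < val(t2)):
                        return False
        return True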
Remark. The proof describes a data-assignment function which maps a sequence of constraints to a sequence of valuations satisfying it. Such functions are widespread, see e.g. [47, Lemma C.7] or [17, Lemma 15]. Later, in Section 5.4, we describe a different kind of data-assignment function, which does not see the whole constraint sequence beforehand but only the prefix read so far. This changes how much the register values get separated from each other: from b in the above proof to approximately 2^b.
The previous lemma characterises satisfiability in terms of two-way chains, but our final goal is the characterisation by automata. It is hard to design a one-way automaton tracing two-way chains, so we lift the previous lemma to one-way chains.
Lemma 23. A consistent constraint sequence is satisfiable in N iff
A1. it has no infinitely decreasing one-way chains, and B1. every ceiled one-way chain has a bounded depth (i.e., there exists b ∈ N such that the depth of every ceiled one-way chain is ≤ b).
We describe a proof idea then provide a full proof.
Proof idea. We start from Lemma 22 and show that hypotheses A2 and B2 can be refined to A1 and B1, respectively. From an infinite (decreasing) two-way chain we can always extract an infinite decreasing one-way chain, since two-way chains are infinite to the right and not to the left: for every moment i there always exists a moment j > i such that a register of the chain at step j is smaller than a register of the chain at step i. Then, given a sequence of ceiled two-way chains of unbounded depth, we are able to construct a sequence of one-way chains of unbounded depth. This construction is more difficult than in the above case. Indeed, even though by hypothesis there are deeper and deeper ceiled two-way chains, they may start at later and later moments in the constraint sequence and go to the left. Hence one cannot simply take an arbitrarily deep two-way chain and extract an arbitrarily deep one-way chain from it. However, we demonstrate, using a Ramsey argument, that it is still possible to extract arbitrarily deep one-way chains, since the two-way chains are not completely independent.
Proof. Thanks to Lemma 22, it suffices to show that A1 ⇔ A2 and B1 ⇔ B2. The implications A2 ⇒ A1 and B2 ⇒ B1 follow from the definitions of chains.

Now let us show that ¬A2 ⇒ ¬A1: let C_0 C_1 ... be a consistent constraint sequence, and assume that it has an infinite two-way chain χ = (r_a, i) .... We then construct an infinite decreasing one-way chain χ′. The construction is illustrated in Figure 6. Our one-way chain χ′ starts in (r_a, i). The area to the left of moment i contains only i · |R| points, but χ has an infinite depth, hence at some point it must go to the right of i. Let r_b be the smallest register visited at moment i by χ; we first assume that r_b is different from r_a (the other case is below). Let χ move from (r_b, i) to some (r_c, i+1), via a step ⋈ ∈ {=, >}. We append this to χ′ and get χ′ = (r_a, i) > (r_b, i) ⋈ (r_c, i+1). If r_a and r_b are actually the same, so the chain χ moves from (r_a, i) to (r_c, i+1), then we append only (r_a, i) ⋈ (r_c, i+1). By repeating the argument from the point (r_c, i+1), we construct the infinite decreasing one-way chain χ′. Hence ¬A1 holds.

Now let us show ¬B2 ⇒ ¬B1. Given a sequence of ceiled two-way chains of unbounded depth, we need to create a sequence of ceiled one-way chains of unbounded depth. We extract a witnessing one-way chain of a required depth from a sufficiently deep two-way chain. To this end, we represent the two-way chain as a clique with colored edges, whose one-colored subcliques represent all one-way chains. We then use the Ramsey theorem, which says that a monochromatic subclique of a required size always exists if a clique is large enough. From the monochromatic subclique we extract the sought one-way chain.
The Ramsey theorem [45] is about clique graphs with colored edges. For a number n ∈ N of vertices, let K_n denote the clique graph and let E_{K_n} be its set of edges. For a number #c of colors, an edge-coloring function is a function color : E_{K_n} → {1, ..., #c}; a clique is monochromatic if all its edges have the same color. The Ramsey theorem says: fix the number #c of edge colors; for every n there exists l such that for every coloring color : E_{K_l} → {1, ..., #c}, there is a monochromatic subclique of K_l with n vertices. I.e., for any given n, there is a sufficiently large size l such that any colored clique of this size contains a monochromatic subclique of size n. The number l is called the Ramsey number for (#c, n); it depends only on #c and n, and is independent of the coloring function. We will use the theorem with three colors only: #c = 3. Given a sequence of two-way chains of unbounded depth, we show how to build a sequence of one-way chains of unbounded depth. Suppose we want to build a one-way chain of depth n, and let l be the Ramsey number for (3, n). Since the two-way chains from the sequence have unbounded depth, there is a two-way chain χ of depth l. From it we construct the following colored clique (the construction is illustrated in Figure 7).
• Remove stuttering elements from χ: whenever (r_i, m_i) = (r_{i+1}, m_{i+1}) appears in χ, remove (r_{i+1}, m_{i+1}). We repeat this until no stuttering elements appear. Let χ_> = (r_1, m_1) > ⋯ > (r_l, m_l) be the resulting sequence; it is strictly decreasing and contains l pairs (the same as the depth of the original χ). Note the following property ( †): for every, not necessarily adjacent, (r_i, m_i) > (r_j, m_j), there is a one-way chain (r_i, m_i) ... (r_j, m_j); it is decreasing if m_i < m_j, and increasing otherwise; its depth is at least 1. The resulting sequence may skip points in time, but this (as will be explained later) does not affect the construction.

• The elements (r, m) of χ_> serve as the vertices of the colored clique. The edge-coloring function is: for every, not necessarily adjacent, (r_a, m_a) > (r_b, m_b) in χ_>, let color((r_a, m_a), (r_b, m_b)) be → if m_a < m_b, ← if m_a > m_b, and ↓ if m_a = m_b.
Thus, we assign a color to an edge between every two vertices. Figure 7b gives an example.
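For small instances, the Ramsey step can be carried out by brute force. The Python sketch below searches a colored clique for a monochromatic subclique; the vertex set and the coloring mirror the construction above but are chosen for illustration only, and the exhaustive search is purely didactic.

    from itertools import combinations

    def monochromatic_subclique(vertices, color, n):
        # Search for n vertices whose pairwise edges all carry the same color.
        # The Ramsey theorem guarantees success once len(vertices) reaches the
        # Ramsey number for (#colors, n); the search itself is exponential.
        for subset in combinations(vertices, n):
            colors = {color(u, v) for u, v in combinations(subset, 2)}
            if len(colors) == 1:
                return subset, colors.pop()
        return None

    # Vertices are chain elements (register, moment); edges are colored by
    # comparing moments, as in the proof.
    elems = [("r1", 5), ("r2", 3), ("r3", 7), ("r1", 9), ("r2", 1)]
    col = lambda u, v: "fwd" if u[1] < v[1] else ("bwd" if u[1] > v[1] else "same")
    print(monochromatic_subclique(elems, col, 3))   # a monochromatic triple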
By applying the Ramsey theorem, we get a monochromatic subclique of size n with vertices V ⊆ {(r_1, m_1), ..., (r_l, m_l)}. Its color cannot be ↓ when n > |R|, because a time line has at most |R| points. Suppose the subclique's color is ← (the case → is similar). We build the increasing sequence χ′ = (r′_1, m′_1) < ⋯ < (r′_n, m′_n), where m′_i < m′_{i+1} and (r′_i, m′_i) ∈ V for every i. The sequence χ′ may not satisfy the definition of one-way chains, because the removal of stuttering elements that we performed at the beginning can cause time jumps, i.e. m′_{i+1} > m′_i + 1. But it is easy, relying on the property ( †), to construct a one-way chain χ″ of depth n from χ′ by inserting the necessary elements between (r′_i, m′_i) and (r′_{i+1}, m′_{i+1}). In the case when the subclique has color →, the resulting constructed chain is decreasing.
Thus, for every given n, we constructed either a decreasing or an increasing ceiled one-way chain of depth n; in other words, a sequence of such chains of unbounded depth. Hence ¬B1 holds, which concludes the proof.
The next easy lemma (first stated on page 24) refines the characterisation to 0-satisfiability: Lemma 12. A consistent constraint sequence is 0-satisfiable in N iff there exists b ≥ 0 such that:
1. it has no infinitely decreasing one-way chains, 2. the ceiled one-way chains have a depth of at most b, 3. it starts in C_0 s.t. C_{0|R} = {r = s | r, s ∈ R}, and 4. it has no decreasing one-way chains of depth ≥ 1 from (r, 0) for any r.
Proof. Direction ⇒. The first two items follow from Lemma 23; the third one follows from the definition of 0-satisfiability. Consider the last item: suppose there is such a chain. Then, at the moment when the chain strictly decreases into some register s, the register s would need to hold a value below 0, which is impossible in N.
Direction ⇐. The first two items are exactly A1 and B1 from Lemma 23, so the sequence is satisfiable, hence it also satisfies the conditions A2 and B2 from Lemma 22. In the proof of Lemma 22, we showed that in this case the following valuations ν_0 ν_1 ... satisfy the sequence: for every r ∈ R and moment i ∈ N, set ν_i(r) (the value of r at moment i) to the largest depth of the two-way chains starting in (r, i). We construct ν_0 ν_1 ... as above and get a witness of satisfaction of our constraint sequence. Note that ν_0 = 0^R, by the last item. Hence the constraint sequence is 0-satisfiable.
Action words and constraint sequences
In this section, we provide the proof of the following lemma, stated on page 25:
Lemma 13. Let R be a set of registers, R_d = R ∪ {r_d}, and let D be (N, ≤) or (Q, ≤). There exists a mapping constr : Π × Tst × Asgn → C from state constraints Π over R_d and tests-assignments over R to constraints C over R_d, such that for all action words a_0 a_1 a_2 ... ∈ (Tst × Asgn)^ω: a_0 a_1 a_2 ... is feasible iff C_0 C_1 C_2 ... is 0-satisfiable, where ∀i ≥ 0: C_i = constr(π_i, a_i), π_{i+1} = unprime(C_{i|R′_d}), and π_0 = {r = s | r, s ∈ R_d}.
Proof. Given π, tst, asgn, we define the mapping constr : (π, tst, asgn) ↦ C as follows. The definition is as expected, but we should be careful about the handling of r_d; see the last item.
• The constraint C includes all atoms of the state constraint π (that relates the registers at the beginning of the step).
• Recall that neither tst nor asgn talks about r_d. For readability, we shorten (t_1 ⋈ t_2) ∈ C to simply t_1 ⋈ t_2, (∗ ⋈ r) ∈ tst to ∗ ⋈ r, and a ≤ b means (a < b) ∨ (a = b).
• We define the order at the end of the step as follows. For every two different r, s ∈ R:
  - r′ = s′ iff (r = s) ∧ r, s ∉ asgn, or r ∈ asgn ∧ (∗ = s) ∧ s ∉ asgn, or r, s ∈ asgn;
  - r′ < s′ iff (r < s) ∧ r, s ∉ asgn, or (∗ < s) ∧ r ∈ asgn ∧ s ∉ asgn, or (r < ∗) ∧ s ∈ asgn ∧ r ∉ asgn;
  - r′ = r′_d iff (r = ∗) or r ∈ asgn;
  - r′ ⋈ r′_d iff (r ⋈ ∗) ∧ r ∉ asgn, for ⋈ ∈ {<, >};
• So far we have defined the order of the registers at the beginning and at the end of the step. Now we relate the values between these two moments. For every r ∈ R:
  - r = r′ iff r ∉ asgn, or r ∈ asgn ∧ (∗ = r);
  - r ⋈ r′ iff r ∈ asgn ∧ (r ⋈ ∗), for ⋈ ∈ {<, >};
• Finally, we relate the values of r_d between the two moments. There are two cases.
  - The value of r_d crosses another register: ∃r ∈ R : (r_d < r) ∧ (∗ ≥ r). Then (r′_d > r_d). Similarly for the opposite direction: if ∃r ∈ R : (r_d > r) ∧ (∗ ≤ r), then (r′_d < r_d).
  - Otherwise, the value of r_d does not cross any register boundary. Then r′_d = r_d.
A transcription of this case analysis into code is sketched below.
Using the mapping constr, every action word a = (tst_0, asgn_0)(tst_1, asgn_1) ... can be uniquely mapped to the constraint sequence C_0 C_1 ... as follows: C_0 = constr(π_0, tst_0, asgn_0), set π_1 = unprime(C_{0|R′_d}), then C_1 = constr(π_1, tst_1, asgn_1), and so on.

We now prove that an action word is feasible iff the constructed constraint sequence is 0-satisfiable. This follows from the definitions of feasibility and 0-satisfiability, and from the following simple property of feasible action words: every feasible action word has a witness ν_0 d_0 ν_1 d_1 ⋯ ∈ (D^R · D)^ω such that if the same test is repeated twice and no assignment is done, then the data value d stays the same. This property is needed due to the last item in the definition of constr, where we set r′_d = r_d.
Max-automata recognise satisfiable constraint sequences
This section presents an automaton characterisation of constraint sequences satisfiable in N. The automaton construction verifies the conditions on one-way chains stated in Lemma 23: the absence of (A1) infinite decreasing one-way chains and of (B1) unbounded one-way ceiled chains. The boundedness requirement of the second condition cannot be checked by ω-regular automata (for a formal statement, see [47, Theorem 4.3], saying that the class of languages of finite-alphabet projections of constraint automata and the class of ωB-languages coincide), and for that reason the authors of [47] used nondeterministic ωB-automata. Since nondeterminism is usually hard to handle in synthesis, we picked deterministic max-automata [8], which are incomparable with ωB-automata expressivity-wise. We now define max-automata and then present the characterisation.
Deterministic max-automata extend classic finite-alphabet parity automata with a finite set of counters c_1, ..., c_n which can be incremented, reset to 0, or updated by taking the maximal value of a set of counters; the counters cannot be tested. On reading a word, the automaton builds a sequence of counter valuations. The acceptance condition is given as a conjunction of the parity acceptance condition and a Boolean combination of conditions "counter c_i is bounded along the run". Such a condition on a counter is satisfied by a run if there exists a bound b ∈ N such that counter c_i has value at most b along the run. By using negation, conditions such as "c_i is unbounded along the run" can also be expressed. A run is accepting if it satisfies the parity condition and the Boolean formula on the counter conditions. Deterministic max-automata are strictly more expressive than ω-regular automata. For instance, they can express the non-ω-regular language of words of the form a^{n_1} b a^{n_2} b ... such that n_i ≤ b for all i ≥ 0, for some b ∈ N that can vary from word to word. A max-automaton recognising this language is shown in Figure 8.
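The behaviour of this automaton on finite prefixes is easy to simulate. The sketch below records the counter history; on an infinite word, the acceptance condition "counter c is bounded" asks whether the recorded values admit a common bound b, which a finite simulation can only approximate.

    def run_counter(word_prefix):
        # Counter c counts the current block of a's and is reset on every b,
        # mirroring the max-automaton of Figure 8 (the max operation is unused).
        c, history = 0, []
        for letter in word_prefix:
            if letter == 'a':
                c += 1                      # increment
            elif letter == 'b':
                history.append(c)
                c = 0                       # reset
        return history

    print(run_counter("aaab" "ab" "aaaaab"))   # block lengths [3, 1, 5]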
We now prove the main result of this section.

Theorem 24. For every R, there is a deterministic max-automaton accepting exactly all constraint sequences satisfiable in N. The number of states is exponential in |R|, the number of counters is O(|R|^2), and the number of priorities is polynomial in |R|. The same holds for 0-satisfiability in N.
Proof idea. We design a deterministic max-automaton that checks conditions A1 and B1 of Lemma 23. Condition A1, namely the absence of infinitely decreasing one-way chains, is checked as follows. We construct a nondeterministic Büchi automaton that guesses a chain and verifies that it is infinitely decreasing, i.e. that '>' occurs infinitely often and that there is no '<' (only '>' and '='). Determinising and complementing yields a deterministic parity automaton, which can be conjoined, through a synchronised product, with the deterministic max-automaton checking condition B1. The latter condition (the absence of ceiled one-way chains of unbounded depth) is more involved. We design a master automaton that tracks every chain χ that currently exhibits a stable behaviour. To every such chain χ, the master automaton assigns a tracer automaton whose task is to ensure the absence of unbounded-depth ceiled chains below χ. For that, the tracers use 2|R| counters (one set for tracking increasing and one for tracking decreasing chains) and require them to be bounded. We use the max operation on counters to ensure that we trace the largest chains only. The overall acceptance condition ensures that if the chain χ is stable, then there are no ceiled chains below χ of unbounded depth. Finally, we take the product of all these automata, which preserves determinism.
In the next section, we provide the details of the proof.
Proof of Theorem 24
We describe a max-automaton A that accepts a constraint sequence iff it is consistent and has no infinitely decreasing one-way chains and no ceiled one-way chains of unbounded depth. By Lemma 23, such a sequence is satisfiable.
The automaton has three components
A = A c ∧ A ¬∞ ∧ A b .
A_c The parity automaton A_c checks consistency, i.e. that ∀i : unprime(C_{i|R′}) = C_{i+1|R}. It has a number of states exponential in |R| and two priorities (it recognises a safety language).
A_¬∞ The parity automaton A_¬∞ ensures there are no infinitely decreasing one-way chains. First, we construct its negation, an automaton that accepts a constraint sequence iff it has such a chain. Intuitively, the automaton guesses such a chain and then verifies that the guess is correct. It loops in the initial state q_ι until it nondeterministically decides that now is the starting moment of the chain, guesses the first register r_0 of the chain, and transits into the next state while memorising r_0. When the automaton is in a state with r and reads a constraint C, it guesses the next register r_n, verifies that (r > r′_n) ∈ C or (r = r′_n) ∈ C, and transits into the state that remembers r_n. The Büchi acceptance condition ensures that the automaton leaves the initial state and transits from some r to some r_n with (r > r′_n) ∈ C infinitely often. Determinising and complementing this automaton gives A_¬∞. The number of states is exponential and the number of priorities is polynomial in |R|, due to the determinisation.
A_b The max-automaton A_b ensures that all ceiled one-way chains have bounded depth. It relies on a master automaton controlling a team of |R| chain tracers Tr = {tr_1, ..., tr_{|R|}}. Each tracer tr is equipped with a counter idle_tr and a set Cn_tr of 2|R| counters, thus overall there are |R|(2|R| + 1) counters. The construction ensures that every stable chain is tracked by a single tracer tr and its counter idle_tr is bounded; and vice versa, if a tracer tr has its counter idle_tr bounded, it tracks a stable chain. Suppose for a moment that tracer tr tracks a stable chain χ. Then the goal of the counters Cn_tr is to track the deepest increasing and decreasing chains below χ. Since there are only |R| registers, it suffices to track |R| decreasing chains, each chain ending in a different register (similarly for increasing chains). This is because there is no need to track two decreasing chains ending in the same register: once the two chains "meet" in a register r, we continue tracking only the one with the larger depth and forget about the other. We use the max operation of the automata to implement this idea. Overall, the construction ensures that the counters in Cn_tr are bounded iff the increasing and decreasing chains ceiled by the stable chain tracked by the tracer tr have bounded depths. The acceptance condition of A_b is the formula ⋀_{tr ∈ Tr} (idle_tr is bounded → ⋀_{c ∈ Cn_tr} (c is bounded)).
The work of tracers is controlled by the master automaton via four commands idle ("track nothing"), start ("start tracking a potentially stable chain"), move ("continue tracking"), and reset ("stop tracking"). Before we formally describe the master and the tracers, we define the concept of "levels" used in the presentation. Intuitively, the levels abstract concrete data values, and the tracers actually track the levels instead of specific registers.
Fix a constraint C. A level l ∈ 2^R \ {∅} is an equivalence class of registers wrt. C_{|R} or wrt. unprime(C_{|R′}). Thus, in the constraint C we distinguish levels of two kinds: start levels (at the beginning of the step) and end levels (at the end of the step). A start level l disappears when C contains no atoms of the form r = s′ for r ∈ l and s ∈ R; this means that the data value abstracted by the level disappears from the registers. An end level l′ is new if C contains no atoms of the form r = s′ where r ∈ R and s ∈ l′; intuitively, the constraint requires a new data value to appear in the registers l′. A start level l morphs into an end level l′ if C contains an atom r = s′ for some r ∈ l and s ∈ l′; i.e., the constraint requires the registers in l′ to hold the data value previously held by the registers in l. Notice that there can be at most |R| start and |R| end levels, for a fixed constraint C. Figure 9 illustrates the definitions. We are now ready to describe the master and the tracers.
Master. States of A_b are of the form (getTr, q̄), where the partial mapping getTr maps a level l ∈ 2^R \ {∅} to a tracer tr ∈ Tr, and q̄ = (q_1, ..., q_{|Tr|}) describes the states of the individual tracers. The master updates the component getTr while the tracers update their states. Initially there is only one start level, R (assuming the registers start with the same value), so we define getTr = {R ↦ tr_1}. Suppose the automaton reads a constraint C; let L and L′ be the start and end levels of C, and suppose the automaton is in state (getTr, q̄) with getTr : L ⇀ Tr. We define the successor state (getTr′, q̄′), where getTr′ : L′ ⇀ Tr, and the operations on the counters using the following procedure (a code sketch of the level bookkeeping follows the list).
• To every tracer tr that does not currently track a level, i.e. tr ∈ Tr \ getTr(L), the master commands idle (causing the tracer to increment idle_tr).
• For every start level l ∈ L that morphs into l′ ∈ L′: let tr = getTr(l), then
  - the master sends move(r⋆) to tr, where r⋆ ∈ l′ is chosen arbitrarily; this causes the tracer tr to update its counters Cn_tr and move into a successor state q′_tr; the register r⋆ will be used as a descriptor of the stable chain tracked by tr;
  - we set getTr′(l′) = getTr(l), thus the tracer continues its tracking.
• For every start level l ∈ L that disappears: let tr = getTr(l), then
  - the master sends reset to tr, which causes the reset of the counters in Cn_tr and the increment of idle_tr.
• For every new end level l′ ∈ L′:
  - we take an arbitrary tr that is not yet mapped by getTr′ and map getTr′(l′) = tr;
  - the master sends start to tr.
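The level bookkeeping per constraint can be sketched as follows, in the illustrative level encoding used earlier. From the three returned components the master dispatches move, reset, and start, and sends idle to the unused tracers.

    def level_moves(constraint):
        # Start levels: equivalence classes over R; end levels: classes over R'
        # (returned with unprimed names).  A start level morphs into an end
        # level when they share a level of C, i.e. the data value survives.
        def classes(primed):
            by_level = {}
            for (reg, p), l in constraint.items():
                if p == primed:
                    by_level.setdefault(l, set()).add(reg)
            return by_level
        start, end = classes(False), classes(True)
        morphs = {frozenset(start[l]): frozenset(end[l]) for l in start if l in end}
        disappears = [frozenset(start[l]) for l in start if l not in end]
        new = [frozenset(end[l]) for l in end if l not in start]
        return morphs, disappears, new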
Tracers. We now describe the tracer component. Its goal is to trace the depths of ceiled chains: when the counters of a tracer are bounded, the depths of the chains it tracks are also bounded. The tracer consists of two components, B↓ and B↑, which track decreasing and increasing chains, respectively. We only describe B↓; the other one is similar.
The component B↓ has a set Cn ∪ {idle} of |R| + 1 counters. A state of B↓ is either the initial state q_ι or a partial mapping getCn : R ⇀ Cn. Intuitively, in each getCn-state, for each register r mapped by getCn, the value of the counter getCn(r) reflects the depth of the deepest ceiled decreasing one-way chain ending in r. When several chains end in r, the counter gets the maximal value of their depths. We maintain this property of getCn during the transition of B↓ on reading a constraint C, using the operations of max-automata on counters and the register-order information from C. The component B↓ does the following:
• If the master's command is idle, increment the counter idle and stay in q_ι.
• If the master's command is reset, reset all counters in Cn, increment the counter idle, and go into the state q_ι.
• If the master's command is start, move from the state q_ι into the state with the empty mapping getCn.
Otherwise, the master's command is move(r⋆), for some r⋆ ∈ R passed by the master and serving as a descriptor of the stable chain traced by the current tracer: at the end of the step, the value of r′⋆ is the ceiling. The tracer performs the operations on its counters and updates the mapping getCn as follows; all order comparisons below are read off the constraint C, which relates R ∪ R′.

• Release counters. For every tracked r whose value at the end of the step is no longer strictly below the ceiling, i.e. (r′ ≥ r′⋆) ∈ C, the component resets the counter getCn(r) and removes r from the mapping getCn. I.e., we stop tracking chains ending in register r, since such chains are no longer below the stable chain assigned to the tracer.
• Allocate counters. For every r such that (r ≥ r′⋆) ∈ C and (r′ < r′⋆) ∈ C, i.e. r newly moved strictly below the ceiling: pick a counter c ∈ Cn \ getCn(R) and map getCn(r) = c. I.e., we start tracking chains ending in r.
• Update counters. For every r such that (r < r′⋆) ∈ C and (r′ < r′⋆) ∈ C, do the following. Let R_{>r} = {r_o | (r′ < r_o) ∈ C and (r_o < r′⋆) ∈ C} be the registers whose value at the beginning of the step is larger than the value of the updated r at the end of the step but below the ceiling, and let getCn(R_{>r}) be the associated counters. Let r_= be a register s.t. (r_= = r′) ∈ C (it may not exist). We update the counter getCn(r) depending on the case (see the sketch after this list):
  - R_{>r} is empty and r_= does not exist: this means that no decreasing ceiled chain can be extended into r. Then we reset the counter getCn(r).
  - R_{>r} is empty and r_= exists: only the chains ending in r_= can be extended into r, and since r_= = r′, the deepest chain keeps its depth. Therefore we copy getCn(r_=) into the counter getCn(r).
  - R_{>r} is not empty and r_= does not exist: the chains from the registers in R_{>r} can be extended into r, and since the value of r is lower than the values of the registers in R_{>r}, their depths increase. The new value of counter getCn(r) must reflect the deepest chain, therefore the counter gets the value max(getCn(R_{>r})) + 1.
  - R_{>r} is not empty and r_= exists: some chains from the registers in R_{>r} can descend into r, and there is also a chain from r_= that can be extended into r without its depth changing. The counter gets max(max(getCn(R_{>r})) + 1, getCn(r_=)), which describes the deepest resulting chain.
The number of states in B↓ is at most |R|^{|R|} + 1, and its number of counters is |R| + 1. The construction for B↑ is similar, except that it tracks increasing ceiled chains instead of decreasing ones. Together, B↓ and B↑ have 2|R| + 1 counters (they share idle). Since we use |R| tracers, the total number of counters is |R|(2|R| + 1). Overall, A_b has an exponential in |R| number of states, the number of counters is in O(|R|^2), and the parity condition is trivial. This concludes the description of the tracers and of the automaton A_b.
We have described all three components of A = A_c ∧ A_¬∞ ∧ A_b, where A_c expresses a safety language, A_¬∞ is a classic deterministic parity automaton, and A_b is a deterministic max-automaton with a trivial parity acceptance condition. All the automata have no more than an exponential in |R| number of states, A_¬∞ has a polynomial in |R| number of colors, and A_b has a polynomial in |R| number of counters. It is not hard to see that the product of these automata gives the desired automaton A with exponentially many states and polynomially many colors and counters, in |R|. The acceptance condition is the parity acceptance in conjunction with the formula of A_b described on page 39.
Finally, for the case of 0-satisfiability, the automaton A also needs to verify the additional conditions stated in Lemma 12; in particular, there shall be no decreasing one-way chains of depth ≥ 1 from moment 0. This check is simple and omitted. This concludes the proof of Theorem 24.
Remark. In [47,Appendix C] it is shown that satisfiable constraint sequences in N are characterised by nondeterministic ωB-automata [6]. These automata are incomparable with deterministic max-automata.
The following two languages separate these classes: the complement of (a^B b)^ω is recognised by deterministic max-automata but not by nondeterministic ωB-automata, and {a^{n_1} b a^{n_2} b a^{n_3} b ... | lim inf n_i < ∞} witnesses the opposite direction. The latter language is recognisable by a nondeterministic ωB-automaton which guesses a bounded subsequence of n_1 n_2 ...; its non-recognisability by deterministic max-automata follows from [8, Section 6].

We prove the claim about the complement of (a^B b)^ω. First, the language (a^B b)^ω is recognisable by deterministic ωB-automata and hence by deterministic max-automata. Since deterministic max-automata are closed under complement, the complement of (a^B b)^ω is also recognisable by deterministic max-automata. Now, towards a contradiction, assume that the complement of (a^B b)^ω is recognisable by nondeterministic ωB-automata. The result [6, Lemma 2.5] says: if an ωB language over the alphabet {a, b} contains a word with infinitely many bs, then it contains a word from (a^B b)^ω. The complement of (a^B b)^ω contains a word with infinitely many bs (e.g. take any word from (a^S b)^ω) but no word from (a^B b)^ω. Contradiction. Hence it is not an ωB language.
Satisfiability of lasso-shaped sequences
An infinite sequence is lasso-shaped (or regular; such words are also called ultimately periodic in the literature) if it is of the form w = u v^ω. Lasso-shaped sequences are prevalent in automata theory and in the data setting in particular. For instance, [21] studies satisfiability of the logic Constraint LTL in the data domain (N, ≤) and shows that considering lasso-shaped witnesses of satisfiability is sufficient. Another work [26] shows that if there is an ω-regular overapproximation of satisfiable constraint sequences which is exact on lasso-shaped sequences, then a synthesis problem is decidable in (N, ≤). In this paper, when proving the decidability of the Church synthesis problem, we do not directly rely on lasso-shaped sequences, but we use a characterisation similar to the one proven in this section.
This section shows that considering lasso-shaped constraint sequences greatly simplifies the characterisation of satisfiability. We first show how lasso-shaped sequences simplify condition B1 of Lemma 23, then describe the chain characterisation under the assumption of lasso-shaped sequences, and finally state the ω-regular automaton characterisation.
Lemma 25. A lasso-shaped consistent constraint sequence has ceiled one-way chains of unbounded depth iff it has ceiled one-way chains of infinite depth.
Proof. Direction ⇐ is trivial, so consider direction ⇒. The argument uses the standard pumping technique. Fix a lasso-shaped constraint sequence C_0 ... C_{k−1} (C_k ... C_{k+l})^ω having ceiled chains of unbounded depth. Since these chains have unbounded depth, they pass through the constraint C_k more and more often. At the moments when the current constraint is C_k, each such chain is in one of the finitely many registers. Hence there is a chain, say increasing, that on two separate occasions of reading the constraint C_k goes through the same register r, and the chain segment from the first pass through r until the second pass contains at least one <. Then we create an increasing chain of infinite depth by repeating this segment forever.
The above lemma together with Lemma 12 yields the following result.
Lemma 26. A lasso-shaped consistent constraint sequence is 0-satisfiable iff it is quasi-feasible, i.e.:
• it has no infinite-depth decreasing one-way chains, • it has no ceiled infinite-depth increasing one-way chains, • it has no decreasing one-way chains of depth ≥ 1 from moment 0, and
• it starts with C 0 s.t. C 0|R = {r = s | r, s ∈ R}.
The conditions of this lemma can be checked by an ω-regular automaton; its construction is similar to the components A_c and A_¬∞ from the proof of Theorem 24 and is omitted. Hence we get the theorem below.
Theorem 27. For every R, there is a deterministic parity automaton that accepts a lasso-shaped constraint sequence iff it is 0-satisfiable in N; its number of states and priorities is exponential and polynomial in |R|, respectively.
Data-assignment function
In this section, we design a data-assignment function that maps a sequence of constraints to a sequence of register valuations satisfying it, while doing so on the fly, i.e. by reading the constraint sequence from left to right. It is significant that the entire constraint sequence is not known in advance. Such a function is used in Section 3 when proving Proposition 15, namely that Adam's winning strategy in the finite-alphabet game transfers to a winning strategy in the Church synthesis game. There, Adam has to produce data values given only the prefix of a play.
In the next section, we state the lemma on existence of a data-assignment function, and then devote a significant amount of space to proving it.
Lemma 28 on existence of a data-assignment function
Intuitively, a data-assignment function produces register valuations while reading a constraint sequence from left to right. We are interested in functions that produce register valuations satisfying given constraint sequences. Since data-assignment functions cannot look into the future and do not know how many values will be inserted between any two registers, knowing a certain bound on such insertions is necessary. Moreover, to simplify the presentation, we restrict how many new data values can appear during the step. In our Church synthesis games, at most one new value provided by Adam can appear. We start by defining data-assignment functions, then describe the assumptions and state the lemma.
Let C denote the set of all constraints over the registers R, and let C_{|R} denote the set of all constraints with atoms over R only. A data-assignment function has the type (C_{|R} ∪ C^+) → N^R: it maps a constraint sequence C_0 C_1 ... to the sequence of valuations f(C_{0|R}) f(C_0) f(C_0 C_1) ....
We now describe the two assumptions used by our data-assignment function. Intuitively, the first assumption states that only a bounded number of insertions between any two registers can happen, and this bound is known. To formalise the assumption, we define a special kind of chains, called right two-way chains. Informally, right chains are two-way chains that operate to the right of their starting point. Knowing a bound on the depths of right chains amounts to knowing how many values can be inserted between the registers in the future. Fix a constraint sequence. Given a moment i and a register x, a (decreasing) right two-way chain starting in (x, i) (r2w for short) is a two-way chain (x, i) ⋈_1 (r_1, m_1) ⋈_2 (r_2, m_2) ... such that m_j ≥ i and ⋈_j ∈ {=, >} for all j. As these chains are two-way, they can start and end in the same moment i. Notice that in Lemma 22 on the characterisation of satisfiable constraint sequences we can replace two-way chains by r2w chains. Our data-assignment function will assume the knowledge of a bound on the depths of r2w chains.
We now describe the second assumption, about one-new-value appearance during a step. Its formalisation uses the notion of levels introduced in Section 5.2 on page 39 (see also Figure 9); we briefly recall it. Recall that a constraint describes a set of totally ordered equivalence classes of registers from R ∪ R′. Consider, for example, a constraint defined by the ordered equivalence classes {r_4, r′_4} < {r_2} < {r_3, r′_3} < {r_1, r′_2, r′_1}. It induces two columns of levels, start levels and end levels, where a level describes a set of registers equivalent at that moment. The assumption ( †) says:
In every constraint of a given sequence, at most one new end level appears. ( †)
The example constraint above satisfies this assumption; the one in Figure 9 does not. The assumption helps to simplify the proofs and is satisfied by the constraint sequences induced in our Church synthesis games. One final notion before stating the lemma: a constraint sequence is 0-consistent if it is consistent, starts in C_0 with C_{0|R} = {r = s | r, s ∈ R}, and has no decreasing chains of depth ≥ 1 starting at moment 0. Note that a 0-consistent constraint sequence whose r2w chains are bounded is 0-satisfiable (this follows from Lemma 22).
Lemma 28 (data-assignment function). For every b ≥ 0, there exists a data-assignment function f : (C_{|R} ∪ C^+) → N^R such that for every finite or infinite 0-consistent constraint sequence C_0 C_1 C_2 ... satisfying assumption ( †) and whose r2w chains are depth-bounded by b, the register valuations f(C_{0|R}) f(C_0) f(C_0 C_1) ... satisfy the constraint sequence.
Proof idea. We define a special kind of xy^{(m)}-chains that help to estimate how many insertions between the values of registers x and y at moment m we can expect in the future. As it turns out, without knowing the future, the distance between x and y has to be exponential in the number of insertions that are still possible. We describe a data-assignment function that maintains such exponential distances. The function is surprisingly simple: if the constraint inserts a register x between two registers r and s with already assigned values d_r and d_s, then set d_x = (d_r + d_s)/2; and if the constraint puts a register x above all other registers, then set d_x = d_M + 2^b, where d_M is the largest value currently held in the registers and b is the given bound on the depth of r2w chains.
The rest of the section is devoted to the proof of this lemma.
Proof of Lemma 28
xy (m) -connecting chains and the exponential nature of register valuations
Fix an arbitrary 0-satisfiable constraint sequence C_0 C_1 ... whose r2w chains are depth-bounded by b. Consider a moment m and two registers x and y such that (x > y) ∈ C_m.
We would like to construct witnessing valuations ν_0 ν_1 ... using the current history only, e.g. a register valuation ν_m at moment m given only the prefix C_0 ... C_{m−1}. Note that the prefix C_0 ... C_{m−1} also defines the ordered partition of the registers at moment m, since C_{m−1} is defined over R ∪ R′. Let us see how much space we might need between ν_m(x) and ν_m(y), relying on the fact that the depths of r2w chains are bounded by b. Consider decreasing two-way chains that start at a moment i ≤ m, end in (x, m), and are contained within the time moments {i, ..., m}. Further, consider decreasing two-way chains starting in (y, m), ending at a moment j ∈ {i, ..., m}, and contained within the time moments {j, ..., m}. Among such chains, pick two chains of depths α and β, respectively, that maximise the sum α + β. After seeing C_0 C_1 ... C_{m−1}, we do not know how the constraint sequence will evolve, but by the boundedness of r2w chains, any r2w chain starting in (x, m) and ending in (y, m) (contained within the time moments ≥ m) will have a depth d ≤ b − α − β (otherwise, we could prepend the α-chain and append the β-chain to it and construct an r2w chain of depth larger than b). We conclude that ν_m(x) − ν_m(y) ≥ b − α − β, since the number of values between two registers should be greater than or equal to the depth of the longest two-way chain connecting them. To simplify the upcoming arguments, we introduce xy^{(m)}-connecting chains, which consist of the α and β parts and directly connect x to y.
An xy^{(m)}-connecting chain is any r2w chain of the form (a, i) ... (x, m) > (y, m) ... (b, j): it starts in (a, i) and ends in (b, j), where i ≤ j ≤ m and a, b ∈ R, and it directly connects x to y at moment m. Note that it is located solely within the moments {i, ..., m}. Continuing the previous example: the xy^{(m)}-connecting chain starts with the α-part, directly connects (x, m) > (y, m), and ends with the β-part; its depth is α + β + 1 (we have "+1" no matter how many registers are between x and y, since x and y are connected directly).
With this new notion, the requirement ν_m(x) − ν_m(y) ≥ b − α − β becomes ν_m(x) − ν_m(y) ≥ b − d_xy + 1, where d_xy is the largest depth of xy^{(m)}-connecting chains.
However, since we do not know how the constraint sequence evolves after C_0 ... C_{m−1}, we might need even more space between the registers at moment m. Consider the following example, with R = {r_0, r_1, r_2} and the bound b = 3 on the depth of r2w chains.
• Suppose at moment 1, after seeing the constraint C_0 = {r′_1, r′_2} > {r_0, r_1, r_2, r′_0}, the valuation is ν_1 = {r_0 ↦ 0; r_1, r_2 ↦ 3}. It satisfies ν_1(r_2) − ν_1(r_0) ≥ b − d_{r_2 r_0} + 1 (indeed, b = 3 and d_{r_2 r_0} = 1 at this moment); similarly for ν_1(r_1) − ν_1(r_0).
• Let the constraint C_1 be {r_1, r_2, r′_2} > {r′_1} > {r_0, r′_0}. What value ν_2(r_1) should register r_1 have at moment 2? Note that the assignment should work no matter what C_2 will be. Since the constraint C_1 places r_1 between r_0 and r_2 at moment 2, we can only assign ν_2(r_1) = 2 or ν_2(r_1) = 1. If we choose 2, then the constraint C_2 = {r_2, r′_2} > {r′_1} > {r_1} > {r_0, r′_0}, which inserts a new value between r_2 and r_1, shows that there is not enough space between r_2 and r_1 at moment 2 (ν_2(r_2) = 3 and ν_2(r_1) = 2). Similarly for ν_2(r_1) = 1: the constraint C_2 = {r_2, r′_2} > {r_1} > {r′_1} > {r_0, r′_0}, inserting a new value between r_1 and r_0, eliminates any possibility of a correct assignment.
Thus, at moment 2, the register r_1 should be equally distanced from r_0 and r_2, i.e. ν_2(r_1) ≈ (ν_2(r_0) + ν_2(r_2))/2, since its evolution can go either way, towards r_2 or towards r_0. This hints at the exponential nature of the distances between the registers. It is formalised in the next lemma, showing that any data-assignment function that places two registers x and y at some moment m closer than 2^{b−d_xy} is bound to fail. Intuitively, b − d_xy describes how many more times an insertion between the values of registers x and y can happen in the future. Since each newly inserted value should be equidistant from the boundaries, we get the 2^{b−d_xy} lower bound.
Lemma 29 (tightness). Fix b ≥ 3, registers R with |R| ≥ 3, a 0-consistent constraint-sequence prefix C_0 ... C_{m−1} where m ≥ 1 and whose r2w chains are depth-bounded by b, two registers x, y ∈ R s.t. (x′ > y′) ∈ C_{m−1}, and a data-assignment function f : (C_{|R} ∪ C^+) → N^R. Let ν_m = f(C_0 ... C_{m−1}) and let d_xy be the maximal depth of xy^{(m)}-connecting chains. If ν_m(x) − ν_m(y) < 2^{b−d_xy}, then there exists a continuation C_m C_{m+1} ... such that the whole sequence C_0 C_1 ... is 0-consistent and its r2w chains are depth-bounded by b (hence it is 0-satisfiable), yet the valuations produced by f do not satisfy it.
Proof. We use the idea from the previous example. (If at moment m there are registers different from x and y, we add a step that makes them equal to x or to y: this affects neither the depth of the xy-connecting chains at moments m and m+1 nor the maximal depths of r2w chains; therefore, below we assume that at moment m every register is equal to x or to y.) To ensure consistency, every constructed constraint C_m contains all atoms over R that are implied by the atoms over R′ of C_{m−1}. The constraints C_m C_{m+1} ... are defined as follows.

(1) If b − d_xy = 0, we are done: ν_m(x) − ν_m(y) < 2^{b−d_xy} gives ν_m(x) ≤ ν_m(y), but C_{m−1} requires ν_m(x) > ν_m(y). The future constraints then simply keep the registers constant.

(2) Otherwise, when b − d_xy > 0, the constraint C_m keeps x and y unchanged and places a register z between them: (x′ > z′), (z′ > y′) ∈ C_m. This gives d_xz = d_zy = d_xy + 1 ≤ b, where d_xy is the largest depth of connecting chains for xy^{(m)}, d_xz for xz^{(m+1)}, and d_zy for zy^{(m+1)}. Since ν_{m+1}(x) − ν_{m+1}(y) < 2^{b−d_xy}, either ν_{m+1}(x) − ν_{m+1}(z) < 2^{b−d_xz} or ν_{m+1}(z) − ν_{m+1}(y) < 2^{b−d_zy}; this is the key observation (if both differences were at least 2^{b−d_xy−1}, their sum would be at least 2^{b−d_xy}). If the first case holds, we are back in the original setting, at moment m+1 and with registers x and z; in the second case, with registers z and y. Hence we repeat the whole procedure, again and again, until reaching the depth b, where case (1) applies and gives the sought conclusion.

Finally, it is easy to prove that the whole constructed constraint sequence C_0 C_1 ... is 0-consistent and that all its r2w chains are depth-bounded by b (hence, by Lemma 12, it is 0-satisfiable): (a) up to the initial moment m, all r2w chains are depth-bounded by b by assumption; and (b) the procedure deepens only the xy-connecting chains, and only until the depth b, whereas the other r2w chains keep their depths unchanged. This concludes the proof of Lemma 29.

The tightness Lemma 29 tells us that if a data-assignment function exists, it must separate the register values by at least 2^{b−d_xy}. Such separation is also sufficient, as we show below. We first describe a data-assignment function, then prove an invariant about it, and finally conclude with the proof of Lemma 28. For simplicity, we assume that the constraints contain a register r_0 that never changes and always holds 0; this is not true in general, so later we lift this assumption.

Data-assignment function. Given the valuation ν_m at moment m and the constraint C_m, the function assigns the values at moment m+1 as follows.

D1. If a register x at moment m+1 lays above all the registers of moment m, then ν_{m+1}(x) = ν_m(a) + 2^b, where a is one of the largest registers at moment m. In Church games this case happens when the test contains the atom ∗ > r.

D2. If a register x at moment m+1 lays between two adjacent registers a > b of moment m, then ν_{m+1}(x) = (ν_m(a) + ν_m(b))/2. In Church games this happens when the test contains a > ∗ > b.

D3. If a register x at moment m+1 equals a register r of the previous moment m, i.e. (r = x′) ∈ C_m, then ν_{m+1}(x) = ν_m(r). In Church games this case corresponds to a test containing the atom ∗ = r for some register r.
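Items D1-D3 yield a compact implementation. Below is a sketch, assuming the level encoding used earlier and a register "r0" pinned to 0; midpoints are integral because the function keeps the distances at powers of two.

    def assign(constraint, nu, b):
        # One step of items D1-D3: nu is the valuation at moment m, constraint
        # is C_m; returns the valuation at moment m+1.
        lvl = lambda reg, primed: constraint[(reg, primed)]
        regs = sorted(nu, key=lambda r: lvl(r, False))   # old registers, low to high
        nu_next = {}
        for x in nu:
            same = [r for r in regs if lvl(r, False) == lvl(x, True)]
            if same:                                     # D3: x equals an old register
                nu_next[x] = nu[same[0]]
                continue
            below = [r for r in regs if lvl(r, False) < lvl(x, True)]
            above = [r for r in regs if lvl(r, False) > lvl(x, True)]
            if not above:                                # D1: x lays above everything
                nu_next[x] = nu[below[-1]] + 2 ** b
            else:                                        # D2: between adjacent a > b
                a, lo = above[0], below[-1]              # 'below all' never occurs: r0
                nu_next[x] = (nu[a] + nu[lo]) // 2
        return nu_next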
Note that the case when a register x must lay below all the registers never happens, since the special register r_0 always holds 0 and a given constraint sequence is 0-consistent, hence it never requires (r_0 > r′) for some register r. This is where r_0 comes in handy.
Invariant. The data-assignment function satisfies the following invariant:
∀m ∈ N, ∀x, y ∈ R s.t. (x > y) ∈ C_m: ν_m(x) − ν_m(y) ≥ 2^{b−d_xy}, where d_xy is the largest depth of xy^{(m)}-connecting chains and b is the bound on the depth of r2w chains.
Proof of the invariant. The invariant holds initially, since (r_1 = r_2) ∈ C_0 for all r_1, r_2 ∈ R. Assuming it holds at step m, we show that it holds at m+1. Fix two arbitrary registers x, y ∈ R such that (x′ > y′) ∈ C_m; we prove that ν_{m+1}(x) − ν_{m+1}(y) ≥ 2^{b−d_xy}, where d_xy is the largest depth of xy^{(m+1)}-connecting chains. There are four cases, depending on whether the levels of x and y at moment m+1 are present at moment m or not, illustrated in Figure 10 (by assumption ( †), at most one new level can appear per step).

Case 1: both present. The levels of x and y at moment m+1 also exist at moment m. Let a, b be registers with (a > b) ∈ C_m laying at moment m on the same levels as x and y at moment m+1. By the data-assignment function (item D3), ν_m(a) = ν_{m+1}(x) and ν_m(b) = ν_{m+1}(y). Note that the number of levels between x and y and between a and b may differ. Consider the depths of connecting chains for ab^{(m)} and xy^{(m+1)}: since every ab^{(m)}-connecting chain can be extended to an xy^{(m+1)}-connecting chain of the same depth, we have d_ab ≤ d_xy (in fact d_ab = d_xy, but that is not needed here), and hence 2^{b−d_ab} ≥ 2^{b−d_xy}. Using the inductive hypothesis, we conclude that ν_{m+1}(x) − ν_{m+1}(y) = ν_m(a) − ν_m(b) ≥ 2^{b−d_ab} ≥ 2^{b−d_xy}.

Case 2: x is new top. The register x at moment m+1 lays on a new top level, and y lays on a level that was also present at moment m. This corresponds to item D1. Let (b = y′) ∈ C_m and let a lay on the largest level at moment m (a and b may coincide). Thus ν_{m+1}(x) = ν_m(a) + 2^b. The invariant holds for x, y because ν_{m+1}(x) = ν_m(a) + 2^b, ν_m(a) ≥ ν_m(b) = ν_{m+1}(y), and 2^b ≥ 2^{b−d_xy}.

Case 3: x is middle new, y was present. The register x at moment m+1 lays on a new level between the levels of two registers a and b adjacent at moment m, so ν_{m+1}(x) = (ν_m(a) + ν_m(b))/2 by item D2 of the data-assignment function. The register y at moment m+1 lays on a level that was also present at moment m, witnessed by a register c. Formally, C_m contains a > x′ > b for a and b adjacent at moment m, (c = y′) ∈ C_m, and ν_m(c) ≤ ν_m(b) (note that c and b may coincide). The insertion of x between a and b deepens the ab-connecting chains by one, so d_ab + 1 ≤ d_xy ≤ b by the boundedness of r2w chains. Hence d_ab ≤ b − 1, so ν_m(a) − ν_m(b) ≥ 2^{b−d_ab} ≥ 2, implying ν_m(a) > (ν_m(a) + ν_m(b))/2 > ν_m(b). Then ν_{m+1}(x) − ν_{m+1}(y) = (ν_m(a) + ν_m(b))/2 − ν_m(c) ≥ (ν_m(a) − ν_m(b))/2 ≥ 2^{b−d_ab−1} ≥ 2^{b−d_xy}.

Case 4: x was present, y is middle new. Symmetrically, y at moment m+1 lays on a new level between two registers a and b adjacent at moment m, so ν_{m+1}(y) = (ν_m(a) + ν_m(b))/2, while x is witnessed by a register r with (r = x′) ∈ C_m. In general, for any register r of moment m and the register s′ on the new level: when (r > s′) ∈ C_m we get ν_m(r) ≥ ν_m(a), and when (r < s′) ∈ C_m we get ν_m(r) ≤ ν_m(b). Here (r > y′) ∈ C_m, hence ν_{m+1}(x) = ν_m(r) ≥ ν_m(a), and therefore ν_{m+1}(x) − ν_{m+1}(y) ≥ ν_m(a) − (ν_m(a) + ν_m(b))/2 = (ν_m(a) − ν_m(b))/2 ≥ 2^{b−d_ab−1} ≥ 2^{b−d_xy}, since, as in Case 3, d_ab + 1 ≤ d_xy; therefore we are done.
Finally, the function always assigns nonnegative numbers, from N, so we are done.
Lifting the assumption about 0
We now lift the assumption about a register always holding 0, which was used in the definition of the data-assignment function (items D1, D2, D3). The idea is to convert a given constraint sequence over registers R into a sequence over registers R ∪ {r_0} while preserving satisfiability.
Conversion function. Given a 0-consistent constraint sequence C_0 C_1 ... over R without a special register holding 0, we construct, on the fly, a 0-consistent sequence C̃_0 C̃_1 ... over R_0 = R ∪ {r_0} that has such a register. Intuitively, we add an atom r = r_0 only if it follows from what is already known; otherwise we add the atom r > r_0.
Initially, in addition to the atoms of C_0, we require r = r_0 for every r ∈ R (recall that the original C_0 contains r_1 = r_2 for all r_1, r_2 ∈ R). This gives an incomplete constraint C̃_0 over R_0 ∪ R′_0: it does not yet have the atoms of the form r ⋈ r′_0, r_0 ⋈ r′, and r′_0 ⋈ r′, where r ∈ R_0 and ⋈ ∈ {<, =, >}.
At moment m ≥ 0, given a constraint C̃_{m|R_0} over R_0 (without the primed registers R′_0) and a constraint C_m over R ∪ R′ (without the register r_0), we construct C̃_m over R_0 ∪ R′_0 as follows:
• C̃_m contains all atoms of C_m.
• (r_0 = r′_0) ∈ C̃_m.
• For every r ∈ R: if r′ = r′_0 is implied by the current atoms of C̃_m, then we add it; otherwise we add r′ > r′_0. Notice that the atom r′ < r′_0 is never implied by C̃_m, as we show now. Suppose the contrary. Then, since C_m talks about neither r_0 nor r′_0, there should be s ∈ R such that (s = r_0) ∈ C̃_{m|R_0} and (s > r′) ∈ C_m. By construction, if this is the case, then there is a one-way chain (r_1, 0) = (r_2, 1) = ... = (s, m) of zero depth. As a consequence, we can construct the one-way decreasing chain (r_1, 0) = (r_2, 1) = ... = (s, m) > (r, m+1) of depth 1, which implies that C_0 C_1 ... is not 0-consistent. We reached a contradiction, so (r′ < r′_0) ∈ C̃_m is not possible.
• Finally, to make C̃_m maximal, we add all atoms implied by C̃_m but not present there.
Using this construction, we can easily define c0nv : C^+ → C̃^+ and map a given 0-consistent constraint sequence C_0 C_1 ... to C̃_0 C̃_1 ... with a dedicated register holding 0. Notice that the constructed sequence is also 0-consistent, because we never add inconsistent atoms and never add an atom r′ < r′_0 (see the third item). Finally, in the constructed sequence the depths of r2w chains can increase by at most 1 due to the register r_0: it can increase the depth of a finite chain by one, unless the chain already ends in a register holding 0. Hence we get the following lemma.
Lemma 30. For every 0-consistent constraint sequence C_0 C_1 ..., the sequence C̃_0 C̃_1 ... constructed with c0nv is also 0-consistent. Moreover, the maximal depth of its r2w chains cannot increase by more than 1.
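A sketch of c0nv in the atom encoding used earlier: it tracks the set of registers provably equal to 0 and adds the r_0-atoms accordingly; the final closure with all further implied atoms is omitted here.

    def c0nv(constraints, R):
        # Add a fresh register r0 meant to hold 0 forever; add r' = r0' only
        # when the equality is implied through the set 'zero', else r' > r0'.
        out, zero = [], set(R)       # at moment 0 all registers are equal, hence 0
        for c in constraints:
            atoms = set(c) | {("r0", '=', "r0'")}
            zero_next = {r for r in R if any((s, '=', r + "'") in c for s in zero)}
            for r in R:
                atoms.add((r + "'", '=', "r0'") if r in zero_next
                          else ("r0'", '<', r + "'"))
            out.append(frozenset(atoms))
            zero = zero_next
        return out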
Final proof of Lemma 28. We lift the assumption about constraint sequences having a special register always holding zero. Using c0nv, we automatically translate a given 0-consistent constraint-sequence prefix C_0 ... C_m over R into C̃_0 ... C̃_m over R ∪ {r_0} that contains a register r_0 always holding 0. Now we can apply the data-assignment function as described before. By the definition of c0nv, the original constraints satisfy C_i ⊆ C̃_i for every i ≥ 0, so the resulting valuations satisfy the original constraints as well. This concludes the proof of Lemma 28.
Conclusion
Our main result states that one-sided Church games for specifications given as deterministic register automata over (N, ≤) are decidable, in ExpTime. Moreover, we show that those games are determined, and that strategies implemented by transducers with registers suffice to win. The decidability result involves a characterisation of satisfiable infinite constraint sequences over (N, ≤): they must have no decreasing two-way chains of infinite depth and no ceiled (bounded from above) chains of unbounded depth.

A similar characterisation can be established for (Z, ≤). For instance, it should require that the two-way chains which are both upper- and lower-bounded have bounded depth. Then the decidability of one-sided Church synthesis for (Z, ≤) can be established in a similar way to (N, ≤). The decidability for (Z, ≤) can also be proven by reducing to the problem for (N, ≤), as follows. From a specification S, given as a set of words d_1 σ_1 d_2 σ_2 ... alternating between a value d_i ∈ Z and a letter σ_i from a finite alphabet Σ, we construct a specification S′ of words of the form max(0, d_1) # max(0, −d_1) σ_1 max(0, d_2) # max(0, −d_2) σ_2 ⋯ ∈ (N(Σ ∪ {#}))^ω, where # acts as a waiting symbol. Non-zero values given by Adam at positions 4n + 1 correspond to positive values, and non-zero values at positions 4n + 3 to negative values. If S is given as a DRA, a DRA can be constructed to recognize S′, and this preserves the existence of solutions to synthesis.

An interesting future direction is to establish a general reduction between data domains such that decidability results for one-sided Church synthesis transfer from one domain to the other. A candidate notion for such a reduction was defined in the context of register-bounded transducer synthesis [26].
An important future direction is to consider logical formalisms instead of automata, to describe specifications in a more declarative and high-level manner. Data-word first-order logics [7, 46] have been studied with respect to the satisfiability problem, but when they are used as specification languages for synthesis, only a few results are known. The first steps in this direction were done in [30, 4] for Constraint LTL on (Z, ≤); see also [22] for an overview of nonemptiness of constraint tree automata, and [3] for the slightly different context of parameterized synthesis.
Figure 6: Proving the direction ¬A2 ⇒ ¬A1 in Lemma 23. The two-way chain is in grey, the constructed one-way chain is in blue.
Figure 7: Proving the direction ¬B2 ⇒ ¬B1 in Lemma 23. (b) Clique: shown are the edges for the top 5 points only.
Figure 8: Max-automaton recognising {a^{n_1} b a^{n_2} b ... | ∃b ∈ N ∀i: n_i ≤ b}. It uses a single counter c; the acceptance condition is "counter c is bounded", and the parity acceptance is trivial (always accept). The operation max is not used.
Figure 9: Example of levels: start levels are {r_1, r_2} and {r_3}; end levels are {r′_3}, {r′_2}, and {r′_1}. The start level {r_1, r_2} morphs into the end level {r′_3}, the start level {r_3} disappears, and two new end levels appear, {r′_1} and {r′_2}. The constraint is {r_1 = r_2 = r′_3 > r′_2 > r_3 > r′_1}.
Figure 10: Proving the invariant for x′ > y′.
If at moment m there are registers different from x and y, we add the step that makes them equal to x (or to y): this does not affect the depth of xy-connecting chains at moments m and m+1; also, the maximal depths of r2w chains defined at moments {0. m} and {0, ..., m + 1} stay the same. Therefore, below we assume that at moment m every register is equal to x or to yIf at moment m there are registers different from x and y, we add the step that makes them equal to x (or to y): this does not affect the depth of xy-connecting chains at moments m and m+1; also, the maximal depths of r2w chains defined at moments {0, ..., m} and {0, ..., m + 1} stay the same. Therefore, below we assume that at moment m every register is equal to x or to y.
If b - d_xy = 0, we are done: ν_m(x) - ν_m(y) < 2^{b-d_xy} gives ν_m(x) ≤ ν_m(y), but C_{m-1} requires ν_m(x) > ν_m(y); the future constraints then simply keep the registers constant. Otherwise, when b - d_xy > 0, we proceed as follows.
To ensure consistency of constraints, C_m contains all atoms over R that are implied by atoms over R of C_{m-1}.
C_m places a register z between x and y: x > z > y. This gives d_xz = d_zy = d_xy + 1 ≤ b, where d_xy is the largest depth of connecting chains for xy^(m), d_xz for xz^(m+1), and d_zy for zy^(m+1). Since ν_{m+1}(x) - ν_{m+1}(y) < 2^{b-d_xy} = 2^{b-d_xz} + 2^{b-d_zy}, either ν_{m+1}(x) - ν_{m+1}(z) < 2^{b-d_xz} or ν_{m+1}(z) - ν_{m+1}(y) < 2^{b-d_zy}; this is the key observation. If the first case holds, we have the original setting ν_{m+1}(x) - ν_{m+1}(z) < 2^{b-d_xz}, but at moment m+1 and with registers x and z; in the second case, with registers z and y. Hence we repeat the whole procedure, again and again, until reaching the depth b, which gives the sought conclusion in item (2).
Finally, it is easy to prove that the whole constraint sequence C_0 C_1 … is 0-satisfiable, e.g. by showing that it satisfies the conditions of Lemma 12. Moreover, it is 0-consistent, and all r2w chains of C_0 C_1 … are depth-bounded by b because: (a) in the initial moment m, all r2w chains are depth-bounded by b; and (b) the procedure deepens only xy-connecting chains, and only until the depth b, whereas the other r2w chains existing at moments {0, ..., m} (or at moments {0, ..., m+1}, if we executed item 1) keep their depths unchanged.
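Numerically, the repeated insertion of a middle register is a gap-halving countdown (our sketch, with hypothetical names, of the argument above): starting from ν(x) - ν(y) < 2^{b-d}, each inserted z deepens the chain by one while at least one of the two new gaps is below the halved bound, so within b - d steps a value strictly between two consecutive naturals is demanded, which is impossible.

```python
def force_contradiction(x, y, b, d):
    """Given naturals x > y with x - y < 2^(b-d), insert middle values;
    each step deepens the connecting chain and keeps the side whose gap
    violates the tightened bound 2^(b-d-1). Fails before depth b."""
    assert x > y and x - y < 2 ** (b - d)
    while d < b:
        if x - y < 2:      # no natural number strictly between x and y
            return f"stuck at depth {d}: no z with {x} > z > {y}"
        z = (x + y) // 2   # any strictly-between value would do
        d += 1
        # at least one of the two gaps is < 2^(b-d); keep that side
        x, y = (x, z) if x - z < 2 ** (b - d) else (z, y)
    return "reached depth b without contradiction (unreachable for valid inputs)"

print(force_contradiction(x=5, y=0, b=4, d=1))  # 5 - 0 < 2^3 = 8
```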
Tightness by Lemma 29 tells us that if a data-assignment function exists, it should separate the register values by at least 2^{b-d_xy}. Such separation is also sufficient, as we show below. We first describe a data-assignment function, then prove an invariant about it, and finally conclude with the proof of Lemma 28. For simplicity, we assume that the constraints contain a register that never changes and always holds 0; that is not true in general, so later we will lift this assumption.
Data-assignment function. The function f : (C_{|R} ∪ C_+) → N^R is constructed inductively on the length of C_0 … C_{m-1} as follows. Initially, f(C_{0|R}) = ν_0, where ν_0(r) = 0 for all r ∈ R (since C_0 has r = s for all r, s ∈ R). Suppose that at moment m the register valuation is ν_m = f(C_{0|R} C_0 … C_{m-1}).
Let C_m be the next constraint; then ν_{m+1} = f(C_{0|R} C_0 … C_m) is defined as follows.

D1. If a register x at moment m+1 lays above all registers at moment m, i.e. (x′ > r) ∈ C_m for every register r, then set ν_{m+1}(x) = ν_m(r) + 2^b, where r is one of the registers laying on the largest level at moment m.

D2. If a register x at moment m+1 lays on a new level strictly between the levels of two adjacent registers a and b at moment m, then set ν_{m+1}(x) = (ν_m(a) + ν_m(b))/2.

D3. If a register x at moment m+1 lays on a level that also exists at moment m, then set ν_{m+1}(x) = ν_m(a) for a register a laying on that level at moment m.

We now verify that for every atom (r ∼ s) or (r ∼ s′) of C_m, where r, s ∈ R and ∼ ∈ {<, >, =}, the expressions ν_m(r) ∼ ν_m(s) or ν_m(r) ∼ ν_{m+1}(s) hold, respectively. Depending on r ∼ s, there are the following cases.
• If C_m contains (r = s) or (r = s′) for r, s ∈ R, then item D3 implies resp. ν_m(r) = ν_m(s) or ν_m(r) = ν_{m+1}(s).

• Let (r > s′) ∈ C_m, where s′ lays on a level also present at moment m, i.e. there is a register t such that (t = s′) ∈ C_m. Since ν_m(t) = ν_{m+1}(s) by item D3, and since ν_m(r) > ν_m(t) by (r > t = s′) ∈ C_m, we get ν_m(r) > ν_{m+1}(s). Similarly for the case (r < s′) ∈ C_m where s′ lays on a level also present at moment m.
• Let (r < s′) ∈ C_m and s′ lays on the highest level among all levels at moments m and m+1. Then ν_m(r) < ν_{m+1}(s) because ν_{m+1}(s) ≥ ν_m(r) + 2^b by item D1.
• Finally, there are two cases left: (r > s′) ∈ C_m or (r < s′) ∈ C_m, where s′ lays on a newly created level at moment m+1 and there are higher levels at moment m. This corresponds to item D2. Let (a > b) ∈ C_m be two adjacent registers at moment m between which the register s is inserted at moment m+1, so (a > s′ > b) ∈ C_m. Let d_ab be the maximal depth of ab^(m)-connecting chains, and fix one such chain. We change it by going through s at moment m+1, i.e. we substitute the part (a, m) > (b, m) by (a, m) > (s, m+1) > (b, m): the depth of the resulting chain is d_ab + 1 and it is ≤ b …

References

[1] Parosh Aziz Abdulla, Mohamed Faouzi Atig, Piotr Hofman, Richard Mayr, K. Narayan Kumar, and Patrick Totzke. Infinite-state energy games. In Joint Meeting of the Twenty-Third EACSL Annual Conference on Computer Science Logic (CSL) and the Twenty-Ninth Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), CSL-LICS '14, Vienna, Austria, July 14-18, 2014, pages 7:1-7:10, 2014.
[2] Parosh Aziz Abdulla, Ahmed Bouajjani, and Julien d'Orso. Deciding monotonic games. In International Workshop on Computer Science Logic, pages 1-14. Springer, 2003.

[3] Béatrice Bérard, Benedikt Bollig, Mathieu Lehaut, and Nathalie Sznajder. Parameterized synthesis for fragments of first-order logic over data words. In FOSSACS, volume 12077 of Lecture Notes in Computer Science, pages 97-118. Springer, 2020.

[4] Ashwin Bhaskar and M. Praveen. Realizability problem for constraint LTL. arXiv preprint arXiv:2207.06708, 2022.

[5] Roderick Bloem, Krishnendu Chatterjee, and Barbara Jobstmann. Graph games and reactive synthesis. In Edmund M. Clarke, Thomas A. Henzinger, Helmut Veith, and Roderick Bloem, editors, Handbook of Model Checking, pages 921-962. Springer, 2018.

[6] M. Bojańczyk and T. Colcombet. Bounds in ω-regularity. In Proc. 21st IEEE Symp. on Logic in Computer Science, pages 285-296, 2006.

[7] M. Bojańczyk, A. Muscholl, T. Schwentick, L. Segoufin, and C. David. Two-variable logic on words with data. In Proc. 21st IEEE Symp. on Logic in Computer Science, pages 7-16, 2006.

[8] Mikołaj Bojańczyk. Weak MSO with the unbounding quantifier. Theory of Computing Systems, 48(3):554-576, 2011.

[9] Mikołaj Bojańczyk. Weak MSO+U with path quantifiers over infinite trees. In Automata, Languages, and Programming - 41st International Colloquium, ICALP 2014, Copenhagen, Denmark, July 8-11, 2014, Proceedings, Part II, pages 38-49, 2014.

[10] A. Bouajjani, P. Habermehl, Y. Jurski, and M. Sighireanu. Rewriting systems with data. In FCT, pages 1-22, 2007.

[11] A. Bouajjani, P. Habermehl, and R. Mayr. Automatic verification of recursive procedures with one integer parameter. Theoretical Computer Science, 295:85-106, 2003.

[12] A.-J. Bouquet, O. Serre, and I. Walukiewicz. Pushdown games with unboundedness and regular conditions. In Proc. 23rd Conf. on Foundations of Software Technology and Theoretical Computer Science, volume 2914 of Lecture Notes in Computer Science, pages 88-99. Springer, 2003.

[13] Véronique Bruyère. Synthesis of equilibria in infinite-duration games on graphs. ACM SIGLOG News, 8(2):4-29, 2021.

[14] J. R. Büchi and L. H. Landweber. Solving sequential conditions by finite-state strategies. Trans. AMS, 138:295-311, 1969.

[15] T. Cachat. Two-way tree automata solving pushdown games. In E. Grädel, W. Thomas, and T. Wilke, editors, Automata, Logics, and Infinite Games, volume 2500 of Lecture Notes in Computer Science, chapter 17, pages 303-317. Springer, 2002.

[16] C. S. Calude, S. Jain, B. Khoussainov, W. Li, and F. Stephan. Deciding parity games in quasipolynomial time. In Proc. 49th ACM Symp. on Theory of Computing, pages 252-263, 2017.

[17] Claudia Carapelle, Alexander Kartzow, and Markus Lohrey. Satisfiability of CTL* with constraints. In Pedro R. D'Argenio and Hernán Melgratti, editors, CONCUR 2013 - Concurrency Theory, pages 455-469, Berlin, Heidelberg, 2013. Springer Berlin Heidelberg.

[18] S. Ceri, P. Fraternali, A. Bongio, M. Brambilla, S. Comai, and M. Matera. Designing Data-Intensive Web Applications. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2002.

[19] G. Delzanno, A. Sangnier, and R. Traverso. Parameterized verification of broadcast networks of register automata. In P. A. Abdulla and I. Potapov, editors, Reachability Problems, pages 109-121, Berlin, Heidelberg, 2013. Springer.

[20] S. Demri and R. Lazic. LTL with the freeze quantifier and register automata. ACM Trans. Comput. Log., 10(3):16:1-16:30, 2009.

[21] Stéphane Demri and Deepak D'Souza. An automata-theoretic approach to constraint LTL. Information and Computation, 205(3):380-415, 2007.

[22] Stéphane Demri and Karin Quaas. Constraint automata on infinite data trees: from CTL(Z)/CTL*(Z) to decision procedures. arXiv preprint arXiv:2302.05327, 2023.

[23] R. Ehlers, S. Seshia, and H. Kress-Gazit. Synthesis with identifiers. In Proc. 15th Int. Conf. on Verification, Model Checking, and Abstract Interpretation, volume 8318 of Lecture Notes in Computer Science, pages 415-433. Springer, 2014.

[24] Léo Exibard. Automatic Synthesis of Systems with Data. PhD thesis, Aix-Marseille Université (AMU).

[25] Léo Exibard, Emmanuel Filiot, and Ayrat Khalimov. Church synthesis on register automata over linearly ordered data domains. In Markus Bläser and Benjamin Monmege, editors, 38th International Symposium on Theoretical Aspects of Computer Science, STACS 2021, March 16-19, 2021, Saarbrücken, Germany (Virtual Conference), volume 187 of LIPIcs, pages 28:1-28:16. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2021.

[26] Léo Exibard, Emmanuel Filiot, and Ayrat Khalimov. A generic solution to register-bounded synthesis with an application to discrete orders. In Mikolaj Bojanczyk, Emanuela Merelli, and David P. Woodruff, editors, 49th International Colloquium on Automata, Languages, and Programming, ICALP 2022, July 4-8, 2022, Paris, France, volume 229 of LIPIcs, pages 122:1-122:19. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2022.

[27] Léo Exibard, Emmanuel Filiot, and Pierre-Alain Reynier. Synthesis of data word transducers. Log. Methods Comput. Sci., 17(1), 2021.

[28] Rachel Faran and Orna Kupferman. On synthesis of specifications with arithmetic. In Alexander Chatzigeorgiou, Riccardo Dondi, Herodotos Herodotou, Christos Kapoutsis, Yannis Manolopoulos, George A. Papadopoulos, and Florian Sikora, editors, SOFSEM 2020: Theory and Practice of Computer Science, pages 161-173, Cham, 2020. Springer International Publishing.

[29] Azadeh Farzan and Zachary Kincaid. Strategy synthesis for linear arithmetic games. Proceedings of the ACM on Programming Languages, 2(POPL):1-30, 2017.

[30] Diego Figueira, Anirban Majumdar, and M. Praveen. Playing with repetitions in data words using energy games. Log. Methods Comput. Sci., 16(3), 2020.

[31] B. Finkbeiner, F. Klein, R. Piskac, and M. Santolucito. Temporal stream logic: synthesis beyond the bools. In Proc. 31st Int. Conf. on Computer Aided Verification, 2019.

[32] Stefan Göller, Richard Mayr, and Anthony Widjaja To. On the computational complexity of verifying one-counter processes. In Proceedings of the 24th Annual IEEE Symposium on Logic in Computer Science, LICS 2009, 11-14 August 2009, Los Angeles, CA, USA, pages 235-244, 2009.

[33] E. Grädel, W. Thomas, and T. Wilke. Automata, Logics, and Infinite Games: A Guide to Current Research, volume 2500 of Lecture Notes in Computer Science. Springer, 2002.

[34] Y. Gurevich and L. Harrington. Trees, automata, and games. In Proc. 14th ACM Symp. on Theory of Computing, pages 60-65. ACM Press, 1982.

[35] R. Hojati, D. L. Dill, and R. K. Brayton. Verifying linear temporal properties of data insensitive controllers using finite instantiations. In Hardware Description Languages and their Applications, pages 60-73. Springer, 1997.

[36] M. Kaminski and N. Francez. Finite-memory automata. Theoretical Computer Science, 134(2):329-363, 1994.

[37] A. Khalimov, B. Maderbacher, and R. Bloem. Bounded synthesis of register transducers. In 16th Int. Symp. on Automated Technology for Verification and Analysis, volume 11138 of Lecture Notes in Computer Science, pages 494-510. Springer, 2018.

[38] Ayrat Khalimov and Orna Kupferman. Register-bounded synthesis. In Wan Fokkink and Rob van Glabbeek, editors, 30th International Conference on Concurrency Theory, CONCUR 2019, August 27-30, 2019, Amsterdam, the Netherlands, volume 140 of LIPIcs, pages 25:1-25:16. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2019.

[39] Bartek Klin and Mateusz Lełyk. Scalar and vectorial mu-calculus with atoms. Logical Methods in Computer Science, 15(4), 2019.

[40] Paul Krogmeier, Umang Mathur, Adithya Murali, P. Madhusudan, and Mahesh Viswanathan. Decidable synthesis of programs with uninterpreted functions. In Shuvendu K. Lahiri and Chao Wang, editors, Computer Aided Verification, pages 634-657, Cham, 2020. Springer International Publishing.

[41] R. Lazić and D. Nowak. A unifying approach to data-independence. In Proc. 11th Int. Conf. on Concurrency Theory, pages 581-596. Springer Berlin Heidelberg, 2000.

[42] M. L. Minsky. Computation: Finite and Infinite Machines. Prentice Hall, 1st edition, 1967.

[43] A. Pnueli and R. Rosner. On the synthesis of a reactive module. In Proc. 16th ACM Symp. on Principles of Programming Languages, pages 179-190, 1989.

[44] M. O. Rabin. Automata on Infinite Objects and Church's Problem. Amer. Mathematical Society, 1972.

[45] Frank Plumpton Ramsey. On a problem of formal logic. Proceedings of the London Mathematical Society, 30(1):264-286, 1930.

[46] Thomas Schwentick and Thomas Zeume. Two-variable logic with two order relations. Log. Methods Comput. Sci., 8(1), 2012.

[47] Luc Segoufin and Szymon Toruńczyk. Automata-based verification over linearly ordered data domains. In 28th International Symposium on Theoretical Aspects of Computer Science (STACS 2011). Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2011.

[48] Olivier Serre. Parity games played on transition graphs of one-counter processes. In Foundations of Software Science and Computation Structures, 9th International Conference, FOSSACS 2006, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2006, Vienna, Austria, March 25-31, 2006, Proceedings, pages 337-351, 2006.

[49] Syntcomp@CAV. The reactive synthesis competition. http://www.syntcomp.org, 2014.

[50] Wolfgang Thomas. Facets of synthesis: revisiting Church's problem. In Luca de Alfaro, editor, Foundations of Software Science and Computational Structures, 12th International Conference, FOSSACS 2009, Held as Part of the Joint European Conferences on Theory and Practice of Software, ETAPS 2009, York, UK, March 22-29, 2009, Proceedings, volume 5504 of Lecture Notes in Computer Science, pages 1-14. Springer, 2009.

[51] V. Vianu. Automatic verification of database-driven systems: a new frontier. In ICDT '09, pages 1-13, 2009.

[52] I. Walukiewicz. Model checking CTL properties of pushdown systems. In Proc. 20th Conf. on Foundations of Software Technology and Theoretical Computer Science, volume 1974 of Lecture Notes in Computer Science, pages 127-138. Springer, 2000.

[53] P. Wolper. Expressing interesting properties of programs in propositional temporal logic. In Proc. 13th ACM Symp. on Principles of Programming Languages, pages 184-192, 1986.
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China",
"FAST Collabration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences\n100101BeijingPeople's Republic of China"
] | [] | We describe PSRJ1926−0652, a pulsar recently discovered with the Five-hundred-meter Aperture Spherical radio Telescope (FAST). Using sensitive single-pulse detections from FAST and long-term timing observations from the Parkes 64 m radio telescope, we probed phenomena on both long and short timescales. The FAST observations covered a wide frequency range from 270 to 800 MHz, enabling individual pulses to be studied in detail. The pulsar exhibits at least four profile components, short-term nulling lasting from 4 to 450 pulses, complex subpulse drifting behaviors and intermittency on scales of tens of minutes. While the average band spacing P 3 is relatively constant across different bursts and components, significant variations in the separation of adjacent bands are seen, especially near the beginning and end of a burst. Band shapes and slopes are quite variable, especially for the trailing components and for the shorter bursts. We show that for each burst the last detectable pulse prior to emission ceasing has different properties compared to other pulses. These complexities pose challenges for the classic carousel-type models. | 10.3847/1538-4357/ab1849 | null | 119,441,910 | 1904.05482 | 3354dc6a148740c25c1b556de70c69a54bb8f884 |
PSR J1926-0652: A Pulsar with Interesting Emission Properties Discovered at FAST
Lei Zhang [email protected]
National Astronomical Observatories
Chinese Academy of Sciences
A20 Datun Road100101Chaoyang District, BeijingPeople's Republic of China
University of Chinese Academy of Sciences
100049BeijingPeople's Republic of China
CSIRO Astronomy and Space Science
PO Box 761710EppingNSWAustralia
Di Li [email protected]
National Astronomical Observatories
Chinese Academy of Sciences
A20 Datun Road100101Chaoyang District, BeijingPeople's Republic of China
University of Chinese Academy of Sciences
100049BeijingPeople's Republic of China
FAST Collaboration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences
100101BeijingPeople's Republic of China
George Hobbs [email protected]
National Astronomical Observatories
Chinese Academy of Sciences
A20 Datun Road100101Chaoyang District, BeijingPeople's Republic of China
CSIRO Astronomy and Space Science
PO Box 761710EppingNSWAustralia
Crispin H Agar
Jodrell Bank Centre for Astrophysics
School of Physics and Astronomy
University of Manchester
M13 9PLManchesterUK
Richard N Manchester
CSIRO Astronomy and Space Science
PO Box 761710EppingNSWAustralia
Patrick Weltevrede
Jodrell Bank Centre for Astrophysics
School of Physics and Astronomy
University of Manchester
M13 9PLManchesterUK
William A Coles
ECE Department
University of California at San Diego
La JollaCAUSA
Pei Wang
National Astronomical Observatories
Chinese Academy of Sciences
A20 Datun Road100101Chaoyang District, BeijingPeople's Republic of China
Weiwei Zhu
National Astronomical Observatories
Chinese Academy of Sciences
A20 Datun Road100101Chaoyang District, BeijingPeople's Republic of China
Zhigang Wen
Xinjiang Astronomical Observatory
150, Science-1 Street830011Urumqi, XinjiangPeople's Republic of China
Jianping Yuan
Xinjiang Astronomical Observatory
150, Science-1 Street830011Urumqi, XinjiangPeople's Republic of China
Andrew D Cameron
National Astronomical Observatories
Chinese Academy of Sciences
A20 Datun Road100101Chaoyang District, BeijingPeople's Republic of China
CSIRO Astronomy and Space Science
PO Box 761710EppingNSWAustralia
Shi Dai
National Astronomical Observatories
Chinese Academy of Sciences
A20 Datun Road100101Chaoyang District, BeijingPeople's Republic of China
CSIRO Astronomy and Space Science
PO Box 761710EppingNSWAustralia
Kuo Liu
National Astronomical Observatories
Chinese Academy of Sciences
A20 Datun Road100101Chaoyang District, BeijingPeople's Republic of China
Max-Planck-Institut für Radioastronomie
Auf dem Hügel 69D-53121BonnGermany
Qijun Zhi
Guizhou Provincial Key Laboratory of Radio Astronomy and Data Processing
Guizhou Normal University
550001GuiyangPeople's Republic of China
School of Physics and Electronic Science
Guizhou Normal University
550001GuiyangPeople's Republic of China
Chenchen Miao
National Astronomical Observatories
Chinese Academy of Sciences
A20 Datun Road100101Chaoyang District, BeijingPeople's Republic of China
University of Chinese Academy of Sciences
100049BeijingPeople's Republic of China
Mao Yuan
National Astronomical Observatories
Chinese Academy of Sciences
A20 Datun Road100101Chaoyang District, BeijingPeople's Republic of China
University of Chinese Academy of Sciences
100049BeijingPeople's Republic of China
Shuyun Cao, Li Feng, Hengqian Gan, Long Gao, Xuedong Gu, Minglei Guo, Qiaoli Hao, Lin Huang, Peng Jiang, Chengjin Jin, Hui Li, Qi Li, Qisheng Li, Hongfei Liu, Gaofeng Pan, Zhichen Pan, Bo Peng, Hui Qian, Lei Qian, Xiangwei Shi, Jinyou Song, Liqiang Song, Caihong Sun, Jinghai Sun, Hong Wang, Qiming Wang, Yi Wang, Xiaoyao Xie, Jun Yan, Li Yang, Shimo Yang, Rui Yao, Dongjun Yu, Jinglong Yu, Youling Yue, Chengmin Zhang, Haiyan Zhang, Shuxin Zhang, Xiaonian Zheng, Aiying Zhou, Boqin Zhu, Lichun Zhu, Ming Zhu, Wenbai Zhu, Yan Zhu
FAST Collaboration, CAS Key Laboratory of FAST, NAOC, Chinese Academy of Sciences
100101 Beijing, People's Republic of China
PSR J1926-0652: A Pulsar with Interesting Emission Properties Discovered at FAST
10.3847/1538-4357/ab1849. Received 2019 February 1; revised 2019 April 9; accepted 2019 April 9; published 2019 May 24. Keywords: pulsars: individual (PSR J1926−0652)
We describe PSR J1926−0652, a pulsar recently discovered with the Five-hundred-meter Aperture Spherical radio Telescope (FAST). Using sensitive single-pulse detections from FAST and long-term timing observations from the Parkes 64 m radio telescope, we probed phenomena on both long and short timescales. The FAST observations covered a wide frequency range from 270 to 800 MHz, enabling individual pulses to be studied in detail. The pulsar exhibits at least four profile components, short-term nulling lasting from 4 to 450 pulses, complex subpulse drifting behaviors and intermittency on scales of tens of minutes. While the average band spacing P_3 is relatively constant across different bursts and components, significant variations in the separation of adjacent bands are seen, especially near the beginning and end of a burst. Band shapes and slopes are quite variable, especially for the trailing components and for the shorter bursts. We show that for each burst the last detectable pulse prior to emission ceasing has different properties compared to other pulses. These complexities pose challenges for the classic carousel-type models.
Introduction
This work made use of the data from the FAST telescope (Five-hundred-meter Aperture Spherical radio Telescope). FAST is a Chinese national mega-science facility built and operated by the National Astronomical Observatories, Chinese Academy of Sciences. Located in southern China, it is the world's largest single-dish radio telescope. Between 2017 August and 2018 February, FAST carried out drift-scan observations using an ultra-wide-bandwidth receiver (UWB) to search for radio pulsars. Before the removal of this UWB in 2018 May (it was installed primarily for commissioning projects), 60 pulsar candidates had been obtained, with 44 of these already confirmed either by further FAST observations or using the Parkes 64 m or Effelsberg 100 m radio telescopes. The survey strategy and the full collection of new discoveries will be published elsewhere, with a brief summary currently available on the Commensal Radio Astronomy FAST survey website and in Qian et al. (2019).
In this paper, we report the discovery of PSR J1926−0652, which has an ∼1.6 s pulse period. The pulsar was discovered using a single-pulse search pipeline (Zhu et al. 2014) that was applied to observations taken in 2017 August. The pulsar was independently confirmed using the Parkes radio telescope in 2017 October. We have continued observations with both FAST and Parkes and, as described in this paper, show that PSR J1926−0652 exhibits a wide range of emission phenomena.
Most pulsars have been discovered through the detection of their regular pulsed signal. However, soon after the discovery of pulsars it was found that the pulsed emission is seldom completely stable. For example, individual pulses may be significantly stronger than their mean. This led to the development of searches for pulsars through the detection of individual bursts (e.g., McLaughlin et al. 2006) and to the single-pulse search pipelines as used for our discovery.
With sufficiently sensitive telescopes, many pulsars can be shown to have mean pulse profiles formed from subpulses that drift in pulse phase in successive pulses. This phenomenon is known as subpulse drifting and was first reported by Drake & Craft (1968). Subpulse drifting is often described with a "carousel model" in which a ring of source regions systematically rotates about the magnetic axis (Ruderman & Sutherland 1975). We now know that at least one-third of pulsars exhibit subpulse drifting (Weltevrede et al. 2006a). Various algorithms have been developed to quantify subpulse drifting. For instance, Edwards & Stappers (2003) show how a two-dimensional fluctuation spectrum (2DFS) can be used to determine the period of the subpulses both in pulse phase (known as P_2) and in pulse number (P_3, measured in time). They noted that observational results indicated that P_2 does not change as a function of observing epoch and P_3 does not change with pulse longitude. The drift rate, or slope of a subpulse band, is conventionally defined as Δφ = P_2/P_3.
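As a concrete illustration of these conventions, using the values derived for the leading component of Burst 3 in Section 3 below (a consistency check, not a new measurement):

```latex
% Drift rate (band slope) from the measured subpulse periodicities,
% using the Section 3 values for the leading component of Burst 3:
%   P_2 \approx -23^\circ, \qquad P_3 \approx 17.33\,P
\Delta\phi \;=\; \frac{P_2}{P_3}
           \;\approx\; \frac{-23^\circ}{17.33\,P}
           \;\approx\; -1.3^\circ\!/P
```

in agreement with the ≈ −1°.35/P Fourier phase-drift rate quoted for component C1 in Section 3.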
Several pulsars that exhibit complex patterns in their drifting subbands are now known. For instance, Qiao et al. (2004) describe a complex model for the "bi-drifting" phenomenon seen in PSR J0815+09. The observing-frequency dependence of subpulse drifting has also been studied. For example, Taylor et al. (1975) and Wolszczan et al. (1981) showed that, for PSR B0031−07 and PSR B0809+74, P_2 varied with frequency approximately as ν^{−0.25}, similar to the dependence expected for radius-to-frequency mapping in polar-cap models of the pulsar emission (Cordes 1978).
The pulsed emission from some pulsars has also been observed to switch off suddenly. This phenomenon, known as "nulling", was first reported by Backer (1970). Nulling is relatively common, particularly in long-period pulsars (Rankin 1986). For instance, 43 out of 72 well-observed pulsars were found by Biggs (1992) to exhibit evidence for nulling. The duration of null events varies widely. In some cases, one or a few pulses may be missing, whereas in other cases the emission may be undetectable for hours, days, or in extreme cases, months and years. The "null fraction" (NF), the fraction of time that the pulsar is in a null state, can range from close to zero (e.g., PSR B1737+13; Biggs 1992) to more than 90% (Wang et al. 2007). Pulsars that switch off for very long periods (on scales of hours to years) are often termed "intermittent pulsars." Kramer et al. (2006) studied one such pulsar, PSR B1931+24, and showed that the pulsar's slow-down rate is reduced when the pulsar is in its null state. This was explained as a change in magnetospheric currents. Some pulsars are also known to switch between multiple discrete profile states. This is known as "mode changing." Wang et al. (2007) and Lyne et al. (2010) suggested that mode changing and nulling are related phenomena and differ only in the magnitude of the changes in the magnetospheric current flows. A few papers have described studies that explore how the nulling and drifting phenomena may be linked. For instance, Gajjar et al. (2017) studied PSRs J1741−0840 and J1840−0840. They reported that PSR J1840−0840 tended (though not always) to start nulling after the end of a drift band. When PSR J1840−0840 then switched back on, it typically started at the beginning of a new drift band in both of its profile components.
Long-term monitoring of a pulsar provides information on the spin-down of the pulsar and on its long-timescale intermittent behavior. Such monitoring also provides high signal-to-noise ratio (S/N), polarization-calibrated, average pulse profiles that can be used to determine the emission geometry of the system. The single-pulse observations provide information on the nulling and drifting phenomena. Together, these results (for instance, as in Rankin & Wright 2008) can be used to search for the elusive physical model that will link these emission phenomena.
The pulsar that we describe in this paper, PSR J1926−0652, has multiple pulse profile components and exhibits both subpulse drifting and nulling on various timescales. The paper is organized as follows. In Section 2, we describe our observations of PSR J1926−0652. In Section 3, we present our analysis of the individual pulses. In Section 4, we describe the long-term behavior and the timing solution, and analyze the polarization and flux-density properties of the average pulse profile. We discuss our results in Section 5 and conclude the paper in Section 6.
Observations
We have carried out observations of PSR J1926−0652 with both the FAST and Parkes radio telescopes. FAST, which is still being commissioned, has a large collecting area allowing us to observe single pulses from PSR J1926−0652. The Parkes telescope is not sensitive enough to detect single pulses from this pulsar, but can be accurately calibrated and has been used to measure the polarization properties of the pulsar, as well as to carry out long-term monitoring.
Observation of Single Pulses
We observed PSR J1926−0652 for ∼50 minutes using FAST on 2017 November 28 (MJD 58085) using a wide-bandwidth receiver covering from 270 MHz to 1.6 GHz. For most of the early FAST commissioning data, including the observation presented here, only one of the two linear polarization signal paths was reliable and the pulsar was only detectable in the low-frequency band (it is currently unclear whether this was because the pointing position was inaccurate or whether the telescope efficiency was low in the higher band during these observations). We therefore only make use of the low-frequency (270-800 MHz) band with the single available polarization channel. The lack of complete polarization information limits the scope of our single-pulse analysis. We have therefore focused our analysis on the detectable variations in flux density and in pulse phase. During the observation we recorded a total of 1921 single pulses with a time resolution of 100 μs. We subsequently extracted individual pulses, with 512 phase bins per pulse period, using the DSPSR program (van Straten & Bailes 2011); note that this requires the use of the -K option.
Monitoring Observations
The Parkes telescope continues to be used for regular timing observations of PSR J1926−0652 in the 20 cm (1400 MHz) observing band with the central beam of the 13-beam receiver (Staveley-Smith et al. 1996). We have obtained 35 observations of this pulsar between 2017 October 8 (MJD 58034) and 2018 September 26 (MJD 58387). Integration times are typically 1 hr and the observations are divided into 30 s time segments (known as "subintegrations"). The bandwidth used was 256 MHz, which was divided into 1024 frequency channels, and 1024 phase bins were formed across the profile using the Parkes Digital Filterbank Mark 4. In order to obtain high-quality flux density and polarization calibration solutions, each observation was preceded by observation of a switched calibration noise source.
We processed the data using the PSRCHIVE software suite (Hotan et al. 2004). Aliased signals and narrowband radio frequency interference (RFI) were removed by giving zero weight to channels within 5% of the band edge and those with a level substantially above a median-smoothed bandpass. PSR J1926−0652 was clearly detected in 29 observations and was undetected on the other six occasions. Since there is no perceptible worsening of RFI conditions in those six epochs, the nondetections are probably due to nulling. The lengths of the observations with nondetections were 11.5 minutes, 9 minutes, 17.5 minutes, 64 minutes, 55.5 minutes, and 72 minutes. To convert the measured intensity from the Parkes telescope observations to absolute flux density, we made use of observations of the radio galaxy 3C 218 (Hydra A; Baars et al. 1977, see also Xie et al. 2019) that are taken every few weeks to support the Parkes Pulsar Timing Array project (Manchester et al. 2013). This allowed us to determine the effective flux density of the calibration noise source and consequently an absolute flux scale.
These long-term observations allowed us to model the rotation of the pulsar (see Section 4.1), to produce a high S/N average pulse profile enabling determination of the polarization properties and the flux density of the pulsar (Section 4.2), and to determine the long-term on-off timescale for the pulse emission (Section 4.3).
The raw and processed data sets described in this paper are available online. See Appendix A for details.
The Single-pulse Emission
A "pulse stack" is an array of consecutive pulses with pulse phase on the x axis and increasing pulse number on the y axis. The upper panel in Figure 1 shows the entire pulse stack obtained using the FAST single-pulse data set across the frequency band from 270 to 800 MHz with the single available polarization channel. 14 The average profile is shown in the lower panel. This profile and pulse stack has been obtained after summing in frequency over the entire band (from 270 to 800 MHz).
We have labeled the regions in which the emission is "on" in the pulse stack as Bursts 1 to 6. We show these on-states in more detail in the six panels of Figure 2. The average pulse profile from each of these bursts is shown in the lower section of each panel overlaid on the mean pulse profile for the whole observation. Various emission phenomena are seen in these panels including multiple profile components, subpulse drifting, and nulling.
The average pulse profile consists of two main components (labeled as C1 and C4 in the lower panel of Figure 1). An inspection of Figure 2 shows at least two extra components. A weak component to the right of C1 is seen in several bursts (we label this component C2) and similarly, a weak component to the left of C4 leads to the "bump" in the average profile that we have labeled C3. See Appendix B.1 for details. Between these components there is a bridge region of emission.
Over the wide observed FAST band (270 to 800 MHz) we expect to see pulse shape evolution relating to intrinsic profile changes, emission arising from different positions in the magnetosphere, and interstellar-medium effects. Figure 3 shows mean pulse profiles for three subbands across this observed bandwidth. It is clear that, as the observing frequency increases, the component separation decreases; pulse widths at 50% of the peak amplitude for the low- and high-frequency bands are given in Table 1. The reduction in profile width as a function of frequency is well known in the general pulsar population and is usually attributed to radius-to-frequency mapping (Cordes 1978).
As Figures 1 and 2 show, the observed burst durations cover a wide range. It could be argued that Burst 5 is in fact two bursts, 5a and 5b, separated by a null of four pulse periods. Given this, the burst durations range from 17 pulse periods for Burst 5a to 300 pulse periods for Burst 3. The null durations are also highly variable and range from four to more than 450 pulse periods. The pulsar is in a null state about 75% of the time, but this NF is quite uncertain because of the limited number of bursts observed.
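A minimal sketch of how bursts and nulls can be segmented from per-pulse energies; the on-pulse and off-pulse windows and the threshold are illustrative assumptions, not the exact procedure used here:

```python
# Sketch: classify pulses as "on" or "null" from their on-pulse energies,
# then run-length encode the flags into burst and null durations and
# estimate the null fraction (NF).
import numpy as np

def on_off_segments(stack, on_bins, off_bins, nsigma=5.0):
    energy = stack[:, on_bins].sum(axis=1)   # on-pulse energy per pulse
    noise = stack[:, off_bins].sum(axis=1)   # off-pulse energy per pulse
    on = energy > nsigma * noise.std()       # illustrative threshold
    edges = np.flatnonzero(np.diff(on.astype(int))) + 1
    starts = np.concatenate(([0], edges))
    lengths = np.diff(np.concatenate((starts, [on.size])))
    return [(bool(on[s]), int(l)) for s, l in zip(starts, lengths)]

# Null fraction: the fraction of pulses spent in the off state, e.g.
# segments = on_off_segments(stack, on_bins, off_bins)
# nf = sum(l for state, l in segments if not state) / float(stack.shape[0])
```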
One striking property of the emission is that during the longest burst event, Burst 3, the leading components drift later in phase, with the separation of C1/C2 from C3/C4 decreasing through the burst. This is most easily seen in Figure 1. Similar behavior may be occurring in other bursts, but this is not certain. It appears that the phase of components C1/C2 resets to the same starting value for each burst. Longer data sets are needed to confirm this property and to investigate it in more detail.

Figure 2 shows that the slopes of drift bands vary substantially from band to band within a burst, between bursts, and for the two main components, C1 and C4. To make this quantitative, we have fitted a single Gaussian to the intensity of each subpulse group representing a given drift band in each pulse. We then do a weighted fit of a straight line to the centroid phase of the fitted Gaussians for a given drift band to measure the drift rate or band slope Δφ = P_2/P_3, and its uncertainty. Note that, with this definition, Δφ is zero for a vertical band in a stack plot. Histograms of the band slopes for components C1 and C4 for Burst 3 and for all other bursts combined are given in Figure 4. The fitted centroid points and linear fits to these points are shown in Figure 5 for Burst 3.
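A sketch of this band-slope measurement under the same two-step scheme (Gaussian centroids, then a weighted straight line); the initial-guess values are placeholders:

```python
# Sketch: fit a Gaussian to the subpulse of each pulse in one drift band,
# then fit a weighted straight line to the centroids to get the band slope.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(phi, amp, centroid, width):
    return amp * np.exp(-0.5 * ((phi - centroid) / width) ** 2)

def band_slope(pulse_numbers, profiles, phase_deg, p0=(1.0, 160.0, 5.0)):
    centroids, errors = [], []
    for prof in profiles:                      # one 1D profile per pulse
        popt, pcov = curve_fit(gaussian, phase_deg, prof, p0=p0)
        centroids.append(popt[1])
        errors.append(np.sqrt(pcov[1, 1]))
    w = 1.0 / np.asarray(errors) ** 2          # weights from centroid errors
    n = np.asarray(pulse_numbers, dtype=float)
    c = np.asarray(centroids)
    # weighted least squares: centroid = slope * n + intercept
    A = np.vstack([n, np.ones_like(n)]).T * np.sqrt(w)[:, None]
    slope, intercept = np.linalg.lstsq(A, c * np.sqrt(w), rcond=None)[0]
    return slope   # degrees of pulse phase per pulse period
```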
These histograms confirm that observed band slopes or subpulse drift rates are quite variable, especially for the shorter bursts, and are systematically different for components C1 and C4. Positive drift rates are seen only in the short bursts, specifically Bursts 4, 5a, and 6.
To further investigate the characteristics of the drifting subpulses in PSR J1926−0652, we undertook a Fourier analysis of the longest burst, Burst 3, which also has the most regular drifting. The frequency (or equivalently the P_3 modulation period), phase, and amplitude of a cosine function were fitted to the pulse intensities across the burst for each pulse phase bin. Pulse 770, near the center of the burst and at the boundary of the two panels in Figure 5, was adopted as the reference time, t_0, for the cosine fit. Figure 6 shows the variations of P_3 across the leading and trailing components. The weighted mean values of P_3 (averaged between the vertical dashed lines in Figure 6) are (17.35±0.04) P and (17.31±0.03) P for the leading and trailing components, respectively. The difference between these values is of marginal significance, so we adopt a mean P_3 of (17.33±0.03) P for the whole profile.
Using this mean P_3 value, we fit for the cosine phase at t_0 across the leading and trailing components. For the drift band closest to the reference time t_0, the time of the cosine maximum, t_max, for a given pulse phase bin satisfies:
φ_0 = −(t_max − t_0)/P_3.    (1)
Here φ_0 is the phase of the modulation at t_0, and t_0 and t_max are expressed in units of the pulse period. (Note that for a band drifting toward earlier pulse phases, as in this case, the Fourier (P_3) phase at t_0 is an increasing function of pulse phase. The minus sign in Equation (1) then implies that the slope of the drift bands, and hence P_2, are negative for this pulsar. This Fourier phase convention is opposite to that adopted by Weltevrede (2016), although the convention on the sign of P_2 is the same.) Figure 5 shows the locus of the peak of the cosine function as a function of pulse phase for both the leading and trailing components. The locus of the modulation peak was then replicated for all drift bands in the burst using the same mean value of P_3 for both the leading and trailing components. The rate of Fourier phase drift is fairly stable through the main C1 and C4 components, about −1°.35/P and −1°.95/P, respectively. Given the mean P_3 value, these slopes correspond to P_2 = −23° and −34° respectively, with an uncertainty of about 1°. However, the drift rate is quite nonlinear across each component, appearing to flatten toward the component edges. Furthermore, the modulation phases of the inner Components 2 and 3 do not lie on the extrapolation of the phase variations in the main components. The modulation phases of the main components also differ, with Component 4 reaching its maximum amplitude about 90° in modulation phase (i.e., 0.25 P_3) later than Component 1. This means that for most pulses there is emission in one or both components, although there are pulses with no significant emission. These are not "nulls" in the usual sense, but just a consequence of the periodic modulations in the various components.

Table 1 (excerpt). Profile, polarization, and long-term emission-state parameters.
Profile parameters:
  Mean flux density at 1400 MHz (mJy): 0.9(2)
  50% pulse width at 1400 MHz (°): 45.0(4)
  50% pulse width at 700 MHz (°) (b): 48.6(7)
  50% pulse width at 350 MHz (°) (b): 56.1(7)
Polarization parameters at 1400 MHz:
  Rotation measure (rad m^−2): −55(3)
  Linear polarization fraction (L/I): 30%
  Circular polarization fraction (|V|/I): 1.6%
Long-term emission-state parameters:
  Longest "on" duration (min.): 20
  Mean "on" duration (min.): 5.9
  Standard deviation of "on" duration (min.): 3.6
  Longest "off" duration (min.): ∼93
Notes. Uncertainties in parentheses refer to the last quoted digit. All parameters, apart from those indicated, were obtained from the Parkes observations. (a) Derived from the Yao et al. (2017) model. (b) Derived from the FAST observation.
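A minimal sketch of the cosine (Fourier) fitting described above for a single pulse-phase bin; the modulation-phase convention below is one possible choice, arranged to reproduce Equation (1):

```python
# Sketch: fit amplitude, phase, and offset of a cosine with the mean
# period P3 = 17.33 P to the intensity series of one phase bin, then
# predict the time of the modulation maximum via Equation (1).
import numpy as np
from scipy.optimize import curve_fit

P3 = 17.33   # modulation period in pulse periods (Section 3)
T0 = 770.0   # reference pulse number, near the centre of Burst 3

def modulation(n, amp, phi0, offset):
    # phi0 is the modulation phase (in cycles) at the reference time T0
    return amp * np.cos(2.0 * np.pi * ((n - T0) / P3 + phi0)) + offset

def fit_bin(pulse_numbers, intensities):
    popt, _ = curve_fit(modulation, pulse_numbers, intensities,
                        p0=(intensities.std(), 0.0, intensities.mean()))
    amp, phi0, offset = popt
    t_max = T0 - (phi0 % 1.0) * P3   # Equation (1): phi0 = -(t_max - T0)/P3
    return amp, phi0, t_max
```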
It is clear that the band slopes derived from the Fourier analysis are very different from those derived from the direct Gaussian fitting to the band profiles and illustrated in Figures 4 and 5. Since P_3 is relatively stable, this implies a similarly different distribution of the derived P_2 values. These results will be discussed further in Section 5. Estimates of P_2 and P_3 can also be obtained by computing fluctuation spectra. A description of such an analysis is provided in Appendix B.2, which yields consistent results.
Inspection of Figure 2 indicates that the trailing pulse components (C3 and C4) are always detectable in the last pulse before a null event, whereas the leading pulse components generally are not. In Figure 7 we plot the mean profile of the last active pulse (LAP) of each burst (taking Bursts 5a and 5b separately) and the mean pulse profile over all bursts. The LAP average profile is clearly dominated by the trailing components, although there is occasional emission for components 1 and 2, for example, in Burst 3. Within the uncertainties, the LAP emission for C3 and C4 has the same shape as the mean profile over all burst emission and a similar amplitude. To quantify the significance of the shape change we have carried out 500,000 trials in which we have summed seven randomly selected pulses (from the "on" or burst states) to form an integrated profile. The strength of the leading components relative to the trailing components was determined for each trial by calculating the area beneath the components using PSRSALSA (Weltevrede 2016). Out of the 500,000 trials, only one had a more extreme ratio than is observed in Figure 7 (0.154), thereby confirming that the weakness of the leading components in the LAP prior to a null is not a chance result. The first detectable pulse of each burst is not systematically different from an average pulse, with two bursts starting with the leading component (e.g., Bursts 5a and 5b), one with the trailing component (Burst 4), and two with both components starting at the same time (Bursts 3 and 6).
The Long-term Timing and Emission Properties
Timing Solution
Timing residuals were formed using the long-term Parkes monitoring observations. We used the TEMPO2 software package (Hobbs et al. 2006) with the DE421 solar system ephemeris and the TT(TAI) time standard to obtain a phase-connected timing solution extending over 353 days. The timing residuals are shown in Figure 8. The nulling timescale is too short to search for changes in the spin-down rate during such events and the pulse arrival times are modeled well using a very simple parameterization of the pulsar. The timing solution is presented in Table 1. We also present parameters derived from the timing parameters in the table, including the DM-based distance estimate from the Yao et al. (2017) model for the Galactic free-electron distribution, the pulsar's characteristic age, τ_c = P/(2Ṗ) (where P is the pulse period and Ṗ is its first time derivative), and a representative surface-dipole magnetic field strength, B_s = 3.2×10^19 (P Ṗ)^{1/2} Gauss. The mean flux density at 1400 MHz, 0.9±0.2 mJy, was calculated by using the PSRCHIVE routine PSRFLUX to give the flux density of each of the Parkes observations and then computing the mean and rms deviation of these values. The pulse widths are at 50% of the peak amplitude and were computed from the mean profiles for the Parkes and FAST observations.
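For reference, both derived quantities follow directly from P and Ṗ; the Ṗ value below is a placeholder for illustration only (the measured value appears in Table 1):

```python
# Sketch: characteristic age and surface-dipole field from P and Pdot,
# using the standard expressions quoted above.
import math

SEC_PER_YEAR = 3.156e7

def characteristic_age_yr(P, Pdot):
    return P / (2.0 * Pdot) / SEC_PER_YEAR      # tau_c = P / (2 Pdot)

def surface_dipole_field_gauss(P, Pdot):
    return 3.2e19 * math.sqrt(P * Pdot)         # B_s = 3.2e19 (P Pdot)^(1/2)

P = 1.6        # s, approximate period of PSR J1926-0652
Pdot = 1e-15   # placeholder value, for illustration only
print(characteristic_age_yr(P, Pdot), surface_dipole_field_gauss(P, Pdot))
```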
We cross-correlated the pulse profile for each observation with our analytic template and inspected, by eye, the deviation from the scaled template and the observed profiles. We found no evidence for pulse shape changes and therefore have no evidence that this pulsar exhibits discrete pulse-shape states.
Polarization Properties and Flux Density
To probe the polarimetric properties of the pulsar and to measure the average on-state flux density in the 20 cm observing band from the Parkes observations, we selected subintegrations for observations in which emission was detected. Observations were aligned using the timing solution given in Table 1, and then summed to produce a calibrated profile of the pulsar in the 20 cm observing band using the PSRCHIVE software suite (Hotan et al. 2004). This summed profile was plotted using the PSRSALSA software package (Weltevrede 2016) and is shown in the left panel of Figure 9. We determined the rotation measure of the pulsar (RM = −55±3 rad m^−2) using the RMFIT package.
Figure 4. Histograms of the observed subpulse drift rates or band slopes for all observed drift bands, separately for components C1 and C4. The bin width is the same for all histograms (0°.2/P) and has been chosen to approximate the typical uncertainty in the measured band slopes. The vertical dashed lines mark zero drift rate.

The average profile is moderately linearly polarized (dashed curve in the upper left panel of Figure 9), with a fractional linear polarization of 30%±1%. As is commonly observed in "classic" double profiles (e.g., Lyne & Manchester 1988), the degree of linear polarization is low at the profile edges and high in the bridge region. There is little evidence for significant circular polarization (dotted curve in the top panel of the left plot of Figure 9). The position angle (PA) curve of the linear polarization is shown in the bottom panel of the left plot of Figure 9. Its shape can be fitted using the rotating vector model (RVM; Radhakrishnan & Cooke 1969). The fit is remarkably good, but the parameters are not well constrained. In the right-hand panel of Figure 9 we show the reduced χ² values of the fit as a function of α and β. The magnetic inclination angle, α, is practically unconstrained and, from the RVM fit alone, we can only conclude that β < 13°. We describe more constraints on these parameters in the discussion section.
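For completeness, a sketch of the RVM PA curve that is being fitted; sign conventions for the PA, β, and ζ = α + β vary between authors, so this is one common form rather than the exact parameterization used by the fitting code:

```python
# Sketch: position angle of the linear polarization as a function of
# pulse phase in the rotating vector model (Radhakrishnan & Cooke 1969).
import numpy as np

def rvm_pa(phi, alpha, beta, phi0=0.0, pa0=0.0):
    """All angles in radians; phi is pulse longitude, zeta = alpha + beta."""
    zeta = alpha + beta
    num = np.sin(alpha) * np.sin(phi - phi0)
    den = (np.sin(zeta) * np.cos(alpha)
           - np.cos(zeta) * np.sin(alpha) * np.cos(phi - phi0))
    return pa0 + np.arctan2(num, den)
```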
Long-term, On-off Timescale
The Parkes observations, typically ∼1 hr in duration (but sometimes as long as ∼7 hr), show that the time period during which the emission remains on or off lasts for tens of minutes. Parameterizing the exact on-off timescale is nontrivial, as the emission state may have switched only once during a given observation (and so we have no prior information on how long it was on or off before or after the observation). Also, some of the observations were affected by RFI, which was often so strong that we were unable to determine whether the emission switched states during the RFI. Our subintegration time is 30 s for the Parkes observations and so we assume that the emission remains on (or off) when RFI is affecting our data for less than four subintegrations (2 minutes). Similarly, calibration observations (lasting a couple of minutes) were carried out regularly through long observations of the pulsar and we assumed that the pulsar remained in a single state throughout those calibration observations. With these assumptions the maximum on-state duration is ∼20 minutes. The maximum off-state duration is ∼93 minutes. The distributions of on- and off-state durations are quantified statistically in Figure 10 and Table 1.
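A sketch of how the on/off durations can be extracted from per-subintegration detection flags under the bridging assumptions just described; the input flag arrays are hypothetical:

```python
# Sketch: turn per-subintegration detection flags (30 s each) into on/off
# durations, bridging up to four RFI- or calibration-affected
# subintegrations (2 minutes) with the preceding known state.
import numpy as np

def state_durations(detected, bad, tsub_s=30.0, max_bridge=4):
    state = detected.astype(float)
    state[bad] = np.nan                 # unknown subintegrations
    run = 0
    for i in range(state.size):         # bridge short unknown gaps
        if np.isnan(state[i]):
            run += 1
            if run <= max_bridge and i - run >= 0:
                state[i] = state[i - run]
        else:
            run = 0
    durations, start = [], 0
    for i in range(1, state.size + 1):  # run-length encode known states
        if i == state.size or state[i] != state[start]:
            if not np.isnan(state[start]):
                durations.append((bool(state[start]),
                                  (i - start) * tsub_s / 60.0))
            start = i
    return durations                    # list of (is_on, minutes)
```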
Discussion
The Discovery of PSR J1926−0652
PSR J1926−0652 is a relatively bright pulsar and can be detected with the Parkes telescope within a few minutes. We therefore wished to understand why this pulsar had not been discovered by previous surveys. We checked the Parkes data archive (Hobbs et al. 2011) and downloaded previous search-mode observations (that were not embargoed) taken at positions within 10 arcmin of the known pulsar position. We identified two observations, each of 4.3 minutes duration, and searched for the pulsar using PRESTO (Ransom 2001). We did not detect the pulsar, but this is not surprising as we know PSR J1926−0652 has a nulling fraction of ∼75% and an average off-state duration of 20 minutes. The intermittency (i.e., the on/off timescale) of the pulse emission is similar to that seen in other pulsars such as PSR J1717−4054 (Kerr et al. 2014; Young et al. 2015). Clearly it is likely that there are many such pulsars remaining to be discovered by repeated observations of the same sky position.
The Emission and Viewing Geometry
The pulse profile has two primary components (C1 and C4), two inner components (C2 and C3) and bridge emission. Various generic models for pulse profile shapes have been suggested, including a core and one or more cones, randomly distributed patches, or patchy cones (see, e.g., Karastergiou & Johnston 2007 for a review and an empirical model). Our work neither explicitly confirms nor rules out any particular model, but we note that the profile component separations decrease as expected for higher-frequency emission occurring lower in the pulsar magnetosphere. As shown in Section 4.2, the PA of the linear polarization is also remarkably well fitted by an RVM. However, the magnetic inclination angle, α, is unconstrained.
Following the description in Rookyard et al. (2015), further constraints on the viewing geometry can be obtained by making various assumptions. The relatively large width of the profile (taken to be 60°±5° based on the Parkes data) implies either that α is small, or that the emission comes from high up in the magnetosphere. This height can be constrained because the RVM inflection point occurs very close to the midpoint of the profile. Rookyard et al. (2015) considered how the inflection point can be delayed relative to the position of the fiducial plane. The upper limit of this delay for our data is only ∼15°, which implies an emission height lower than 5000 km. In Figure 9 we have identified values of α and β that can produce a pulse of the measured width. These are shown in the green areas and suggest that the magnetic axis is relatively aligned with the pulsar's rotation axis, with α < 55°.
The Subpulse Drifting and Nulling Phenomena
As Figures 1 and 2 clearly show, PSR J1926−0652 exhibits drifting subpulses. However, their properties are complex. The drifting is more regular in the longer bursts and, specifically for the longest Bursts 1 and 3, is more regular in the leading component C1 than in the trailing component C4. Figure 5 shows that the modulation phases for the interior components C2 and C3 do not lie on the extrapolation of the band slopes for C1 and C4, respectively. The modulation phases shown in Figure 5 appear to smoothly join the inner C2, C3 components to the outer C1, C4 components, but this may be an artifact of the smoothing in time over the burst inherent in the Fourier analysis. There is no significant P_3 modulation for the bridge region between components C2 and C3.
As mentioned in Section 3, the band slopes obtained from the Gaussian fits to subpulse profiles and the Fourier analysis are systematically different, especially for the trailing component C4. This is most clearly illustrated in the longest burst, Burst 3 (Figure 5). The Fourier band slopes tend to be flatter (larger absolute values) and, similarly, the derived P_2 values have larger absolute values. The reasons for this are not entirely obvious. The Fourier method averages over the whole burst, while the Gaussian fits are independent for each drift band. There are significant variations in band spacing (P_3) from band to band, especially at the beginning and end of the burst, where the actual drift-band times differ substantially from the Fourier phase predictions assuming a constant P_3. Additional band structures not described by the Fourier model exist, most notably the additional bands seen in C1 at pulse number 676 and in C4 at pulse number 661.
On the other hand, there is a degree of subjectivity involved in choosing the subpulse structures to fit with the Gaussian analysis. For example, it could be argued that there are independent double bands for C1 around pulse numbers 704 and 738. For C4, the drift structure is not so clear and the Fourier phases are evidently dominated by a few relatively flat bands, for example, around pulse numbers 655, 792, and 809. Both methods have their strengths but, unless the drift-band structure is very regular, they can give quite different results.
The carousel model, originally proposed by Ruderman & Sutherland (1975), is widely used to interpret drifting subpulses. In this model, emission is produced from a series of "sparks" that circulate at a fixed period around the magnetic axis. As these sparks rotate past the observer's line of sight, they give rise to the characteristic drifting subpulses seen in many pulsars.
PSR J1926−0652 has four profile components, each of which has distinct subpulse behavior. Four components would naturally arise if there is emission from a second, inner carousel of sparks. Within the uncertainties, all components share the same periodicity (P_3 ≈ 17.3 P), suggesting that any such nested carousels are phase-locked in the sense that they have the same rotation period and the same number of sparks. Nested phase-locked carousels have been proposed previously, for example, to explain the drifting subpulses of PSR B0818−41 (Bhattacharya et al. 2009).
However, for PSR J1926−0652 there are a number of features that do not fit naturally into such a carousel model. For example, there are significant variations in band spacing (P_3) between different drift bands for a given component. Also, there are clear extra drift bands that are not part of the regular P_3 modulation.
Other models for drifting subpulses also exist. For instance, Gogoberidze et al. (2005) suggest the possibility that the drifting subpulses result from the modulation of radio emission by magnetospheric oscillations. Such resonances that beat with the rotation of the pulsar may provide more natural explanations than carousel models for apparently complex phenomena such as harmonically related drift rates as seen in, e.g., PSRs B0031−07 and B2016+28 (Taylor et al. 1975), variable and even reversing drift rates as seen in PSR B0826−34 (Gupta et al. 2004), or opposite drift directions in different pulse components such as those observed in PSRs J0815+0939 (Champion et al. 2005) and B1839−04 (Weltevrede 2016).
As described in Section 3, the first pulses observed after a nulling event are comparable to a typical on-state pulse. However, we have shown that the LAP prior to a nulling event is significantly different, in that the leading component is significantly weaker than the trailing pulse components (Figure 7). The leading component could be weaker before a null as it fades away, but we see no evidence of such fading in our observation. In contrast, we see relatively strong emission in this component at the end of Burst 3. A second possibility is that the nulling events occur when the leading component is at (or near) its weakest point in the modulation cycle. This is similar to that observed by Gajjar et al. (2017) for PSR J1840−0840, which consistently enters the null state at the end of a drift band in one of its profile components. For PSR J1926−0652 we cannot make such a definitive statement, as we do see clear emission in the leading component in the LAP for Burst 3. However, we will show below that the drift rate seems to change near the end of this burst.
Our results add to the menagerie of interesting phenomena relating to nulling and subpulse drifting and show that there does not seem to be a single, simple connection between nulling and drifting. For instance, Deich et al. (1986) found, in PSR B1944+17, that null events were preceded by a decay in pulse intensity of around 50% over about three pulse periods. They also showed that, like PSR J1926−0652, the LAPs were quantitatively different in shape and more variable than other pulses. Similarly, individual pulses from PSR J1727−2739 show a decay in intensity before a null event (Wen et al. 2016). This pulsar also has two primary components and, like PSR J1926−0652, the intensity of the leading component is weaker than the trailing component prior to a null event. In contrast to the pulsar described in this paper, the pulses immediately after a null event in PSR J1727−2739 were also significantly different from typical pulses.
There is currently no single physical model that can explain all of these phenomena. Further observations, which would hopefully capture even longer burst events, will be needed to obtain a deeper understanding of this unusual pulsar and of drifting and nulling in general.
Conclusions
We report here a pulsar discovery, namely PSR J1926−0652, from the FAST radio telescope. Largely through FAST single-pulse studies, aided by follow-up timing observations made by the Parkes telescope, PSR J1926−0652 is found to exhibit a plethora of emission phenomena, including nulling and subpulse drifting; our main findings are summarized in the preceding sections. FAST continues to discover pulsars, including bright ones that were probably missed by previous searches because of their nulling properties. We thus expect this work to be just the first of many reporting new noteworthy pulsars. For PSR J1926−0652, we have only scratched the surface in terms of analyzing its emission mechanism. We have further observations planned with FAST to obtain more single-pulse data sets, and with Parkes for continued timing and monitoring, particularly with the new ultra-wideband receiver. We will be able to calibrate future FAST data sets and therefore to obtain high S/N single pulses that provide a more detailed insight into the single-pulse emission mechanism.
Having a declination close to zero, this pulsar can be observed by almost all of the major radio telescopes. PSR J1926−0652 holds the potential to help provide a coherent picture for explaining complex nulling and subpulse drifting phenomena that do not fit easily into the simple carousel model.
MPG-CAS Joint Project "Low-Frequency Gravitational Wave Astronomy and Gravitational Physics in Space".
The Parkes radio telescope is part of the Australia Telescope National Facility, which is funded by the Australian Government for operation as a National Facility managed by CSIRO. Pulsar research at Jodrell Bank Centre for Astrophysics and Jodrell Bank Observatory is supported by a consolidated grant from the UK Science and Technology Facilities Council (STFC).
Appendix A Data Access
The raw data from the FAST telescope used in the single-pulse study in this paper are owned by the National Astronomical Observatories, Chinese Academy of Sciences, and are available from the corresponding author upon reasonable request. The observations from the Parkes telescope have been obtained using project codes PX500 and PX501. Conditional on data embargoes, these data are available on CSIRO's data archive (Hobbs et al. 2011). We note that observations of the pulsar currently in the archive were recorded under the source name PSR J1926−0649 (instead of the correct name of PSR J1926−0652). The raw PX500 data have an 18-month embargo period, whereas the PX501 data have a 10-yr embargo period.
We have produced a publicly downloadable data collection, available from CSIRO's data archive, containing our processed data files. This data collection contains (1) FAST single-pulse data for PSRJ1926−0652 in four different frequency bands and (2) Parkes timing data at 20 cm, including pulse arrival times, the arrival time file, the timing model file, the timing template file, and the calibrated and summed profiles. This data collection is available from the CSIRO Data Access Portal (Zhang et al. 2018).
Appendix B Analysis Using PSRSALSA
The software tools used to conduct the P3-fold and fluctuation spectra analyses here are part of the PSRSALSA package (Weltevrede 2016) and are freely available online. 23
B.1. P3-Fold
The single-pulse data were folded at the identified period P3 = 17.33P (given in Section 3) using PSRSALSA, for Burst 3 (the longest observed burst sequence). This folding results in a high-S/N representation of the average drift band and permits more detailed studies of weak features in the drifting behavior.
The P3-fold (Figure 11) shows that the leading component has an associated average drift band that is relatively steep, while the trailing component shows a much shallower gradient. This is consistent with the measured P2 value of the trailing component being larger. In addition, the figure reveals that there are two additional weak profile components with distinct drifting subpulse properties. The first of these two minor components (C2, as described in Section 3) can be associated with a small "tail" appearing around 170° pulse longitude and pulse number ∼18. The second minor component (C3, as described in Section 3), around 195° pulse longitude, appears in the P3-fold at pulse number ∼23.
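As an illustration of the folding operation, the following is a minimal NumPy sketch of a P3-fold computed from a pulse stack; the array layout and the simple nearest-subintegration binning (PSRSALSA additionally interpolates and can track P3 variations) are our own assumptions, not the package's implementation.

```python
import numpy as np

def p3_fold(pulse_stack, p3=17.33, n_subint=34):
    """Fold a pulse stack (n_pulses x n_bins) at the drift period P3.

    Each single pulse is assigned to a sub-integration according to its
    phase within the P3 cycle; averaging the aligned pulses builds up a
    high-S/N picture of the mean drift band.
    """
    n_pulses, _ = pulse_stack.shape
    folded = np.zeros((n_subint, pulse_stack.shape[1]))
    counts = np.zeros(n_subint)
    for k in range(n_pulses):
        phase = (k % p3) / p3                    # position in the P3 cycle, [0, 1)
        idx = int(phase * n_subint) % n_subint   # sub-integration index
        folded[idx] += pulse_stack[k]
        counts[idx] += 1
    return folded / np.maximum(counts, 1)[:, None]
```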
B.2. Fluctuation Spectra Analysis
The longitude-resolved fluctuation spectrum (LRFS) presents the spectral power of fluctuations as a function of rotational phase. The power in the LRFS can be used to quantify the longitude-resolved modulation index, which is shown, for Burst 3, as the points with error bars in the top left panel of Figure 12, together with the pulse profile. More examples of such statistical analyses can be found in, e.g., Edwards & Stappers (2003) and Weltevrede et al. (2006a, 2006b).
The vertical frequency axis of both the LRFS and 2DFS corresponds to P/P3, where P denotes the rotational period of the pulsar. The LRFS shows a clear spectral feature at P/P3 ≈ 0.058 cycles per period (cpp) for both components. This spectral feature corresponds to the pattern repetition period of the drifting subpulses, P3 ≈ 17P, which can also be identified by eye in the pulse stack. In addition to this well-defined spectral feature, there are two weaker peaks for the leading component, at ≈0.067 cpp (corresponding to P3 ≈ 15P) and at ≈0.043 cpp (corresponding to P3 ≈ 23P; see the top part of panel (b)), and one weaker peak for the trailing component, at ≈0.068 cpp (corresponding to P3 ≈ 15P; see the bottom part of panel (b)). These results indicate variations in the P3 parameter during Burst 3. The horizontal axis of the 2DFS denotes the pattern repetition frequency along the pulse longitude axis, expressed as P/P2. Following the description in Weltevrede et al. (2006b), we have measured P2 and P3 for the well-defined spectral feature of the two components. This gives P2 = 29^{+3}_{−2} deg and P3 ≈ 17±0.5P for the leading component and P2 = 42^{+5}_{−10} deg and P3 ≈ 17±0.5P for the trailing component. We note that the quoted errors do not capture the fact that there is high variability in the drift band shapes, and only a relatively small number of drift bands are observed. These results are consistent with, but less accurate than, our Fourier analysis result (17.35±0.04P and 17.31±0.03P for the leading and trailing components, respectively), which was given in Section 3.
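For concreteness, a minimal NumPy sketch of how the LRFS and 2DFS can be formed from a pulse stack is given below. The variable names are ours, and PSRSALSA applies further steps (e.g., block averaging of several spectra and normalization) that are omitted here.

```python
import numpy as np

def lrfs(pulse_stack):
    """Longitude-resolved fluctuation spectrum: an FFT along the
    pulse-number axis for each longitude bin. Power at a vertical
    frequency P/P3 reveals periodic subpulse modulation."""
    spec = np.fft.rfft(pulse_stack, axis=0)
    return np.abs(spec) ** 2                     # shape: (n_freq, n_bins)

def twodfs(pulse_stack):
    """Two-dimensional fluctuation spectrum of a pulse-stack component.

    A 2D FFT maps drifting subpulses to power offset from P/P2 = 0;
    the horizontal axis is P/P2, the vertical axis is P/P3."""
    spec = np.fft.fft2(pulse_stack)
    spec = np.fft.fftshift(spec, axes=1)         # center the P/P2 axis
    n_pos = pulse_stack.shape[0] // 2 + 1        # keep 0 to 0.5 cpp
    return np.abs(spec[:n_pos]) ** 2
```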
Figure 1. (Upper panel) Pulse stack with the single uncalibrated polarization channel averaged across the FAST observing band from 270 to 800 MHz. The six active intervals are indicated as bursts. (Lower panel) The average pulse profile obtained from these pulses, with the four pulse components indicated.
Figure 2. Pulse stacks for each burst are given in the upper panel of each subplot. The lower panel of each subplot shows the pulse profile averaged over the whole data span as a black solid line and averaged over the particular burst state as a blue dashed line.
Figure 3. Mean pulse profiles for the whole observation in three frequency subbands across the observed bandwidth of 270 to 800 MHz. The profiles are normalized to a peak amplitude of 1.0 and have been aligned by the midpoint of the leading and trailing edges at 50% of the profile peak.
[Fragment of Table 1:]
… 93
Mean "off" duration (min.): 20.3
Standard Deviation of "off" duration (min.): …
Date of first observation (MJD): 58034
Date of last observation (MJD): 58387
Total time span (days): 353
Figure 5. Pulse stacks for the first half of Burst 3 (left panel) and the second half (right panel), with overlaid fits to the drift band structure. The white points are the centroids of Gaussian profiles fit to the subpulses in each pulse, and the white lines are a weighted least-squares fit of a straight line to the centroid points for a given drift band. The purple and green points result from fits of a cosine function over the whole burst to each pulse phase bin across the profile components, assuming constant P3 values for the leading and trailing components. The derived cosine phases were converted to give the locus of the cosine peak near t0 (pulse 770) and then replicated across the burst. See the text for further details.
Figure 6. Variation of P3 across the leading components (C1 and C2, purple +) and the trailing components (C3 and C4, green ×) for Burst 3. For the trailing components, the actual pulse phase is 47° more than that indicated. The vertical dashed lines mark the boundaries of the significant emission. The left boundary applies to both the leading and trailing components.
Figure 7. Mean pulse profile averaged over all bursts (thicker black line) and the average profile for the last detectable pulse of each burst (thinner blue line). The mean burst profile peak is normalized to 1.0. The profiles were averaged across the FAST observing band.
Figure 8. Timing residuals corresponding to 353 days of Parkes monitoring observations in the 20 cm observing band. Note that the pulsar was only detected in 29 of the Parkes observations. It was not detected at the epochs where we have drawn a vertical, dotted line. Three such observations were recorded on the same day (at −176 days in the figure).
Figure 9. Left: polarization profile at 20 cm (1400 MHz). In the top panel, the black line is the mean flux profile, the dashed line is the linear polarization profile, and the dotted line is the circular polarization profile. The black dots, in the bottom panel, represent the linear polarization angles, along with the best-fit curve from the RVM fit shown as the red line. Right: the results of fitting an RVM curve for each (α, β) combination. The reduced chi-squared (χ2) of the fit is shown as the gray scale, with the darkest value corresponding to the best fit. The black contour lines represent 1σ, 2σ, and 3σ confidence boundaries. The green regions show geometries allowed by the observed pulse width under certain assumptions; see Section 5.2.
Figure 10. Histograms of the observed on (left panel) and off (right panel) state durations.
Figure 11. Single P3-fold for PSRJ1926−0652 covering the full 270−800 MHz range, for the longest observed burst sequence (Burst 3).
22 https://data.csiro.au/
23 https://github.com/weltevrede/psrsalsa
A two-dimensional Fourier transform of the pulse stack produced the 2DFS for Burst 3 in panel (b) of Figure 12. The 2DFS of the leading and trailing components are shown separately, for the whole band between 270 and 800 MHz.
Table 1. Parameters for PSRJ1926−0652.
Timing model parameters:
R.A. (J2000) (h:m:s): 19:26:37.11(3)
1. PSRJ1926−0652 has a relatively long period of about 1.6 s and a mean 1400 MHz flux density of about 0.9 mJy.
2. The pulse emission switches off (nulls) about 75% of the time, on timescales between 4 and 450 pulse periods, and with an average off-state duration of about 20 minutes.
3. PSRJ1926−0652 has two primary components, two weaker inner components, and bridge emission. The separation between components decreases in the higher-frequency bands, consistent with that expected from radius-to-frequency mapping.
4. The average profile at 1400 MHz is moderately linearly polarized, with a fractional linear polarization of about 30%. The magnetic inclination angle, α, is poorly constrained from the PA fit alone. Future multiple-band and polarized single-pulse observations promise much better constraints.
5. PSRJ1926−0652 exhibits complex drifting subpulse properties, with four profile components, each of which has distinct behavior. Four components would naturally arise if there is emission from a second, inner carousel of sparks. However, PSRJ1926−0652 possesses a number of features that do not fit into such a carousel model. Significant variations in band spacing (P3) between different drift bands were seen for any given component, and there are clear extra drift bands that are not part of the regular P3 modulation.
In order to show the successive single pulses, we have not removed pulses affected by RFI in the pulse-stack figures (Figures 1 and 2). However, for the average pulse profiles and the single-pulse analysis described in this paper, we do remove the interference.
If Burst 5 is treated as a single burst, the minimum burst duration is 29 pulse periods for Burst 2.
A slightly higher, but still very low, ratio is obtained by taking six randomly selected pulses and treating Burst 5 as one burst. The deviation of the LAP profile from the average is still highly significant in this case.
18 The PA curve in Figure 9 uses values of magnetic inclination angle α=158°, impact parameter β=−3°, position-angle offset of 50°, and fiducial-plane angle of 181°.
Without the assumption that the pulse stays on or off across small time gaps, we then obtain maximum on and off durations of 13.5 and 36 minutes, respectively.
20 P309: an intermediate-latitude millisecond pulsar survey.
21 https://www.cv.nrao.edu/~sransom/presto/
This work is supported by the National Key R&D Program of China (No. 2017YFA0402600), the State Key Development Program for Basic Research (2015CB857100), the National Natural Science Foundation of China (grant Nos. U1731238, 11725313, 11690024, 11743002, 11873067, U1731218, 11565010, 11603046, and U1531246), and the MPG-CAS Joint Project "Low-Frequency Gravitational Wave Astronomy and Gravitational Physics in Space".
Baars, J. W. M., Genzel, R., Pauliny-Toth, I. I. K., & Witzel, A. 1977, A&A, 61, 99
Backer, D. C. 1970, Natur, 228, 42
Bhattacharya, G., Gupta, Y., & Gil, J. 2009, MNRAS, 398, 1435
Figure 12. Fluctuation analysis of the emission in the Burst 3 state. (a) The top panel shows the integrated pulse profile (solid line) and the longitude-resolved modulation index (solid line with error bars). Below this panel the LRFS is shown, with the pulse longitude in degrees on its horizontal axis, which is also the scale for the abscissa of the plot above. (b) Analyses for each component: the top panel is the 2DFS of the leading component, and the side panels show the horizontally (left) and vertically (bottom) integrated power. The bottom panel is the 2DFS of the trailing component. Note that there are 300 pulses during Burst 3 (pulse numbers from 641 to …). In order to make the most of the pulses and give a high resolution, we used the last 256 successive pulses (pulse numbers from 641 to 896) in Burst 3 for our fluctuation analysis. We also note that these fluctuation spectra show only part of the full spectra (which extend up to P/P3 = 0.5 cpp).
Biggs, J. D. 1992, ApJ, 394, 574
Champion, D., Lorimer, D. R., McLaughlin, M. A., et al. 2005, MNRAS, 363, 929
Cordes, J. M. 1978, ApJ, 222, 1006
Deich, W. T. S., Cordes, J. M., Hankins, T. H., et al. 1986, ApJ, 300, 540
Drake, F. D., & Craft, H. D. 1968, Natur, 220, 231
Edwards, R. T., & Stappers, B. W. 2003, A&A, 407, 273
Gajjar, V., Yuan, J. P., Yuen, R., et al. 2017, ApJ, 850, 173
Gogoberidze, G., Machabeli, G. Z., Melrose, D. B., & Luo, Q. 2005, MNRAS, 360, 669
Gupta, Y., Gil, J., & Kijak, J. 2004, A&A, 426, 229
Hobbs, G., Edwards, R. T., & Manchester, R. N. 2006, MNRAS, 369, 655
Hobbs, G., Miller, K. D., Manchester, R. N., et al. 2011, PASA, 28, 202
Hotan, A. W., van Straten, W., & Manchester, R. N. 2004, PASA, 21, 302
Karastergiou, A., & Johnston, S. 2007, MNRAS, 380, 1678
Kerr, M., Hobbs, G., Shannon, R. M., et al. 2014, MNRAS, 445, 320
Kramer, M., Lyne, A. G., O'Brien, J. T., Jordan, C. A., & Lorimer, D. R. 2006, Sci, 312, 549
Li, D., Wang, P., Qian, L., et al. 2018, IMMag, 19, 112
Lyne, A., & Manchester, R. 1988, MNRAS, 234, 477
Lyne, A., Hobbs, G., Kramer, M., Stairs, I., & Stappers, B. 2010, Sci, 329, 408
Manchester, R. N., Hobbs, G., Bailes, M., et al. 2013, PASA, 30, 17
McLaughlin, M. A., Lyne, A. G., Lorimer, D. R., et al. 2006, Natur, 439, 817
Mitra, D., Gil, J., & Melikidze, G. 2009, ApJ, 696, L141
Qian, L., Pan, Z. C., Li, D., et al. 2019, SCPMA, in press
Qiao, G. J., Lee, K. J., Zhang, B., Xu, R. X., & Wang, H. G. 2004, ApJL, 616, L127
Radhakrishnan, V., & Cooke, D. J. 1969, ApL, 3, 225
Rankin, J. M. 1986, ApJ, 301, 901
Rankin, J. M., & Wright, G. A. E. 2008, MNRAS, 385, 1923
Ransom, S. M. 2001, PhD thesis, Harvard University
Rookyard, S. C., Weltevrede, P., & Johnston, S. 2015, MNRAS, 446, 3356
Ruderman, M. A., & Sutherland, P. G. 1975, ApJ, 196, 51
Staveley-Smith, L., Wilson, W. E., Bird, T. S., et al. 1996, PASA, 13, 243
Taylor, J. H., Manchester, R. N., & Huguenin, G. R. 1975, ApJ, 195, 513
van Straten, W., & Bailes, M. 2011, PASA, 28, 1
Wang, N., Manchester, R. N., & Johnston, S. 2007, MNRAS, 377, 1383
Weltevrede, P. 2016, A&A, 590, A109
Weltevrede, P., Edwards, R. T., & Stappers, B. W. 2006a, A&A, 445, 243
Weltevrede, P., Wright, G. A. E., Stappers, B. W., & Rankin, J. M. 2006b, A&A, 458, 269
Wen, Z. G., Wang, N., Yuan, J. P., et al. 2016, A&A, 592, A127
Wolszczan, A., Bartel, N., & Sieber, W. 1981, MNRAS, 196, 473
Xie, Y. W., Wang, J. B., Hobbs, G., et al. 2019, arXiv:1903.01077
Yao, J. M., Manchester, R. N., Wang, N., et al. 2017, MNRAS, 468, 3289
Young, N. J., Weltevrede, P., Stappers, B. W., Lyne, A. G., & Kramer, M. 2015, MNRAS, 449, 1495
Zhang, L., Li, D., Hobbs, G., et al. 2018, Data files from Parkes and FAST for PSR J1926−0652, v1, CSIRO Data Collection, doi:10.25919/5c354f21160e2
Zhu, W. W., Berndsen, A., Madsen, E. C., et al. 2014, ApJ, 781, 117
| [
"https://github.com/weltevrede/psrsalsa"
] |
[
"Deep Reinforcement Learning for Robotic Pushing and Picking in Cluttered Environment",
"Deep Reinforcement Learning for Robotic Pushing and Picking in Cluttered Environment"
] | [
"Yuhong Deng ",
"Xiaofeng Guo ",
"Yixuan Wei ",
"Kai Lu ",
"Bin Fang ",
"Di Guo ",
"Huaping Liu ",
"Fuchun Sun "
] | [] | [] | In this paper, a novel robotic grasping system is established to automatically pick up objects in cluttered scenes. A composite robotic hand composed of a suction cup and a gripper is designed for grasping the object stably. The suction cup is used for lifting the object from the clutter first and the gripper for grasping the object accordingly. We utilize the affordance map to provide pixel-wise lifting point candidates for the suction cup. To obtain a good affordance map, the active exploration mechanism is introduced to the system. An effective metric is designed to calculate the reward for the current affordance map, and a deep Q-Network (DQN) is employed to guide the robotic hand to actively explore the environment until the generated affordance map is suitable for grasping. Experimental results have demonstrated that the proposed robotic grasping system is able to greatly increase the success rate of the robotic grasping in cluttered scenes. | 10.1109/iros40897.2019.8967899 | [
"https://export.arxiv.org/pdf/2302.10717v1.pdf"
] | 210,971,962 | 2302.10717 | 9de356e571926ef915968a4c65544d4dc88268c5 |
Deep Reinforcement Learning for Robotic Pushing and Picking in Cluttered Environment
Yuhong Deng
Xiaofeng Guo
Yixuan Wei
Kai Lu
Bin Fang
Di Guo
Huaping Liu
Fuchun Sun
Deep Reinforcement Learning for Robotic Pushing and Picking in Cluttered Environment
In this paper, a novel robotic grasping system is established to automatically pick up objects in cluttered scenes. A composite robotic hand composed of a suction cup and a gripper is designed for grasping the object stably. The suction cup is used for lifting the object from the clutter first and the gripper for grasping the object accordingly. We utilize the affordance map to provide pixel-wise lifting point candidates for the suction cup. To obtain a good affordance map, the active exploration mechanism is introduced to the system. An effective metric is designed to calculate the reward for the current affordance map, and a deep Q-Network (DQN) is employed to guide the robotic hand to actively explore the environment until the generated affordance map is suitable for grasping. Experimental results have demonstrated that the proposed robotic grasping system is able to greatly increase the success rate of the robotic grasping in cluttered scenes.
I. INTRODUCTION
With the rapid development of e-commerce, a growing demand has been placed on using autonomous robots in logistics. There are already many mobile robots working in real warehouses for product transportation. However, it is still a great challenge for a robot to pick and sort products automatically in real scenarios [1]. This kind of work is highly dependent on human workers nowadays, which is neither economical nor time-efficient. In this work, we propose a novel robotic grasping system which is able to automatically pick up objects in cluttered scenes. A composite robotic hand which is able to grasp many different kinds of objects robustly is designed. A deep Q-Network (DQN) [2] is employed to guide the robotic hand to actively explore the environment to find proper grasping points.
The design of robotic hands has been studied for years, and many different types of robotic hands have been proposed. The robotic hand with a suction cup is very popular and widely used in robotic grasping tasks [3], [4], because the suction cup usually has a simple structure and is robust to many different objects. In [5], self-sealing suction cup arrays are proposed to greatly improve the robotic grasping ability in uncertain environments. To increase the adhesion force of the suction cup, a stretchable suction cup with electroadhesion is designed [6]. There are also some other suction cups inspired by biomimetic designs [7], [8], [9], [10], [11]. However, the working mechanism of the suction cup imposes many restrictions on the surface and postures of the object. In addition, the inconsistency between the moving direction of the suction cup and the force direction makes the grasping unstable [12] and shortens the working life of the suction cup. Therefore, it is important for the robotic hand to find proper grasping points when using the suction cup.
Fig. 1. Grasping System. Our grasping system consists of a composite robotic hand, a UR5 manipulator, and a Kinect camera. An active exploration strategy is integrated into this system to implement effective robotic grasping.
Zeng et al. [13] have proposed to use an affordance map to indicate grasping points by analyzing the whole scene, which greatly improves the accuracy of robotic grasping. The affordance map is a graph which shows the confidence rate of each pixel in the input image for grasping. However, since the environment is usually complex and unstructured, the grasping location that the affordance map indicates is sometimes difficult for the robot to grasp at. To solve this problem, the active exploration mechanism is introduced [14], [15], [16]. By actively exploring the environment, the robot is able to make changes to the environment until it is suitable for grasping. For example, when the objects in the scene are too close to each other for grasping, the robot can actively explore the environment and change the positions of the objects until they are suitable for grasping. Similarly, the robot can rearrange the positions of objects by pushing them apart [17].
In this paper, a composite robotic hand composed of a suction cup and a gripper is designed. With a deep Q-Network (DQN), the robotic hand can actively explore the environment until a good affordance map is obtained. The whole grasping system (Fig. 1) is able to effectively grasp many kinds of objects in a real cluttered environment. The main contributions are summarized as follows:
• A novel composite robotic hand which combines a suction cup and a gripper is designed. It is able to grasp different objects quickly and stably.
• An active exploration algorithm which leverages the deep Q-Network (DQN) is proposed to facilitate the robot to actively explore the environment until a good affordance map is generated.
• The composite hand and the active exploration algorithm are fully integrated, and the experimental results demonstrate the superior performance of this system when grasping objects in a real cluttered environment.
The rest of this paper is organized as follows. Some related work is introduced in Section II. A brief overview of the proposed robotic grasping system is presented in Section III. Section IV and Section V describe the composite robotic hand we designed and the grasping strategy in detail. Extensive experimental results are demonstrated in Section VI to verify the effectiveness of the proposed robotic grasping system.
II. RELATED WORK
Robotic grasping is playing a more and more important role in many application areas such as industrial manufacturing, domestic service, logistics, etc. Because of the diversity and complexity of both the object and environment, higher requirements have been placed on the design of robotic hands. In general, robotic hands can be divided into two classes: 1) robotic hand with suction cup and 2) multi-finger robotic hand. Either design has its own specific application scenario. It is difficult to use only one single operation mode to fulfill all the tasks.
Therefore, many researchers try to leverage the advantages of both types of robotic hands, and some composite robotic hands have been proposed [18], [19], [20]. For example, a multi-functional gripper with a retractable mechanism is designed, which can switch between suction mode and grasping mode quickly and automatically [13]. It provides a hardware basis for implementing different grasp strategies. However, this multi-functional gripper doesn't consider the coupling between the two modes; it can only choose to execute one operation mode at a time. In addition, Hasegawa et al. [19] propose the Suction Pinching Hand, which has two underactuated fingers and one extendable and foldable finger whose fingertip has a suction cup mounted on it. Experiments have shown that it can grasp various objects stably by using both suction and pinching at the same time. Some other similar types of robotic hands have already been used in industrial solutions [21], [22].
Compared with the above composite robotic hands, the robotic hand proposed in this paper has a much simpler and more flexible structure. It seamlessly combines a suction cup and a two-finger gripper. It has a suction mode and a grasping mode, which can be coupled to work simultaneously and can also work separately. What's more, the proposed composite robotic hand is able to close its two fingers to push objects in order to actively explore the environment. A preliminary version of this paper has been published in [23], which discussed this robotic hand but did not provide the design guidelines for the reward. In this paper, we present more detailed illustrations of the experimental results and provide the details of the reward function design for the deep reinforcement learning.
III. SYSTEM OVERVIEW
The pipeline of the proposed robotic grasping system is illustrated in Fig. 3. The RGB image and depth image of the scene are obtained first. The affordance ConvNet [13] is then used to calculate the affordance map based on the input images. A metric Φ is proposed to evaluate the quality of the current affordance map. If Φ is above a certain threshold value, the composite robotic hand will implement the suction operation with the suction cup and then grasp the object accordingly. Otherwise, the affordance map will be fed into the DQN, which guides the composite robotic hand to make some disturbances to the environment by pushing objects in front of it. This process is iterated until all the objects in the environment are successfully picked up.
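The following is a minimal sketch of this control loop; the helper callables (image acquisition, affordance inference, metric computation, and the two actions) and the threshold default are placeholders of ours rather than the authors' actual API.

```python
def grasp_loop(scene, get_rgbd, affordance_net, compute_metric,
               lift_and_grasp, dqn_push, threshold=0.85):
    """Alternate DQN-guided pushes and suction grasps (pipeline of Fig. 3).

    The helpers are injected so the loop itself stays agnostic to the
    camera, network, and robot drivers actually used.
    """
    while not scene.is_empty():
        rgb, depth = get_rgbd(scene)           # Kinect observation
        aff = affordance_net(rgb, depth)       # pixel-wise heatmap in [0, 1]
        if compute_metric(aff) > threshold:
            lift_and_grasp(scene, aff)         # suction first, then pinch
        else:
            dqn_push(scene, rgb, depth, aff)   # active exploration step
```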
IV. ARCHITECTURE
At present, robotic hands can mainly be divided into two classes: 1) robotic hands with suction cups and 2) multi-finger robotic hands. Either design of robotic hand has its own characteristics, so it is reasonable to leverage the advantages of both to construct a composite one. Therefore, we propose a new type of composite robotic hand, which combines a gripper and a suction cup together.
A. Robotic Hand Structure
The structure of our composite robotic hand is shown in Fig. 2. The composite robotic hand is composed of two parallel fingers and a suction cup. The two fingers are symmetrically distributed on the base. There is a motor-driven parallelogram mechanism for each finger, which ensures that the surfaces of the two fingers are always parallel when the finger is grasping the object. The suction cup system consists of a suction cup, a push rod, a cylinder, two air pumps, a miniature motor, and a solenoid valve. The suction cup is placed in the middle of the two fingers. Two air pumps are respectively equipped inside and outside of the composite robotic hand. The inside one and the miniature motor are used for controlling the suction cup, while the outside one with the solenoid valve drives the push rod with a range of 75 mm.
B. Grasp Process
During the process of grasping (Fig. 4), the two fingers are in an open state, and the suction cup is held in its initial state. The robotic hand moves to the lifting point, and when the lifting point is reached, the suction cup is popped out to approach the surface of the object. Then the air pump generates negative pressure in the suction cup so that the object is lifted. Next, the push rod retracts to take the object between the two fingers. Finally, the fingers close to ensure the stability of the grasp. At the end of the task, the object is released; the process of releasing the object is the reverse of the suction process.
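A schematic of this actuation sequence is sketched below; the method names on the `hand` object are illustrative placeholders for the low-level pump, rod, and finger controllers, not an actual driver interface.

```python
def pick(hand, lift_point):
    """Suction-first, pinch-second grasp sequence (Fig. 4)."""
    hand.open_fingers()
    hand.move_to(lift_point)
    hand.extend_push_rod()      # pop the suction cup out toward the surface
    hand.enable_vacuum()        # negative pressure lifts the object
    hand.retract_push_rod()     # bring the object in between the fingers
    hand.close_fingers()        # pinch to stabilize the grasp

def release(hand):
    """Releasing reverses the suction process."""
    hand.open_fingers()
    hand.extend_push_rod()
    hand.disable_vacuum()
    hand.retract_push_rod()
```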
C. Characteristics of Grasp Process
Compared with other suction grasping systems, the proposed composite robotic hand uses the two fingers to hold the object after the suction cup lifts it, which increases the stability of the grasp. In particular, when the robotic hand is moving, the forces applied by the fingers and the suction cup can coordinate to guarantee that the object is stably grasped. Experiments have shown that our composite robotic hand can grasp objects of different sizes and shapes effectively and stably. Some results are demonstrated in Fig. 5.
V. DEEP Q-NETWORK STRUCTURE
A. Affordance Map
The affordance map is a graph which shows the confidence rate of each point in the input images for grasping [13]. In this paper, it is used to provide pixel-wise lifting point candidates for the suction cup. It avoids the need, in traditional grasping strategies, to recognize the object before grasping. However, it is inevitable that sometimes it is hard to distinguish good grasping points from the obtained affordance map, especially when the scenario is complicated. In this situation, we propose that the robot should have the ability to actively explore and change the environment until a good affordance map is obtained.
Fig. 6. Deep Q-Network Structure. For the current RGBD frame, we utilize the affordance map to output a primitive operation guidance, and crop and resize the input frame around the pixel with maximum confidence. We feed this local patch into 8 parallel tiny U-Nets, and 8 diverse action directions are output at subpixel-wise locations. The reward for this network is calculated according to a specifically designed metric derived from the affordance map.
1) Affordance ConvNet: The affordance ConvNet [13] is a network which takes RGB and depth images as input and outputs the affordance map, a dense pixel-wise heatmap with values ranging from 0 to 1. The closer the values are to 1, the more preferable the lifting locations are. For training purposes, we manually label the scene images, annotating areas that are suitable for grasping.
2) Failure cases: In cluttered scenes, the affordance map usually fails in three situations. The first is when objects of similar height or color are close to each other (Fig. 7(a)). These objects are likely to be regarded as one single object by the affordance ConvNet. In this situation, the junction between adjacent objects will be identified as a suitable picking point, which will result in grasp failures. The second is when two objects are partially overlapped (Fig. 7(b)). The two objects may be treated as one by the affordance ConvNet, and the boundary of the two objects may be identified as a suitable picking location. The third is when the pose of the object is over-tilted (Fig. 7(c)). In this case, the picking point indicated by the affordance map may not be suitable for realistic operation, especially when the surface of the object is not smooth enough.
B. Active Exploration
In order to solve the above problems, active exploration is introduced into the proposed system. Instead of relying on a single static affordance map, the robot actively explores and changes the environment until a good affordance map is obtained. The deep Q-Network (DQN) is employed to train an agent which suggests actions given the affordance map of the current scene. The network structure (Fig. 6) is based on the U-Net [24], which outputs pixel-wise actions. U-Net is a powerful and lightweight network structure recently proposed for image segmentation, consisting of repeated downsampling and upsampling. It demonstrates good performance in yielding pixel-wise semantic information. To minimize the size of the network for speed reasons, we trim this structure to a tiny one, with one downsampling and one upsampling stage, and resize the RGBD image to a quarter of its resolution.
1) Local patch: Since our goal is to change the scene according to the current affordance map I_aff, we don't need to consider the whole scene at every step, which may even lead to counterproductive results. Therefore, we propose a local-patch U-Net structure in the network, which can obtain a better scene in fewer steps and also minimizes the model size for faster computation.
Assume that, in the current state, p_M is the most promising picking point, i.e., the point with the highest confidence score in the affordance map (p_M = argmax{I_aff}). We crop the input RGBD image around this pixel with a size of 128 × 128 and downsample it to a size of 32 × 32 (32 = 128/4) before feeding it into our U-Net based network, which greatly reduces the model size.
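A minimal NumPy sketch of this cropping step is shown below; strided subsampling stands in here for whatever resizing operator is actually used, and the border clipping is our own assumption.

```python
import numpy as np

def local_patch(rgbd, affordance, patch=128, out=32):
    """Crop the RGBD frame around p_M = argmax(I_aff), then downsample 4x."""
    h, w = affordance.shape
    py, px = np.unravel_index(np.argmax(affordance), affordance.shape)
    y0 = int(np.clip(py - patch // 2, 0, h - patch))   # keep crop inside frame
    x0 = int(np.clip(px - patch // 2, 0, w - patch))
    crop = rgbd[y0:y0 + patch, x0:x0 + patch]          # (128, 128, 4)
    stride = patch // out
    return crop[::stride, ::stride], (py, px)          # (32, 32, 4) patch
```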
2) Paralleled tiny U-Net: The U-Net [24] is able to output pixel-wise actions given image inputs. For each action, it outputs a confidence score at each location. We define 8 specific actions in this work: the robot can push the object from 8 directions with a fixed distance. We use O_i = i × 45° (i = 0, …, 7) to denote the directions, and the push distance is half the size of the local patch. So the whole network contains 8 U-Net modules with the same structure.
The U-Net is trimmed to a tiny one, which down-samples and up-samples only once. This is sufficient for our input and suitable for our scenarios with subpixel-wise operation locations. In this way, the action space of the DQN is reduced for faster learning.
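A PyTorch-style sketch of the eight parallel heads and the greedy action selection is given below. It is a simplified stand-in: the tiny U-Net here omits skip connections, and all layer widths are our assumptions.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """One down/up-sampling stage; per-pixel Q values for a single
    push direction on the 32x32 local patch."""
    def __init__(self, in_ch=4):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.up = nn.Sequential(
            nn.ConvTranspose2d(16, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, x):
        return self.up(self.down(x))                   # (B, 1, 32, 32)

class PushQNetwork(nn.Module):
    """Eight parallel tiny U-Nets, one per direction O_i = i * 45 deg."""
    def __init__(self, n_dirs=8):
        super().__init__()
        self.heads = nn.ModuleList(TinyUNet() for _ in range(n_dirs))

    def forward(self, patch):                          # patch: (B, 4, 32, 32)
        return torch.cat([h(patch) for h in self.heads], dim=1)  # (B, 8, 32, 32)

def greedy_push(q):                                    # q: (8, 32, 32)
    """Pick the (direction, pixel) pair with the highest Q value."""
    flat = int(q.flatten().argmax())
    d, rest = divmod(flat, q.shape[1] * q.shape[2])
    return d, *divmod(rest, q.shape[2])                # direction, row, col
```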
C. The Metric of the Affordance Map
Considering the above-mentioned failure cases, a novel metric Φ is designed to calculate the reward for the current affordance map, which is used to evaluate each action obtained from the DQN. The next action is then generated accordingly for the robot to change the current scene. This process is iterated until a good affordance map is obtained.
1) Flatness punishment based on Gaussian distribution: By analyzing the failure cases of the affordance map, it is found that the maximum affordance value appears around areas where there is accumulation or tilting, and the distribution of the affordance values around such an area tends to be directional (Fig. 8). Thus, we extract the connected area near this maximum affordance value and binarize the affordance map. A Gaussian fit is applied to this area and an estimated affordance value ŝ is obtained. With the real affordance value s of this specific area and the maximum affordance value v_M, we calculate the standard deviation σ of the relative deviation e between ŝ and s:
$$e_{ij} = \frac{\hat{s}_{ij} - s_{ij}}{v_M}, \qquad \sigma = \sqrt{\frac{1}{m \cdot n}\sum_{i=0}^{m}\sum_{j=0}^{n} e_{ij}^2} \qquad (1)$$
When σ is small, it indicates that the relative deviation between ŝ and s fluctuates within a very small range, so the distribution of the affordance values in this connected area is well-distributed. To evaluate the affordance map by σ, a flatness metric Φ_f is introduced as Φ_f = e^{−σ}.
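A sketch of this flatness term is shown below; the isotropic moment-matched Gaussian used for ŝ is our own stand-in for the paper's fitting procedure, and the square root follows the "standard deviation" wording of Eq. (1).

```python
import numpy as np

def flatness_metric(region, v_max):
    """Phi_f = exp(-sigma) over the connected area around the maximum.

    `region` holds the real affordance values s; `s_hat` is a Gaussian
    estimate fit to the same area. Directional (non-Gaussian) patterns
    give a large sigma and hence a small Phi_f.
    """
    m, n = region.shape
    yy, xx = np.mgrid[0:m, 0:n]
    wgt = region / region.sum()
    cy, cx = (wgt * yy).sum(), (wgt * xx).sum()        # intensity centroid
    var = (wgt * ((yy - cy) ** 2 + (xx - cx) ** 2)).sum() / 2.0
    s_hat = region.max() * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * var))
    e = (s_hat - region) / v_max                       # relative deviation, Eq. (1)
    sigma = np.sqrt(np.mean(e ** 2))
    return float(np.exp(-sigma))                       # Phi_f
```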
2) Interpeak intervals: In situations where there are too many objects in the scene, the affordance map often has several peak values; Fig. 9 shows this difference. We calculate the bounding box of the connected area which is closest to the maximum affordance value and choose the center of this area as one peak location. Besides, we find other peaks by detecting points which have higher values than the other points in other small areas. The coordinate of the maximum affordance value point is denoted as p_M. The length l and width w of the bounding box are used to denote the size of the object that will be lifted. Taking k as the number of all the other peaks in the affordance map and P_m = {p_0, …, p_{k−1}} as the set of the other peaks' coordinates, an interval metric Φ_d is defined:

$$a = \frac{w + l}{2} \qquad (2)$$

$$\Phi_d = \min\left\{\frac{\|p_M - p_i\|_2}{a},\, 1\right\} \quad (i = 0, \ldots, k-1) \qquad (3)$$
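Equations (2)-(3) translate directly into a few lines; the handling of the single-peak case below is our assumption (no competing peak is treated as ideal spacing).

```python
import numpy as np

def interval_metric(p_max, other_peaks, box_l, box_w):
    """Phi_d: the nearest competing peak should be at least one object
    size a = (w + l) / 2 away from the lifting point p_M."""
    if len(other_peaks) == 0:
        return 1.0                                     # no competing peaks
    a = (box_w + box_l) / 2.0                          # Eq. (2)
    d_min = min(np.linalg.norm(np.subtract(p_max, p)) for p in other_peaks)
    return min(d_min / a, 1.0)                         # Eq. (3)
```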
3) Maximum affordance: We also take the maximum affordance value v_M itself into consideration, which is directly derived from the ConvNet.
4) Reward design: The final metric Φ is defined as a weighted sum of the above three metrics:
$$\Phi = \lambda_f \Phi_f + \lambda_d \Phi_d + \lambda_v v_M \qquad (4)$$
where λ_f + λ_d + λ_v = 1, so Φ ∈ [0, 1].
If the metric of the current frame Φ_i > 0.85, we assume the scene is good enough for grasping and the robot stops changing the environment. Please note that Ref. [23] does not provide the details of the reward design. Based on the designed metric, the goal of the agent is to maximize its value. Therefore, if Φ_i is larger than the metric of the last frame Φ_{i−1} by δ, the reward is 1; otherwise, it is −1. To reduce noise interference, we set δ = 0.01.
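Putting the three terms together, the scene metric and the step reward reduce to the small sketch below; the defaults mirror the weights of Eq. (5) and the δ noise margin.

```python
def overall_metric(phi_f, phi_d, v_max, lf=0.75, ld=0.15, lv=0.10):
    """Weighted sum of Eq. (4); the weights sum to 1, so Phi is in [0, 1]."""
    return lf * phi_f + ld * phi_d + lv * v_max

def step_reward(phi_now, phi_prev, delta=0.01):
    """+1 if the push improved the metric by more than the noise margin
    delta; -1 otherwise."""
    return 1.0 if phi_now - phi_prev > delta else -1.0
```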
VI. EXPERIMENT
We test the proposed grasping system by executing a series of experiments in both simulation and real-world environments. We choose V-REP [25] as the simulation environment. The simulation scene (Fig. 10) is the same as that in [23], where a UR5 manipulator as well as a robotic hand are introduced to implement the process of active exploration, and a Kinect camera is utilized to obtain the visual data. To simulate a cluttered environment, 11 blocks are added to the scene, and we manually design several challenging scenes for evaluation. During the training and testing phases, if the Φ value reaches the threshold, we directly remove the corresponding object.
A. Experiments on DQN Performance in the Simulation Environment
We compare the proposed model and a random operation model. In the random operation model, instead of pushing the object based on the output of the DQN, a random pushing action is applied. Within 30 continuous operations, if 5 objects are removed, the test finishes and is called a success; otherwise, it is called a failure. Three metrics are used to evaluate performance: 1) the average number of operations per test, 2) the average increment of the metric Φ per push, and 3) the test success rate, defined as the number of successful tests divided by the number of tests.
In our experiments, we empirically choose the weight parameters as λ_f = 0.75, λ_d = 0.15, and λ_v = 0.1, so the metric becomes:
$$\Phi = 0.75\,\Phi_f + 0.15\,\Phi_d + 0.1\, v_M \qquad (5)$$
1) Evaluation result: The evaluation results in the simulation environment are shown in Table I and Fig. 11. It can be seen that, compared with the random operation model, our model trained by the DQN improves the metric Φ of the affordance map more quickly, leading to a higher grasp success rate and a more efficient grasping process.
2) Training details: We train our U-Net based DQN model with the RMSProp optimizer, using learning rates decaying from 10^{−3} to 2.5 × 10^{−4} and setting the momentum to 0.9. Our future discount γ is set to 0.6 to place more attention on the current epoch. The exploration rate ε is initialized to 1 and then reduced to 0.2, giving allowance for more attempts at new pushing strategies.
2) Evaluation metric: The evaluation metrics of the real-world experiments are different from those in the simulation experiments because the real experiments can be more intuitive. In addition, we find that when the object with the maximum affordance value is unable to be lifted, the robot will repeat this failed operation, as the environment and affordance map are not changed. Therefore, we define a test as a failure if the lift fails at the same object for 3 consecutive times, while a test is defined as a success if the 10 objects within a scene are lifted successfully. Based on that, we define 3 metrics: 1) the average number of objects grasped successfully per test, 2) the suction success rate, defined as the number of objects grasped successfully divided by the number of lift operations, and 3) the test success rate, defined as the number of successful tests divided by the number of tests.
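Returning to the training details above, a minimal PyTorch sketch of the stated hyperparameters is given below; the exponential decay rate used to move the learning rate from 1e-3 toward 2.5e-4 is our own assumption, as the paper does not specify the schedule shape.

```python
import torch

GAMMA = 0.6                      # future discount: emphasize the current step
EPS_START, EPS_END = 1.0, 0.2    # epsilon-greedy exploration schedule

def make_optimizer(model):
    """RMSProp with momentum 0.9 and a decaying learning rate."""
    opt = torch.optim.RMSprop(model.parameters(), lr=1e-3, momentum=0.9)
    sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.99)
    return opt, sched
```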
3) Experiment result: We test our robotic hand in 20 different scenes, both with the static affordance map and with the affordance map optimized by active exploration. The result of every operation is recorded. The experimental results show that, after active exploration optimization, the system performs better in both suction success rate and test success rate. Compared with lifting using only the static affordance map, active exploration reduces the possibility of repeating failed lifts, making the system more robust to the scene. The comparison results are shown in Fig. 12. When the system relies only on the static affordance map for grasping, it is likely to fail in cluttered scenes; Fig. 13 shows the results of the grasping experiment using the static affordance map. By using the affordance map optimized by active exploration, it is easier for the system to find reliable grasping points. In Fig. 14, the robotic hand actively explores the environment to find proper grasping points. However, the affordance map optimized by active exploration still has two main problems. The first is that the proposed metric Φ cannot distinguish all bad scenes that are not suitable for grasping (Fig. 15(a)): when an object does not have enough support for lifting, our system cannot detect this, and such objects are difficult to grasp. The second is that our DQN sometimes outputs useless pushing actions (Fig. 15(b)) in areas with no objects.
4) Result analysis: In some simple scenes, the static affordance map can produce good results by quickly analyzing the whole scene to get a suitable location for grasping. However, in cluttered scenes, it is likely to output incorrect results. In particular, when several objects are very close, the best grasping point in the affordance map will be at the boundary of objects, leading to a failed grasp. What's more, it may always output the same wrong point, making the operation efficiency low. In the proposed model, the affordance map is optimized by active exploration, and appropriate operations are generated to push the objects away from each other. The scene is thus simplified to make the objects easy to grasp.
However, the proposed grasping strategy is still not perfect: there exist some useless pushes, including pushing in places without any objects, and the system cannot recognize all kinds of objects that are not suitable for grasping. In addition, some objects with uneven surfaces are still difficult to lift with this strategy. As objects are removed continuously, the scene becomes simpler and simpler. Accordingly, the strategy gradually inclines toward adopting the static affordance map directly, or with less pushing, so as to ensure the efficiency of grasping.
VII. CONCLUSION
In this work, a novel robotic grasping system is proposed, which includes a composite robotic hand combining a suction cup and a two-finger gripper. At the same time, a DQN-based active exploration approach is applied to the system to intelligently grasp objects in cluttered environments. The pushing strategy is used for the robot to actively explore and change the environment until a good affordance map is obtained. It has been demonstrated that it is more efficient to use the suction cup together with the two-finger gripper for grasping, and the active exploration strategy shows superior performance compared to methods using only a static affordance map.
Fig. 2. Structure of the Composite Robotic Hand. The composite robotic hand is composed of two parallel fingers and a suction cup system.
Fig. 3. System Pipeline. The system first obtains the RGB image and depth image of the scene, and the images are fed into the affordance ConvNet to calculate the affordance map. The system then evaluates the quality of the affordance map by a metric. If the affordance map is not good enough, the current affordance map is fed into the DQN, which suggests an appropriate operation for the robotic hand to explore the environment. Otherwise, the composite robotic hand implements the suction operation and then grasps the object.
Fig. 4. Object Grasping Process. The suction cup extends first to lift the object, and then the gripper closes to grasp the object.
Fig. 5. Prototype and Grasp Experiment. The composite robotic hand grasping several objects of different shapes.
Fig. 7. Affordance Map Failure Cases. Typical failure situations: gathering, covering, and tilting. In these situations, the affordance map outputs locations that are not suitable for lifting.
Fig. 8. The Distribution of Affordance Values. When the object with the maximum affordance value is tilted or piled up with other objects, the actual distribution of affordance values is directional and differs from the Gaussian distribution. When it is not tilted or is separate, the actual distribution is well-distributed and similar to the Gaussian distribution.
Fig. 9. Peaks of the Affordance Map. When an accumulation of objects occurs near the lift point, there will often be several peaks in the affordance map.
Fig. 10. Simulation Environment.
Fig. 11. Metric Over Operations. The brown line stands for random operations; the blue line stands for the DQN-based operation; the black line stands for the metric threshold for object removal. Using the DQN-based method, the metric of the affordance map can reach the threshold in fewer steps than with random operations, which shows that the DQN has learned intelligent strategies to explore the environment.
Fig. 12. Comparison results.
Fig. 13. Typical Failure Scenes and Successful Scenes. The failure scenes happen because the affordance map regards several close objects as one single object.
Fig. 14. Exploration Strategy. Our system changes scenes that are not suitable for grasping by pushing in a certain direction with the fingers.
Fig. 15. Remaining failure cases with the affordance map optimized by active exploration.
TABLE I: SIMULATION RESULT OF RANDOM OPERATION AND DQN

Method           | Operation times | Metric Φ increment | Test success rate
-----------------|-----------------|--------------------|------------------
Random operation | 23.6            | 0.0216             | 60.0%
Our model        | 20.4            | 0.0219             | 71.4%
B. Robotic Experiments
1) Experiment setup: We choose Microsoft's Kinect V2 camera as the image acquisition tool to get the RGB image and depth image of the scene. The composite robotic hand is mounted on the UR5 manipulator. We select 40 different objects to build different scenes for our robotic hand to grasp.
VIII. ACKNOWLEDGEMENTS
This work was supported in part by the National Natural Science Foundation of China under Grants 61703284 and U1613212.
H. Liu and F. Sun. Material identification using tactile perception: A semantics-regularized dictionary learning method. IEEE/ASME Transactions on Mechatronics, 23(3):1050-1058, 2017.
V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518:529-533, 2015.
M. A. Robertson and J. Paik. New soft robots really suck: vacuum-powered systems empower diverse capabilities. Science Robotics, 2(9), 2017.
J. Mahler, M. Matl, X. Liu, A. Li, D. Gealy, and K. Goldberg. Dex-Net 3.0: computing robust robot vacuum suction grasp targets in point clouds using a new analytic model and deep learning. arXiv e-prints, September 2017.
C. C. Kessens and J. P. Desai. Design, fabrication, and implementation of self-sealing suction cup arrays for grasping. In International Conference on Robotics and Automation (ICRA), pages 765-770. IEEE, 2010.
Y. Okuno, H. Shigemune, Y. Kuwajima, and S. Maeda. Stretchable suction cup with electroadhesion. Advanced Materials Technologies, 4(1):1800304, 2019.
F. W. Grasso and P. Setlur. Inspiration, simulation and design for smart robot manipulators from the sucker actuation mechanism of cephalopods. Bioinspiration & Biomimetics, 2(4):S170, 2007.
F. Grasso. Octopus sucker-arm coordination in grasping and manipulation. American Malacological Bulletin, 24(2):13-23, 2008.
A. Sadeghi, L. Beccai, and B. Mazzolai. Design and development of innovative adhesive suckers inspired by the tube feet of sea urchins. In International Conference on Biomedical Robotics and Biomechatronics (BioRob), pages 617-622. IEEE, 2012.
T. Tomokazu, S. Kikuchi, M. Suzuki, and S. Aoyagi. Vacuum gripper imitated octopus sucker-effect of liquid membrane for absorption. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 2929-2936. IEEE, 2015.
Y. Kuwajima, H. Shigemune, V. Cacucciolo, M. Cianchetti, C. Laschi, and S. Maeda. Active suction cup actuated by electrohydrodynamics phenomenon. In International Conference on Intelligent Robots and Systems (IROS), pages 470-475. IEEE, 2017.
G. Mantriota and A. Messina. Theoretical and experimental study of the performance of flat suction cups in the presence of tangential loads. Mechanism and Machine Theory, 46(5):607-617, 2011.
A. Zeng, S. Song, K. T. Yu, E. Donlon, F. R. Hogan, M. Bauza, D. Ma, O. Taylor, M. Liu, E. Romo, et al. Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching. In International Conference on Robotics and Automation (ICRA), pages 1-8. IEEE, 2018.
R. Bajcsy and M. Campos. Active and exploratory perception. CVGIP: Image Understanding, 56(1):31-40, 1992.
S. Chen, Y. Li, and N. Kwok. Active vision in robotic systems: A survey of recent developments. International Journal of Robotics Research, 30(11):1343-1377, 2011.
H. Liu, F. Sun, and X. Zhang. Robotic material perception using active multi-modal fusion. IEEE Transactions on Industrial Electronics, 2018.
A. Zeng, S. Song, S. Welker, J. Lee, A. Rodriguez, and T. Funkhouser. Learning synergies between pushing and grasping with self-supervised deep reinforcement learning. In International Conference on Intelligent Robots and Systems (IROS), pages 4238-4245. IEEE, 2018.
C. C. Kessens and J. P. Desai. Versatile passive grasping for manipulation. IEEE/ASME Transactions on Mechatronics, 21(3):1293-1302, June 2016.
S. Hasegawa, K. Wada, Y. Niitani, K. Okada, and M. Inaba. A three-fingered hand with a suction gripping system for picking various objects in cluttered narrow space. In International Conference on Intelligent Robots and Systems (IROS), pages 1164-1171, 2017.
H. S. Stuart, M. Bagheri, S. Wang, H. Barnard, A. L. Sheng, M. Jenkins, and M. R. Cutkosky. Suction helps in a pinch: Improving underwater manipulation with gentle suction flow. In International Conference on Intelligent Robots and Systems (IROS), pages 2279-2284. IEEE, 2015.
H. Liu, Y. Yuan, Y. Deng, X. Guo, Y. Wei, K. Lu, B. Fang, D. Guo, and F. Sun. Active affordance exploration for robot grasping. In International Conference on Intelligent Robotics and Applications (ICIRA), pages 1-8. IEEE, 2019.
O. Ronneberger, P. Fischer, and T. Brox. U-Net: Convolutional networks for biomedical image segmentation. In Medical Image Computing and Computer Assisted Intervention (MICCAI), pages 234-241, 2015.
E. Rohmer, S. P. N. Singh, and M. Freese. V-REP: a versatile and scalable robot simulation framework. In International Conference on Intelligent Robots and Systems (IROS), 2013.
| [] |
[
"Symbolic Distillation for Learned TCP Congestion Control",
"Symbolic Distillation for Learned TCP Congestion Control"
] | [
"S P Sharan [email protected] \nUniversity of Texas at Austin\n\n",
"Wenqing Zheng [email protected] \nUniversity of Texas at Austin\n\n",
"Kuo-Feng Hsu \nRice University\n\n",
"Jiarong Xing [email protected] \nRice University\n\n",
"Ang Chen [email protected] \nRice University\n\n",
"Zhangyang Wang \nUniversity of Texas at Austin\n\n"
] | [
"University of Texas at Austin\n",
"University of Texas at Austin\n",
"Rice University\n",
"Rice University\n",
"Rice University\n",
"University of Texas at Austin\n"
] | [] | Recent advances in TCP congestion control (CC) have achieved tremendous success with deep reinforcement learning (RL) approaches, which use feedforward neural networks (NN) to learn complex environment conditions and make better decisions. However, such "black-box" policies lack interpretability and reliability, and often, they need to operate outside the traditional TCP datapath due to the use of complex NNs. This paper proposes a novel two-stage solution to achieve the best of both worlds: first to train a deep RL agent, then distill its (over-)parameterized NN policy into white-box, light-weight rules in the form of symbolic expressions that are much easier to understand and to implement in constrained environments. At the core of our proposal is a novel symbolic branching algorithm that enables the rule to be aware of the context in terms of various network conditions, eventually converting the NN policy into a symbolic tree. The distilled symbolic rules preserve and often improve performance over state-of-the-art NN policies while being faster and simpler than a standard neural network. We validate the performance of our distilled symbolic rules on both simulation and emulation environments. Our code is available at https://github.com/VITA-Group/SymbolicPCC. | 10.48550/arxiv.2210.16987 | [
"https://export.arxiv.org/pdf/2210.16987v1.pdf"
] | 253,237,809 | 2210.16987 | cf0ae306a5b485fbf391b60d026f75e008115500 |
Symbolic Distillation for Learned TCP Congestion Control
S P Sharan [email protected]
University of Texas at Austin
Wenqing Zheng [email protected]
University of Texas at Austin
Kuo-Feng Hsu
Rice University
Jiarong Xing [email protected]
Rice University
Ang Chen [email protected]
Rice University
Zhangyang Wang
University of Texas at Austin
Symbolic Distillation for Learned TCP Congestion Control
Recent advances in TCP congestion control (CC) have achieved tremendous success with deep reinforcement learning (RL) approaches, which use feedforward neural networks (NN) to learn complex environment conditions and make better decisions. However, such "black-box" policies lack interpretability and reliability, and often, they need to operate outside the traditional TCP datapath due to the use of complex NNs. This paper proposes a novel two-stage solution to achieve the best of both worlds: first to train a deep RL agent, then distill its (over-)parameterized NN policy into white-box, light-weight rules in the form of symbolic expressions that are much easier to understand and to implement in constrained environments. At the core of our proposal is a novel symbolic branching algorithm that enables the rule to be aware of the context in terms of various network conditions, eventually converting the NN policy into a symbolic tree. The distilled symbolic rules preserve and often improve performance over state-of-the-art NN policies while being faster and simpler than a standard neural network. We validate the performance of our distilled symbolic rules on both simulation and emulation environments. Our code is available at https://github.com/VITA-Group/SymbolicPCC.
Introduction
Congestion control (CC) is fundamental to Transmission Control Protocol (TCP) communication. Congestion occurs when the data volume sent to a network reaches or exceeds its maximal capacity, in which case the network drops excess traffic, and the performance unavoidably declines. CC mitigates this problem by carefully adjusting the data transmission rate based on the inferred network capacities, aiming to send data as fast as possible without creating congestion. For instance, a classic and best-known strategy, Additive-Increase/Multiplicative-Decrease (AIMD) [1], gradually increases the sending rate when there is no congestion but exponentially reduces the rate when the network is congested. It ensures that TCP connections fairly share the network capacity in the converged state. Figure 1 shows an example where two TCP connections share a link between routers 1 and 2. When the shared link becomes a bottleneck, the CC algorithms running on sources A and B will alter the traffic rate based on the feedback to avoid congestion. Efficient CC algorithms have been the bedrock for network services such as DASH video streaming, VoIP (voice-over-IP), VR/AR games, and IoT (Internet of Things), which ride atop the TCP protocol.
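To make the AIMD intuition concrete, a minimal sketch of the per-RTT update rule is given below; the function name, window variable, and constants are our own illustrative choices, not taken from any specific TCP implementation:

def aimd_update(cwnd, loss_detected, alpha=1.0, beta=0.5):
    # One Additive-Increase/Multiplicative-Decrease step per RTT.
    # cwnd: congestion window in packets; alpha: additive increase;
    # beta: multiplicative decrease factor applied on congestion.
    if loss_detected:
        return max(1.0, cwnd * beta)  # back off multiplicatively
    return cwnd + alpha               # probe for more bandwidth

# Example: the window grows 10 -> 11 -> 12, then halves to 6 on a loss.
w = 10.0
for lossy in (False, False, True):
    w = aimd_update(w, lossy)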
However, it is nontrivial to design a high-performance CC algorithm. Over the years, tens of CC proposals have been made, all with different metrics and strategies to infer and address congestion, and new designs are still emerging even today [2,3]. There are two main challenges when designing a CC algorithm. (1) It needs to precisely infer whether the network is congested, and if so, how to adjust the sending rate, based on only partial or indirect observations. Note that CC runs on end hosts while congestion happens in the network, so CC algorithms cannot observe congestion directly. Instead, they can only rely on specific signals to infer the network status. For instance, TCP Cubic [4] uses packet loss as a congestion signal, and TCP Vegas [5] opts for delay increase. (2) It needs to make real-time decisions to adjust the traffic rates frequently (e.g., per round-trip time). Therefore, the algorithm must be very efficient: spending a long time to compute an action will significantly offset network performance. Over the long history of congestion control, most algorithms are implemented with manually designed heuristics, including New Reno [6], Vegas [5], Cubic [4], and BBR [2]. In TCP New Reno, for example, the sender doubles the number of transmitted packets every RTT before reaching a predefined threshold, after which it sends one more packet every RTT. If there is a timeout caused by packet loss, it halves the sending rate immediately. Unfortunately, manually crafted CCs have been shown to be sub-optimal and cannot support cases that escape the heuristics [7]. For example, packet-loss-based CCs like Cubic [4] cannot distinguish packet drops caused by congestion from non-congestion-related events [7]. Researchers have tried to construct CC algorithms with machine learning approaches to address these limitations [7][8][9][10][11]. The insight is that CC decisions depend on traffic patterns and network circumstances, which can be exploited by deep reinforcement learning (RL) to learn a policy for each scenario. The learned policy can perform more flexible and accurate rate adjustments by discovering a mapping from experience, which can adapt to different network conditions and reduce manual tuning efforts.
Most notably, Aurora [7], a deep RL framework for Performance-oriented Congestion Control (PCC), trains a well-established PPO [12] agent to suggest sending rates as actions by observing network statistics such as the latency ratio, send ratio, and sent latency inflation. It achieves competitive results on the emulation environments Mininet [13] and Pantheon [14], demonstrating the potential of deep learning approaches to outperform algorithmic, hand-crafted ones. Despite its immense success, Aurora, being a neural-network-based approach, is essentially a black box to users or, in other words, lacks explicit declarative knowledge [15]. It also requires exponentially more computation resources than traditional hand-crafted algorithms such as the widely deployed TCP CUBIC [4].
Our Contributions
In this work, we develop a new algorithmic framework for performance-oriented congestion control (PCC) agents, which can (1) run as fast as classical algorithmic methods; (2) adjust the rate as accurately as data-driven RL methods; and (3) be simpler than the original neural network [16], potentially improving practitioners' understanding of the model in an actionable manner. We solve this problem by seizing the opportunity created by advances in symbolic regression [17][18][19][20][21][22][23]. Symbolic regression bridges the gap between the infeasible direct search in the enormous space of symbolic algorithms and the differentiable training of over-parameterized and un-interpretable neural networks.
At a high level, one can first train an RL agent through gradient descent, then distill the learned policy into data-driven optimized yet simpler and easier-to-understand symbolic rules. This yields a set of symbolic rules that meets TCP CC's extreme efficiency and reliability demands. However, considering the enormous volume of the discrete symbolic space, it is challenging to learn effective symbolic rules from scratch directly. Therefore, in this paper, we adopt a two-stage approach: we first train a deep neural network policy with reinforcement learning, mimicking Aurora [7], and then distill the resultant policy into numerical symbolic rules using symbolic regression (SR).
The challenge is that directly applying symbolic regression out of the box does not yield a sufficiently versatile expression that captures diverse networking conditions. We hence propose a novel branching technique for training and then aggregating a number of SymbolicPCC agents, each of which caters to a subset of the possible network conditions. Specifically, we have multiple agents, each called a branch, and employ a light-weight "branch decider" to choose between the branches during deployment. To create the branching conditions, we partition the network condition space into adjacent non-overlapping contexts, then regress symbolic rules in each context. With this modification, we enhance the expressiveness of the resulting SR equation and overcome the bias of traditional SR algorithms to output rules mostly using numerical operators. Our concrete technical contributions are summarized as follows:
• We propose a symbolic distillation framework for TCP congestion control, which improves upon the state-of-the-art RL solutions. Our approach, SymbolicPCC, consists of two stages: first training an RL agent and then distilling its policy network into ultra-compact and simple rules in symbolic expression form.
• We propose a novel branching technique that advances existing symbolic regression techniques for training and aggregating multiple context-dependent symbolic policies, each of which specializes for its own subset of network conditions. A branch decider driven by light-weight classification algorithms determines which symbolic policy to use.
• Through our simulation and emulation experiments, we show that SymbolicPCC achieves highly competitive or even stronger performance compared to the teacher policy networks while running orders of magnitude faster. The presented model uses a tree structure that is light-weight and could be simpler for practitioners to reason about and improve manually while narrowing the performance gap.

Related Works

Figure 2: Overview of conventional baselines.

Some proposals use packet loss as a signal for network congestion, e.g., Cubic [4], Reno [24], and NewReno [6], while others rely on the variation of delay, e.g., Vegas [5], or combine packet loss and delay [25,26]. Different CC techniques specialized for datacenter networks have also been proposed [3,27].
Researchers have also investigated the use of machine learning to construct better heuristics. Indigo [10] and Remy [11] use offline learning to obtain high-performance CC algorithms. PCC [28] and PCC Vivace [9] opt for online learning to avoid any hardwired mappings between states and actions. Aurora [7] utilizes deep reinforcement learning to obtain a new CC algorithm running in userspace. Orca [8] improves upon Aurora and designs a userspace CC agent that infrequently configures kernel-space CC policies. Our proposal further improves upon this line of work.
At the same time, symbolic regression methods [17][18][19][20][21][22][23] have recently emerged for discovering the underlying math equations that govern some observed data. Algorithms with such a property are more favorable for real-world deployment as they output white-box rules. [18] uses a genetic-programming-based method, while [20] uses a recurrent neural network to perform a search in the symbolic space.
We thus propose to synergize such numerical and data-driven approaches using symbolic regression (SR) in the congestion control domain. We use SR in a post-hoc manner: first training an RL algorithm, then distilling it into symbolic rules. Earlier methods that follow a similar procedure do exist; e.g., [29] distills the learned policy as a soft decision tree. They work on visual RL, where the image observations are coarsely quantized into 10 × 10 cells and the soft decision tree policy is learned over the 100-dimensional space. [30] also aims to learn abstract rules using a common-sense-based approach by modifying Q-learning algorithms. Nevertheless, they fail to generalize beyond the specific simple grid worlds they were trained in. [31] learns from Boolean feature sets, [32] directly approximates value functions based on a state-transition model, and [33] optimizes risk-seeking policy gradients. Other works on abstracting pure symbolic rules from data include attention-based methods [34], visual summary [35], reward decomposition [36], causal models [37], Markov chains [38], and case-based expert-behaviour retrieval [21,22,39].
Methodology
Figure 3: The proposed SymbolicPCC training and evaluation technique: A baseline RL agent is first trained, then evaluated numerous times with the roll-outs being saved. Directly distilling from this data provides a baseline symbolic policy. A light-weight clustering algorithm extracts from the roll-out dataset non-overlapping subsets of network conditions (i.e., branching conditions) that achieve similar return. Separate RL agents are then trained on each of these network contexts and distilled into their respective symbolic rules. During evaluation, the labels from the clustering algorithm are re-purposed to classify which branch is to be taken given the observation statistics. The chosen symbolic branch is then queried for the action.

Inspired by the idea of "teacher-student knowledge distillation" [40][41][42], our symbolic distillation technique is two-staged: first train regular RL agents (as the teacher), then distill the learned policy networks into their white-box symbolic counterparts (as the student). In Section 3.1 we follow [7]'s approach in Aurora and train teacher agents on the PCC-RL gym environment. We also briefly discuss the approach of applying symbolic regression to create a light-weight, numerically driven expression that approximates a given teacher's behavior. In Section 3.2 we look at the specifics of symbol spaces and attach internal attributes to aid long-term planning. Finally, in Section 3.3 we discuss our novel branching algorithm as a method for training, and then ensembling, multiple context-dependent symbolic agents during deployment.
Preliminaries: The PCC-RL Environment and the Symbolic Distillation Workflow
PCC-RL [7] is an open-source RL testbed for the simulation of congestion control agents based on the popular OpenAI Gym [43] framework. We adopt it as our main playground. It formulates congestion control as a sequential decision-making problem. Time is first divided into multiple periods called MIs (monitor intervals), following [28]. At the onset of each MI, the environment provides the agent with a history of statistic vector observations over the network, and the agent responds with adjusted sending rates for the following MI. The sending rate remains fixed during a single MI.
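A minimal sketch of this MI-level interaction loop is given below. The environment id and the registering module name are assumptions based on the public PCC-RL repository, not guaranteed by this paper:

import gym
import network_sim  # assumed module that registers the PCC-RL environment

env = gym.make("PccNs-v0")  # assumed environment id from the PCC-RL repo
obs = env.reset()
done = False
while not done:
    # One action per monitor interval (MI): a sending-rate adjustment.
    # A random action stands in for a real policy in this sketch.
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)  # obs: history of statistic vectors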
The network statistics provided as observations to the congestion control agent are (1) the latency inflation, (2) the latency ratio, and (3) the sending ratio. The agent is guided by reward signals based on its ability to react appropriately when detecting changes and trends in the vector statistics of the PCC-RL environment. It is provided with a positive return for higher values of throughput (packets/second) while being penalized for higher values of latency (seconds) and loss (ratio of sent vs. acknowledged packets).
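Concretely, the reward can be viewed as a weighted linear combination of these signals. The sketch below uses illustrative weights of our own choosing; the exact PCC-RL coefficients may differ:

def pcc_reward(throughput, latency, loss,
               w_tput=10.0, w_lat=1000.0, w_loss=2000.0):
    # throughput in packets/second, latency in seconds,
    # loss as the ratio of sent vs. acknowledged packets.
    # The weights are illustrative placeholders, not the exact PCC-RL constants.
    return w_tput * throughput - w_lat * latency - w_loss * loss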
Training of Teacher Agents: We first proceed to train RL agents using the PPO algorithm [12], similar to Aurora [7], in the PCC-RL gym environment until convergence. Although the PPO agents statistically perform very well [7], they are entirely black boxes, which makes it difficult to explain their underlying causal rules directly. Also, their over-parameterized neural network forms incur high latency. Hence, we choose to indirectly learn the symbolic representations using a student-teacher-type knowledge distillation approach based on the teacher's (in this case, the RL agent's) behaviors.
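A minimal training sketch follows, using stable-baselines3's PPO as a stand-in for the exact implementation; the timestep budget and other hyperparameters are placeholders:

import gym
import network_sim  # assumed PCC-RL registration module, as above
from stable_baselines3 import PPO

env = gym.make("PccNs-v0")
# A small MLP policy, mirroring Aurora's compact feedforward architecture.
teacher = PPO("MlpPolicy", env, verbose=1)
teacher.learn(total_timesteps=1_000_000)  # train until (near) convergence
teacher.save("pcc_teacher")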
Distillation of Student Agents: Using the teacher agents, we collect complete trajectories, formally known as roll-outs in RL, in inference mode (deterministic actions are enforced). The observations and their corresponding teacher actions are MI-aligned and stored as an offline dataset. Note that this step is only performed once at the start of the distillation procedure and is reused in each of its iterative steps. A search space of operators and operands is also initialized (details are discussed shortly in Section 3.2). Guesses for possible symbolic relations are taken, composed of random operators and operands from their respective spaces. The stored observation trajectories are then re-evaluated under each candidate rule to output corresponding actions. The cross-entropy loss with respect to the teacher model's actions from the same dataset is used as feedback. This feedback drives the iterative mutation and pruning following a genetic programming technique [44,45]. The best candidate policies are collected and forwarded to the next stage. If the tree fails to converge or does not reach a specific threshold of acceptance, the procedure is restarted from scratch. Our symbolic distillation method is discussed in further detail in Appendix A.
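A sketch of the one-time roll-out collection step (all names are ours; teacher.predict follows the stable-baselines3 API assumed above):

import numpy as np

def collect_rollouts(teacher, env, n_episodes=100):
    # Build the MI-aligned (observation, teacher action) offline dataset once.
    X, Y = [], []
    for _ in range(n_episodes):
        obs, done = env.reset(), False
        while not done:
            action, _ = teacher.predict(obs, deterministic=True)
            X.append(np.asarray(obs, dtype=float).ravel())
            Y.append(np.asarray(action, dtype=float).ravel())
            obs, _, done, _ = env.step(action)
    return np.stack(X), np.stack(Y)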
A Symbolic Framework for Congestion Control
Defining the Symbol Space for CC: Unlike visual RL [46], the PCC observation space is vector-based; hence, we directly plug these observations into the search space of our numerically driven symbolic algorithm. We henceforth call these observations vector statistic symbols. The distillation procedure described earlier learns to chain these vector statistic symbols using a pre-defined operator space. Specifically, we employ three types of numerical operators. The first type is arithmetic-based, including +, −, ×, /, sin, cos, tan, cot, (·)^2, (·)^3, √·, exp, log, and |·|. The second type is Boolean logic, such as is(x < y), is(x ≤ y), is(x == y), a | b, a & b, and ¬a.
We also utilize a third type of high-level operators, namely slope_of(observation history), which provides the average slope of an array of observations, and get_value_at(observation history, index). The slope operator is especially useful when trying to detect trends of a specific statistic vector over the provided monitor intervals. For instance, identifying latency increase or decrease trends serves as one of the crucial indicators for adjusting sending rates. Meanwhile, the index operator is observed from our experiments to be implicitly used for immediate responses, i.e., based on the latest observations. We note that the underlying decision procedure of the policy network could be efficiently represented in a high-fidelity tree-shaped form similar to Figure 4. This decision tree contains condition nodes and action nodes. Each condition node forks into two leaf nodes based on the Boolean result of its symbolic composition.

Attributes for Long-Term CC Planning: In addition to having these operators and operands as part of the symbolic search space, we also attach a few attributes/flags to the agent, which are shared across consecutive MI processing steps and help with long-term planning. One behavior in our SymbolicPCC agents is to use such an attribute to remember whether the agent is in the process of recovering from a network overload or whether the network is stable. Indeed, a more straightforward option for such "multi-MI tracking" would be to just provide a longer history of the vector statistics to the searching algorithms, but this quickly becomes infeasible due to the exponential increase of possible symbolic expressions with respect to the length of the vector statistic symbols.
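The two high-level operators are straightforward to realize; the sketch below is our rendering (a degree-1 least-squares fit is one reasonable reading of "average slope"):

import numpy as np

def slope_of(history):
    # Average slope of a statistic over the observed MIs,
    # via a degree-1 least-squares fit against the MI index.
    history = np.asarray(history, dtype=float)
    return np.polyfit(np.arange(len(history)), history, deg=1)[0]

def get_value_at(history, index):
    # Pick a single observation; index = -1 gives the latest MI,
    # which distilled policies tend to use for immediate responses.
    return np.asarray(history, dtype=float)[index]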
Novel Branching Algorithm: Switching between Context-Dependent Policies
Unlike traditional visual RL environments, congestion control is a more demanding task due to the variety of possible network conditions. The behavior of the congestion control agent will improve if its response is conditioned on the specific network context. However, as this context cannot be known by the congestion control agent, traditional algorithms such as TCP Cubic [4] are forced to react in a slow and passive manner to support both slow-paced and fast-paced conditions. Such a notion of context splitting and branching for training specialized agents can be compared to earlier works in multi-task RL. Specifically, Divide-and-Conquer [47] learns to separate complex tasks into a set of local tasks, each of which can be used to learn a separate policy. Distral [48] proposes optimizing a joint objective function by training a "distilled" policy that captures common behavior across tasks.
Hence, we propose to create n non-overlapping contexts for different network conditions, namely bandwidth, latency, queue size, and loss rate. We then train n individual RL agents in the PCC Gym by exposing them only to their corresponding network conditions. We thus have a diverse set of teachers, each highly performant in its individual context. Following the same approach as described in Section 3.1, each of the agents is distilled; each distilled policy is called a branch. Finally, during deployment, the branch with the closest matching boundary conditions/contexts for the inference network conditions is selected, and the corresponding symbolic policy is used. Partitioning the Networking Contexts: A crucial point in the proposed branching procedure is to identify the most suitable branching context boundary values. In other words, the best boundary conditions for grouping need to be statistically valid, and plain hand-crafted boundaries are not optimal. This is because we do not have ground truths for any of the network conditions [49], let alone all four of them together. Therefore, we first train an RL agent on the default (maximal) bounds of network conditions (hereinafter called the "baseline" agent). We then evaluate the baseline agent on multiple regularly spaced intervals of bandwidth, latency, queue size, and loss rate and store the corresponding returns as well as observation trajectories. To create the optimal groupings, we simply use KMeans [50] to cluster the data based on their return. Due to the inherent proportional relation of difficulty (or, in this case, the ballpark of return) with respect to a network context, clear boundaries for the branches can be obtained by inspecting the extremes of each network condition within a specific cluster. Our experimentally obtained branching conditions are further discussed in Section 4.2 and Table 1.
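A scikit-learn sketch of this partitioning step (file names and array shapes are our assumptions):

import numpy as np
from sklearn.cluster import KMeans

returns = np.load("baseline_returns.npy").reshape(-1, 1)  # per-episode return
conditions = np.load("baseline_conditions.npy")  # (bandwidth, latency, queue, loss) per episode

labels = KMeans(n_clusters=4, random_state=0).fit_predict(returns)

# Branch contexts: the extremes of each network condition within a cluster.
for k in range(4):
    ctx = conditions[labels == k]
    print(f"branch {k}: min = {ctx.min(axis=0)}, max = {ctx.max(axis=0)}")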
Branch Decider: Since the network context is not known during deployment, one needs a branch decider module. The branch decider reuses the cluster labels from the training stage for a K-Nearest-Neighbors [51] classification. This light-weight, distance-based classifier maps the inference-time observation to one of the training groupings, thereby executing the corresponding branch's symbolic policy. Figure 3 illustrates our complete training and deployment techniques.
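A sketch of the branch decider (observations and labels are assumed to be the saved trajectories and cluster labels from the partitioning step above):

from sklearn.neighbors import KNeighborsClassifier

decider = KNeighborsClassifier(n_neighbors=5).fit(observations, labels)

def select_branch(obs_stats, branches):
    # Classify the live observation into a context and return its policy.
    branch_id = int(decider.predict(obs_stats.reshape(1, -1))[0])
    return branches[branch_id]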
Lastly, to support branching effectively, we have yet another long-term tracking attribute that stores a history of the branches taken, in order to smooth over any erratic bouncing between branches in non-adjacent contexts.
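One simple way to realize this attribute is a majority vote over the last few branch decisions; the window length below is an illustrative guess, not a value taken from the paper:

from collections import Counter, deque

class BranchHistory:
    # Smooth branch switching by majority vote over recent MIs.
    def __init__(self, window=5):
        self.recent = deque(maxlen=window)

    def smooth(self, branch_id):
        self.recent.append(branch_id)
        return Counter(self.recent).most_common(1)[0][0]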
Experimental Settings and Results
Next, we discuss the abstract rules uncovered by SR (Section 4.1) and validate the branching contexts (Section 4.2). In Sections 4.3, 4.4, and 4.5 we provide emulation results on Mininet [13], a widely used network emulator that can emulate a variety of networking conditions. Lastly, in Section 4.6, we compare the compute requirements and efficiencies of SymbolicPCC with conventional algorithms and RL-driven methods, as well as their pruned and quantized variants. More hyperparameter details are in Appendix B.
Interpreting the Symbolic Policies
The baseline symbolic policy distilled from the baseline RL agent is represented in its decision tree form in Figure 4. One typical CC process presented by the tree is increasing the sending rate until the network starts to "choke" and then balancing around that rate. This process is guided by a series of conditions on the inflation and ratio signals, marked with circled numbers in Figure 4 and explained in detail in the following.
Condition node 1 checks whether the vector statistic symbols are all stable, namely, whether the latency inflation is close to zero while the latency ratio and send ratio are close to one. The sending rate starts to grow if the condition holds. Condition node 2 identifies whether the network is in an over-utilized state, with an increasing slope_of(latency inflation) as the key indicator. If the condition is true, the acceleration of the sending rate is reduced appropriately. On the other hand, condition node 3 is activated when the initial sending rate is too low or has been reduced extensively due to 2. Node 4 is evaluated when major network congestion starts to occur due to the increased sending rates from the earlier condition nodes. It checks whether both the latency inflation and latency ratio are in an increasing state. Its child nodes start reducing the sending rates and also flip the internal state attribute to 1. The latter is used to track whether the agent is recovering from network congestion. On the "False" side of 6 (i.e., internal state = 1), 7 and 8 realize two stages of recovery, where the latency inflation ratio first plateaus and then starts decreasing. Node 11 indicates that stable conditions have been achieved again and the agent is at an optimal sending rate. The internal state is flipped back to 0 after this recovery.
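To give the flavor of the distilled tree in code, a heavily condensed Python rendition of nodes 1, 2, and 4 follows, reusing the operator helpers sketched earlier. The thresholds and rate multipliers are illustrative placeholders, not the exact distilled constants:

def symbolic_policy_step(lat_inflation, lat_ratio, send_ratio, state):
    # Returns (multiplicative sending-rate adjustment, updated state dict).
    stable = (abs(get_value_at(lat_inflation, -1)) < 0.01
              and abs(get_value_at(lat_ratio, -1) - 1.0) < 0.01
              and abs(get_value_at(send_ratio, -1) - 1.0) < 0.01)
    if stable:                            # node 1: grow the sending rate
        return 1.1, state
    if slope_of(lat_inflation) > 0.0:     # node 2: network over-utilized
        if slope_of(lat_ratio) > 0.0:     # node 4: congestion onset
            return 0.8, dict(state, internal=1)
        return 1.02, state                # ease the acceleration
    return 1.0, state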
Inspecting the Branching Conditions
As discussed in Section 3.3, a light-weight clustering algorithm divides the network conditions into multiple non-overlapping subsets. Table 1 summarizes the obtained boundary values. The baseline agent is trained on all possible bandwidth, latency, queue size, and loss rate values, as depicted in the first row. During the evaluation, bandwidth, latency, and loss rate are tested on linearly spaced values with step sizes of 50, 0.1, and 0.01, respectively, while queue sizes are exponentially spaced by powers of e^2. The returns of the saved roll-outs are clustered using K-Means clustering, and the optimal cluster number is found to be 4 using the popular elbow curve [52] and silhouette analysis [53] methods. By observing the maximum and minimum of each network condition individually in the 4 clusters, the respective boundary values are obtained. A clear relation discovered is that higher bandwidths and lower latencies are directly related to higher baseline returns.
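A sketch of the cluster-number selection via silhouette analysis (reusing the returns array assumed in the clustering sketch above):

from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

best_k, best_score = None, -1.0
for k in range(2, 9):
    labels = KMeans(n_clusters=k, random_state=0).fit_predict(returns)
    score = silhouette_score(returns, labels)
    if score > best_score:
        best_k, best_score = k, score
print(best_k)  # 4 in our runs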
Remark 1: Exceptions for non-overlapping contexts. It is to be noted that no such trend was found between the queue size and the return; hence, all four resultant branches were given the same queue size range. A similar exception was made for the loss rates of Branches 2 and 3.
Remark 2: Interpreting the symbolic policy branches. All four symbolic trees distilled from the specialized RL agents possess high structural similarity and share governing rules similar to those of the baseline agent in Section 4.1. They differ mainly in the numerical thresholds and the magnitudes of action nodes, i.e., in their "reaction speeds" and "reaction strengths", respectively.
Emulation Performance on Lossy Network Conditions
The ability to differentiate between congestion-induced and random losses is essential to any PCC agent. Figure 5a shows a 25-second trace of throughput on a link where 1% of packets are randomly dropped [54]. As the link's bandwidth is set to 30 Mbps, an ideal congestion control would aim to utilize it fully, as depicted by the gray dotted line. Baseline SymbolicPCC shows near-ideal performance, with its branched version pushing the boundary further. In contrast, conventional algorithms, especially TCP CUBIC [4], repeatedly reduce their sending rates in response to the random losses. Quantitative measures of the mean square error with respect to the ideal line are provided in Table 2 as "Lossy ∆² opt.". This result proves that SymbolicPCC can effectively differentiate between packet loss caused by randomness and real network congestion.

Emulation Performance under Network Dynamics

Unstable network conditions are common in the real world, and this test benchmarks the agent's ability to quickly respond to network dynamics. Figure 5b shows our symbolic agent's ability to handle such conditions. The benefit of our novel branching algorithm, which switches between agents specializing in their own network contexts, is clearly visible from the faster response speeds. In this case, the link was configured with its bandwidth alternating between 20 Mbps and 40 Mbps every 5 seconds with no loss. Quantitative results in Table 2 show the mean square error with respect to the ideal CC as "Unstable ∆² opt.".
Link Utilization and Network Sensitivities
Link utilization as measured from the server side is defined as the ratio of average throughput over the emulation period to the available bandwidth. A single link is first configured with defaults of 30 Mbps capacity, 30 ms of latency, a 1000-packet queue, and 0% random loss.
To measure the sensitivity with respect to a specific condition, it is varied independently while keeping the rest of the conditions constant. An ideal CC preserves high link utilization over the complete range of measurements. From Figure 6, it is observed that our branched SymbolicPCC provides near-capacity link utilization in most tests and shows improvement over all of the other algorithms.
Efficiency and Speed Comparisons
Since TCP congestion control lies on the fast path, efficient responses are needed from the agents. Due to their GPU compute requirements and slower runtimes, RL-based approaches such as Aurora are constrained in their deployment settings (e.g., userspace decisions). On the other hand, our symbolic policies are entirely composed of numerical operators, making them structurally and computationally minimal. From our results in Table 2, adding the branch decider incurs a slight overhead compared to the non-branched counterpart. Nevertheless, it is preferable due to its increased versatility in different network conditions, as validated by the Mininet emulation results. SymbolicPCC achieves 23× faster execution times over Aurora, being reasonably comparable to PCC Vivace and TCP CUBIC. We also compare globally magnitude-pruned and dynamically quantized versions of Aurora. Although these run faster than their baseline versions, they come at the cost of worse CC performance.
Discussions and Potential Impacts of SymbolicPCC
Interpretability: a universal boon for ML? In the PCC domain, model interpretability is linked to the wealth of domain knowledge. By distilling a black-box neural network into white-box symbolic rules, the resulting rules are easier for network practitioners to digest and improve. It may be somewhat surprising that the distilled symbolic policy outperforms Aurora. A natural question is whether this is due to a generalization amplification that sometimes happens for distillation in general, or due to the symbolic representation itself. We hypothesize that the performance of a symbolic algorithm boils down to the nature of the environment it is employed in. Congestion control is predominantly rule-based, with deep RL models brought in to devise rules more complex and robust than hand-crafted ones through iterative interaction. It is only natural to observe that symbolic models outperform such PCC RL models when the distillation is composed of a rich operator space and dedicated policy denoising and pruning stages that further boost their robustness and compactness. To justify this, in Table 3 we analyze the performance obtained by decoupling distillation and symbolic representation: we first distill into a black-box NN half the size of Aurora ("typical KD") and then further perform symbolic distillation on it.
On possible limitations. We have specifically focused on TCP congestion control as the problem setting (e.g., in the return clustering and reward design). Specific modifications are needed before the approach could be applied to other RL domains.
Conclusion and Future Work
This work studies the distillation of NN-based deep reinforcement learning agents into symbolic policies for performance-oriented congestion control in TCP. Our branched symbolic framework has better simplicity and efficiency while exhibiting comparable and often improved performance over its black-box teacher counterparts on both simulation and emulation environments. Our results point towards a fresh direction for making congestion control extremely light-weight via a symbolic design. Our future work aims at more integrated neurosymbolic solutions and faster model-free online training/fine-tuning for performance-oriented congestion control. Exploring the fairness of neurosymbolic congestion control is also an interesting next step. Besides, we also aim to apply symbolic distillation to a wider range of systems and networking problems.
Table 4: Symbolic distillation algorithm.

Algorithm: Distilling Teacher Behavior into Symbolic Tree
Require: Temporary dataset D_train containing X (numerical states), Y (actions)
Return: r: the root of the symbolic policy tree
Maintain: S: the set of unsolved action nodes
1:  Initializations
2:  r ← newActionNode(depth = 0)
3:  S ← {r}; cnt ← 0
4:  While S ≠ {} and cnt < cnt_MAX:
5:      cnt ← cnt + 1
6:      n ← pop(S)                                  ▷ Sample an action node
7:      Y_sub ← Y[n.total_condition]                ▷ Slices satisfying the condition
8:      If Entropy(Y_sub) < Θ_entropy:
9:          n.policy ← Mean(Y_sub)
10:     Else:                                       ▷ A single action cannot fit
11:         If n.depth < depth_MAX:
12:             With probability p_1:               ▷ Split condition
13:                 n ← newConditionNode()
14:                 S ← S + {n.a_LEFT, n.a_RIGHT}
15:             With probability 1 − p_1:           ▷ De-noise
16:                 n.policy ← default action
17:         Else:                                   ▷ Too deep, stop branching further
18:             With probability p_2:
19:                 X_sub ← X[n.total_condition]
20:                 n.policy ← runSR(X_sub, Y_sub)
21:             With probability p_3:               ▷ De-noise
22:                 n.policy ← default action
23:             With probability 1 − p_2 − p_3:
24:                 n' ← Sample(pathToRoot(n))
25:                 removeSubtree(n')
26:                 n' ← newConditionNode()
27:                 S ← S + {n'.a_LEFT, n'.a_RIGHT}
28: Return r
We note that the decision procedure of a wide range of policy networks could be efficiently represented as a high-fidelity tree-shaped symbolic policy. In this tree structure, one basic component, the condition node, has three key properties: the condition, a_LEFT, and a_RIGHT, and can be written equivalently as one basic Boolean operation, condition · a_LEFT + ¬condition · a_RIGHT, as explained in Figure 7.
A careful and delicate "DRL behavior dataset" is to be generated and processed, as we specify below. Once the DRL behavior dataset has been generated, one can then apply one of the current symbolic regression benchmarks to parse out a symbolic rule that best fits the DRL behavior data.
We now specify how we build the DRL behavior dataset and process it into a symbolic-regression-friendly format. In general, symbolic regression algorithms evolve an expression that maps a vector x ∈ R^d to a scalar y ∈ R, where d is the dimensionality of the input vector. To do so, they require a dataset that stacks N_Data samples of x and y into X ∈ R^(N_Data × d) and y ∈ R^(N_Data × 1), respectively. Given these input/output sample pairs, i.e., (X, y), a symbolic expression that faithfully fits the data can be reliably recovered. An overview of our symbolic distillation algorithm is provided in Table 4 and, equivalently, in Figure 8.
The genetic mutation is guided by a measure termed program fitness, an indicator of the performance of the population of genetic programs. The fitness metric driving our evolution is simply the MSE between the predicted action and the "expert" action (the teacher model's action). We use the fitness metric to determine the fittest individuals of the population, essentially playing a survival-of-the-fittest game. These individuals are mutated before proceeding to the following evolution rounds. We specifically follow 5 different evolution schemes, with one picked stochastically at each step; a configuration sketch follows the list below. They are:
• Crossover: Requires a parent and a donor from two different evolution tournaments. This scheme replaces or inserts a random subtree of the donor into a random subtree of the parent. The mutant variant carries forth genetic material from both its sources.
• Subtree Mutation: Unlike crossover, which brings "intelligent" subtrees into the parent, subtree mutation randomly generates a subtree before replacing part of the parent. This is more aggressive than crossover and reintroduces extinct functions and operators into the population to maintain diversity.
• Hoist Mutation: Being a bloat-fighting mutation scheme, hoist mutation first selects a subtree. A subtree of that subtree is then randomly chosen and hoisted into the place of the original subtree.
• Point Mutation: Similar to subtree mutation, point mutation also reintroduces extinct functions and operators into the population to maintain diversity. Random nodes of a tree are selected and replaced with other terminals and operators of the same arity as the chosen ones.
• Reproduction: An unmodified clone of the winner is directly taken forth into the succeeding rounds.
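These five schemes map directly onto the knobs of off-the-shelf GP libraries. The sketch below uses gplearn's SymbolicRegressor as an assumed backend with illustrative probabilities (the leftover probability mass goes to reproduction); X_sub and Y_sub are the per-node data slices from Table 4:

from gplearn.genetic import SymbolicRegressor

sr = SymbolicRegressor(
    population_size=2000,
    generations=40,
    function_set=("add", "sub", "mul", "div", "sqrt", "log", "abs", "sin", "cos"),
    metric="mse",                 # fitness: MSE against the teacher's actions
    p_crossover=0.7,
    p_subtree_mutation=0.1,
    p_hoist_mutation=0.05,
    p_point_mutation=0.1,         # the remaining 0.05 goes to reproduction
    parsimony_coefficient=0.001,  # mild penalty against bloated programs
    random_state=0,
)
sr.fit(X_sub, Y_sub.ravel())
print(sr._program)                # the evolved symbolic expression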
B Experimental Settings
In our training regime, the configured link bandwidth is between 100 − 500 pps, latency 50 − 500 ms, queue size 2 − 2981 packets, and loss rate between 0 − 5%. In the Mininet emulation, the link bandwidth is between 0 − 100 Mbps, latency 0 − 1000 ms, queue size 1 − 10000 packets, and the loss rate is up to 8%. The Mininet configuration is its default setting, and we adopt this mismatch to purposely explore the model's robustness.
C Extended Discussions
The Interpretability. The simple form of the distilled symbolic rules provides networking researchers with more insight into the key heuristics for TCP CC. Moreover, our success in using symbolic distillation for CC also paves the way for applying it to other systems and networking applications, such as traffic classification and CPU scheduling tasks.
Need for Branching. The branched training of multiple symbolic models, each in a different training regime, is designed to ease the optimization process. It does not directly enforce similarity between solutions for the grouped states and therefore does not cause brittleness. This is assured as the symbolic model within any branch does not perform the same action for all scenarios within its regime, but contains multiple operations within itself to map states to actions based on the observed network state. Also, during the inference/deployment stage, we use the branch-decider network, which chooses branches based on the observed state, not the bandwidths or latencies (in fact, these measures are unavailable to the controller agent and cannot be observed).
Figure 1: Overview of a congestion control agent's role in the network. Multiple senders and receivers share a single network link controlled by the agent, which dynamically modulates the sending rates conditioned on feedback from receivers.
Figure 4: A distilled symbolic policy from the baseline RL agent in the PCC-RL environment. Condition nodes are represented as rectangular blocks and action nodes as process blocks.
Figure 5: Emulation on different conditions. (a) 25-second throughput trace for TCP CUBIC, PCC Vivace, BBR, Aurora, and our SymbolicPCC variants on a 30 Mbps bandwidth link with 2% random loss, 30 ms latency, and a queue size of 1000. (b) 25-second throughput trace for TCP CUBIC, PCC Vivace, BBR, Aurora, and our SymbolicPCC variants on a link alternating between 20 and 40 Mbps every 5 seconds with 0% random loss, 30 ms latency, and a queue size of 1000.
Figure 6: Link-utilization trends as a measure of sensitivity to bandwidth, latency, queue size, and loss rate. Higher values are better.
Figure 7: The equivalence of a branching node in a subtree and the Boolean conditioning expression.
def distill_teacher_behavior(x, y):
    # Returns: a symbolic tree with condition nodes and action nodes.
    root = new_action_node(depth=0)  # initialize the root node as an action node
    unsolved_action_nodes = {root}
    loop_cnt = 0
    while unsolved_action_nodes and loop_cnt < max_cnt:
        loop_cnt += 1
        node = unsolved_action_nodes.pop()  # randomly sample an unsolved action node
        # First check if the actions under the current total_condition are near deterministic.
        y_subset = y[node.total_condition]  # select slices that satisfy total_condition
        if entropy(y_subset) < entropy_threshold:
            # A single action fits under the current total_condition:
            # resolve and close this branch.
            node.policy = mean(y_subset)
        else:
            if node.depth < max_depth:
                # If max depth is not met, branch on this node by a randomly guessed
                # condition, and mark the new child nodes as unsolved.
                replace_action_node_with_new_condition_node(node)
                unsolved_action_nodes.update([node.a_LEFT, node.a_RIGHT])
            else:
                # The current node is already too deep: stop branching further.
                u = rand()  # sample from a uniform distribution over [0, 1]
                if u < p_SR:
                    # With probability p_SR, directly solve this node using Symbolic_Regression.
                    x_subset = x[node.total_condition]
                    node.policy = Symbolic_Regression(x_subset, y_subset)
                elif u < p_SR + p_default_action:
                    # With probability p_default_action, set to the default action
                    # to de-noise the teacher behavior.
                    node.policy = default_action
                else:
                    # Otherwise, remove a subtree containing this node, then renew the search.
                    node_father = sample(node.father_nodes_list)
                    remove_subtree(node_father)
                    node_father = new_condition_node()
                    unsolved_action_nodes.update([node_father.a_LEFT, node_father.a_RIGHT])
    return root
Figure 8: The pseudo-code for the algorithm in Table 4.
CC algorithms operate within the OS kernel, where the computing and memory resources are limited.
Table 1: The baseline network conditions and resultant branching boundary values (contexts) for each branch after clustering. The return centroid refers to the return value at the cluster center of that specific branch.

| Branch   | Return Centroid | Bandwidth (pps) | Latency (sec) | Queue Size (packets) | Loss Rate (%) |
|----------|-----------------|-----------------|---------------|----------------------|---------------|
| Baseline | -               | 100 - 500       | 0.05 - 0.5    | 2 - 2981             | 0.00 - 0.05   |
| Branch 1 | 95.84           | 100 - 200       | 0.35 - 0.5    | 2 - 2981             | 0.04 - 0.05   |
| Branch 2 | 576.57          | 200 - 250       | 0.25 - 0.35   | 2 - 2981             | 0.02 - 0.03   |
| Branch 3 | 1046.46         | 250 - 350       | 0.15 - 0.25   | 2 - 2981             | 0.02 - 0.03   |
| Branch 4 | 1516.70         | 350 - 500       | 0.05 - 0.15   | 2 - 2981             | 0.00 - 0.02   |
Table 2: Efficiency and speed comparison of congestion control agents. Note that the ideal values for Lossy Thpt. and Lossy ΣThpt. are 30 and 750 respectively (refer to Figure 5a).

| Algorithm               | Type         | FLOPs (↓) | Runtime (µs) (↓) | Lossy Thpt. (↑) | Lossy ΣThpt. (↑) | Lossy ∆² opt. (↓) | Oscillating ∆² opt. (↓) |
|-------------------------|--------------|-----------|------------------|-----------------|------------------|-------------------|-------------------------|
| TCP CUBIC               | Conventional | -         | < 10             | 1.27            | 33.01            | 823.02            | 126.07                  |
| PCC Vivace              | Conventional | -         | < 10             | 8.72            | 226.68           | 440.55            | 186.76                  |
| BBR                     | Conventional | -         | < 10             | 25.76           | 669.91           | 92.96             | 123.82                  |
| Aurora (baseline)       | RL-Based     | 1488      | 864              | 27.55           | 716.20           | 26.29             | 53.22                   |
| Aurora (50% pruned)     | RL-Based     | 744       | 781              | 27.03           | 709.20           | 27.37             | 61.85                   |
| Aurora (80% pruned)     | RL-Based     | 298       | 769              | 26.42           | 696.86           | 48.13             | 79.80                   |
| Aurora (95% pruned)     | RL-Based     | 74        | 703              | 25.97           | 682.94           | 83.66             | 103.53                  |
| Aurora (quantized)      | RL-Based     | 835       | 810              | 22.54           | 601.78           | 142.92            | 88.45                   |
| SymbolicPCC (baseline)  | Symbolic     | 48        | 23               | 28.40           | 738.46           | 7.29              | 85.03                   |
| SymbolicPCC (branched)  | Symbolic     | 63        | 37               | 28.55           | 742.46           | 4.14              | 43.83                   |
Table 3: Decoupling: symbolic alone helps generalization.

| Model                             | Avg. return (↑) |
|-----------------------------------|-----------------|
| Aurora                            | 832             |
| Black-box dist. (50%) from Aurora | 641             |
| White-box dist. from above model  | 687             |
Interestingly, the figure shows that BBR has a rate drop around the 11th second. This is a limitation of the BBRv1 design: it reduces its sending rate if a new min_rtt has not been seen in 10 s, which is triggered because the RTT in our setup is very stable.
Acknowledgment

A. Chen and Z. Wang are both in part supported by NSF CCRI-2016727. A. Chen is also supported by NSF CNS-2106751. Z. Wang is also supported by US Army Research Office Young Investigator Award W911NF2010240.
Dah-Ming Chiu and Raj Jain. Analysis of the increase and decrease algorithms for congestion avoidance in computer networks. Computer Networks and ISDN Systems, 17(1):1-14, 1989.

Neal Cardwell, Yuchung Cheng, C. Stephen Gunn, Soheil Hassas Yeganeh, and Van Jacobson. BBR: congestion-based congestion control. Communications of the ACM, 60(2):58-66, 2017.

Gautam Kumar, Nandita Dukkipati, Keon Jang, Hassan M. G. Wassel, Xian Wu, Behnam Montazeri, Yaogong Wang, Kevin Springborn, Christopher Alfeld, Michael Ryan, et al. Swift: Delay is simple and effective for congestion control in the datacenter. In Proceedings of the Annual Conference of the ACM Special Interest Group on Data Communication on the Applications, Technologies, Architectures, and Protocols for Computer Communication, pages 514-528, 2020.

Sangtae Ha, Injong Rhee, and Lisong Xu. Cubic: a new TCP-friendly high-speed TCP variant. ACM SIGOPS Operating Systems Review, 42(5):64-74, 2008.

Lawrence S. Brakmo, Sean W. O'Malley, and Larry L. Peterson. TCP Vegas: New techniques for congestion detection and avoidance. In Proceedings of the Conference on Communications Architectures, Protocols and Applications, pages 24-35, 1994.

Sally Floyd, Tom Henderson, and Andrei Gurtov. RFC 3782: The NewReno modification to TCP's fast recovery algorithm, 2004.

Nathan Jay, Noga Rotman, Brighten Godfrey, Michael Schapira, and Aviv Tamar. A deep reinforcement learning perspective on internet congestion control. In International Conference on Machine Learning, pages 3050-3059. PMLR, 2019.

Soheil Abbasloo, Chen-Yu Yen, and H. Jonathan Chao. Classic meets modern: A pragmatic learning-based congestion control for the internet. In Proceedings of the Annual Conference of the ACM Special Interest Group on Data Communication on the Applications, Technologies, Architectures, and Protocols for Computer Communication, pages 632-647, 2020.

Mo Dong, Tong Meng, Doron Zarchy, Engin Arslan, Yossi Gilad, Brighten Godfrey, and Michael Schapira. PCC Vivace: Online-learning congestion control. In Proc. NSDI, 2018.

Francis Y. Yan, Jestin Ma, Greg D. Hill, Deepti Raghavan, Riad S. Wahby, Philip Levis, and Keith Winstein. Pantheon: the training ground for internet congestion-control research. In Proc. ATC, 2018.

Keith Winstein and Hari Balakrishnan. TCP ex machina: Computer-generated congestion control. ACM SIGCOMM Computer Communication Review, 43(4):123-134, 2013.

John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.

Ramon R. Fontes, Samira Afzal, Samuel H. B. Brito, Mateus A. S. Santos, and Christian Esteve Rothenberg. Mininet-WiFi: Emulating software-defined wireless networks. In 2015 11th International Conference on Network and Service Management (CNSM), pages 384-389. IEEE, 2015.

Francis Y. Yan, Jestin Ma, Greg D. Hill, Deepti Raghavan, Riad S. Wahby, Philip Levis, and Keith Winstein. Pantheon: the training ground for internet congestion-control research. In 2018 USENIX Annual Technical Conference (USENIX ATC 18), pages 731-743, 2018.

Andreas Holzinger. From machine learning to explainable AI. In 2018 World Symposium on Digital Intelligence for Systems and Machines (DISA), pages 55-66. IEEE, 2018.

Ajay Kumar Jaiswal, Haoyu Ma, Tianlong Chen, Ying Ding, and Zhangyang Wang. Training your sparse neural network better with any mask. In International Conference on Machine Learning, pages 9833-9844. PMLR, 2022.

Michael Schmidt and Hod Lipson. Distilling free-form natural laws from experimental data. Science, 324(5923):81-85, 2009.

Miles Cranmer, Alvaro Sanchez-Gonzalez, Peter Battaglia, Rui Xu, Kyle Cranmer, David Spergel, and Shirley Ho. Discovering symbolic models from deep learning with inductive biases. arXiv preprint arXiv:2006.11287, 2020.

Miles Cranmer. PySR: Fast & parallelized symbolic regression in Python/Julia. GitHub repository, 2020.

Brenden K. Petersen, Mikel Landajuela Larma, T. Nathan Mundhenk, Claudio P. Santiago, Soo K. Kim, and Joanne T. Kim. Deep symbolic regression: Recovering mathematical expressions from data via risk-seeking policy gradients. arXiv preprint arXiv:1912.04871, 2019.

Zhiwen Fan, Tianlong Chen, Peihao Wang, and Zhangyang Wang. CADTransformer: Panoptic symbol spotting transformer for CAD drawings. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10986-10996, 2022.

Wenqing Zheng, Tianlong Chen, Ting-Kuei Hu, and Zhangyang Wang. Symbolic learning to optimize: Towards interpretability and scalability. arXiv preprint arXiv:2203.06578, 2022.

Pierre-Alexandre Kamienny, Stéphane d'Ascoli, Guillaume Lample, and François Charton. End-to-end symbolic regression with transformers. arXiv preprint arXiv:2204.10532, 2022.

Van Jacobson. Congestion avoidance and control. ACM SIGCOMM Computer Communication Review, 18(4):314-329, 1988.

Kun Tan, Jingmin Song, Qian Zhang, and Murad Sridharan. A compound TCP approach for high-speed and long distance networks. In Proceedings of IEEE INFOCOM, 2006.

Saverio Mascolo, Claudio Casetti, Mario Gerla, Medy Y. Sanadidi, and Ren Wang. TCP Westwood: Bandwidth estimation for enhanced transport over wireless links. In Proceedings of the 7th Annual International Conference on Mobile Computing and Networking, pages 287-297, 2001.

Mohammad Alizadeh, Albert Greenberg, David A. Maltz, Jitendra Padhye, Parveen Patel, Balaji Prabhakar, Sudipta Sengupta, and Murari Sridharan. Data center TCP (DCTCP). In Proceedings of the ACM SIGCOMM 2010 Conference, pages 63-74, 2010.

Mo Dong, Qingxi Li, Doron Zarchy, P. Brighten Godfrey, and Michael Schapira. PCC: Re-architecting congestion control for consistent high performance. In 12th USENIX Symposium on Networked Systems Design and Implementation (NSDI 15), pages 395-408, 2015.

Youri Coppens, Kyriakos Efthymiadis, Tom Lenaerts, Ann Nowé, Tim Miller, Rosina Weber, and Daniele Magazzeni. Distilling deep reinforcement learning policies in soft decision trees. In Proceedings of the IJCAI 2019 Workshop on Explainable Artificial Intelligence, pages 1-6, 2019.

Artur d'Avila Garcez, Aimore Resende Riquetti Dutra, and Eduardo Alonso. Towards symbolic reinforcement learning with common sense. arXiv preprint arXiv:1804.08597, 2018.

Andrea Dittadi, Frederik K. Drachmann, and Thomas Bolander. Planning from pixels in Atari with learned symbolic representations. arXiv preprint arXiv:2012.09126, 2020.

Jiří Kubalík, Jan Žegklitz, Erik Derner, and Robert Babuška. Symbolic regression methods for reinforcement learning. arXiv preprint arXiv:1903.09688, 2019.

Mikel Landajuela, Brenden K. Petersen, Sookyung Kim, Claudio P. Santiago, Ruben Glatt, Nathan Mundhenk, Jacob F. Pettit, and Daniel Faissol. Discovering symbolic policies with deep reinforcement learning. In International Conference on Machine Learning, pages 5979-5989. PMLR, 2021.
Self-supervised discovering of causal features: Towards interpretable reinforcement learning. Wenjie Shi, Shiji Song, Zhuoyuan Wang, Gao Huang, arXiv:2003.07069arXiv preprintWenjie Shi, Shiji Song, Zhuoyuan Wang, and Gao Huang. Self-supervised discovering of causal features: Towards interpretable reinforcement learning. arXiv preprint arXiv:2003.07069, 2020.
Interestingness elements for explainable reinforcement learning: Understanding agents' capabilities and limitations. Pedro Sequeira, Melinda Gervasio, Artificial Intelligence. 288103367Pedro Sequeira and Melinda Gervasio. Interestingness elements for explainable reinforcement learning: Understanding agents' capabilities and limitations. Artificial Intelligence, 288:103367, 2020.
Explainable reinforcement learning via reward decomposition. Zoe Juozapaitis, Anurag Koul, Alan Fern, Martin Erwig, Finale Doshi-Velez, IJCAI/ECAI Workshop on Explainable Artificial Intelligence. Zoe Juozapaitis, Anurag Koul, Alan Fern, Martin Erwig, and Finale Doshi-Velez. Explainable reinforcement learning via reward decomposition. In IJCAI/ECAI Workshop on Explainable Artificial Intelligence, 2019.
Explainable reinforcement learning through a causal lens. Prashan Madumal, Tim Miller, Liz Sonenberg, Frank Vetere, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence34Prashan Madumal, Tim Miller, Liz Sonenberg, and Frank Vetere. Explainable reinforcement learning through a causal lens. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 2493-2500, 2020.
Generation of policy-level explanations for reinforcement learning. Nicholay Topin, Manuela Veloso, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence33Nicholay Topin and Manuela Veloso. Generation of policy-level explanations for reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 2514-2521, 2019.
Neha Sugandh, and Ashwin Ram. Case-based planning and execution for real-time strategy games. Santiago Ontanón, Kinshuk Mishra, International Conference on Case-Based Reasoning. SpringerSantiago Ontanón, Kinshuk Mishra, Neha Sugandh, and Ashwin Ram. Case-based planning and execution for real-time strategy games. In International Conference on Case-Based Reasoning, pages 164-178. Springer, 2007.
Knowledge distillation: A survey. Jianping Gou, Baosheng Yu, J Stephen, Dacheng Maybank, Tao, International Journal of Computer Vision. 1296Jianping Gou, Baosheng Yu, Stephen J Maybank, and Dacheng Tao. Knowledge distillation: A survey. International Journal of Computer Vision, 129(6):1789-1819, 2021.
Wenqing Zheng, W Edward, Nikhil Huang, Sumeet Rao, Zhangyang Katariya, Karthik Wang, Subbian, arXiv:2111.04840Cold brew: Distilling graph node representations with incomplete or missing neighborhoods. arXiv preprintWenqing Zheng, Edward W Huang, Nikhil Rao, Sumeet Katariya, Zhangyang Wang, and Karthik Subbian. Cold brew: Distilling graph node representations with incomplete or missing neighborhoods. arXiv preprint arXiv:2111.04840, 2021.
Kwang-Ting Cheng, and Marios Savvides. Is label smoothing truly incompatible with knowledge distillation: An empirical study. Zhiqiang Shen, Zechun Liu, Dejia Xu, Zitian Chen, arXiv:2104.00676arXiv preprintZhiqiang Shen, Zechun Liu, Dejia Xu, Zitian Chen, Kwang-Ting Cheng, and Marios Savvides. Is label smoothing truly incompatible with knowledge distillation: An empirical study. arXiv preprint arXiv:2104.00676, 2021.
. Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, Wojciech Zaremba, arXiv:1606.01540Openai gym. arXiv preprintGreg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym. arXiv preprint arXiv:1606.01540, 2016.
Computing machinery and intelligence. M Alan, Turing, Parsing the turing test. SpringerAlan M Turing. Computing machinery and intelligence. In Parsing the turing test, pages 23-65. Springer, 2009.
Beagle-a darwinian approach to pattern recognition. Richard Forsyth, KybernetesRichard Forsyth. Beagle-a darwinian approach to pattern recognition. Kybernetes, 1981.
Explaining deep reinforcement learning agents in the atari domain through a surrogate model. Alexander Sieusahai, Matthew Guzdial, Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment. the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment17Alexander Sieusahai and Matthew Guzdial. Explaining deep reinforcement learning agents in the atari domain through a surrogate model. In Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, volume 17, pages 82-90, 2021.
Divide-andconquer reinforcement learning. Dibya Ghosh, Avi Singh, Aravind Rajeswaran, Vikash Kumar, Sergey Levine, arXiv:1711.09874arXiv preprintDibya Ghosh, Avi Singh, Aravind Rajeswaran, Vikash Kumar, and Sergey Levine. Divide-and- conquer reinforcement learning. arXiv preprint arXiv:1711.09874, 2017.
Distral: Robust multitask reinforcement learning. Advances in neural information processing systems. Yee Teh, Victor Bapst, Wojciech M Czarnecki, John Quan, James Kirkpatrick, Raia Hadsell, Nicolas Heess, Razvan Pascanu, 30Yee Teh, Victor Bapst, Wojciech M Czarnecki, John Quan, James Kirkpatrick, Raia Hadsell, Nicolas Heess, and Razvan Pascanu. Distral: Robust multitask reinforcement learning. Advances in neural information processing systems, 30, 2017.
Continued development of internet congestion control: Reinforcement learning and robustness testing approaches. Nathan Jay, Nathan Jay. Continued development of internet congestion control: Reinforcement learning and robustness testing approaches. 2019.
Some methods for classification and analysis of multivariate observations. James Macqueen, Proceedings of the fifth Berkeley symposium on mathematical statistics and probability. the fifth Berkeley symposium on mathematical statistics and probabilityOakland, CA, USA1James MacQueen et al. Some methods for classification and analysis of multivariate observations. In Proceedings of the fifth Berkeley symposium on mathematical statistics and probability, volume 1, pages 281-297. Oakland, CA, USA, 1967.
Discriminatory analysis. nonparametric discrimination: Consistency properties. Evelyn Fix, Joseph Lawson Hodges, International Statistical Review/Revue Internationale de Statistique. 573Evelyn Fix and Joseph Lawson Hodges. Discriminatory analysis. nonparametric discrimination: Consistency properties. International Statistical Review/Revue Internationale de Statistique, 57(3):238-247, 1989.
Who belongs in the family?. Robert L Thorndike, Psychometrika. 184Robert L Thorndike. Who belongs in the family? Psychometrika, 18(4):267-276, 1953.
Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. J Peter, Rousseeuw, Journal of computational and applied mathematics. 20Peter J Rousseeuw. Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. Journal of computational and applied mathematics, 20:53-65, 1987.
Understanding and mitigating packet corruption in data center networks. Danyang Zhuo, Monia Ghobadi, Ratul Mahajan, Klaus-Tycho Förster, Arvind Krishnamurthy, Thomas Anderson, Proceedings of the SIGCOMM Conference. the SIGCOMM ConferenceDanyang Zhuo, Monia Ghobadi, Ratul Mahajan, Klaus-Tycho Förster, Arvind Krishnamurthy, and Thomas Anderson. Understanding and mitigating packet corruption in data center networks. In Proceedings of the SIGCOMM Conference, 2017.
| [
"https://github.com/VITA-Group/SymbolicPCC."
] |
[
"Published as a conference paper at ICLR 2022 FILLING THE G AP S: MULTIVARIATE TIME SERIES IMPUTATION BY GRAPH NEURAL NETWORKS",
"Published as a conference paper at ICLR 2022 FILLING THE G AP S: MULTIVARIATE TIME SERIES IMPUTATION BY GRAPH NEURAL NETWORKS"
] | [
"Andrea Cini \nThe Swiss AI Lab IDSIA\nUniversità della Svizzera italiana\n\n",
"Ivan Marisca \nThe Swiss AI Lab IDSIA\nUniversità della Svizzera italiana\n\n",
"Cesare Alippi ",
"\nPolitecnico di Milano\n\n"
] | [
"The Swiss AI Lab IDSIA\nUniversità della Svizzera italiana\n",
"The Swiss AI Lab IDSIA\nUniversità della Svizzera italiana\n",
"Politecnico di Milano\n"
] | [] | Dealing with missing values and incomplete time series is a labor-intensive, tedious, inevitable task when handling data coming from real-world applications. Effective spatio-temporal representations would allow imputation methods to reconstruct missing temporal data by exploiting information coming from sensors at different locations. However, standard methods fall short in capturing the nonlinear time and space dependencies existing within networks of interconnected sensors and do not take full advantage of the available -and often strong -relational information. Notably, most state-of-the-art imputation methods based on deep learning do not explicitly model relational aspects and, in any case, do not exploit processing frameworks able to adequately represent structured spatio-temporal data. Conversely, graph neural networks have recently surged in popularity as both expressive and scalable tools for processing sequential data with relational inductive biases. In this work, we present the first assessment of graph neural networks in the context of multivariate time series imputation. In particular, we introduce a novel graph neural network architecture, named GRIN, which aims at reconstructing missing data in the different channels of a multivariate time series by learning spatio-temporal representations through message passing. Empirical results show that our model outperforms state-of-the-art methods in the imputation task on relevant real-world benchmarks with mean absolute error improvements often higher than 20%.Vaswani et al., 2017). We argue that stronger, structural, inductive biases are needed to advance the state of the art in time series imputation and allow to build effective inference engines in the context of large and complex sensor networks as those found in real-world applications.In this work, we model input multivariate time series as sequences of graphs where edges represent relationships among different channels. We propose graph neural networks (GNNs)(Scarselli et al., 2008;Bronstein et al., 2017;Battaglia et al., 2018)as the building block of a novel, bidirectional, recurrent neural network for multivariate time series imputation (MTSI). Our method, named Graph Recurrent Imputation Network (GRIN), has at its core a recurrent neural network cell where gates are implemented by message-passing neural networks (MPNNs;Gilmer et al., 2017). Two of these networks process the input multivariate time series in both forward and backward time directions at each node, while hidden states are processed by a message-passing imputation layer which is constrained to learn how to perform imputation by looking at neighboring nodes. In fact, by considering each edge as a soft functional dependency that constraints the value observed at corresponding nodes, we argue that operating in the context of graphs introduces a positive inductive bias for MTSI. Our contributions are manifold: 1) we introduce a methodological framework to exploit graph neural networks in the context of MTSI, 2) we propose a novel, practical and effective implementation of a GNN-based architecture for MTSI, and 3) we achieve state-of-the-art results on several and varied MTSI benchmarks. Our method does not rely on any assumption on the distribution of the missing values (e.g., presence and duration of transient dynamics and/or length of missing sequences) other than stationarity of the underlying process. The rest of the paper is organized as follows. In Section 2 we discuss the related works. 
INTRODUCTION
Imputation of missing values is a prominent problem in multivariate time-series analysis (TSA) from both theoretical and practical perspectives (Little & Rubin, 2019). In fact, in a world of complex interconnected systems such as those characterizing sensor networks or the Internet of Things, faulty sensors and network failures are widespread phenomena that cause disruptions in the data acquisition process. Luckily, failures of these types are often sparse and localized at the single sensor level, i.e., they do not compromise the entire sensor network at once. In other terms, it is often the case that, at a certain time step, missing data appear only at some of the channels of the resulting multivariate time series. In this context, spatio-temporal imputation methods (Yi et al., 2016;Yoon et al., 2018b) aim at reconstructing the missing parts of the signals by possibly exploiting both temporal and spatial dependencies. In particular, effective spatio-temporal approaches would reconstruct missing values by taking into account past and future values, and the concurrent measurements of spatially close neighboring sensors too. Here, spatial similarity does not necessarily mean physical (e.g., geographic) proximity, but rather indicates that considered sensors are related w.r.t. a generic (quantifiable) functional dependency (e.g., Pearson correlation or Granger causality - Granger, 1969) and/or that are close in a certain latent space. Relational information, then, can be interpreted as a set of constraints -linking the different time series -that allows replacing the malfunctioning sensors with virtual ones.
Among different imputation methods, approaches based on deep learning (LeCun et al., 2015; Schmidhuber, 2015; Goodfellow et al., 2016) have become increasingly popular (Yoon et al., 2018a; Cao et al., 2018; Liu et al., 2019). However, these methods often completely disregard available relational information or rely on rather simplistic modifications of standard neural architectures tailored to sequential data (Vaswani et al., 2017). We argue that stronger, structural, inductive biases are needed to advance the state of the art in time series imputation and allow to build effective inference engines in the context of large and complex sensor networks as those found in real-world applications.

In this work, we model input multivariate time series as sequences of graphs where edges represent relationships among different channels. We propose graph neural networks (GNNs) (Scarselli et al., 2008; Bronstein et al., 2017; Battaglia et al., 2018) as the building block of a novel, bidirectional, recurrent neural network for multivariate time series imputation (MTSI). Our method, named Graph Recurrent Imputation Network (GRIN), has at its core a recurrent neural network cell where gates are implemented by message-passing neural networks (MPNNs; Gilmer et al., 2017). Two of these networks process the input multivariate time series in both forward and backward time directions at each node, while hidden states are processed by a message-passing imputation layer which is constrained to learn how to perform imputation by looking at neighboring nodes. In fact, by considering each edge as a soft functional dependency that constrains the values observed at the corresponding nodes, we argue that operating in the context of graphs introduces a positive inductive bias for MTSI. Our contributions are manifold: 1) we introduce a methodological framework to exploit graph neural networks in the context of MTSI, 2) we propose a novel, practical and effective implementation of a GNN-based architecture for MTSI, and 3) we achieve state-of-the-art results on several and varied MTSI benchmarks. Our method does not rely on any assumption on the distribution of the missing values (e.g., presence and duration of transient dynamics and/or length of missing sequences) other than stationarity of the underlying process. The rest of the paper is organized as follows. In Section 2 we discuss the related works. Then, in Section 3, we formally introduce the problem settings and the task of MTSI. We present our approach to MTSI in Section 4, by describing the novel framework to implement imputation architectures based on GNNs. We proceed with an empirical evaluation of the presented method against state-of-the-art baselines in Section 5 and, finally, we draw our conclusions in Section 6.

RELATED WORKS

Time series imputation. There exists a large literature addressing missing value imputation in time series. Besides the simple and standard interpolation methods based on polynomial curve fitting, popular approaches aim at filling up missing values by taking advantage of standard forecasting methods and similarities among time series. For example, several approaches rely on k-nearest neighbors (Troyanskaya et al., 2001; Beretta & Santaniello, 2016), the expectation-maximization algorithm (Ghahramani & Jordan, 1994; Nelwamondo et al., 2007) or linear predictors and state-space models (Durbin & Koopman, 2012; Kihoro et al., 2013). Low-rank approximation methods, such as matrix factorization (Cichocki & Phan, 2009), are also popular alternatives which can also account for spatial (Cai et al., 2010; Rao et al., 2015) and temporal (Yu et al., 2016; Mei et al., 2017) information. Among linear methods, STMVL (Yi et al., 2016) combines temporal and spatial interpolation to fill missing values in geographically tagged time series.
More recently, several deep learning approaches have been proposed for MTSI. Among the others, deep autoregressive methods based on recurrent neural networks (RNNs) found widespread success (Lipton et al., 2016; Che et al., 2018; Luo et al., 2018; Yoon et al., 2018b; Cao et al., 2018). GRU-D (Che et al., 2018) learns how to process sequences with missing data by controlling the decay of the hidden states of a gated RNN. Cao et al. (2018) propose BRITS, a bidirectional GRU-D-like RNN for multivariate time series imputation that takes into account correlation among different channels to perform spatial imputation. Other successful strategies in the literature exploit the adversarial training framework to generate realistic reconstructed sequences (Yoon et al., 2018a; Fedus et al., 2018; Luo et al., 2018; 2019). Notably, GAIN (Yoon et al., 2018a) uses GANs (Goodfellow et al., 2014) to learn models that perform imputation in the i.i.d. settings. Luo et al. (2018; 2019) aim, instead, at learning models that generate realistic synthetic sequences and exploit them to fill missing values. Miao et al. (2021) use an approach similar to GAIN, but condition the generator on the predicted label for the target incomplete time series. Concurrently to our work, Kuppannagari et al. (2021) developed a graph-based spatio-temporal denoising autoencoder for spatio-temporal data coming from smart grids with known topology. Liu et al. (2019), instead, use adversarial learning to train a multiscale model that imputes highly sparse time series in a hierarchical fashion. However, we argue that none of the above-cited methods can take full advantage of relational information and nonlinear spatio-temporal dependencies. Most importantly, the above methods do not fully exploit the flexibility and expressiveness enabled by operating in the context of graph processing.
Graph neural networks for TSA. Graph neural networks have been exploited in TSA mostly in spatio-temporal forecasting methods. The idea behind most of the methods present in the literature is to modify standard neural network architectures for sequential data by relying on operators that work in the graph domain. For example, Seo et al. (2018) propose a GRU cell where gates are implemented by spectral GNNs (Defferrard et al., 2016); Li et al. (2018) propose an analogous architecture replacing spectral GNNs with a diffusion-convolutional network (Atwood & Towsley, 2016). Note that these models are different w.r.t. approaches that use recurrent networks to propagate information graph-wise (Scarselli et al., 2008). Yu et al. (2017) and Wu et al. (2019; 2020b) propose, instead, spatio-temporal convolutional neural networks that alternate convolutions on the temporal and spatial dimensions. Similar approaches have also been studied in the context of attention-based models (Vaswani et al., 2017) with spatio-temporal Transformer-like architectures (Cai et al., 2020). Another particularly interesting line of research is related to the problem of learning the graph structure underlying an input multivariate time series (Wu et al., 2020b; Shang et al., 2020). While previously mentioned approaches focus on multivariate time series prediction, other methods aim at predicting changes in graph topology (Zambon et al., 2019; Paassen et al., 2020). Conversely, methods such as Temporal Graph Networks (Rossi et al., 2020) are tailored to learn node embeddings in dynamical graphs. Finally, recent works have proposed GNNs for imputing missing features in the context of i.i.d. data. Among the others, Spinelli et al. (2020) propose an adversarial framework to train GNNs on the data reconstruction task, while You et al. (2020) propose a bipartite graph representation for feature imputation. Lately, GNNs have also been exploited for spatial interpolation (Appleby et al., 2020; Wu et al., 2020a), sometimes referred to as kriging (Stein, 1999). To the best of our knowledge, no previous GNN-based method targeted missing value imputation for generic multivariate time series.
PRELIMINARIES
Sequences of graphs. We consider sequences of weighted directed graphs, where we observe a graph G_t with N_t nodes at each time step t. A graph is a pair G_t = ⟨X_t, W_t⟩, where X_t ∈ R^{N_t×d} is the node-attribute matrix whose i-th row contains the d-dimensional node-attribute vector x^i_t ∈ R^d associated with the i-th node; entry w^{i,j}_t of the adjacency matrix W_t ∈ R^{N_t×N_t} denotes the scalar weight of the edge (if any) connecting the i-th and j-th node. Fig. 1 exemplifies this modelling framework. We assume nodes to be identified, i.e., to have a unique ID that enables time-wise consistent processing. This problem setting can be easily extended to more general classes of graphs with attributed edges and global attributes. In this work, we mainly focus on problems where the topology of the graph is fixed and does not change over time, i.e., at each time step W_t = W and N_t = N. Any generic multivariate time series fits the above framework by letting each channel of the sequence (i.e., each sensor) correspond to a node and using the available relational information to build an adjacency matrix. If no relational information is available, one could use the identity matrix, but this would defeat the purpose of the formulation. A more proper choice of W_t can be made using any standard similarity score (e.g., Pearson correlation) or a (thresholded) kernel. A more advanced approach could instead aim at learning an adjacency directly from data by using, for instance, spatial attention scores or resorting to graph structure learning techniques (see Section 2). From now on, we assume that input multivariate time series have homogeneous channels, i.e., sensors are of the same type. Note that this assumption does not imply a loss in generality: it is always possible to standardize node features by adding sensor-type attributes and additional dimensions to accommodate the different types of sensor readings. Alternatively, one might directly model the problem by exploiting heterogeneous graphs (Schlichtkrull et al., 2018).
Multivariate time series imputation. To model the presence of missing values, we consider, at each step, a binary mask M_t ∈ {0, 1}^{N_t×d} where each row m^i_t indicates which of the corresponding node attributes of x^i_t are available in X_t. It follows that m^{i,j}_t = 0 implies that x^{i,j}_t is missing; conversely, if m^{i,j}_t = 1, then x^{i,j}_t stores the actual sensor reading. We denote by X̃_t the unknown ground-truth node-attribute matrix, i.e., the complete node-attribute matrix without any missing data. We assume stationarity of the missing data distribution and, in the experiments, we mostly focus on the missing at random (MAR) scenario (Rubin, 1976). We neither make assumptions on the number of concurrent sensor failures, nor on the length of missing data blocks, i.e., multiple failures extended over time are accounted for. Clearly, one should expect imputation performance to scale with the number of concurrent faults and the time length of missing data bursts.
The objective of MTSI is to impute missing values in a sequence of input data. More formally, given a graph sequence G [t,t+T ] of length T , we can define the missing data reconstruction error as
\mathcal{L}\big(\hat{X}_{[t,t+T]}, \tilde{X}_{[t,t+T]}, \bar{M}_{[t,t+T]}\big) = \sum_{h=t}^{t+T} \sum_{i=1}^{N_t} \frac{\big\langle \bar{m}^i_h,\; \ell(\hat{x}^i_h, \tilde{x}^i_h) \big\rangle}{\big\langle \bar{m}^i_h, \bar{m}^i_h \big\rangle}      (1)
where x̂^i_h is the reconstruction of x̃^i_h; M̄_[t,t+T] and m̄^i_h are, respectively, the logical binary complements of M_[t,t+T] and m^i_h; ℓ(·,·) is an element-wise error function (e.g., absolute or squared error) and ⟨·,·⟩ indicates the standard dot product. Note that, in practice, it is impossible to have access to X̃_[t,t+T] and, as a consequence, it is necessary to define a surrogate optimization objective by, for example, using a forecasting loss or generating synthetic missing values. In the context of trainable, parametric imputation methods, we consider two different operational settings. In the first one, named in-sample imputation, the model is trained to reconstruct missing values in a given fixed input sequence X_[t,t+T], i.e., the model is trained on all the available data except those that are missing and those that have been removed from the sequence to emulate additional failures for evaluation. Differently, in the second one (referred to as out-of-sample imputation), the model is trained and evaluated on disjoint sequences. Note that in both cases the model does not have access to the ground-truth data used for the final evaluation. The first operational setting simulates the case where a practitioner fits the model directly on the sequence to fill up its gaps. The second, instead, simulates the case where one wishes to use a model fitted on a set of historical data to impute missing values in an unseen target sequence.
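As a concrete illustration, with the absolute error as ℓ(·,·), Eq. 1 reduces to a masked mean absolute error. A minimal PyTorch sketch follows; tensor shapes and names are illustrative assumptions, not the authors' implementation.

    import torch

    def masked_mae(x_hat, x_true, eval_mask):
        """Masked MAE in the spirit of Eq. 1.

        x_hat, x_true: float tensors, e.g., of shape [batch, time, nodes, features].
        eval_mask: binary float tensor of the same shape; ones mark the entries
        on which the error is evaluated (i.e., the complement of the input mask).
        """
        err = torch.abs(x_hat - x_true) * eval_mask
        # Normalize by the number of evaluated entries (the dot-product denominator in Eq. 1).
        return err.sum() / eval_mask.sum().clamp(min=1)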
GRAPH RECURRENT IMPUTATION NETWORK
In this section, we present our approach, the Graph Recurrent Imputation Network (GRIN), a graphbased, recurrent neural architecture for MTSI. Given a multivariate time series X [t,t+T ] with mask M [t,t+T ] , our objective is to reconstruct missing values in the input sequence by combining the information coming from both the temporal and spatial dimensions. To do so, we design a novel bidirectional graph recurrent neural network which progressively processes the input sequence both forward and backward in time by performing two stages of imputation for each direction. Then, a feed-forward network takes as input the representation learned by the forward and backward models and performs a final -refined -imputation for each node of the graph and step of the sequence. More precisely, the final imputation depends on the output of two GRIN modules whose learned representations are finally processed (space and time wise) by a last decoding multilayer perceptron (MLP). An overview of the complete architecture is given in Fig. 2. As shown in the figure, the two modules impute missing values iteratively, using at each time step previously imputed values as input. We proceed by first describing in detail the unidirectional model, and then we provide the bidirectional extension.
Figure 2: Overview of the architecture. Here, each unidirectional GRIN module is processing the τ-th step of an input sequence with 4 dimensions (sensors); two values are missing at the considered time step. GRIN performs a first imputation, which is then processed and refined by the spatial decoder. These second-stage imputations are then used to continue the processing at the next step. An MLP processes learned representations node- and time-wise to obtain the final imputations.

Unidirectional model. Each GRIN module is composed of two blocks, a spatio-temporal encoder and a spatial decoder, which process the input sequence of graphs in two stages. The spatio-temporal encoder maps the input sequence X_[t,t+T] to a spatio-temporal representation H_[t,t+T] by
exploiting an ad-hoc designed recurrent GNN. The spatial decoder, instead, takes advantage of the learned representations to perform two consecutive rounds of imputation. A first-stage imputation is obtained from the representation by using a linear readout; the second one exploits the available relational, spatial information at time step t. In particular, the decoder is implemented by an MPNN which learns to infer the observed values at each i-th node x^i_t by refining the first-stage imputations considering, locally, H_{t-1} and the values observed at neighboring nodes.
Spatio-temporal Encoder
In the encoder, the input sequence X_[t,t+T] and mask M_[t,t+T] are processed sequentially, one step at a time, by means of a recurrent neural network with gates implemented by message-passing layers. Any message-passing operator could be used in principle. In particular, given z^i_{t,k-1}, i.e., the node feature vector at layer k − 1, we consider the general class of MPNNs described as
\mathrm{MPNN}^k(z^i_{t,k-1}, W_t) = \gamma^k\Big(z^i_{t,k-1},\ \sum_{j \in \mathcal{N}(i)} \rho^k\big(z^i_{t,k-1}, z^j_{t,k-1}\big)\Big) = z^i_{t,k}      (2)
where N(i) is the set of neighbors of the i-th node in G_t, γ^k and ρ^k are generic, differentiable update and message functions (e.g., MLPs), and the summation stands for a generic permutation-invariant, differentiable aggregation function (e.g., sum or mean). Note that several definitions of neighborhood are possible, e.g., one might consider nodes connected by paths up to a certain length l. For the sake of simplicity, from now on, we indicate with MPNN(z^i_t, W_t) the forward pass of a generic K-layered message-passing neural network. In the following, we use MPNNs as the building blocks for our spatio-temporal feature extractors. To learn the dynamics of the system, we leverage gated recurrent units (GRUs; Cho et al., 2014). As previously mentioned, similarly to Seo et al. (2018) and Li et al. (2018), we implement the GRU gates by relying on the message-passing layers defined above. At the node level, the elements of the message-passing GRU (MPGRU) can be described as:
r^i_t = \sigma\big(\mathrm{MPNN}\big([\hat{x}^{(2),i}_t \,\|\, m^i_t \,\|\, h^i_{t-1}],\, W_t\big)\big)      (3)
u^i_t = \sigma\big(\mathrm{MPNN}\big([\hat{x}^{(2),i}_t \,\|\, m^i_t \,\|\, h^i_{t-1}],\, W_t\big)\big)      (4)
c^i_t = \tanh\big(\mathrm{MPNN}\big([\hat{x}^{(2),i}_t \,\|\, m^i_t \,\|\, r^i_t \odot h^i_{t-1}],\, W_t\big)\big)      (5)
h^i_t = u^i_t \odot h^i_{t-1} + (1 - u^i_t) \odot c^i_t      (6)

where σ(·) is the logistic sigmoid, ⊙ denotes the Hadamard product, ‖ denotes concatenation, and x̂^{(2),i}_t is the second-stage imputation introduced in the next paragraph; r^i_t, u^i_t and c^i_t play the role of the reset gate, update gate and candidate state of a standard GRU.
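For concreteness, a possible dense implementation of a message-passing layer in the spirit of Eq. 2 and of the MPGRU cell of Eqs. 3-6 is sketched below (PyTorch; messages here depend on the sender node only, which is a common simplification of the generic ρ, and all names are illustrative assumptions rather than the authors' code).

    import torch
    from torch import nn

    class DenseMPNNLayer(nn.Module):
        """One message-passing layer (cf. Eq. 2) over a dense weighted adjacency."""

        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.message = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())  # rho
            self.update = nn.Linear(in_dim + out_dim, out_dim)                   # gamma

        def forward(self, z, adj):
            # z: [batch, nodes, in_dim]; adj: [nodes, nodes] weighted adjacency.
            msg = self.message(z)                          # node-wise messages
            agg = torch.einsum('ij,bjf->bif', adj, msg)    # weighted sum over neighbors
            return self.update(torch.cat([z, agg], dim=-1))

    class MPGRUCell(nn.Module):
        """GRU cell with gates implemented by message passing (cf. Eqs. 3-6)."""

        def __init__(self, input_dim, hidden_dim):
            super().__init__()
            in_dim = 2 * input_dim + hidden_dim  # [x || m || h]
            self.reset_gate = DenseMPNNLayer(in_dim, hidden_dim)
            self.update_gate = DenseMPNNLayer(in_dim, hidden_dim)
            self.candidate = DenseMPNNLayer(in_dim, hidden_dim)

        def forward(self, x, m, h, adj):
            # x, m: [batch, nodes, input_dim]; h: [batch, nodes, hidden_dim].
            gate_in = torch.cat([x, m, h], dim=-1)
            r = torch.sigmoid(self.reset_gate(gate_in, adj))                       # Eq. 3
            u = torch.sigmoid(self.update_gate(gate_in, adj))                      # Eq. 4
            c = torch.tanh(self.candidate(torch.cat([x, m, r * h], dim=-1), adj))  # Eq. 5
            return u * h + (1. - u) * c                                            # Eq. 6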
Spatial Decoder As a first decoding step, we generate one-step-ahead predictions from the hidden representations of the MPGRU by means of a linear readout
\hat{Y}^{(1)}_t = H_{t-1} V_h + b_h      (7)
where V h ∈ R l×d is a learnable weight matrix and b h ∈ R d is a learnable bias vector. We then define the filler operator as
\Phi(\hat{Y}_t) = M_t \odot X_t + \bar{M}_t \odot \hat{Y}_t      (8)

intuitively, the filler operator replaces the missing values in the input X_t with the values at the same positions in Ŷ_t. By feeding Ŷ^{(1)}_t to the filler operator, we get the first-stage imputation X̂^{(1)}_t, i.e., X_t with missing values replaced by the one-step-ahead predictions Ŷ^{(1)}_t. The resulting node-level predictions are then concatenated to the mask M_t and the hidden representation H_{t-1}, and processed by a final one-layer MPNN which computes for each node an imputation representation s^i_t as
s^i_t = \gamma\Big(h^i_{t-1},\ \sum_{j \in \mathcal{N}(i) \setminus \{i\}} \rho\big(\Phi(\hat{x}^{(1),j}_t) \,\|\, h^j_{t-1} \,\|\, m^j_t\big)\Big)      (9)
Notice that, as previously highlighted, the imputation representations only depend on messages received from neighboring nodes and the representation at the previous step. In fact, by aggregating only messages from the one-hop neighborhood, the representations s i t are independent of the input features x i t of the i-th node itself. This constraint forces the model to learn how to reconstruct a target input by taking into account spatial dependencies: this has a regularizing effect since the model is constrained to focus on local information. Afterward, we concatenate imputation representation S t with hidden representation H t−1 , and generate second-stage imputations by using a second linear readout and applying the filler operator:
\hat{Y}^{(2)}_t = [S_t \,\|\, H_{t-1}]\, V_s + b_s ; \qquad \hat{X}^{(2)}_t = \Phi\big(\hat{Y}^{(2)}_t\big)      (10)
Finally, we feed X̂^{(2)}_t as input to the MPGRU (Eqs. 3-6) to update the hidden representation and proceed to process the next input graph G_{t+1}.
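Putting Eqs. 7-10 together, one decoding step might be sketched as follows (PyTorch; a dense, simplified sketch with illustrative names, not the authors' implementation). Note how the diagonal of the adjacency is zeroed so that the imputation representation of a node never depends on that node's own input, as required by Eq. 9.

    import torch
    from torch import nn

    def filler(x, y, mask):
        """Eq. 8: keep observed entries of x, take the predictions y elsewhere."""
        return mask * x + (1. - mask) * y

    class SpatialDecoder(nn.Module):
        """One decoding step (cf. Eqs. 7-10) over a dense adjacency."""

        def __init__(self, input_dim, hidden_dim):
            super().__init__()
            self.first_readout = nn.Linear(hidden_dim, input_dim)              # Eq. 7
            self.message = nn.Sequential(                                      # rho in Eq. 9
                nn.Linear(2 * input_dim + hidden_dim, hidden_dim), nn.ReLU())
            self.update = nn.Linear(2 * hidden_dim, hidden_dim)                # gamma in Eq. 9
            self.second_readout = nn.Linear(2 * hidden_dim, input_dim)         # Eq. 10

        def forward(self, x, mask, h_prev, adj):
            y1 = self.first_readout(h_prev)          # one-step-ahead predictions
            x1 = filler(x, y1, mask)                 # first-stage imputation
            # Messages carry [x1_j || h_j || m_j]; no self-loops (cf. Eq. 9).
            msg = self.message(torch.cat([x1, h_prev, mask], dim=-1))
            adj_ns = adj * (1. - torch.eye(adj.size(0), device=adj.device))
            agg = torch.einsum('ij,bjf->bif', adj_ns, msg)
            s = torch.relu(self.update(torch.cat([h_prev, agg], dim=-1)))
            y2 = self.second_readout(torch.cat([s, h_prev], dim=-1))           # Eq. 10
            return filler(x, y2, mask), y1, y2       # second-stage imputation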
Bidirectional Model Extending GRIN to account for both forward and backward dynamics is straightforward and can be achieved by duplicating the architecture described in the two previous paragraphs. The first module will process the sequence in the forward direction (from the beginning of the sequence towards its end), while the second one in the other way around. The final imputation is then obtained with an MLP aggregating representations extracted by the two modules:
\hat{y}^i_t = \mathrm{MLP}\big(s^{i,\mathrm{fwd}}_t \,\|\, h^{i,\mathrm{fwd}}_{t-1} \,\|\, s^{i,\mathrm{bwd}}_t \,\|\, h^{i,\mathrm{bwd}}_{t+1}\big)      (11)
where fwd and bwd denote the forward and backward modules, respectively. The final output can then be easily obtained as X̂_[t,t+T] = Φ(Ŷ_[t,t+T]). Note that, by construction, our model can exploit all the available relevant spatio-temporal information, since the only value explicitly masked out for each node is x^i_t. At the same time, it is important to realize that our model does not merely reconstruct the input as an autoencoder, but it is specifically tailored for the imputation task due to its inductive biases. The model is trained by minimizing the reconstruction error of all imputation stages in both directions (see Appendix A).
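A minimal sketch of the fusion MLP of Eq. 11 (shapes and names are illustrative assumptions):

    import torch
    from torch import nn

    class BidirectionalFusion(nn.Module):
        """Node- and step-wise MLP fusing forward/backward representations (Eq. 11)."""

        def __init__(self, hidden_dim, input_dim):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(4 * hidden_dim, hidden_dim), nn.ReLU(),
                nn.Linear(hidden_dim, input_dim))

        def forward(self, s_fwd, h_fwd, s_bwd, h_bwd):
            # All inputs: [batch, time, nodes, hidden_dim].
            return self.mlp(torch.cat([s_fwd, h_fwd, s_bwd, h_bwd], dim=-1))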
EMPIRICAL EVALUATION
In this section, we empirically evaluate our approach against state-of-the-art baselines on four datasets coming from three relevant application domains. Our approach, remarkably, achieves state-of-the-art performance on all of them.
• Air Quality (AQI): dataset of recordings of several air quality indices from 437 monitoring stations spread across 43 Chinese cities. We consider only the PM2.5 pollutant. Prior works on imputation (Yi et al., 2016; Cao et al., 2018) consider a reduced version of this dataset, including only 36 sensors (AQI-36 in the following). We evaluate our model on both datasets. We use as adjacency matrix a thresholded Gaussian kernel (Shuman et al., 2013) computed from pairwise geographic distances.
• Traffic: the PEMS-BAY and METR-LA datasets from Li et al. (2018), containing readings from traffic sensors in the San Francisco Bay Area and the Los Angeles County Highway, respectively (see B.2 for details).
• CER-E: dataset of energy consumption readings from the smart meters of the Irish CER Smart Metering Project. We select only the subset of the available smart meters monitoring the energy consumption of small and medium-sized enterprises (SMEs), i.e., 485 time series with samples acquired every 30 minutes. We build an adjacency matrix by extracting a k-nearest neighbor graph (with k = 10) from the similarity matrix built by computing the correntropy (Liu et al., 2007) among the time series; a sketch of this construction follows the list.
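As an illustration of the CER-E graph construction, a row-wise k-nearest-neighbor adjacency can be extracted from a precomputed similarity matrix along these lines (a NumPy sketch; the similarity matrix stands in for the correntropy scores, and the resulting graph is directed):

    import numpy as np

    def knn_graph(sim, k=10):
        """Row-wise k-nearest-neighbor graph from a similarity matrix."""
        sim = np.array(sim, dtype=float, copy=True)
        np.fill_diagonal(sim, -np.inf)            # exclude self-loops
        n = sim.shape[0]
        adj = np.zeros((n, n))
        for i in range(n):
            neighbors = np.argsort(sim[i])[-k:]   # indices of the k most similar nodes
            adj[i, neighbors] = sim[i, neighbors]
        return adj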
For the air quality datasets, we adopt the same evaluation protocol as previous works (Yi et al., 2016; Cao et al., 2018) and we show results for both the in-sample and out-of-sample settings. For the traffic and energy consumption datasets, we consider only the out-of-sample scenario (except for matrix factorization, which only works in-sample). We simulate the presence of missing data by considering 2 different settings: 1) Block missing, i.e., at each step, for each sensor, we randomly drop 5% of the available data and, in addition, we simulate a failure with probability p_failure = 0.15% and sample its duration uniformly in the interval [min_steps, max_steps], where min_steps and max_steps are the number of time steps corresponding respectively to 1 and 4 hours in the traffic case and 2 hours and 2 days for CER-E; 2) Point missing, i.e., we simply randomly mask out 25% of the available data. We split all the datasets into training/validation/test sets. We use as performance metrics the mean absolute error (MAE) and the mean squared error (MSE). We provide further comments and in-depth details on baselines and datasets, together with additional experiments on synthetic data, in the appendix.
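For illustration, the Block missing setting can be simulated roughly as follows (a NumPy sketch; the default values reflect the traffic case described above, where 1 and 4 hours correspond to 12 and 48 steps at the 5-minute sampling rate):

    import numpy as np

    def block_missing_mask(T, n_sensors, p_point=0.05, p_failure=0.0015,
                           min_steps=12, max_steps=48, rng=None):
        """Binary mask (1 = observed) combining point drops and block failures."""
        if rng is None:
            rng = np.random.default_rng(0)
        mask = rng.random((T, n_sensors)) > p_point        # 5% random point drops
        for s in range(n_sensors):
            for t in range(T):
                if rng.random() < p_failure:               # a fault starts at step t
                    length = int(rng.integers(min_steps, max_steps + 1))
                    mask[t:t + length, s] = False
        return mask.astype(np.uint8)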
RESULTS
Empirical results show that GRIN can achieve large improvements in imputation performance in several scenarios, as well as increased flexibility. In fact, differently from the other state-of-the-art baselines, GRIN can handle inputs with a variable number of dimensions. Tab. 1 shows the experimental results on the air quality datasets. In the in-sample settings, we compute metrics using as imputation the value obtained by averaging predictions over all the overlapping windows; in the out-of-sample settings, instead, we simply report results by averaging the error over windows. GRIN largely outperforms the other baselines in both settings. In particular, in the latter case, GRIN decreases MAE w.r.t. the closest baseline by more than 20% in AQI. Interestingly, GRIN consistently outperforms BRITS in imputing missing values also for sensors corresponding to isolated (disconnected) nodes, i.e., nodes corresponding to stations more than 40 km away from any other station (see B.1): this is empirical evidence of the positive regularizations encoded into GRIN. Our method achieves more accurate imputation also in the 36-dimensional dataset, where we could expect the graph representation to have a lower impact.

Results for the traffic and smart grid datasets are shown in Tab. 2. In the traffic datasets, our method outperforms both BRITS and rGAIN by a wide margin in all the considered settings, while using a much lower number of parameters (see A). On average, GRIN reduces MAE by ≈ 29% w.r.t. BRITS and, in particular, in the Point missing setting of the PEMS-BAY dataset, the error is halved. In CER-E, GRIN consistently outperforms the other baselines. Besides showing the effectiveness of our approach in a relevant application field, this experiment also goes to show that GRIN can be exploited in settings where relational information is not readily available.

We also perform an ablation study: in particular, we compare GRIN against 3 baselines to assess the impact of the spatial decoder and of the bidirectional architecture. The first baseline is essentially a bidirectional MPGRU where values are imputed by a final MLP taking as inputs h^{fwd}_{t-1} and h^{bwd}_{t+1}, while the second one has an analogous architecture, but uses the hidden representations at time step t (for both directions) and, thus, behaves similarly to a denoising autoencoder. As reference, we report the results of the unidirectional MPGRU. Results show that the components we introduce do contribute to significantly reduce the imputation error. It is clear that spatial decoding and the bidirectional architecture are important to obtain accurate missing data reconstruction, especially in realistic settings with blocks of missing data. Interestingly, the denoising model suffers in the Block missing scenario, while, as one might expect, it works well in the Point missing setting. For additional results and a discussion of scalability issues, we refer to the appendix of the paper.

VIRTUAL SENSING

As a final experiment, we provide a quantitative and qualitative assessment of the proposed method in virtual sensing. The idea (often studied in the context of kriging - see Section 2) is to simulate the presence of a sensor by adding a node with no available data and, then, let the model reconstruct the corresponding time series.
Note that for the approach to work several assumptions are needed: 1) we have to assume that the physical quantity being monitored can be reconstructed from observations at neighboring sensors; 2) we should assume a high-degree of homogeneity of sensors (e.g., in the case of air quality stations we should assume that sensors are placed at the same height) or that the features characterizing each neighboring sensor (e.g., placement) are available to the model. In this context, it is worth noting that, due to the inductive biases embedded in the model, GRIN performs reconstruction not only by minimizing reconstruction error at the single node, but by regularizing the reconstructed value for imputation at neighboring sensors. We masked out observed values of the two nodes of AQI-36 with highest (station no. 1014) and lowest (no. 1031) connectivity, and trained GRIN on the remaining part of the data as usual. Results, in Fig. 3, qualitatively show that GRIN can infer the trend and scale for unseen sensors. In terms of MAE, GRIN scored 11.74 for sensor 1014 and 20.00 for sensor 1031 (averages over 5 independent runs).
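Operationally, this experiment amounts to blanking the input mask of the target sensor over the whole sequence and evaluating the reconstruction on its originally observed entries; a minimal sketch (illustrative names):

    import numpy as np

    def virtual_sensing_masks(mask, target_node):
        """Hide a sensor entirely and build the corresponding evaluation mask.

        mask: binary array [time, nodes]; target_node: index of the simulated sensor.
        """
        input_mask = mask.copy()
        input_mask[:, target_node] = 0                    # the model never sees this node
        eval_mask = np.zeros_like(mask)
        eval_mask[:, target_node] = mask[:, target_node]  # evaluate on originally observed steps
        return input_mask, eval_mask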
CONCLUSIONS
We presented GRIN, a novel approach for MTSI exploiting modern graph neural networks. Our method imputes missing data by leveraging the relational information characterizing the underlying network of sensors and the functional dependencies among them. Compared against state-of-the-art baselines, our framework offers higher flexibility and achieves better reconstruction accuracy in all the considered scenarios. There are several possible directions for future work. From a theoretical perspective, it would be interesting to study the properties that would guarantee an accurate reconstruction. Furthermore, future work should study extensions able to deal with non-stationary settings and further assess applications of GRIN in virtual and active sensing.
REPRODUCIBILITY STATEMENT
Code to reproduce experiments presented in the paper is provided as supplementary material together with configuration files to replicate reported results. All datasets, except CER-E, are open and downloading links are provided in the supplementary material. The CER-E dataset can be obtained free of charge for research purposes (see appendix). For experiments where failures are simulated, we use random number generators with fixed seed for missing data generation to ensure reproducibility and consistency among experiments and baselines.
APPENDIX A DETAILED EXPERIMENTAL SETTINGS
In this appendix, we give more details on the experimental settings used to evaluate our approach. We train all the models by sampling at random 160 batches of 32 elements for each epoch, we fix the maximum number of epochs to 300 and we use early stopping on the validation set with a patience of 40 epochs. All methods are trained using a cosine learning rate scheduler with initial value of 0.001, decayed over the 300 training epochs. During training, we randomly mask out an additional 5% of the input data for each batch to foster robustness to noise and missing data.
For GRIN, we minimize the following loss function:
\mathcal{L} = \mathcal{L}\big(\hat{Y}_{[t,t+T]}, X_{[t,t+T]}, M_{[t,t+T]}\big)
  + \mathcal{L}\big(\hat{Y}^{(1),\mathrm{fwd}}_{[t,t+T]}, X_{[t,t+T]}, M_{[t,t+T]}\big)
  + \mathcal{L}\big(\hat{Y}^{(2),\mathrm{fwd}}_{[t,t+T]}, X_{[t,t+T]}, M_{[t,t+T]}\big)
  + \mathcal{L}\big(\hat{Y}^{(1),\mathrm{bwd}}_{[t,t+T]}, X_{[t,t+T]}, M_{[t,t+T]}\big)
  + \mathcal{L}\big(\hat{Y}^{(2),\mathrm{bwd}}_{[t,t+T]}, X_{[t,t+T]}, M_{[t,t+T]}\big)
where each L(·,·,·) is of the form of Eq. 1 and the element-wise error function is the MAE. Note that here we are using X_[t,t+T] and M_[t,t+T] instead of X̃_[t,t+T] and M̄_[t,t+T], i.e., the loss is computed on the available data rather than on the held-out ground truth, which is never accessible during training.
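Assembled in code, the objective might look as follows (a sketch; the masked MAE follows Eq. 1 and all names are illustrative assumptions):

    import torch

    def masked_mae(y_hat, y, mask):
        # Masked MAE of the form of Eq. 1 (cf. the sketch in Section 3).
        return (torch.abs(y_hat - y) * mask).sum() / mask.sum().clamp(min=1)

    def grin_loss(y, y1_fwd, y2_fwd, y1_bwd, y2_bwd, x, m):
        """Sum of masked MAE terms over the final and all intermediate imputations."""
        return sum(masked_mae(out, x, m) for out in (y, y1_fwd, y2_fwd, y1_bwd, y2_bwd))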
For BRITS, we use the same network hyperparameters of Cao et al. (2018) for the AQI-36 dataset.
To account for the larger input dimension, for the other datasets we increase the number of hidden neurons in the RNN cells to 128 for AQI/METR-LA and 256 for PEMS-BAY/CER-E. The number of neurons was tuned on the validation sets. For rGAIN we use the same number of units in the cells of the bidirectional RNN used by BRITS, but we concatenate a random vector (sampled from a uniform distribution) of dimension z = 4 to the input vector in order to model the sampling of the data generating process. To obtain predictions, we average out the outputs of k = 5 forward passes. For VAR we used an order of 5 and trained the model with SGD. Since the VAR model needs the past 5 observations to predict the next step, we pad each sequence using the mean for each channel. Here we used a batch size of 64 and a learning rate of 0.0005. The order was selected with a small search in the range [2, 12]: we found an order of 5 to be ideal for all the considered datasets. For GRIN we use the same hyperparameters in all the datasets: a hidden dimension of 64 neurons for both the spatio-temporal encoder and the spatial decoder and of 64 neurons for the MLP. We use diffusion convolution as the message-passing operation, with a diffusion step k = 2 in the spatio-temporal encoder and k = 1 in the spatial decoder. Note that, due to the architectural differences, the other neural network baselines have a number of parameters that is far higher than GRIN's (depending on the considered dataset, up to ≈ 4M against ≈ 200K). For MPGRU we use the same hyperparameters as GRIN (64 units for both the spatio-temporal encoder and the decoder).
For data processing, we use the same steps of Li et al. (2018): data are normalized across the feature dimension (which means graph-wise for GRIN and node-wise for BRITS/rGAIN/VAR). Data masked out for evaluation are never used to train any model. The implementation of the diffusion convolutional operator was adapted from the Graph-WaveNet codebase. For the implementation of BRITS, we used the code provided by the authors. The code to reproduce the experiments of the paper is available online.

APPENDIX B DATASETS

In this appendix, we provide more details on the datasets that we used to run experiments. Tab. 4 shows detailed statistics for the graph structure associated with each dataset, while Fig. 4 shows the corresponding adjacency matrices. Tab. 5 shows missing data statistics. In the following subsections, we go deeper into details for each dataset.

B.1 AIR QUALITY

Air pollution is nowadays a ubiquitous problem. The Urban Computing project (Zheng et al., 2014; 2015) published several datasets containing real measurements of different indices affecting human life in urban spaces. We consider as benchmark the dataset regarding the air quality index (AQI). The complete dataset contains hourly measurements of six pollutants from 437 air quality monitoring stations, spread over 43 cities in China, over a period of one year (from May 2014 to April 2015). Prior works on imputation (Yi et al., 2016; Cao et al., 2018) considered a reduced version of this dataset, including only 36 sensors (AQI-36). This dataset is particularly interesting as a benchmark for imputation due to the high rate of missing values (25.7% in AQI and 13.2% in AQI-36). Along with Yi et al. (2016), we consider as the test set the months of March, June, September and December. We consider both the in-sample and out-of-sample scenarios. In the latter case, we do not consider windows overlapping with any of the test months. We use the same procedure of Yi et al. (2016) to simulate the presence of missing data for evaluation. We select windows of data of length T = 24 for AQI and T = 36 for AQI-36 (in line with Cao et al. (2018)). To evaluate the imputation performances, we mask out from the test set and use as ground truth the value x^{i,j}_t if: (1) the value is not missing (m^{i,j}_t = 1) and (2) the value is missing at the same hour and day in the following month. Besides air quality readings, the dataset provides geographic coordinates of each monitoring station. To obtain an adjacency matrix from the geographic distances between nodes, we use a thresholded Gaussian kernel (Shuman et al., 2013): the weight w^{i,j}_t = w^{i,j} of the edge connecting the i-th and j-th node is
w^{i,j} = \begin{cases} \exp\left(-\dfrac{\mathrm{dist}(i,j)^2}{\gamma}\right) & \mathrm{dist}(i,j) \le \delta \\ 0 & \text{otherwise} \end{cases}      (12)
where dist ( · , · ) is the geographical distance operator, γ controls the width of the kernel and δ is the threshold. We set γ to the standard deviation of geographical distances in AQI-36 in both datasets. We set δ so that it corresponds to a distance of ≈ 40 km.
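In code, Eq. 12 amounts to the following (a NumPy sketch; dist is assumed to be a precomputed matrix of pairwise geographical distances in km):

    import numpy as np

    def gaussian_kernel_adjacency(dist, gamma, delta=40.0):
        """Thresholded Gaussian kernel of Eq. 12 over a pairwise distance matrix."""
        adj = np.exp(-dist ** 2 / gamma)   # gamma controls the width of the kernel
        adj[dist > delta] = 0.0            # drop edges beyond the distance threshold
        return adj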
B.2 TRAFFIC
The study of traffic networks is key for the development of intelligent transportation systems and a relevant application field for network science. While previous works (Yu et al., 2017; Wu et al., 2019; Shang et al., 2020) have assessed spatio-temporal deep learning methods on the traffic forecasting task, we focus on reconstruction. We use as benchmark the PEMS-BAY and METR-LA datasets from Li et al. (2018). PEMS-BAY contains 6 months of data from 325 traffic sensors in the San Francisco Bay Area, while METR-LA contains 4 months of sensor readings from 207 detectors in the Los Angeles County Highway (Jagadish et al., 2014); for both datasets, the sampling rate corresponds to 5 minutes.
C ADDITIONAL RESULTS
In this appendix, we show an additional experiment in a controlled environment, comparison against additional baselines, additional ablation studies, and sensitivity analyses.
C.1 SYNTHETIC DATA
In this experiment, we test our method on the simulated particle system introduced by 6 . We simulate the trajectories of N = 10 particles in a (10 × 10) box with elastic collision. Each particle carries a either positive or negative charge q ∈ {1, −1}. Two particles attract each other if they have opposite sign, otherwise they repel. Interaction forces between two particles are ruled by Coulomb's law. We collect two datasets, each containing 5000 independent simulations of T = 36 steps each. In the first dataset, particles have the same charge in every simulation. In the second one, we sample the charges uniformly at random at the beginning of every simulation. In both scenarios, the initial location and velocity of the particles are drawn randomly. At each step, we randomly remove blocks of consecutive readings with a probability p f ailure = 2.5% and a length sampled uniformly from the interval [4,9]. Here, a reading consists of the (x, y) coordinates of the particle's position. We further mask out 2.5% of positions at random. The percentage of values not masked out is ≈ 74%. For evaluation purposes, we generate another mask using the same missing data distribution and use the masked values as ground-truth for evaluation. We split dataset in training/validation/test folds using 70%/10%/20% splits, respectively. We test our method (GRIN) and BRITS in both synthetic datasets. We use 32 units for the hidden layer of BRITS (≈ 25K parameters) and 16 units for both the encoder and decoder of GRIN (≈ 10K parameters). Results are reported in Tab. 6. Both the methods take as input only the particles' positions, with no information about the charges. As can be seen, consistently with what observed by , relational representations are impressively effective in this scenario. Our method outperforms the baseline by more than an order of magnitude in terms of MSE. Surprisingly, BRITS is more accurate in the setting with varying charge. Our hypothesis is that the added stochasticity acts as a regularization and forces BRITS to learn a more general model. As mentioned in Section 2, several matrix factorization approaches -often studied in the context of recommender systems -can be regularized by considering priors on the spatio-temporal struc-ture of the data. Intuitively, spatial regularization is achieved by imposing soft constraints on the smoothness of the interpolated function w.r.t. nodes of an underlying graph (Cai et al., 2010;Rao et al., 2015). Temporal regularization can be obtained by imposing analogous constraints modelling temporal dependencies as -eventually weighted -edges of a graph. In temporal regularized matrix factorization (TRMF; , similarly, coefficients of an autoregressive model are used as temporal regularizer.
C.2 EMPIRICAL COMPARISON AGAINST MATRIX FACTORIZATION WITH SIDE INFORMATION

As mentioned in Section 2, several matrix factorization approaches - often studied in the context of recommender systems - can be regularized by considering priors on the spatio-temporal structure of the data. Intuitively, spatial regularization is achieved by imposing soft constraints on the smoothness of the interpolated function w.r.t. the nodes of an underlying graph (Cai et al., 2010; Rao et al., 2015). Temporal regularization can be obtained by imposing analogous constraints, modeling temporal dependencies as (possibly weighted) edges of a graph. In temporal regularized matrix factorization (TRMF; Yu et al., 2016), similarly, the coefficients of an autoregressive model are used as a temporal regularizer.
Tab. 7 shows a comparison of different matrix factorization approaches for imputation on the air quality datasets (for which we considered the in-sample setting in Section 5). For TRMF we used an implementation adapted from the Transdim repository 7, while for graph regularized matrix factorization (GRMF) we use a custom implementation of the method proposed by Cai et al. (2010). We fixed the rank to 10 (the same value used in all the experiments for standard MF) and tuned the regularization coefficients on a validation set. Results show that introducing spatial and temporal regularization improves over vanilla MF; however, deep learning methods - and even linear VAR predictors - achieve far superior reconstruction accuracy here. Arguably, low-rank approximation methods might instead have an edge in a low-data regime; this type of analysis is, however, outside the scope of this work.
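As a rough illustration of the kind of temporal regularization discussed above, the sketch below runs gradient steps on a TRMF-style objective with a fixed AR(1) coefficient per latent factor; in the actual TRMF the autoregressive coefficients are learned as well, so this is only a didactic approximation with names of our choosing.

```python
import numpy as np

rng = np.random.default_rng(0)

def trmf_step(Y, M, F, X, w, lam=0.1, eta=0.01):
    """One gradient step of a TRMF-style objective (sketch):
        || M * (Y - F X) ||^2  +  lam * sum_t || x_t - w * x_{t-1} ||^2
    Y is (N, T), F is (N, r) spatial factors, X is (r, T) temporal
    factors, w holds per-factor AR(1) coefficients (fixed here).
    """
    R = M * (F @ X - Y)                    # masked reconstruction error
    grad_F = R @ X.T
    grad_X = F.T @ R
    # AR(1) temporal regularizer on the columns of X.
    diff = X[:, 1:] - w[:, None] * X[:, :-1]
    grad_X[:, 1:] += lam * diff
    grad_X[:, :-1] -= lam * w[:, None] * diff
    return F - eta * grad_F, X - eta * grad_X

N, T, r = 20, 50, 4
Y = rng.normal(size=(N, T)); M = rng.random((N, T)) < 0.8
F = rng.normal(size=(N, r)) * 0.1
X = rng.normal(size=(r, T)) * 0.1
w = np.full(r, 0.9)
for _ in range(200):
    F, X = trmf_step(Y, M, F, X, w)
print("masked MSE:", np.mean((M * (Y - F @ X)) ** 2))
```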
C.3 SCALABILITY
With reference to a standard bidirectional GRU, using MPNNs to implement the cell's gates increases the computational complexity by a factor that scales with the number of edges O(E) - if using an efficient sparse implementation - or with the number of nodes squared O(N^2). Luckily, this overhead can be amortized, as most of the computation can be parallelized. Research on scalable and memory-efficient GNNs is a very active field (e.g., Hamilton et al., 2017): depending on the task, the designer can opt for message-passing operators that meet the application requirements in terms of performance, time and space constraints.
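To make the O(E) cost concrete, here is a minimal sparse message-passing step in plain PyTorch: one message per edge, scattered into the target nodes. This is an illustrative sketch, not the diffusion-convolution operator used in our experiments.

```python
import torch

def sparse_message_passing(x, edge_index, edge_weight):
    """One message-passing step with cost O(E):
    every edge (j -> i) contributes edge_weight * x[j] to node i.
    `edge_index` is a (2, E) tensor of (source, target) pairs.
    """
    src, dst = edge_index
    messages = edge_weight.unsqueeze(-1) * x[src]   # (E, F): one message per edge
    out = torch.zeros_like(x)
    out.index_add_(0, dst, messages)                # scatter-sum into targets
    return out

# Toy graph: 4 nodes, 3 directed edges.
x = torch.arange(8, dtype=torch.float32).view(4, 2)
edge_index = torch.tensor([[0, 1, 2],
                           [1, 2, 3]])
edge_weight = torch.tensor([0.5, 1.0, 2.0])
print(sparse_message_passing(x, edge_index, edge_weight))
```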
C.4 ABLATION STUDY
Here we provide two different ablation studies, the first one on the architecture of GRIN and the second one on the graph structure.
C.4.1 ARCHITECTURAL ABLATIONS

Tab. 8 shows additional results for the ablation study presented in Section 5. Consistent with what we already observed, the spatial decoder and the bidirectional architecture improve performance and appear particularly relevant in settings with blocks of missing data.

C.4.2 GRAPH STRUCTURE ABLATIONS

Results are shown in Tab. 9, with the performance of BRITS reported as a reference. It is clear that the constraints posed by the graph structure do have an impact on the accuracy of missing data imputation and, at the same time, that spatial information is relevant for the task.

Finally, we assess performance degradation w.r.t. the amount of missing data. Before discussing the results, a few remarks are worth bringing up regarding imputation in highly sparse settings. First, GRIN, as well as a large portion of the state-of-the-art baselines, is an autoregressive model, which means that it might be subject to error accumulation over long time horizons. Furthermore, consistently with Section 5, we consider the out-of-sample setting, which is particularly challenging in the sparse data regime. That being said, GRIN achieves remarkable performance also in this benchmark.
We train one model each for GRIN and BRITS by randomly masking out 60% of the input data in each batch during training; then, we run the models on the test set using evaluation masks with increasing sparsity (note that this causes a distribution shift in evaluation). For each level of sparsity, evaluation is repeated 5 times by sampling different evaluation masks. Results are reported in Tab. 10 and Fig. 5; GRIN outperforms BRITS in all the considered scenarios.
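The evaluation protocol can be summarized by the following sketch; it is an illustrative NumPy version with a simple mean imputer standing in for the trained model, and all names and the toy model are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate_at_sparsity(model, x, valid_mask, missing_rates, n_repeats=5):
    """Evaluate an imputation `model` under increasing evaluation sparsity.

    For each rate, `n_repeats` random evaluation masks are sampled over the
    valid entries; masked values are hidden from the model and used as
    ground truth (MAE reported as mean +/- std over the repeats).
    """
    results = {}
    for rate in missing_rates:
        maes = []
        for _ in range(n_repeats):
            eval_mask = valid_mask & (rng.random(x.shape) < rate)
            x_in = np.where(eval_mask, np.nan, x)      # hide evaluation targets
            x_hat = model(x_in)
            maes.append(np.abs(x_hat[eval_mask] - x[eval_mask]).mean())
        results[rate] = (np.mean(maes), np.std(maes))
    return results

# Toy check with a column-mean imputer as the "model".
x = rng.normal(size=(100, 5))
mean_imputer = lambda xin: np.where(np.isnan(xin), np.nanmean(xin, 0), xin)
print(evaluate_at_sparsity(mean_imputer, x, np.ones_like(x, bool), [0.1, 0.5, 0.9]))
```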
Figure 1: Representation of a multivariate time series as a sequence of graphs. Red circles denote nodes with missing values; nodes are identified across time steps.
Figure 2: An overview of the bidirectional architecture.
We evaluate the models in terms of mean absolute error (MAE), mean squared error (MSE) and mean relative error (MRE; Cao et al., 2018) computed over the imputation window. For all the experiments, we use as message-passing operator the diffusion convolution introduced by Atwood & Towsley (2016). We consider BRITS (Cao et al., 2018) as the principal competing alternative among non-adversarial deep autoregressive approaches, as it shares architectural similarities with our methods. As additional baselines we consider: 1) MEAN, i.e., imputation using the node-level average; 2) KNN, i.e., imputation by averaging the values of the k = 10 neighboring nodes with the highest weight in the adjacency matrix W_t; 3) MICE (White et al., 2011), limiting the maximum number of iterations to 100 and the number of nearest features to 10; 4) Matrix Factorization (MF) with rank = 10; 5) VAR, i.e., a vector autoregressive one-step-ahead predictor; 6) rGAIN, i.e., an unsupervised version of SSGAN (Miao et al., 2021), which can be seen as GAIN (Yoon et al., 2018a) with a bidirectional recurrent encoder and decoder; 7) MPGRU, a one-step-ahead GNN-based predictor similar to DCRNN (Li et al., 2018).
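For reference, the metrics above can be computed with a few lines of NumPy; the MRE normalization shown here (sum of absolute errors over sum of absolute targets on the evaluation mask) is our reading of the convention of Cao et al. (2018).

```python
import numpy as np

def masked_metrics(y_hat, y, eval_mask):
    """MAE, MSE and MRE computed only on the entries marked for evaluation."""
    err = (y_hat - y)[eval_mask]
    mae = np.abs(err).mean()
    mse = np.square(err).mean()
    mre = np.abs(err).sum() / np.abs(y[eval_mask]).sum()
    return mae, mse, 100.0 * mre

y = np.array([[1.0, 2.0], [3.0, 4.0]])
y_hat = np.array([[1.1, 2.0], [2.5, 4.4]])
mask = np.array([[True, False], [True, True]])
print(masked_metrics(y_hat, y, mask))  # (MAE, MSE, MRE %)
```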
Figure 3: Reconstruction of observations from sensors removed from the training set. The plots show that GRIN might be used for virtual sensing.
All the models were developed in Python (Van Rossum & Drake, 2009) using the following open-source libraries:
• PyTorch (Paszke et al., 2019);
• numpy (Harris et al., 2020);
• Neptune 1 (neptune.ai, 2021);
• scikit-learn (Pedregosa et al., 2011);
• fancyimpute (Rubinsteyn & Feldman).
Figure 4: Adjacency matrices of the different datasets.
Figure 5: Graphical representation of the results in Tab. 10.
• Traffic: we consider the PEMS-BAY and METR-LA datasets, containing data from traffic sensors in the San Francisco Bay Area and the Los Angeles County Highway. We use the same approach as Wu et al. (2019) to obtain an adjacency matrix.
• Smart grids: we consider data from the Irish Commission for Energy Regulation Smart Metering Project (CER-E; Commission for Energy Regulation, 2016).

Table 1: Results on the air datasets. Performance averaged over 5 runs.
Columns: In-sample (MAE, MSE, MRE %) / Out-of-sample (MAE, MSE, MRE %).

AQI-36:
Mean    53.48±0.00  4578.08±00.00  76.77±0.00  /  53.48±0.00  4578.08±00.00   76.77±0.00
KNN     30.21±0.00  2892.31±00.00  43.36±0.00  /  30.21±0.00  2892.31±00.00   43.36±0.00
MF      30.54±0.26  2763.06±63.35  43.84±0.38  /  -           -               -
MICE    29.89±0.11  2575.53±07.67  42.90±0.15  /  30.37±0.09  2594.06±07.17   43.59±0.13
VAR     13.16±0.21   513.90±12.39  18.89±0.31  /  15.64±0.08   833.46±13.85   22.02±0.11
rGAIN   12.23±0.17   393.76±12.66  17.55±0.25  /  15.37±0.26   641.92±33.89   21.63±0.36
BRITS   12.24±0.26   495.94±43.56  17.57±0.38  /  14.50±0.35   662.36±65.16   20.41±0.50
MPGRU   12.46±0.35   517.21±41.02  17.88±0.50  /  16.79±0.52  1103.04±106.83  23.63±0.73
GRIN    10.51±0.28   371.47±17.38  15.09±0.40  /  12.08±0.47   523.14±57.17   17.00±0.67

AQI:
Mean    39.60±0.00  3231.04±00.00  59.25±0.00  /  39.60±0.00  3231.04±00.00   59.25±0.00
KNN     34.10±0.00  3471.14±00.00  51.02±0.00  /  34.10±0.00  3471.14±00.00   51.02±0.00
MF      26.74±0.24  2021.44±27.98  40.01±0.35  /  -           -               -
MICE    26.39±0.13  1872.53±15.97  39.49±0.19  /  26.98±0.10  1930.92±10.08   40.37±0.15
VAR     18.13±0.84   918.68±56.55  27.13±1.26  /  22.95±0.30  1402.84±52.63   33.99±0.44
rGAIN   17.69±0.17   861.66±17.49  26.48±0.25  /  21.78±0.50  1274.93±60.28   32.26±0.75
BRITS   17.24±0.13   924.34±18.26  25.79±0.20  /  20.21±0.22  1157.89±25.66   29.94±0.33
MPGRU   15.80±0.05   816.39±05.99  23.63±0.08  /  18.76±0.11  1194.35±15.23   27.79±0.16
GRIN    13.10±0.08   615.80±10.09  19.60±0.11  /  14.73±0.15   775.91±28.49   21.82±0.23
Table 2: Results on the traffic and smart grids datasets. Performance averaged over 5 runs.
Columns: Block missing (MAE, MSE, MRE %) / Point missing (MAE, MSE, MRE %).

PEMS-BAY:
Mean    5.46±0.00   87.56±0.00   8.75±0.00  /  5.42±0.00   86.59±0.00   8.67±0.00
KNN     4.30±0.00   49.90±0.00   6.90±0.00  /  4.30±0.00   49.80±0.00   6.88±0.00
MF      3.28±0.01   50.14±0.13   5.26±0.01  /  3.29±0.01   51.39±0.64   5.27±0.02
MICE    2.94±0.02   28.28±0.37   4.71±0.03  /  3.09±0.02   31.43±0.41   4.95±0.02
VAR     2.09±0.10   16.06±0.73   3.35±0.16  /  1.30±0.00    6.52±0.01   2.07±0.01
rGAIN   2.18±0.01   13.96±0.20   3.50±0.02  /  1.88±0.02   10.37±0.20   3.01±0.04
BRITS   1.70±0.01   10.50±0.07   2.72±0.01  /  1.47±0.00    7.94±0.03   2.36±0.00
MPGRU   1.59±0.00   14.19±0.11   2.56±0.01  /  1.11±0.00    7.59±0.02   1.77±0.00
GRIN    1.14±0.01    6.60±0.10   1.83±0.02  /  0.67±0.00    1.55±0.01   1.08±0.00

METR-LA:
Mean    7.48±0.00  139.54±0.00  12.96±0.00  /  7.56±0.00  142.22±0.00  13.10±0.00
KNN     7.79±0.00  124.61±0.00  13.49±0.00  /  7.88±0.00  129.29±0.00  13.65±0.00
MF      5.46±0.02  109.61±0.78   9.46±0.04  /  5.56±0.03  113.46±1.08   9.62±0.05
MICE    4.22±0.05   51.07±1.25   7.31±0.09  /  4.42±0.07   55.07±1.46   7.65±0.12
VAR     3.11±0.08   28.00±0.76   5.38±0.13  /  2.69±0.00   21.10±0.02   4.66±0.00
rGAIN   2.90±0.01   21.67±0.15   5.02±0.02  /  2.83±0.01   20.03±0.09   4.91±0.01
BRITS   2.34±0.01   17.00±0.14   4.05±0.01  /  2.34±0.00   16.46±0.05   4.05±0.00
MPGRU   2.57±0.01   25.15±0.17   4.44±0.01  /  2.44±0.00   22.17±0.03   4.22±0.00
GRIN    2.03±0.00   13.26±0.05   3.52±0.01  /  1.91±0.00   10.41±0.03   3.30±0.00

CER-E:
Mean    1.49±0.00    5.96±0.00  72.47±0.00  /  1.51±0.00    6.09±0.00  71.51±0.00
KNN     1.15±0.00    6.53±0.00  56.11±0.00  /  1.22±0.00    7.23±0.00  57.71±0.00
MF      0.97±0.01    4.38±0.06  47.20±0.31  /  1.01±0.01    4.65±0.07  47.87±0.36
MICE    0.96±0.01    3.08±0.03  46.65±0.44  /  0.98±0.00    3.21±0.04  46.59±0.23
VAR     0.64±0.03    1.75±0.06  31.21±1.60  /  0.53±0.00    1.26±0.00  24.94±0.02
rGAIN   0.74±0.00    1.77±0.02  36.06±0.14  /  0.71±0.00    1.62±0.02  33.45±0.16
BRITS   0.64±0.00    1.61±0.01  31.05±0.05  /  0.64±0.00    1.59±0.01  30.07±0.11
MPGRU   0.53±0.00    1.84±0.01  25.88±0.09  /  0.41±0.00    1.22±0.01  19.51±0.03
GRIN    0.42±0.00    1.07±0.01  20.24±0.04  /  0.29±0.00    0.53±0.00  13.71±0.03
Table 3: Ablation study. Averages over 5 runs.

Model            AQI          METR-LA     CER-E
GRIN             14.73±0.15   2.03±0.00   0.29±0.00
w/o sp. dec.     15.40±0.14   2.32±0.01   0.29±0.00
w/ denoise dec.  17.23±1.12   2.96±0.18   0.32±0.00
MPGRU            18.76±0.11   2.57±0.01   0.41±0.00
Table 4: Statistics on adjacency matrices used in the experiments. Self-loops are excluded.

Dataset    Graph type   Nodes  Edges  Neighbors (mean / median)  Isolated nodes
AQI        undirected   437    2699   12.35 / 9.0                14
CER-E      directed     485    4365   9.0 / 9.0                  0
PEMS-BAY   directed     325    2369   7.29 / 7.0                 12
METR-LA    directed     207    1515   7.32 / 7.0                 5
B DATASETS
Table 5: Statistics on missing data distribution. (P) and (B) indicate the Point Missing and Block Missing settings, respectively. With block, we refer to missing data bursts longer than 2 time steps and shorter than or equal to 48.
Columns: Original data (% missing, avg. block, median block); Injected faults (%, avg. block, median block).

AQI        original: 25.67, 6.69, 4.0    injected: 10.67, 7.59, 4.0
AQI-36     original: 13.24, 7.24, 4.0    injected: 11.33, 6.52, 4.0
PEMS-BAY   original: 0.02, 12.0, 12.0    injected (P): 25.0, 3.33, 3.0    injected (B): 9.07, 27.26, 28.0
METR-LA    original: 8.10, 12.44, 9.0    injected (P): 23.00, 3.33, 3.0   injected (B): 8.4, 25.68, 26.0
CER-E      original: 0.04, 48.0, 48.0    injected (P): 24.97, 3.33, 3.0   injected (B): 8.38, 22.45, 21.0
B.1 AIR QUALITY
Table 6: Results on the synthetic datasets. Performance averaged over 5 runs.

Model     Fixed charge (MAE, MSE)            Varying charge (MAE, MSE)
BRITS     0.1203±0.0003   0.0878±0.0002      0.1089±0.0007   0.0840±0.0001
GRIN      0.0500±0.0055   0.0061±0.0010      0.0530±0.0092   0.0074±0.0033
Improv.   2.41×           14.39×             2.05×           11.35×
Table 7: Comparison of regularized matrix factorization methods on air quality datasets. Results averaged over 5 independent runs.

Model   AQI-36 (MAE, MSE, MRE %)                  AQI (MAE, MSE, MRE %)
MF      30.54±0.26  2763.06±63.35  43.84±0.38     26.74±0.24  2021.44±27.98  40.01±0.35
GRMF    19.29±0.39  1054.48±40.79  27.68±0.56     26.38±0.32  2031.21±72.10  39.48±0.48
TRMF    15.97±0.14  1178.65±60.14  22.92±0.20     21.86±0.28  1516.81±45.53  32.71±0.42
GRIN    10.51±0.28   371.47±17.38  15.09±0.40     13.10±0.08   615.80±10.09  19.60±0.11
Table 8: Ablation study. MAE averaged over 5 runs. (P) and (B) indicate the Point Missing and Block Missing settings, respectively.
Table 9: Performance with different adjacency matrices on METR-LA (B). Results averaged over 5 runs. (B) indicates the Block Missing setting.

Method            MAE         MSE          MRE (%)
GRIN              2.03±0.00   13.26±0.05   3.52±0.01
fully connected   2.63±0.01   27.37±0.38   4.56±0.02
no edges          3.42±0.04   51.68±0.71   5.93±0.08
BRITS             2.34±0.01   17.00±0.14   4.05±0.01
C.5 SENSITIVITY ANALYSIS
Table 10: Performance with different amounts of missing data on METR-LA (P). Results averaged over 5 different evaluation masks in the out-of-sample setting. (P) indicates the Point Missing setting. MAE is reported for increasing fractions of missing data (left to right).

GRIN   1.87±0.01  1.90±0.00  1.94±0.00  1.98±0.00  2.04±0.00  2.11±0.00  2.22±0.00  2.40±0.00  2.84±0.00
BRITS  2.32±0.01  2.34±0.00  2.36±0.00  2.40±0.00  2.47±0.00  2.57±0.01  2.76±0.00  3.08±0.00  4.02±0.01
r_t^i = \sigma\left(\mathrm{MPNN}\left(\left[x_t^{i(2)} \,\|\, m_t^i \,\|\, h_{t-1}^i\right], W_t\right)\right) \quad (3)

u_t^i = \sigma\left(\mathrm{MPNN}\left(\left[x_t^{i(2)} \,\|\, m_t^i \,\|\, h_{t-1}^i\right], W_t\right)\right) \quad (4)

c_t^i = \tanh\left(\mathrm{MPNN}\left(\left[x_t^{i(2)} \,\|\, m_t^i \,\|\, r_t^i \odot h_{t-1}^i\right], W_t\right)\right) \quad (5)

h_t^i = u_t^i \odot h_{t-1}^i + (1 - u_t^i) \odot c_t^i \quad (6)

where r_t^i and u_t^i are the reset and update gates, respectively, h_t^i is the hidden representation of the i-th node at time t, and x_t^{i(2)} is the output of the decoding block at the previous time step (see next paragraph). The symbols ⊙ and ‖ denote the Hadamard product and the concatenation operator, respectively. The initial representation H_{t-1} can either be initialized as a constant or with a learnable embedding. Note that for the steps where input data are missing, the encoder is fed with predictions from the decoder block, as explained in the next subsection. By carrying out the above computation time- and node-wise, we get the encoded sequence H_{[t,t+T]}. A minimal sketch of such a cell is given below.
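The following self-contained PyTorch sketch implements a GRU cell whose gates are message-passing layers, mirroring Eqs. (3)-(6); the one-hop MPNN used here is a placeholder for the actual diffusion convolution, and all class and variable names are ours.

```python
import torch
import torch.nn as nn

class MPNN(nn.Module):
    """Minimal one-hop message-passing layer: self transform + neighbor mix."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.self_lin = nn.Linear(in_dim, out_dim)
        self.nbr_lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):             # x: (N, in_dim), adj: (N, N)
        return self.self_lin(x) + self.nbr_lin(adj @ x)

class GraphGRUCell(nn.Module):
    """GRU cell whose gates are MPNNs, following Eqs. (3)-(6) above."""
    def __init__(self, x_dim, h_dim):
        super().__init__()
        in_dim = x_dim + x_dim + h_dim     # [x^(2) || m || h]
        self.reset = MPNN(in_dim, h_dim)
        self.update = MPNN(in_dim, h_dim)
        self.cand = MPNN(in_dim, h_dim)

    def forward(self, x2, m, h, adj):
        inp = torch.cat([x2, m, h], dim=-1)
        r = torch.sigmoid(self.reset(inp, adj))                           # Eq. (3)
        u = torch.sigmoid(self.update(inp, adj))                          # Eq. (4)
        c = torch.tanh(self.cand(torch.cat([x2, m, r * h], dim=-1), adj)) # Eq. (5)
        return u * h + (1 - u) * c                                        # Eq. (6)

N, x_dim, h_dim = 5, 2, 8
cell = GraphGRUCell(x_dim, h_dim)
adj = torch.rand(N, N)
h = cell(torch.rand(N, x_dim), torch.rand(N, x_dim), torch.zeros(N, h_dim), adj)
print(h.shape)  # torch.Size([5, 8])
```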
1 https://neptune.ai/
2 https://github.com/nnzhan/Graph-WaveNet
3 https://github.com/caow13/BRITS
4 https://github.com/Graph-Machine-Learning-Group/grin
5 https://www.ucd.ie/issda/data/commissionforenergyregulationcer
6 https://github.com/ethanfetaya/NRI
7 https://github.com/xinychen/transdim
ACKNOWLEDGMENTS

This research is funded by the Swiss National Science Foundation project 200021 172671: "ALPSFORT: A Learning graPh-baSed framework FOr cybeR-physical sysTems". The authors wish to thank the Institute of Computational Science at USI for granting access to computational resources.

We use input sequences of 24 steps, which correspond to 2 hours of data. For adjacency, we use a thresholded Gaussian kernel applied to geographic distances, following previous works (Wu et al., 2019). We split the data into three folds, using 70% of them for training and the remaining 10% and 20% for validation and testing, respectively.

B.3 SMART GRIDS

We consider data from the Irish Commission for Energy Regulation (CER) Smart Metering Project 5. We select only the subset of the available smart meters monitoring the energy consumption of small and medium-sized enterprises (SMEs), i.e., 485 time series with samples acquired every 30 minutes. Note that access to the dataset can be obtained free of charge for research purposes.

We build an adjacency matrix by extracting a k-nearest neighbor graph (with k = 10) from the similarity matrix obtained by computing the week-wise correntropy (Liu et al., 2007) among time series. As in the traffic case, we use a 70%/10%/20% split for training, validation and testing, and use a window size of 24 steps. Data were normalized using standard scaling as in the previous settings, and we did not perform additional preprocessing steps.

Here we study how exploiting the relational structure of the problem affects the accuracy of the reconstruction. In particular, we run two additional experiments on the METR-LA dataset (Block Missing setting), where instead of using as adjacency matrix the thresholded kernel in Eq. 12, we use (1) a fully connected graph (W = 1) and (2) a graph with no edges (W = I). To provide node (i.e., sensor) identification, we use learnable embeddings as additional node features. Results are reported in Tab. 9.
REFERENCES

Gabriel Appleby, Linfeng Liu, and Li-Ping Liu. Kriging convolutional networks. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pp. 3187-3194, 2020.

James Atwood and Don Towsley. Diffusion-convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1993-2001, 2016.

Shaojie Bai, J. Zico Kolter, and Vladlen Koltun. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint arXiv:1803.01271, 2018.

Peter W. Battaglia, Jessica B. Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, et al. Relational inductive biases, deep learning, and graph networks. arXiv preprint arXiv:1806.01261, 2018.

Lorenzo Beretta and Alessandro Santaniello. Nearest neighbor imputation algorithms: a critical evaluation. BMC Medical Informatics and Decision Making, 16(3):197-208, 2016.

Michael M. Bronstein, Joan Bruna, Yann LeCun, Arthur Szlam, and Pierre Vandergheynst. Geometric deep learning: going beyond Euclidean data. IEEE Signal Processing Magazine, 34(4):18-42, 2017.

Deng Cai, Xiaofei He, Jiawei Han, and Thomas S. Huang. Graph regularized nonnegative matrix factorization for data representation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(8):1548-1560, 2010.

Ling Cai, Krzysztof Janowicz, Gengchen Mai, Bo Yan, and Rui Zhu. Traffic transformer: Capturing the continuity and periodicity of time series for traffic forecasting. Transactions in GIS, 24(3):736-755, 2020.

Wei Cao, Dong Wang, Jian Li, Hao Zhou, Yitan Li, and Lei Li. BRITS: bidirectional recurrent imputation for time series. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pp. 6776-6786, 2018.

Zhengping Che, Sanjay Purushotham, Kyunghyun Cho, David Sontag, and Yan Liu. Recurrent neural networks for multivariate time series with missing values. Scientific Reports, 8(1):1-12, 2018.

Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.

Andrzej Cichocki and Anh-Huy Phan. Fast local algorithms for large scale nonnegative matrix and tensor factorizations. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 92(3):708-721, 2009.

Commission for Energy Regulation. CER Smart Metering Project - Electricity Customer Behaviour Trial, 2009-2010 [dataset]. Irish Social Science Data Archive. SN: 0012-00, 2016. URL https://www.ucd.ie/issda/data/commissionforenergyregulationcer.

Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. Advances in Neural Information Processing Systems, 29:3844-3852, 2016.

James Durbin and Siem Jan Koopman. Time Series Analysis by State Space Methods. Oxford University Press, 2012.

William Fedus, Ian Goodfellow, and Andrew M. Dai. MaskGAN: Better text generation via filling in the ______. In International Conference on Learning Representations, 2018.

Fabrizio Frasca, Emanuele Rossi, Davide Eynard, Benjamin Chamberlain, Michael Bronstein, and Federico Monti. SIGN: Scalable inception graph neural networks. In ICML 2020 Workshop on Graph Representation Learning and Beyond, 2020.

Zoubin Ghahramani and Michael Jordan. Supervised learning from incomplete data via an EM approach. In J. Cowan, G. Tesauro, and J. Alspector (eds.), Advances in Neural Information Processing Systems, volume 6. Morgan-Kaufmann, 1994.

Justin Gilmer, Samuel S. Schoenholz, Patrick F. Riley, Oriol Vinyals, and George E. Dahl. Neural message passing for quantum chemistry. In International Conference on Machine Learning, pp. 1263-1272. PMLR, 2017.

Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. Advances in Neural Information Processing Systems, 27, 2014.

Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning, volume 1. MIT Press, 2016.

Clive W. J. Granger. Investigating causal relations by econometric models and cross-spectral methods. Econometrica: Journal of the Econometric Society, pp. 424-438, 1969.

William L. Hamilton, Rex Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Proceedings of the 31st International Conference on Neural Information Processing Systems, pp. 1025-1035, 2017.

Charles R. Harris, K. Jarrod Millman, Stéfan J. van der Walt, Ralf Gommers, Pauli Virtanen, David Cournapeau, Eric Wieser, Julian Taylor, Sebastian Berg, Nathaniel J. Smith, et al. Array programming with NumPy. Nature, 585(7825):357-362, 2020.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Hosagrahar V. Jagadish, Johannes Gehrke, Alexandros Labrinidis, Yannis Papakonstantinou, Jignesh M. Patel, Raghu Ramakrishnan, and Cyrus Shahabi. Big data and its technical challenges. Communications of the ACM, 57(7):86-94, 2014.

J. Kihoro, K. Athiany, et al. Imputation of incomplete nonstationary seasonal time series data. Mathematical Theory and Modeling, 3(12):142-154, 2013.

Thomas Kipf, Ethan Fetaya, Kuan-Chieh Wang, Max Welling, and Richard Zemel. Neural relational inference for interacting systems. In International Conference on Machine Learning, pp. 2688-2697. PMLR, 2018.

Sanmukh R. Kuppannagari, Yao Fu, Chung Ming Chueng, and Viktor K. Prasanna. Spatio-temporal missing data imputation for smart power grids. In Proceedings of the Twelfth ACM International Conference on Future Energy Systems, e-Energy '21, pp. 458-465, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450383332. doi: 10.1145/3447555.3466586. URL https://doi.org/10.1145/3447555.3466586.

Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436-444, 2015.

Yaguang Li, Rose Yu, Cyrus Shahabi, and Yan Liu. Diffusion convolutional recurrent neural network: Data-driven traffic forecasting. In International Conference on Learning Representations, 2018.

Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard S. Zemel. Gated graph sequence neural networks. In International Conference on Learning Representations, 2016.

Zachary C. Lipton, David Kale, and Randall Wetzel. Directly modeling missing data in sequences with RNNs: Improved classification of clinical time series. In Proceedings of the 1st Machine Learning for Healthcare Conference, volume 56 of Proceedings of Machine Learning Research, pp. 253-270, Northeastern University, Boston, MA, USA, 2016. PMLR.

Roderick J. A. Little and Donald B. Rubin. Statistical Analysis with Missing Data, volume 793. John Wiley & Sons, 2019.

Weifeng Liu, Puskal P. Pokharel, and Jose C. Principe. Correntropy: Properties and applications in non-Gaussian signal processing. IEEE Transactions on Signal Processing, 55(11):5286-5298, 2007.

Yukai Liu, Rose Yu, Stephan Zheng, Eric Zhan, and Yisong Yue. NAOMI: Non-autoregressive multiresolution sequence imputation. Advances in Neural Information Processing Systems, 32:11238-11248, 2019.

Yonghong Luo, Xiangrui Cai, Ying Zhang, Jun Xu, and Xiaojie Yuan. Multivariate time series imputation with generative adversarial networks. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.

Yonghong Luo, Ying Zhang, Xiangrui Cai, and Xiaojie Yuan. E²GAN: End-to-end generative adversarial network for multivariate time series imputation. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, pp. 3094-3100. International Joint Conferences on Artificial Intelligence Organization, 2019.

Jiali Mei, Yohann De Castro, Yannig Goude, and Georges Hébrail. Nonnegative matrix factorization for time series recovery from a few temporal aggregates. In International Conference on Machine Learning, pp. 2382-2390. PMLR, 2017.

Xiaoye Miao, Yangyang Wu, Jun Wang, Yunjun Gao, Xudong Mao, and Jianwei Yin. Generative semi-supervised learning for multivariate time series imputation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pp. 8983-8991, 2021.

Fulufhelo V. Nelwamondo, Shakir Mohamed, and Tshilidzi Marwala. Missing data: A comparison of neural network and expectation maximization techniques. Current Science, pp. 1514-1521, 2007.

neptune.ai. Neptune: Metadata store for MLOps, built for research and production teams that run a lot of experiments, 2021. URL https://neptune.ai.

Benjamin Paassen, Daniele Grattarola, Daniele Zambon, Cesare Alippi, and Barbara Eva Hammer. Graph edit networks. In International Conference on Learning Representations, 2020.

Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32:8026-8037, 2019.

Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830, 2011.

Nikhil Rao, Hsiang-Fu Yu, Pradeep Ravikumar, and Inderjit S. Dhillon. Collaborative filtering with graph information: Consistency and scalable methods. In NIPS, volume 2, pp. 7. Citeseer, 2015.

Emanuele Rossi, Ben Chamberlain, Fabrizio Frasca, Davide Eynard, Federico Monti, and Michael Bronstein. Temporal graph networks for deep learning on dynamic graphs. arXiv preprint arXiv:2006.10637, 2020.

Donald B. Rubin. Inference and missing data. Biometrika, 63(3):581-592, 1976.

Alex Rubinsteyn and Sergey Feldman. fancyimpute: An imputation library for Python. URL https://github.com/iskandr/fancyimpute.

Franco Scarselli, Marco Gori, Ah Chung Tsoi, Markus Hagenbuchner, and Gabriele Monfardini. The graph neural network model. IEEE Transactions on Neural Networks, 20(1):61-80, 2008.

Michael Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. Modeling relational data with graph convolutional networks. In European Semantic Web Conference, pp. 593-607. Springer, 2018.

Jürgen Schmidhuber. Deep learning in neural networks: An overview. Neural Networks, 61:85-117, 2015.

Youngjoo Seo, Michaël Defferrard, Pierre Vandergheynst, and Xavier Bresson. Structured sequence modeling with graph convolutional recurrent networks. In International Conference on Neural Information Processing, pp. 362-373. Springer, 2018.

Chao Shang, Jie Chen, and Jinbo Bi. Discrete graph structure learning for forecasting multiple time series. In International Conference on Learning Representations, 2020.

David I. Shuman, Sunil K. Narang, Pascal Frossard, Antonio Ortega, and Pierre Vandergheynst. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. IEEE Signal Processing Magazine, 30(3):83-98, 2013.

Indro Spinelli, Simone Scardapane, and Aurelio Uncini. Missing data imputation with adversarially-trained graph convolutional networks. Neural Networks, 129:249-260, 2020.

Michael L. Stein. Interpolation of Spatial Data: Some Theory for Kriging. Springer Science & Business Media, 1999.

Olga Troyanskaya, Michael Cantor, Gavin Sherlock, Pat Brown, Trevor Hastie, Robert Tibshirani, David Botstein, and Russ B. Altman. Missing value estimation methods for DNA microarrays. Bioinformatics, 17(6):520-525, 2001.

Guido Van Rossum and Fred L. Drake. Python 3 Reference Manual. CreateSpace, Scotts Valley, CA, 2009. ISBN 1441412697.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pp. 5998-6008, 2017.

Ian R. White, Patrick Royston, and Angela M. Wood. Multiple imputation using chained equations: issues and guidance for practice. Statistics in Medicine, 30(4):377-399, 2011.

Yuankai Wu, Dingyi Zhuang, Aurelie Labbe, and Lijun Sun. Inductive graph neural networks for spatiotemporal kriging. arXiv preprint arXiv:2006.07527, 2020a.

Zonghan Wu, Shirui Pan, Guodong Long, Jing Jiang, and Chengqi Zhang. Graph WaveNet for deep spatial-temporal graph modeling. arXiv preprint arXiv:1906.00121, 2019.

Zonghan Wu, Shirui Pan, Guodong Long, Jing Jiang, Xiaojun Chang, and Chengqi Zhang. Connecting the dots: Multivariate time series forecasting with graph neural networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 753-763, New York, NY, USA, 2020b. Association for Computing Machinery.

Xiuwen Yi, Yu Zheng, Junbo Zhang, and Tianrui Li. ST-MVL: Filling missing values in geo-sensory time series data. In Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI'16, pp. 2704-2710. AAAI Press, 2016. ISBN 9781577357704.

Jinsung Yoon, James Jordon, and Mihaela Schaar. GAIN: Missing data imputation using generative adversarial nets. In International Conference on Machine Learning, pp. 5689-5698. PMLR, 2018a.

Jinsung Yoon, William R. Zame, and Mihaela van der Schaar. Estimating missing data in temporal data streams using multi-directional recurrent neural networks. IEEE Transactions on Biomedical Engineering, 66(5):1477-1490, 2018b.

Jiaxuan You, Xiaobai Ma, Daisy Yi Ding, Mykel Kochenderfer, and Jure Leskovec. Handling missing data with graph representation learning. Neural Information Processing Systems (NeurIPS), 2020.

Bing Yu, Haoteng Yin, and Zhanxing Zhu. Spatio-temporal graph convolutional networks: A deep learning framework for traffic forecasting. arXiv preprint arXiv:1709.04875, 2017.

Hsiang-Fu Yu, Nikhil Rao, and Inderjit S. Dhillon. Temporal regularized matrix factorization for high-dimensional time series prediction. Advances in Neural Information Processing Systems, 29:847-855, 2016.

Daniele Zambon, Daniele Grattarola, Lorenzo Livi, and Cesare Alippi. Autoregressive models for sequences of graphs. In 2019 International Joint Conference on Neural Networks (IJCNN), pp. 1-8. IEEE, 2019.

Jiani Zhang, Xingjian Shi, Junyuan Xie, Hao Ma, Irwin King, and Dit Yan Yeung. GaAN: Gated attention networks for learning on large and spatiotemporal graphs. In 34th Conference on Uncertainty in Artificial Intelligence, UAI 2018, 2018.

Yu Zheng, Licia Capra, Ouri Wolfson, and Hai Yang. Urban computing: concepts, methodologies, and applications. ACM Transactions on Intelligent Systems and Technology (TIST), 5(3):1-55, 2014.

Yu Zheng, Xiuwen Yi, Ming Li, Ruiyuan Li, Zhangqing Shan, Eric Chang, and Tianrui Li. Forecasting fine-grained air quality based on big data. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 2267-2276, 2015.
Secure synchronization of artificial neural networks used to correct errors in quantum cryptography

Marcin Niemiec, Tymoteusz Widlarz, Miralem Mehic

AGH University of Science and Technology, al. Mickiewicza 30, 30-059 Krakow, Poland
Department of Telecommunications, Faculty of Electrical Engineering, University of Sarajevo, Zmaja od Bosne bb, 71000 Sarajevo, Bosnia and Herzegovina
VSB - Technical University of Ostrava, 17. listopadu 2172/15, 708 00 Ostrava, Czechia
Abstract-Quantum cryptography can provide a very high level of data security. However, a big challenge of this technique is errors in quantum channels. Therefore, error correction methods must be applied in real implementations. An example is error correction based on artificial neural networks. This paper considers the practical aspects of this recently proposed method and analyzes elements which influence security and efficiency. The synchronization process based on mutual learning processes is analyzed in detail. The results allowed us to determine the impact of various parameters. Additionally, the paper describes the recommended number of iterations for different structures of artificial neural networks and various error rates. All this aims to support users in choosing a suitable configuration of neural networks used to correct errors in a secure and efficient way.

Index Terms-quantum cryptography, key reconciliation, error correction, artificial neural networks
I. INTRODUCTION
The emergence and intensive development of quantum computing has put many cryptographic algorithms at risk. However, quantum physics also makes it possible to realize several cryptographic tasks. One of the most popular is quantum key distribution [1]. Unfortunately, quantum communication is not perfect, and additional solutions are required to correct any errors after key distribution over the quantum channel. Artificial neural networks can be utilized to correct these errors [2]. This recently proposed solution provides a high level of security and efficiency compared to other existing error correction methods. This paper analyzes the impact of different neural network parameters on the synchronization process. These parameters influence the number of iterations required, as well as the security and efficiency of quantum cryptography. Therefore, it is important to know which neural network schemes should be chosen and which should be avoided. Additionally, the synchronization process requires the number of iterations to be specified in advance. Therefore, a recommended number of iterations for particular neural network schemes is provided.
The paper is structured as follows. Related work is reviewed in Section 2. Section 3 presents the basics of quantum cryptography, the architecture of the tree parity machine, and error correction using this structure of artificial neural networks. Analysis of synchronization parameters including the recommended number of iterations for typical keys and error rates is described in Section 4. Section 5 concludes the paper.
II. RELATED WORK
The first quantum key distribution (QKD) protocol, introduced in 1984 by Bennett and Brassard, is BB84 [3]. This scheme uses the polarization state of a single photon to transmit information. Since then, several other protocols have been presented. One of them is the E91 protocol, introduced in 1991 by Ekert [4]. It utilizes entangled pairs of photons in the QKD process. However, some errors usually appear during data exchange over the quantum channel. Therefore, the initial QKD step is followed by a specific step: quantum bit error rate (QBER) estimation based on the acquired keys. The QBER value is usually low [5]. It must be lower than a chosen threshold used to detect an eavesdropper.
Several methods of correcting errors incurred in the quantum key distribution process have been developed. The first described method, BBBSS, was proposed in 1992 [6]. However, the most popular is the Cascade key reconciliation protocol [7], which is based on multiple random permutations. The Winnow protocol, based on the exchange of parity bits and Hamming codes, is another method of correcting errors in the raw key [8]. Its main improvement is a reduction of the communication required between the two parties. The third most popular error reconciliation scheme is the low-density parity-check (LDPC) approach. It offers a significant reduction in exchanged information; however, it introduces higher computation and memory costs than the Cascade and Winnow protocols [7].
In 2019, another method of error correction in quantum cryptography was proposed by Niemiec in [2]. The solution uses the mutual synchronization of two artificial neural networks (ANNs) to correct the errors. The tree parity machine (TPM) is proposed as the neural network architecture used in this approach. It is a well-known structure in cryptography: the synchronization of two TPMs can be used as a key exchange protocol. TPMs cannot be used as a general method to correct arbitrary errors because it is not possible to predict the final string of bits after the synchronization process. However, this is a desirable feature for shared keys, which should be random strings of bits.
III. QUANTUM CRYPTOGRAPHY SUPPORTED BY ARTIFICIAL NEURAL NETWORKS

Symmetric cryptography uses a single key to encrypt and decrypt secret messages. Let us assume that Alice and Bob, the two characters used in describing cryptographic protocols, are using symmetric encryption. The goal is to send information from Alice to Bob in a way that provides confidentiality. To achieve this, Alice and Bob need to agree on a shared secret key. Alice encrypts confidential data using the previously chosen key and Bob decrypts it using the same key. The same key is applied to encrypt and decrypt the information, hence the name: symmetric-key encryption. It is worth mentioning that only the one-time pad scheme has been proven secure, but it requires a key no shorter than the message being sent.
In general, symmetric-key encryption algorithms - for example, the Advanced Encryption Standard (AES) [9] - perform better than asymmetric-key algorithms [10]. However, symmetric-key algorithms have an important disadvantage compared to asymmetric-key schemes: the key needs to be safely distributed or established between Alice and Bob [11]. The symmetric key can be exchanged in a number of ways, including via a trusted third party or by direct exchange between the involved parties. However, both methods introduce vulnerabilities, including passive scanning of network traffic. A method where an eavesdropper can be easily detected uses quantum mechanics to establish keys between Alice and Bob: the quantum key distribution protocol.
A. Quantum key distribution
Quantum mechanics allows for secure key distribution among network users. Two main principles form the core of the security of QKD: an unknown quantum state cannot be copied [12], and a quantum state cannot be estimated without disturbing it. One of the most popular QKD protocols which uses these principles is the BB84 scheme [3].
The BB84 protocol uses photons with two polarization bases: rectilinear or diagonal. Alice encodes a string of bits using photons in randomly chosen bases. After that, all the photons are sent through a quantum channel. Bob randomly chooses a basis for each photon to decode the binary 0 or 1. Alice and Bob's bases are then compared through a public communication channel. Each bit for which both parties chose the same basis should be the same; when Bob measures a photon in a different basis than Alice, this bit is rejected. The remaining bits are the same for both parties and can be considered a symmetric key. Next, error estimation is performed: randomly chosen parts of the keys are compared to compute the QBER value. If the comparison results in a high error rate, an eavesdropper (Eve) may be trying to gain information about the exchanged photons. However, the quantum channel is not perfect, and errors are usually present due to disturbance, noise in the detectors, or other imperfections. The number of errors introduced by the quantum channel's imperfections must be considered when deciding on the maximum acceptable error rate.
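A toy simulation of the sifting and QBER estimation steps is sketched below. In practice the QBER is estimated on a sacrificed random subset of the sifted key, which is then discarded; here we simply compare the full keys for illustration, and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def bb84_sift(n_photons, p_channel_error=0.05):
    """Toy BB84 run: random bits/bases for Alice, random bases for Bob.
    Bits measured in matching bases are kept (sifting); a fraction of
    them is flipped to model channel noise, revealed by the QBER.
    """
    alice_bits = rng.integers(0, 2, n_photons)
    alice_bases = rng.integers(0, 2, n_photons)   # 0: rectilinear, 1: diagonal
    bob_bases = rng.integers(0, 2, n_photons)
    keep = alice_bases == bob_bases               # announced over the public channel
    alice_key = alice_bits[keep]
    # Bob's sifted bits equal Alice's, up to channel errors.
    flips = rng.random(alice_key.size) < p_channel_error
    bob_key = alice_key ^ flips
    qber = np.mean(alice_key != bob_key)
    return alice_key, bob_key, qber

a, b, qber = bb84_sift(10_000)
print(f"sifted key length: {a.size}, QBER: {qber:.3f}")
```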
The differences between Alice's and Bob's keys need to be corrected. Several error correction methods are known. BBBSS is the earliest scheme, proposed in [6], and is mainly based on parity checks. The most popular method is the Cascade protocol [13], an improved version of BBBSS which requires less information to be sent between Alice and Bob through the public channel. The Cascade protocol and its predecessor are based on multiple parity checks. The basic idea is that the keys are divided into blocks of a fixed size, where the number of bits in each block depends on the previously calculated QBER value. Alice and Bob compare the parities of each block, which allows them to find an odd number of errors. If errors are detected in a given block, it is split into two. The process is repeated recursively for each block until all detected errors are corrected. This concludes a single iteration, after which Alice and Bob have keys with an even number of errors or without any errors. Before performing the following iterations, the keys are scrambled and the block size is increased. The number of iterations is predetermined. As a result of this process, Alice and Bob should have the same keys. However, this is not always the case: the number of iterations or the block sizes may be chosen incorrectly and cause the error correction to fail. Additionally, the algorithm performs multiple parity checks over the public channel, which can be intercepted by an eavesdropper (Eve). As a result, Eve can construct a partial key, so Alice and Bob should discard parts of their keys to restore the lost security. This reduces the performance of the method, since the confidential keys must be shortened in the process. Another error reconciliation method is based on the mutual synchronization of artificial neural networks.
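The bisection search at the heart of BBBSS/Cascade can be sketched as follows: given a block whose parities disagree, one error is located with about log2(block size) parity exchanges. This is only the core primitive; the full Cascade protocol adds permutations, growing block sizes and backtracking across iterations.

```python
import numpy as np

def parity(bits):
    return int(np.sum(bits) % 2)

def binary_locate(alice_block, bob_block):
    """Locate one error in a block whose parities differ, via the recursive
    bisection used by BBBSS/Cascade. Returns the index of the flipped bit.
    Each loop iteration exchanges one parity bit over the public channel.
    """
    lo, hi = 0, len(alice_block)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        # Alice sends the parity of her left half; Bob compares with his.
        if parity(alice_block[lo:mid]) != parity(bob_block[lo:mid]):
            hi = mid          # the error is in the left half
        else:
            lo = mid          # otherwise it must be in the right half
    return lo

rng = np.random.default_rng(2)
alice = rng.integers(0, 2, 16)
bob = alice.copy()
bob[11] ^= 1                          # inject a single error
assert parity(alice) != parity(bob)   # parity mismatch reveals an odd error
print("located error at index:", binary_locate(alice, bob))  # -> 11
```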
B. Tree parity machine
An artificial neural network (ANN) is a computing system inspired by biological neural networks [14]. ANNs are used to recognize patterns and in many other solutions in the fields of machine learning. ANNs consist of multiple connected nodes (artificial neurons), with each neuron representing a mathematical function [15]. These nodes are divided into three types of layers: the first (input) layer, at least one hidden layer, and the output layer. The connections between neurons in each layer can be characterized by weights.
In cryptography, the most commonly used neural network is the tree parity machine (TPM) [16]; a scheme of this model is presented in Fig. 1. There are K × N input neurons, divided into K groups, and a single hidden layer with K nodes, each of which has N inputs. The TPM has a single output neuron. The connections between input neurons and hidden-layer neurons are described by integer weights W in the range [−L, L], so L is the maximum and −L the minimum weight value. The values of σ characterize the connections between the hidden-layer neurons and the output neuron. The output value of the TPM is denoted by τ.
The value of σ is calculated using the following formulas:
$$\sigma_k = \operatorname{sgn}\left(\sum_{n=1}^{N} x_{kn} \cdot w_{kn}\right) \tag{1}$$

$$\operatorname{sgn}(z) = \begin{cases} -1 & z \le 0 \\ 1 & z > 0 \end{cases} \tag{2}$$
Due to the use of the signum function above, σ can take two values: 1 or −1. The output value of the TPM is calculated as:
$$\tau = \prod_{k=1}^{K} \sigma_k \tag{3}$$
This neural network has two possible outcomes: 1 or −1.
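As a minimal NumPy sketch of Eqs. (1)-(3) (the function and variable names are ours, not from the paper):

```python
import numpy as np

def tpm_output(weights, inputs):
    """Compute the TPM output tau (Eqs. 1-3).
    weights, inputs: integer arrays of shape (K, N)."""
    fields = np.sum(inputs * weights, axis=1)   # one local field per hidden unit
    sigma = np.where(fields > 0, 1, -1)         # sgn with sgn(0) = -1, as in Eq. (2)
    tau = int(np.prod(sigma))
    return tau, sigma

K, N, L = 4, 16, 4
rng = np.random.default_rng(1)
w = rng.integers(-L, L + 1, size=(K, N))
x = rng.choice([-1, 1], size=(K, N))
print(tpm_output(w, x))
```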
Multiple learning algorithms have been proposed for the TPM structure. The most popular are Hebbian, anti-Hebbian, and random walk; the leading one is the Hebbian rule [17]. The Hebbian algorithm updates the ANN weights in the following manner:
$$w_{kn}^{*} = v_L\left(w_{kn} + x_{kn} \cdot \sigma_k \cdot \theta(\sigma_k, \tau)\right) \tag{4}$$
where θ removes the contribution of hidden-layer neurons whose value differed from τ:
$$\theta(\sigma_k, \tau) = \begin{cases} 0 & \text{if } \sigma_k \neq \tau \\ 1 & \text{if } \sigma_k = \tau \end{cases} \tag{5}$$
The v_L function ensures that the new weights stay within the range [−L, L]:
$$v_L(z) = \begin{cases} -L & \text{if } z \le -L \\ z & \text{if } -L < z < L \\ L & \text{if } z \ge L \end{cases} \tag{6}$$
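The update of Eqs. (4)-(6) translates directly into NumPy, with `np.clip` playing the role of v_L. Again, the names are illustrative.

```python
import numpy as np

def hebbian_update(weights, inputs, sigma, tau_a, tau_b, L):
    """Hebbian rule (Eqs. 4-6): update only when both machines agree (tau_a == tau_b),
    and only the rows whose hidden unit matched the common output."""
    if tau_a != tau_b:
        return weights                       # no update this iteration
    active = (sigma == tau_a)[:, None]       # theta(sigma_k, tau) as a row mask
    new_w = weights + inputs * sigma[:, None] * active
    return np.clip(new_w, -L, L)             # v_L keeps weights in [-L, L]
```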
The TPM structure allows two neural networks to learn mutually [18], based primarily on updating the weights only when the outputs of both networks are the same. The input values are random and identical for Alice's and Bob's TPMs, and they are refreshed in each iteration. The security of this process relies on the fact that the cooperating TPMs achieve convergence significantly faster than Eve's machine, which can update its weights less frequently. The TPM is most commonly used in cryptography to exchange a secret key; this usage is known as neural cryptography [19]. Alice and Bob mutually synchronize their TPMs to arrive at the same weights, which then provide a secure symmetric key.
C. Error correction based on TPMs
TPMs can be utilized during the error correction process in quantum cryptography [2]. The neural network's task is to correct all errors so that both endpoints hold the same string of confidential bits. First, Alice and Bob prepare their TPMs: the number of neurons in the hidden layer (K) and the number of input neurons (N) is determined by Alice and passed on to Bob, and the value of L must also be agreed between the users. The keys obtained with the QKD protocol are converted into integer values in the range [−L, L], which are used in the respective TPMs as the weights between the input layer and the hidden layer. Since Alice's string of bits is similar to Bob's (the QBER is usually low), the weights of the two TPMs are almost synchronized. At this point, Alice and Bob have constructed TPMs with the same structure and only a few differences in the weight values.
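As a sketch of this conversion step, the snippet below packs a corrected key into K·N integers in [−L, L]. The paper states that the smallest sufficient number of bits per weight is used, but the exact bit-to-integer mapping is not specified, so the modulo mapping here is an assumption made for illustration.

```python
import numpy as np

def bits_to_weights(bits, K, N, L):
    """Convert a key bit string into K*N integer weights in [-L, L],
    using the smallest number of bits per weight that covers 2L+1 values."""
    bits_per_weight = int(np.ceil(np.log2(2 * L + 1)))
    chunks = [bits[i:i + bits_per_weight]
              for i in range(0, K * N * bits_per_weight, bits_per_weight)]
    vals = [int("".join(map(str, c)), 2) % (2 * L + 1) - L for c in chunks]
    return np.array(vals).reshape(K, N)

key = np.random.default_rng(5).integers(0, 2, 256)   # stand-in for a sifted QKD key
print(bits_to_weights(key.tolist(), K=4, N=16, L=4))
```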
After establishing the TPM structure and converting bits to weights, the synchronization process starts. It consists of multiple iterations, repeated until Alice and Bob reach common weights. A single iteration starts with Alice choosing an input string and computing the result using her TPM. The generated input string is then passed on to Bob, who computes the output of his TPM on the received input. The results are compared: if the outputs of both TPMs match, the weights are updated; otherwise, the process is repeated with a different input string.
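Putting the pieces together, a minimal synchronization loop looks as follows; it reuses `tpm_output` and `hebbian_update` from the sketches above, and an in-memory loop stands in for the socket exchange used by the actual tool.

```python
import numpy as np

rng = np.random.default_rng(2)

def synchronize(w_alice, w_bob, L, max_iters=1000):
    """Mutual synchronization of two TPMs that start from nearly identical weights
    (the error-correction setting). Returns the iteration count on success."""
    K, N = w_alice.shape
    for it in range(1, max_iters + 1):
        x = rng.choice([-1, 1], size=(K, N))      # common random input string
        tau_a, sig_a = tpm_output(w_alice, x)
        tau_b, sig_b = tpm_output(w_bob, x)
        if tau_a == tau_b:                        # update only on matching outputs
            w_alice = hebbian_update(w_alice, x, sig_a, tau_a, tau_b, L)
            w_bob = hebbian_update(w_bob, x, sig_b, tau_b, tau_a, L)
            if np.array_equal(w_alice, w_bob):
                return it                         # weights converged: same key
    return None
```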
After an appropriate number of iterations, the TPMs are synchronized, and Alice and Bob can convert the weights back into a string of bits. The resulting bits are the same for both parties. Nevertheless, the privacy amplification process after error correction is still recommended [20]. The reduction of the key that protects Alice and Bob from information leakage is defined as [2]:
$$Z = \log_{2L+1} 2^{i} \tag{7}$$
where i is the number of TPM iterations. This use of TPMs is safer than the neural cryptography solution because the weights are already similar before synchronization; therefore, significantly fewer iterations are required to achieve convergence than with the randomly initialized weights of key-establishment algorithms. It is worth mentioning that this method of error correction is characterized by high efficiency: for example, it requires approximately 30% fewer iterations than the Cascade algorithm [2].
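As a quick worked example of Eq. (7), take L = 4 (so there are 2L + 1 = 9 possible weight values) and the 302 iterations recommended later for the 256 bit configuration:

```python
import math

L, i = 4, 302                  # weight range [-4, 4]; iteration count from Tab. I
Z = math.log(2**i, 2 * L + 1)  # Eq. (7): log base (2L+1) of 2^i
print(round(Z, 1))             # about 95.3 bits should be discarded
```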
IV. ANALYSIS OF THE SYNCHRONIZATION PROCESS
The crucial decision in the TPM-based error correction approach is the number of iterations used during the synchronization process. This value should be as low as possible for security reasons, but it cannot be too low, since otherwise the neural networks will not be able to correct all errors in the key. It is the user's responsibility to select an appropriate value for the error correction. The main objective of our analysis is to determine the impact of the various neural network parameters on the synchronization process; another goal is to provide users with a recommended number of iterations.
A. Testbed
The experiments require an application that simulates the error correction process based on artificial neural networks. Our application for correcting errors arising in quantum key distribution was written in Python and uses the NumPy package, a library for scientific computing that provides the fast array operations required by the TPM. The functions provided by NumPy cover all calculations needed to achieve neural network convergence. Synchronization of the TPMs is performed over sockets to allow real-world usage of the tool. The Hebbian learning algorithm is used for updating the weights.
The developed application makes it possible to correct errors in keys produced by quantum key distribution protocols. Users can also correct simulated keys with a chosen error rate, which helps when no strings of bits from a real QKD system are available. An important feature of the tool is the ability to select the neural network parameters: the user can personalize the synchronization process, starting from the key length and error rate. The smallest sufficient number of bits is used to translate the key into single integers (the weight values must lie in the range [−L, L]). The number of hidden neurons and the number of inputs depend on the chosen key length and L value, so users need to select these parameters according to their requirements and needs.
During the experiments, the minimum number of synchronization results collected for a single TPM configuration was set to 200. The maximum number of iterations was limited to 1000, and the maximum number of retries in a single iteration was limited to 10 to speed up the simulation process. In total, 1880 different scenarios were analyzed. All possible TPM configurations are covered for key lengths between 100 and 700 bits in steps of 100 bits; data is also available for other keys with lengths between 128 and 352 bits in steps of 8 bits. Between 350 and 500 synchronizations were performed for each TPM, a number assumed sufficient to characterize the convergence behavior.
B. Recommended number of iterations
To obtain the recommended number of TPM iterations for successful error correction, we calculated the sum of the mean and the standard deviation of the results; the median and variance were computed as well for comparison. The full results are available online 2 . A selected part, the neural network configurations with a key length of 256 bits together with the recommended numbers of iterations, is presented in Tab. I. Fig. 2 shows the histogram of the data gathered for a single neural network configuration. The distribution is right-skewed: the mean value is greater than the median. This is a common characteristic of the other tested TPM configurations; distributions that are not positively skewed are symmetrical. The recommended number of iterations for the presented configuration, according to Tab. I, equals 302, based on the sum of the mean and standard deviation values. For all presented TPM configurations, this sum gives an 84% chance of successful synchronization under the assumption of normally distributed results. For a right-skewed distribution such as the one in Fig. 2, the probability of success is higher: the 85th percentile of the given set equals 276, less than the proposed value, so after choosing the suggested number of iterations the user has more than an 88% chance of success.
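The recommendation rule itself is one line; the sample below is synthetic (a right-skewed gamma draw), used only to mimic the shape of the distribution in Fig. 2, and is not the paper's data.

```python
import numpy as np

def recommended_iterations(runs):
    """Mean plus one standard deviation of observed synchronization lengths."""
    runs = np.asarray(runs, dtype=float)
    return int(np.ceil(runs.mean() + runs.std()))

rng = np.random.default_rng(3)
sample = rng.gamma(shape=4.0, scale=55.0, size=400)   # synthetic, right-skewed
print(recommended_iterations(sample), np.percentile(sample, 85))
```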
Knowing the lowest required number of iterations is important because it reduces the risk of a successful attack by Eve. The attacker could create independent TPMs and try to synchronize one of them with Alice or Bob's machine. The recommended number of iterations increases the security of this solution because Alice and Bob require far fewer iterations to synchronize, compared to Alice (or Bob) and Eve synchronizing using random weights.
C. Impact of TPM structures
The simulation results allow us to analyze how TPM structures affect the number of iterations required during synchronization. Fig. 3 shows the number of required iterations depending on the K and N parameters for two different TPM configurations: one with a 144 bit key and another with a 216 bit key. These configurations were chosen because they have a similar number of possible (K, N) pairs; for a given key length, L value, and error rate, there is a limited number of possible N and K values, and K changes in inverse proportion to N. As Fig. 3 shows, the speed of TPM synchronization depends on the neural network structure (the N and K values). The number of required iterations increases with the number of neurons in the hidden layer (K), and the trend is similar for both presented TPMs. After a certain threshold, the number of recommended iterations increases only slowly; the results fit a logarithmic trend line. In other words, beyond a certain K value, increasing this parameter further affects the synchronization speed much less than below the threshold. Other configurations of the selected TPMs were studied as a function of the error rate of the keys. Two configurations, with 128 and 256 bit keys, were tested, and the average recommended number of iterations over every possible configuration was calculated for different QBER values. The results, presented in Fig. 4, confirm that a greater number of errors results in a higher average number of recommended iterations. This supports the applicability of TPMs for correcting errors in quantum key distribution, where the error rate should not exceed a few percent; an eavesdropper, facing a higher effective error rate, needs more iterations to synchronize its TPM.
Additionally, it was verified that the value of L has an exponential impact on the average recommended number of iterations. The data was gathered using an approach similar to the QBER study: the average recommended number of iterations of each configuration was calculated for a given L. Fig. 5 shows the exponential trend line. The impact of the L value on the synchronization time is significant. It is the user's responsibility to choose the best possible configuration for a given key length and QBER value; the analysis shows that the L value should be chosen carefully, since it affects the required number of iterations exponentially, and the K value with caution, due to its logarithmic impact.
V. SUMMARY
This paper presented an analysis of the TPM synchronization process used for error correction. It shows that the parameters of the TPM structure affect both the synchronization time and the security of this error correction method, and that different parameters of the artificial neural networks have different effects. Users should therefore be aware of how to choose the configuration of the neural networks so that errors are corrected in a secure and efficient way. One of the deciding factors is the number of iterations; the paper provides recommended numbers of iterations for different TPM structures and QBER values to assist users in this step. The recommended numbers are as low as possible while retaining a high probability of successful synchronization, ensuring secure and efficient error correction based on artificial neural networks.
Fig. 2. Histogram of the number of iterations (TPM with a 256 bit key, N = 16, K = 4, L = 4, QBER = 3%).
2 Recommended numbers of iterations for 1880 different scenarios (TPM structures and QBER values) are available from: http://kt.agh.edu.pl/ ∼ niemiec/ICC-2023. The scenarios are mainly based on key lengths varying between 128 and 500 bits in 4 bit steps; additionally, keys with lengths between 500 and 700 bits in 100 bit steps are included.
Fig. 3. Number of iterations for TPMs with 144 and 216 bit keys for different K values.
Fig. 4. Number of iterations for TPMs with 128 and 256 bit keys depending on the QBER.
Fig. 5. Number of iterations for TPMs with 128 and 256 bit keys depending on the L value.
Fig. 1. Model of the tree parity machine.
1 In fact, a key is not distributed but negotiated. However, the term 'distribution' is used throughout this paper, in line with the commonly accepted name of the technique.
ACKNOWLEDGMENT

This work was supported by the ECHO project, which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 830943.
REFERENCES

[1] S. Abidin, A. Swami, E. Ramirez-Asís, J. Alvarado-Tolentino, R. K. Maurya, and N. Hussain, "Quantum cryptography technique: A way to improve security challenges in mobile cloud computing (MCC)," Materials Today: Proceedings, vol. 51, pp. 508-514, 2022.
[2] M. Niemiec, "Error correction in quantum cryptography based on artificial neural networks," Quantum Information Processing, 2019.
[3] C. Bennett and G. Brassard, "Quantum cryptography: Public key distribution and coin tossing," Theoretical Computer Science - TCS, pp. 175-179, 1984.
[4] A. Ekert, "Quantum cryptography based on Bell's theorem," Phys. Rev. Lett., pp. 661-663, 1991.
[5] M. Khodr, "Evaluations of quantum bit error rate using the three stage multiphoton protocol," 2017 International Conference on Electrical and Computing Technologies and Applications (ICECTA), pp. 1-4, 2017.
[6] C. Bennett, F. Bessette, G. Brassard, L. Salvail, and J. Smolin, "Experimental quantum cryptography," Journal of Cryptology, pp. 3-28, 1992.
[7] M. Mehic, M. Niemiec, H. Siljak, and M. Voznak, "Error reconciliation in quantum key distribution protocols," Reversible Computation: Extending Horizons of Computing: Selected Results of the COST Action IC1405, pp. 222-236, 2020.
[8] W. T. Buttler, S. K. Lamoreaux, J. R. Torgerson, G. H. Nickel, C. H. Donahue, and C. G. Peterson, "Fast, efficient error reconciliation for quantum cryptography," Phys. Rev. A, 2003.
[9] H. Delfs and H. Knebl, "Symmetric-key encryption," Introduction to Cryptography: Principles and Applications, pp. 11-31, 2007.
[10] M. Panda, "Performance analysis of encryption algorithms for security," 2016 International Conference on Signal Processing, Communication, Power and Embedded System (SCOPES), pp. 278-284, 2016.
[11] M. Umaparvathi and D. K. Varughese, "Evaluation of symmetric encryption algorithms for MANETs," 2010 IEEE International Conference on Computational Intelligence and Computing Research, pp. 1-3, 2010.
[12] W. K. Wootters and W. H. Zurek, "A single quantum cannot be cloned," Nature, pp. 802-803, 1982.
[13] G. Brassard and L. Salvail, "Secret-key reconciliation by public discussion," Advances in Cryptology, pp. 410-423, 1994.
[14] J. Hopfield, "Artificial neural networks," IEEE Circuits and Devices Magazine, pp. 3-10, 1988.
[15] P. P. Hadke and S. G. Kale, "Use of neural networks in cryptography: A review," 2016 World Conference on Futuristic Trends in Research and Innovation for Social Welfare (Startup Conclave), pp. 1-4, 2016.
[16] A. Sarkar, "Secure exchange of information using artificial intelligence and chaotic system guided neural synchronization," Multimedia Tools and Applications, vol. 80, pp. 1-31, 2021.
[17] M. Aleksandrov and Y. Bashkov, "Factors affecting synchronization time of tree parity machines in cryptography," 2020 IEEE 2nd International Conference on Advanced Trends in Information Theory (ATIT), pp. 108-112, 2020.
[18] R. Metzler, W. Kinzel, and I. Kanter, "Interacting neural networks," Phys. Rev. E, pp. 2555-2565, 2000.
[19] W. Kinzel and I. Kanter, "Neural cryptography," Proceedings of the 9th International Conference on Neural Information Processing, pp. 1351-1354, 2002.
[20] C. Bennett, G. Brassard, and J. Robert, "Privacy amplification by public discussion," SIAM J. Comput., pp. 210-229, 1988.
RECOVERING UTILITY

Christopher P. Chambers, Federico Echenique, and Nicolas S. Lambert

(Chambers) Department of Economics, Georgetown University; (Echenique) Department of Economics, UC Berkeley; (Lambert) Department of Economics, University of Southern California

27 Jan 2023. arXiv:2301.11492. https://export.arxiv.org/pdf/2301.11492v1.pdf

Echenique thanks the National Science Foundation for its support through grant SES 1558757. Lambert gratefully acknowledges the financial support and hospitality of Microsoft Research New York and the Yale University Cowles Foundation.

Abstract. We provide sufficient conditions under which a utility function may be recovered from a finite choice experiment. Identification, as is commonly understood in decision theory, is not enough. We provide a general recoverability result that is widely applicable to modern theories of choice under uncertainty. Key is to allow for a monetary environment, in which an objective notion of monotonicity is meaningful. In such environments, we show that subjective expected utility, as well as variational preferences, and other parametrizations of utilities over uncertain acts are recoverable. We also consider utility recovery in a statistical model with noise and random deviations from utility maximization.
1. Introduction
Economists are often interested in recovering preferences and utility functions from data on agents' choices. If we are able to recover a utility function, then a preference relation is obviously implied, but the inverse procedure is more delicate. In this paper, we presume access to data on an agent's choices, and that these describe the agent's preferences (or that preferences have been obtained as the outcome of a statistical estimation procedure). Our results describe sufficient conditions under which one can recover, or learn, a utility function from the agents' choices.
At a high level, the problem is that preferences essentially are choices, because they encode the choice that would be made from each binary choice problem. When we write x ≻ y we really mean that x would be chosen from the set {x, y}. Utility functions are much richer objects, and a given choice behavior may be described by many different utilities. For example, one utility can be used to discuss an agent's risk preferences: they could have a "constant relative risk aversion" utility, for which a single parameter describes attitudes towards risk. But the same preferences can be represented by a utility that does not have such a convenient parametrization. So recovering, or learning, utilities present important challenges that go beyond the problem of recovering a preference. In the paper, we describe some simple examples that illustrate the challenges. Our main results describe when one may (non-parametrically) recover a utility representation from choice data.
We first consider choice under uncertainty. We adopt the standard (Anscombe-Aumann) setting of choice under uncertainty, and focus attention on a class of utility representations that has been extensively studied in the literature. Special cases include subjective expected utility, the max-min expected utility model of Gilboa and Schmeidler (1989), Choquet expected utility (Schmeidler, 1989), the variational preferences of Maccheroni, Marinacci, and Rustichini (2006), and many other popular models. Decision theorists usually place significance on the uniqueness of their utility representations, arguing that uniqueness provides an identification argument that allows for utility to be recovered from choice data. We argue, in contrast, that uniqueness of a utility representation is not enough to recover a utility from finite choice data.
Counterexamples are not hard to find. Indeed, even when a utility representation is unique, one may find a convergent sequence of utilities that is consistent with larger and larger finite datasets, but that does not converge to the utility function that generated the choices in the data, or to any utility to which it is equivalent. So uniqueness is necessary but not sufficient for a utility representation to be empirically tractable, in the sense of ensuring that a utility is recovered from large, but finite, choice experiments.
Our main results are positive, and exhibit sufficient conditions for utility recovery. Key to our results is the availability of an objective direction of improvements in utility: we focus our attention on models of monotone preferences. Our paper considers choices among monetary acts, meaning state-contingent monetary payoffs. For such acts, there is a natural notion of monotonicity: between two acts, if one pays more in every state of the world, the agent should prefer it. As a discipline on the recovery exercise, this essential notion of monotonicity suffices to ensure that a sequence of utilities that explains the choices in the data converges to the utility function that generated the choices.
We proceed by first discussing the continuity of a utility function in its dependence on the underlying preference relation. If U(≿, x) is a function of a preference ≿ and of choice objects x, then we say that it is a utility function if x → U(≿, x) represents ≿. We draw on the existing literature (Theorem 1) to argue that such continuous utilities exist in very general circumstances. Continuity of this mapping in the preference ensures that if the choice data allow for preference recovery, they also allow a utility to be recovered. The drawback, however, of such general utility representation results is that they do not cover the special theories of utility in which economists generally take interest. There is no reason to expect that the utility U(≿, x) coincides with the standard parametrizations of, for example, subjective expected utility or variational preferences.
We then go on to our main exercise, which constrains the environment to the Anscombe-Aumann setting, and considers utility representations that have received special attention in the theory of choice under uncertainty. We consider a setup that is flexible enough to accommodate most theories of choice under uncertainty that have been studied in the literature. Our main result (Theorem 2) says that, whenever a choice experiment succeeds in recovering agents' underlying preferences, it also serves to recover a utility in the class of utilities of interest. For example, if an agent has subjective expected utility preferences, and these can be recovered from a choice experiment, then so can the parameters of the subjective expected utility representation: the agents' beliefs and Bernoulli utility index. Or, if the agent has variational preferences that can be inferred from choice data, then so can the different components of the variational utility representation.
Actual data on choices may be subject to sampling noise, and agents who randomly deviate from their preferences. The results we have just mentioned are useful in such settings, once the randomness in preference estimates is taken into account. As a complement to our main findings, we proceed with a model that explicitly takes noisy choice, and randomness, into account. Specifically, we consider choice problems that are sampled at random, and an agent who may deviate from their preferences. They make mistakes. In such a setting, we present sufficient conditions for the consistency of utility function estimates (Theorem 3).
In the last part of the paper we take a step back and revisit the problem of preference recovery, with the goal of showing how data from a finite choice experiment can approximate a preference relation, and, in consequence, a utility function. Our model considers a large, but finite, number of binary choices. We show that when preferences are monotone, then preference recovery is possible (Theorem 5). In such environments, utility recovery follows for the models of choice under uncertainty that we have been interested in (Corollary 1). Related literature. The literature on revealed preference theory in economics is primarily devoted to tests for consistency with rational choice. The main result in the literature, Afriat's theorem (Afriat, 1967a;Diewert, 1973;Varian, 1982), is in the context of standard demand theory (assuming linear budgets and a finite dataset). Versions of Afriat's result have been obtained in a model with infinite data (Reny, 2015), nonlinear budget sets (e.g., Matzkin, 1991;Forges and Minelli, 2009), general choice problems (e.g., Chavas and Cox, 1993;Nishimura, Ok, and Quah, 2017), and multiperson equilibrium models (e.g., Brown and Matzkin, 1996;Carvajal, Deb, Fenske, and Quah, 2013). Algorithmic questions related to revealed preference are discussed by Echenique, Golovin, andW (2011) andCamara (2022). The monograph by Chambers and Echenique (2016) presents an overview of results.
The revealed preference literature is primarily concerned with describing the datasets that are consistent with the theory, not with recovering or learning a preference or a utility. In the context of demand theory and choice from linear budgets, Mas-Colell (1978) introduces sufficient conditions under which a preference relation is recovered, in the limit, from a sequence of ever richer demand data observations. More recently, Forges and Minelli (2009) derive the analog of Mas-Colell's results for nonlinear budget sets. An important strand of literature focuses on non-parametric econometric estimation methods applied to demand theory data: Blundell, Browning, and Crawford (2003, 2008) propose statistical tests for revealed preference data, and consider counterfactual bounds on demand changes.
The problem of preference and utility recovery has been studied from the perspective of statistical learning theory. Beigman and Vohra (2006) considers the problem of learning a demand function within the PAC paradigm, which is closely related to the exercise we perform in Section 4. A key difference is that we work with data on pairwise choices, which are common in experimental settings (including in many recent large-scale online experiments). Zadimoghaddam and Roth (2012) look at the utility recovery problem, as in Beigman and Vohra (2006), but instead of learning a demand function they want to understand when a utility can be learned efficiently. Balcan, Daniely, Mehta, Urner, and Vazirani (2014) follow up on this important work by providing sample complexity guarantees, while Ugarte (2022) considers the problem of recovery of preferences under noisy choice data, as in our paper, but within the demand theory framework. Similarly, the early work of Balcan, Constantin, Iwata, and Wang (2012) considers a PAC learning question, focusing on important sub-classes of valuations in economics. Bei, Chen, Garg, Hoefer, and Sun (2016) pursues the problem assuming that a seller proposes budgets with the objective of learning an agent's utility (they focus on quasilinear utility, and a seller that obtains aggregate demand data). Zhang and Conitzer (2020) considers this problem under an active-learning paradigm, and contrasts with the PAC sample complexity.
In all, these works are important precedents for our paper, but they are all within the demand theory setting. The results do not port to other environments, such as, for example, binary choice under risk or uncertainty. The closest paper to ours is Chambers, Echenique, and Lambert (2021), which looks at a host of questions related to our paper but focuses on preference, not utility, recovery. The work by Chambers, Echenique, and Lambert considers choices from binary choice problems, but does not address the question of recovering, or learning, a utility function. As we explain below in the paper, the problem for utilities is more delicate than the problem for preferences. In this line of work, Chase and Prasad (2019) obtain important results on learning a utility, but restricted to settings of intertemporal choice. The work by Basu and Echenique (2020) looks at the learnability of utility functions (within the PAC learning paradigm), but focuses on particular models of choice under uncertainty. Some of our results rely on measures of the richness of a theory, or of a family of preferences, as discussed by Basu and Echenique (2020) and Fudenberg, Gao, and Liang (2021): the former by estimating the VC dimension of theories of choice under uncertainty, and the latter by proposing and analyzing new measures of richness that are well-suited for economics, as well as implementing them on economic datasets.
Finally, it is worth mentioning that preference and utility recovery is potentially subject to strategic manipulations, as emphasized by Dong, Roth, Schutzman, Waggoner, and Wu (2018) and Echenique and Prasad (2020). This possibility is ignored in our work.
2. The Question
We want to understand when utilities can be recovered from data on an agent's choices. Consider an agent with a utility function u. We want to know when, given enough data on the agent's choices, we can "estimate" or "recover" a utility function that is guaranteed to be close to u.
In statistical terminology, recovery is analogous to the consistency of an estimator, and approximation guarantees are analogous to learnability. Imagine a dataset of size k, obtained from an incentivized experiment with k different choice problems. 1 The observed choice behavior in the data may be described by a preference ≿_k, which is associated with a utility function u_k. The preference ≿_k could be a rationalizing preference, or a preference estimate. So we choose a utility representation u_k for ≿_k. The recovery, or consistency, property is that u_k → u as k → ∞.
Suppose that the utility u represents preferences ≿, which summarize the agent's full choice behavior. Clearly, unless ≿_k → ≿, the exercise is hopeless. So our first order of business is to understand when ≿_k → ≿ is enough to ensure that u_k → u. In other words, we want to understand when recovering preferences is sufficient for recovering utilities. To this end, our main results are in Section 3.4. In recovering a utility, we are interested in particular parametric representations. In choice under uncertainty, for example, one may be interested in measures of risk attitudes, or of uncertainty aversion. It is key, then, that the utility recovery exercise preserves the aspects of utility that give such measures meaning. If, say, preferences have the "constant relative risk aversion" (CRRA) form, then we want to recover the Arrow-Pratt measure of risk aversion. Data of this kind arises, for example, in the large-scale incentivized experiments of Chapman, Dean, Ortoleva, Snowberg, and Camerer (2022) and Falk, Becker, Dohmen, Enke, Huffman, and Sunde (2018). One can also apply our results to roll call data from congress, as in Poole and Rosenthal (1985) or Clinton, Jackman, and Rivers (2004). Large-scale A/B testing by tech firms may provide further examples (albeit involving proprietary datasets).
Our data is presumably obtained in an experimental setting, where an agent's behavior may be recorded with errors, or in which the agent may randomly deviate from their underlying preference ≿. Despite such errors, with high probability, "on the sample path," we should obtain that ≿_k → ≿. In our paper we uncover situations where this convergence leads to utility recovery. Indeed, the results in Sections 3.4 and 3.5 may be applied to say that, in many popular models in decision theory, when ≿_k → ≿ (with high probability), then the resulting utility representations enable utility recovery (with high probability). The next step is to discuss learning and sample complexity. Here we need to explicitly account for randomness and errors. We lay out a model of random choice, with random sampling of choice problems and errors in agents' choices. The errors may take a very general form, as long as random choices are more likely to go in the direction of preferences than against it (if x ≻ y then x is the more likely choice from the choice problem {x, y}), and this likelihood ratio remains bounded away from one. Contrast this with the standard theory of discrete choice, where the randomness usually is taken to be additive, and independent of the particular pair of alternatives that are being compared.
Here we consider a formal statistical consistency problem, and exhibit situations where utility recovery is feasible. We use ideas from the literature on PAC learning to provide formal finite sample-size bounds for each desired approximation guarantee. See Section 4.
3. The Model
3.1. Basic definitions and notational conventions. Let X be a set. Given a binary relation R ⊆ X × X, we write x R y when (x, y) ∈ R. A binary relation that is complete and transitive is called a weak order. If X is a topological space, then we say that R is continuous if R is closed as a subset of X × X (see, for example, Bergstrom, Parks, and Rader, 1976). A preference relation is a weak order that is also continuous.
A preference relation ≿ is locally strict if, for all x, y ∈ X, x ≿ y implies that for each neighborhood U of (x, y), there is (x′, y′) ∈ U with x′ ≻ y′. The notion of local strictness was first introduced by Border and Segal (1994) as a generalization of the property of being locally non-satiated from consumer theory.
If ≿ is a preference on X and u : X → R is a function for which x ≿ y if and only if u(x) ≥ u(y), then we say that u is a representation of ≿, or that u is a utility function for ≿.
If A ⊆ R d is a Borel set, we write ∆(A) for the set of all Borel probability measures on A. We endow ∆(A) with the weak* topology. If S is a finite set, then we topologize ∆(A) S with the product topology.
For p, q ∈ ∆(A), we say that p is larger than q in the sense of first-order stochastic dominance if ∫_A f dp ≥ ∫_A f dq for all monotone increasing, continuous and bounded functions f on A.
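For intuition, first-order stochastic dominance is easy to check for finite-support lotteries by comparing CDFs; this small sketch (ours, not the paper's) does exactly that.

```python
import numpy as np

def fosd(p, q):
    """True if lottery p first-order stochastically dominates q, where p and q
    are probability vectors over a common support sorted in increasing order."""
    cdf_p, cdf_q = np.cumsum(p), np.cumsum(q)
    return bool(np.all(cdf_p <= cdf_q + 1e-12))  # p shifts mass to higher outcomes

# Support {0, 1/2, 1}:
print(fosd([0.1, 0.3, 0.6], [0.2, 0.4, 0.4]))  # True
print(fosd([0.2, 0.4, 0.4], [0.1, 0.3, 0.6]))  # False
```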
3.2. Topologies on preferences and utilities. The set of preferences over X, when X is a topological space, is endowed with the topology of closed convergence. The space of corresponding utility representations is endowed with the compact-open topology. These are the standard topologies for preferences and utilities, used in prior work in mathematical economics. See, for example, Hildenbrand (1970), Kannai (1970), and Mas-Colell (1974). Here we offer definitions and a brief discussion of our choice of topology.
Let X be a topological space, and F = {F n } n be a sequence of closed sets in X × X (with the product topology). We define Li(F ) and Ls(F ) to be closed subsets of X × X as follows:
• (x, y) ∈ Li(F ) if and only if, for all neighborhoods V of (x, y), there exists N ∈ N such that F_n ∩ V ≠ ∅ for all n ≥ N. • (x, y) ∈ Ls(F ) if and only if, for all neighborhoods V of (x, y), and all N ∈ N, there is n ≥ N such that F_n ∩ V ≠ ∅.
Observe that Li(F ) ⊆ Ls(F ). The definition of closed convergence is as follows.
Definition 1. F n converges to F in the topology of closed convergence if Li(F ) = F = Ls(F ).
Closed convergence captures the property that agents with similar preferences should have similar choice behavior, a property that is necessary to be able to learn the preference from finite data. Specifically, if X ⊆ R^n, and P is the set of all locally strict and continuous preferences on X, then the topology of closed convergence is the smallest topology on P for which the sets
{(x, y, ≿) : x ≻ y} ⊆ X × X × P
are open. 2 In words: suppose that x ≻ y; then for x′ close to x, y′ close to y, and ≿′ close to ≿, we obtain that x′ ≻′ y′.
For utility functions, we adopt the compact-open topology, which we also claim is a natural choice of topology. The compact-open topology is characterized by the convergence criterion of uniform convergence on compact sets. The reason it is natural for utility functions is that a utility usually has two arguments: one is the object being "consumed" (a lottery, for example) and the other is the ordinal preference that the utility is meant to represent. (The preference argument is usually implicit, but of course it remains a key aspect of the exercise.) Now an analyst wants the utility to be "jointly continuous," or continuous in both of its arguments. For such a purpose, the natural topology on the set of utilities, when they are viewed solely as functions of consumption, is indeed the compact-open topology. More formally, consider the following result, originally due to Mas-Colell (1977). 3

Theorem 1. Let X be a locally compact Polish space, and P the space of all continuous preferences on X endowed with the topology of closed convergence. Then there exists a continuous function U : P × X → [0, 1] so that x → U(≿, x) represents ≿.
We may view the map U as a mapping from ≿ to the space of utility functions. Then continuity of this induced mapping is equivalent to the joint continuity result discussed in Theorem 1, as long as we impose the compact-open topology on the space of utility functions (see Fox (1945)).
3.3. The model. As laid our in Section 2, we want to understand when we may conclude that u k → u from knowing that k → . Mas-Colell's theorem (Theorem 1) provides general conditions under which there exists one utility representation that has the requisite convergence property, but he is clear about the practical limitations of his result: "There is probably not a simple constructive ("canonical") method to find a U function." In contrast, economists are generally interested in specific parameterizations of utility.
For example, if an agent has subjective expected-utility preferences, economists want to estimate beliefs and a von Neumann-Morgenstern index; not some arbitrary representation of the agent's preferences. Or, if the data involve intertemporal choices, and the agent discounts utility exponentially, then an economist will want to estimate their discount factor. Such specific parameterizations of utility are not meaningful in the context of Theorem 1.
The following (trivial) example shows that there is indeed a problem to be studied. Convergence of arbitrary utility representations to the correct limit is not guaranteed, even when recovered utilities form a convergent sequence, and recovered preferences converge to the correct limit.
Example 1. Consider expected-utility preferences on ∆(K)^S, where K is a compact space, S a finite set of states, and ∆(K)^S is the set of Anscombe-Aumann acts. Fix an affine function v : ∆(K) → R, a prior µ ∈ ∆(S), and consider the preference ≿ with representation ∫_S v(f(s)) dµ(s).
Now if we set ≿_k = ≿, then ≿_k → ≿ holds trivially. However, it is possible to choose an expected utility representation ∫_S v_k(f(s)) dµ_k(s) that does not converge to a utility representation (of any kind) for ≿. In fact, one could choose a µ_k and a "normalization" for v_k, for example ‖v_k‖ = 1 (imagine for concreteness that K is finite, and use the Euclidean norm for v_k). Specifically, choose scalars β_k with ‖β_k + (1/k) v‖ = 1 and set v_k = β_k + (1/k) v. Then the utility f → ∫_S v_k(f(s)) dµ(s) represents ≿_k and converges to a constant function.
The punchline is that the limiting utility represents the preference that exhibits complete indifference among all acts. This is true, no matter what the original preference was.
In the example, we have imposed some discipline on the representation. Given that the utility converges to a constant, the discipline we have chosen is a particular normalization of the utility representations (their norm is constant). The normalization just makes the construction of the example slightly more challenging, and reflects perhaps the most basic care that an analyst could impose on the recovery exercise.
3.4. Anscombe-Aumann acts. We present our first main result in the context of Anscombe-Aumann acts, the workhorse model of the modern theory of decisions under uncertainty. Let S be a finite set of states of the world, and fix a closed interval
of the real line [a, b] ⊆ R. An act is a function f : S → ∆([a, b]).
We interpret the elements of ∆([a, b]) as monetary lotteries, so that acts are state-contingent monetary lotteries. The set of all acts is ∆([a, b]) S . When p ∈ ∆([a, b]), we denote the constant act that is identically equal to p by (p, . . . , p); or sometimes by p for short.
Note that we do not work with abstract, general, Anscombe-Aumann acts, but in assuming monetary lotteries we impose a particular structure on the objective lotteries in our Anscombe-Aumann framework. The reason is that our theory necessitates a certain known and objective direction of preference. Certain preference comparisons must be known a priori: monotonicity of preference will do the job, but for monotonicity to be objective we need the structure of monetary lotteries.
An act f dominates an act g if, for all s ∈ S, f(s) first-order stochastically dominates g(s). And f strictly dominates g if, for all s ∈ S, f(s) strictly first-order stochastically dominates g(s). A preference ≿ over acts is weakly monotone if f ≿ g whenever f dominates g.
Let U be the set of all continuous and monotone weakly increasing functions u : [a, b] → R with u(a) = 0 and u(b) = 1. A pair (V, u) is a standard representation if V : ∆([a, b])^S → R and u ∈ U are continuous functions such that V(p, . . . , p) = ∫_{[a,b]} u dp for all constant acts (p, . . . , p). Moreover, we say that a standard representation (V, u) is aggregative if there is an aggregator H : [0, 1]^S → R with V(f) = H(( ∫ u df(s) )_{s∈S}) for f ∈ ∆([a, b])^S. An aggregative representation with aggregator H is denoted by (V, u, H).
Observe that a standard representation rules out total indifference.
A preference ≿ on ∆([a, b])^S is standard if it is weakly monotone and there is a standard representation (V, u) in which V represents ≿. Roughly, standard preferences are those that satisfy the expected utility axioms across constant acts, and are monotone with respect to the (statewise) first-order stochastic dominance relation. Aggregative preferences additionally satisfy an analogue of Savage's P3, or the Anscombe-Aumann notion of monotonicity.
Example 2. Variational preferences (Maccheroni, Marinacci, and Rustichini, 2006) are standard and aggregative. 4 Let

V(f) = inf{ ∫_S v(f(s)) dπ(s) + c(π) : π ∈ ∆(S) },

where

(1) v : ∆([a, b]) → R is continuous and affine.
(2) c : ∆(S) → [0, ∞] is lower semicontinuous, convex and grounded (meaning that inf{c(π) : π ∈ ∆(S)} = 0).
Note that V(p, . . . , p) = v(p) + inf{c(π) : π ∈ ∆(S)} = ∫ u dp, by the assumption that c is grounded, and where the existence of u : [a, b] → R such that v(p) = ∫ u dp is an instance of the Riesz representation theorem. It is clear that we may choose u ∈ U. So (V, u) is a standard representation.
Letting H : [0, 1]^S → R be defined by H(x) = inf{ Σ_{s∈S} x(s)π(s) + c(π) : π ∈ ∆(S) }, we see that (V, u, H) is indeed an aggregative representation of these preferences.
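As an illustration of the aggregator H, the following sketch evaluates a variational utility on a finite grid of priors (a stand-in for the infimum over all of ∆(S)); the numbers are made up, and setting all costs to zero on a set Π recovers max-min expected utility.

```python
import numpy as np

def variational_V(eu_by_state, priors, costs):
    """V(f) = min over candidate priors of (expected utility under the prior) + c(prior)."""
    return min(float(np.dot(pi, eu_by_state)) + c for pi, c in zip(priors, costs))

# Two states; eu_by_state holds the inner integrals of u with respect to f(s).
eu = np.array([0.8, 0.3])
priors = [np.array([0.5, 0.5]), np.array([0.2, 0.8]), np.array([0.9, 0.1])]
costs = [0.0, 0.05, 0.10]  # grounded: the minimum cost is 0
print(variational_V(eu, priors, costs))  # 0.45, attained at the second prior
```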
Some other examples of aggregative preferences include special cases of the variational model, Gilboa and Schmeidler (1989), as well as generalizations of it, Cerreia-Vioglio, Maccheroni, Marinacci, and Montrucchio (2011); Chandrasekher, Frick, Iijima, and Le Yaouanq (2021), and others which are not comparable: Schmeidler (1989); Chateauneuf, Grabisch, and Rico (2008); Chateauneuf and Faro (2009). 5

Theorem 2. Let ≿ be a standard preference with standard representation (V, u), and {≿_k} a sequence of standard preferences, each with a standard representation (V_k, u_k).
(1) If ≿_k → ≿, then (V_k, u_k) → (V, u).
(2) If, in addition, these preferences are aggregative with representations (V_k, u_k, H_k) and (V, u, H), then H_k → H.
In terms of interpretation, Theorem 2 suggests that, as preferences converge, risk attitudes, or von Neumann-Morgenstern utility indices, also converge in a pointwise sense. The aggregative part says that we can study the convergence of risk attitudes and the convergence of the aggregator, controlling for risk, separately. So, for example, in the multiple-priors case, two decision makers whose preferences are close will have similar sets of priors.
3.5. Preferences over lotteries and certainty equivalents. In this section, we focus on a canonical representation for preferences over lotteries: the certainty equivalent. There are many models of preferences over lotteries, but we have in mind in particular Cerreia-Vioglio, Dillenberger, and Ortoleva (2015), whereby a preference representation over lotteries is given by U(p) = inf_{u∈U} u^{-1}( ∫ u dp ); a minimum over a set of certainty equivalents for expected utility maximizers. Key is that for this representation, and any degenerate lottery δ_x, U(δ_x) = x.

5 A class of variational preferences that are of particular interest to computer scientists are preferences with a max-min representation (Gilboa and Schmeidler, 1989). These evaluate acts by
V(f) = inf{ ∫_S v(f(s)) dπ(s) : π ∈ Π },
with Π ⊆ ∆(S) a closed and convex set. Here c is the indicator function of Π (as defined in convex analysis).
Let [a, b] ⊂ R, where a < b, be an interval of the real line, and consider ∆([a, b]). Say that ≿ on ∆([a, b]) is certainty monotone if, whenever p first-order stochastically dominates q, then p ≿ q, and for all x, y ∈ [a, b] for which x > y, δ_x ≻ δ_y. Any certainty monotone continuous preference ≿ and any lottery p ∈ ∆([a, b]) then possesses a unique certainty equivalent x ∈ [a, b], satisfying δ_x ∼ p. To this end, we define ce(≿, p) to be the certainty equivalent of p for ≿. It is clear that, fixing ≿, ce(≿, ·) is a continuous utility representation of ≿.
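For a general certainty-monotone preference, ce(≿, p) is defined only implicitly by δ_x ∼ p; in the expected-utility special case it has the familiar closed form u^{-1}(∫ u dp), sketched below for a finite-support lottery (the square-root index is an arbitrary illustrative choice).

```python
import numpy as np

def certainty_equivalent(u, u_inv, outcomes, probs):
    """The x solving u(x) = E[u] for an expected-utility preference with index u."""
    eu = float(np.dot(probs, [u(x) for x in outcomes]))
    return u_inv(eu)

u = np.sqrt                    # a concave (risk-averse) index on [0, 1]
u_inv = lambda y: y ** 2
print(certainty_equivalent(u, u_inv, [0.0, 1.0], [0.5, 0.5]))  # 0.25 < mean 0.5
```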
Proposition 1. Let ≿ be a certainty monotone preference and let p ∈ ∆([a, b]). Let {≿_k} be a sequence of certainty monotone preferences and let p_k be a sequence in ∆([a, b]). If (≿_k, p_k) → (≿, p), then ce(≿_k, p_k) → ce(≿, p).
To this end, the map carrying each preference to its certainty equivalent representation is a continuous map in the topology of closed convergence.
4. Utility recovery with noisy choice data
We develop a model of noisy choice data, and consider when utility may be recovered from a traditional estimation procedure. Recovery here takes the form of an explicit consistency result, together with sample complexity bounds in a PAC learning framework.
The focus is on the Wald representation, analogous to the certainty equivalent we considered in Section 3.5. When choosing among vectors x ∈ R^d, the Wald representation is u(x) ∈ R so that x ∼ (u(x), . . . , u(x)).
If the choice space is well behaved, a Wald representation exists for any monotone and continuous preference relation. To this end, we move beyond the Anscombe-Aumann setting that we considered above, but it should be clear that some versions of Anscombe-Aumann can be accommodated within the assumptions of this section.
Our main results for the model that explicitly accounts for noisy choice data assume Wald representations that are either Lipschitz or homogeneous (meaning that preferences are homothetic).

4.1. Noisy choice data. The primitives of our noisy choice model are collected in the tuple (X, P, λ, q), where:
• X ⊆ R d is the ambient choice, or consumption, space. The set X is endowed with the (relative) topology inherited from R d .
• P is a class of continuous and locally strict preferences on X. The class comes with a set of utility functions U, so that each element of P has a utility representation in the set U. • λ is a probability measure on X, assumed to be absolutely continuous with respect to Lebesgue measure. We also assume that λ ≥ c Leb, where c > 0 is a constant and Leb denotes Lebesgue measure. • q : X × X × P → [0, 1] is a random choice function, so q(x, y; ) is the probability that an agent with preferences chooses x over y. Assume that if x ≻ y, then x is chosen with probability q(x, y; ) > 1/2 and y with probability q(y, x; * ) = 1 − q(x, y; ). If x ∼ y then x and y are chosen with equal probability. • We shall assume that the error probability q satisfies that Θ ≡ inf{q( , (x, y)) : x ≻ y and ∈ P} > 1 2 .
The tuple (X, P, λ, q) describes a data-generating process for noisy choice data. Fix a sample size n and consider an agent with preference ≿* ∈ P. A sequence of choice problems {x_i, y_i}, 1 ≤ i ≤ n, is obtained by drawing x_i and y_i from X, independently, according to the law λ. Then a choice is made from each problem {x_i, y_i} according to q(·, ·; ≿*).
Observe that our assumptions on q are mild. We allow errors to depend on the pair {x, y} under consideration, almost arbitrarily. The only requirement is that one is more likely to choose according to one's preference than to go against them, as well as the more technical assumptions of measurability and a control on how large the deviation from 1/2-1/2 choice may get.
To keep track of the chosen alternative, we order the elements of each problem so that (x i , y i ) means that x i was chosen from the choice problem {x i , y i }. So a sample of size n is {(x 1 , y 1 ), . . . , (x n , y n )}, consisting of 2n iid draws from X × X according to our stochastic choice model: in the ith draw, the choice problem was {x i , y i } and x i was chosen.
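The data-generating process, and the maximum-score estimator defined in the next paragraph, can be sketched as follows; the Cobb-Douglas candidate family, the uniform sampling measure, and the constant error rate theta are all illustrative assumptions of ours.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_choices(u_true, n, d=2, theta=0.8):
    """Draw n problems {x_i, y_i} i.i.d. (here uniform on [0,1]^d, playing the role
    of lambda) and record noisy choices: the preferred option is picked with
    probability theta > 1/2, so the Theta condition above holds."""
    x, y = rng.random((n, d)), rng.random((n, d))
    prefers_x = u_true(x) >= u_true(y)
    follow = rng.random(n) < theta          # does the choice follow the preference?
    chosen_x = prefers_x == follow
    # order each pair so that the first component is the chosen alternative
    return np.where(chosen_x[:, None], x, y), np.where(chosen_x[:, None], y, x)

# Candidate Wald utilities: Cobb-Douglas u_a(x) = x1^a * x2^(1-a), homogeneous of
# degree one, on a grid of exponents a.
grid = np.linspace(0.1, 0.9, 9)
candidates = [lambda x, a=a: x[..., 0] ** a * x[..., 1] ** (1 - a) for a in grid]

chosen, rejected = simulate_choices(candidates[6], n=5000)     # true exponent 0.7
scores = [np.mean(u(chosen) >= u(rejected)) for u in candidates]
print(grid[int(np.argmax(scores))])  # maximum-score pick, close to 0.7 for large n
```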
A utility function u_n ∈ U is chosen to maximize the number of rationalized choices in the data. So u_n maximizes Σ_{i=1}^{n} 1{u(x_i) ≥ u(y_i)}. The space of utility functions is endowed with a metric ρ. In this section, all we ask of ρ is that, for any u, u′ ∈ U, there is x ∈ X with |u(x) − u′(x)| ≥ ρ(u, u′). For example, we could use the sup norm for the purposes of any of the results in this section.

4.1.1. Lipschitz utilities. One set of sufficient conditions will need the family of relevant utility representations to satisfy a Lipschitz property with a common Lipschitz bound. The representations are of the Wald kind, as in Section 3.5. We now add the requirement of the Lipschitz property, which allows us to connect differences in utility functions to quantifiable observable (but noisy) choice behavior. The main idea is expressed in Lemma 4 of Section 6.
We say that (X, P, λ, q) is a Lipschitz environment if:
(1) X ⊆ R d is convex, compact, and has nonempty interior.
(2) Each preference ∈ P has a Wald utility representation u : X → R so that x ∼ u (x)1.
(3) All utilities in U are Lipschitz, and admit a common Lipschitz constant κ. So, for any x, x′ ∈ X and u ∈ U, |u(x) − u(x′)| ≤ κ ‖x − x′‖.
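To make this setup concrete, here is a minimal simulation sketch of the noisy choice data and the max-score estimator u_n. The one-parameter Cobb-Douglas family below is our own stand-in, not the paper's: it satisfies the Wald normalization u(t1) = t and is Lipschitz on a compact X bounded away from zero. The constant error probability 0.8 and the grid search are likewise illustrative assumptions.

```python
# A hypothetical illustration of the noisy-choice model and the max-score
# estimator; the Cobb-Douglas family, error rate, and grid are our choices.
import numpy as np

rng = np.random.default_rng(0)

def u(a, x):
    """Cobb-Douglas Wald representation on R^2: u_a(t, t) = t."""
    return x[..., 0] ** a * x[..., 1] ** (1.0 - a)

a_star = 0.7                          # data-generating preference
n = 5000                              # sample size
lo, hi = 0.5, 2.0                     # X = [lo, hi]^2: convex, compact, interior

x = rng.uniform(lo, hi, size=(n, 2))  # choice problems drawn iid from lambda
y = rng.uniform(lo, hi, size=(n, 2))

# Random choice rule q: follow the preference with probability 0.8 (so
# Theta = 0.8 > 1/2); ties have probability zero under a continuous law.
prefers_x = u(a_star, x) >= u(a_star, y)
follow = rng.random(n) < 0.8
chose_x = prefers_x == follow

# Record the data so the first element of each pair is the chosen one.
cx = np.where(chose_x[:, None], x, y)
cy = np.where(chose_x[:, None], y, x)

# Max-score estimation: maximize the number of rationalized choices.
grid = np.linspace(0.01, 0.99, 99)
scores = [np.sum(u(a, cx) >= u(a, cy)) for a in grid]
a_hat = grid[int(np.argmax(scores))]
print(f"true a = {a_star}, estimated a = {a_hat:.2f}")
```

With several thousand noisy comparisons, the recovered parameter typically lands close to 0.7, illustrating the consistency statement formalized in Theorem 3 below.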
Homothetic preferences.
The second set of sufficient conditions involves homothetic preferences. It turns out, in this case, that the Wald representations have a homogeneity property, and this allows us to connect differences in utilities to a probability of detecting such differences. The key insight is contained in Lemma 5 of Section 6. We employ the following auxiliary notation: S^M_α = {x ∈ R^d : ‖x‖ = M and x ≥ α1} and D^M_α = {θx : x ∈ S^M_α and θ ∈ [0, 1]}. We say that (X, P, λ, q) is a homothetic environment if:
(1) X = D^M_α for some (small) α > 0 and (large) M > 0.
(2) P is a class of continuous, monotone, homothetic, and complete preferences on X ⊆ R^d.
(3) U is a class of Wald representations, so that for each ≿ ∈ P there is a utility function u ∈ U with x ∼ u(x)1.
Remark: if u ∈ U is the Wald representation of ≿, then u is homogeneous of degree one, because x ∼ u(x)1 iff λx ∼ λu(x)1, so u(λx) = λu(x).
VC dimension.
The Vapnik-Chervonenkis (VC) dimension of a set P of preferences is the largest sample size n for which there exists a utility u ∈ U that perfectly rationalizes all the choices in the data, no matter what those are: that is, n = Σ_{i=1}^n 1_{u(x_i) ≥ u(y_i)} for any dataset (x_i, y_i)_{i=1}^n of size n. VC dimension is a basic ingredient in the standard PAC learning paradigm. It is a measure of the complexity of a theory used in machine learning, and lies behind standard results on uniform laws of large numbers (see, for example, Boucheron, Bousquet, and Lugosi (2005)). Applications of VC to decision theory can be found in Basu and Echenique (2020) and Chambers, Echenique, and Lambert (2021).
It is worth noting that VC dimension is used in classification tasks. It may not be obvious, but when it comes to preferences, our exercise may be thought of as classification. For each pair of alternatives x and y, a preference "classifies" the pair as x ≿ y or y ≻ x. Then we can think of preference recovery as a problem of learning a classifier within the class P.
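The classification analogy can be made concrete. The sketch below checks, for a hypothetical one-parameter family of linear utilities (our own illustration), whether a small set of pairs is shattered, that is, whether every pattern of pairwise choices is rationalized by some member of the class; the VC dimension is the size of the largest shattered set.

```python
# Shattering check for a toy utility class; each utility labels a pair
# (x, y) with 1 if u(x) >= u(y).
import numpy as np

def labels(u, pairs):
    return tuple(int(u(x) >= u(y)) for x, y in pairs)

# Hypothetical class on R^2: u_a(x) = a*x1 + (1-a)*x2, with a in [0, 1].
utilities = [lambda x, a=a: a * x[0] + (1 - a) * x[1]
             for a in np.linspace(0.0, 1.0, 201)]

pairs = [((1.0, 0.0), (0.0, 1.0)), ((2.0, 0.0), (0.0, 1.0))]
achieved = {labels(u, pairs) for u in utilities}
print(f"patterns achieved: {sorted(achieved)}")
print(f"shattered: {len(achieved) == 2 ** len(pairs)}")
```

Here the pattern (1, 0) is never achieved (preferring (1, 0) to (0, 1) forces preferring (2, 0) to (0, 1)), so this pair of problems is not shattered; structure of this kind is what keeps the VC dimension of well-behaved preference classes finite.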
Consistency and sample complexity.
Theorem 3. Consider a noisy choice environment (X, P, λ, q) that is either a homothetic or a Lipschitz environment. Suppose that u* ∈ U is the Wald utility representation of ≿* ∈ P.
(1) The estimates u_n converge to u* in probability.
(2) There are constants K and C̄ so that, for any δ ∈ (0, 1) and n, with probability at least 1 − δ,

ρ(u_n, u*) ≤ C̄ [K √(V/n) + √(2 ln(1/δ)/n)]^{1/D},
where V is the VC dimension of P, D = d when the environment is Lipschitz and D = 2d when it is homothetic.
Of course, the second statement in the theorem is only meaningful when the VC dimension of P is finite. The constants K and C̄ depend on the primitives in the environment, but not on preferences, utilities, or sample sizes.
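For a rough sense of magnitudes, the snippet below evaluates the bound with the unknown constants K and C̄ set to 1 and illustrative values of V and D; these numbers are ours, purely for illustration.

```python
# Back-of-the-envelope evaluation of the Theorem 3 rate, with K = C-bar = 1.
import math

def rate(n, V, delta, D):
    return (math.sqrt(V / n) + math.sqrt(2 * math.log(1 / delta) / n)) ** (1 / D)

for n in (10**3, 10**5, 10**7):
    print(f"n = {n:>8}: bound ~ {rate(n, V=5, delta=0.05, D=4):.3f}")
```

The 1/D exponent makes the rate slow in high dimensions: the bound scales as n^{-1/(2D)}, so with D = 4 (for instance, a homothetic environment with d = 2), halving the bound requires roughly a 256-fold increase in n.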
Recovering preferences and utilities
The discussion in Section 3.4 focused on utility recovery, taking convergence of preferences as given. Here we take a step back, provide some conditions for preference recovery that are particularly relevant for the setting of Section 3.4, and then connect these back to utility recovery in Corollary 1. First we describe an experimental setting in which preferences may be elicited: an agent, or subject, faces a sequence of (incentivized) choice problems, and the choices made produce data on his preferences. The specific model and description below is borrowed from Chambers, Echenique, and Lambert (2021), but the setting is completely standard in choice theory.
Let X = ∆([a, b])^S be the set of acts over monetary lotteries, as discussed in Section 3.4. A choice function is a pair (Σ, c) with Σ ⊆ 2^X \ {∅} a collection of nonempty subsets of X, and c : Σ → 2^X with ∅ ≠ c(A) ⊆ A for all A ∈ Σ. When Σ, the domain of c, is implied, we refer to c as a choice function.
A choice function (Σ, c) is generated by a preference relation ≿ over X if
c(A) = {x ∈ A : x ≿ y for all y ∈ A},
for all A ∈ Σ.
The notation (Σ, c_≿) means that the choice function (Σ, c_≿) is generated by the preference relation ≿ on X.
Our model features an experimenter (a female) and a subject (a male). The subject chooses among alternatives in a way described by a preference ≿* over X, which we refer to as the data-generating preference. The experimenter seeks to infer ≿* from the subject's choices in a finite experiment.
In a finite experiment, the subject is presented with finitely many unordered pairs of alternatives B_k = {x_k, y_k} in X. For every pair B_k, the subject is asked to choose one of the two alternatives: x_k or y_k.
A sequence of experiments is a collection Σ∞ = {B_i}_{i∈N} of pairs of possible choices presented to the subject. Let Σ_k = {B_1, . . . , B_k} collect the first k elements of a sequence of experiments, and B = ∪_{k=1}^∞ B_k be the set of all alternatives that are used over all the experiments in a sequence. Here Σ_k is a finite experiment of size k.
We make two assumptions on Σ∞. The first is that B is dense in X. The second is that, for any x, y ∈ B there is k for which B_k = {x, y}. The first assumption is obviously needed to obtain any general preference recovery result. The second assumption means that the experimenter is able to elicit the subject's choices over all pairs used in her experiment. For each k, the subject's preference ≿* generates a choice function (Σ_k, c) by letting, for each B_i ∈ Σ_k, c(B_i) be a maximal element of B_i according to ≿*. Thus the choice behavior observed by the experimenter is always consistent with (Σ_k, c_{≿*}).
We introduce two notions of rationalization: weak and strong. A preference ≿_k weakly rationalizes (Σ_k, c) if, for all B_i ∈ Σ_k, c(B_i) ⊆ c_{≿_k}(B_i). A preference ≿_k weakly rationalizes a choice sequence (Σ∞, c) if it rationalizes the choice function of order k, (Σ_k, c), for all k ≥ 1. A preference ≿_k strongly rationalizes (Σ_k, c) if, for all B_i ∈ Σ_k, c(B_i) = c_{≿_k}(B_i). A preference ≿_k strongly rationalizes a choice sequence (Σ∞, c) if it rationalizes the choice function of order k, (Σ_k, c), for all k ≥ 1.
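A minimal sketch of the two notions, assuming preferences are encoded by utility functions on a finite toy domain (our own illustration, not the paper's construction); c_{≿}(B) is computed as the set of maximizers in B:

```python
# Weak vs. strong rationalization of a finite experiment (toy illustration).
def maximizers(u, B):
    m = max(u(x) for x in B)
    return {x for x in B if u(x) == m}

def weakly_rationalizes(u, experiment):
    # experiment: list of (B, chosen) with chosen a subset of B
    return all(chosen <= maximizers(u, B) for B, chosen in experiment)

def strongly_rationalizes(u, experiment):
    return all(chosen == maximizers(u, B) for B, chosen in experiment)

u1 = lambda x: x               # strict ranking on a one-dimensional toy X
u2 = lambda x: min(x, 2)       # flat above 2, so {2, 3} is an indifference
exp1 = [(frozenset({1, 2}), {2}), (frozenset({2, 3}), {3})]
print(weakly_rationalizes(u1, exp1), strongly_rationalizes(u1, exp1))  # True True
print(weakly_rationalizes(u2, exp1), strongly_rationalizes(u2, exp1))  # True False
```

The second utility weakly but not strongly rationalizes the data: the observed choice from {2, 3} is among its maximizers, but does not exhaust them.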
In the history of revealed preference theory in consumer theory, strong rationalizability came first. It is essentially the notion in Samuelson (1938) and Richter (1966). Strong rationalizability is the appropriate notion when it is known that all potentially chosen alternatives are actually chosen, or when we want to impose, as an added discipline, that the observed choices are uniquely optimal in each choice problem. This makes sense when studying demand functions, as Samuelson did. Weak rationalizability was one of the innovations in Afriat (1967b), who was interested in demand correspondences.

5.1. A general "limiting" result. Our next result serves to contrast what can be achieved with the "limiting" (countably infinite) data with the limit of preferences recovered from finite choice experiments.
Theorem 4. Suppose that ≿ and ≿* are two continuous preference relations (complete and transitive). If ≿|_{B×B} = ≿*|_{B×B}, then ≿ = ≿*.
Indeed, as the proof makes clear, Theorem 4 would hold more generally for any X which is a connected topological space, but it may not hold in the absence of connectedness. There is a sense in which the limiting case, with an infinite amount of data, offers no problems for preference recovery. The structure we impose is needed for the limit of rationalizations drawn from finite data.
5.2. Recovery from finite data in the AA model. Here we adopt the same structural assumptions as in Section 3.4, namely that X = ∆([a, b])^S, endowed with the weak topology and the first order stochastic dominance relation. However, the result easily extends to broader environments, as the proof makes clear.
Theorem 5. There is a sequence of finite experiments Σ∞ so that if the subject's preference ≿* is continuous and weakly monotone, and for each k ∈ N, ≿_k is a continuous and weakly monotone preference that strongly rationalizes a choice function (Σ_k, c) generated by ≿*; then ≿_k → ≿*.

Corollary 1. Let ≿* and ≿_k be as in the statement of Theorem 5. If, in addition, ≿* and ≿_k have standard representations (V, u) and (V_k, u_k), then (V, u) = lim_{k→∞} (V_k, u_k).
Note that Theorem 5 requires the existence of the data-generating preference ≿*. A "dual" result to Theorem 5 was established in Chambers, Echenique, and Lambert (2021). There, the focus was on weak rationalization via ≿_k, which is a weaker notion than the strong rationalization hypothesized here. To achieve a weak rationalization result, we assumed instead that preferences were strictly monotone.
Proofs
In this section, unless we say otherwise, we denote by X the set of acts ∆([a, b])^S, and the elements of X by x, y, z, etc. Note that X is compact Polish when ∆([a, b]) is endowed with the topology of weak convergence of probability measures. Let P be the set of all complete and continuous binary relations on X.
6.1. Lemmas. The lemmas stated here will be used in the proofs of our results.
Lemma 1. Let X ⊆ R^n. If {x′_n} is an increasing sequence in X, and {x′′_n} is a decreasing sequence, such that sup{x′_n : n ≥ 1} = x* = inf{x′′_n : n ≥ 1}, then lim_{n→∞} x′_n = x* = lim_{n→∞} x′′_n.
Proof. This is obviously true for n = 1: a bounded monotone sequence in R converges to its sup (or inf). For n > 1, convergence, sups, and infs are obtained component-by-component, so the result follows.
Lemma 2. Let X = ∆([a, b]). Let {x_n} be a convergent sequence in X, with x_n → x*. Then there is an increasing sequence {x′_n} and a decreasing sequence {x′′_n} such that x′_n ≤ x_n ≤ x′′_n, and lim_{n→∞} x′_n = x* = lim_{n→∞} x′′_n.
Proof. The set X ordered by first order stochastic dominance is a complete lattice (see, for example, Lemma 3.1 in Kertz and Rösler (2000)). Suppose that x_n → x*. Define x′_n and x′′_n by x′_n = inf{x_m : n ≤ m} and x′′_n = sup{x_m : n ≤ m}. Clearly, {x′_n} is an increasing sequence, {x′′_n} is decreasing, and x′_n ≤ x_n ≤ x′′_n. Let F_x denote the cdf associated with x. Note that F_{x′′_n}(r) = inf{F_{x_m}(r) : n ≤ m}, while F_{x′_n}(r) is the right-continuous modification of sup{F_{x_m}(r) : n ≤ m}. For any point of continuity r of F_{x*}, F_{x_m}(r) → F_{x*}(r), so F_{x*}(r) = sup{inf{F_{x_m}(r) : n ≤ m} : n ≥ 1} by Lemma 1. Moreover, F_{x*}(r) = inf{sup{F_{x_m}(r) : n ≤ m} : n ≥ 1}. Let ε > 0. Then

F_{x*}(r − ε) ← sup{F_{x_m}(r − ε) : n ≤ m} ≤ F_{x′_n}(r) ≤ sup{F_{x_m}(r + ε) : n ≤ m} → F_{x*}(r + ε).

Then F_{x′_n}(r) → F_{x*}(r), as r is a point of continuity of F_{x*}.
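The cdf manipulations in this proof are easy to check numerically. The sketch below (our own, with an arbitrary convergent sequence of uniform lotteries and a finite truncation) computes the cdfs of x′_n and x′′_n as running maxima and minima of the F_{x_m}, and reports the sup-norm gap to the limit cdf:

```python
# Discretized check of the Lemma 2 construction (illustrative assumptions).
import numpy as np

grid = np.linspace(0.0, 1.0, 101)
# x_m uniform on [0, 1/2 + 1/m], converging to x* uniform on [0, 1/2].
F = np.array([np.clip(grid / (0.5 + 1.0 / m), 0.0, 1.0) for m in range(1, 200)])
F_limit = np.clip(grid / 0.5, 0.0, 1.0)

for n in (1, 10, 100):
    F_upper = F[n - 1:].min(axis=0)  # cdf of x''_n = sup{x_m : m >= n}
    F_lower = F[n - 1:].max(axis=0)  # cdf of x'_n  = inf{x_m : m >= n}
    print(n, float(np.abs(F_upper - F_limit).max().round(4)),
             float(np.abs(F_lower - F_limit).max().round(4)))
```

Note the inversion typical of first order stochastic dominance: the sup of the lotteries has the pointwise smallest cdf, and the inf the largest (the inf side here is limited only by the finite truncation at m = 199).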
The results we have obtained motivate two definitions that will prove useful. Say that the set X, together with the collection of finite experiments Σ∞, has the countable order property if for each x ∈ X and each neighborhood V of x in X there are x′, x′′ ∈ (∪_i B_i) ∩ V with x′ ≤ x ≤ x′′.
We say that X has the squeezing property if for any convergent sequence {x_n}_n in X, if x_n → x* then there is an increasing sequence {x′_n}_n, and a decreasing sequence {x′′_n}_n, such that x′_n ≤ x_n ≤ x′′_n, and lim_{n→∞} x′_n = x* = lim_{n→∞} x′′_n.
Lemma 3. If X = ∆([a, b])^S, then X has the squeezing property, and there is Σ∞ such that (X, Σ∞) has the countable order property.
Proof. The squeezing property follows from Lemma 2, and the countable order property from Theorem 15.11 of Aliprantis and Border (2006): indeed, let B be the set of probability distributions p with finite support on Q ∩ [a, b], where for all q ∈ Q ∩ [a, b], p(q) ∈ Q. Then we may choose a sequence of pairs B_i, and let Σ∞ = {B_i} with B = ∪B_i, so that the countable order property is satisfied.
6.2. Proof of Theorem 2. Without loss of generality, we may set [a, b] = [0, 1]. First we show that u_k → u in the compact-open topology. To this end, let x_k → x. We want to show that u_k(x_k) → u(x). Suppose then that this is not the case, and, by selecting a subsequence, that u_k(x_k) → Y > u(x) (without loss). Note that δ_{x_k} ∼_k p_k, where p_k is the lottery that pays 1 with probability u_k(x_k) ∈ [0, 1], and 0 with probability 1 − u_k(x_k). Let p be the lottery that pays 1 with probability Y, and 0 with probability 1 − Y (given that the range of u_k is [0, 1], we must have Y ∈ [0, 1]). Now we have that (δ_{x_k}, p_k) → (δ_x, p) and δ_{x_k} ∼_k p_k implies δ_x ∼ p. This is a contradiction because δ_x is indifferent, under ≿, to the lottery that pays 1 with probability u(x) and 0 with probability 1 − u(x); the latter is strictly first-order stochastically dominated by the lottery p.
To finish the proof, we show that V_k → V. This is the same as proving that V_k(f_k) → V(f) when f_k → f.
For each k, continuity and weak monotonicity imply that there is x_k ∈ [0, 1] so that V_k(f_k) = V_k(δ_{x_k}, . . . , δ_{x_k}) = u_k(x_k). Similarly, there is x with V(f) = V(δ_x, . . . , δ_x) = u(x).
Now we argue that x_k → x. Indeed {x_k} is a sequence in [0, 1]. If there is a subsequence that converges to, say, x′ > x, then we may choose x′′ = (x + x′)/2 so that, eventually along the subsequence, f_k ∼_k (δ_{x_k}, . . . , δ_{x_k}) ≿_k (δ_{x′′}, . . . , δ_{x′′}), while (δ_{x′′}, . . . , δ_{x′′}) ≻ (δ_x, . . . , δ_x) ∼ f, using weak monotonicity. This is impossible, because (f_k, (δ_{x_k}, . . . , δ_{x_k})) → (f, (δ_{x′}, . . . , δ_{x′})) and f_k ∼_k (δ_{x_k}, . . . , δ_{x_k}) imply that f ≿ (δ_{x′}, . . . , δ_{x′}) ≿ (δ_{x′′}, . . . , δ_{x′′}).
Finally, using what we know about the convergence of u_k to u, V_k(f_k) = u_k(x_k) → u(x) = V(f).
We now turn to the second statement in the theorem. Observe that H_k is a continuous function from [0, 1]^S onto [0, 1]. Let z_k ∈ [0, 1]^S be an arbitrary convergent sequence, and say that z_k → z*. We claim that H_k(z_k) → H(z*). Without loss we may assume that H_k(z_k) → Y, by taking a subsequence if necessary. For each k and s, choose y_k(s) ∈ [0, 1] for which u_k(y_k(s)) = z_k(s). Again, without loss, we may assume that y_k → y* by taking a subsequence if necessary, and using the finiteness of S. Observe also that u(y*(s)) = z*(s), as we have shown that u_k → u in the compact-open topology. Now, we may also choose ẑ_k ∈ [0, 1] so that
u_k(ẑ_k) = H_k(z_k) = H_k((u_k(y_k(s)))_{s∈S}),
and further may again, without loss (by taking a subsequence), assume that ẑ_k converges to ẑ*. Thus u(ẑ*) = lim u_k(ẑ_k) = lim H_k(z_k) = Y, again using what we have shown regarding u_k → u. Then (δ_{ẑ_k}, . . . , δ_{ẑ_k}) ∼_k (y_k(s))_{s∈S} so that, by taking limits, (δ_{ẑ*}, . . . , δ_{ẑ*}) ∼* (y*(s))_s. This implies that Y = u(ẑ*) = H((u(y*(s)))_{s∈S}) = H(z*).
6.3. Proof of Proposition 1. Take (≿_k, p_k) as in the statement of the Proposition, and observe that for every k, ce(≿_k, p_k) ∈ [a, b]. Suppose by means of contradiction that ce(≿_k, p_k) → ce(≿, p) is false; then there is some ε > 0 and a subsequence for which |ce(≿_{k_m}, p_{k_m}) − ce(≿, p)| > ε. By taking a further subsequence, we assume without loss that ce(≿_{k_m}, p_{k_m}) → α ≠ ce(≿, p). Now, p_{k_m} ∼_{k_m} δ_{ce(≿_{k_m}, p_{k_m})}, and p_{k_m} → p and δ_{ce(≿_{k_m}, p_{k_m})} → δ_α. So by definition of closed convergence, it follows that p ∼ δ_α; but this violates certainty monotonicity, as α ≠ ce(≿, p).
Proof of Theorem 3
First some notation. Let µ_n(≿) = (1/n) Σ_{i=1}^n 1_{x_i ≿ y_i}, and let ≿_n ∈ P be represented by u_n ∈ U. By definition of u_n, we have that µ_n(≿_n) ≥ µ_n(≿) for all ≿ ∈ P. And we use Vol(A) to denote the volume of a set A in R^d, when this is well defined (see Schneider (2014)).
Consider the measure µ on X × X defined as µ(A, ≿) = ∫_A q(x, y; ≿) dλ(x, y). In particular,

µ(≿′, ≿) = ∫_{X×X} 1_{≿′}(x, y) q(x, y; ≿) dλ(x, y)

is the probability that a choice with error, made at a randomly-drawn choice problem by an agent with preference ≿, will coincide with ≿′.
The key identification result shown in Chambers, Echenique, and Lambert (2021) is that, if ≿′ ≠ ≿, then µ(≿′, ≿) < µ(≿, ≿).
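This inequality is easy to probe by simulation. In the sketch below (a hypothetical linear family and a constant error probability, all our own assumptions), the empirical agreement probability µ(≿′, ≿*) peaks at the data-generating parameter:

```python
# Monte Carlo check that mu(., >=*) is maximized at the true preference.
import numpy as np

rng = np.random.default_rng(1)

def u(a, x):
    return a * x[..., 0] + (1.0 - a) * x[..., 1]   # hypothetical linear family

a_star, theta, n = 0.6, 0.75, 200_000
x = rng.uniform(0, 1, size=(n, 2))
y = rng.uniform(0, 1, size=(n, 2))

prefers_x = u(a_star, x) >= u(a_star, y)
chose_x = prefers_x == (rng.random(n) < theta)     # choice with error rate 1 - theta

def mu(a_prime):
    # empirical frequency with which the observed choice agrees with >=_{a'}
    return np.mean(chose_x == (u(a_prime, x) >= u(a_prime, y)))

for a in (0.4, 0.5, 0.6, 0.7):
    print(f"a' = {a}: mu ~ {mu(a):.4f}")
```

The maximum is attained near a′ = 0.6, where µ ≈ Θ = 0.75, in line with the identification result.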
Lemma 4. Consider a Lipschitz noisy choice environment (X, P, λ, q). There is a constant C with the following property: if ≿ and ≿′ are two preferences in P with representations u and u′ (respectively) in U, then

C ρ(u, u′)^d ≤ µ(≿, ≿) − µ(≿′, ≿).
Proof. The ball in R^d with center x and radius ε is denoted by B_ε(x). First we show that the map

ε ↦ Vol(B_ε(x) ∩ X) / Vol(B_ε(x)),

defined for x ∈ X, is nonincreasing as a function of ε > 0. Indeed, let ε₁ < ε₂, and let y ∈ B_{ε₂}(x) ∩ X. Then y ∈ X and ‖y − x‖ ≤ ε₂. By convexity of X, y₁ ≡ x + (ε₁/ε₂)(y − x) = (1 − ε₁/ε₂)x + (ε₁/ε₂)y ∈ X, and y₁ ∈ B_{ε₁}(x). Observe further, by properties of Lebesgue measure in R^d (the volume of a ball is independent of its center, and X has nonempty interior), that Vol({x + (ε₁/ε₂)(y − x) : y ∈ B_{ε₂}(x) ∩ X}) = (ε₁/ε₂)^d Vol(B_{ε₂}(x) ∩ X), while Vol(B_{ε₁}(x)) = (ε₁/ε₂)^d Vol(B_{ε₂}(x)). The claimed monotonicity follows.

Now we proceed with the proof of the statement in the lemma. Let ∆ = ρ(u, u′) and fix x ∈ X with (wlog) u(x) − u′(x) = ∆ > 0. Set ε = ∆/(4κ). We may assume that ε/2 ≤ ε̄, where ε̄ > 0 is a fixed radius at which the ratio above is bounded below, uniformly over x ∈ X, by a constant c′ > 0; otherwise we can use a larger upper bound on the Lipschitz constants for the functions in U.
Consider the interval I = [(u′(x) + κε)1, (u(x) − κε)1], with volume (u(x) − κε − (u′(x) + κε))^d = (∆/2)^d. Consider B_{ε/2}(x). If y ∈ B_{ε/2}(x) then |ũ(y) − ũ(x)| < κε for any ũ ∈ U. Now, if z ∈ I and y ∈ B_{ε/2}(x), then

u(y) > u(x) − κε = u((u(x) − κε)1) ≥ u(z)

by monotonicity (recall the Wald normalization u(t1) = t). Similarly,

u′(z) ≥ u′((u′(x) + κε)1) = u′(x) + κε > u′(y).
Thus (y, z) ∈ ≻ \ ≿′ for any (y, z) ∈ B_{ε/2}(x) × I, and

µ(≿, ≿) − µ(≿′, ≿) = ∫ 1_{≻\≻′}(y, z) [q(y, z; ≿) − q(z, y; ≿)] dλ(y, z)
≥ ∫_{B_{ε/2}(x)×I} 1_{≻\≻′}(y, z) [q(y, z; ≿) − q(z, y; ≿)] dλ(y, z)
≥ λ(B_{ε/2}(x) × I) inf{q(y, z; ≿) − q(z, y; ≿) : (y, z) ∈ B_{ε/2}(x) × I}.

The first identity is shown in Chambers, Echenique, and Lambert (2021). The second inequality follows because q(y, z; ≿) > 1/2 > q(z, y; ≿) when (y, z) ∈ ≻. The third inequality holds because (y, z) ∈ ≻ \ ≿′ ⊆ ≻ \ ≻′ on B_{ε/2}(x) × I.
By the assumptions we have placed on λ, and the calculations above, we know that

λ(B_{ε/2}(x)) ≥ c Vol(B_{ε/2}(x) ∩ X) ≥ c c′ Vol(B_{ε/2}(x)) = c c′ (ε/2)^d π^{d/2} / Γ(1 + d/2),

using the monotonicity of the volume ratio and ε/2 ≤ ε̄. So there is a constant C′′ (that only depends on X and c) so that λ(B_{ε/2}(x) × I) is bounded below by

(∆/2)^d C′′ (ε/2)^d π^{d/2} / Γ(1 + d/2) = (∆/2)^d C′′ ∆^d π^{d/2} / ((8κ)^d Γ(1 + d/2)) = C′ ∆^{2d}.

Here C′ is a constant that only depends on C′′, κ and d.
By the assumption that Θ > 1/2, we get that

µ(≿, ≿) − µ(≿′, ≿) ≥ C∆^{2d}

for some constant C that depends on C′ and Θ.
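As an aside, the d-ball volume formula invoked in the proof is easy to sanity check (our own snippet):

```python
# Vol(B_r) = pi^(d/2) * r^d / Gamma(1 + d/2), checked at d = 2 and d = 3.
import math

def ball_volume(d, r):
    return math.pi ** (d / 2) * r ** d / math.gamma(1 + d / 2)

print(ball_volume(2, 1.0), math.pi)          # area of the unit disk
print(ball_volume(3, 1.0), 4 * math.pi / 3)  # volume of the unit 3-ball
```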
Lemma 5. Consider a homothetic noisy choice environment (X, P, λ, q). There is a constant C with the following property: if ≿ and ≿′ are two preferences in P with representations u and u′ (respectively) in U, then

C ρ(u, u′)^{2d} ≤ µ(≿, ≿) − µ(≿′, ≿).

Proof. Let x ∈ X be such that ρ(u, u′) ≤ u(x) − u′(x) = ∆ > 0.
Choose η ∈ (0, 1) so that u(ηx) − u′(x) = ∆/2. Let I = (u′(x)1, u(ηx)1) and Z_η = [ηx, x] ∩ D^M_α. Note that I ⊆ X because, without loss, ‖x‖ = M and hence x ≥ α1. Then we must have α1 ≤ u′(x)1, as otherwise u′(x)1 ≪ α1 ≤ x, contradicting monotonicity and x ∼′ u′(x)1.
Observe that if y ∈ I and z ∈ Z_η then we have that

u(y) < u(u(ηx)1) = u(ηx) ≤ u(z),

as y < u(ηx)1 and ηx ≤ z; while

u′(z) ≤ u′(x) = u′(u′(x)1) < u′(y).

Hence (z, y) ∈ ≻ \ ≿′. First we estimate Vol(Z_η). Write Z_0 for [0, x] ∩ D^M_α.
Define the function f(z) = x + (1 − η)(z − x), and note that when z ∈ Z_0 then f(z) = ηx + (1 − η)z ∈ [ηx, x], because z ≥ 0. Note also that f(z) is a convex combination of x and z, so f(z) ∈ D^M_α, as the latter is a convex set. This shows that Z_η = {x} + (1 − η)(Z_0 − {x}), and hence that Vol(Z_η) = (1 − η)^d Vol(Z_0). Now, since Z_0 is star shaped we have

Vol(Z_0) = (1/d) ∫_{y∈S^M_α} ρ(y, [0, x])^d dy ≥ (1/d)(α/M)^d A^M_α,

where A^M_α is the surface area of S^M_α and ρ(y, [0, x]) = max{θ > 0 : θy ∈ [0, x]} is the radial function of the set [0, x] (see Schneider (2014), page 57). The inequality results from ρ(y, [0, x]) ≥ α/M, as x_i ≥ α and y_i ≤ M for any y ∈ S^M_α.
Now,

1 − η = 1 − (∆/2 + u′(x))/u(x) = (∆/2)/u(x) ≥ ∆/(2M),

as u(x) ≤ M. Thus we have that Vol(Z_η) ≥ ∆^d C′, with C′ = Vol(Z_0)/(2M)^d > 0, a constant.
Moreover, we have Vol(I) = (∆/2)^d, as I ⊆ X. Then we obtain, again using a formula derived in Chambers, Echenique, and Lambert (2021), and that q(z, y; ≿) > 1/2 > q(y, z; ≿) on (z, y) ∈ ≻:

µ(≿, ≿) − µ(≿′, ≿) = ∫ 1_{≻\≻′}(z, y) [q(z, y; ≿) − q(y, z; ≿)] dλ(z, y)
≥ ∫_{Z_η×I} 1_{≻\≻′}(z, y) [q(z, y; ≿) − q(y, z; ≿)] dλ(z, y)
≥ λ(Z_η × I) inf{q(z, y; ≿) − q(y, z; ≿) : (z, y) ∈ Z_η × I}
≥ (∆/2)^d C′ ∆^d Θ′,

where Θ′ ≡ inf{q(z, y; ≿) − q(y, z; ≿) : (z, y) ∈ Z_η × I} ≥ 2Θ − 1 > 0.

7.1. Proof of Theorem 3. For the rest of this proof, we denote µ(≿, ≿*) by µ(≿).
The rest of the proof uses routine ideas from statistical learning theory. By standard results (see, for example, Theorem 3.1 in Boucheron, Bousquet, and Lugosi (2005)), there exists an event E with probability at least 1 − δ on which

sup{|µ_n(≿) − µ(≿)| : ≿ ∈ P} ≤ E sup{|µ_n(≿) − µ(≿)| : ≿ ∈ P} + √(2 ln(1/δ)/n).
Moreover, again by standard arguments (see Theorem 3.2 in Boucheron, Bousquet, and Lugosi (2005)), we also have

E sup{|µ_n(≿) − µ(≿)| : ≿ ∈ P} ≤ 2 R_n(P), where R_n(P) = E sup{(1/n) Σ_i σ_i 1_{x_i ≿ y_i} : ≿ ∈ P}

is the Rademacher average of P. Now, by the Vapnik-Chervonenkis inequality (see Theorem 3.4 in Boucheron, Bousquet, and Lugosi (2005)), we have that

E sup{|µ_n(≿) − µ(≿)| : ≿ ∈ P} ≤ K √(V/n),
where V is the VC dimension of P, and K is a universal constant. So, on the event E, we have that sup{|µ_n(≿) − µ(≿)| : ≿ ∈ P} ≤ K√(V/n) + √(2 ln(1/δ)/n).
We now combine these statements with Lemmas 4 and 5. In particular, we let D = d or D = 2d depending on which of the lemmas we invoke. Let u* ∈ U represent ≿* and u_n ∈ U represent ≿_n. Let ∆ = ρ(u*, u_n), a magnitude that depends on the sample. Then, on the event E, by Lemma 4 or 5, we have that

C∆^D ≤ µ(≿*) − µ(≿_n) = µ(≿*) − µ_n(≿*) + µ_n(≿*) − µ_n(≿_n) + µ_n(≿_n) − µ(≿_n) ≤ 2K√(V/n) + 2√(2 ln(1/δ)/n),

where we have used that µ_n(≿*) − µ_n(≿_n) ≤ 0 by definition of ≿_n. This proves the second statement in the theorem.
To prove the first statement in the theorem, by Lemmas 4 and 5 again, and using that µ_n(≿_n) ≥ µ_n(≿*), we have that, for any η > 0,

Pr(ρ(u*, u_n) > η) ≤ Pr(µ(≿*) − µ(≿_n) > Cη^D)
≤ Pr(µ(≿*) − µ_n(≿*) > Cη^D/2) + Pr(µ_n(≿_n) − µ(≿_n) > Cη^D/2)
≤ 2 Pr(sup{|µ(≿′) − µ_n(≿′)| : ≿′ ∈ P} > Cη^D/2) → 0

as n → ∞, by the uniform convergence in probability result shown in Chambers, Echenique, and Lambert (2021).

7.2. Proof of Theorem 5. By standard results (see Hildenbrand (1970)), since X is locally compact Polish, the topology of closed convergence is compact metric.
We will show that for any subsequence of ≿_k, there is a subsubsequence converging to ≿*, which will establish that ≿_k → ≿*.
So choose a convergent subsubsequence of the given subsequence. To simplify notation, and with a slight abuse of notation, let us also refer to this subsubsequence as ≿_k. Call its limit ≿; ≿ is complete, as the set of complete relations is closed in the closed convergence topology. It is therefore sufficient to establish that ≻* ⊆ ≻ and ≿* ⊆ ≿.
First we show that x ≻* y implies that x ≻ y. So let x ≻* y. Let U and V be neighborhoods of x and y, respectively, such that x′ ≻* y′ for all x′ ∈ U and y′ ∈ V. Such neighborhoods exist by the continuity of ≿*. We prove first that if (x′, y′) ∈ U × V, then there exists N such that x′ ≻_n y′ for all n ≥ N. Recall that B = ∪{B′ : B′ ∈ Σ∞}. By hypothesis, there exist x′′ ∈ U ∩ B and y′′ ∈ V ∩ B such that x′′ ≤ x′ and y′ ≤ y′′. Each ≿_n is a strong rationalization of the finite experiment of order n, so if {x̃, ỹ} ∈ Σ_n then x̃ ≻* ỹ implies that x̃ ≻_m ỹ for all m ≥ n. Since x′′, y′′ ∈ B, there is N such that {x′′, y′′} ∈ Σ_N. Thus x′′ ≻* y′′ implies that x′′ ≻_n y′′ for all n ≥ N. So, for n ≥ N, x′ ≻_n y′, as ≿_n is weakly monotone. Now we establish that x ≻ y. Let {(x_n, y_n)} be an arbitrary sequence with (x_n, y_n) → (x, y). By hypothesis, there is an increasing sequence {x′_n}, and a decreasing sequence {y′_n}, such that x′_n ≤ x_n and y_n ≤ y′_n, while (x, y) = lim_{n→∞} (x′_n, y′_n). Let N be large enough that x′_N ∈ U and y′_N ∈ V. Let N′ ≥ N be such that x′_N ≻_n y′_N for all n ≥ N′ (we established the existence of such N′ above). Then, for any n ≥ N′ we have that
x_n ≥ x′_n ≥ x′_N ≻_n y′_N ≥ y′_n ≥ y_n.
By the weak monotonicity of ≿_n, then, x_n ≻_n y_n. The sequence {(x_n, y_n)} was arbitrary, so (y, x) ∉ ≿ = lim_{n→∞} ≿_n. Thus ¬(y ≿ x). Completeness of ≿ implies that x ≻ y.
In second place, we show that if x ≿* y then x ≿ y, thus completing the proof. So let x ≿* y. We recursively construct sequences x_{n_k}, y_{n_k} such that x_{n_k} ≿_{n_k} y_{n_k} and x_{n_k} → x, y_{n_k} → y.
So, for any k ≥ 1, choose x′ ∈ N_x(1/k) ∩ B with x′ ≥ x, and y′ ∈ N_y(1/k) ∩ B with y′ ≤ y, so that x′ ≿* x ≿* y ≿* y′, as ≿* is weakly monotone. Recall that ≿_n strongly rationalizes c_{≿*} for Σ_n. So x′ ≿* y′ and x′, y′ ∈ B imply that x′ ≿_n y′ for all n large enough. Let n_k > n_{k−1} (where we can take n_0 = 0) be such that x′ ≿_{n_k} y′; and let x_{n_k} = x′ and y_{n_k} = y′.
Then we have (x_{n_k}, y_{n_k}) → (x, y) and x_{n_k} ≿_{n_k} y_{n_k}. Thus x ≿ y.
7.3. Proof of Theorem 4. First, it is straightforward to show that x ≻ y implies x ≿′ y. Because otherwise there are x, y for which x ≻ y and y ≻′ x. Take an open neighborhood U about (x, y) and a pair (z, w) ∈ U ∩ (B × B) for which z ≻ w and w ≻′ z, a contradiction. Symmetrically, we also have that x ≻′ y implies x ≿ y. Now, without loss, suppose that there is a pair x, y for which x ≻ y and x ∼′ y. By connectedness and continuity, V = {z : x ≻ z ≻ y} is nonempty. Indeed, if we assume, towards a contradiction, that V = ∅, then {z : x ≻ z} and {z : z ≻ y} are nonempty open sets. Further, for any z ∈ X, either x ≻ z or z ≻ y (because if ¬(x ≻ z) then by completeness z ≿ x, which implies that z ≻ y). Conclude that {z : x ≻ z} ∪ {z : z ≻ y} = X and each of the sets is nonempty and open (by continuity of the preference ≿); these sets are disjoint, violating connectedness of X. So we conclude that V is nonempty. By continuity of the preference ≿, V is open.
We claim that there is a pair (w, z) ∈ (V × V) ∩ (B × B) for which w ≻ z. For otherwise, for all (w, z) ∈ (V × V) ∩ (B × B), w ∼ z. Conclude then, by continuity, that for all (w, z) ∈ V × V, w ∼ z. Observe that this implies that, for any w ∈ V, the set {z : w ≻ z ≻ y} = ∅, as if w ≻ z ≻ y, we also have that x ≿ w ≻ z, from which we conclude x ≻ z, so that z ∈ V and hence z ∼ w, a contradiction. Observe that {z : w ≻ z ≻ y} = ∅ contradicts the continuity of ≿ and the connectedness of X (same argument as nonemptiness of V; see our discussion above).
We have shown that there is (w, z) ∈ (V × V) ∩ (B × B) for which w ≻ z, so that x ≻ w ≻ z ≻ y. Further, we have hypothesized that x ∼′ y. By the first paragraph, we know that x ≿′ w ≿′ z ≿′ y. If, by means of contradiction, we have w ≻′ z, then x ≻′ y, a contradiction. So w ∼′ z and w ≻ z, a contradiction to ≿|_{B×B} = ≿′|_{B×B}.
See Kannai (1970) and Hildenbrand (1970) for a discussion; a proof of this claim is available from the authors upon request. Levin (1983) provides a generalization to incomplete preferences.

Variational preferences are widely used in macroeconomics and finance to capture decision makers' concerns for using a misspecified model. Here it is important to recover the different components of a representation, v and c, because they quantify key features of the environment. See for example Hansen and Sargent (2001); Hansen, Sargent, Turmuhambetova, and Williams (2006); Hansen and Sargent (2022).

If there is a countable dense A ⊆ X, then one can always construct such a sequence of experiments via a standard diagonalization argument.

As an illustration of the difference between these two notions of rationalizability, note that, in the setting of consumer theory, one leads to the Strong Axiom of Revealed Preference while the other to the Generalized Axiom of Revealed Preference. Of course, Afriat's approach is also distinct in assuming a finite dataset. See Chambers and Echenique (2016) for a detailed discussion.
References

Afriat, S. N. (1967a): "The Construction of Utility Functions from Expenditure Data," International Economic Review, 8(1), 67-77.
Afriat, S. N. (1967b): "The Construction of Utility Functions from Expenditure Data," International Economic Review, 8(1), 67-77.
Aliprantis, C. D., and K. Border (2006): Infinite Dimensional Analysis: A Hitchhiker's Guide. Springer, 3rd edn.
Balcan, M. F., F. Constantin, S. Iwata, and L. Wang (2012): "Learning valuation functions," in Conference on Learning Theory, pp. 4-1. JMLR Workshop and Conference Proceedings.
Balcan, M.-F., A. Daniely, R. Mehta, R. Urner, and V. V. Vazirani (2014): "Learning economic parameters from revealed preferences," in International Conference on Web and Internet Economics, pp. 338-353. Springer.
Basu, P., and F. Echenique (2020): "On the falsifiability and learnability of decision theories," Theoretical Economics, 15(4), 1279-1305.
Bei, X., W. Chen, J. Garg, M. Hoefer, and X. Sun (2016): "Learning Market Parameters Using Aggregate Demand Queries," in AAAI.
Beigman, E., and R. Vohra (2006): "Learning from revealed preference," in Proceedings of the 7th ACM Conference on Electronic Commerce, pp. 36-42.
Bergstrom, T. C., R. P. Parks, and T. Rader (1976): "Preferences which Have Open Graphs," Journal of Mathematical Economics, 3(3), 265-268.
Blundell, R., M. Browning, and I. Crawford (2008): "Best Nonparametric Bounds on Demand Responses," Econometrica, 76(6), 1227-1262.
Blundell, R. W., M. Browning, and I. A. Crawford (2003): "Nonparametric Engel Curves and Revealed Preference," Econometrica, 71(1), 205-240.
Border, K. C., and U. Segal (1994): "Dynamic Consistency Implies Approximately Expected Utility Preferences," Journal of Economic Theory, 63(2), 170-188.
Boucheron, S., O. Bousquet, and G. Lugosi (2005): "Theory of classification: A survey of some recent advances," ESAIM: Probability and Statistics, 9, 323-375.
Brown, D. J., and R. L. Matzkin (1996): "Testable Restrictions on the Equilibrium Manifold," Econometrica, 64(6), 1249-1262.
Camara, M. K. (2022): "Computationally Tractable Choice," in Proceedings of the 23rd ACM Conference on Economics and Computation, EC '22, p. 28. Association for Computing Machinery, New York, NY, USA.
Carvajal, A., R. Deb, J. Fenske, and J. K.-H. Quah (2013): "Revealed Preference Tests of the Cournot Model," Econometrica, 81(6), 2351-2379.
Cerreia-Vioglio, S., D. Dillenberger, and P. Ortoleva (2015): "Cautious expected utility and the certainty effect," Econometrica, 83(2), 693-728.
Cerreia-Vioglio, S., F. Maccheroni, M. Marinacci, and L. Montrucchio (2011): "Uncertainty averse preferences," Journal of Economic Theory, 146(4), 1275-1330.
Chambers, C. P., and F. Echenique (2016): Revealed Preference Theory, vol. 56. Cambridge University Press.
Chambers, C. P., F. Echenique, and N. S. Lambert (2021): "Recovering preferences from finite data," Econometrica, 89(4), 1633-1664.
Chandrasekher, M., M. Frick, R. Iijima, and Y. Le Yaouanq (2021): "Dual-self representations of ambiguity preferences," Econometrica, forthcoming.
Chapman, J., M. Dean, P. Ortoleva, E. Snowberg, and C. Camerer (2017): "Willingness to Pay and Willingness to Accept are Probably Less Correlated Than You Think," NBER Working Paper No. 23954.
Chapman, J., M. Dean, P. Ortoleva, E. Snowberg, and C. Camerer (2022): "Econographics," forthcoming, Journal of Political Economy: Microeconomics.
Chase, Z., and S. Prasad (2019): "Learning Time Dependent Choice," in 10th Innovations in Theoretical Computer Science Conference (ITCS).
Chateauneuf, A., and J. H. Faro (2009): "Ambiguity through confidence functions," Journal of Mathematical Economics, 45(9-10), 535-558.
Chateauneuf, A., M. Grabisch, and A. Rico (2008): "Modeling attitudes toward uncertainty through the use of the Sugeno integral," Journal of Mathematical Economics, 44(11), 1084-1099.
Chavas, J.-P., and T. L. Cox (1993): "On Generalized Revealed Preference Analysis," The Quarterly Journal of Economics, 108(2), 493-506.
Clinton, J., S. Jackman, and D. Rivers (2004): "The Statistical Analysis of Roll Call Data," The American Political Science Review, 98(2), 355-370.
Diewert, W. E. (1973): "Afriat and Revealed Preference Theory," The Review of Economic Studies, 40(3), 419-425.
Dong, J., A. Roth, Z. Schutzman, B. Waggoner, and Z. S. Wu (2018): "Strategic classification from revealed preferences," in Proceedings of the 2018 ACM Conference on Economics and Computation, pp. 55-70.
Echenique, F., D. Golovin, and A. Wierman (2011): "A revealed preference approach to computational complexity in economics," in Proceedings of the 12th ACM Conference on Electronic Commerce, pp. 101-110.
Echenique, F., and S. Prasad (2020): "Incentive Compatible Active Learning," in 11th Innovations in Theoretical Computer Science Conference (ITCS).
Falk, A., A. Becker, T. Dohmen, B. Enke, D. Huffman, and U. Sunde (2018): "Global Evidence on Economic Preferences," The Quarterly Journal of Economics, 133(4), 1645-1692.
Forges, F., and E. Minelli (2009): "Afriat's Theorem for General Budget Sets," Journal of Economic Theory, 144(1), 135-145.
Fox, R. H. (1945): "On topologies for function spaces," Bulletin of the American Mathematical Society, 51, 429-432.
Fudenberg, D., W. Gao, and A. Liang (2021): "How Flexible is that Functional Form? Measuring the Restrictiveness of Theories," in Proceedings of the 22nd ACM Conference on Economics and Computation, pp. 497-498.
Gilboa, I., and D. Schmeidler (1989): "Maxmin expected utility with non-unique prior," Journal of Mathematical Economics, 18(2), 141-153.
Hansen, L. P., and T. J. Sargent (2001): "Robust control and model uncertainty," American Economic Review, 91(2), 60-66.
Hansen, L. P., and T. J. Sargent (2022): "Risk, Ambiguity, and Misspecification: Decision Theory, Robust Control, and Statistics," Mimeo, NYU.
Hansen, L. P., T. J. Sargent, G. Turmuhambetova, and N. Williams (2006): "Robust control and model misspecification," Journal of Economic Theory, 128(1), 45-90.
Hildenbrand, W. (1970): "On Economies with Many Agents," Journal of Economic Theory, 2(2), 161-188.
Kannai, Y. (1970): "Continuity Properties of the Core of a Market," Econometrica, 38(6), 791-815.
Kertz, R. P., and U. Rösler (2000): "Complete Lattices of Probability Measures with Applications to Martingale Theory," Lecture Notes-Monograph Series, 35, 153-177.
Levin, V. L. (1983): "A continuous utility theorem for closed preorders on a metrizable σ-compact space," Doklady Akademii Nauk, 273(4), 800-804.
Maccheroni, F., M. Marinacci, and A. Rustichini (2006): "Ambiguity aversion, robustness, and the variational representation of preferences," Econometrica, 74(6), 1447-1498.
Mas-Colell, A. (1974): "Continuous and Smooth Consumers: Approximation Theorems," Journal of Economic Theory, 8(3), 305-336.
Mas-Colell, A. (1977): "On the Continuous Representation of Preorders," International Economic Review, 18(2), 509-513.
Mas-Colell, A. (1978): "On Revealed Preference Analysis," The Review of Economic Studies, 45(1), 121-131.
Matzkin, R. L. (1991): "Axioms of Revealed Preference for Nonlinear Choice Sets," Econometrica, 59(6), 1779-1786.
Nishimura, H., E. A. Ok, and J. K.-H. Quah (2017): "A Comprehensive Approach to Revealed Preference Theory," American Economic Review, 107(4), 1239-1263.
Poole, K. T., and H. Rosenthal (1985): "A Spatial Model for Legislative Roll Call Analysis," American Journal of Political Science, 29(2), 357-384.
Reny, P. J. (2015): "A Characterization of Rationalizable Consumer Behavior," Econometrica, 83(1), 175-192.
Richter, M. K. (1966): "Revealed Preference Theory," Econometrica, 34(3), 635-645.
Samuelson, P. A. (1938): "A note on the pure theory of consumer's behaviour," Economica, 5(17), 61-71.
Schmeidler, D. (1989): "Subjective probability and expected utility without additivity," Econometrica, 57(3), 571-587.
Schneider, R. (2014): Convex Bodies: The Brunn-Minkowski Theory. Cambridge University Press, 2nd edn.
Ugarte, C. (2022): "Preference Recoverability from Inconsistent Choices," Mimeo, UC Berkeley.
Varian, H. R. (1982): "The Nonparametric Approach to Demand Analysis," Econometrica, 50(4), 945-973.
von Gaudecker, H.-M., A. van Soest, and E. Wengstrom (2011): "Heterogeneity in Risky Choice Behavior in a Broad Population," The American Economic Review, 101(2), 664-694.
Zadimoghaddam, M., and A. Roth (2012): "Efficiently learning from revealed preference," in International Workshop on Internet and Network Economics, pp. 114-127. Springer.
Zhang, H., and V. Conitzer (2020): "Learning the Valuations of a k-demand Agent," in International Conference on Machine Learning.
| [] |
[
"Mixed Valence Pseudobrookite Al1.75Ti1.25O5: High Temperature Phase Transitions, Magnetism and Resistivity",
"Mixed Valence Pseudobrookite Al1.75Ti1.25O5: High Temperature Phase Transitions, Magnetism and Resistivity"
] | [
"Davor Tolj \nLaboratory of Quantum Magnetism\n\n",
"Wenhua Bi \nCrystal Growth Facility Institute of Physics\nEcole Polytechnique Fédérale de Lausanne -EPFL\nSwitzerland\n",
"Yong Liu \nCrystal Growth Facility Institute of Physics\nEcole Polytechnique Fédérale de Lausanne -EPFL\nSwitzerland\n",
"Ivica Zivkovic \nLaboratory of Quantum Magnetism\n\n",
"Henrik M Ronnow \nLaboratory of Quantum Magnetism\n\n",
"Arnaud Magrez \nCrystal Growth Facility Institute of Physics\nEcole Polytechnique Fédérale de Lausanne -EPFL\nSwitzerland\n"
] | [
"Laboratory of Quantum Magnetism\n",
"Crystal Growth Facility Institute of Physics\nEcole Polytechnique Fédérale de Lausanne -EPFL\nSwitzerland",
"Crystal Growth Facility Institute of Physics\nEcole Polytechnique Fédérale de Lausanne -EPFL\nSwitzerland",
"Laboratory of Quantum Magnetism\n",
"Laboratory of Quantum Magnetism\n",
"Crystal Growth Facility Institute of Physics\nEcole Polytechnique Fédérale de Lausanne -EPFL\nSwitzerland"
] | [] | Dark blue single crystals of Al 1.75 3+ Ti 1.0 4+ Ti 0.25 3+ O 5 were grown with a novel synthesis method based on the reaction of a Ti 3+ /Ti 4+ containing langbeinite melt and Al2O3. The obtained needles crystallize in the pseudobrookite structure and undergo two reversible phase transitions from orthorhombic Cmcm to C2/m first and subsequently to C2 symmetry. Like the known aluminum titanate pseudobrookites, anisotropic thermal expansion is observed. The temperature evolution of the crystal structure reveals some insights into the mechanism leading to the decomposition of the Al1.75Ti1.25O5 above 725°C. The magnetic and electrical properties are discussed and compared to other reported aluminum titanate pseudobrookites. | null | [
"https://export.arxiv.org/pdf/2211.17252v2.pdf"
] | 254,096,044 | 2211.17252 | 9fa8b9224a2b0c2d5d6319f1bc5344d2e7d29643 |
Mixed Valence Pseudobrookite Al1.75Ti1.25O5: High Temperature Phase Transitions, Magnetism and Resistivity
Davor Tolj
Laboratory of Quantum Magnetism
Wenhua Bi
Crystal Growth Facility Institute of Physics
Ecole Polytechnique Fédérale de Lausanne -EPFL
Switzerland
Yong Liu
Crystal Growth Facility Institute of Physics
Ecole Polytechnique Fédérale de Lausanne -EPFL
Switzerland
Ivica Zivkovic
Laboratory of Quantum Magnetism
Henrik M Ronnow
Laboratory of Quantum Magnetism
Arnaud Magrez
Crystal Growth Facility Institute of Physics
Ecole Polytechnique Fédérale de Lausanne -EPFL
Switzerland
Mixed Valence Pseudobrookite Al1.75Ti1.25O5: High Temperature Phase Transitions, Magnetism and Resistivity
Keywords: Pseudobrookite; Phase transitions; Mixed valence; langbeinite; …
Dark blue single crystals of Al 1.75 3+ Ti 1.0 4+ Ti 0.25 3+ O 5 were grown with a novel synthesis method based on the reaction of a Ti 3+ /Ti 4+ containing langbeinite melt and Al2O3. The obtained needles crystallize in the pseudobrookite structure and undergo two reversible phase transitions from orthorhombic Cmcm to C2/m first and subsequently to C2 symmetry. Like the known aluminum titanate pseudobrookites, anisotropic thermal expansion is observed. The temperature evolution of the crystal structure reveals some insights into the mechanism leading to the decomposition of the Al1.75Ti1.25O5 above 725°C. The magnetic and electrical properties are discussed and compared to other reported aluminum titanate pseudobrookites.
Introduction
Pseudobrookite was discovered in 1878 on the Uroi Hill in Romania by A. Koch. [1] The appearance of the crystals, the crystal system, as well as the physical and chemical properties are very reminiscent of brookite, the orthorhombic polytype of TiO2. By closely examining the shape and measuring the main angles between the facets of the crystals discovered by A. Koch, Prof. von Rath proved that the crystals were a "false" brookite and gave the name of pseudobrookite to the new mineral. [2]

In 1930, L. Pauling determined the atomic arrangement of pseudobrookite by X-ray diffraction and confirmed that there were no structural relationships between brookite and pseudobrookite crystals. [3][4] The latter have a formula Fe2TiO5 and crystallize in the Cmcm space group (when the lattice parameter a is the smallest axis and c the largest). In the original description given by L. Pauling, the structure is composed of oxygen octahedra containing iron or titanium. Each iron-containing octahedron shares one edge with another iron-containing octahedron and three edges with titanium-containing octahedra, while each titanium-containing octahedron shares six edges with iron-containing octahedra. The two octahedra are strongly distorted, and are arranged to form c-oriented double chains weakly bonded by the shared edges. The framework of the structure gives rise to rhombus-shaped open channels that extend along the c-axis (Fig. 1a). It was shown in synthetic and natural pseudobrookites that Fe and Ti can substitute for each other at both metal sites, so that the solid solution between Fe2TiO5 and FeTi2O5 has varying proportions of Fe 2+ and Fe 3+ . [5][6]

Although the most studied pseudobrookites have a composition M2 3+ Ti 4+ O5 (M = Sc, Cr, Fe, Ti, Ga, Al) or M 2+ Ti2 4+ O5 (M = Mg, Fe, Co), isomorphic niobates, tantalates, zirconates, antimonates, vanadates, as well as nitrides, oxynitrides and rare earth tetraoxybromides, have been synthesized. [7][8][9][10][11][12] Such a broad compositional spectrum allows pseudobrookite to be found in a wide variety of applications. For instance, Ti3-dO4-xNx, formed by the N substitution of the high temperature phase of anosovite Ti3O5, isostructural with pseudobrookite, has a band gap of 2.6 eV and exhibits superior photocatalytic performance to TiO2. [12] Ta3N5 materials, which exhibit one of the highest solar-to-hydrogen conversion efficiencies, are among the most effective compounds for renewable hydrogen production via water splitting. [13] The topotactic insertion of alkali metals in the open channels gives pseudobrookite materials the potential to be applied in Li-ion batteries. [14][15]

Pseudobrookites with lattices containing magnetic elements, in which the exchange interactions are frustrated by the site symmetry of the moments, are of great fundamental interest, as a plethora of physical phenomena can occur in such geometrically frustrated systems. CoTi2O5 and FeTi2O5 are two examples of such systems, both of which exhibit a long-range antiferromagnetic ordered state with a spin-driven Jahn-Teller lattice distortion. [9][10]

Aluminum titanates (Al2-xTi1+xO5) are the most applied pseudobrookites. With high aluminum content (Al2TiO5), they are widely used in high-temperature applications where thermal shock resistance and thermal insulation are required, such as thermal pigments and barriers, internal combustion engine components and metallurgy.
[16][17][8][18] The strong anisotropy of thermal expansion generates localized internal stress and causes severe microcracking in pseudobrookites. However, the addition of appropriate stabilizers such as SiO2, Fe2O3 and MgO enhances the mechanical strength and improves the sinterability of the ceramics. [19] Al2TiO5 was also used as a precursor to produce AlTi alloys for the aerospace industry by magnesiothermic reduction. [20] Skala et al. have shown that the aluminum stoichiometry can be higher than 2 in aluminum titanate pseudobrookites at high temperature. [17]

Al2-xTi1+xO5 pseudobrookites with 0 < x < 1 are less studied, although Al2TiO5 and AlTi2O5 crystallize in the same orthorhombic Cmcm structure. The solid solution seems to be discontinuous, and the substitution of Al 3+ by Ti 3+ was found to be limited to x ≤ 0.4 in polycrystalline samples. [21] Aluminum titanate pseudobrookite with high titanium content (1 ≤ x ≤ 2) shows a rich structural phase diagram with three different structures at room temperature, including a distorted monoclinic pseudobrookite structure. At room temperature, Ti3O5 (x=2) exhibits a different monoclinic structure that transits to the orthorhombic pseudobrookite structure at about 500 K. The presence of Ti 3+ (3d 1 , spin 1/2) and Ti 4+ (3d 0 , spin 0) in these compounds has led to an intensive characterization of their magnetic and transport properties, which are dependent on x. While Al2-xTi1+xO5 pseudobrookites behave as an isolated spin system when x ≤ 1.5, non-magnetic Ti 3+ -Ti 3+ dimers develop when the Ti-Ti distance shortens with increasing x. [22] A Charge Density Wave (CDW) is proposed theoretically in AlTi2O5 (x=1), but the material experimentally shows no sign of a CDW state. [22][23] All Al2-xTi1+xO5 pseudobrookites show an insulating behavior, with a band gap narrowing from 0.1 eV to 0.05 eV when x is increased from 1 to 2. [23]

Herein, we report a facile growth process to produce millimeter-sized single crystals of Al1.75Ti1.25O5. The crystals undergo phase transitions at 550°C and at 650°C. The thermal expansion behavior, as well as the magnetic and transport properties of Al1.75Ti1.25O5, are also discussed.
Experimental details

Synthesis
Single crystals of Al1.75Ti1.25O5 were grown using single-phase K1.7Ti2(PO4)3 powder (prepared similarly as described elsewhere). [24] The powder was thoroughly ground, placed in an α-alumina crucible, and heated at 1550 °C for 24 h in a pure argon atmosphere. The melt was then cooled down to 1000 °C at a rate of 3 °C/h. Subsequently, the heating was stopped and the furnace cooled down to room temperature naturally.
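For reference, a back-of-the-envelope estimate of the programmed furnace time implied by this schedule (our own arithmetic, not part of the original report):

```python
# 24 h dwell at 1550 C, then cooling from 1550 C to 1000 C at 3 C/h.
dwell_h = 24.0
cool_h = (1550.0 - 1000.0) / 3.0
total_h = dwell_h + cool_h
print(f"cooling step: {cool_h:.0f} h; total programmed time: "
      f"{total_h:.0f} h (~{total_h / 24:.1f} days)")
```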
SEM-EDX and XRF
Sample morphology and composition were examined by scanning electron microscopy (SEM, Gemini 300 with an Oxford Instruments EDX detector). Energy-dispersive X-ray spectroscopy (EDX) analysis was performed on multiple single crystal samples from the batch to obtain precise elemental ratio information. The aluminum to titanium ratio was further confirmed using X-ray fluorescence (Orbis PC Micro-EDXRF analyzer).
X-Ray diffraction
Powder XRD

The precursor, powder, and single crystals of Al1.75Ti1.25O5 were characterized by powder X-ray diffraction (PXRD) at room temperature on a Malvern-Panalytical diffractometer (Empyrean system) with Cu Kα radiation (λ = 1.5148 Å) operating at 45 kV and 40 mA in Bragg-Brentano geometry. Patterns were collected between 10 and 100° in 2θ with a step size of 0.013°.
Single crystal XRD
A high quality crystal of suitable size was selected and mounted on a goniometer head with a cryo-loop. Frames were collected at 100 K, 200 K and 300 K on a Rigaku Synergy-I XtaLAB X-ray diffractometer, equipped with a Mo micro-focusing source (λKα = 0.71073 Å) and a HyPix-3000 Hybrid Pixel Array detector (Bantam). The temperature was controlled by a Cryostream 800 from Oxford Cryosystems Ltd. CrysAlis Pro [25] and OLEX2 [26] software were used for data reduction and structural refinements, respectively. Structure solutions were obtained with the ShelXT program. [27] All other experimental details are listed in Table S1.
Structures at 300 K, 200 K and 100 K are available as CIF files with CSD numbers 2221313, 2221314, and 2221315, respectively. The DIAMOND program from Crystal Impact was used for crystal structure plotting. [28]

High Temperature Powder XRD

A few needles of Al1.75Ti1.25O5 were ground into fine powder. It was subsequently sealed under vacuum in a quartz capillary, which was mounted in the high temperature chamber installed on the powder X-ray diffractometer. Patterns were recorded between 25°C and 1000°C every 25°C. Rietveld refinements were performed using the FullProf package. [29]
Results and discussion
Growth of single crystals
To the best of our knowledge, investigations of Al2-xTi1+xO5 have been limited to polycrystalline samples, with the exception of aluminum titanate pseudobrookites with 1 ≤ x ≤ 2 studied as single crystals grown by the floating zone (FZ) method. For the FZ growth, pseudobrookites were first prepared by a solid-state reaction between Al2O3, TiO2 and Ti2O3 at 1000°C in air. The resulting polycrystalline samples were then pressed into feed and seed rods for the FZ process. Although no information on the growth conditions is given in the paper other than the growth atmosphere being a mixture of Ar and H2, [30] the temperature required to melt the rods and stabilize the molten zone should be greater than 1850°C, according to the Al2O3-TiO2 phase diagram proposed by D. Goldberg. [31] On the other hand, K1.7Ti2(PO4)3 is a mixed valence (1.3 Ti 4+ + 0.7 Ti 3+ , average oxidation state of Ti of 3.6) langbeinite that melts at about 1250°C. [32] It is well known that molten phosphates react strongly with alumina crucibles. Consequently, other crucibles, like platinum, are preferred for the growth of phosphate single crystals. [33] We took advantage of the known alumina diffusion in molten phosphates to grow single crystals of Al1.75Ti1.25O5 from a langbeinite melt held in an alumina crucible.
Structure refinement
The crystal structure was determined based on single crystal diffraction data collected at 300 K. As shown in Fig. S1, the reconstructed reciprocal space is consistent with the orthorhombic Cmcm space group (100% of the diffraction spots indexed). The symmetry, the refined lattice constants and the atomic coordinates confirm the structure of the needles to be pseudobrookite.
In Fig. 2, the room temperature lattice parameters of Al1.75Ti1.25O5, obtained by Rietveld refinement, are compared to the lattice parameters of the solid solution Al2-xTi1+xO5 versus the oxidation state of Ti. [34] As expected, the lattice parameters evolve linearly with x and consequently with the average oxidation state of Ti. The charge neutrality and the oxygen stoichiometry of pseudobrookites impose that for every Al substituted by Ti, additional Ti 3+ needs to be present. Therefore, the average oxidation state of Ti in AlTi2O5 is 3.5+ while Al2TiO5 only contains Ti 4+ . As shown in Fig. 2b distance as the equatorial plans are parallel to a mirror plane. In Al2TiO5, [17] Al1.75Ti1.25O5 and AlTi2O5, [23] the apical distance of the M2 containing octahedron is larger than in the M1 Low temperature SC diffraction data shows that the aorth-lattice parameter of the Al1.75Ti1.25O5
contracts from 100 K to 300 K with a coefficient of thermal expansion (CTE) of −8.92 × 10⁻⁶ K⁻¹, while the borth and corth lattice parameters expand with close CTEs of 4.08 × 10⁻⁶ K⁻¹ and 3.90 × 10⁻⁶ K⁻¹, respectively. Similar thermal expansion anisotropy has been observed in other aluminum titanate pseudobrookites. [35]
High temperature behavior
As can be seen in Fig. 3, the temperature evolution of the powder patterns clearly shows discontinuities at 550°C as well as at 650°C (Fig. 3a), which are best seen in the shift of the 0kl peak positions with temperature. Above 725°C, additional XRD peaks are assigned to rutile and corundum. Their appearance and their increasing height with temperature illustrate the thermal decomposition of Al1.75Ti1.25O5, as previously observed in all aluminum titanate pseudobrookites. [35] Above 950°C, the intensity of the pseudobrookite peaks decreases quickly, while those of the rutile and corundum peaks keep increasing. The quartz capillary starts crystallizing and strong SiO2 peaks can be observed.
From room temperature to 550°C, the lattice parameters of Al1.75Ti1.25O5 were refined in the Cmcm orthorhombic symmetry. Although the absolute values of the CTE measured at low temperature from single-crystal XRD data cannot be directly compared with those measured at high temperature from powder XRD data, the thermal evolutions of the lattice parameters determined below and above room temperature are consistent in their trend. While the CTEs along the borth and corth axes are constant (1.029 × 10⁻⁵ K⁻¹ and 1.895 × 10⁻⁵ K⁻¹, respectively) over this temperature range, as indicated by the linear evolution of the lattice parameters versus temperature, the CTE along the a axis is temperature dependent. As can be seen in Fig. 4a, the aorth lattice parameter contracts up to about 300°C (CTE = −2.843 × 10⁻⁵ K⁻¹) while it is almost temperature independent between 300°C and 550°C (CTE = −5.30 × 10⁻⁷ K⁻¹). Between 550°C and 575°C, Al1.75Ti1.25O5 undergoes a first phase transition from the orthorhombic Cmcm symmetry to a C2/m monoclinic symmetry. A similar transition was already observed with the increase of Ti content in the solid solution Al2-xTi1+xO5 (1.5 < x ≤ 2, β and λ phases). [22] As can be seen in Fig. 4b, the βmono angle increases almost linearly in the 575°C-675°C temperature range. The bmono axis, corresponding to the aorth lattice parameter, remains almost temperature independent (CTE = −2.25 × 10⁻⁶ K⁻¹), while the CTE of the cmono axis, corresponding to the corth lattice parameter, is reduced to 9.97 × 10⁻⁶ K⁻¹ compared to the CTE along the same axis in the 25°C-550°C temperature range. On the other hand, the amono axis, corresponding to the borth lattice parameter, contracts from 550°C to 675°C with a CTE of −7.04 × 10⁻⁶ K⁻¹. A second discontinuity in the temperature evolution of the lattice parameters (Fig. 4a) as well as of the peak positions (Fig. 3a) is observed between 675°C and 700°C. It corresponds to a second phase transition from the C2/m to a C2 monoclinic structure. From 700°C to the temperature of thermal decomposition of Al1.75Ti1.25O5, the β angle decreases and bmono shrinks (CTE = −9.36 × 10⁻⁶ K⁻¹), while amono and cmono expand similarly with CTEs of 1.560 × 10⁻⁵ K⁻¹ and 1.673 × 10⁻⁵ K⁻¹, respectively. These CTEs are very close to the ones measured below 550°C, when Al1.75Ti1.25O5 exhibits the Cmcm symmetry.
We performed a Rietveld refinement of the Al1.75Ti1.25O5 structure for each powder pattern from room temperature to 725°C in order to study the evolution of the interatomic distances, the coordination, as well as the Al and Ti distribution in the lattice (site occupancies) leading to the thermal decomposition of Al1.75Ti1.25O5 (Fig. 4).
Interatomic distances
The shortest distances between sites occupied by M atoms and between sites occupied by O atoms are found in the center of the lattice. At room temperature, the M2-M2 distance is 2.797 Å. The site occupancy is 62% by Al (i.e. 38% by Ti) (Fig. 4c). Considering the size and stoichiometry of Al 3+, Ti 3+ and Ti 4+, the average ionic radius of the atom occupying the M2 site is 0.567 Å.
As the temperature rises to 550°C, the average ionic radius of M2 decreases, following an increase of the site occupancy by Al. Simultaneously, the M2-M2 distance remains constant (Fig. 4d). Over the same temperature range, the M1-M1 and M1-M2 distances are approximately 3.08 Å and 3.61 Å, respectively. They remain almost unaffected by the rise of temperature and by the increase of the average ionic radius of the atoms located in the sites.
As refined from both single-crystal and powder XRD data, the O2-O2 distance is short (2.44 Å) compared to the ionic radius of oxygen, which is 1.36 Å when three-fold coordinated. [36] As the temperature rises up to 550°C, the O2-O2 distance shrinks to approximately 2.35 Å.
At 550°C, the symmetry breaking due to the Cmcm → C2/m structural transition allows the O2 atoms to shift from the 8f position on the mirror planes located at x = 0 and x = 1/2. This provides O2 with a new degree of freedom to move along the a-axis.
In summary, Al1.75Ti1.25O5 undergoes two reversible phase transitions, from the Cmcm pseudobrookite structure to the C2/m monoclinic structure at 550°C and subsequently to a C2 monoclinic structure at 650°C. The phase transitions are caused by the decrease of the interatomic distance between the two oxygen atoms located in the center of the lattice, despite the fact that the lattice expands. The coordination of M is progressively reduced, which weakens the structure. This effect, associated with a segregation of Al on specific crystal sites, leads to the decomposition of Al1.75Ti1.25O5 into Al2O3 and TiO2 starting from 725°C.
Physical properties
The temperature dependence of the magnetic susceptibility is presented in Fig. 5a. At high temperatures, the paramagnetic behavior is Curie-Weiss-like. The temperature-dependent magnetic susceptibility is fitted, in the temperature range ≈110-320 K, to a Curie-Weiss law,
χ(T) = χ0 + C / (T − θ),    (1)
where C is the Curie constant and θ the Curie-Weiss temperature. Because aluminum has no unpaired electrons in its 3+ valence state, as is the case for titanium in the 4+ state, the magnetic moment is assumed to reside only on the Ti 3+ 3d 1 atoms (0.25 per unit cell). The best fit to the experimental data (see inset of Fig. 5a) yields θ = −14.8 K, χ0 = 5.21 × 10⁻⁴ emu mol⁻¹ Oe⁻¹ and C = 0.2618 emu mol⁻¹ Oe⁻¹ K⁻¹, corresponding to an effective magnetic moment of 1.45 μB. A calculation assuming a spin-only contribution (spin quantum number S = 1/2, g factor of the 3d electron g = 2) gives a value of 1.73 μB, showing that the Ti 3+ ions appear primarily as free spins. However, the observed effective moment is at the lower range of values observed for Ti 3+ compounds.
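As a quick consistency check of these numbers (using the standard relation μeff = √(8C) μB for a Curie constant expressed in emu K mol⁻¹ Oe⁻¹, a relation not stated explicitly in the text): √(8 × 0.2618) ≈ 1.45, which reproduces the quoted effective moment of 1.45 μB.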
A similar deficit in the number of Ti 3+ ions, compared to the number calculated from the formal valence, was observed in other mixed-valence pseudobrookite compounds of the Al2-xTi1+xO5 solid solution series (1 ≤ x ≤ 1.5). [22] The proposed pairing of magnetic ions into antiferromagnetic Ti 3+ -Ti 3+ spin-singlet dimers could not be inferred from the magnetic susceptibility behavior for this composition. This effect, however, should be limited due to the comparatively low concentration of Ti 3+ ions. A small contribution could be hidden, together with the effects of crystal defects and impurities from the alumina crucible, in the temperature-independent susceptibility χ0, lowering the calculated effective magnetic moment.

Fig. 5b shows the temperature dependence of the electrical resistivity of an Al1.75Ti1.25O5 single crystal. It shows semiconducting behavior in the accessible temperature range of 160 to 400 K. Below 160 K, the resistance was too large to be measured. The linear part of the resistivity (190-340 K) was fitted to the thermal activation model using the equation
ρ(T) = ρ0 exp(Ea / (kB T)),    (2)
where ρ0 is the prefactor, kB is the Boltzmann constant and Ea is the thermal activation energy.
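The Arrhenius analysis in Eq. 2 amounts to a linear fit of ln ρ versus 1/T; a minimal sketch of such a fit (illustrative only, not the authors' analysis code) could be:

```python
import numpy as np

def activation_energy_eV(T, rho):
    """Fit ln(rho) = ln(rho0) + Ea / (kB * T) over the linear range (Eq. 2)
    and return Ea in eV; kB is Boltzmann's constant in eV/K.

    T:   array of temperatures in K (e.g. the 190-340 K linear range)
    rho: array of resistivities (any consistent unit, e.g. Ohm cm)
    """
    kB = 8.617333262e-5  # eV/K
    slope, _ = np.polyfit(1.0 / np.asarray(T), np.log(np.asarray(rho)), 1)
    return slope * kB
```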
The observed room temperature resistivity (135 Ωcm) and activation energy (Ea = 0.18 eV)
follow the trend of values reported for other mixed-valence aluminum titanate compounds with an orthorhombic structure in the solid solution Al2-xTi1+xO5 (1 ≤ x ≤ 1.5). [22] The introduction of the magnetic titanium 3d 1 metal centers significantly alters the magnetic and electronic properties of aluminum titanate, which can be observed in the magnetic susceptibility and electrical resistivity measurements. Nominal aluminum titanate Al2TiO5 is white while Al1.75Ti1.25O5 is dark blue. The color change can be attributed to d-d transitions related to the presence of Ti 3+ ions and to a significant decrease in band gap energy (~3 eV for the parent compound Al2TiO5), as indicated by the resistivity measurements.

Figure S1: Representative cross sections (h0l and hk0) of the reciprocal space reconstruction for the Al1.75Ti1.25O5 crystal. The axes horth, korth and lorth (in yellow) correspond to the orthorhombic unit cell. The reciprocal unit cell is marked in white.
The systematic extinctions observed for the reflections (hkl: h+k=2n) indicate C centering of the orthorhombic lattice, while the systematic extinction of the reflections (h0l: l=2n) indicates the presence of a c-glide mirror perpendicular to the b axis. These observations confirm that the Al1.75Ti1.25O5 crystal corresponds to the Cmcm orthorhombic structure of pseudobrookite with lattice parameters aorth = 3.63 Å, borth = 9.55 Å and corth = 9.73 Å.
Figure S2: Al1.75Ti1.25O5 crystal structure at 600°C and at 725°C. The dark green polyhedra contain M2a atoms while the light green polyhedra contain M2b atoms.
Fig. 1: a) The 3D packing along the c axis. b) Optical photograph of Al1.75Ti1.25O5 single crystals on millimeter-scale paper. c) SEM image of a single crystal (upper panel) with EDX elemental maps (bottom panels).
After the growth, dark blue needle-like crystals were found attached to the alumina crucible. They can easily be removed mechanically from the crucible. The typical size of the needles is about 1.5×0.2×0.15 mm (Fig. 1b). On the other hand, the reaction of the langbeinite precursor with alumina powder under the same conditions (instead of the direct reaction with the alumina crucible) did not produce single crystals but rather resulted in a polycrystalline product with the same composition. The compositional SEM-EDX analysis of the single crystals showed an average element ratio of Al:Ti:O = 1.75:1.25:5. This composition was confirmed by XRF using Al2O3/TiO2 standards for accurate quantification of the aluminum and titanium stoichiometry. Chemical mappings as well as line scans reveal Al and Ti to be homogeneously distributed over the needles (Fig. 1c). In total, more than 5 needles were measured on multiple positions by either SEM-EDX or XRF, confirming the homogeneity of the chemical composition between needles. No foreign elements like K or P are found in the needles. The composition was further confirmed by structure refinement. The obtained site occupancies for Al and Ti give a stoichiometry of Al:Ti:O = 1.758(7):1.242(7):5 after refinement, which is in excellent agreement with the SEM-EDX and XRF results. The crystallographic information can be found in Table 1 in the supplementary information. In Al1.75Ti1.25O5, the average oxidation state of titanium is 3.8, similar to the one in the langbeinite precursor. The use of an argon atmosphere limits the oxidation of Ti during the crystal growth. Considering that aluminum has an oxidation state of +3 and oxygen an oxidation state of −2, the chemical formula of the pseudobrookite needles can therefore be written Al1.75 3+ Ti0.25 3+ Ti1.0 4+ O5. The precursor's EDX compositional analysis shows a compound with formula K1.7Ti2(PO4)3, exhibiting a Ti 4+ /Ti 3+ ratio in the same range as in the resulting aluminum titanate. Since it is possible to synthesize a series of langbeinite precursors with varying potassium content (K2-xTi2(PO4)3 with 0 ≤ x ≤ 1), precise control over the titanium valence state in the final product should be achievable by managing the starting precursor. [24]
Fig. 2: a) Lattice parameters at room temperature obtained via Rietveld analysis (black markers at x = 0.25 correspond to the title compound) as a function of x in Al2-xTi1+xO5. b) M1-M2 octahedra bonding motif (Al/Ti ratio depicted in light/dark blue, respectively).
The apical distances behave as a function of the Ti content similarly to the equatorial M-O distances. On the other hand, no clear trend can be seen in the evolution of the O-M-O apical angle versus the titanium content in either the M1- or the M2-containing octahedra.
Fig. 3: a) Contour plot showing the evolution of the XRD pattern with temperature. b) 3D packing view of the Al1.75Ti1.25O5 structure at different temperatures (orthorhombic structure: M1/M2-containing polyhedra in blue/green, respectively; monoclinic structure: M1/M2a/M2b-containing polyhedra in blue/dark green/light green, respectively).
Fig. 4: Temperature dependence of the change in: a) lattice parameters, b) beta angle, c) M2 site occupancy, d) M-M distances, e) O-O distances. f) Expansion of the M2b-O2a-M2b-O2a (green square) and M2a-O2b-M2a-O2b (red square) units in the monoclinic structure.
After the Cmcm → C2/m phase transition, the M2 as well as the O3 position is split into two, as is O2 into O2a and O2b. At 575°C, both M2a and M2b are 5-fold coordinated. The M2a square pyramids share edges and are arranged in layers perpendicular to the cmono axis. The double chains of edge-sharing M2b-containing square pyramids run parallel to the bmono axis (Fig. S2). As the temperature rises above the C2/m → C2 phase transition, many M-O distances become longer (close to 2.1 Å or higher) and the O-M-O angles are modified significantly. At 725°C, only 4 M1-O distances remain below 2 Å, with O-M-O getting close to the theoretical angle of a tetrahedron. Similar changes can be observed for M2a, indicating that the coordination of M1 and M2a is tending to 4. Above the C2/m → C2 phase transition, the layers built of M2a-containing square pyramids are progressively converted into corner-sharing (M2a)2-O4 blocks forming chains running along bmono (Fig. S3). In the case of M2b, only 2 M-O distances remain below 2 Å. However, the polyhedron angles remain close to the ones of a square pyramid.

M site occupancy

As can be seen in Fig. 4c, the Al and Ti distribution in the orthorhombic lattice of Al1.75Ti1.25O5 between the M1 and M2 sites is not equal. M2 is mainly occupied by Al. Taking into account the site occupancy, the formula of orthorhombic Al1.75Ti1.25O5 can be written as (Al0.52Ti0.48)(Al0.62Ti0.38)2O5 using the M1(M2)2O5 formalism. After the orthorhombic to monoclinic transition and the M2 site splitting, the M2a occupancy by Al continuously increases up to the second phase transition. In the structure with the C2 space group, M2a is exclusively occupied by Al. The formula of Al1.75Ti1.25O5 in the C2 symmetry is therefore (Al0.53Ti0.47)(Al1.0)(Al0.23Ti0.77)O5 with the M1M2aM2bO5 formalism. The M2a sites being exclusively occupied by Al suggests a chemical segregation in the structure, which could be the initiator of the thermal decomposition of Al1.75Ti1.25O5 into Al2O3 and TiO2.

Single crystal quenching

Al1.75Ti1.25O5 single crystals were annealed at 700°C in vacuum for 12 h and subsequently quenched to room temperature. As confirmed by X-ray diffraction, the structure of Al1.75Ti1.25O5 after quenching is the orthorhombic Cmcm pseudobrookite. The interatomic distances and the coordination of the M atoms are fully recovered. Therefore, both phase transitions are reversible. It has to be noted that the recovery of the site occupancies was expected. At 700°C, M2a is fully occupied with Al (occM2a(Al) = 1) and the occupancy of M2b with Al, occM2b(Al), is 0.317. After quenching, M2a and M2b combine as M2 with occM2(Al) = 0.606. Therefore, 2 × occM2(Al) ≈ occM2a(Al) + occM2b(Al), which is consistent with the M2, M2a and M2b site multiplicities being 8f, 4c and 4c, respectively.
Fig. 5: a) Temperature dependence of the magnetic susceptibility χ with H = 0.1 T. The inset shows the fit using the Curie-Weiss law, where the red line represents the fit to the measurement. b) Temperature dependence of the resistivity of an Al1.75Ti1.25O5 single crystal. The inset shows the plot of lnρ vs T⁻¹ used to extract the activation energy for conduction.
Both M1- and M2-containing octahedra in the pseudobrookite structure are strongly distorted. The M1-containing octahedron, with C2v symmetry, can be described with two M-O interatomic distances (1.868 Å and 2.107 Å) and three O-M-O angles (80.28°, 88.12° and 111.32°). The octahedron interatomic distances increase with the Ti content, since the ionic radius of Al 3+ is smaller than those of Ti 3+ and Ti 4+. However, the equatorial-plane angles remain very similar to those in the Al2TiO5 and AlTi2O5 structures. [17][23] The M2-containing octahedron has C1v symmetry; the four equatorial M-O interatomic distances are therefore different, ranging from 1.831 to 2.139 Å, while the four O-M-O angles vary between 82.16° and 101.32°.

The O2 position in the Cmcm space group splits into two 4i positions (i.e. O2a and O2b) in the C2/m space group due to the absence of the c-glide plane. At 575°C, the O2a-O2a and O2b-O2b interatomic distances are recovered (2.43 Å and 2.44 Å, respectively). However, the evolution of the O2a-O2a and O2b-O2b interatomic distances with temperature between 550°C and the decomposition temperature is different: while the O2a-O2a distance increases up to 2.62 Å, the O2b-O2b distance remains close to 2.34 Å, as before the Cmcm → C2/m transition (Fig. 4e). After the first phase transition, the M2 site is split and, up to the decomposition temperature, the M2a-M2a and M2b-M2b distances behave differently. The temperature has a limited effect on M2b-M2b, which stays close to 2.79 Å, while the M2a-M2a distance increases to 2.9 Å. In conclusion, the M2b-O2a-M2b-O2a unit in the monoclinic Al1.75Ti1.25O5 structure, highlighted in green in Fig. 4f, sees M2b-M2b unchanged while it expands along the O2a-O2a direction. On the other hand, the M2a-O2b-M2a-O2b unit, highlighted in brown, elongates along M2a-M2a while O2b-O2b is kept constant.
M containing polyhedron
Heating the Al1.75Ti1.25O5 crystals up to 550°C induces limited changes to the M1-containing octahedra but modifies the coordination of the M2-containing octahedron. The already long apical M2-O1 distance expands further from 2.10 Å to 2.20 Å, while the second apical M2-O2 distance shrinks from 1.92 Å to 1.79 Å. Therefore, the coordination of the M2 atoms gradually changes from a 6-fold distorted octahedron (at room temperature) to a 5-fold distorted square pyramid (at 550°C), as illustrated in Fig. 3b.
Conclusion

Single crystals of Al1.75Ti1.25O5 were grown by reaction of a K1.7Ti2(PO4)3 melt with Al2O3 at low temperature. This facile method, using the alumina crucible as a precursor, allows for fast growth of aluminum titanate single crystals with the pseudobrookite structure. They undergo a first phase transition from the Cmcm orthorhombic to the C2/m monoclinic symmetry at 550°C and a second transition to the C2 symmetry at 650°C. The temperature-driven evolution of the structure reveals that the decrease of the O-O interatomic distance, despite the lattice expansion, is the cause of the phase transitions. Furthermore, a reduced coordination of the Al and Ti atoms at higher temperature is observed, together with a non-random metal cation distribution. This leads to the instability of Al1.75Ti1.25O5 above 725°C, yielding the thermal decomposition into Al2O3 and TiO2. The calculated activation energy and the size of the magnetic moment for Al1.75Ti1.25O5 follow the trend in values reported for other mixed-valence aluminum titanate compounds, filling a gap in a less explored part of the solid solution series Al2-xTi1+xO5. The introduction of even a small amount of magnetic Ti 3+ into the parent aluminum titanate (Al2TiO5) significantly alters its electronic and magnetic properties. Precise control over the titanium oxidation state opens up various possibilities for titanium-containing pseudobrookite compounds with new structural and electronic properties that show promise in scientific research and industrial applications.

SUPPLEMENTARY INFORMATION

Table S1. Crystallographic table for single-crystal X-ray diffraction at room temperature.
Empirical formula: Al1.75Ti1.25O5
Formula weight: 187.05
Temperature (K): 300.00(2)
Wavelength (Å): 0.71073
Crystal system: Orthorhombic
Space group: Cmcm
a (Å): 3.6353(1)
b (Å): 9.5509(3)
c (Å): 9.7391(3)
Volume (Å3): 338.25(2)
Z: 4
Calculated density (Mg/m3): 3.672
Absorption coefficient (mm-1): 4.400
F(000): 361
Crystal size (mm): 0.11 × 0.09 × 0.08
Reflections collected/unique: 2866 / 477 [R(int) = 0.0166]
Completeness (%): 100.0
Data/restraints/parameters: 475 / 4 / 32
GOOF: 1.000
R1 [I>2sigma(I)]: 0.0154
wR2 [I>2sigma(I)]: 0.0372
R1 (all data): 0.0168
wR2 (all data): 0.0382
Extinction coefficient: 0.0024(8)
Largest peak & hole (e Å-3): 0.636 and −0.807
Rietveld refinement results: Al1.75Ti1.25O5 @ 550°C; Al1.75Ti1.25O5 @ 725°C
Supplementary information: The powder XRD patterns as well as the CIF files of the structures refined at low and high temperature are provided. The crystallographic table for single-crystal X-ray diffraction is included. The reciprocal space reconstruction from single-crystal XRD data is provided. Rietveld refinement patterns at 25°C, 550°C, 650°C and 725°C are shown.

Author contribution: The single crystals were grown by D.T. The single-crystal X-ray diffraction, the high-temperature
[1] A. Koch, Mineral. und Petrogr. Mitteilungen 1878, 1, 331.
[2] C. Pellache, Am. Miner. 1934, 19.
[3] L. Pauling, J. H. Sturdivant, Zeitschrift für Krist. - Cryst. Mater. 1928, 68, 239.
[4] L. Pauling, Zeitschrift für Krist. - Cryst. Mater. 1930, 73, 97.
[5] J. Ottemann, G. Frenzel, Schweizerische Mineral. und Petrogr. Mitteilungen 1965, 45, 819.
[6] D. M. Xirouchakis, Lithos 2007, 95, 1.
[7] H. Müller-Buschbaum, H.-R. Freund, Zeitschrift für Naturforsch. B 1974, 29, 590.
[8] P. Tiedemann, H. Müller-Buschbaum, Zeitschrift für Anorg. und Allg. Chemie 1982, 494, 98.
[9] F. K. K. Kirschner, R. D. Johnson, F. Lang, D. D. Khalyavin, P. Manuel, T. Lancaster, D. Prabhakaran, S. J. Blundell, Phys. Rev. B 2019, 99, 064403.
[10] F. Lang, L. Jowitt, D. Prabhakaran, R. D. Johnson, S. J. Blundell, Phys. Rev. B 2019, 100, 094401.
[11] G. Brauer, J. R. Weidlein, Angew. Chemie Int. Ed. English 1965, 4, 241.
[12] G. Hyett, M. A. Green, I. P. Parkin, J. Am. Chem. Soc. 2007, 129, 15541.
[13] G. Hitoki, A. Ishikawa, T. Takata, J. N. Kondo, M. Hara, K. Domen, Chem. Lett. 2002, 31, 736.
[14] S. J. Clarke, F. J. Disalvo, J. Solid State Chem. 1997, 132, 394.
[15] K. M. Min, K. S. Park, A. H. Lim, J. C. Kim, D. W. Kim, Ceram. Int. 2012, 38, 6009.
[16] B. Morosin, R. W. Lynch, Acta Crystallogr. Sect. B Struct. Crystallogr. Cryst. Chem. 1972, 28, 1040.
[17] R. D. Skala, D. Li, I. M. Low, 2009, 29, 67.
[18] E. Sequoia, G. Bayer, J. Less Comm. Met. 1971, 24, 129.
[19] I. J. Kim, L. G. Gauckler, J. Ceram. Sci. Technol. 2012, 3, 49.
[20] Z. Wang, M. Seo, I. Sohn, Metall. Mater. Trans. B Process Metall. Mater. Process. Sci. 2021, 52, 883.
[21] T. L. Lekanova, Y. I. Ryabkov, O. A. Sevbo, V. N. Filippov, Russ. J. Appl. Chem. 2005, 78, 1223.
[22] R. Takahama, T. Ishii, D. Indo, M. Arizono, C. Terakura, Y. Tokura, N. Takeshita, M. Noda, H. Kuwahara, T. Saiki, T. Katsufuji, R. Kajimoto, T. Okuda, Phys. Rev. Mater. 2020, 4, 1.
[23] T. Tohyama, R. Ogura, K. Yoshinaga, S. Naito, N. Miyakawa, E. Kaneshita, J. Phys. Chem. Solids 2019, 127, 252.
[24] A. Leclaire, A. Benmoussa, M. M. Borel, A. Grandin, B. Raveau, J. Solid State Chem. 1989, 78, 227.
[25] O. D. Rigaku, CrysAlis Pro, Rigaku Oxford Diffr. Ltd, Yarnton, 2015.
[26] O. V. Dolomanov, L. J. Bourhis, R. J. Gildea, J. A. K. Howard, H. Puschmann, J. Appl. Crystallogr. 2009, 42, 339.
[27] G. M. Sheldrick, Acta Crystallogr. Sect. A Found. Adv. 2015, 71, 3.
[28] H. Putz, K. Brandenburg, 1996.
[29] J. Rodríguez-Carvajal, Phys. B 1993, 192, 55.
[30] T. Tohyama, R. Ogura, K. Yoshinaga, S. Naito, N. Miyakawa, E. Kaneshita, J. Phys. Chem. Solids 2019, 127, 252.
[31] D. Goldberg, Rev. Int. Hautes Temp. Refract. 1968, 5, 181.
[32] The melting temperature was determined optically during the laser floating zone growth of K1.7Ti2(PO4)3 langbeinite.
[33] G. Dhanaraj, K. Byrappa, V. Prasad, M. Dudley, Eds., Springer Handbook of Crystal Growth, Springer Berlin Heidelberg, Berlin, Heidelberg, 2010.
[34] R. Takahama, T. Ishii, D. Indo, M. Arizono, T. S. I. S. Vi, 2020.
[35] G. Bayer, J. Less Common Met. 1971, 24, 129.
[36] R. D. Shannon, Acta Crystallogr. Sect. A 1976, 32, 751.

Rietveld refinement results: Al1.75Ti1.25O5 @ 25°C
| [] |
[
"Decomposed Mutual Information Estimation for Contrastive Representation Learning",
"Decomposed Mutual Information Estimation for Contrastive Representation Learning"
] | [
"Alessandro Sordoni ",
"Nouha Dziri ",
"Hannes Schulz ",
"Geoff Gordon ",
"Phil Bachman ",
"Remi Tachet "
] | [] | [] | Recent contrastive representation learning methods rely on estimating mutual information (MI) between multiple views of an underlying context. E.g., we can derive multiple views of a given image by applying data augmentation, or we can split a sequence into views comprising the past and future of some step in the sequence. Contrastive lower bounds on MI are easy to optimize, but have a strong underestimation bias when estimating large amounts of MI. We propose decomposing the full MI estimation problem into a sum of smaller estimation problems by splitting one of the views into progressively more informed subviews and by applying the chain rule on MI between the decomposed views. This expression contains a sum of unconditional and conditional MI terms, each measuring modest chunks of the total MI, which facilitates approximation via contrastive bounds. To maximize the sum, we formulate a contrastive lower bound on the conditional MI which can be approximated efficiently. We refer to our general approach as Decomposed Estimation of Mutual Information (DEMI). We show that DEMI can capture a larger amount of MI than standard non-decomposed contrastive bounds in a synthetic setting, and learns better representations in a vision domain and for dialogue generation. | null | [
"https://arxiv.org/pdf/2106.13401v1.pdf"
] | 235,652,395 | 2106.13401 | 824aa4526f5a94c0fe19ffdad190911390fa989a |
Decomposed Mutual Information Estimation for Contrastive Representation Learning
Alessandro Sordoni
Nouha Dziri
Hannes Schulz
Geoff Gordon
Phil Bachman
Remi Tachet
Decomposed Mutual Information Estimation for Contrastive Representation Learning
Recent contrastive representation learning methods rely on estimating mutual information (MI) between multiple views of an underlying context. E.g., we can derive multiple views of a given image by applying data augmentation, or we can split a sequence into views comprising the past and future of some step in the sequence. Contrastive lower bounds on MI are easy to optimize, but have a strong underestimation bias when estimating large amounts of MI. We propose decomposing the full MI estimation problem into a sum of smaller estimation problems by splitting one of the views into progressively more informed subviews and by applying the chain rule on MI between the decomposed views. This expression contains a sum of unconditional and conditional MI terms, each measuring modest chunks of the total MI, which facilitates approximation via contrastive bounds. To maximize the sum, we formulate a contrastive lower bound on the conditional MI which can be approximated efficiently. We refer to our general approach as Decomposed Estimation of Mutual Information (DEMI). We show that DEMI can capture a larger amount of MI than standard non-decomposed contrastive bounds in a synthetic setting, and learns better representations in a vision domain and for dialogue generation.
Introduction
The ability to extract actionable information from data in the absence of explicit supervision seems to be a core prerequisite for building systems that can, for instance, learn from few data points or quickly make analogies and transfer to other tasks. Approaches to this problem include generative models (Hinton, 2012; Kingma & Welling, 2014) and self-supervised representation learning approaches, in which the objective is not to maximize likelihood, but to formulate a series of (label-agnostic) tasks that the model needs to solve through its representations (Noroozi & Favaro, 2016; Devlin et al., 2019; Gidaris et al., 2018; Hjelm et al., 2019). Self-supervised learning includes successful models leveraging contrastive learning, which have recently attained performance comparable to that of their fully-supervised counterparts (Chen et al., 2020a).
Recent self-supervised learning methods can be seen as training an encoder f such that it maximizes the mutual information (MI) between representations f(·) of a pair of views x and y of the same input datum, I(f(x); f(y)) ≤ I(x; y).¹ For images, different views can be built using random flipping or color jittering (Chen et al., 2020a). For sequential data such as conversational text, the views can be past and future utterances in a given dialogue, or a particular word and its surrounding context (Stratos, 2019). Contrastive approaches train representations of pairs of views to be more similar to each other than to representations sampled from a negative sample distribution. The InfoNCE bound on I(x; y) (Oord et al., 2018) has been successful insofar as it enjoys much lower variance than competing approaches (Song & Ermon, 2020a). However, the capacity of the bound is limited by the number of contrastive samples used (McAllester & Stratos, 2020a; Poole et al., 2019) and it is therefore likely biased when a large amount of MI needs to be estimated, e.g. between high-dimensional objects such as natural images.

The starting point of this paper is to decompose I(x; y) by applying the chain rule on MI to obtain a sum of terms, each containing smaller chunks of the total MI that can be approximated with less bias by contrastive approaches. For example, consider creating a subview x' by removing information from x, e.g. by masking some pixels as depicted in Fig. 1 (left). By construction, I(x', x; y) = I(x'; y) + I(x; y|x') = I(x; y). Decomposed Estimation of Mutual Information (DEMI) prescribes learning representations that maximize each term in the sum by contrastive learning.
The starting point of this paper is to decompose I(x, y) by applying the chain rule on MI to obtain a sum of terms, each containing smaller chunks of the total MI that can be approximated with less bias by contrastive approaches. For example, consider creating a subview x by removing information from x, e.g. by masking some pixels as depicted in Fig. 1 (left). By construction, I(x , x; y) = I(x ; y)+I(x; y|x ) = I(x; y). Decomposed Estimation of Mutual Information (DEMI) prescribes learning representations that maximize each term in the sum, by contrastive learning. The condi- x 0 x y x 0
Figure 1: (left) Given two augmentations x and y, we create a subview x', which is obtained by occluding some of the pixels in x. We can maximize I(x; y) ≥ I(x'; y) + I(x; y|x') using a contrastive bound by training x to be closer to y than to other images from the corpus. Additionally, we train x to be closer to y than to samples from p(y|x'), i.e. we can use x' to generate hard negatives y, which corresponds to maximizing conditional MI and leads the encoder to capture features not explained by x'. (right) A fictional dialogue in which x and y represent the past and future of the conversation, respectively, and x' is the "recent past". In this context, the conditional MI term encourages the encoder to capture long-term dependencies that cannot be explained by the most recent utterances.
The conditional MI term measures the information about y that the model has gained by looking at x, given the information already contained in x'. An intuitive explanation of why this term may lead to capturing more of the total MI between views can be found in Fig. 1. For images (left), only maximizing I(x; y) could imbue the representations with the overall "shape" of the stick, and the representations would likely need many negative samples to capture other discriminative features of the image. By maximizing conditional MI, we hope to more directly encourage the model to capture these additional features, e.g. the embossed detailing. In the context of predictive coding on sequential data such as dialogue, by setting x' to be the most recent utterance (Fig. 1, right), the encoder is directly encouraged to capture long-term dependencies that cannot be explained by the most recent utterance.
One may wonder how DEMI is related to recent approaches maximizing MI between more than two views, amongst them AMDIM (Bachman et al., 2019), CMC (Tian et al., 2019) and SwAV (Caron et al., 2020). Interestingly, these models can be seen as maximizing the sum of MIs between views I(x, x'; y) = I(x'; y) + I(x; y). E.g., in Bachman et al. (2019), x and x' could be global and local representations of an image, and in Caron et al. (2020), x and x' could be the views resulting from standard cropping and the aggressive multi-crop strategy. This equality is only valid when the views x and x' are statistically independent, which usually does not hold. Instead, DEMI maximizes I(x, x'; y) = I(x'; y) + I(x; y|x'), which always holds. Most importantly, the conditional MI term encourages the encoder to capture more non-redundant information across views.
Our contributions are the following. We show that DEMI can potentially capture more of the total information shared between the original views x and y. We extend existing contrastive MI bounds to conditional MI estimation and present novel computationally tractable approximations. Additionally, our results offer another perspective on hard contrastive examples, e.g. Faghri et al. (2018), given that conditional MI maximization can be achieved by sampling contrastive examples from a partially informed conditional distribution instead of the marginal distribution. We first show in a synthetic setting that DEMI leads to capturing more of the ground-truth MI, thus alleviating the bias present in InfoNCE. Finally, we present evidence of the effectiveness of the proposed method in vision and in dialogue generation.
Problem Setting
The maximum MI predictive coding framework (McAllester, 2018; Oord et al., 2018; Hjelm et al., 2019) prescribes learning representations of input data such that they maximize MI between inputs and representations. Recent interpretations of this principle create two independently-augmented copies x and y of the same input by applying a set of stochastic transformations twice, and then learn representations of x and y by maximizing the MI of the respective features produced by an encoder f: X → R^d (Chen et al., 2020a):

$$\arg\max_f \; I(f(x); f(y)) \le I(x; y), \qquad (1)$$
where the upper bound is due to the data processing inequality. Our starting point to maximize Eq. 1 is the recently proposed InfoNCE lower bound on MI (Oord et al., 2018) which trains f (x) to be closer to f (y) than to the representations of other images drawn from the marginal distribution of the corpus. This can be viewed as a contrastive estimation of the MI (Oord et al., 2018) and has been shown to enjoy lower variance than competing approaches (Song & Ermon, 2020a).
InfoNCE Bound
InfoNCE (Oord et al., 2018) is a lower bound on I(x; y) obtained by comparing pairs sampled from the joint distribution, (x, y_1) ∼ p(x, y), to pairs (x, y_k) built using a set of negative (also called contrastive) examples, y_{2:K} ∼ p(y_{2:K}) = Π_{k=2}^K p(y_k), sampled independently from the marginal:

$$I_{\mathrm{NCE}}(x, y \mid \psi, K) = \mathbb{E}\left[\log \frac{e^{\psi(x, y_1)}}{\frac{1}{K}\sum_{k=1}^{K} e^{\psi(x, y_k)}}\right], \qquad (2)$$
where the expectation is with respect to p(x, y_1) p(y_{2:K}) and ψ is a critic assigning a real-valued score to (x, y) pairs. Usually, ψ is the dot product of the representations after applying an additional transformation g, e.g. an MLP: ψ(x, y) ≜ g(f(x))ᵀ g(f(y)) (Chen et al., 2020a). We provide an exact derivation of this bound in the Appendix. The optimal value of I_NCE is reached for a critic proportional to the log-odds between the conditional distribution p(y|x) and the marginal distribution p(y), i.e. the PMI between x and y: ψ*(x, y) = log p(y|x)/p(y) + c(x) (Oord et al., 2018; Ma & Collins, 2018; Poole et al., 2019).
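To make Eq. 2 concrete, here is a minimal PyTorch sketch (not the authors' implementation) using in-batch negatives and a cosine-similarity critic scaled by a temperature; all names and the temperature value are illustrative assumptions:

```python
import math
import torch
import torch.nn.functional as F

def info_nce(z_x, z_y, temperature=0.1):
    """Monte-Carlo estimate of I_NCE (Eq. 2) from a batch of paired features.

    z_x, z_y: [B, d] projected features g(f(x)) and g(f(y)); row i of z_x is
    paired with row i of z_y, and the remaining B - 1 rows act as the
    negatives, so K = B here.
    """
    z_x = F.normalize(z_x, dim=-1)
    z_y = F.normalize(z_y, dim=-1)
    logits = z_x @ z_y.t() / temperature  # psi(x_i, y_j) for every pair (i, j)
    labels = torch.arange(z_x.size(0), device=z_x.device)
    # -cross_entropy equals E[log e^{psi(x, y_1)} / sum_k e^{psi(x, y_k)}];
    # adding log K recovers Eq. 2, hence the estimate is capped at log K.
    return math.log(z_x.size(0)) - F.cross_entropy(logits, labels)
```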
InfoNCE has recently been used extensively in self-supervised representation learning, given that it enjoys lower variance than some of its competitors such as MINE (Belghazi et al., 2018; Song & Ermon, 2020a). However, the bound is loose if the true mutual information I(x; y) is larger than log K, which is likely when dealing with high-dimensional inputs such as natural images. To overcome this difficulty, recent methods either train with large batch sizes (Chen et al., 2020a) or exploit an external memory of negative samples in order to reduce memory requirements (Chen et al., 2020b; Tian et al., 2020). These methods rely on uniform sampling from the training set in order to form the contrastive sets. A discussion of the limits of variational bounds can be found in McAllester & Stratos (2020a).
Decomposing Mutual Information
When X is high-dimensional, the amount of mutual information between x and y will potentially be larger than the amount of MI that I_NCE can measure given the computational constraints associated with large K and the poor log scaling properties of the bound. We argue that we can ease this estimation problem by creating subviews of x and applying the chain rule on MI to decompose the total MI into a sum of potentially smaller MI terms.
By the data processing inequality, we have I(x; y) ≥ I({x_1, ..., x_N}; y), where {x_1, ..., x_N} are different subviews of x, that is, views derived from x without adding any exogenous information. For example, {x_1, ..., x_N} can represent single utterances in a dialogue x, sentences in a document x, or different augmentations of the same image x. Equality is obtained when the set of subviews retains all information about x or if x is in the set.
For ease of exposition and without loss of generality, we consider the case where we have two subviews, x itself and x'. Then, I(x; y) = I(x, x'; y), and we can write I(x, x'; y) by applying the chain rule for MI:

$$I(x, x'; y) = I(x'; y) + I(x; y \mid x'). \qquad (3)$$
The conditional MI term can be written as:
$$I(x; y \mid x') = \mathbb{E}_{p(x, x', y)}\left[\log \frac{p(y \mid x, x')}{p(y \mid x')}\right]. \qquad (4)$$

This conditional MI is different from the unconditional MI, I(x; y), as it measures the amount of information shared between x and y that cannot be explained by x'.
Lower bounding each term in Eq. 3 with a contrastive bound can potentially lead to a less biased estimator of the total MI. This motivates us to introduce DEMI, a sum of unconditional and conditional lower bounds:
$$I_{\mathrm{DEMI}} = I_{\mathrm{NCE}}(x'; y) + I_{\mathrm{CNCE}}(x; y \mid x') \le I(x; y), \qquad (5)$$

where I_CNCE is a placeholder for a lower bound on the conditional MI that will be presented in the next section. Both the conditional and unconditional bounds on the MI can capture at most log K nats each. Therefore, DEMI in Eq. 5 potentially allows capturing up to N log K nats of MI in total, where N is the number of subviews used to describe x. This is strictly larger than the log K of the standard I_NCE.
Contrastive Conditional MI Estimation
One of the difficulties in computing DEMI is estimating the conditional MI. In this section, we provide bounds and approximations of this quantity. First, we show that we can readily extend InfoNCE:
Proposition 1 (Conditional InfoNCE). I_CNCE is a lower bound on I(x; y|x') and, analogously to I_NCE, is bounded by log K:

$$I_{\mathrm{CNCE}}(x; y \mid x', \phi, K) = \mathbb{E}\left[\log \frac{e^{\phi(x', x, y_1)}}{\frac{1}{K}\sum_{k=1}^{K} e^{\phi(x', x, y_k)}}\right], \qquad (6)$$

where the expectation is taken with respect to p(x, x', y_1) p(y_{2:K}|x'). The proof can be found in Sec. A.2 and follows closely the derivation of the InfoNCE bound by applying a result from Barber & Agakov (2003). A related derivation of this bound was also presented in Foster et al. (2020) for optimal experiment design.
Eq. 6 shows that a lower bound on the conditional MI can be obtained by sampling contrastive sets from the proposal distribution p(y|x') (instead of from the marginal p(y) as in Eq. 2). Indeed, since we want to estimate the MI conditioned on x', we should allow our contrastive distribution to condition on x'. Note that φ is now a function of three variables. One of the biggest hurdles in computing Eq. 6 is the access to many samples from p(y|x'), which is unknown and usually challenging to obtain. In order to overcome this, we propose various solutions next.
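As a hedged sketch of Eq. 6 (assuming one somehow has access to per-example negatives drawn from p(y|x'), e.g. from a generative model as discussed below), the bound computes exactly like InfoNCE but with a three-argument critic; the function `phi` and all tensor layouts below are illustrative assumptions:

```python
import math
import torch
import torch.nn.functional as F

def conditional_info_nce(phi, x_sub, x, y_pos, y_neg):
    """I_CNCE (Eq. 6): InfoNCE where negatives come from p(y | x').

    phi:   critic mapping batched (x', x, y) triples to scalar scores;
           assumed to broadcast over a leading negatives dimension.
    x_sub: [B, ...] subviews x'; x: [B, ...]; y_pos: [B, ...] positives.
    y_neg: [B, K-1, ...] negatives, row i sampled i.i.d. from p(y | x'_i).
    """
    B, Km1 = y_neg.shape[0], y_neg.shape[1]
    pos = phi(x_sub, x, y_pos).view(B, 1)  # phi(x', x, y_1)
    neg = phi(x_sub.unsqueeze(1).expand(-1, Km1, *x_sub.shape[1:]),
              x.unsqueeze(1).expand(-1, Km1, *x.shape[1:]),
              y_neg).view(B, Km1)
    logits = torch.cat([pos, neg], dim=1)  # [B, K]
    labels = torch.zeros(B, dtype=torch.long, device=logits.device)
    return math.log(logits.size(1)) - F.cross_entropy(logits, labels)
```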
Variational Approximation
It is possible to obtain a bound on the conditional MI by approximating the unknown conditional distribution p(y|x') with a variational distribution q_ξ(y|x'), leading to the following proposition:
Proposition 2 (Variational I_CNCE). For any variational approximation q_ξ(y|x') in lieu of p(y|x'), with p(·|x') ≪ q_ξ(·|x') for any x', we have:

$$I_{\mathrm{VAR}}(x, y \mid x', \phi, \xi, K) = \mathbb{E}\left[\log \frac{e^{\phi(x', x, y_1)}}{\frac{1}{K}\sum_{k=1}^{K} e^{\phi(x', x, y_k)}}\right] - \mathbb{E}\left[\mathrm{KL}\left(p(y|x') \,\|\, q_\xi(y|x')\right)\right], \qquad (7)$$

1. I_VAR ≤ I(x; y|x').
2. If q_ξ(y|x') = p(y|x'), then I_VAR = I_CNCE.
3. lim_{K→∞} sup_φ I_VAR(x; y|x', φ, ξ, K) = I(x; y|x'),

where the first expectation is taken with respect to p(x, x', y_1) q_ξ(y_{2:K}|x') and the second with respect to p(x'). See Sec. A.3 for the proof. Note that this bound side-steps the problem of requiring access to an arbitrary number of negative samples from the unknown p(y|x') by i.i.d. sampling from the known and tractable q_ξ(y|x'). For example, q_ξ can be a conditional flow-based image generation model (Kingma & Dhariwal, 2018) or a transformer language model for text (Zhang et al., 2020). We prove that, as the number of examples goes to ∞, optimizing the bound w.r.t. φ converges to the true conditional MI. Interestingly, this holds true for any q_ξ, though the choice of q_ξ will most likely impact the convergence rate of the estimator.
Eq. 7 is superficially similar to the ELBO (Evidence Lower BOund) objective used to train VAEs (Kingma & Welling, 2014), where q_ξ plays the role of the approximate posterior (although the KL direction in the ELBO is inverted).

This parallel suggests that, assuming the variational family contains p, the optimal solution w.r.t. ξ may not verify p(y|x') = q_ξ(y|x') for all values of K and φ, i.e. there could be solutions for which some of the KL divergence is traded for additional nats on the contrastive cost. However, we see trivially that, if we ignore the dependency of the first expectation term on q_ξ (i.e. we "detach" the gradient of the expectation w.r.t. ξ) and only optimize ξ to minimize the KL term, then it is guaranteed that p(y|x') = q_ξ(y|x'), for any K and φ. Thus, by the second property in Proposition 2, optimizing I_VAR(φ, ξ*, K) w.r.t. φ will correspond to optimizing I_CNCE.

In practice, the latter observation significantly simplifies the estimation problem, as one can minimize a Monte-Carlo approximation of the KL divergence w.r.t. ξ by standard supervised learning: we can efficiently approximate the KL by taking samples from p(y|x'). Those can be directly obtained by using the joint samples from p(x, y) included in the training set and computing x' from x. However, maximizing I_VAR can still be challenging as it requires estimating a distribution over potentially high-dimensional inputs and efficiently sampling a large number of negative examples from it. In the next section, we provide an importance sampling approximation of I_CNCE that bypasses this issue.
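To make the KL-fitting step of the previous paragraph concrete, here is a minimal sketch of one maximum-likelihood update; `q_model` is an assumed conditional density model exposing a `log_prob(y, cond=...)` method (an illustrative interface, not a specific library API):

```python
def fit_variational_step(q_model, optimizer, x_sub, y):
    """One gradient step of min_xi KL(p(y|x') || q_xi(y|x')).

    Up to a constant (the entropy of p(y|x')), the Monte-Carlo KL equals
    the negative log-likelihood of corpus samples y paired with x', so
    standard supervised learning on (x', y) pairs suffices.
    """
    nll = -q_model.log_prob(y, cond=x_sub).mean()
    optimizer.zero_grad()
    nll.backward()
    optimizer.step()
    return nll.item()
```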
Importance Sampling Approximation
The optimal critic for I_NCE is ψ*(x', y) = log p(y|x')/p(y) + c(x'), for any c. Assuming access to ψ*(x', y), it is possible to use importance sampling to produce approximate expectations from p(y|x'). This is achieved by first sampling ỹ_{1:M} ∼ p(y) and then resampling K ≤ M (K > 0) examples i.i.d. from the normalized importance distribution w_k = e^{ψ*(x', ỹ_k)} / Σ_{m=1}^M e^{ψ*(x', ỹ_m)}. This process is also called "sampling importance resampling" (SIR), and we can write the corresponding distribution as p_SIR(y_k) = w_k δ(y_k ∈ ỹ_{1:M}) p(ỹ_{1:M}). As M/K → ∞, it is guaranteed to produce samples from p(y|x') (Rubin, 1987).
The objective corresponding to this process is:
$$I_{\mathrm{SIR}}(x, y \mid x', \phi, K) = \mathbb{E}_{p(x', x, y_1)\, p_{\mathrm{SIR}}(y_{2:K})}\left[\log \frac{e^{\phi(x', x, y_1)}}{\frac{1}{K}\sum_{k=1}^{K} e^{\phi(x', x, y_k)}}\right]. \qquad (8)$$

Note the dependence of p_SIR on w_k and hence on ψ*. SIR is known to increase the variance of the estimator (Skare et al., 2003) and is wasteful given that only a smaller set of K < M examples is actually used for MI estimation.
To provide a cheap approximation of the SIR estimator, we split the denominator of Eq. 8 into a positive term involving y_1 and a sum of contributions coming from the negative examples y_{2:K}, and we rewrite the latter as an average (K−1) Σ_{k=2}^K (1/(K−1)) e^{φ(x', x, y_k)}. Now, we can use the normalized importance weights w_k to estimate that term under the resampling distribution. Formally, we have the following approximation:
Proposition 3 (Importance Sampled I_CNCE). Assuming ψ* = arg sup_ψ I_NCE(x', y) and w_k = e^{ψ*(x', y_k)} / Σ_{m=2}^K e^{ψ*(x', y_m)}, we have the following two properties, where:

$$I_{\mathrm{IS}}(x, y \mid x', \phi, K) = \mathbb{E}\left[\log \frac{e^{\phi(x', x, y_1)}}{\frac{1}{K}\left(e^{\phi(x', x, y_1)} + (K-1)\sum_{k=2}^{K} w_k\, e^{\phi(x', x, y_k)}\right)}\right], \qquad (9)$$

1. lim_{K→∞} sup_φ I_IS(x; y|x', φ, K) = I(x; y|x'),
2. lim_{K→∞} arg sup_φ I_IS = log p(y|x', x)/p(y|x') + c(x, x'),

where the expectation is with respect to p(x', x, y_1) p(y_{2:K}).
The proof can be found in Sec. A.4. I_IS skips the resampling step by up-weighting the negative contribution to the normalization term of examples that have large probability under the resampling distribution, i.e. that have large w_k. As detailed in the appendix, this approximation is cheap to compute, given that the negative samples are drawn from the marginal distribution p(y) and the resampling step is avoided. We hypothesize that I_IS has less variance than I_SIR as it does not require the additional resampling step. The proposition shows that, as the number of negative examples goes to infinity, the proposed approximation converges to the true value of the conditional MI; moreover, in the limit K → ∞, optimizing I_IS w.r.t. φ converges to the conditional MI and the optimal φ converges to the optimal I_CNCE solution.
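A sketch of Eq. 9 operating directly on precomputed critic scores follows; exponentials are kept explicit for clarity (a log-sum-exp formulation would be preferable numerically), and all tensor layouts are assumptions:

```python
import torch
import torch.nn.functional as F

def i_is(phi_scores, psi_star_neg):
    """Importance-sampled estimate I_IS (Eq. 9).

    phi_scores:   [B, K] scores phi(x', x, y_k); column 0 holds the positive
                  y_1, columns 1..K-1 hold negatives drawn from p(y).
    psi_star_neg: [B, K-1] frozen scores psi*(x', y_k) of the negatives,
                  used to build the normalized weights w_k.
    """
    K = phi_scores.size(1)
    w = F.softmax(psi_star_neg, dim=1)                      # w_k, sums to 1 per row
    pos = phi_scores[:, 0].exp()
    neg = (K - 1) * (w * phi_scores[:, 1:].exp()).sum(dim=1)  # reweighted negatives
    return (phi_scores[:, 0] - torch.log((pos + neg) / K)).mean()
```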
Boosted Critic Approximation
Proposition 3 shows that the optimal critic φ* estimates the desired log-ratio only in the limit of K → ∞. Hereafter, we generalize the results presented in Ma & Collins (2018) and show that we can accurately estimate the conditional log-ratio with the following proposition.
Proposition 4 (Boosted Critic Estimation). Assuming ψ* = arg sup_ψ I_NCE(x', y), the following holds, with:

$$I_{\mathrm{BO}}(x, y \mid x', \phi, K) = \mathbb{E}\left[\log \frac{e^{\psi^*(x', y_1) + \phi(x', x, y_1)}}{\frac{1}{K}\sum_{k=1}^{K} e^{\psi^*(x', y_k) + \phi(x', x, y_k)}}\right], \qquad (10)$$

1. I_BO ≤ I(x, x'; y),
2. φ* = arg sup_φ I_BO = log p(y|x', x)/p(y|x') + c(x, x'),

where the expectation is with respect to p(x, x', y_1) p(y_{2:K}). The proof is straightforward and can be found in Sec. A.5.
We refer to Eq. 10 as boosted critic estimation due to the fact that optimizing φ captures residual information not expressed in ψ*. Perhaps surprisingly, I_BO provides an almost embarrassingly simple way of estimating the desired log-ratio for any K. It corresponds to estimating an InfoNCE-like bound, where negative samples come from the easily-sampled marginal p(y) and the critic is shifted by the optimal critic for I_NCE(x', y). However, this comes at the cost of not having a valid approximation of the conditional MI: indeed, by property 1, I_BO is a lower bound on the total MI, not on the conditional MI. As we show in the next section, we can still obtain an estimate of the conditional MI by using I_BO to estimate the conditional critic in an accurate manner and I_IS to evaluate the conditional MI.
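Since the negatives in Eq. 10 come from the marginal, I_BO reduces to an InfoNCE-style cross-entropy on logits shifted by the frozen unconditional critic; a minimal sketch (tensor layouts assumed as before):

```python
import math
import torch
import torch.nn.functional as F

def i_bo(psi_star_scores, phi_scores):
    """Boosted-critic bound I_BO (Eq. 10).

    psi_star_scores: [B, K] frozen scores psi*(x', y_k) (no gradient flows
                     through them, hence the detach below).
    phi_scores:      [B, K] trainable scores phi(x', x, y_k); column 0 is
                     the positive pair, the rest are marginal negatives.
    """
    logits = psi_star_scores.detach() + phi_scores  # log of e^{psi* + phi}
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return math.log(logits.size(1)) - F.cross_entropy(logits, labels)
```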
Experiments
The goal of our experiments is two-fold: (1) to test whether DEMI leads to a better estimator of the total MI, and whether our proposed conditional MI approximations are accurate;
(2) to test whether DEMI helps in estimating better representations for natural data. We verify (1) in a synthetic experiment where we control the total amount of MI between Gaussian covariates. Then, we verify (2) in a self-supervised image representation learning domain and explore an additional application to natural language generation in a sequential setting: conversational dialogue.
Synthetic Data
We extend Poole et al. (2019)'s two-variable setup to three variables. We posit that {x, x', y} are three Gaussian covariates, x, x', y ∼ N(0, Σ), and we choose Σ such that we can control the total mutual information I(x, x'; y), with I ∈ {5, 10, 15, 20} (see Appendix for pseudo-code and details of the setup). We aim to estimate the total MI I(x, x'; y) and compare the performance of our approximators in doing so. We limit this investigation to contrastive estimators, although other estimators and non-lower-bounds exist (e.g. DoE; McAllester & Stratos, 2020b). For more details see App. A.6.
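The exact three-variable covariance is given in the paper's appendix; the two-variable sketch below only illustrates how a target MI pins down the correlation, using the closed form I(x; y) = -(d/2) log(1 - ρ²) for per-dimension correlation ρ (names and structure are our assumptions, not the authors' pseudo-code):

```python
import numpy as np

def rho_for_target_mi(target_mi, dim):
    # invert I = -(dim / 2) * log(1 - rho^2), target_mi in nats
    return float(np.sqrt(1.0 - np.exp(-2.0 * target_mi / dim)))

def sample_correlated_gaussians(target_mi, dim, n):
    """Draw n pairs (x, y) of dim-dimensional Gaussians with a prescribed
    ground-truth mutual information."""
    rho = rho_for_target_mi(target_mi, dim)
    x = np.random.randn(n, dim)
    y = rho * x + np.sqrt(1.0 - rho**2) * np.random.randn(n, dim)
    return x, y
```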
Figure 2: (top) DEMI maximizes I_DEMI with K/2 examples for the unconditional and conditional bounds (K in total) and assumes access to the ground-truth p(y|x'). DEMI-IS learns the conditional critic using I_IS, DEMI-BO using I_BO, and DEMI-VAR using I_VAR. We plot the total MI estimated by I_DEMI when learning the conditional critics using our approximations.

In Figure 2 (top), we see that DEMI captures more of the ground-truth total MI than InfoNCE. However, maximizing I_DEMI assumes access to negative samples from p(y|x'), which is an unrealistic assumption in practice. To verify the effectiveness of our approximations, we train the conditional critics using I_BO (DEMI-BO), I_IS (DEMI-IS) and I_VAR (DEMI-VAR), and we evaluate the total MI using I_DEMI (we assume access to p(y|x') only at evaluation time). This allows us to verify whether it is possible to reliably estimate the conditional critic in the absence of negative samples from p(y|x'). It is interesting to note that the critic learnt by I_IS suffers from high variance and does not lead to a good estimate of the total MI when evaluated with I_CNCE. DEMI-VAR still outperforms InfoNCE for higher values of total MI, but seems to suffer in the case of small MIs. For this experiment, we update q_ξ at the same rate as φ. Improvements could be obtained by updating q_ξ more frequently, similarly to the asynchronous updates successfully used in the GAN literature (Mescheder et al., 2018). I_BO accurately estimates the critic.

In Figure 2 (bottom), we show that it is possible to obtain an estimate of the total MI without access to p(y|x') at either training or evaluation time. We first learn the conditional critic using I_BO and then compute I_NCE + I_IS using the estimated critic. Figure 2 (bottom) reports the results. For this experiment, we share the same set of K negative examples for both the conditional and unconditional MI, and therefore we report the upper bound 2 log K.
Vision
ImageNet
Setup We study self-supervised learning of image representations using 224×224 images from ImageNet (Deng et al., 2009). The evaluation is performed by fitting a linear classifier to the task labels using the pre-trained representations only, that is, we fix the weights of the pre-trained image encoder f . We build upon InfoMin (Tian et al., 2020). All hyperparameters for training and evaluation are the same as in Tian et al. (2020). All models use a momentum-contrastive memory buffer of K = 65536 examples (Chen et al., 2020b). All models use a Resnet50 backbone and are trained for 200 epochs. We report transfer learning performance by freezing the encoder on STL-10, CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2009), Stanford Cars (Krause et al., 2013), Caltech-UCSD Birds (CUB) (Welinder et al., 2010) and Oxford 102 Flowers (Nilsback & Zisserman, 2008).
Views Each input image is independently augmented into two views x and y using a stochastically applied transformation following Tian et al. (2020). This uses random resized crop, color jittering, gaussian blur, rand augment, color dropping, and jigsaw as augmentations. We experiment with two ways of creating the subview x' of x: cut, which applies cutout to x, and crop, which is inspired by Caron et al. (2020) and consists in cropping the image aggressively and resizing the resulting crops to 96×96. To do so, we use RandomResizedCrop from the torchvision.transforms module with s = (0.05, 0.14).

[Table 1 fragment: (Chen et al., 2020b), x ↔ y, 67.5; InfoMin (Tian et al., 2020), x ↔ y, 74.…]

Models Our baseline, InfoMin, maximizes I_NCE(x, y).
I NCE (x ; y) + I BO (x; y|x ) + I NCE (x; y) + I BO (x ; y|x)
. This corresponds to maximizing both decompositions of the joint I(x, x ; y). Differently from MI estimation, we found to be important for representation learning to maximize both decompositions, which include I NCE (x; y) in the objective. The computation of the conditional MI terms can be efficiently done by reusing the logits of the two unconditional MI (Listing 1).
Results Table 1 reports the average accuracy of linear evaluations obtained with 3 pretraining seeds. DEMI obtains a 3.7% improvement (78.6±0.2) compared to the baseline InfoMin on ImageNet100 (IN100) and 0.7% (70.8±0.1) on full ImageNet (IN1K). Although not reported, the crop strategy performs better than the cut strategy (which obtains 70.5±0.1 on average on IN1K). One hypothesis is that cutout introduces image patches that do not follow the pixel statistics of the corpus. InfoMin (multi) ablates conditional MI maximization and shows that introducing the additional view is helpful in a low-data setting such as IN100, but only slightly improves performance on IN1K. It is interesting to note that DEMI improves transfer learning performance the most on the fine-grained classification benchmarks CARS and CUB, where it is particularly important to capture detailed information about the input image (Yang et al., 2018). This serves as an indication that the representations learnt by DEMI can extract more information about each input.
CIFAR-10
We also experiment on CIFAR-10, building upon SimCLR (Chen et al., 2020b), which uses a standard ResNet-50 architecture; we replace the first 7×7 Conv of stride 2 with a 3×3 Conv of stride 1 and also remove the max pooling operation. In order to generate the views, we use Inception crop (flip and resize to 32×32) and color distortion. We train with learning rate 0.5, batch size 800, a momentum coefficient of 0.9 and a cosine annealing schedule. Our energy function is the cosine similarity between representations scaled by a temperature of 0.5 (Chen et al., 2020b). We obtain a top-1 accuracy of 94.7% using a linear classifier, compared to 94.0% reported in Chen et al. (2020b) and 95.1% for a supervised baseline with the same architecture.
Dialogue
Setup. We experiment with the language modeling task on the Wizard of Wikipedia (WoW) dataset (Dinan et al., 2019). We evaluate our models using automated metrics and human evaluation. For automated metrics, we report perplexity (ppl) and BLEU (Papineni et al., 2002); a comprehensive set of metrics is reported in the Appendix (Sec. B). We build upon GPT2 (Radford et al., 2019), and fine-tune it by language modeling (LM) on the dialogue corpus. In addition to the LM loss, we maximize MI between representations of the past and future utterances in each dialogue, i.e. the predictive coding framework (Elias, 1955; McAllester & Stratos, 2020a). We consider past and future in a dialogue as views of the same conversation. Given L utterances (x_1, ..., x_L), we set y = (x_{k+1}, ..., x_L), x = (x_1, ..., x_k) and x' = x_k, where (·) denotes concatenation and k is randomly chosen with 2 < k < L. The goal is therefore to imbue representations with information about the future that cannot be solely explained by the most recent utterance x'. The representations of past and future are the states corresponding to the last token in the last layer of GPT2.
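A minimal sketch of this view construction (variable names assumed):

```python
# Split a dialogue into the past x, the most recent utterance x', and the
# future y, with k drawn uniformly such that 2 < k < L.
import random

def make_views(utterances):
    # utterances: list of strings (x_1, ..., x_L); L >= 4 so that
    # 2 < k < L admits at least one choice of k.
    L = len(utterances)
    k = random.randrange(3, L)              # k in {3, ..., L-1}
    x = " ".join(utterances[:k])            # past: (x_1, ..., x_k)
    xp = utterances[k - 1]                  # most recent utterance x' = x_k
    y = " ".join(utterances[k:])            # future: (x_{k+1}, ..., x_L)
    return x, xp, y
```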
Models. We evaluate our introduced models against different baselines. GPT2 is a small pre-trained model fine-tuned on the dialogue corpus. TransferTransfo (Wolf et al., 2019) augments the standard next-word prediction loss in GPT2 with a next-sentence prediction loss similar to Devlin et al. (2019). GPT2-MMI follows MMI-bidi (Li et al., 2016); we generate 50 responses from GPT2 and then rank them based on a trained backward model p_GPT2(x|y). For the InfoNCE baseline, we only maximize the unconditional MI between x and y and sample negative futures from the marginal distribution p(y). DEMI maximizes conditional MI by resorting to I_VAR and using GPT2 itself as the variational approximation. GPT2 is a generative model, so we can simply sample a set of negative futures from p_GPT2(y|x'), that is, by restricting the amount of contextual information GPT2 is allowed to consider. To speed up training, the negative sampling of future candidates is done offline. We also tried I_BO in this setting and obtained similar results.
Results. Table 2 shows results on the validation set obtained over 3 pretraining seeds. For the test set results and sample dialogue exchanges, please refer to the Appendix. The automated metrics indicate that DEMI representations result in higher-quality responses. We also perform human evaluation on 1000 randomly sampled WoW dialogue contexts. We present the annotators with pairs of candidate responses consisting of InfoNCE, DEMI and baseline responses. They were asked to compare the pairs regarding interestingness, relevance and humanness, using a 3-point Likert scale (Zhang et al., 2020). In Table 2, we see that overall responses generated by DEMI were strongly preferred over other models but not over the gold response. Bootstrap confidence intervals and p-values (t-test, following Zhang et al., 2020) indicate significant improvements at α = 0.01.
Related Works
Representation learning based on MI maximization has been applied in various domains such as images (Grill et al., 2020; Caron et al., 2020), words (Mikolov et al., 2013; Stratos, 2019), graphs (Velickovic et al., 2019), RL (Mazoure et al., 2020) and videos (Jabri et al., 2020), exploiting noise-contrastive estimation (NCE) (Gutmann & Hyvärinen, 2012), InfoNCE (Oord et al., 2018) and variational objectives (MINE) (Belghazi et al., 2018). InfoNCE has gained recent interest over variational approaches due to its lower variance (Song & Ermon, 2020a) and superior performance in downstream tasks. InfoNCE, however, can underestimate large amounts of true MI given that it is capped at log K. Poole et al. (2019) propose to trade off between variance and bias by interpolating variational and contrastive bounds. Song & Ermon (2020b) propose a modification to InfoNCE for reducing bias, where the critic needs to jointly identify multiple positive samples at the same time. Our proposal to scaffold the total MI estimation into a sequence of smaller estimation problems shares similarities with the recent telescopic estimation of density ratios (Rhodes et al., 2020), which is based on variational approximations. Instead, we build upon InfoNCE, propose new results on contrastive conditional MI estimation and apply it to self-supervised representation learning. Other MINE-based approaches to conditional MI estimation can be found in the recent work of Mondal et al. (2020). Our contrastive bound in Eq. 5 is reminiscent of conditional noise-contrastive estimation (Ceylan & Gutmann, 2018), which generalizes NCE for data-conditional noise distributions (Gutmann & Hyvärinen, 2012): our result is an interpretation in terms of conditional MI.
Conclusion
We decompose the original cross-view MI into a sum of conditional and unconditional MI terms (DEMI). We provide several contrastive approximations to the conditional MI and verify their effectiveness in various domains. Incorporating more than two terms in the decomposition is straightforward and could be investigated in the future. Recent work questioned whether MI maximization itself is at the core of the recent success in representation learning (Rainforth et al., 2018; Tschannen et al., 2020). These works showed that capturing a larger amount of mutual information between views may not correlate with better downstream performance. Other desirable properties of the representation space may play an important role (Wang & Isola, 2020). Although we acknowledge these results, we posit that devising more effective ways to maximize MI will still prove useful in representation learning, especially if paired with architectural inductive biases or explicit regularization methods.
A. Derivations
A.1. Derivation of InfoNCE, I_NCE
We start from Barber and Agakov's variational lower bound on MI (Barber & Agakov, 2003). I(x; y) can be bounded as follows:
$$I(x; y) = \mathbb{E}_{p(x,y)}\left[\log \frac{p(y|x)}{p(y)}\right] \ge \mathbb{E}_{p(x,y)}\left[\log \frac{q(y|x)}{p(y)}\right], \qquad (11)$$
where q is an arbitrary distribution. We show that the InfoNCE bound (Oord et al., 2018) corresponds to a particular choice of the variational distribution q followed by an application of Jensen's inequality. Specifically, q(y|x) is defined by independently sampling a set of examples {y_1, ..., y_K} from a proposal distribution π(y) and then choosing y from {y_1, ..., y_K} in proportion to the importance weights $w_y = \frac{e^{\psi(x,y)}}{\sum_k e^{\psi(x,y_k)}}$, where ψ is a function that takes x and y and outputs a scalar. In the context of representation learning, ψ is usually a dot product between some representations of x and y, e.g. $f(x)^\top f(y)$ (Oord et al., 2018). The unnormalized density of y given a specific set of samples $y_{2:K} = \{y_2, \ldots, y_K\}$ and x is:
$$q(y|x, y_{2:K}) = \pi(y) \cdot K \cdot \frac{e^{\psi(x,y)}}{e^{\psi(x,y)} + \sum_{k=2}^{K} e^{\psi(x,y_k)}}, \qquad (12)$$
where we introduce a factor K which provides "normalization in expectation". By normalization in expectation, we mean that taking the expectation of $q(y|x, y_{2:K})$ with respect to resampling of the alternatives $y_{2:K}$ from π(y) produces a normalized density (see Sec. A.1.1 for a derivation):
$$q(y|x) = \mathbb{E}_{\pi(y_{2:K})}\left[q(y|x, y_{2:K})\right], \qquad (13)$$
where $\pi(y_{2:K}) = \prod_{k=2}^{K} \pi(y_k)$. The InfoNCE bound (Oord et al., 2018) is then obtained by setting the proposal distribution to the marginal distribution, π(y) ≡ p(y), and applying Jensen's inequality, giving:
$$\begin{aligned}
I(x; y) &\ge \mathbb{E}_{p(x,y)}\left[\log \frac{\mathbb{E}_{p(y_{2:K})}\left[q(y|x, y_{2:K})\right]}{p(y)}\right] \\
&\ge \mathbb{E}_{p(x,y)}\,\mathbb{E}_{p(y_{2:K})}\left[\log \frac{p(y)\cdot K\cdot w_y}{p(y)}\right] \\
&= \mathbb{E}_{p(x,y)}\,\mathbb{E}_{p(y_{2:K})}\left[\log \frac{K\cdot e^{\psi(x,y)}}{e^{\psi(x,y)} + \sum_{k=2}^{K} e^{\psi(x,y_k)}}\right] \\
&= \mathbb{E}_{p(x,y_1)\,p(y_{2:K})}\left[\log \frac{e^{\psi(x,y)}}{\frac{1}{K}\sum_{k=1}^{K} e^{\psi(x,y_k)}}\right] \\
&= I_{\text{NCE}}(x; y|\psi, K) \;\le\; \log K, \qquad (14)
\end{aligned}$$
where the second inequality was obtained using Jensen's inequality.
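As a quick numerical illustration of the log K cap (ours, not from the paper), the InfoNCE estimate computed from any logit matrix indeed never exceeds log K:

```python
# Toy check: I_NCE computed from a (B, K) matrix of critic scores is
# bounded above by log K, since e^{s_0} <= sum_k e^{s_k}.
import math
import torch

torch.manual_seed(0)
B, K = 128, 64
logits = torch.randn(B, K)          # psi(x_i, y_k) for K candidates per query
# positive candidate is column 0 for every row in this toy example
log_ratios = logits[:, 0] - torch.logsumexp(logits, dim=1) + math.log(K)
i_nce = log_ratios.mean().item()
assert i_nce <= math.log(K) + 1e-6
print(f"I_NCE estimate = {i_nce:.3f} <= log K = {math.log(K):.3f}")
```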
A.1.1. Derivation of Normalized Distribution
We follow Cremer et al. (2017) to show that $q(y|x) = \mathbb{E}_{y_{2:K}\sim\pi(y)}\left[q(y|x, y_{2:K})\right]$ is a normalized distribution:
$$\begin{aligned}
\int q(y|x)\,dy &= \int \mathbb{E}_{y_{2:K}\sim\pi(y)}\left[\pi(y)\,\frac{e^{\psi(x,y)}}{\frac{1}{K}\left(\sum_{k=2}^{K} e^{\psi(x,y_k)} + e^{\psi(x,y)}\right)}\right] dy \\
&= \int \pi(y)\,\mathbb{E}_{y_{2:K}\sim\pi(y)}\left[\frac{e^{\psi(x,y)}}{\frac{1}{K}\left(\sum_{k=2}^{K} e^{\psi(x,y_k)} + e^{\psi(x,y)}\right)}\right] dy \\
&= \mathbb{E}_{\pi(y)}\,\mathbb{E}_{\pi(y_{2:K})}\left[\frac{e^{\psi(x,y)}}{\frac{1}{K}\left(\sum_{k=2}^{K} e^{\psi(x,y_k)} + e^{\psi(x,y)}\right)}\right] = \mathbb{E}_{\pi(y_{1:K})}\left[\frac{e^{\psi(x,y_1)}}{\frac{1}{K}\sum_{k=1}^{K} e^{\psi(x,y_k)}}\right] \\
&= K\cdot \mathbb{E}_{\pi(y_{1:K})}\left[\frac{e^{\psi(x,y_1)}}{\sum_{k=1}^{K} e^{\psi(x,y_k)}}\right] = \sum_{i=1}^{K} \mathbb{E}_{\pi(y_{1:K})}\left[\frac{e^{\psi(x,y_i)}}{\sum_{k=1}^{K} e^{\psi(x,y_k)}}\right] \\
&= \mathbb{E}_{\pi(y_{1:K})}\left[\frac{\sum_{i=1}^{K} e^{\psi(x,y_i)}}{\sum_{k=1}^{K} e^{\psi(x,y_k)}}\right] = 1 \qquad (15)
\end{aligned}$$
A.2. Proof of Proposition 1
Proposition 1 (Conditional InfoNCE). I_CNCE is a lower bound on I(x; y|x') and verifies the properties below:
$$I_{\text{CNCE}}(x; y|x', \phi, K) = \mathbb{E}\left[\log \frac{e^{\phi(x',x,y_1)}}{\frac{1}{K}\sum_{k=1}^{K} e^{\phi(x',x,y_k)}}\right], \qquad (6)$$
1. $I_{\text{CNCE}} \le I(x; y|x')$.
2. $\phi^* = \arg\sup_\phi I_{\text{CNCE}} = \log \frac{p(y|x',x)}{p(y|x')} + c(x, x')$.
3. $\lim_{K\to\infty} I_{\text{CNCE}}(x; y|x', \phi^*, K) = I(x; y|x')$.
Proof. We begin with 1.; the derivation is as follows:
$$\begin{aligned}
I(x; y|x') &= \mathbb{E}_{p(x',x,y)}\left[\log \frac{p(y|x',x)}{p(y|x')}\right] \ge \mathbb{E}_{p(x',x,y)}\left[\log \frac{q(y|x',x)}{p(y|x')}\right] & (16)\\
&= \mathbb{E}_{p(x',x,y)}\left[\log \frac{\mathbb{E}_{p(y_{2:K}|x')}\left[q(y|x',x,y_{2:K})\right]}{p(y|x')}\right] & (17)\\
&\ge \mathbb{E}_{p(x',x,y)}\,\mathbb{E}_{p(y_{2:K}|x')}\left[\log \frac{p(y|x')\cdot K\cdot w_y}{p(y|x')}\right] & (18)\\
&= \mathbb{E}_{p(x',x,y)}\,\mathbb{E}_{p(y_{2:K}|x')}\left[\log \frac{K\cdot e^{\phi(x',x,y)}}{\sum_{k=1}^{K} e^{\phi(x',x,y_k)}}\right] & (19)\\
&= \mathbb{E}_{p(x',x,y)}\,\mathbb{E}_{p(y_{2:K}|x')}\left[\log \frac{e^{\phi(x',x,y)}}{\frac{1}{K}\sum_{k=1}^{K} e^{\phi(x',x,y_k)}}\right] & (20)\\
&= I_{\text{CNCE}}(x; y|x', \phi, K), & (21)
\end{aligned}$$
where in Eq. 18 we used Jensen's inequality and p(y|x') as our proposal distribution for the variational approximation q(y|x', x).
For 2., we rewrite I_CNCE by grouping the expectation w.r.t. x':
$$\mathbb{E}_{p(x')}\,\mathbb{E}_{p(x,y_1|x')\,p(y_{2:K}|x')}\left[\log \frac{e^{\phi(x',x,y_1)}}{\frac{1}{K}\sum_{k=1}^{K} e^{\phi(x',x,y_k)}}\right]. \qquad (22)$$
Given that both distributions in the inner-most expectation condition on the same x', this term has the same form as I_NCE, and therefore the optimal solution is $\phi^*_{x'} = \log \frac{p(y|x,x')}{p(y|x')} + c_{x'}(x)$ for each x' (Ma & Collins, 2018). The optimal φ for I_CNCE is thus obtained by choosing $\phi(x', x, y) = \phi^*_{x'}$ for each x', giving $\phi^* = \log \frac{p(y|x,x')}{p(y|x')} + c(x, x')$. For proving 3., we substitute the optimal critic and take the limit K → ∞. We have:
$$\lim_{K\to\infty} \mathbb{E}_{p(x',x,y_1)\,p(y_{2:K}|x')}\left[\log \frac{\frac{p(y|x',x)}{p(y|x')}}{\frac{1}{K}\left(\frac{p(y_1|x',x)}{p(y_1|x')} + \sum_{k=2}^{K} \frac{p(y_k|x',x)}{p(y_k|x')}\right)}\right], \qquad (23)$$
From the Strong Law of Large Numbers, we know that $\frac{1}{K-1}\sum_{k=2}^{K} \frac{p(y_k|x',x)}{p(y_k|x')} \to \mathbb{E}_{p(y|x')}\left[\frac{p(y|x',x)}{p(y|x')}\right] = 1$ as K → ∞ a.s., therefore (relabeling y = y_1):
$$\begin{aligned}
I_{\text{CNCE}} &\underset{K\to\infty}{\sim} \mathbb{E}_{p(x',x,y)}\left[\log \frac{\frac{p(y|x',x)}{p(y|x')}}{\frac{1}{K}\left(\frac{p(y|x',x)}{p(y|x')} + K - 1\right)}\right] & (24)\\
&\underset{K\to\infty}{\sim} \mathbb{E}_{p(x',x,y)}\left[\log \frac{p(y|x',x)}{p(y|x')} + \log \frac{K}{\frac{p(y|x',x)}{p(y|x')} + K - 1}\right] & (25)\\
&\underset{K\to\infty}{\sim} I(x; y|x'), & (26)
\end{aligned}$$
where the last equality is obtained by noting that the second term → 0.
A.3. Proof for Proposition 2
Proposition 2 (Variational I_CNCE). For any variational approximation $q_\xi(y|x')$ in lieu of p(y|x'), with $p(\cdot|x') \ll q_\xi(\cdot|x')$ for any x', we have:
$$I_{\text{VAR}}(x, y|x', \phi, \xi, K) = \mathbb{E}\left[\log \frac{e^{\phi(x',x,y_1)}}{\frac{1}{K}\sum_{k=1}^{K} e^{\phi(x',x,y_k)}}\right] - \mathbb{E}\left[\mathrm{KL}\left(p(y|x')\,\|\,q_\xi\right)\right], \qquad (7)$$
1. $I_{\text{VAR}} \le I(x; y|x')$.
2. If $q_\xi(y|x') = p(y|x')$, then $I_{\text{VAR}} = I_{\text{CNCE}}$.
3. $\lim_{K\to\infty} \sup_\phi I_{\text{VAR}}(x; y|x', \phi, \xi, K) = I(x; y|x')$.
Proof. For 1., we proceed as follows:
$$\begin{aligned}
I(x; y|x') &\ge \mathbb{E}_{p(x,y)}\left[\log \frac{q(y|x',x)\,q_\xi(y|x')}{p(y|x')\,q_\xi(y|x')}\right] \\
&= \mathbb{E}_{p(x,y)}\left[\log \frac{q(y|x',x)}{q_\xi(y|x')}\right] - \mathbb{E}_{p(x)}\left[\mathrm{KL}\left(p(y|x')\,\|\,q_\xi(y|x')\right)\right] \\
&\ge \mathbb{E}_{p(x,y_1)\,q_\xi(y_{2:K}|x')}\left[\log \frac{e^{\phi(x',x,y_1)}}{\frac{1}{K}\sum_{k=1}^{K} e^{\phi(x',x,y_k)}}\right] - \mathbb{E}_{p(x)}\left[\mathrm{KL}\left(p(y|x')\,\|\,q_\xi(y|x')\right)\right] \\
&= I_{\text{VAR}}(x, y|x', \phi, \xi, K), \qquad (27)
\end{aligned}$$
where the last step has been obtained as in Eq. 18.
Proving 2. is straightforward by noting that if $q_\xi = p$, then $\mathrm{KL}(p(y|x')\,\|\,q_\xi(y|x')) = 0$ and the first term corresponds to I_CNCE.
Proving 3. goes as follows:
$$\begin{aligned}
&\sup_\phi\; \mathbb{E}_{p(x,x',y_1)\,q_\xi(y_{2:K}|x')}\left[\log \frac{e^{\phi(x',x,y_1)}}{\frac{1}{K}\sum_{k=1}^{K} e^{\phi(x',x,y_k)}}\right] - \mathbb{E}_{p(x')}\left[\mathrm{KL}\left(p(y|x')\,\|\,q_\xi(y|x')\right)\right] & (28)\\
&= \mathbb{E}_{p(x',x,y_1)\,q_\xi(y_{2:K}|x')}\left[\log \frac{p(y_1|x',x)}{q_\xi(y_1|x')} - \log \frac{p(y_1|x')}{q_\xi(y_1|x')} - \log \frac{1}{K}\sum_{k=1}^{K} \frac{p(y_k|x,x')}{q_\xi(y_k|x')}\right] & (29)\\
&= I(x; y|x') - \mathbb{E}_{p(x',x,y_1)\,q_\xi(y_{2:K}|x')}\left[\log \frac{1}{K}\sum_{k=1}^{K} \frac{p(y_k|x,x')}{q_\xi(y_k|x')}\right] & (30)\\
&\xrightarrow[K\to\infty]{} I(x; y|x'). & (31)
\end{aligned}$$
This is obtained by noting that (1) for any K and $q_\xi$, $\arg\sup_\phi I_{\text{VAR}} = \log \frac{p(y|x',x)}{q_\xi(y|x')} + c(x, x')$ (because the KL does not depend on φ), and (2) the second term in the last line goes to 0 for K → ∞ (a straightforward application of the Strong Law of Large Numbers shows that for samples $y_{2:K}$ drawn from $q_\xi(y_{2:K}|x')$, we have $\frac{1}{K}\sum_{k=2}^{K} \frac{p(y_k|x,x')}{q_\xi(y_k|x')} \xrightarrow[K\to\infty]{} 1$).
A.4. Proofs for I_IS
We will be using the following lemma.

Lemma 1. For any x', x and y, and any sequence $\phi_K$ such that $\|\phi_K - \phi\|_\infty \xrightarrow[K\to\infty]{} 0$:
$$\lim_{K\to\infty} \mathbb{E}_{p(y_{2:K})}\left[\log \frac{K e^{\phi_K(x',x,y)}}{e^{\phi_K(x',x,y)} + (K-1)\sum_{k=2}^{K} w_k\, e^{\phi_K(x',x,y_k)}}\right] \qquad (32)$$
$$= \lim_{K\to\infty} \mathbb{E}_{p(y_{2:K}|x')}\left[\log \frac{K e^{\phi(x',x,y)}}{e^{\phi(x',x,y)} + \sum_{k=2}^{K} e^{\phi(x',x,y_k)}}\right], \qquad (33)$$
where $w_k = \frac{\exp \psi^*(x',y_k)}{\sum_{k'=2}^{K} \exp \psi^*(x',y_{k'})}$, for $\psi^*(x', y_k) = \arg\sup_\psi I_{\text{NCE}}(x', y|\psi, K) = \log \frac{p(y_k|x')}{p(y_k)}$.
Proof. We see that almost surely, for $y_{2:K} \sim p(\cdot)$:
$$\sum_{k=2}^{K} w_k\, e^{\phi_K(x',x,y_k)} = \frac{\frac{1}{K-1}\sum_{k=2}^{K} \frac{p(y_k|x')}{p(y_k)}\, e^{\phi_K(x',x,y_k)}}{\frac{1}{K-1}\sum_{k=2}^{K} \frac{p(y_k|x')}{p(y_k)}} \xrightarrow[K\to\infty]{} \mathbb{E}_{p(y|x')}\left[e^{\phi(x',x,y)}\right], \qquad (34)$$
where we applied the Strong Law of Large Numbers to the denominator.
For the numerator, we write:
$$\frac{1}{K-1}\sum_{k=2}^{K} \frac{p(y_k|x')}{p(y_k)}\, e^{\phi_K(x',x,y_k)} = \frac{1}{K-1}\sum_{k=2}^{K} \frac{p(y_k|x')}{p(y_k)}\, e^{\phi(x',x,y_k)} + \frac{1}{K-1}\sum_{k=2}^{K} \frac{p(y_k|x')}{p(y_k)}\left(e^{\phi_K(x',x,y_k)} - e^{\phi(x',x,y_k)}\right)$$
and note that the first term is the standard IS estimator using $p(y_k)$ as the proposal distribution and tends to $\mathbb{E}_{p(y|x')}\left[e^{\phi(x',x,y)}\right]$ by the Strong Law of Large Numbers, while the second term goes to 0 as $\phi_K$ tends to φ uniformly.
This gives $\lim_{K\to\infty} \mathbb{E}_{p(y_{2:K})}\left[\log \frac{K e^{\phi_K(x',x,y)}}{e^{\phi_K(x',x,y)} + (K-1)\sum_{k=2}^{K} w_k\, e^{\phi_K(x',x,y_k)}}\right] = \log \frac{e^{\phi(x',x,y)}}{\mathbb{E}_{p(y|x')}\left[e^{\phi(x',x,y)}\right]}$.
Following the same logic (without the importance-sampling) demonstrates that:
$$\lim_{K\to\infty} \mathbb{E}_{p(y_{2:K}|x')}\left[\log \frac{K e^{\phi(x',x,y)}}{e^{\phi(x',x,y)} + \sum_{k=2}^{K} e^{\phi(x',x,y_k)}}\right] = \log \frac{e^{\phi(x',x,y)}}{\mathbb{E}_{p(y|x')}\left[e^{\phi(x',x,y)}\right]},$$
which concludes the proof.
Proposition 3 (Importance Sampled I_CNCE). Assuming $\psi^* = \arg\sup_\psi I_{\text{NCE}}(x', y)$ and $w_k = \frac{\exp \psi^*(x',y_k)}{\sum_{m=2}^{K} \exp \psi^*(x',y_m)}$, we have the following two properties, where:
$$I_{\text{IS}}(x, y|x', \phi, K) = \mathbb{E}\left[\log \frac{e^{\phi(x',x,y_1)}}{\frac{1}{K}\left(e^{\phi(x',x,y_1)} + (K-1)\sum_{k=2}^{K} w_k\, e^{\phi(x',x,y_k)}\right)}\right], \qquad (9)$$
1. $\lim_{K\to\infty} \sup_\phi I_{\text{IS}}(x; y|x', \phi, K) = I(x; y|x')$,
2. $\lim_{K\to\infty} \arg\sup_\phi I_{\text{IS}} = \log \frac{p(y|x',x)}{p(y|x')} + c(x, x')$.
Proof. By applying Lemma 1 with $\phi_K = \phi$, we know that for any φ:
$$\lim_{K\to\infty} I_{\text{IS}}(x; y|x', \phi, K) = \lim_{K\to\infty} \mathbb{E}_{p(x',x,y)\,p(y_{2:K}|x')}\left[\log \frac{K e^{\phi(x',x,y)}}{e^{\phi(x',x,y)} + \sum_{k=2}^{K} e^{\phi(x',x,y_k)}}\right].$$
In particular, the RHS of the equality corresponds to $\lim_{K\to\infty} I_{\text{CNCE}}(x, y|x', \phi, K)$. That quantity is smaller than I(x, y|x'), with equality for $\phi = \phi^*$. This guarantees that:
$$\lim_{K\to\infty} \sup_\phi I_{\text{IS}}(x; y|x', \phi, K) \ge \lim_{K\to\infty} I_{\text{IS}}(x; y|x', \phi^*, K) = I(x, y|x'). \qquad (35)$$
We now prove the reverse inequality. We let $2\epsilon = \lim_{K\to\infty} \sup_\phi I_{\text{IS}}(x; y|x', \phi, K) - I(x, y|x')$, and assume towards a contradiction that $\epsilon > 0$. We know that:
$$\exists K_0,\; \forall K \ge K_0,\quad \sup_\phi I_{\text{IS}}(x; y|x', \phi, K) \ge I(x, y|x') + \epsilon.$$
Now, $\forall K \ge K_0$, let $\phi_K$ be such that:
$$I_{\text{IS}}(x; y|x', \phi_K, K) \ge \sup_\phi I_{\text{IS}}(x; y|x', \phi, K) - \frac{\epsilon}{2},$$
and thus: $\forall K \ge K_0$, $I_{\text{IS}}(x; y|x', \phi_K, K) \ge I(x, y|x') + \frac{\epsilon}{2}$.
Since $\phi_K \in \mathbb{R}^{|\mathcal{X}|\times|\mathcal{X}|\times|\mathcal{Y}|}$, $\{\phi_K\}_{K\ge K_0}$ contains a subsequence that converges to a certain $\phi_\infty \in \bar{\mathbb{R}}^{|\mathcal{X}|\times|\mathcal{X}|\times|\mathcal{Y}|}$. Without loss of generality, we assume that $\forall K, \forall x', \forall x$, $\mathbb{E}_{p(y)}[\phi_K(x', x, y)] = 0$, which implies that $\mathbb{E}_{p(y)}[\phi_\infty(x', x, y)] = 0$ (similarly to I_NCE, I_IS is invariant to functions of (x', x) added to φ).
In particular, this guarantees that $\|\phi_\infty\|_\infty < \infty$. Otherwise, we would have $\phi_\infty(x', x, y) = -\infty$ for a given y, which would then imply $I_{\text{IS}}(x; y|x', \phi_\infty, K) = -\infty$ and give a contradiction.
We can now apply Lemma 1 to $\{\phi_K\}$ and $\phi_\infty$ to show that $\lim_{K\to\infty} I_{\text{IS}}(x; y|x', \phi_K, K) = \lim_{K\to\infty} I_{\text{CNCE}}(x, y|x', \phi_\infty, K)$, and get a contradiction: the first term is larger than $I(x, y|x') + \frac{\epsilon}{2}$ while the second is smaller than I(x, y|x').
A.5. Proof for I_BO
Proposition 4 (Boosted Critic Estimation). Assuming $\psi^* = \arg\sup_\psi I_{\text{NCE}}(x', y)$, the following holds, with:
$$I_{\text{BO}}(x, y|x', \phi, K) = \mathbb{E}\left[\log \frac{e^{\psi^*(x',y_1)+\phi(x',x,y_1)}}{\frac{1}{K}\sum_{k=1}^{K} e^{\psi^*(x',y_k)+\phi(x',x,y_k)}}\right], \qquad (10)$$
1. $I_{\text{BO}} \le I(x, x'; y)$,
2. $\phi^* = \arg\sup_\phi I_{\text{BO}} = \log \frac{p(y|x',x)}{p(y|x')} + c(x, x')$.
Proof. To prove 1., it suffices to follow the proof for I_NCE (Sec. A.1). To prove 2., we set $\eta(x', x, y) = \psi^*(x', y) + \phi(x', x, y)$. Ma & Collins (2018) show that $\eta^*(x', x, y) = \log \frac{p(y|x',x)}{p(y)} + c_\eta(x', x)$, for any K. Knowing that $\psi^*(x', y) = \log \frac{p(y|x')}{p(y)} + c_\psi(x')$ is a constant in the maximization problem, simple algebra shows that $\phi^*(x', x, y) = \log \frac{p(y|x',x)}{p(y|x')} + c(x', x)$.
A.6. Synthetic Experiments
Here, we provide details for Sec. 5.1. In this experiment, each of x, x' and y is 20-dimensional. For each dimension, we sampled $(x_i, x'_i, y_i)$ from a correlated Gaussian with mean 0 and covariance matrix $\text{cov}_i$. For a given value of MI, $\text{mi} \in \{5, 10, 15, 20\}$, we sample covariance matrices $\text{cov}_i = \text{sample\_cov}(\text{mi}_i)$, such that $\sum_i \text{mi}_i = \text{mi}$, with $\text{mi}_i > 0$ chosen at random. We optimize the bounds by stochastic gradient descent (Adam, learning rate $5\cdot 10^{-4}$). All encoders f are multi-layer perceptrons with a single hidden layer and ReLU activation. Both hidden and output layers have size 100.
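A minimal sketch of this data-generation step (names assumed; the per-dimension covariances are assumed to have been produced by sample_cov, sketched near Listing 2):

```python
# Each of the 20 dimensions of (x, x', y) is drawn from its own correlated
# 3-dimensional zero-mean Gaussian with covariance cov_i.
import numpy as np

def make_dataset(covs, n_samples):
    # covs: list of 20 covariance matrices (3x3), one per dimension
    draws = [np.random.multivariate_normal(np.zeros(3), c, size=n_samples)
             for c in covs]                          # each (n_samples, 3)
    stacked = np.stack(draws, axis=-1)               # (n_samples, 3, 20)
    x, xp, y = stacked[:, 0], stacked[:, 1], stacked[:, 2]
    return x, xp, y
```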
InfoNCE computes:
$$\mathbb{E}_p\left[\log \frac{e^{f([x,x'])^\top f(y)}}{e^{f([x,x'])^\top f(y)} + \sum_{k=2}^{K} e^{f([x,x'])^\top f(y_k)}}\right] + \log K, \qquad y_{2:K} \sim p(y),$$
where the proposal is the marginal distribution p(y), the critic is chosen to be a dot product between representations, $\mathbb{E}_p$ denotes expectation w.r.t. the known joint distribution p(x, x', y) and is approximated with Monte Carlo, [x, x'] denotes concatenation, and f is a 1-hidden-layer MLP.
DEMI computes:
$$\begin{aligned}
&\mathbb{E}_{p(x',x,y)\,p(y_{2:K/2})}\left[\log \frac{e^{f(x')^\top f(y)}}{e^{f(x')^\top f(y)} + \sum_{k=2}^{K/2} e^{f(x')^\top f(y_k)}}\right] \;+ \\
&\mathbb{E}_{p(x',x,y)\,p(y_{2:K/2}|x')}\left[\log \frac{e^{f([x',x])^\top f(y)}}{e^{f([x',x])^\top f(y)} + \sum_{k=2}^{K/2} e^{f([x',x])^\top f(y_k)}}\right] + 2\log \frac{K}{2}, \qquad (36)
\end{aligned}$$
where f(x') is just f([x', 0]) in order to re-use the MLP parameters for the two terms. The negative samples of the conditional MI term come from the conditional distribution p(y|x'), which is assumed to be known in this controlled setting. We maximize both lower bounds with respect to the encoder f. We report pseudo-code for sample_cov in Listing 2, used to generate 3×3 covariance matrices for a fixed I({x, x'}; y) and uniformly sampled α = I(x; y)/I({x, x'}; y).
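A minimal sketch of these two estimators (all names are assumptions): a single MLP f is reused for every input by zero-padding single views, and dot products between encodings serve as critics.

```python
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(40, 100), nn.ReLU(), nn.Linear(100, 100))

def enc(a, b=None):
    # f([a, b]); single inputs are zero-padded so that f can be shared
    if b is None:
        b = torch.zeros_like(a)
    return f(torch.cat([a, b], dim=-1))

def nce_term(q, pos, negs):
    # q, pos: (B, D); negs: (B, K-1, D). InfoNCE log-ratio per batch
    # (the constant +log K is omitted).
    l_pos = (q * pos).sum(-1, keepdim=True)              # (B, 1)
    l_neg = torch.einsum("bd,bkd->bk", q, negs)          # (B, K-1)
    logits = torch.cat([l_pos, l_neg], dim=1)
    return (logits[:, 0] - torch.logsumexp(logits, dim=1)).mean()

def demi_synthetic(x, xp, y, y_marg, y_cond):
    # y_marg: negatives from p(y); y_cond: negatives from p(y|x')
    t1 = nce_term(enc(xp), enc(y), enc(y_marg))          # I_NCE(x'; y) term
    t2 = nce_term(enc(xp, x), enc(y), enc(y_cond))       # conditional term
    return t1 + t2                                       # maximize w.r.t. f
```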
B. Experiments on Dialogue
B.1. DEMI Details
The optimization of DEMI requires the specification of a critic. Following previous work (Oord et al., 2018; Hjelm et al., 2019), we implement the critic as a dot product between representations of the past, f(x), and those of the future, f(y). We obtain f_x, f_y by running a forward pass of the GPT2 model on the words of the past and the future separately and taking the state of the last layer of GPT2 corresponding to the last token in the past and the future, respectively.
For all DEMI terms, given the past, the model is trained to pick the ground-truth future among a set of N future candidates. This candidate set includes the ground-truth future and N - 1 negative futures drawn from different proposal distributions. To compute I_NCE(x; y), we consider the ground-truth future of each sample in the batch as a negative candidate for the other samples in the same batch. Using this approach, the number of candidates N equals the batch size. This ensures that negative samples are drawn from the marginal distribution p(y). To compute the conditional MI bound I_CNCE(x; y|x'), we sample negative futures from p(y|x') by conditioning the GPT2 model on the most recent utterance in the past, x'.
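A minimal sketch of the two negative-sampling schemes (names assumed; the GPT2-generated negatives are assumed to be pre-encoded offline):

```python
# In-batch futures act as p(y) negatives for I_NCE; GPT2 continuations of
# x' act as p(y|x') negatives for the conditional bound.
import torch
import torch.nn.functional as F

def in_batch_info_nce(f_past, f_future, tau=1.0):
    # f_past, f_future: (B, D); every other row serves as a negative.
    logits = f_past @ f_future.t() / tau
    labels = torch.arange(logits.size(0), device=logits.device)
    return -F.cross_entropy(logits, labels)

def conditional_nce(f_past, f_future, f_neg, tau=1.0):
    # f_neg: (B, N-1, D) encodings of futures sampled offline from
    # p_GPT2(y | x'), i.e. GPT2 conditioned on the last utterance only.
    pos = (f_past * f_future).sum(-1, keepdim=True) / tau
    neg = torch.einsum("bd,bnd->bn", f_past, f_neg) / tau
    logits = torch.cat([pos, neg], dim=1)
    return (logits[:, 0] - torch.logsumexp(logits, dim=1)).mean()
```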
B.2. Dataset
Wizard of Wikipedia (Dinan et al., 2019) consists of 20 365 dialogues, each about a specific topic. There are two participants in the conversation: the wizard and the apprentice. The apprentice is a curious learner who is eager to know more about a particular topic, while the wizard is a knowledgeable expert who tries to inform the apprentice about the topic. In our experiments, we used the "unseen valid" validation data, whose topics overlap with neither the train data nor the test data. Detailed statistics of the dataset are presented in Table 4.
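Of Listing 2, only the objective's return statement is reproduced verbatim in the sketch below; the surrounding scaffolding (the scipy-based optimization and all other names) is a hedged reconstruction.

```python
# Find a 3x3 covariance whose Gaussian MIs match the targets
# I({x, x'}; y) = mi and I(x; y) = α * mi.
import numpy as np
from scipy.optimize import minimize

def gaussian_mi(cov, a_idx, b_idx):
    # I(a; b) in nats for jointly Gaussian variables, via log-determinants
    det = np.linalg.det
    ca = cov[np.ix_(a_idx, a_idx)]
    cb = cov[np.ix_(b_idx, b_idx)]
    cab = cov[np.ix_(a_idx + b_idx, a_idx + b_idx)]
    return 0.5 * (np.log(det(ca)) + np.log(det(cb)) - np.log(det(cab)))

def sample_cov(mi, α=None):
    if α is None:
        α = np.random.uniform(0.0, 1.0)  # α = I(x; y) / I({x, x'}; y)

    def build(theta):
        # Cholesky-style parameterization keeps the matrix positive definite
        L = np.zeros((3, 3))
        L[np.tril_indices(3)] = theta
        return L @ L.T + 1e-6 * np.eye(3)

    def objective(theta):
        cov = build(theta)
        mi_xp_xpp_y = gaussian_mi(cov, [0, 1], [2])  # I({x, x'}; y)
        mi_xp_y = gaussian_mi(cov, [0], [2])         # I(x; y)
        # the line below is the only part kept verbatim from Listing 2
        return (mi_xp_xpp_y - mi) ** 2 + (mi_xp_y - α * mi) ** 2

    res = minimize(objective, np.random.randn(6), method="Nelder-Mead")
    return build(res.x)
```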
Listing 2: Pseudo-code for covariance sampling in the synthetic experiment.

Table 3: A sample dialogue between speaker A and speaker B from the Wizard of Wikipedia dataset. The four rows from top to bottom are: (1) x: the "past" dialogue up to utterance k; (2) y: the ground-truth utterance for the next turn k + 1; (3) y_{1:N} ∼ p(y|x'): future candidates sampled from the "restricted context" future distribution; these candidates correspond to the set of hard negatives that are closely related to the conversation; (4) y_{1:N} ∼ p(y): future candidates sampled randomly from the dataset. We can see that the candidates from p(y|x') are semantically close but incoherent w.r.t. the dialogue history, as they were conditioned solely on the immediate past utterance x'. In contrast, the candidates from p(y) are semantically distant from x, as they were sampled randomly from the data distribution. The highlighted text in green corresponds to the topic of the conversation. Speaker B mentions that they have never done either parachuting or skydiving. B_1 under p(y|x') is generated based on the restricted context x'; the utterance is on-topic but completely contradicts what speaker B has said in the past. On the other hand, B_1 under p(y) is randomly sampled from other dialogues, and the utterance is clearly irrelevant to the conversation.
B.3. Experimental Setup
Given memory constraints, all the proposed models are trained with a batch size of 5 per GPU, considering up to three utterances for the future and five utterances in the past. All the models are trained on 2 NVIDIA V100s. The models early-stop in the 4th epoch. We use the Adam optimizer with a learning rate of $6.25 \times 10^{-5}$, which we linearly decay to zero during training. Dropout is set to 10% on all layers. InfoNCE/DEMI terms are weighted with a factor 0.1 in the loss function. We varied the factor from 0.1 to 1, and 0.1 was chosen based on the best results on the validation set. During inference, we use nucleus sampling (Holtzman et al., 2020) with p = 0.9 for all models.
B.4. Additional Automated Metrics
Repetition. The word repetition metrics aim at testing the model's performance in generating responses while avoiding artificial repetitions. We employ the repetition metrics presented in Welleck et al. (2020): seq-rep-n, rep, wrep and uniq. These metrics are defined based on the amount of repetition in the generations. seq-rep-n measures the portion of duplicate n-grams in a generated sequence:
$$\text{seq-rep-n} = 1 - \frac{|\text{unique } n\text{-grams}(w_{1:N})|}{|n\text{-grams}|},$$
where $w_{1:N}$ is the generated utterance. We report seq-rep-avg, which averages over n ∈ {2, 3, 4, 5, 6}. rep measures the fraction of tokens that occur among previous tokens, and uniq counts the number of unique tokens on the validation set. Please refer to Welleck et al. (2020) and Li et al. (2020) for more information about these metrics.
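These repetition metrics can be computed directly; a small Python sketch follows (function names assumed):

```python
# Direct implementations of seq-rep-n, seq-rep-avg and rep as defined above.
def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def seq_rep_n(tokens, n):
    grams = ngrams(tokens, n)
    if not grams:
        return 0.0
    return 1.0 - len(set(grams)) / len(grams)

def seq_rep_avg(tokens, orders=(2, 3, 4, 5, 6)):
    return sum(seq_rep_n(tokens, n) for n in orders) / len(orders)

def rep(tokens):
    # fraction of tokens that already occurred earlier in the sequence
    seen, repeated = set(), 0
    for t in tokens:
        repeated += t in seen
        seen.add(t)
    return repeated / max(len(tokens), 1)
```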
Distinct-n. The metric is derived from Li et al. (2016). It is defined as the number of unique n-grams, normalized by the total number of n-grams of the tested sentences.
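A direct implementation of this definition, reusing the ngrams helper from the previous sketch:

```python
def distinct_n(sentences, n):
    # sentences: list of token lists; pool all n-grams across sentences
    all_grams = [g for s in sentences for g in ngrams(s, n)]
    if not all_grams:
        return 0.0
    return len(set(all_grams)) / len(all_grams)
```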
Entropy-n. We employ the entropy metric from Zhang et al. (2018), which aims to fix the frequency-insensitivity of Distinct-n by reflecting how evenly distributed the empirical n-gram distribution is for each given sentence.
Results on the test set and the valid set are presented in Table 5 and Table 6, respectively.

Table 7: Selected responses to the same context from different methods fine-tuned on the Wizard of Wikipedia dataset.
Context:
A: I recently took up archery and I think I love it. Do you know anything about Archery?
B: Archery is a sport / skill of using a bow to propel arrows.
A: And a great sport it is. Do you know where archery originated from?

Generated responses:
DEMI: Archery originated from ancient Mesopotamia, and is actually practiced internationally in over 200 countries.
InfoNCE: I am not sure but many traditions and principles arose during the Mesolithic and Paleolithic era.
TransferTransfo: Yep, you just use it for skill and using it to shoot arrows.
GPT2: I don't know, but I know that the old French called it archer's art.
B.5. Human Evaluation
We closely follow the protocol used in Zhang et al. (2020). Systems were paired and each response pair was presented to 3 judges in random order. Judges expressed their preference on a 3-point Likert scale. We use a majority vote for each response pair to decide whether a specific baseline, the pivot (DEMI), or neither performed better. We then bootstrap the set of majority votes to obtain a 99% confidence interval (CI) on the expected difference between the baseline and DEMI. If this confidence interval contains 0, the difference is deemed insignificant. We also compute p-values from the confidence intervals.⁵
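A minimal sketch of this vote-then-bootstrap procedure (names assumed):

```python
# Majority vote per response pair, then a bootstrap 99% CI on the mean
# difference between DEMI and the baseline; a CI excluding 0 is significant.
import numpy as np

def bootstrap_ci(votes, n_boot=10000, level=0.99):
    # votes: per-pair majority votes, +1 if DEMI won, -1 if the baseline
    # won, 0 for ties.
    votes = np.asarray(votes)
    means = [np.random.choice(votes, size=len(votes), replace=True).mean()
             for _ in range(n_boot)]
    lo, hi = np.percentile(means, [(1 - level) / 2 * 100,
                                   (1 + level) / 2 * 100])
    significant = not (lo <= 0.0 <= hi)
    return lo, hi, significant
```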
In the following tables, the "pivot" is always the system given by DEMI. Pairings where the pairwise confidence interval is marked with "*" have a significant difference.
Figure 2: Estimation of I(x, x'; y) for three Gaussian covariates x, x', y as a function of the number of negative samples K. (top) We compare the estimate of the MI obtained by InfoNCE and DEMI, which maximizes I_DEMI (Eq. 5). To be comparable with InfoNCE in terms of the total number of negative examples used, DEMI uses half as many negative examples for computing each term in the sum (K/2). For all amounts of true MI, and especially for larger amounts, DEMI can capture more nats than InfoNCE with an order of magnitude fewer examples. We also report the upper bounds on InfoNCE (log K) and DEMI (2 log K/2). (1) DEMI captures more MI than InfoNCE for the same K, and (2) I_BO accurately estimates the conditional critic without access to samples from p(y|x'), while I_IS suffers from significant variance. (bottom) We assess whether we can form a good estimator of the total MI without access to p(y|x'), neither at training nor at evaluation time. Here, DEMI-BO trains the conditional critic by I_BO and evaluates the total MI by I_NCE + I_IS.
Table 1: Accuracy for self-supervised learning on Imagenet-100 (IN100) and on full Imagenet (IN1K), measured by linear evaluation. x ↔ y denotes standard contrastive matching between views. DEMI uses the same base InfoMin architecture but augments the loss function with conditional MI maximization across views. InfoMin (multi) considers x' just as an additional view and therefore discards conditional MI maximization. All models use a standard ResNet-50 and are trained for 200 epochs. The right part of the table reports transfer learning performance of our model trained on IN1K. (Cells whose values did not survive in this copy are left blank; the InfoMin and DEMI IN100/IN1K entries follow from the results reported in the text.)

| Model | Views | IN100 | IN1K | STL10 | C10 | C100 | CARS | CUB | FLOWERS |
|---|---|---|---|---|---|---|---|---|---|
| SimCLR (Chen et al., 2020a) | x ↔ y | - | 66.6 | - | 90.6 | 71.6 | 50.3 | - | 91.2 |
| MoCoV2 (Chen et al., 2020b) | x ↔ y | - | 67.5 | - | - | - | - | - | - |
| InfoMin (Tian et al., 2020) | x ↔ y | 74.9 | 70.1 | | | | | | |
| InfoMin (multi) | | | | | | | | | |
| DEMI | | 78.6±0.2 | 70.8±0.1 | | | | | | |
Table 2: Perplexity, BLEU and side-by-side human evaluation on WoW (Dinan et al., 2019). H-columns indicate whether DEMI was preferred (✓) or not (✗), or neither (=), at α = 0.01.

| Model | ppl | BLEU | H-rel | H-hum | H-int |
|---|---|---|---|---|---|
| GPT2 | 19.21 | 0.78 | ✓ | ✓ | ✓ |
| TransferTransfo | 19.32 | 0.75 | ✓ | ✓ | ✓ |
| GPT2-MMI | 19.30 | 0.65 | ✓ | ✓ | ✓ |
| InfoNCE | 18.85 | 0.80 | ✓ | ✓ | = |
| DEMI | 18.70 | 0.82 | = | = | = |
| Human | - | - | ✗ | ✗ | ✗ |
x:
A: I like parachuting or skydiving.
B: I've never done either but they sound terrifying, not a fan of heights.
A: But it is interesting game. This first parachute jump in history was made by Andre Jacques.
B: Oh really? Sounds like a french name, what year did he do it?
A: It done in October 22 1797. They tested his contraption by leaping from a hydrogen balloon.
B: Was he successful or did he kick the bucket off that stunt?
A: I think its a success. The military developed parachuting tech.

y ∼ p(y|x):
B_gt: Yeah nowadays they are a lot more stable and well made.

y_{1:N} ∼ p(y|x'):
B_1: That is great. I've been skydiving for days now. How is it?
B_2: Oh I have never flown but I'm glad to know.
B_3: I've been dying for it since I was a kid.
B_4: Yes, that is why NASA had an advanced mechanics tech for months.
B_5: I went parachuting last Sunday and enjoyed it.

y_{1:N} ∼ p(y):
B_1: I think science fiction is an amazing genre for anything.
B_2: Can you imagine the world without internet access?
B_3: I am just finishing my university course and I will be a qualified pharmacist.
B_4: I don't know how to be romantic. I have trouble expressing emotional attraction.
B_5: I think Krav Maga is a martial art sport. That's the reason I picked it.
Table 4: Statistics of the Wizard of Wikipedia dataset.

| | # Train | # Valid | # Test |
|---|---|---|---|
| Number of utterances | 166 787 | 8806 | 8782 |
| Number of dialogues | 18 430 | 967 | 968 |
| Number of topics | 1247 | 300 | 58 |
| Average turns per dialog | 9 | 9 | 9 |
Table 5: Results for perplexity, sequence-level metrics, token-level metrics, BLEU and diversity metrics on the test data of the Wizard of Wikipedia dataset. Results demonstrate that the proposed InfoNCE and DEMI bounds achieve lower perplexity, reduce next-token repetition and increase the number of unique next-tokens compared to the baselines GPT2, GPT2-MMI and TransferTransfo. Note that our results are not directly comparable with Li et al. (2020) as their model is trained from scratch on a not publicly available Reddit-based corpus.

| Model | ppl | seq-rep-avg | rep | wrep | uniq | dist-1 | dist-2 | BLEU | Entropy-4 |
|---|---|---|---|---|---|---|---|---|---|
| GPT2 | 19.24 | 0.064 | 0.130 | 0.132 | 7393 | 0.064 | 0.392 | 0.775 | 0.095 |
| TransferTransfo | 19.33 | 0.078 | 0.134 | 0.132 | 7735 | 0.058 | 0.386 | 0.752 | 0.084 |
| GPT2-MMI | 19.35 | 0.070 | 0.129 | 0.135 | 7623 | 0.052 | 0.384 | 0.740 | 0.092 |
| InfoNCE | 18.88 | 0.065 | 0.126 | 0.131 | 8432 | 0.065 | 0.390 | 0.799 | 0.107 |
| DEMI | 18.66 | 0.050 | 0.120 | 0.128 | 8666 | 0.070 | 0.405 | 0.810 | 0.108 |
| Ground Truth | - | 0.052 | 0.095 | - | 9236 | 0.069 | 0.416 | - | 0.110 |
Table 6: Results for perplexity, sequence-level metrics, token-level metrics, BLEU and diversity metrics on the valid data of the Wizard of Wikipedia dataset.

| Model | ppl | seq-rep-avg | rep | wrep | uniq | dist-1 | dist-2 | BLEU | Entropy-4 |
|---|---|---|---|---|---|---|---|---|---|
| GPT2 | 19.21 | 0.066 | 0.136 | 0.134 | 7259 | 0.083 | 0.432 | 0.780 | 0.106 |
| TransferTransfo | 19.32 | 0.074 | 0.134 | 0.133 | 7223 | 0.082 | 0.431 | 0.750 | 0.106 |
| GPT2-MMI | 19.30 | 0.065 | 0.126 | 0.130 | 7351 | 0.082 | 0.425 | 0.650 | 0.106 |
| InfoNCE | 18.85 | 0.065 | 0.126 | 0.126 | 7300 | 0.091 | 0.046 | 0.800 | 0.107 |
| DEMI | 18.70 | 0.054 | 0.130 | 0.120 | 7375 | 0.077 | 0.047 | 0.820 | 0.108 |
| Ground Truth | - | 0.052 | 0.095 | - | 9236 | 0.069 | 0.416 | - | 0.110 |
Table 7 (continued):
Context:
A: What is your motivation?
B: That's a tough question. I'd say my own progress. I like seeing how much I improve with something. You?
A: I am retired now. Are you familiar with the work of Mehr and Meyer, well known psychologists?
B: They said that "Motivation is a word that is part of the popular culture as few other psychological concepts are." Do you agree with what they said?

Generated responses:
DEMI: Yes, definitely! You should check out Mark Twain's famous motivational book! There are theories that point out how important it is to meditate.
InfoNCE: They said that in psychology research, theories and evidence, participants who enjoyed continuous experience could be more likely to be successful.
TransferTransfo: Absolutely, I disagree. Are you an author or a commentator?
GPT2: That's awesome. So, what types of differences exist between "good" motivation and bad?

Table 10: Which response is more interesting?

| Baseline | DEMI_wins | DEMI_CI | baseline_wins | baseline_CI | pairwise_CI | p |
|---|---|---|---|---|---|---|
| GPT2 | 0.48726 | (0.44, 0.53] | 0.28662 | (0.25, 0.32] | (0.13, 0.27] * | <0.001 |
| GPT2-MMI | 0.65833 | (0.6, 0.71] | 0.16250 | (0.12, 0.21] | (0.4, 0.58] * | <0.001 |
| TransferTransfo | 0.46888 | (0.43, 0.51] | 0.30043 | (0.26, 0.34] | (0.09, 0.24] * | <0.001 |
| InfoNCE | 0.41711 | (0.38, 0.46] | 0.36748 | (0.33, 0.41] | (-0.03, 0.13] | 0.0905 |
| gold_response | 0.22679 | (0.19, 0.26] | 0.54325 | (0.5, 0.59] | (-0.39, -0.25] * | <0.001 |
Table 8: Which response is more relevant?

| Baseline | DEMI_wins | DEMI_CI | baseline_wins | baseline_CI | pairwise_CI | p |
|---|---|---|---|---|---|---|
| GPT2 | 0.45084 | (0.41, 0.49] | 0.32636 | (0.29, 0.37] | (0.05, 0.2] * | <0.001 |
| GPT2-MMI | 0.61734 | (0.56, 0.67] | 0.18393 | (0.14, 0.23] | (0.34, 0.53] * | <0.001 |
| TransferTransfo | 0.43617 | (0.4, 0.48] | 0.35000 | (0.31, 0.39] | (0.01, 0.16] * | 0.0028 |
| InfoNCE | 0.44630 | (0.41, 0.49] | 0.34515 | (0.31, 0.38] | (0.03, 0.17] * | <0.001 |
| gold_response | 0.22164 | (0.19, 0.26] | 0.56608 | (0.52, 0.61] | (-0.41, -0.28] * | <0.001 |
Table 9: Which response is more humanlike?

| Baseline | DEMI_wins | DEMI_CI | baseline_wins | baseline_CI | pairwise_CI | p |
|---|---|---|---|---|---|---|
| GPT2 | 0.56157 | (0.52, 0.6] | 0.21444 | (0.18, 0.25] | (0.28, 0.42] * | <0.001 |
| GPT2-MMI | 0.68750 | (0.63, 0.74] | 0.12292 | (0.09, 0.16] | (0.48, 0.65] * | <0.001 |
| TransferTransfo | 0.51931 | (0.48, 0.56] | 0.24571 | (0.21, 0.28] | (0.21, 0.34] * | <0.001 |
| InfoNCE | 0.41288 | (0.37, 0.45] | 0.33580 | (0.3, 0.38] | (0.0, 0.15] * | 0.0059 |
| gold_response | 0.32384 | (0.28, 0.36] | 0.46624 | (0.43, 0.51] | (-0.22, -0.07] * | <0.001 |
In what follows, we will slightly abuse language and use the expression "maximizing I(x, y)" as a shortcut for "maximizing a lower bound on I(x, y) with respect to f ".
The derivation in Oord et al. (2018) presented an approximation and therefore was not properly a bound. An alternative, exact derivation of the bound can be found in (Poole et al., 2019).
The ability to perform that computation is usually a key assumption in self-supervised learning approaches.
https://www.bmj.com/content/343/bmj.d2304
AcknowledgementsWe would like to acknowledge Jiaming Song and Mike Wu for the insightful discussions and the anonymous reviewers for their helpful comments.
References

Bachman, P., Hjelm, R. D., and Buchwalter, W. Learning representations by maximizing mutual information across views. In Proc. Conf. on Neural Information Processing Systems (NeurIPS), pp. 15509-15519, 2019.

Barber, D. and Agakov, F. The IM algorithm: A variational approach to information maximization. In Proc. Conf. on Neural Information Processing Systems (NIPS), pp. 201-208, 2003.

Belghazi, M. I., Baratin, A., Rajeswar, S., Ozair, S., Bengio, Y., Hjelm, R. D., and Courville, A. C. Mutual information neural estimation. In Proc. Int. Conf. on Machine Learning (ICML), pp. 530-539, 2018.

Caron, M., Misra, I., Mairal, J., Goyal, P., Bojanowski, P., and Joulin, A. Unsupervised learning of visual features by contrasting cluster assignments. In Proc. Conf. on Neural Information Processing Systems (NeurIPS), 2020.

Ceylan, C. and Gutmann, M. U. Conditional noise-contrastive estimation of unnormalised models. In Proc. Int. Conf. on Machine Learning (ICML), pp. 725-733, 2018.

Chen, T., Kornblith, S., Norouzi, M., and Hinton, G. E. A simple framework for contrastive learning of visual representations. In Proc. Int. Conf. on Machine Learning (ICML), pp. 1597-1607, 2020a.

Chen, X., Fan, H., Girshick, R., and He, K. Improved baselines with momentum contrastive learning. arXiv preprint arXiv:2003.04297, 2020b.

Cremer, C., Morris, Q., and Duvenaud, D. Reinterpreting importance-weighted autoencoders. In Proc. Int. Conf. on Learning Representations (ICLR), 2017.

Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In CVPR, 2009.

Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proc. Conf. Assoc. for Computational Linguistics (ACL), pp. 4171-4186, 2019.

Dinan, E., Roller, S., Shuster, K., Fan, A., Auli, M., and Weston, J. Wizard of Wikipedia: Knowledge-powered conversational agents. In Proc. Int. Conf. on Learning Representations (ICLR), 2019.

Elias, P. Predictive coding - I. IRE Transactions on Information Theory, 1(1):16-24, 1955.

Faghri, F., Fleet, D. J., Kiros, J. R., and Fidler, S. VSE++: Improving visual-semantic embeddings with hard negatives. In Proc. British Machine Vision Conference (BMVC), pp. 12, 2018.

Foster, A., Jankowiak, M., O'Meara, M., Teh, Y. W., and Rainforth, T. A unified stochastic gradient approach to designing Bayesian-optimal experiments. In Proc. Int. Conf. on Artificial Intelligence and Statistics, pp. 2959-2969, 2020.

Gidaris, S., Singh, P., and Komodakis, N. Unsupervised representation learning by predicting image rotations. In Proc. Int. Conf. on Learning Representations (ICLR), 2018.

Grill, J., Strub, F., Altché, F., Tallec, C., Richemond, P. H., Buchatskaya, E., Doersch, C., Pires, B. Á., Guo, Z., Azar, M. G., Piot, B., Kavukcuoglu, K., Munos, R., and Valko, M. Bootstrap your own latent - A new approach to self-supervised learning. In Proc. Conf. on Neural Information Processing Systems (NeurIPS), 2020.

Gutmann, M. U. and Hyvärinen, A. Noise-contrastive estimation of unnormalized statistical models, with applications to natural image statistics. Journal of Machine Learning Research, 13:307-361, 2012.

Hinton, G. E. A practical guide to training restricted Boltzmann machines. In Neural Networks: Tricks of the Trade, pp. 599-619. Springer, 2012.

Hjelm, R. D., Fedorov, A., Lavoie-Marchildon, S., Grewal, K., Bachman, P., Trischler, A., and Bengio, Y. Learning deep representations by mutual information estimation and maximization. In Proc. Int. Conf. on Learning Representations (ICLR), 2019.

Holtzman, A., Buys, J., Du, L., Forbes, M., and Choi, Y. The curious case of neural text degeneration. In Proc. Int. Conf. on Learning Representations (ICLR), 2020.

Jabri, A., Owens, A., and Efros, A. A. Space-time correspondence as a contrastive random walk. In Proc. Conf. on Neural Information Processing Systems (NeurIPS), 2020.

Kingma, D. P. and Dhariwal, P. Glow: Generative flow with invertible 1x1 convolutions. In Proc. Conf. on Neural Information Processing Systems (NeurIPS), pp. 10236-10245, 2018.

Kingma, D. P. and Welling, M. Auto-encoding variational Bayes. In Proc. Int. Conf. on Learning Representations (ICLR), 2014.

Krause, J., Deng, J., Stark, M., and Fei-Fei, L. Collecting a large-scale dataset of fine-grained cars. Technical report, 2013.

Krizhevsky, A. et al. Learning multiple layers of features from tiny images. Technical report, 2009.

Li, J., Galley, M., Brockett, C., Gao, J., and Dolan, B. A diversity-promoting objective function for neural conversation models. In Proc. Conf. Assoc. for Computational Linguistics (ACL), pp. 110-119, 2016.

Li, M., Roller, S., Kulikov, I., Welleck, S., Boureau, Y.-L., Cho, K., and Weston, J. Don't say that! Making inconsistent dialogue unlikely with unlikelihood training. In Proc. Conf. Assoc. for Computational Linguistics (ACL), pp. 4715-4728, 2020.

Ma, Z. and Collins, M. Noise contrastive estimation and negative sampling for conditional models: Consistency and statistical efficiency. In Proc. Conf. on Empirical Methods in Natural Language Processing (EMNLP), pp. 3698-3707, 2018.

Mazoure, B., Tachet des Combes, R., Doan, T., Bachman, P., and Hjelm, R. D. Deep reinforcement and infomax learning. In Proc. Conf. on Neural Information Processing Systems (NeurIPS), 2020.

McAllester, D. Information theoretic co-training. arXiv preprint arXiv:1802.07572, 2018.

McAllester, D. and Stratos, K. Formal limitations on the measurement of mutual information. In Int. Conf. on Artificial Intelligence and Statistics (AISTATS), pp. 875-884, 2020a.

McAllester, D. and Stratos, K. Formal limitations on the measurement of mutual information. In Chiappa, S. and Calandra, R. (eds.), Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, volume 108 of Proceedings of Machine Learning Research, pp. 875-884. PMLR, 2020b.

Mescheder, L. M., Geiger, A., and Nowozin, S. Which training methods for GANs do actually converge? In Proc. Int. Conf. on Machine Learning (ICML), pp. 3478-3487, 2018.

Mikolov, T., Chen, K., Corrado, G., and Dean, J. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.

Mondal, A. K., Bhattacharjee, A., Mukherjee, S., Asnani, H., Kannan, S., and P., P. A. C-MI-GAN: Estimation of conditional mutual information using minmax formulation. In Proc. Conf. on Uncertainty in Artificial Intelligence (UAI), pp. 849-858, 2020.

Nilsback, M.-E. and Zisserman, A. Automated flower classification over a large number of classes. In ICVGIP '08, pp. 722-729, USA, 2008. IEEE Computer Society.

Noroozi, M. and Favaro, P. Unsupervised learning of visual representations by solving jigsaw puzzles. In Proc. European Conf. on Computer Vision, pp. 69-84. Springer, 2016.

Oord, A. v. d., Li, Y., and Vinyals, O. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.

Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. BLEU: A method for automatic evaluation of machine translation. In Proc. Conf. Assoc. for Computational Linguistics (ACL), pp. 311-318, 2002.

Poole, B., Ozair, S., van den Oord, A., Alemi, A., and Tucker, G. On variational bounds of mutual information. In Proc. Int. Conf. on Machine Learning (ICML), pp. 5171-5180, 2019.

Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. Language models are unsupervised multitask learners. OpenAI Blog, 2019.

Rainforth, T., Kosiorek, A. R., Le, T. A., Maddison, C. J., Igl, M., Wood, F., and Teh, Y. W. Tighter variational bounds are not necessarily better. In Proc. Int. Conf. on Machine Learning (ICML), pp. 4274-4282, 2018.

Rhodes, B., Xu, K., and Gutmann, M. U. Telescoping density-ratio estimation. arXiv preprint arXiv:2006.12204, 2020.

Rubin, D. B. The calculation of posterior distributions by data augmentation: Comment: A noniterative sampling/importance resampling alternative to the data augmentation algorithm for creating a few imputations when fractions of missing information are modest: The SIR algorithm. Journal of the American Statistical Association, 82(398):543-546, 1987.

Skare, Ø., Bølviken, E., and Holden, L. Improved sampling-importance resampling and reduced bias importance sampling. Scandinavian Journal of Statistics, 30(4):719-737, 2003.

Song, J. and Ermon, S. Understanding the limitations of variational mutual information estimators. In Proc. Int. Conf. on Learning Representations (ICLR), 2020a.

Song, J. and Ermon, S. Multi-label contrastive predictive coding. In Proc. Conf. on Neural Information Processing Systems (NeurIPS), 2020b.

Stratos, K. Mutual information maximization for simple and accurate part-of-speech induction. In Proc. Conf. Assoc. for Computational Linguistics (ACL), pp. 1095-1104, 2019.

Tian, Y., Krishnan, D., and Isola, P. Contrastive multiview coding. arXiv preprint arXiv:1906.05849, 2019.

Tian, Y., Sun, C., Poole, B., Krishnan, D., Schmid, C., and Isola, P. What makes for good views for contrastive learning. arXiv preprint arXiv:2005.10243, 2020.

Tschannen, M., Djolonga, J., Rubenstein, P. K., Gelly, S., and Lucic, M. On mutual information maximization for representation learning. In Proc. Int. Conf. on Learning Representations (ICLR), 2020.

Velickovic, P., Fedus, W., Hamilton, W. L., Liò, P., Bengio, Y., and Hjelm, R. D. Deep graph infomax. In Proc. Int. Conf. on Learning Representations (ICLR), 2019.

Wang, T. and Isola, P. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In Proc. Int. Conf. on Machine Learning (ICML), pp. 9929-9939, 2020.

Welinder, P., Branson, S., Mita, T., Wah, C., Schroff, F., Belongie, S., and Perona, P. Caltech-UCSD Birds 200, 2010.

Welleck, S., Kulikov, I., Roller, S., Dinan, E., Cho, K., and Weston, J. Neural text generation with unlikelihood training. In Proc. Int. Conf. on Learning Representations (ICLR), 2020.

Wolf, T., Sanh, V., Chaumond, J., and Delangue, C. TransferTransfo: A transfer learning approach for neural network based conversational agents. In Proc. Conf. on Neural Information Processing Systems (NeurIPS) CAI Workshop, 2019.

Yang, Z., Luo, T., Wang, D., Hu, Z., Gao, J., and Wang, L. Learning to navigate for fine-grained classification. In Proc. of the European Conf. on Computer Vision (ECCV), pp. 420-435, 2018.

Zhang, Y., Galley, M., Gao, J., Gan, Z., Li, X., Brockett, C., and Dolan, B. Generating informative and diverse conversational responses via adversarial information maximization. In Proc. Conf. on Neural Information Processing Systems (NeurIPS), pp. 1815-1825, 2018.

Zhang, Y., Sun, S., Galley, M., Chen, Y.-C., Brockett, C., Gao, X., Gao, J., Liu, J., and Dolan, B. DialoGPT: Large-scale generative pre-training for conversational response generation. In Proc. Conf. Assoc. for Computational Linguistics (ACL), pp. 270-278, 2020.
| [] |
[
"Text Summarization Techniques: A Brief Survey",
"Text Summarization Techniques: A Brief Survey"
] | [
"Mehdi Allahyari [email protected] ",
"Seyedamin Pouriyeh ",
"Mehdi Assefi ",
"Saeid Safaei ",
"Elizabeth D Trippe ",
"MehdiSeyedamin Allahyari ",
"Mehdi Pouriyeh ",
"Saeid Assefi ",
"Elizabeth D Safaei ",
"Juan B Trippe ",
"Gutierrez ",
"Mehdi Allahyari ",
"Seyedamin Pouriyeh ",
"Mehdi Assefi ",
"Saeid Safaei ",
"Elizabeth D Trippe ",
"Juan B Gutierrez ",
"Krys Kochut ",
"Mehdi Allahyari ",
"Seyedamin Pouriyeh \nDepartment of Computer Science\nUniversity of Georgia\nAthensUSA\n",
"Mehdi Assefi \nDepartment of Computer Science\nUniversity of Georgia\nAthensUSA\n",
"Saeid Safaei \nDepartment of Computer Science\nUniversity of Georgia\nAthensUSA\n",
"Elizabeth D Trippe \nDepartment of Mathematics Institute of Bioinformatics\nUniversity of Georgia Athens\nUSA\n",
"Juan B Gutierrez \nDepartment of Mathematics Institute of Bioinformatics\nUniversity of Georgia Athens\nUSA\n",
"Krys Kochut \nDepartment of Computer Science\nUniversity of Georgia\nAthensUSA\n",
"\nSouthern Digital Commons@Georgia Southern Computer Science Faculty Publications Computer Science\nDepartment\nDepartment of Computer Science\nGeorgia Southern University Georgia Southern University Digital Commons@Georgia\nGeorgia Southern University\nUniversity of Georgia\nUniversity of Georgia\nUniversity of Georgia\nUniversity of Georgia\nGeorgia\n",
"\nSothern University\nStatesboroUSA\n"
] | [
"Department of Computer Science\nUniversity of Georgia\nAthensUSA",
"Department of Computer Science\nUniversity of Georgia\nAthensUSA",
"Department of Computer Science\nUniversity of Georgia\nAthensUSA",
"Department of Mathematics Institute of Bioinformatics\nUniversity of Georgia Athens\nUSA",
"Department of Mathematics Institute of Bioinformatics\nUniversity of Georgia Athens\nUSA",
"Department of Computer Science\nUniversity of Georgia\nAthensUSA",
"Southern Digital Commons@Georgia Southern Computer Science Faculty Publications Computer Science\nDepartment\nDepartment of Computer Science\nGeorgia Southern University Georgia Southern University Digital Commons@Georgia\nGeorgia Southern University\nUniversity of Georgia\nUniversity of Georgia\nUniversity of Georgia\nUniversity of Georgia\nGeorgia",
"Sothern University\nStatesboroUSA"
] | [
"International Journal of Advanced Computer Science and Applications (ijacsa)"
] | In recent years, there has been an explosion in the amount of text data from a variety of sources. This volume of text is an invaluable source of information and knowledge which needs to be effectively summarized to be useful. Text summarization is the task of shortening a text document into a condensed version keeping all the important information and content of the original document. In this review, the main approaches to automatic text summarization are described. We review the different processes for summarization and describe the effectiveness and shortcomings of the different methods. | 10.14569/ijacsa.2017.081052 | null | 304,226 | 1707.02268 | c2691eeb6efd7c3a2ba26cfa0790941fae593d9e |
Text Summarization Techniques: A Brief Survey
Copyright SAI Organization, 2017
Mehdi Allahyari [email protected]
Seyedamin Pouriyeh
Mehdi Assefi
Saeid Safaei
Elizabeth D Trippe
MehdiSeyedamin Allahyari
Mehdi Pouriyeh
Saeid Assefi
Elizabeth D Safaei
Juan B Trippe
Gutierrez
Mehdi Allahyari
Seyedamin Pouriyeh
Mehdi Assefi
Saeid Safaei
Elizabeth D Trippe
Juan B Gutierrez
Krys Kochut
Mehdi Allahyari
Seyedamin Pouriyeh
Department of Computer Science
University of Georgia
AthensUSA
Mehdi Assefi
Department of Computer Science
University of Georgia
AthensUSA
Saeid Safaei
Department of Computer Science
University of Georgia
AthensUSA
Elizabeth D Trippe
Department of Mathematics Institute of Bioinformatics
University of Georgia Athens
USA
Juan B Gutierrez
Department of Mathematics Institute of Bioinformatics
University of Georgia Athens
USA
Krys Kochut
Department of Computer Science
University of Georgia
AthensUSA
Southern Digital Commons@Georgia Southern Computer Science Faculty Publications Computer Science
Department
Department of Computer Science
Georgia Southern University Georgia Southern University Digital Commons@Georgia
Georgia Southern University
University of Georgia
University of Georgia
University of Georgia
University of Georgia
Georgia
Sothern University
StatesboroUSA
International Journal of Advanced Computer Science and Applications (IJACSA), Vol. 8, No. 10, 2017
Available at Digital Commons@Georgia Southern: https://digitalcommons.georgiasouthern.edu/compsci-facpubs/224
Keywords: text summarization, extractive summary, abstractive summary, knowledge bases, topic models
In recent years, there has been an explosion in the amount of text data from a variety of sources. This volume of text is an invaluable source of information and knowledge which needs to be effectively summarized to be useful. Text summarization is the task of shortening a text document into a condensed version keeping all the important information and content of the original document. In this review, the main approaches to automatic text summarization are described. We review the different processes for summarization and describe the effectiveness and shortcomings of the different methods.
I. INTRODUCTION
With the dramatic growth of the Internet, people are overwhelmed by the tremendous amount of online information and documents. This expanding availability of documents has demanded exhaustive research in the area of automatic text summarization. According to Radev et al. [1], a summary is defined as "a text that is produced from one or more texts, that conveys important information in the original text(s), and that is no longer than half of the original text(s) and usually, significantly less than that".
Automatic text summarization is the task of producing a concise and fluent summary while preserving key information content and overall meaning. In recent years, numerous approaches have been developed for automatic text summarization and applied widely in various domains. For example, search engines generate snippets as previews of documents [2]. Other examples include news websites, which produce condensed descriptions of news topics, usually as headlines, to facilitate browsing, and knowledge extraction approaches in different domains [3]-[6].
Automatic text summarization is very challenging, because when we as humans summarize a piece of text, we usually read it entirely to develop our understanding, and then write a summary highlighting its main points. Since computers lack human knowledge and language capability, automatic text summarization is a very difficult and non-trivial task.
Automatic text summarization attracted attention as early as the 1950s. An important early work was that of Luhn [7] on summarizing scientific documents. Luhn et al. [7] introduced a method to extract salient sentences from the text using features such as word and phrase frequency. They proposed to weight the sentences of a document as a function of high-frequency words, ignoring very high-frequency common words. Edmundson et al. [8] described a paradigm based on key phrases which, in addition to standard frequency-based weights, used the following three methods to determine the sentence weight:
1) Cue Method: The relevance of a sentence is calculated based on the presence or absence of certain cue words in the cue dictionary.
2) Title Method: The weight of a sentence is computed as the sum of all the content words appearing in the title and headings of a text.
3) Location Method: This method assumes that sentences appearing at the beginning of a document, as well as at the beginning of individual paragraphs, have a higher probability of being relevant.
A minimal sketch combining these heuristics appears below.
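As an illustration only (the cue lists, tokenizer, and combination weights below are assumptions, not Edmundson's original values), the three heuristics can be combined like this:

import re

# Illustrative cue lists; real systems use curated dictionaries.
BONUS_CUES = {"significant", "important", "conclude", "propose"}
STIGMA_CUES = {"hardly", "impossible"}

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def edmundson_score(sentence, index, n_sentences, title):
    words = set(tokenize(sentence))
    # Cue method: reward bonus cues, penalize stigma cues.
    cue = len(words & BONUS_CUES) - len(words & STIGMA_CUES)
    # Title method: count content words shared with the title and headings.
    title_overlap = len(words & set(tokenize(title)))
    # Location method: earlier sentences score higher.
    location = 1.0 - index / max(1, n_sentences)
    # Equal combination weights are an arbitrary choice here.
    return cue + title_overlap + location

def summarize(sentences, title, k=2):
    scored = sorted(((edmundson_score(s, i, len(sentences), title), i)
                     for i, s in enumerate(sentences)), reverse=True)[:k]
    return [sentences[i] for _, i in sorted(scored, key=lambda t: t[1])]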
Since then, many works have been published to address the problem of automatic text summarization (see [9], [10] for more information about more advanced techniques up to the 2000s).
In general, there are two different approaches for automatic summarization: extraction and abstraction. Extractive summarization methods work by identifying important sections of the text and generating them verbatim; thus, they depend only on extraction of sentences from the original text. In contrast, abstractive summarization methods aim at producing important material in a new way. In other words, they interpret and examine the text using advanced natural language techniques in order to generate a new, shorter text that conveys the most critical information from the original text. Even though summaries created by humans are usually not extractive, most of the summarization research today has focused on extractive summarization. Purely extractive summaries oftentimes give better results than automatic abstractive summaries [10]. This is because abstractive summarization methods cope with problems such as semantic representation, inference, and natural language generation, which are relatively harder than data-driven approaches such as sentence extraction. As a matter of fact, there is no completely abstractive summarization system today. Existing abstractive summarizers often rely on an extractive preprocessing component to produce the abstract of the text [11], [12].
Consequently, in this paper we focus on extractive summarization methods and provide an overview of some of the most dominant approaches in this category. There are a number of papers that provide extensive overviews of text summarization techniques and systems [13]- [16].
The rest of the paper is organized as follows: Section II describes the extractive summarization approaches. Topic representation methods are explained in Section III. Section IV details knowledge bases and automatic summarization. Section V explains the impact of context in the summarization task. Indicator representation approaches are described in Section VI. Finally, Section VII outlines the evaluation methods for summarization.
II. EXTRACTIVE SUMMARIZATION
As mentioned before, extractive summarization techniques produce summaries by choosing a subset of the sentences in the original text. These summaries contain the most important sentences of the input. Input can be a single document or multiple documents.
In order to better understand how summarization systems work, we describe three fairly independent tasks which all summarizers perform [15]:
1) Construct an intermediate representation of the input text which expresses the main aspects of the text.
2) Score the sentences based on the representation.
3) Select a summary comprising a number of sentences.
A. Intermediate Representation
Every summarization system creates some intermediate representation of the text it intends to summarize and finds salient content based on this representation. There are two types of approaches based on the representation: topic representation and indicator representation. Topic representation approaches transform the text into an intermediate representation and interpret the topic(s) discussed in the text.
Topic representation-based summarization techniques differ in terms of their complexity and representation model, and are divided into frequency-driven approaches, topic word approaches, latent semantic analysis and Bayesian topic models [15]. We elaborate topic representation approaches in the following sections. Indicator representation approaches describe every sentence as a list of features (indicators) of importance such as sentence length, position in the document, having certain phrases, etc.
B. Sentence Score
When the intermediate representation is generated, we assign an importance score to each sentence. In topic representation approaches, the score of a sentence represents how well the sentence explains some of the most important topics of the text. In most of the indicator representation methods, the score is computed by aggregating the evidence from different indicators. Machine learning techniques are often used to find indicator weights.
C. Summary Sentences Selection
Eventually, the summarizer system selects the top k most important sentences to produce a summary. Some approaches use greedy algorithms to select the important sentences and some approaches may convert the selection of sentences into an optimization problem where a collection of sentences is chosen, considering the constraint that it should maximize overall importance and coherency and minimize the redundancy. There are other factors that should be taken into consideration while selecting the important sentences. For example, context in which the summary is created may be helpful in deciding the importance. Type of the document (e.g. news article, email, scientific paper) is another factor which may impact selecting the sentences.
III. TOPIC REPRESENTATION APPROACHES
In this section we describe some of the most widely used topic representation approaches.
A. Topic Words
The topic words technique is one of the common topic representation approaches, which aims to identify words that describe the topic of the input document. [7] was one of the earliest works that leveraged this method, using frequency thresholds to locate the descriptive words in a document and represent its topic. A more advanced version of Luhn's idea was presented in [17], which used the log-likelihood ratio test to identify explanatory words, called the "topic signature" in the summarization literature. Utilizing topic signature words as topic representation was very effective and increased the accuracy of multi-document summarization in the news domain [18]. For more information about the log-likelihood ratio test, see [15].
There are two ways to compute the importance of a sentence: as a function of the number of topic signatures it contains, or as the proportion of the topic signatures in the sentence. Both sentence scoring functions relate to the same topic representation; however, they might assign different scores to sentences. The first method may assign higher scores to longer sentences, because they have more words. The second approach measures the density of the topic words.
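Assuming a set of topic signature words has already been identified (e.g. with the log-likelihood ratio test), both scoring functions are one-liners; the helper below is a hypothetical sketch, not code from [17]:

def topic_signature_scores(sentence_words, signatures):
    # sentence_words: list of tokens; signatures: set of topic signature words.
    n_sig = sum(1 for w in sentence_words if w in signatures)
    count_score = n_sig                                   # favors longer sentences
    density_score = n_sig / max(1, len(sentence_words))   # proportion of signature words
    return count_score, density_score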
B. Frequency-driven Approaches
When assigning weights to words in topic representations, we can use binary (0 or 1) or real-valued (continuous) weights to decide which words are more correlated to the topic. The two most common techniques in this category are word probability and TFIDF (Term Frequency Inverse Document Frequency).
1) Word Probability:
The simplest method to use frequency of words as indicators of importance is word probability. The probability of a word w is determined as the number of occurrences of the word, f(w), divided by the number of all words in the input (which can be a single document or multiple documents):
P(w) = \frac{f(w)}{N}  (1)
Vanderwende et al. [19] proposed the SumBasic system which uses only the word probability approach to determine sentence importance. For each sentence, S j , in the input, it assigns a weight equal to the average probability of the words in the sentence:
g(S_j) = \frac{\sum_{w_i \in S_j} P(w_i)}{|\{w_i \mid w_i \in S_j\}|}  (2)

where g(S_j) is the weight of sentence S_j.
In the next step, it picks the best scoring sentence that contains the highest probability word. This step ensures that the highest probability word, which represents the topic of the document at that point, is included in the summary. Then for each word in the chosen sentence, the weight is updated:
p_{new}(w_i) = p_{old}(w_i) \, p_{old}(w_i)  (3)
This weight update reflects the fact that the probability of a word appearing twice in the summary is lower than that of a word appearing once. The aforementioned selection steps repeat until the desired summary length is reached. The sentence selection approach used by SumBasic is based on a greedy strategy. Yih et al. [20] instead used an optimization approach as the sentence selection strategy, maximizing the occurrence of the important words globally over the entire summary; [21] is another example of an optimization approach.
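The whole SumBasic loop fits in a few lines; the sketch below is a simplified reading of [19] (it scores sentences by average word probability only, whereas the original also requires the chosen sentence to contain the current highest-probability word):

from collections import Counter

def sumbasic(sentences, k=3):
    words_per_sent = [s.lower().split() for s in sentences]
    all_words = [w for ws in words_per_sent for w in ws]
    p = {w: c / len(all_words) for w, c in Counter(all_words).items()}   # Eq. (1)

    def weight(i):                                                        # Eq. (2)
        return sum(p[w] for w in words_per_sent[i]) / max(1, len(words_per_sent[i]))

    chosen = []
    while len(chosen) < min(k, len(sentences)):
        best = max((i for i in range(len(sentences)) if i not in chosen), key=weight)
        chosen.append(best)
        for w in words_per_sent[best]:
            p[w] = p[w] * p[w]                                            # Eq. (3)
    return [sentences[i] for i in sorted(chosen)]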
2) TFIDF: Since word probability techniques depend on a stop word list to exclude very common words, and deciding which words to put in the stop list is not straightforward, there is a need for more advanced techniques. One of the more advanced and very typical methods of weighting words is TFIDF (Term Frequency Inverse Document Frequency). This weighting technique assesses the importance of words and identifies very common words (which should be omitted from consideration) in the document(s) by giving low weights to words appearing in most documents. The weight of each word w in document d is computed as follows:
q(w) = f_d(w) \cdot \log \frac{|D|}{f_D(w)}  (4)
where f_d(w) is the term frequency of word w in document d, f_D(w) is the number of documents that contain word w, and |D| is the number of documents in the collection D. For more information about TFIDF and other term weighting schemes, see [22]. TFIDF weights are easy and fast to compute and are also good measures for determining the importance of sentences; therefore many existing summarizers [10], [21], [23] have utilized this technique (or some form of it).
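Equation (4) translates directly into code; here is a minimal sketch assuming documents are already tokenized:

import math
from collections import Counter

def tfidf_weights(documents):
    # documents: list of token lists. Returns one {word: weight} dict per document, Eq. (4).
    n_docs = len(documents)
    doc_freq = Counter(w for doc in documents for w in set(doc))
    weights = []
    for doc in documents:
        tf = Counter(doc)
        weights.append({w: tf[w] * math.log(n_docs / doc_freq[w]) for w in tf})
    return weights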
Centroid-based summarization, another set of techniques which has become a common baseline, is based on the TFIDF topic representation. This kind of method ranks sentences by computing their salience using a set of features. A complete overview of the centroid-based approach is available in [24], but we briefly outline the basic idea here.
The first step is topic detection, in which documents that describe the same topic are clustered together. To achieve this goal, TFIDF vector representations of the documents are created and those words whose TFIDF scores are below a threshold are removed. Then, a clustering algorithm is run over the TFIDF vectors, consecutively adding documents to clusters and recomputing the centroids according to:
c_j = \frac{\sum_{d \in C_j} d}{|C_j|}  (5)
where c_j is the centroid of the jth cluster and C_j is the set of documents that belong to that cluster. Centroids can be considered as pseudo-documents that consist of those words whose TFIDF scores are higher than the threshold and that form the cluster.
The second step is using centroids to identify sentences in each cluster that are central to the topic of the entire cluster. To accomplish this goal, two metrics are defined [25]: cluster-based relative utility (CBRU) and cross-sentence informational subsumption (CSIS). CBRU decides how relevant a particular sentence is to the general topic of the entire cluster, and CSIS measures redundancy among sentences. To approximate the two metrics, three features (i.e. central value, positional value, and first-sentence overlap) are used. Next, the final score of each sentence is computed and the selection of sentences is determined. For another related work, see [26].
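A minimal sketch of the centroid idea (Eq. (5) plus a simple central-value score; the pruning threshold is illustrative, and the positional and first-sentence-overlap features of [25] are omitted):

from collections import Counter

def centroid(doc_vectors, threshold=0.1):
    # doc_vectors: list of {word: tfidf} dicts for one cluster; Eq. (5), then pruning.
    c = Counter()
    for v in doc_vectors:
        for w, x in v.items():
            c[w] += x / len(doc_vectors)
    return {w: x for w, x in c.items() if x > threshold}

def central_value(sentence_words, c):
    # Score a sentence by the total centroid weight of its words.
    return sum(c.get(w, 0.0) for w in sentence_words)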
C. Latent Semantic Analysis
Latent semantic analysis (LSA), introduced by [27], is an unsupervised method for extracting a representation of text semantics based on observed words. Gong and Liu [28] initially proposed a method using LSA to select highly ranked sentences for single- and multi-document summarization in the news domain. The LSA method first builds a term-sentence matrix (an n by m matrix), where each row corresponds to a word from the input (n words) and each column corresponds to a sentence (m sentences). Each entry a_{ij} of the matrix is the weight of word i in sentence j. The weights of the words are computed by the TFIDF technique, and if a sentence does not contain a word, the weight of that word in the sentence is zero. Then singular value decomposition (SVD) is applied, which transforms the matrix A into three matrices:
A = U \Sigma V^T
Matrix U (n × m) is a term-topic matrix containing the weights of words. Matrix \Sigma is a diagonal matrix (m × m), where each diagonal entry i corresponds to the weight of topic i. Matrix V^T is the topic-sentence matrix. The matrix D = \Sigma V^T describes how much a sentence represents a topic; thus, d_{ij} shows the weight of topic i in sentence j.
Gong and Liu's method was to choose one sentence per topic; therefore, the number of topics retained was set by the desired summary length in sentences. This strategy has a drawback due to the fact that a topic may need more than one sentence to convey its information. Consequently, alternative solutions were proposed to improve the performance of LSA-based techniques for summarization. One enhancement was to leverage the weight of each topic to decide the relative size of the summary that should cover the topic, which gives the flexibility of having a variable number of sentences. Another advancement is described in [29]. Steinberger et al. [29] introduced an LSA-based method which achieves a significantly better performance than the original work. They realized that sentences discussing some of the important topics are good candidates for summaries; thus, in order to locate those sentences, they defined the weight of a sentence as follows:
Let g be the "weight" function; then

g(s_i) = \sqrt{\sum_{j=1}^{m} d_{ij}^2}  (6)

For other variations of the LSA technique, see [30], [31].
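A minimal numpy sketch of this pipeline (term-sentence matrix, SVD, and sentence scores via Eq. (6)); raw counts are used as term weights here for brevity, whereas [28] uses TFIDF:

import numpy as np

def lsa_scores(sentences, n_topics=2):
    vocab = sorted({w for s in sentences for w in s.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    A = np.zeros((len(vocab), len(sentences)))
    for j, s in enumerate(sentences):
        for w in s.lower().split():
            A[index[w], j] += 1.0                        # term weight (TFIDF in practice)
    U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
    D = np.diag(sigma[:n_topics]) @ Vt[:n_topics, :]     # D = Sigma V^T, topic-sentence weights
    return np.sqrt((D ** 2).sum(axis=0))                 # Eq. (6), one score per sentence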
D. Bayesian Topic Models
Many of the existing multi-document summarization methods have two limitations [32]:
1) They consider the sentences as independent of each other, so topics embedded in the documents are disregarded.
2) Sentence scores computed by most existing approaches typically do not have very clear probabilistic interpretations, and many of the sentence scores are calculated using heuristics.
Bayesian topic models are probabilistic models that uncover and represent the topics of documents. They are quite powerful and appealing, because they represent information (i.e. topics) that is lost in other approaches. Their advantage in describing and representing topics in detail enables the development of summarizer systems which can determine the similarities and differences between documents to be used in summarization [33].
Apart from enhancing topic and document representation, topic models often utilize a distinct measure for scoring sentences, the Kullback-Leibler (KL) divergence. The KL divergence is a measure of the difference between two probability distributions P and Q [34]. In summarization, where we have probabilities of words, the KL divergence of Q from P over the words w is defined as:
D_{KL}(P \| Q) = \sum_{w} P(w) \log \frac{P(w)}{Q(w)}  (7)

where P(w) and Q(w) are the probabilities of w in P and Q.
KL divergence is an interesting method for scoring sentences in summarization because it reflects the intuition that good summaries are similar to the input documents. It describes how the importance of words changes in the summary compared with the input; i.e., the KL divergence between a good summary and the input will be low.
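Equation (7) is a one-line computation; the sketch below adds a small smoothing constant (an assumption, to avoid division by zero for words absent from Q):

import math

def kl_divergence(p, q, vocab, eps=1e-9):
    # p, q: {word: probability} dicts; returns D_KL(P || Q) over vocab, Eq. (7).
    return sum(p[w] * math.log((p[w] + eps) / (q.get(w, 0.0) + eps))
               for w in vocab if p.get(w, 0.0) > 0.0)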
Probabilistic topic models have gained dramatic attention in recent years in various domains [35]-[43]. The latent Dirichlet allocation (LDA) model is the state-of-the-art unsupervised technique for extracting thematic information (topics) from a collection of documents. A complete review of LDA can be found in [44], [45], but the main idea is that documents are represented as a random mixture of latent topics, where each topic is a probability distribution over words.
LDA has been extensively used for multi-document summarization recently. For example, Daume et al. [46] proposed BAYESUM, a Bayesian summarization model for query-focused summarization. Wang et al. [32] introduced a Bayesian sentence-based topic model for summarization which used both term-document and term-sentence associations; their system achieved significant performance gains and outperformed many other summarization methods. Celikyilmaz et al. [47] describe multi-document summarization as a prediction problem based on a two-phase hybrid model. First, they propose a hierarchical topic model to discover the topic structures of all sentences. Then, they compute the similarities of candidate sentences with human-provided summaries using a novel tree-based sentence scoring function. In the second step they make use of these scores and train a regression model according to the lexical and structural characteristics of the sentences, and employ the model to score sentences of new (unseen) documents to form a summary.
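As a generic illustration of applying a topic model to sentences (this is ordinary scikit-learn usage, not the specific models of [32], [46], [47]):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def sentence_topic_mixtures(sentences, n_topics=3):
    vectorizer = CountVectorizer()
    counts = vectorizer.fit_transform(sentences)          # term-sentence counts
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    theta = lda.fit_transform(counts)                     # per-sentence topic mixtures
    return theta                                          # row i ~ P(topic | sentence i)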
IV. KNOWLEDGE BASES AND AUTOMATIC SUMMARIZATION
The goal of automatic text summarization is to create summaries that are similar to human-created summaries. However, in many cases, the soundness and readability of created summaries are not satisfactory, because the summaries do not cover all the semantically relevant aspects of data in an effective way. This is because many of the existing text summarization techniques do not consider the semantics of words. A step towards building more accurate summarization systems is to combine summarization techniques with knowledge bases (semantic-based or ontology-based summarizers).
The advent of human-generated knowledge bases and various ontologies in many different domains (e.g. Wikipedia, YAGO, DBpedia, etc.) has opened further possibilities in text summarization and has attracted increasing attention recently. For example, Hennig et al. [48] present an approach to sentence extraction that maps sentences to concepts of an ontology. By considering the ontology features, they can improve the semantic representation of sentences, which is beneficial in the selection of sentences for summaries. They experimentally showed that ontology-based extraction of sentences outperforms baseline summarizers. Chen et al. [49] introduce a user query-based text summarizer that uses the UMLS medical ontology to make a summary for medical text. Baralis et al. [50] propose a Yago-based summarizer that leverages the YAGO ontology [51] to identify key concepts in the documents. The concepts are evaluated and then used to select the most representative document sentences. Sankarasubramaniam et al. [52] introduce an approach that employs Wikipedia in conjunction with a graph-based ranking technique. First, they create a bipartite sentence-concept graph, and then use an iterative ranking algorithm for selecting summary sentences.
V. IMPACT OF CONTEXT IN SUMMARIZATION
Summarization systems often have additional evidence they can utilize in order to specify the most important topics of document(s). For example, when summarizing blogs, there are discussions or comments coming after the blog post that are good sources of information to determine which parts of the blog are critical and interesting. In scientific paper summarization, there is a considerable amount of information, such as cited papers and conference information, which can be leveraged to identify important sentences in the original paper. In the following, we describe some of these contexts in more detail.
A. Web Summarization
Web pages contain many elements that cannot be summarized, such as pictures. The textual information they have is often scarce, which limits the applicability of text summarization techniques. Nonetheless, we can consider the context of a web page, i.e. pieces of information extracted from the content of all the pages linking to it, as additional material to improve summarization. The earliest research in this regard is [53], where the authors query web search engines and fetch the pages having links to the specified web page. Then they analyze the candidate pages and heuristically select the best sentences containing links to the web page. Delort et al. [54] extended and improved this approach by using an algorithm that tries to select a sentence about the same topic that covers as many aspects of the web page as possible.
For blog summarization, [55] proposed a method that first derives representative words from comments and then selects important sentences from the blog post containing representative words. For more related works, see [56]- [58].
B. Scientific Articles Summarization
A useful source of information when summarizing a scientific paper (i.e. citation-based summarization) is to find other papers that cite the target paper and extract the sentences in which the citations occur, in order to identify the important aspects of the target paper. Mei et al. [59] propose a language model that gives a probability to each word in the citation context sentences. They then score the importance of sentences in the original paper using the KL divergence method (i.e. finding the similarity between a sentence and the language model). For more information, see [60], [61].
C. Email Summarization
Email has some distinct characteristics that indicate aspects of both spoken conversation and written text. For example, summarization techniques must consider the interactive nature of the dialog, as in spoken conversations. Nenkova et al. [62] presented early research in this regard, proposing a method to generate a summary for the first two levels of the thread discussion. A thread consists of one or more conversations between two or more participants over time. They select a message from the root message and from each response to the root, considering the overlap with the root context. Rambow et al. [63] used a machine learning technique and included features related to the thread as well as features of the email structure, such as the position of the sentence in the thread, number of recipients, etc. Newman et al. [64] describe a system to summarize a full mailbox rather than a single thread, by clustering messages into topical groups and then extracting summaries for each cluster.
VI. INDICATOR REPRESENTATION APPROACHES
Indicator representation approaches aim to model the representation of the text based on a set of features and use them to directly rank the sentences rather than representing the topics of the input text. Graph-based methods and machine learning techniques are often employed to determine the important sentences to be included in the summary.
A. Graph Methods for Summarization
Graph methods, which are influenced by the PageRank algorithm [65], represent the documents as a connected graph. Sentences form the vertices of the graph, and edges between the sentences indicate how similar the two sentences are. A common technique employed to connect two vertices is to measure the similarity of two sentences: if it is greater than a threshold, they are connected. The most often used similarity measure is cosine similarity with TFIDF weights for words. This graph representation results in two outcomes. First, the partitions (sub-graphs) included in the graph create discrete topics covered in the documents. The second outcome is the identification of the important sentences in the document. Sentences that are connected to many other sentences in a partition are possibly the center of the graph and more likely to be included in the summary.
Graph-based methods can be used for single- as well as multi-document summarization [10]. Since they do not need language-specific linguistic processing other than sentence and word boundary detection, they can also be applied to various languages [66]. Nonetheless, the TFIDF weighting scheme for similarity has limitations, because it only preserves the frequency of words and does not take syntactic and semantic information into account. Thus, similarity measures based on syntactic and semantic information enhance the performance of the summarization system [67]. For more graph-based approaches, see [15].
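A minimal sketch of the graph-based idea (cosine similarity over TFIDF sentence vectors, thresholded edges, and power-iteration PageRank; this is a simplification of LexRank [10]):

import numpy as np

def lexrank_scores(tfidf_matrix, threshold=0.1, damping=0.85, iters=50):
    # tfidf_matrix: (n_sentences, n_terms) numpy array of sentence vectors.
    norms = np.linalg.norm(tfidf_matrix, axis=1, keepdims=True)
    normed = tfidf_matrix / np.maximum(norms, 1e-12)
    sim = normed @ normed.T                           # pairwise cosine similarities
    adj = (sim > threshold).astype(float)             # thresholded edges
    np.fill_diagonal(adj, 0.0)
    n = len(adj)
    row_sums = adj.sum(axis=1, keepdims=True)
    P = np.divide(adj, row_sums, out=np.full_like(adj, 1.0 / n), where=row_sums > 0)
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):                            # PageRank power iteration
        scores = (1 - damping) / n + damping * (P.T @ scores)
    return scores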
B. Machine Learning for Summarization
Machine learning approaches model summarization as a classification problem. [68] is an early attempt at applying machine learning techniques to summarization. Kupiec et al. develop a classification function, a naive Bayes classifier, to classify sentences as summary sentences or non-summary sentences based on the features they have, given a training set of documents and their extractive summaries. The classification probabilities are learned statistically from the training data using Bayes' rule:

P(s \in S \mid F_1, F_2, \dots, F_k) = \frac{P(F_1, F_2, \dots, F_k \mid s \in S) \, P(s \in S)}{P(F_1, F_2, \dots, F_k)}  (8)

where s is a sentence from the document collection, F_1, F_2, \dots, F_k are the features used in classification, and S is the summary to be generated. Assuming conditional independence between the features:

P(s \in S \mid F_1, F_2, \dots, F_k) = \frac{\prod_{i=1}^{k} P(F_i \mid s \in S) \, P(s \in S)}{\prod_{i=1}^{k} P(F_i)}  (9)

The probability that a sentence belongs to the summary is the score of the sentence, so the selected classifier plays the role of a sentence scoring function. Some of the frequent features used in summarization include the position of sentences in the document, sentence length, presence of uppercase words, similarity of the sentence to the document title, etc. Machine learning approaches have been widely used in summarization by [69]-[71], to name a few.
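Equations (8)-(9) amount to a tiny naive Bayes scorer over binary features; the sketch below adds Laplace smoothing (an assumption for robustness) and leaves the features themselves as placeholders:

def train_naive_bayes(feature_rows, labels):
    # feature_rows: list of binary feature tuples; labels[i] = 1 if sentence i is in the summary.
    n, k = len(labels), len(feature_rows[0])
    p_s = sum(labels) / n                                        # P(s in S)
    p_f_given_s, p_f = [], []
    for i in range(k):
        on_and_s = sum(1 for row, y in zip(feature_rows, labels) if row[i] and y)
        p_f_given_s.append((on_and_s + 1) / (sum(labels) + 2))   # Laplace smoothing
        p_f.append((sum(row[i] for row in feature_rows) + 1) / (n + 2))
    return p_s, p_f_given_s, p_f

def nb_score(row, params):
    p_s, p_f_given_s, p_f = params
    num, den = p_s, 1.0
    for fi, pfs, pf in zip(row, p_f_given_s, p_f):
        num *= pfs if fi else (1 - pfs)                          # prod_i P(F_i | s in S)
        den *= pf if fi else (1 - pf)                            # prod_i P(F_i)
    return num / den                                             # Eq. (9)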
Naive Bayes, decision trees, support vector machines, hidden Markov models, and conditional random fields are among the most common machine learning techniques used for summarization. One fundamental difference between classifiers is whether the sentences to be included in the summary are decided independently of one another. It turns out that methods explicitly modeling the dependency between sentences, such as hidden Markov models [72] and conditional random fields [73], often outperform other techniques.
One of the primary issues in utilizing supervised learning methods for summarization is that they need a set of training documents (labeled data) to train the classifier, which may not always be easily available. Researchers have proposed some alternatives to cope with this issue:
• Annotated corpora creation: Creating an annotated corpus for summarization greatly benefits researchers, because more public benchmarks become available, which makes it easier to compare different summarization approaches. It also lowers the risk of overfitting with limited data. Ulrich et al. [74] introduce a publicly available annotated email corpus and its creation process. However, creating an annotated corpus is very time consuming and, more critically, there is no standard agreement on choosing the sentences; different people may select varied sentences to construct the summary.
• Semi-supervised approaches: Using a semi-supervised technique to train a classifier. In semi-supervised learning we utilize the unlabeled data in training; there is usually a small amount of labeled data along with a large amount of unlabeled data. For a complete overview of semi-supervised learning, see [75]. Wong et al. [70] proposed a semi-supervised method for extractive summarization. They co-trained two classifiers iteratively to exploit unlabeled data. In each iteration, the unlabeled training examples (sentences) with top scores are included in the labeled training set, and the two classifiers are trained on the new training data.
Machine learning methods have been shown to be very effective and successful in single- and multi-document summarization, specifically in class-specific summarization, where classifiers are trained to locate particular types of information, such as scientific paper summarization [61], [76], [77] and biographical summaries [78]-[80].
VII. EVALUATION
Evaluation of a summary is a difficult task because there is no ideal summary for a document or a collection of documents, and the definition of a good summary is, to a large extent, an open question [16]. It has been found that human summarizers have low agreement when evaluating and producing summaries. Additionally, the prevalent use of various metrics and the lack of a standard evaluation metric have also made summary evaluation difficult and challenging.
A. Evaluation of Automatically Produced Summaries
There have been several evaluation campaigns since the late 1990s in the US [16]. They include SUMMAC (1996-1998) [81], DUC (the Document Understanding Conference, 2000-2007) [82], and more recently TAC (the Text Analysis Conference, 2008-present; http://www.nist.gov/tac/about/index.html). These conferences have a primary role in the design of evaluation standards and evaluate the summaries based on human as well as automatic scoring.
In order to be able to do automatic summary evaluation, we need to overcome three major difficulties:
i) It is fundamental to decide and specify the most important parts of the original text to preserve.
ii) Evaluators have to automatically identify these pieces of important information in the candidate summary, since this information can be represented using disparate expressions.
iii) The readability of the summary in terms of grammar and coherence has to be evaluated.
B. Human Evaluation
The simplest way to evaluate a summary is to have a human assess its quality. For example, in DUC, the judges would evaluate the coverage of the summary, i.e. how much of the given input the candidate summary covered. In more recent paradigms, in particular TAC, query-based summaries have been created; judges then evaluate to what extent a summary answers the given query. The factors that human experts must consider when scoring each candidate summary are grammaticality, non-redundancy, integration of the most important pieces of information, structure, and coherence. For more information, see [16].
C. Automatic Evaluation Methods
A number of metrics for automatically evaluating summaries have been in use since the early 2000s. ROUGE is the most widely used metric for automatic evaluation.
1) ROUGE: Lin [83] introduced a set of metrics called Recall-Oriented Understudy for Gisting Evaluation (ROUGE) to automatically determine the quality of a summary by comparing it to human (reference) summaries. There are several variations of ROUGE (see [83]), and here we just mention the most broadly used ones:
• ROUGE-n: This metric is a recall-based measure based on the comparison of n-grams. A series of n-grams (mostly bigrams and trigrams, rarely 4-grams) is extracted from the reference summaries and the candidate summary (the automatically generated summary). Let p be the number of common n-grams between the candidate and reference summaries, and q be the number of n-grams extracted from the reference summary only. The score is computed as

ROUGE\text{-}n = \frac{p}{q}  (10)

(a minimal sketch of this computation appears after the list).
• ROUGE-L: This measure employs the concept of the longest common subsequence (LCS) between the two sequences of text. The intuition is that the longer the LCS between two summary sentences, the more similar they are. Although this metric is more flexible than the previous one, it has a drawback in that all n-grams must be consecutive. For more information about this metric and its refined variant, see [83].
• ROUGE-SU: This metric, called skip bi-gram and uni-gram ROUGE, considers bi-grams as well as uni-grams. It allows the insertion of words between the first and the last words of the bi-grams, so they do not need to be consecutive sequences of words.
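The sketch below computes ROUGE-n from Eq. (10) for a single reference with simple whitespace tokenization (real ROUGE implementations handle multiple references, stemming, and stopword options; clipping the candidate counts is the standard convention):

from collections import Counter

def rouge_n(candidate, reference, n=2):
    def ngrams(text, n):
        tokens = text.lower().split()
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    cand = Counter(ngrams(candidate, n))
    ref = Counter(ngrams(reference, n))
    p = sum(min(c, ref[g]) for g, c in cand.items())   # common n-grams (clipped counts)
    q = sum(ref.values())                              # n-grams in the reference
    return p / q if q else 0.0                         # Eq. (10)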
VIII. CONCLUSIONS
The increasing growth of the Internet has made a huge amount of information available, and it is difficult for humans to summarize large amounts of text. Thus, there is an immense need for automatic summarization tools in this age of information overload. In this paper, we emphasized various extractive approaches for single- and multi-document summarization. We described some of the most extensively used methods, such as topic representation approaches, frequency-driven methods, graph-based techniques, and machine learning techniques. Although it is not feasible to explain all the diverse algorithms and approaches comprehensively in this paper, we think it provides a good insight into recent trends and progress in automatic summarization methods and describes the state of the art in this research area.
REFERENCES
[1] D. R. Radev, E. Hovy, and K. McKeown, "Introduction to the special issue on summarization," Computational Linguistics, vol. 28, no. 4, pp. 399-408, 2002.
[2] A. Turpin, Y. Tsegay, D. Hawking, and H. E. Williams, "Fast generation of result snippets in web search," in Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 2007, pp. 127-134.
[3] M. Allahyari, S. Pouriyeh, M. Assefi, S. Safaei, E. D. Trippe, J. B. Gutierrez, and K. Kochut, "A brief survey of text mining: Classification, clustering and extraction techniques," ArXiv e-prints, 2017.
[4] G. K. Savova, J. J. Masanz, P. V. Ogren, J. Zheng, S. Sohn, K. C. Kipper-Schuler, and C. G. Chute, "Mayo clinical text analysis and knowledge extraction system (cTAKES): architecture, component evaluation and applications," Journal of the American Medical Informatics Association, vol. 17, no. 5, pp. 507-513, 2010.
[5] E. D. Trippe, J. B. Aguilar, Y. H. Yan, M. V. Nural, J. A. Brady, M. Assefi, S. Safaei, M. Allahyari, S. Pouriyeh, M. R. Galinski, J. C. Kissinger, and J. B. Gutierrez, "A vision for health informatics: Introducing the SKED framework. An extensible architecture for scientific knowledge extraction from data," ArXiv e-prints, 2017.
[6] S. Pouriyeh, S. Vahid, G. Sannino, G. De Pietro, H. Arabnia, and J. Gutierrez, "A comprehensive investigation and comparison of machine learning techniques in the domain of heart disease," in Computers and Communications (ISCC), 2017 IEEE Symposium on. IEEE, 2017, pp. 204-207.
[7] H. P. Luhn, "The automatic creation of literature abstracts," IBM Journal of Research and Development, vol. 2, no. 2, pp. 159-165, 1958.
[8] H. P. Edmundson, "New methods in automatic extracting," Journal of the ACM (JACM), vol. 16, no. 2, pp. 264-285, 1969.
[9] V. Gupta and G. S. Lehal, "A survey of text summarization extractive techniques," Journal of Emerging Technologies in Web Intelligence, vol. 2, no. 3, pp. 258-268, 2010.
[10] G. Erkan and D. R. Radev, "LexRank: Graph-based lexical centrality as salience in text summarization," J. Artif. Intell. Res. (JAIR), vol. 22, no. 1, pp. 457-479, 2004.
[11] K. Knight and D. Marcu, "Statistics-based summarization - step one: Sentence compression," in AAAI/IAAI, 2000, pp. 703-710.
[12] T. Berg-Kirkpatrick, D. Gillick, and D. Klein, "Jointly learning to extract and compress," in Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Volume 1. Association for Computational Linguistics, 2011, pp. 481-490.
[13] K. Spärck Jones, "Automatic summarising: The state of the art," Information Processing & Management, vol. 43, no. 6, pp. 1449-1481, 2007.
[14] E. Lloret and M. Palomar, "Text summarisation in progress: a literature review," Artificial Intelligence Review, vol. 37, no. 1, pp. 1-41, 2012.
[15] A. Nenkova and K. McKeown, "A survey of text summarization techniques," in Mining Text Data. Springer, 2012, pp. 43-76.
[16] H. Saggion and T. Poibeau, "Automatic text summarization: Past, present and future," in Multi-source, Multilingual Information Extraction and Summarization. Springer, 2013, pp. 3-21.
[17] T. Dunning, "Accurate methods for the statistics of surprise and coincidence," Computational Linguistics, vol. 19, no. 1, pp. 61-74, 1993.
[18] S. Harabagiu and F. Lacatusu, "Topic themes for multi-document summarization," in Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 2005, pp. 202-209.
[19] L. Vanderwende, H. Suzuki, C. Brockett, and A. Nenkova, "Beyond SumBasic: Task-focused summarization with sentence simplification and lexical expansion," Information Processing & Management, vol. 43, no. 6, pp. 1606-1618, 2007.
[20] W.-t. Yih, J. Goodman, L. Vanderwende, and H. Suzuki, "Multi-document summarization by maximizing informative content-words," in IJCAI, 2007.
[21] R. M. Alguliev, R. M. Aliguliyev, M. S. Hajirahimova, and C. A. Mehdiyev, "MCMR: Maximum coverage and minimum redundant text summarization model," Expert Systems with Applications, vol. 38, no. 12, pp. 14514-14522, 2011.
[22] G. Salton and C. Buckley, "Term-weighting approaches in automatic text retrieval," Information Processing & Management, vol. 24, no. 5, pp. 513-523, 1988.
[23] R. M. Alguliev, R. M. Aliguliyev, and N. R. Isazade, "Multiple documents summarization based on evolutionary optimization algorithm," Expert Systems with Applications, vol. 40, no. 5, pp. 1675-1689, 2013.
[24] D. R. Radev, H. Jing, M. Styś, and D. Tam, "Centroid-based summarization of multiple documents," Information Processing & Management, vol. 40, no. 6, pp. 919-938, 2004.
[25] D. R. Radev, H. Jing, and M. Budzikowska, "Centroid-based summarization of multiple documents: sentence extraction, utility-based evaluation, and user studies," in Proceedings of the 2000 NAACL-ANLP Workshop on Automatic Summarization. Association for Computational Linguistics, 2000, pp. 21-30.
[26] X. Wan and J. Yang, "Multi-document summarization using cluster-based link analysis," in Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 2008, pp. 299-306.
[27] S. C. Deerwester, S. T. Dumais, T. K. Landauer, G. W. Furnas, and R. A. Harshman, "Indexing by latent semantic analysis," JASIS, vol. 41, no. 6, pp. 391-407, 1990.
[28] Y. Gong and X. Liu, "Generic text summarization using relevance measure and latent semantic analysis," in Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 2001, pp. 19-25.
[29] J. Steinberger, M. Poesio, M. A. Kabadjov, and K. Ježek, "Two uses of anaphora resolution in summarization," Information Processing & Management, vol. 43, no. 6, pp. 1663-1680, 2007.
[30] B. Hachey, G. Murray, and D. Reitter, "Dimensionality reduction aids term co-occurrence based multi-document summarization," in Proceedings of the Workshop on Task-Focused Summarization and Question Answering. Association for Computational Linguistics, 2006, pp. 1-7.
[31] M. G. Ozsoy, I. Cicekli, and F. N. Alpaslan, "Text summarization of Turkish texts using latent semantic analysis," in Proceedings of the 23rd International Conference on Computational Linguistics. Association for Computational Linguistics, 2010, pp. 869-876.
[32] D. Wang, S. Zhu, T. Li, and Y. Gong, "Multi-document summarization using sentence-based topic models," in Proceedings of the ACL-IJCNLP 2009 Conference Short Papers. Association for Computational Linguistics, 2009, pp. 297-300.
[33] I. Mani and E. Bloedorn, "Summarizing similarities and differences among related documents," Information Retrieval, vol. 1, no. 1-2, pp. 35-67, 1999.
[34] S. Kullback and R. A. Leibler, "On information and sufficiency," The Annals of Mathematical Statistics, pp. 79-86, 1951.
[35] L. Na, L. Ming-xia, L. Ying, T. Xiao-jun, W. Hai-wen, and X. Peng, "Mixture of topic model for multi-document summarization," in Control and Decision Conference (2014 CCDC), The 26th Chinese. IEEE, 2014, pp. 5168-5172.
[36] F. C. T. Chua and S. Asur, "Automatic summarization of events from social media," in ICWSM, 2013.
[37] Z. Ren, S. Liang, E. Meij, and M. de Rijke, "Personalized time-aware tweets summarization," in Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 2013, pp. 513-522.
[38] J. Hannon, K. McCarthy, J. Lynch, and B. Smyth, "Personalized and automatic social summarization of events in video," in Proceedings of the 16th International Conference on Intelligent User Interfaces. ACM, 2011, pp. 335-338.
[39] M. Allahyari and K. Kochut, "Automatic topic labeling using ontology-based topic models," in Machine Learning and Applications (ICMLA), 2015 IEEE 14th International Conference on. IEEE, 2015, pp. 259-264.
[40] M. Allahyari, S. Pouriyeh, K. Kochut, and H. R. Arabnia, "A knowledge-based topic modeling approach for automatic topic labeling," International Journal of Advanced Computer Science and Applications, vol. 8, no. 9, pp. 335-349, 2017.
[41] M. Allahyari and K. Kochut, "Semantic tagging using topic models exploiting Wikipedia category network," in Semantic Computing (ICSC), 2016 IEEE Tenth International Conference on. IEEE, 2016, pp. 63-70.
[42] M. Allahyari and K. Kochut, "Semantic context-aware recommendation via topic models leveraging linked open data," in International Conference on Web Information Systems Engineering. Springer, 2016, pp. 263-277.
[43] M. Allahyari and K. Kochut, "Discovering coherent topics with entity topic models," in Web Intelligence (WI), 2016 IEEE/WIC/ACM International Conference on. IEEE, 2016, pp. 26-33.
[44] D. M. Blei, A. Y. Ng, and M. I. Jordan, "Latent Dirichlet allocation," The Journal of Machine Learning Research, vol. 3, pp. 993-1022, 2003.
[45] M. Steyvers and T. Griffiths, "Probabilistic topic models," Handbook of Latent Semantic Analysis, vol. 427, no. 7, pp. 424-440, 2007.
[46] H. Daumé III and D. Marcu, "Bayesian query-focused summarization," in Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2006, pp. 305-312.
[47] A. Celikyilmaz and D. Hakkani-Tur, "A hybrid hierarchical model for multi-document summarization," in Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 2010, pp. 815-824.
[48] L. Hennig, W. Umbrath, and R. Wetzker, "An ontology-based approach to text summarization," in Web Intelligence and Intelligent Agent Technology, 2008. WI-IAT'08. IEEE/WIC/ACM International Conference on, vol. 3. IEEE, 2008, pp. 291-294.
[49] P. Chen and R. Verma, "A query-based medical information summarization system using ontology knowledge," in Computer-Based Medical Systems, 2006. CBMS 2006. 19th IEEE International Symposium on. IEEE, 2006, pp. 37-42.
[50] E. Baralis, L. Cagliero, S. Jabeen, A. Fiori, and S. Shah, "Multi-document summarization based on the Yago ontology," Expert Systems with Applications, vol. 40, no. 17, pp. 6976-6984, 2013.
[51] F. M. Suchanek, G. Kasneci, and G. Weikum, "Yago: a core of semantic knowledge," in Proceedings of the 16th International Conference on World Wide Web. ACM, 2007, pp. 697-706.
[52] Y. Sankarasubramaniam, K. Ramanathan, and S. Ghosh, "Text summarization using Wikipedia," Information Processing & Management, vol. 50, no. 3, pp. 443-461, 2014.
[53] E. Amitay and C. Paris, "Automatically summarising web sites: is there a way around it?" in Proceedings of the Ninth International Conference on Information and Knowledge Management. ACM, 2000, pp. 173-179.
[54] J.-Y. Delort, B. Bouchon-Meunier, and M. Rifqi, "Enhanced web document summarization using hyperlinks," in Proceedings of the Fourteenth ACM Conference on Hypertext and Hypermedia. ACM, 2003, pp. 208-215.
[55] M. Hu, A. Sun, and E.-P. Lim, "Comments-oriented blog summarization by sentence extraction," in Proceedings of the Sixteenth ACM Conference on Information and Knowledge Management. ACM, 2007, pp. 901-904.
[56] B. P. Sharifi, D. I. Inouye, and J. K. Kalita, "Summarization of Twitter microblogs," The Computer Journal, p. bxt109, 2013.
[57] B. Sharifi, M.-A. Hutton, and J. Kalita, "Summarizing microblogs automatically," in Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Association for Computational Linguistics, 2010, pp. 685-688.
Comments-oriented document summarization: understanding documents with readers' feedback. M Hu, A Sun, E.-P Lim, Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval. the 31st annual international ACM SIGIR conference on Research and development in information retrievalACMM. Hu, A. Sun, and E.-P. Lim, "Comments-oriented document sum- marization: understanding documents with readers' feedback," in Pro- ceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval. ACM, 2008, pp. 291-298.
Generating impact-based summaries for scientific literature. Q Mei, C Zhai, ACL. Citeseer8Q. Mei and C. Zhai, "Generating impact-based summaries for scientific literature." in ACL, vol. 8. Citeseer, 2008, pp. 816-824.
Coherent citation-based summarization of scientific papers. A Abu-Jbara, D Radev, Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. the 49th Annual Meeting of the Association for Computational Linguistics: Human Language TechnologiesAssociation for Computational Linguistics1A. Abu-Jbara and D. Radev, "Coherent citation-based summarization of scientific papers," in Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1. Association for Computational Linguistics, 2011, pp. 500-509.
Scientific paper summarization using citation summary networks. V Qazvinian, D R Radev, Proceedings of the 22nd International Conference on Computational Linguistics. the 22nd International Conference on Computational LinguisticsAssociation for Computational Linguistics1V. Qazvinian and D. R. Radev, "Scientific paper summarization using citation summary networks," in Proceedings of the 22nd International Conference on Computational Linguistics-Volume 1. Association for Computational Linguistics, 2008, pp. 689-696.
Recent advances in natural language processing III: selected papers from RANLP. A Nenkova, A Bagga, 287Facilitating email thread access by extractive summary generationA. Nenkova and A. Bagga, "Facilitating email thread access by ex- tractive summary generation," Recent advances in natural language processing III: selected papers from RANLP, vol. 2003, p. 287, 2004.
Summarizing email threads. O Rambow, L Shrestha, J Chen, C Lauridsen, Proceedings of HLT-NAACL 2004: Short Papers. HLT-NAACL 2004: Short PapersAssociation for Computational LinguisticsO. Rambow, L. Shrestha, J. Chen, and C. Lauridsen, "Summarizing email threads," in Proceedings of HLT-NAACL 2004: Short Papers. Association for Computational Linguistics, 2004, pp. 105-108.
Summarizing archived discussions: a beginning. P S Newman, J C Blitzer, Proceedings of the 8th international conference on Intelligent user interfaces. the 8th international conference on Intelligent user interfacesACMP. S. Newman and J. C. Blitzer, "Summarizing archived discussions: a beginning," in Proceedings of the 8th international conference on Intelligent user interfaces. ACM, 2003, pp. 273-276.
Textrank: Bringing order into texts. R Mihalcea, P Tarau, Association for Computational LinguisticsR. Mihalcea and P. Tarau, "Textrank: Bringing order into texts." Association for Computational Linguistics, 2004.
A language independent algorithm for single and multiple document summarization. --, "A language independent algorithm for single and multiple document summarization," 2005.
Improving the performance of the random walk model for answering complex questions. Y Chali, S R Joty, Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers. the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short PapersAssociation for Computational LinguisticsY. Chali and S. R. Joty, "Improving the performance of the random walk model for answering complex questions," in Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers. Association for Computational Linguistics, 2008, pp. 9-12.
A trainable document summarizer. J Kupiec, J Pedersen, F Chen, Proceedings of the 18th annual international ACM SIGIR conference on Research and development in information retrieval. the 18th annual international ACM SIGIR conference on Research and development in information retrievalACMJ. Kupiec, J. Pedersen, and F. Chen, "A trainable document summarizer," in Proceedings of the 18th annual international ACM SIGIR conference on Research and development in information retrieval. ACM, 1995, pp. 68-73.
A web-trained extraction summarization system. L Zhou, E Hovy, Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology. the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language TechnologyAssociation for Computational Linguistics1L. Zhou and E. Hovy, "A web-trained extraction summarization system," in Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1. Association for Computational Linguistics, 2003, pp. 205-211.
Extractive summarization using supervised and semi-supervised learning. K.-F Wong, M Wu, W Li, Proceedings of the 22nd International Conference on Computational Linguistics. the 22nd International Conference on Computational Linguistics1Association for Computational LinguisticsK.-F. Wong, M. Wu, and W. Li, "Extractive summarization using supervised and semi-supervised learning," in Proceedings of the 22nd International Conference on Computational Linguistics-Volume 1. As- sociation for Computational Linguistics, 2008, pp. 985-992.
Applying regression models to query-focused multi-document summarization. Y Ouyang, W Li, S Li, Q Lu, Information Processing & Management. 472Y. Ouyang, W. Li, S. Li, and Q. Lu, "Applying regression models to query-focused multi-document summarization," Information Processing & Management, vol. 47, no. 2, pp. 227-237, 2011.
Text summarization via hidden markov models. J M Conroy, D P O'leary, Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval. the 24th annual international ACM SIGIR conference on Research and development in information retrievalACMJ. M. Conroy and D. P. O'leary, "Text summarization via hidden markov models," in Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval. ACM, 2001, pp. 406-407.
Document summarization using conditional random fields. D Shen, J.-T Sun, H Li, Q Yang, Z Chen, IJCAI. 7D. Shen, J.-T. Sun, H. Li, Q. Yang, and Z. Chen, "Document summa- rization using conditional random fields." in IJCAI, vol. 7, 2007, pp. 2862-2867.
A publicly available annotated corpus for supervised email summarization. J Ulrich, G Murray, G Carenini, Proc. of aaai email-2008 workshop. of aaai email-2008 workshopchicago, usaJ. Ulrich, G. Murray, and G. Carenini, "A publicly available annotated corpus for supervised email summarization," in Proc. of aaai email- 2008 workshop, chicago, usa, 2008.
Semi-supervised learning. O Chapelle, B Schölkopf, A Zien, MIT press Cambridge2O. Chapelle, B. Schölkopf, A. Zien et al., Semi-supervised learning. MIT press Cambridge, 2006, vol. 2.
Summarizing scientific articles: experiments with relevance and rhetorical status. S Teufel, M Moens, Computational linguistics. 284S. Teufel and M. Moens, "Summarizing scientific articles: experiments with relevance and rhetorical status," Computational linguistics, vol. 28, no. 4, pp. 409-445, 2002.
Generating extractive summaries of scientific paradigms. V Qazvinian, D R Radev, S M Mohammad, B Dorr, D Zajic, M Whidby, T Moon, arXiv:1402.0556arXiv preprintV. Qazvinian, D. R. Radev, S. M. Mohammad, B. Dorr, D. Zajic, M. Whidby, and T. Moon, "Generating extractive summaries of sci- entific paradigms," arXiv preprint arXiv:1402.0556, 2014.
Extracting biographical sentences from textual documents. S Soares, B Martins, P Calado, Proceedings of the 15th Portuguese Conference on Artificial Intelligence. the 15th Portuguese Conference on Artificial IntelligenceLisbon, PortugalS. Soares, B. Martins, and P. Calado, "Extracting biographical sentences from textual documents," in Proceedings of the 15th Portuguese Con- ference on Artificial Intelligence (EPIA 2011), Lisbon, Portugal, 2011, pp. 718-30.
Multi-document biography summarization. L Zhou, M Ticrea, E Hovy, cs/0501078arXiv preprintL. Zhou, M. Ticrea, and E. Hovy, "Multi-document biography summa- rization," arXiv preprint cs/0501078, 2005.
Producing biographical summaries: Combining linguistic knowledge with corpus statistics. B Schiffman, I Mani, K J Concepcion, Proceedings of the 39th Annual Meeting on Association for Computational Linguistics. the 39th Annual Meeting on Association for Computational LinguisticsAssociation for Computational LinguisticsB. Schiffman, I. Mani, and K. J. Concepcion, "Producing biographical summaries: Combining linguistic knowledge with corpus statistics," in Proceedings of the 39th Annual Meeting on Association for Computa- tional Linguistics. Association for Computational Linguistics, 2001, pp. 458-465.
Summac: a text summarization evaluation. I Mani, G Klein, D House, L Hirschman, T Firmin, B Sundheim, Natural Language Engineering. 801I. Mani, G. Klein, D. House, L. Hirschman, T. Firmin, and B. Sund- heim, "Summac: a text summarization evaluation," Natural Language Engineering, vol. 8, no. 01, pp. 43-68, 2002.
Duc in context. P Over, H Dang, D Harman, Inf. Process. Manage. 436P. Over, H. Dang, and D. Harman, "Duc in context," Inf. Process. Manage., vol. 43, no. 6, pp. 1506-1520, Nov. 2007. [Online].
. 10.1016/j.ipm.2007.01.019Available: http://dx.doi.org/10.1016/j.ipm.2007.01.019
Rouge: A package for automatic evaluation of summaries. C.-Y. Lin, Text Summarization Branches Out: Proceedings of the ACL-04. C.-Y. Lin, "Rouge: A package for automatic evaluation of summaries," in Text Summarization Branches Out: Proceedings of the ACL-04
. Workshop, Workshop, 2004, pp. 74-81.
Es-lda: Entity summarization using knowledge-based topic modeling. S Pouriyeh, M Allahyari, K Kochut, G Cheng, H R Arabnia, International Joint Conference on Natural Language Processing. S. Pouriyeh, M. Allahyari, K. Kochut, G. Cheng, and H. R. Arabnia, "Es-lda: Entity summarization using knowledge-based topic modeling," in International Joint Conference on Natural Language Processing (IJCNLP), 2017.
Constraints of the formation and abundances of methyl carbamate, a glycine isomer, in hot corinos

Dipen Sahu and Sheng-Yuan Liu
Academia Sinica Institute of Astronomy and Astrophysics, 11F of AS/NTU Astronomy-Mathematics Building, No. 1, Sec. 4, Roosevelt Rd., Taipei 10617, Taiwan, R.O.C.

Ankan Das and Prasanta Garai
Indian Centre for Space Physics, 43 Chalantika, Garia St. Road, Kolkata 700084, India

Valentine Wakelam
Laboratoire d'astrophysique de Bordeaux, Univ. Bordeaux, CNRS, B18N, allée Geoffroy Saint-Hilaire, 33615 Pessac, France

June 30, 2020 (Received Oct 2018; Revised Jan 2019; Accepted June 30, 2020); Submitted to ApJ. DOI: 10.3847/1538-4357/aba0a5; arXiv: 2006.15629.

Keywords: ISM: abundances - ISM: individual objects (NGC 1333 IRAS 4A & IRAS 16293-2422 B) - ISM: molecules - stars: formation - astrochemistry

ABSTRACT
Methyl carbamate (CH3OC(O)NH2) is an isomer of glycine. Quantum chemical analyses show that methyl carbamate is a more stable isomer than glycine; because of this, there could be a higher chance for methyl carbamate to exist in the interstellar medium (ISM) than for glycine. Despite immense searches, glycine has not been detected in the ISM to date, and it is therefore worthwhile to search for its isomer methyl carbamate. In this paper, we present the constraints on methyl carbamate formation under interstellar conditions. Large complex organic molecules are favorably produced in the hot-corino environments of low-mass protostars. We carried out, for the first time, astrochemical modeling focusing on the formation of methyl carbamate under physical conditions similar to those of hot-corino objects. Consequently, we examined ALMA archival data for existing spectral line observations toward the hot corinos NGC 1333 IRAS 4A2 and IRAS 16293B. Within the common spectral range towards these sources, we found three features possibly related to spectral transitions of methyl carbamate and consequently estimated the upper limit of its column densities. The results of the chemical modeling are consistent with the observational upper limits of the estimated column density/abundance toward the sources, which may hint at the validity of the proposed formation mechanism. Future observations using telescopes like the ngVLA may confirm the presence of MC toward the hot corinos.
INTRODUCTION
Amino acids are the building blocks of life, as they are the essential compounds for protein formation. Miller (1953) showed from his experiments that many amino acids can be produced in the laboratory using an electric discharge in an aqueous mixture of ammonia, methane, and dihydrogen. In a later examination, Khare et al. (1986) obtained 16 amino acids from the acidic treatment of tholins, which are present in Titan's atmosphere. Various investigations have also been performed to detect amino acids in extraterrestrial objects. For example, a number of amino acids have been detected in the Murchison meteorite (Toshiki et al. 2017). Amino acids have also been found in comet 67P/Churyumov-Gerasimenko (Altwegg et al. 2016). All these studies suggest that the presence of amino acids could be quite ubiquitous in extraterrestrial environments.
To further explore prebiotic chemistry, it is indispensable to study the simplest amino acid, glycine. Although extensive observational searches have been made, no conclusive detection of glycine in the interstellar medium (ISM) has been reported (e.g., Kuan et al. 2003; Snyder et al. 2005). Various authors have discussed the formation of glycine in the ISM using possible pathways and chemical modeling (e.g., Garrod 2013; Suzuki et al. 2018), but its detection remains elusive.
In the context of isomeric species, it is interesting to note that ISM molecules often form preferentially following the 'minimum energy principle' (MEP; Guillemin et al. 2014). The MEP predicts that the thermodynamically most stable isomer should be the most abundant. As an example, in the molecular clouds where C2H3N isomers have been detected, the more stable acetonitrile (CH3CN) is found to be more abundant than methyl isocyanide (CH3NC) and ketenimine (CH2CNH). Similarly, acetaldehyde (CH3CHO) is more abundant than its isomeric counterpart vinyl alcohol (CH2CHOH) (see Guillemin et al. 2014, and references therein). There could exist, however, some exceptions due to differences in the efficiencies of their formation reactions (Lattelais et al. 2009). Loomis et al. (2015) further argued that the MEP may not be generally applicable to interstellar chemistry and the formation of complex organic molecules (COMs); they suggested that in the ISM local environments play a major role in the formation of chemical species, rather than the thermodynamic stability of the individual species. Nevertheless, it is worthwhile to examine the molecule of prebiotic importance C2H5O2N, three isomers of which are glycine (NH2CH2COOH), N-methylcarbamic acid (CH3NHCOOH), and methyl carbamate (CH3OC(NH2)O). Quantum chemical studies of the energy stability showed that N-methylcarbamic acid (dipole moment ~1.2 Debye) is the most stable isomer among these three (Lattelais et al. 2011). To the best of our knowledge, N-methylcarbamic acid has never been searched for in the ISM, possibly due to the unavailability of its spectroscopic data. On the other hand, methyl carbamate (MC) is the second most stable isomer, 4 kcal/mol above the most stable isomer (CH3NHCOOH), while glycine lies ~10 kcal/mol above (Lattelais et al. 2011). In addition, MC has a dipole moment of ~2.4 Debye, higher than that of N-methylcarbamic acid (its polarizability is 6.6 ± 0.5 × 10^-24 cm^3; ChemSpider CSID 11229, http://www.chemspider.com/Chemical-Structure.11229.html, accessed 02:52, Jun 24, 2020). Maybe for this reason, MC was searched for toward the intermediate-mass protostar IRAS 21391+5802 and the hot core W51 e2 (Demyk et al. 2004). However, no further details or positive results are available in the literature.
Nonetheless, in the ISM, COMs are often described as forming predominantly in the ice phase via radical-radical reactions (e.g., Garrod 2013). Motivated by this, we performed chemical modeling assuming that MC forms through similar pathways and searched for possible signatures of MC in the protostellar hot-corino sources NGC 1333 IRAS 4A2 and IRAS 16293-2422 B. Hot corinos are associated with low-mass star formation at a relatively early evolutionary stage. Due to the elevated temperature (>100 K) in the vicinity of the protostar, molecules in the icy mantles come off to the gas phase through thermal desorption. For this reason, large abundances of COMs are often observed in the gas phase toward hot-corino objects. Results from chemical modeling show that COM formation most likely occurs during the warm-up phase of the star formation process (Garrod 2006). In this context, hot-corino sources are good targets to search for COMs such as MC. We model the formation of methyl carbamate in hot-corino environments to study the possibility of its presence. We also present ALMA observations of NGC 1333 IRAS 4A2 and IRAS 16293-2422 B, towards which we derive MC column density upper limits that are consistent with the chemical modeling results.

The chemical model, NAUTILUS

To estimate the formation of methyl carbamate in the ISM, and particularly in the hot-corino environment, we use the NAUTILUS chemical code. The details of its formalism are described in Ruaud et al. (2016). Using NAUTILUS, one can compute the time evolution of the chemistry at a single point (called 0D) as well as spatial distributions of species (called 1D; see Wakelam et al. 2014, for example). The code considers the interaction between gas and grains and calculates the chemical evolution both in the gas and on dust surfaces. The rate-equation approximation is used to estimate the molecular abundances of species in the gas phase, on dust surfaces, and in the bulk of the grain mantle (i.e., a three-phase model). For the dust-surface chemistry, NAUTILUS considers only the physisorption of chemical species onto the dust surface and their subsequent diffusion, which produces new species through chemical reactions/recombinations. Surface species can return to the gas phase via different desorption mechanisms, including thermal desorption (temperature dependent), UV photodesorption, cosmic-ray-induced evaporation, and chemical desorption (see Ruaud et al. 2016, and references therein).
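The rate-equation treatment can be illustrated with a toy two-reservoir model for a single species that freezes out onto grains and thermally desorbs back. This is only a sketch of the method, not the NAUTILUS code itself, and the accretion rate, attempt frequency, and binding energy below are illustrative placeholders:

    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy gas-grain exchange for one species, integrated with the
    # rate-equation method: accretion onto grains vs. thermal desorption.
    K_ACC = 1e-9    # s^-1, illustrative accretion (freeze-out) rate
    NU0 = 1e12      # s^-1, assumed vibrational attempt frequency
    E_B = 5000.0    # K, illustrative binding energy

    def rhs(t, y, T_dust):
        n_gas, n_ice = y
        k_des = NU0 * np.exp(-E_B / T_dust)  # first-order thermal desorption rate
        return [-K_ACC * n_gas + k_des * n_ice,
                K_ACC * n_gas - k_des * n_ice]

    # At 10 K virtually everything freezes out; at 100 K part of the ice evaporates.
    for T in (10.0, 100.0):
        sol = solve_ivp(rhs, (0.0, 3.15e13), [1.0, 0.0], args=(T,), method="LSODA")
        print(f"T = {T:5.1f} K -> gas fraction after ~1 Myr: {sol.y[0, -1]:.3e}")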
chemical network and methyl carbamate
We developed a chemical network for methyl carbamate starting from the kida.uva.2014 gas-phase network and from the surface network described in Ruaud et al. (2016), both updated following Wakelam et al. (2015, 2017), Hickson et al. (2016), Loison et al. (2016, 2017), and Vidal et al. (2017). The starting network is the same as that used in Wakelam et al. (2019). To these networks, we added several relevant species and reactions for exploring the formation of MC. The added grain-phase species are s-CH3O, s-NH2CO, s-H2NCO2CH3 (MC), s-NH2, s-CH3OCO, s-CH3CO, and s-HOCO, with 's' denoting surface species. Along with these, the gas-phase counterparts of the grain-phase species, including some newly produced gas-phase species, are also considered. The initial abundances of the elemental species (Das et al. 2015a) and the reactions involving these species are summarized in Table 1 and Table 2, respectively.

Table 1. Initial abundances of elemental species with respect to total hydrogen (Das et al. 2015a): N = 2.14 × 10^-5, S+ = 8 × 10^-8, Si+ = 8 × 10^-9, Mg+ = 7 × 10^-9, Cl+ = 4 × 10^-9, Fe+ = 3 × 10^-9, P+ = 3 × 10^-9, Na+ = 2 × 10^-9.

A carbamate compound is formally derived from carbamic acid (NH2COOH). Under terrestrial conditions, methyl carbamate is produced by the reaction of methanol and urea. This reaction, however, may not be suitable for the interstellar medium owing to its extreme conditions, such as the very low pressure and temperature compared to the Earth's atmosphere. For the efficient formation of MC under ISM conditions, we consider the solid-phase radical-radical reaction of CH3O and NH2CO:
CH3O + NH2CO → H2NCO2CH3    (1)
We checked the potential of this radical-radical reaction using quantum chemical calculations (B3LYP/6-311g++(d,p) method, Gaussian 09 software). The enthalpy of the reaction is found to be -80.79 kcal/mol, i.e., the reaction is exothermic and can proceed on grain surfaces. Additionally, we assume the radical-radical reaction to be barrier-less. The methoxy radical CH3O has been observed in the ISM (Cernicharo et al. 2012) and can form through reactions (Das et al. 2016) of atoms (e.g., O) and radicals (e.g., CH3); it can also be produced from CH3OH as a result of hydrogen abstraction on grain surfaces. CH3O is already present in our network and available in KIDA. Also, NH2CO mainly forms on grain surfaces via recombination reactions (Quénard et al. 2018) and hydrogen abstraction from species like NH2CHO (Haupa et al. 2019). The binding energies of NH2CO and CH3O are considered to be 5160 K and 4400 K, respectively. Due to the high surface binding energies of the reactants, the radical-radical reaction (1) is unattainable at low temperature; at warm temperatures (~100 K and higher), the radicals gain enough mobility to activate the reaction on grain surfaces. Another tentative reaction that can produce MC in the solid phase is:
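To see why this warm-up threshold appears, one can compare thermal hopping rates, k_hop = ν0 exp(-0.5 E_b/T), using the binding energies above and the diffusion-to-binding-energy ratio of 0.5 quoted in the Table 2 note; the attempt frequency ν0 = 10^12 s^-1 is an assumed, typical value:

    import math

    NU0 = 1e12                               # s^-1, assumed attempt frequency
    EB = {"CH3O": 4400.0, "NH2CO": 5160.0}   # K, binding energies from the text

    def hop_rate(e_b, t_dust, f_diff=0.5):
        """Thermal hopping rate k_hop = nu0 * exp(-f_diff * E_b / T_dust)."""
        return NU0 * math.exp(-f_diff * e_b / t_dust)

    for t_dust in (10.0, 30.0, 100.0):
        for sp, e_b in EB.items():
            print(f"T = {t_dust:5.1f} K, {sp:5s}: k_hop ~ {hop_rate(e_b, t_dust):.2e} s^-1")

At 10 K both radicals are effectively immobile, while at ~100 K they hop many times per second, which is what activates Reaction (1) on the grains.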
CH3OCO + NH2 → H2NCO2CH3    (2)
This reaction was discussed by Lee et al. (2009), who speculated that CH3OCO forms from the recombination of CH3O and CO. They concluded that the reaction is unlikely under cold conditions. Though this reaction may be relevant at hot-corino-like temperatures (>100 K), we did not include it in our network, as the formation of the CH3OCO radical is not well understood. Therefore, we have considered only Reaction (1) for the solid-phase production of MC. Through various surface desorption processes, the solid-phase MC can populate the gas phase, while no other gas-phase production of MC is considered. We consider different kinds of desorption mechanisms (see Section 2.1) and find that thermal desorption is the dominant mechanism supplying the gas-phase abundance of MC. As shown in Table 2, the destruction of MC occurs mainly through ion-neutral reactions and photodissociation, including cosmic-ray-induced dissociation. We have also considered MC being adsorbed back onto the grain surface in our network.
From Table 2, we can see that NH2CO formation is related to NH2CHO (formamide) and HNCO (isocyanic acid). Also, CH3NCO (methyl isocyanate) is another COM that is chemically related to HNCO (e.g., Gorai et al. 2020). As MC mainly forms via the NH2CO radical, for the completeness of the chemical network we have to consider reactions related to NH2CHO, HNCO, and CH3NCO. The reaction pathways of these molecules are included in our network; in Table 2 we list only some of the major reactions related to them. The values of the reaction rate constants vary across published articles; in our chemical network, we parameterized the major reactions (Table 2) following Gorai et al. (2020) and references therein. In Table 2, the gas-phase reaction rate constants for MC are assumed to be similar to those for HCOOCH3. We consider only typical ion-neutral gas-phase destruction pathways for MC. Also, tentative pathways for the newly produced species (e.g., CH3OCO, CH3COOH+) are included for the completeness of the chemical network; the reaction rate constants for CH3OCO and related species are assumed to be similar to those of CH3NCO. We have added a few gas-phase (ion-neutral) destruction pathways for NH2CO following NH2CN. The gas-phase (neutral-neutral) reactions of NH2CO and H are taken from Gorai et al. (2020).
Table 2. Reactions and rate parameters adopted for the MC network.

Reaction | α | β | γ | Type
s-CH3O + s-NH2CO → s-H2NCO2CH3 | 1.000e+00 | 0.000e+00 | 0.000e+00 | surface
s-H2NCO2CH3 → H2NCO2CH3 | 1.000e+00 | 0.000e+00 | 0.000e+00 | surface
s-NH2 + s-CO → s-NH2CO | 1.000e+00 | 0.000e+00 | 2.100e+03 | surface
s-NH2CO + s-H → s-NH2CHO | 1.000e+00 | 0.000e+00 | 0.000e+00 | surface
s-NH2CO + s-H → s-HNCO + H2 | 1.000e+00 | 0.000e+00 | 0.000e+00 | surface
s-NH2CO + s-H → s-NH3 + CO | 1.000e+00 | 0.000e+00 | 0.000e+00 | surface
s-NH2CHO + s-H → s-NH2CO + s-H2 | 1.000e+00 | 0.000e+00 | 2.500e+03 | surface
s-HNCO + s-H → s-NH2CO | 1.000e+00 | 0.000e+00 | 4.150e+03 | surface
s-NH2CO → NH2CO | 1.000e+00 | 0.000e+00 | 0.000e+00 | surface
H2NCO2CH3 + photon → NH2CO + CH3O | 1.380e-09 | 0.000e+00 | 1.730e+00 | photo
H2NCO2CH3 + CRP → NH2CO + CH3O | 1.500e+03 | 0.000e+00 | 0.000e+00 | CRP
H2NCO2CH3 + photon → NH2 + CH3OCO | 1.380e-09 | 0.000e+00 | 1.730e+00 | photo
H2NCO2CH3 + CRP → NH2 + CH3OCO | 1.500e+03 | 0.000e+00 | 0.000e+00 | CRP
H2NCO2CH3 + C+ → C + COOCH4+ + NH | 1.860e-09 | 1.000e+00 | 3.240e+00 | Bi-mol
H2NCO2CH3 + He+ → He + CH3 + H2NCO2+ | 3.080e-09 | 1.000e+00 | 3.240e+00 | Bi-mol
H2NCO2CH3 + H3+ → H2 + NH6C2O2+ | 3.510e-09 | 1.000e+00 | 3.240e+00 | Bi-mol
H2NCO2CH3 + H+ → NH2 + COOCH4+ | 6.010e-09 | 1.000e+00 | 3.240e+00 | Bi-mol
H2NCO2CH3 + HCO+ → CO + NH6C2O2+ | 1.310e-09 | 1.000e+00 | 3.240e+00 | Bi-mol
H2NCO2CH3 + H3O+ → H2O + NH6C2O2+ | 1.530e-09 | 1.000e+00 | 3.240e+00 | Bi-mol
CH3 + HOCO → CH3OCO + H | 1.000e-10 | 0.000e+00 | 8.004e+03 | Bi-mol
H3+ + CH3OCO → CH3COOH+ + H2 | 1.000e-09 | -5.000e-01 | 0.000e+00 | Bi-mol
HCO+ + CH3OCO → CH3COOH+ + CO | 1.030e-09 | -5.000e-01 | 0.000e+00 | Bi-mol
H+ + CH3OCO → CH3OCO+ + H | 1.000e-09 | -5.000e-01 | 0.000e+00 | Bi-mol
CO+ + CH3OCO → CH3OCO+ + CO | 1.440e-09 | -5.000e-01 | 0.000e+00 | Bi-mol
He+ + CH3OCO → CH3OCO+ + He | 1.000e-09 | -5.000e-01 | 0.000e+00 | Bi-mol
CH3COOH+ + e- → CH3OCO + H | 3.000e-07 | -5.000e-01 | 0.000e+00 | Bi-mol
CH3OCO + CRP → CO2 + CH3 | 4.000e+03 | 0.000e+00 | 0.000e+00 | CRP
CH3OCO + photon → CO2 + CH3 | 5.000e-10 | 0.000e+00 | 0.000e+00 | photo
CH3OCO+ + e- → CH3 + CO2 | 1.500e-07 | -5.000e-01 | 0.000e+00 | Bi-mol
H2NCO2+ + e- → NH2 + CO2 | 1.500e-07 | -5.000e-01 | 0.000e+00 | Bi-mol
NH6C2O2+ + e- → H + H2NCO2CH3 | 1.500e-07 | -5.000e-01 | 0.000e+00 | Bi-mol
NH6C2O2+ + e- → NH2CO + CH3OH | 1.500e-07 | -5.000e-01 | 0.000e+00 | Bi-mol
NH2CHO + H → H2 + NH2CO | 1.000e-10 | 0.000e+00 | 3.1e+03 / 2.4e+03 ** | Bi-mol
NH2CO + H → H2 + HNCO | 1.000e-10 | 0.000e+00 | 0.000e+00 | Bi-mol
HNCO + H → NH2CO | 1.000e-10 | 0.000e+00 | 5.050e+03 | Bi-mol
NH2CO + H → NH2CHO | 1.000e-10 | 0.000e+00 | 0.000e+00 | Bi-mol
NH2CO + CRP → NH2 + CO | 9.500e+03 | 0.000e+00 | 0.000e+00 | CRP
NH2CO + He+ → He + CO + NH2+ | 5.000e-01 | 2.450e-09 | 8.130e+00 | Bi-mol*
NH2CO + H3+ → H2 + NH2COH+ | 1.000e+00 | 2.790e-09 | 8.130e+00 | Bi-mol*
NH2CO + HCO+ → CO + NH2COH+ | 1.000e+00 | 1.130e-09 | 8.130e+00 | Bi-mol*
NH2CO + H+ → NH2 + HCO+ | 5.000e-01 | 4.730e-09 | 8.130e+00 | Bi-mol*
NH2CO + H+ → HCO + NH2+ | 5.000e-01 | 4.730e-09 | 8.130e+00 | Bi-mol*
NH2COH+ + e- → HCO + NH2 | 1.500e-07 | -5.000e-01 | 0.000e+00 | Bi-mol
NH2COH+ + e- → H + NH2CO | 1.500e-07 | -5.000e-01 | 0.000e+00 | Bi-mol
NH2CHO + H → H2 + NH2CO | 1.000e-10 | 0.000e+00 | 1.500e+03 | Bi-mol*
NH2CO + H → H2 + HNCO | 1.000e-10 | 0.000e+00 | 0.000e+00 | Bi-mol*
NH2CO + H → NH2CHO | 1.000e-10 | 0.000e+00 | 0.000e+00 | Bi-mol*
HNCO + H → NH2CO | 1.000e-10 | 0.000e+00 | 3.000e+03 | Bi-mol*
Gas-phase reactions, parameterized:
CH3 + HNCO → CH3NCO + H | 1.000e-11 | 0.000e+00 | 0.000e+00 | Bi-mol
NH2 + H2CO → NH2CHO + H | 1.000e-12 | -2.560e+00 | 4.880e+00 | Bi-mol

Note: α, β, and γ are the rate parameters; the rate coefficients are computed according to the reaction type in the last column. CRP: κ = αζ, where ζ is the cosmic-ray ionization rate; photo: κ = α exp(-γ A_V); Bi-mol: κ(T) = α (T/300)^β exp(-γ/T); Bi-mol*: κ(T) = α β [0.62 + 0.4767 γ (300/T)^0.5]. **These two values are used for Model 1 and Model 2, respectively. For 'surface' reactions, the ratio of the diffusion barrier to the binding energy is taken to be 0.5.
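The rate formulae in the table note translate directly into code; a minimal sketch (the parameter values in the example call are taken from the MC + H3+ row of Table 2):

    import math

    def k_crp(alpha, zeta=1.3e-17):
        """Cosmic-ray induced process: k = alpha * zeta [s^-1]."""
        return alpha * zeta

    def k_photo(alpha, gamma, av):
        """Photoprocess attenuated by extinction: k = alpha * exp(-gamma * A_V) [s^-1]."""
        return alpha * math.exp(-gamma * av)

    def k_bimol(alpha, beta, gamma, temp):
        """Modified Arrhenius: k(T) = alpha * (T/300)^beta * exp(-gamma/T) [cm^3 s^-1]."""
        return alpha * (temp / 300.0) ** beta * math.exp(-gamma / temp)

    def k_ionpol(alpha, beta, gamma, temp):
        """'Bi-mol*' capture rate: k(T) = alpha * beta * (0.62 + 0.4767*gamma*sqrt(300/T))."""
        return alpha * beta * (0.62 + 0.4767 * gamma * math.sqrt(300.0 / temp))

    # Destruction of gas-phase MC by H3+ at a hot-corino temperature of 100 K:
    print(f"{k_bimol(3.510e-9, 1.0, 3.240, 100.0):.2e} cm^3 s^-1")   # ~1.1e-9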
Since the reaction pathways for the main constituents of MC (NH2CO and CH3O) are incorporated, the network can be considered complete for investigating MC formation. Should other reaction routes for the production of MC exist, then, assuming no major destruction of MC through unknown pathways, the current network would still provide at least a lower limit on the MC abundance from the astrochemical modeling.
Physical models
To simulate the chemistry of MC, we consider the inner region of a low-mass star-forming core, from its collapsing parent molecular cloud core to the subsequent warm-up phase due to heating by the young stellar object. Two physical models are adopted for mimicking the emergence of a hot corino. Model 1 is a simple toy model in which the cloud core is initially assumed to be at 8 K with a density of 2 × 10^4 cm^-3, and it evolves for 10^6 years in the prestellar phase. During the collapse phase, the density increases linearly to 10^8 cm^-3 over a timescale of ~2.1 × 10^5 years, and within this period the dust temperature rises to 300 K (the warm-up phase; e.g., Viti et al. 2004; Garrod 2006). The temperature during the warm-up phase is assumed to increase linearly with time. In this phase, dust and gas are well coupled, and we assume the gas and dust temperatures to be the same.
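A minimal sketch of these Model 1 profiles, assuming simple linear ramps in both density and temperature over the quoted 2.1 × 10^5 yr (the sampling times below are arbitrary):

    def model1_profile(t_yr):
        """Toy Model 1: a 1e6 yr static prestellar phase (2e4 cm^-3, 8 K),
        then a 2.1e5 yr linear ramp to 1e8 cm^-3 and 300 K."""
        t_pre, t_ramp = 1.0e6, 2.1e5
        n0, n1 = 2.0e4, 1.0e8
        temp0, temp1 = 8.0, 300.0
        if t_yr <= t_pre:
            return n0, temp0
        f = min((t_yr - t_pre) / t_ramp, 1.0)   # fractional progress through the ramp
        return n0 + f * (n1 - n0), temp0 + f * (temp1 - temp0)

    for t in (5.0e5, 1.05e6, 1.20e6, 1.21e6):
        n, temp = model1_profile(t)
        print(f"t = {t:.2e} yr: n_H = {n:.2e} cm^-3, T = {temp:.1f} K")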
We further adopt, as Model 2, the physical model from Aikawa et al. (2008; their Fig. 5, at a radius of 15 AU), which is based on a somewhat more realistic one-dimensional numerical radiation-hydrodynamical calculation. In Model 2, a static phase (density 2.0 × 10^4 cm^-3 at 8 K) lasting 10^6 years is assumed before the collapse and warm-up of the parent molecular cloud. The warm-up time in this model is very short (~9 × 10^4 years) compared to Model 1, and the warm-up proceeds very quickly: although the final temperature is ~253 K, the model reaches ~228 K after only 2.0 × 10^4 years of warm-up. The dust and gas temperatures are considered to be the same throughout the evolution.
Depending on the gas density in the molecular cloud, the visual extinction (A_V) may vary; here we consider that A_V varies with the density according to the relation:
n_H = n_H0 [1 + (sqrt(n_Hmax/n_H0) - 1) A_V/A_Vmax]^2,    (3)
where n_H0 is the initial density and n_Hmax is the maximum density of the molecular cloud, and A_Vmax is the maximum visual extinction deep inside the cloud, corresponding to the maximum density. In both our physical models, A_V is assumed to reach a maximum value of 150 from an initial value of 10. A constant cosmic-ray ionization rate ζ = 1.3 × 10^-17 s^-1 is assumed throughout the simulations.
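Equation (3) can be evaluated directly; for reference, a small transcription using the Model 1 density endpoints:

    import math

    N0, NMAX = 2.0e4, 1.0e8   # cm^-3, initial and maximum densities (Model 1 values)
    AVMAX = 150.0             # maximum visual extinction deep inside the cloud

    def n_of_av(av):
        """Eq. (3): n_H = n_H0 * [1 + (sqrt(n_Hmax/n_H0) - 1) * A_V/A_Vmax]^2."""
        s = math.sqrt(NMAX / N0) - 1.0
        return N0 * (1.0 + s * av / AVMAX) ** 2

    for av in (0.0, 10.0, 75.0, 150.0):
        print(f"A_V = {av:6.1f} -> n_H = {n_of_av(av):.2e} cm^-3")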
Results of modeling
Figure 1 displays the chemical evolution of the MC abundance (with respect to H2) obtained from the two adopted physical models. The top panels correspond to the physical Model 1 and the bottom ones to Model 2. It is evident, based on Figure 1, that MC forms during the warm-up phase and thermally desorbs into the gas phase; however, depending on the physical conditions of the models, the fractional abundances may vary. As introduced earlier, our network has no gas-phase reactions that can directly produce MC. Its gas-phase abundance comes mainly from the thermal desorption of the species formed in the solid phase through the radical-radical reaction of CH3O and NH2CO.

Figure 1. Time evolution of chemical species during the warm-up phase (after an initial period of 10^6 years). Upper panels show the results from physical Model 1, and lower panels show the results from physical Model 2. The dashed lines show the density evolution, and the upper x-axis shows the temperature variation with time. Observed abundances of molecules towards I16293B are displayed in the panels by faint, colored solid lines, with their widths representing 20% uncertainties.
The binding energies greatly affect the mobility of the radicals and consequently the formation of molecules on the grain surface. Due to the high binding energies of CH3O and NH2CO, MC does not form effectively even under lukewarm (~30 K) conditions. Furthermore, since we assume a high binding energy for MC (~10000 K, considered similar to glycine; Garrod 2013), it desorbs to the gas phase at high temperatures, >100 K. As an example, for Model 1, MC starts major desorption at a temperature of ~150 K and reaches its peak gas-phase abundance at ~220 K. These temperature values are highly dependent on the binding energy of MC; Garrod (2013) considered a binding energy of 7610 K for MC. To check the effect of the binding energy, we consider the two values 10000 K and 7610 K, and the results are shown in Figure 2. For a binding energy of 7610 K, major desorption of MC occurs at a temperature of ~110 K, and the peak gas-phase abundance is reached at ~160 K. The evolution curve of MC thus shifts to the high-temperature side of the warm-up region for the higher value of the binding energy. The production of MC would therefore be favored in hot-corino (~110 K and above) environments; below this temperature, MC production is substantially low.
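A rough order-of-magnitude check of this shift: with a first-order desorption rate k_des = ν0 exp(-E_b/T) and an assumed attempt frequency ν0 = 10^12 s^-1 (the exact prefactor in the code depends on the species and the surface site density), the e-folding desorption timescale collapses from astronomically long to year-like values between the onset and peak temperatures quoted above:

    import math

    NU0 = 1e12   # s^-1, assumed attempt frequency

    def t_des_years(e_b, t_dust):
        """e-folding thermal desorption timescale 1/k_des, in years."""
        return 1.0 / (NU0 * math.exp(-e_b / t_dust)) / 3.156e7

    for e_b in (7610.0, 10000.0):
        for t_dust in (110.0, 150.0, 160.0, 220.0):
            print(f"E_b = {e_b:7.0f} K, T = {t_dust:5.0f} K: "
                  f"t_des ~ {t_des_years(e_b, t_dust):.1e} yr")

For E_b = 7610 K the timescale reaches years near ~160 K, whereas for E_b = 10000 K it does so only near ~220 K, mirroring the peak-abundance temperatures above.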
To check the consistency of the modeling against other well-known COMs, we also show the chemical evolution of CH3OH, CH3CHO, and CH3OCH3. These COMs are abundant in hot-corino environments and have well-established pathways; it is therefore meaningful to compare the MC abundance with theirs. CH3OH is the most abundant (~10^-6) COM in hot-corino environments, and since the estimation of the hydrogen density involves many uncertainties (mainly due to the dust opacity), the abundances of COMs are often described relative to CH3OH. Also, CH3OCH3 mainly forms on grain surfaces, similar to the assumed grain-surface origin of MC, so it is worth comparing with MC, while CH3CHO forms in both the gas and grain phases. Additionally, interferometric observations of these molecules are available at comparable resolution toward the hot corinos that we discuss in this paper (e.g., Sahu et al. 2019; Su et al. 2019; Drozdovskaya et al. 2019, and references therein); the observational results help us to better constrain the chemical modeling. In addition to these COMs, we consider NH2CHO, HNCO, and CH3NCO, which may be related to the formation of MC (see the chemical network section above).

Figure 1 shows the chemical evolution of these molecules for the two different sets of physical conditions. For Model 1, we can see that the observed fractional column densities (w.r.t. hydrogen) of the molecules likely match the modeled abundances around a time of ~1.2 × 10^6 years, i.e., ~2 × 10^5 years of warm-up. We estimated the best-fit time focusing more on CH3OH, MC, and NH2CHO: although for CH3CHO and CH3OCH3 the best time may be nearer to 1.1 × 10^6 years, at that time the model abundance of NH2CHO differs by orders of magnitude from the observational value, so we choose 1.2 × 10^6 years as the best-fit time for Model 1. The fractional abundances at this stage are noted in Table 3. In the case of Model 2, the temperature quickly crosses 100 K over a short timescale (~10^4 years); for Model 2, we noted the abundances (Table 3) after this short time and within the final time of the evolution. The model abundances are quite similar in the two models except for CH3NCO, which differs by an order of magnitude between them. We note that the duration of the warm-up phase, in addition to the density profile, may severely affect the chemistry: the quicker and shorter warm-up in Model 2 affects the production of both CH3NCO and HNCO (whose formation pathways are closely linked) differently than in Model 1. The results of the modeling and the observations are further compared and discussed in the Discussions and Conclusions section.

OBSERVATIONS

Our recent ALMA observations (Figure 2 of Sahu et al. 2019) revealed a forest of spectral transitions towards the hot-corino object NGC 1333 IRAS 4A2. From a subsequent spectral search, we speculated on possible signatures of methyl carbamate towards IRAS 4A2. To check whether the transitions of MC are present in other line-rich sources, we searched the ALMA archive for existing spectral observations in the range 349.8-351.6 GHz, the observed window towards NGC 1333 IRAS 4A2. We found that one of the well-known hot-corino objects, IRAS 16293-2422, has been observed in the same frequency range (project id: ALMA#2013.1.00278.S). The spectral emission towards the two sources is strikingly similar, and possible MC transitions (see Table 4) are present, too, in IRAS 16293-2422 B. In this section, we briefly discuss the search for MC towards these hot corinos and also discuss the presence of relevant COMs.
IRAS 4A2
NGC 1333 IRAS 4A (IRAS 4A hereafter) is one of the first known hot corinos (Bottinelli et al. 2004), located in the Perseus molecular cloud at a distance of ~293 pc (Ortiz-León et al. 2018; Zucker et al. 2018). Bottinelli et al. (2004) recognized IRAS 4A as a Class 0 source and reported the presence of COMs, including HCOOCH3, HCOOH, and CH3CN, based on IRAM 30m telescope observations at angular resolutions ranging from 10" to 24". As IRAS 4A is in fact a protobinary system consisting of the two protostellar sources IRAS 4A1 and IRAS 4A2 at a separation of 1.8" (Looney et al. 2000; Reipurth et al. 2002), the "hot corino" signatures from the single-dish observations by Bottinelli et al. (2004) could not be attributed to the individual cores A1 and A2. While later observations (Persson et al. 2012) found that A2 harbors a hot corino, it is only recently that Sahu et al. (2019) discussed the possibility of A1 also hosting a hot-corino atmosphere.
In brief, the ALMA observations carried out by Sahu et al. (2019) and Su et al. (2019) deployed seven spectral windows, including one broadband window centered at 350.714 GHz whose bandwidth and spectral channel width are 1.875 GHz and 976 kHz (~0.84 km s^-1), respectively. The synthesized beam size of this spectral data cube is ~0.3" × 0.2" (PA = -6.45°), sufficient for resolving the two cores A1 and A2, and the rms noise level (σ) is 3 mJy beam^-1 (~0.5 K). The dust-continuum emission towards IRAS 4A2 is predominantly optically thin, and from 2D Gaussian fittings the extent of the compact dust component is found to be ~0.27" (see Table 1 of Sahu et al. 2019). Assuming κ_ν = 0.0182 cm^2 g^-1 (Jørgensen et al. 2016), the average (from the extended and compact emission) hydrogen column density is ~2.5 × 10^25 cm^-2.
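For orientation, this conversion follows the standard optically thin relation N_H2 = S_ν / (Ω κ_ν B_ν(T_d) μ m_H); a small sketch, in which the flux density and dust temperature are hypothetical placeholders rather than the fitted values of the paper:

    import math

    H, C, KB = 6.626e-27, 2.998e10, 1.381e-16   # CGS constants
    M_H = 1.674e-24                             # g, hydrogen-atom mass

    def planck(nu_hz, t_dust):
        """Planck function B_nu in CGS units."""
        x = H * nu_hz / (KB * t_dust)
        return 2.0 * H * nu_hz**3 / C**2 / (math.exp(x) - 1.0)

    def n_h2(s_jy, bmaj_as, bmin_as, nu_ghz, t_dust, kappa=0.0182, mu=2.8):
        """Optically thin dust: N_H2 = S_nu / (Omega * kappa * B_nu(T_d) * mu * m_H)."""
        omega = (math.pi / (4.0 * math.log(2.0))) * bmaj_as * bmin_as \
                * (math.pi / 180.0 / 3600.0) ** 2   # Gaussian-beam solid angle, sr
        return s_jy * 1e-23 / (omega * kappa * planck(nu_ghz * 1e9, t_dust) * mu * M_H)

    # Hypothetical 0.5 Jy source filling the 0.3" x 0.2" beam at 350.714 GHz, T_d = 100 K:
    print(f"N_H2 ~ {n_h2(0.5, 0.3, 0.2, 350.714, 100.0):.2e} cm^-2")   # ~1e25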
To analyze the spectra, we used the CASSIS software (developed by IRAP-UPS/CNRS; http://cassis.irap.omp.eu) as well as the JPL (Pickett et al. 1998) and CDMS (Müller et al. 2001, 2005) databases. The spectroscopic data of MC available in JPL were studied by Ilyushin et al. (2006). The spectra towards IRAS 4A2 are extracted from a region of a size similar to the beam around the peak position of the dust continuum. The line identifications are performed assuming a systemic velocity of 6.9 km s^-1 (Sahu et al. 2019, and references therein).

Table 4. Representative transitions of H2NCO2CH3 (MC) discussed in the text.

Transition | Frequency (MHz) | E_u (K) | A_ij (s^-1)
(20,15,5,3,20 - 19,14,6,3,19) | 349907.88 (0.0157) | 322.69 | > 7 × 10^-4
(16,2,2,15,2,2,18) | 351056.46 (0.0098) | 147.42 | > 7 × 10^-4
(16,3,1,15,3,1,17) | 351102.99 (0.0090) | 147.38 | > 7 × 10^-4
(16,2,0,15,3,0,17) | 351162.83 (0.0092) | 147.42 | > 7 × 10^-4
(14,9,3,13,8,3,21) | 351232.91 (0.0124) | 329.02 | 7.93E-4
Note: this list contains only the main transitions discussed in the text; transition multiplicities are present around these frequencies. The numbers in parentheses after the frequencies are the catalogue uncertainties.
In the appendix, we show the synthetic LTE spectra, in which there are 106 MC transitions around ~20 frequencies in the frequency range under consideration. We find that, for T_ex = 100 K and 300 K, respectively, three and five emission features are at the level of 2σ or at best 3σ, where σ is the rms noise of the observed spectra. There are multiple transitions around the frequencies of those emission features (see the appendix, Table C.1); representative transitions around them are listed in Table 4. These transitions are useful for estimating the upper limit of the MC column densities from the observed spectra. At T_ex = 100 K, among the three emission profiles (see the upper panel of Figure 3), one appears fully blended (351056.456 MHz) and the other emission features (351103.016 MHz, 351162.862 MHz) may be partially blended. For T_ex = 300 K, those three lines remain useful for estimating the upper limits; additionally, two higher-E_u (~320 K) lines appear in the spectra, which are fully blended (see the figure in Appendix B). Column density estimations are made for those two temperature conditions. The Einstein coefficients (A_ij) of all those major transitions are found to be >7 × 10^-4 s^-1. A synthetic spectrum generated by CASSIS with a FWHM of ~2.0 km s^-1 (considering an average Gaussian width of ~1 km s^-1, as reported by Sahu et al. 2019), a column density of 2.5 × 10^15 cm^-2, an excitation temperature of 100 K, and a source velocity of 6.9 km s^-1 could reproduce the observed spectra. The E_u of these transitions are all around ~147 K, so it is difficult to estimate the rotational temperature; we therefore chose another typical excitation temperature, 300 K, to estimate the column density. At this temperature, a column density of ~7 × 10^15 cm^-2 could reproduce the observed spectral features. The transition around 351056 MHz is possibly blended with D2C2O and aGg'-glycol; however, we could not quantitatively estimate the degree of blending from these species, as within the frequency range of the spectral window there are not enough transitions of those species to constrain their presence and abundances. The remaining two transitions of MC are partially blended with intense transitions from other species, and the transition around 351162.862 MHz is detected at a level of ~3σ. Considering these limitations, we only estimate an upper limit for MC from the observational data set. It is relevant to mention the observed column densities of CH3OH and CH3CHO for comparison with methyl carbamate. Sahu et al. (2019) found that the CH3OH line emission may be optically thick and thus provides a lower limit for the column density. So, to obtain the CH3OH column density, we use the observed column density of 13CH3OH (see Table 3 of Sahu et al. 2019) and a 12C/13C ratio of 35 (Jørgensen et al. 2016); the resultant column density is 1.12 × 10^19 cm^-2. The CH3CHO line emission is also found to be optically thick, so to obtain its likely column density we use the CH3OH/CH3CHO ratio (~83) towards I16293B (see the column densities listed by Drozdovskaya et al. 2019); the estimated column density towards IRAS 4A2 is then 1.35 × 10^17 cm^-2, consistent with the observed lower limit of 3.86 × 10^16 cm^-2 towards IRAS 4A2. Additionally, among other COMs, CH3OCH3, which is mainly produced in the ISM on grain/ice surfaces, is found to be present towards IRAS 4A2 (Lòpez-Sepulcre et al. 2017).
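The bookkeeping behind these scalings is straightforward; a minimal sketch with the values quoted above (the 13CH3OH column is back-computed from the quoted CH3OH result):

    RATIO_12C_13C = 35.0        # 12C/13C (Jorgensen et al. 2016)
    N_13CH3OH = 3.2e17          # cm^-2, implied by the quoted 1.12e19 CH3OH value
    N_CH3OH = N_13CH3OH * RATIO_12C_13C          # -> ~1.12e19 cm^-2

    RATIO_CH3OH_CH3CHO = 83.0   # towards I16293B (Drozdovskaya et al. 2019)
    N_CH3CHO = N_CH3OH / RATIO_CH3OH_CH3CHO      # -> ~1.35e17 cm^-2

    N_MC_UPPER = 2.5e15         # cm^-2, Tex = 100 K upper limit towards IRAS 4A2
    print(f"CH3OH : {N_CH3OH:.2e} cm^-2, CH3CHO : {N_CH3CHO:.2e} cm^-2")
    print(f"MC/CH3OH upper limit: {N_MC_UPPER / N_CH3OH:.1e}")   # ~2e-4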
Using an excitation condition from the earlier work (T_ex ~ 130 K), we found the CH3OCH3 column density to be ~9 × 10^16 cm^-2. These observational values are helpful for comparison with the chemical model results.
IRAS 16293
IRAS 16293-2422 (IRAS 16293/I16293 hereafter) is a hot-corino source similar to IRAS 4A and is also a protobinary, consisting of IRAS 16293A and IRAS 16293B. The binary protostars are separated by 5.1" (620 AU; see Jørgensen et al. 2016, and references therein). The source I16293B is chemically rich, and its dust continuum is optically thick within a scale of 50 AU. We choose I16293B for our analysis as it is likely a face-on disk system and consequently its line widths are narrow, whereas I16293A likely represents an edge-on system with comparatively broad molecular line widths. Jørgensen et al. (2016) found that the lines toward I16293B (FWHM ~ 1 km s^-1) are five times narrower than those toward I16293A; this narrow line width helps in detecting lines with less line confusion. I16293B is therefore an ideal target in which to search for MC.
The data towards the source were taken from the ALMA archive and imaged at the highest possible spectral resolution using the CASA pipeline. The full data description can be found in Jørgensen et al. (2016); we used only the 12m data, which are suitable for resolving the source without the missing-flux problem. The synthesized beam size of the dust-continuum map is ~0.63" × 0.39" (position angle = -79.92°), similar to the beam size of the spectral cube. Jørgensen et al. (2016) argued that the dust continuum emission is optically thick at its peak location, and those authors consequently chose a position offset by one beam from the continuum peak for extracting spectra. Assuming the dust emission at the offset position is optically thin, N_H2 is found to be ~1.2 × 10^25 cm^-2. We therefore choose a circular region of a size similar to the observing beam (0.5") around a position offset by one beam from the continuum peak along the southwest direction; spectra from this position are expected to be least affected by optical-depth effects. Jørgensen et al. (2016) reported an rms noise level of 7-10 mJy beam^-1 (≤0.6 K) for a channel width of 0.244 MHz (~0.2 km s^-1). We used the 'statcont' algorithm (Sánchez-Monge et al. 2018) to obtain the continuum-subtracted emission. Line identification was done in the same manner as described for IRAS 4A2, adopting a systemic velocity of ~3.1 km s^-1 (Jørgensen et al. 2011). We found that, for excitation temperatures of 100 K and 300 K, column densities of 1.0 × 10^15 cm^-2 and 2.3 × 10^15 cm^-2, respectively, best match the observed spectral features, assuming a FWHM of ~1 km s^-1. Additionally, the observed column densities of CH3OH, CH3CHO, and CH3OCH3 are 1.0 × 10^19, 1.2 × 10^17, and 2.4 × 10^17 cm^-2, respectively (Jørgensen et al. 2016, 2018). Figure 3 (lower panel) shows the tentative emission features of MC towards I16293B, assuming an excitation temperature of 100 K.
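The quoted mJy beam^-1 to Kelvin conversion follows the standard Rayleigh-Jeans relation for a Gaussian beam, T_B [K] ≈ 1.222 × 10^3 I [mJy beam^-1] / (ν^2 [GHz^2] θ_maj θ_min [arcsec^2]):

    def tb_kelvin(i_mjy_beam, nu_ghz, bmaj_as, bmin_as):
        """Rayleigh-Jeans brightness temperature for a Gaussian beam."""
        return 1.222e3 * i_mjy_beam / (nu_ghz**2 * bmaj_as * bmin_as)

    # rms of 7-10 mJy/beam in the 0.63" x 0.39" beam near 350.7 GHz:
    for rms in (7.0, 10.0):
        print(f"{rms:4.1f} mJy/beam -> {tb_kelvin(rms, 350.7, 0.63, 0.39):.2f} K")

which gives 0.28-0.40 K, consistent with the ≤0.6 K quoted above.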
We note that for IRAS 4A2 all the spectral features of MC are either partially or fully blended, owing to the lower spectral resolution. On the other hand, for I16293B the 351103 MHz and 351162 MHz lines of MC appear resolved and non-blended. We chose the 351103 MHz line to check the corresponding emission map; the integrated emission map of this transition is shown in Figure 4. The emission features presented in this paper are not yet sufficient for claiming even a tentative detection of MC, as there are only a limited number of transitions, and their features are weak and blended. These emission features, however, are sufficiently good for estimating an upper limit for MC.

Figure 4. Integrated emission map (contours) of the 351103 MHz MC transition towards I16293, overplotted on the continuum emission map (color). The integrated MC emission is localized around the hot corino I16293B. We note that toward I16293A, although the signal-to-noise ratio is high, we cannot solely attribute the integrated emission to MC, given the different V_lsr and the line contamination. Contours are 5, 10, 15σ..., where σ = 6 mJy beam^-1 km s^-1.
In this paper we present the astrochemical modeling of methyl carbamate and its search toward line-rich hotcorino sources IRAS 4A2 and I16293B. Under the physical conditions that are described in the text, we can closely reproduce the observed fractional abundance of molecular species. Since obtaining the hydrogen column density from dust emission involves various uncertainties, we calculated the column density ratios of MC with respect to well known COMs, such as CH 3 OH, CH 3 CHO, CH 3 OCH 3 assuming these COM species reside in the same volume. For both sources, the hydrogen column densities are ∼ 10 25 cm −2 , and methanol column densities are ∼ 10 19 cm −2 . Since the dust continuum is assumed to be optically thin, hydrogen column densities presented in the text are lower limits of hydrogen column densities. Therefore, the upper limit of methanol abun- dance (w.r.t. hydrogen) is of the order of 10 −6 . In the chemical modeling, for different physical conditions, we obtain methanol abundances of ∼ 10 −6 . Therefore, the results of the chemical models are suitable for describing the major abundant COM, methanol. We compare the observed fractional abundances of these COMs with MC in Figure 5. We can see that the observed and model abundances of MC comparing to those COMs match well within an order of magnitude. This effectively gives us the idea of the MC abundance relative to those well abundant COMs.
Since one main constituent of MC is NH 2 CO radical, we consider species-CH 3 NCO, HNCO, NH 2 CHO which are closely linked with NH 2 CO and therefore MC. Towards I16293B, their abundances have been measured at a comparable resolution (see Drozdovskaya et al. 2019, and references therein). For IRAS 4A2, some of these species have been observed at much coarser angular resolutions (e.g., Taquet et al. 2015, and references therein), and there may be large uncertainties in column density estimation due to beam dilution factor. Therefore, com-parison between the model and observed abundances are shown only for I16293B ( Figure 5).
We can see that the modeled and observed fractional abundances of MC with respective to CH 3 OH and CH 3 CHO closely match. These molecules are major abundant molecules in hot corinos, so the abundance ratio of these molecules to MC can be helpful to gauge the MC abundances in hot-corino environments. CH 3 OCH 3 is overproduced in our chemical modeling, resulting in a order of magnitude difference between observed and modeled abundance. As the radical CH 3 O, a major constituent of MC, is mainly produced with reactions related to CH 3 OH, so CH 3 OCH 3 may not be a good species for estimating the MC abundance. On the other hand, modeled abundances ratios of MC and species like HNCO, CH 3 NCO, NH 2 CHO are mostly in agreement with the observed results except for CH 3 NCO estimation of Model 2. Due to very short and quick warm-up time scale of Model 2, this kind of differences may arise. However, the results suggest that HNCO, CH 3 NCO, NH 2 CHO maybe helpful for estimating MC abundances. Significance of the modeling results is that the upper limit of observed column densities (∼ 10 15 cm −2 ) of MC (an glycine isomer) roughly consistent with the modeled abundances. For the first time, we performed the observational search of MC and chemical modeling. Results of chemical modelling suggest that species like CH 3 OH, HNCO, CH 3 NCO, NH 2 CHO may helpful for estimating possible abundance of MC, using this estimations MC can be searched toward hot corinos.
MC can be searched for using modern telescope facilities like ALMA and future telescopes like the ngVLA. To mitigate the line-confusion problem, MC can be searched for with highly sensitive ALMA spectral-line observations at lower frequencies (84-116 GHz, Band 3). However, it should be noted that for larger molecules like MC, the transitions are very weak within ALMA's spectral coverage. Rotational transitions of lighter molecules mostly fall within the mm-submm spectral range, so the spectral confusion limit may be reached in a short time (~1 hour), which may make the identification of MC difficult. Since rotational transitions of comparatively smaller molecules (e.g., CH3OH) do not fall in the centimeter wavelength range, this wavelength regime is comparatively cleaner and therefore reduces the level of line confusion. A future astronomical telescope like the ngVLA may therefore be helpful for detecting the glycine isomer MC. Using the MC column-density upper limit, we find that the strongest transition around 43 GHz has an intensity of ~0.3 mJy. Assuming the beam fills the source, a sensitivity of ~0.05 mJy can be achieved by the ngVLA in approximately 10 hours of integration time (McGuire et al. 2018). Therefore, it is possible to detect MC with the ngVLA toward the hot corinos. Detection of MC in the ISM may shed light on the long-standing problem of understanding glycine/prebiotic chemistry in the ISM.

Figure A.1 shows the synthetic LTE spectra (computed with the CASSIS software) of MC for column densities of 1.0 × 10^15 cm^-2 and 2.3 × 10^15 cm^-2, for excitation temperatures of 100 K and 300 K, respectively, towards I16293B; the source size was assumed to be 0″.5. These figures give an idea of the major emission signatures of MC to look for. For Tex = 100 K and 300 K, respectively, there are three and five major transitions at a level of 2σ (rms) or higher. These transitions are listed in Table 4. Based on these transitions, the upper limits of the MC column densities are estimated from the observed spectra towards the hot corinos.

Figures B.1 and B.2 show the LTE synthetic spectra of MC towards IRAS 4A2 and I16293B, respectively, for an excitation temperature of 300 K. We searched for all MC lines having Eu < 500 K. As the emission features of MC are very weak and blended, the rotational temperature cannot be estimated from the observed spectra. Therefore, we assumed two typical temperatures, 100 K and 300 K, for estimating the upper limits of the MC column densities. The LTE spectra for 100 K are already shown in Figure 3. We can see that for Tex = 300 K a few high-temperature lines appear in the spectra. These high-Eu lines (349907.88 MHz, 351232.91 MHz) are fully blended, but they help us estimate the range of column-density upper limits. Table 4 in the text lists the representative transitions; multiple transitions are present around those frequencies, and Table C.1 lists them in full.
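As a rough consistency check on the integration-time estimate quoted above, the standard radiometer scaling t ∝ 1/σ² can be anchored to the quoted ngVLA point (~0.05 mJy rms in ~10 hours). The short Python sketch below is ours (the function name, reference values, and the 5σ threshold are assumptions for illustration); it is not a substitute for a proper ngVLA sensitivity calculator.

def hours_for_rms(target_rms_mjy, ref_rms_mjy=0.05, ref_hours=10.0):
    """Radiometer scaling t ~ 1/rms^2, anchored to ~0.05 mJy in ~10 h (McGuire et al. 2018)."""
    return ref_hours * (ref_rms_mjy / target_rms_mjy) ** 2

# A 5-sigma detection of the ~0.3 mJy MC line needs an rms of ~0.06 mJy:
print(round(hours_for_rms(0.3 / 5.0), 1), "hours")  # -> ~6.9 hours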
The chemical model, NAUTILUS
Figure 1. Time evolution of chemical species during the warm-up phase (after an initial period of 10^6 years). Upper panels show the results from physical model 1, and lower panels show the results from physical model 2. The dashed lines show the density evolution, and the upper x-axis shows the temperature variation with time. Observed abundances of molecules towards I16293B are displayed in the panels by solid faint colored lines, with their widths representing 20% uncertainties.
A forest of spectral transitions was found towards the hot-corino object NGC 1333 IRAS 4A2 (Figure 2 of Sahu et al. 2019). From a subsequent spectral search, we speculated on possible signatures of methyl carbamate towards IRAS 4A2. To check whether the transitions of MC are present in other line-rich sources, we searched the ALMA archive for existing spectral observations in the range 349.8-351.6 GHz, the window observed towards NGC 1333 IRAS 4A2. We found that one of the well-known hot-corino objects, IRAS 16293-2422, has been observed in the same frequency range (project id: ALMA#2013.1.00278.S). The spectral emission towards the two sources is strikingly similar, with possible MC transitions (see
Figure 2. Left: evolution of MC for various values of binding energies. Right: evolution of the major radicals NH2CO and CH3O (Model 1 is used for both panels). Dotted lines show abundances on the grain/dust surface and solid lines show gas-phase abundances. The dashed lines show the density evolution, and the upper x-axis shows the temperature variation with time.
Figure 3. The observed spectral transitions of methyl carbamate tentatively detected towards IRAS 4A2 and I16293B are plotted in blue. The synthetic spectra estimated under the LTE assumption are over-plotted in red. The dotted lines represent the systemic velocity, and the frequencies of the transitions are noted in the panels.
Figure 5. Relative abundance of methyl carbamate (MC) with respect to CH3OH, CH3CHO, and CH3OCH3 obtained from observations (I16293B, IRAS 4A2) and from chemical modeling (Model 1 and Model 2). The observational values for MC are the averages of the estimated ranges of column densities.
Figure A.1. Typical LTE synthetic spectra are displayed for excitation temperatures of 100 K and 300 K. Straight lines show the 2σ level of the observed spectra towards IRAS I16293B.

B. LTE SPECTRA FOR T_ex = 300 K
Figure B.1. LTE synthetic spectra (red) for an excitation temperature of 300 K overplotted on the observed spectra (blue) towards IRAS 4A2.
Figure B.2. Similar to Figure B.1, but for the spectra towards IRAS I16293B.

C. FULL LIST OF MC TRANSITIONS

The full list of MC transitions around the representative frequencies of Table 4 is given in Table C.1.
Table 1. Initial abundances with respect to total hydrogen nuclei
Species   Abundance
H2        5 × 10^-1
He        1.0 × 10^-1
O         1.76 × 10^-4
C+        7.3 × 10^-5
N
Table 2. The chemical reaction network of methyl carbamate
Reactions   α   β   γ   Reaction type
Table 3. The fractional abundances of chemical species as obtained from chemical modeling.
Molecule     Model 1   Model 2
CH3OH        1.1e-06   1.4e-06
CH3CHO       3.6e-08   1.4e-08
CH3OCH3      1.3e-07   2.3e-07
CH3NCO       3.2e-10   1.2e-09
HNCO         3.6e-09   1.8e-09
NH2CHO       2.7e-09   1.0e-09
H2NCO2CH3    1.7e-10   0.9e-10
Table 4. Methyl carbamate transitions
Species   Transition (N Ka Kc v F)   Frequency (MHz) (Error)   Eu (K)   Aij (s^-1)
Table C.1. Methyl carbamate transitions
Species   Transition (N Ka Kc v F*)   Frequency (MHz) (Error)   Eu (K)   Aij (s^-1)
(http://kida.astrophy.u-bordeaux.fr)
Aikawa, Y., Wakelam, V., Garrod, R. T., & Herbst, E. 2008, ApJ, 674, 984
Altwegg, K., Balsiger, H., Bar-Nun, A., et al. 2016, Sci. Adv., 2, e1600285
Bottinelli, S., Ceccarelli, C., Lefloch, B., et al. 2004, ApJ, 615, 354
Cernicharo, J., Marcelino, N., Roueff, E., et al. 2012, ApJL, 759, L43
Das, A., Majumdar, L., Sahu, D., Gorai, P., Sivaraman, B., & Chakrabarti, S. K. 2015, ApJ, 808, 21
Das, A., Majumdar, L., Chakrabarti, S. K., & Sahu, D. 2015a, New Astron., 35, 53
Das, A., Sahu, D., Majumdar, L., & Chakrabarti, S. K. 2016, MNRAS, 455, 540
Demyk, K., Wlodarczak, G., & Dartois, E. 2004, in Semaine de l'Astrophysique Française, ed. F. Combes, D. Barret, T. Contini, F. Meynadier, & L. Pagani (SF2A-2004; Les Ulis: EDP-Sciences), 493
Drozdovskaya, M. N., van Dishoeck, E. F., et al. 2019, MNRAS, 490, 5079
Garrod, R. T., & Herbst, E. 2006, A&A, 457, 927
Garrod, R. T. 2013, ApJ, 765, 60
Gorai, P., et al. 2020, arXiv:2003.09188
Table C.1 (continued). Methyl carbamate transitions
H2NCO2CH3   (20,15,5,3,20 - 19,14,6,3,19)   349907.88 (0.0157)   322.69   9.22E-4
H2NCO2CH3   (20,15,6,3,20 - 19,14,5,3,19)   349907.88 (0.0157)   322.69   9.22E-4
H2NCO2CH3   (20,15,5,3,21 - 19,14,6,3,20)   349907.92 (0.0157)   322.69   9.25E-4
H2NCO2CH3   (20,15,6,3,21 - 19,14,5,3,20)   349907.92 (0.0157)   322.69   9.25E-4
H2NCO2CH3   (20,15,5,3,19 - 19,14,6,3,18)   349907.92 (0.0157)   322.69   9.22E-4
H2NCO2CH3   (18,16,2,2,18 - 17,15,2,2,17)   351056.43 (0.0098)   147.42   3.29E-3
H2NCO2CH3   (18,16,2,2,17 - 17,15,2,2,16)   351056.46 (0.0098)   147.42   3.11E-3
H2NCO2CH3   (18,16,2,2,19 - 17,15,2,2,18)   351056.46 (0.0098)   147.42   1.11E-3
H2NCO2CH3   (18,16,3,1,18 - 17,15,3,1,17)   351102.99 (0.0090)   147.38   1.11E-3
H2NCO2CH3   (18,16,3,1,17 - 17,15,3,1,16)   351103.01 (0.0090)   147.38   3.12E-3
H2NCO2CH3   (18,16,3,1,19 - 17,15,3,1,18)   351103.02 (0.0090)   147.38   3.48E-3
H2NCO2CH3   (18,16,2,0,18 - 17,15,3,0,17)   351162.83 (0.0092)   147.42   1.11E-3
H2NCO2CH3   (18,16,3,0,18 - 17,15,2,0,17)   351162.83 (0.0092)   147.42   3.29E-3
H2NCO2CH3   (18,16,2,0,17 - 17,15,3,0,16)   351162.86 (0.0092)   147.42   3.12E-3
H2NCO2CH3   (18,16,3,0,17 - 17,15,2,0,16)   351162.86 (0.0092)   147.42   3.12E-3
H2NCO2CH3   (18,16,2,0,19 - 17,15,3,0,18)   351162.86 (0.0092)   147.42   3.48E-3
H2NCO2CH3   (18,16,3,0,19 - 17,15,2,0,18)   351162.86 (0.0092)   147.42   3.48E-3
H2NCO2CH3   (22,14,9,3,22 - 21,13,8,3,21)   351232.91 (0.0124)   329.02   7.93E-4
H2NCO2CH3   (22,14,9,3,23 - 21,13,8,3,22)   351232.95 (0.0124)   329.02   7.95E-4
H2NCO2CH3   (22,14,9,3,21 - 21,13,8,3,20)   351232.95 (0.0124)   329.02   7.93E-4
*The quantum numbers (QN) are given as N Ka Kc v F. The 'F' QN can also denote an A or E state, as in the case of MC.
Guillemin, J.-C. 2014, BIO Web of Conferences, 2, 04002
Haupa, K. A., Tarczay, G., & Lee, Y.-P. 2019, J. Am. Chem. Soc., 141, 11614
Hickson, K. M., Loison, J.-C., & Wakelam, V. 2016, Chem. Phys. Lett., 659, 70
Ilyushin, V., Alekseev, E., Demaison, J., & Kleiner, I. 2006, J. Mol. Spectrosc., 240
Jiménez-Serra, I., et al. 2020, Memo 20-01, SKA1 Beyond 15 GHz: The Science Case for Band 6, section 3.3
Jørgensen, J. K., et al. 2016, A&A, 595, A117
Jørgensen, J. K., et al. 2018, A&A, 620, A170
Jørgensen, J. K., Bourke, T. L., Nguyen Luong, Q., & Takakuwa, S. 2011, A&A, 534, A100
Khare, B. N., Sagan, C., Ogino, H., Nagy, B., Er, C., et al. 1986, Icarus, 68, 176-184
Kuan, Y. J., Charnley, S. B., Huang, H. C., Tseng, W. L., & Kisiel, Z. 2003, ApJ, 593, 848
Lattelais, M., Pauzat, F., Ellinger, Y., & Ceccarelli, C. 2009, ApJ, 696, L133
Lattelais, M., Risset, O., Pilmé, J., et al. 2011, Int. J. Quantum Chem., 111, 1163
Lee, C.-W., Kim, J.-K., Moon, E.-S., et al. 2009, ApJ, 697, 428-435
Ligterink, N. F. W., Terwisscha van Scheltinga, J., et al. 2018, MNRAS, 480, 3628
Loison, J.-C., Agúndez, M., Marcelino, N., et al. 2016, MNRAS, 456, 4101
Loison, J.-C., Agúndez, M., Wakelam, V., et al. 2017, MNRAS, 470, 4075
Loomis, R. A., McGuire, B. A., Shingledecker, C., et al. 2015, ApJ, 799, 34
Looney, L. W., Mundy, L. G., & Welch, W. J. 2000, ApJ, 529, 477
Lòpez-Sepulcre, A., Sakai, N., Neri, R., Imai, M., et al. 2017, A&A, 606, A121
McGuire, B. A., et al. 2018, in Science with a Next-Generation Very Large Array, ASP Conference Series Monograph 7, ed. E. J. Murphy (San Francisco: Astronomical Society of the Pacific)
Miller, S. L. 1953, Science, 117(3046), 528-529
Müller, H. S. P., Thorwirth, S., Roth, D. A., & Winnewisser, G. 2001, A&A, 370, L49
Müller, H. S. P., Schlöder, F., Stutzki, J., & Winnewisser, G. 2005, J. Mol. Struct., 742, 215
Ortiz-León, G. N., Loinard, L., Dzib, S. A., et al. 2018, ApJ, 865, 73
Persson, M. V., Jørgensen, J. K., & van Dishoeck, E. F. 2012, A&A, 541, A39
Pickett, H. M., Poynter, R. L., Cohen, E. A., Delitsky, M. L., Pearson, J. C., & Müller, H. S. P. 1998, J. Quant. Spectrosc. Radiat. Transfer, 60, 883
Quénard, D., Jiménez-Serra, I., Viti, S., Holdship, J., & Coutens, A. 2018, MNRAS, 474, 2796
Reipurth, B., Rodríguez, L. F., Anglada, G., & Bally, J. 2002, AJ, 124, 1045
Ruaud, M., Wakelam, V., & Hersant, F. 2016, MNRAS, 459, 3756
Sahu, D., Das, A., Majumdar, L., & Chakrabarti, S. K. 2015, New Astron., 38, 23
Sahu, D., Liu, S.-Y., Su, Y.-N., et al. 2019, ApJ, 872, 196
Sánchez-Monge, Á., Schilke, P., Ginsburg, A., et al. 2018, A&A, 609, A101
Snyder, L. E., Lovas, F. J., Hollis, J. M., et al. 2005, ApJ, 619, 914
Su, Y.-N., Liu, S.-Y., Li, Z.-Y., et al. 2019, ApJ, 885, 98
Suzuki, T., Majumdar, L., Ohishi, M., et al. 2018, ApJ, 863, 51
Taquet, V., López-Sepulcre, A., Ceccarelli, C., et al. 2015, ApJ, 804, 81
Toshiki, K., & Hiroshi, N. 2017, Scientific Reports, 7, 636
Vidal, T. H. G., Loison, J.-C., Jaziri, A. Y., Ruaud, M., Gratier, P., & Wakelam, V. 2017, MNRAS, 469, 435
Viti, S., Collings, M. P., Dever, J. W., McCoustra, M. R. S., & Williams, D. A. 2004, MNRAS, 354, 1141
Wakelam, V., Vastel, C., Aikawa, Y., Coutens, A., Bottinelli, S., & Caux, E. 2014, MNRAS, 445, 2854
Wakelam, V., Loison, J.-C., Herbst, E., et al. 2015, ApJS, 217, 20
Wakelam, V., Loison, J. C., Mereau, R., & Ruaud, M. 2017, MolAs, 6, 22
Wakelam, V., Chapillon, E., Dutrey, A., et al. 2019, MNRAS, 484, 1563-1573
Zucker, C., Schlafly, E. F., Speagle, S. F., Green, G. M., et al. 2018, ApJ, 869, 83
| [] |
[
"Mass Reconstruction of Galaxy-scale Strong Gravitational Lenses Using Broken Power-law Model",
"Mass Reconstruction of Galaxy-scale Strong Gravitational Lenses Using Broken Power-law Model"
] | [
"Wei Du \nShanghai Key Lab for Astrophysics\nShanghai Normal University\n200234ShanghaiChina\n",
"Liping Fu \nShanghai Key Lab for Astrophysics\nShanghai Normal University\n200234ShanghaiChina\n",
"Yiping Shu \nPurple Mountain Observatory\nChinese Academy of Science\n210023NanjingChina\n",
"Ran Li \nNational Astronomical Observatories\nChinese Academy of Science\n100101BeijingChina\n\nInstitute for Frontiers in Astronomy and Astrophysics\nBeijing Normal University\n102206BeijingChina\n\nSchool of Astronomy and Space Science\nUniversity of Chinese Academy of Sciences\n100049BeijingChina\n",
"Zuhui Fan \nSouth-Western Institute for Astronomy Research\nYunnan University\n650500KunmingChina\n",
"Chenggang Shu \nShanghai Key Lab for Astrophysics\nShanghai Normal University\n200234ShanghaiChina\n"
] | [
"Shanghai Key Lab for Astrophysics\nShanghai Normal University\n200234ShanghaiChina",
"Shanghai Key Lab for Astrophysics\nShanghai Normal University\n200234ShanghaiChina",
"Purple Mountain Observatory\nChinese Academy of Science\n210023NanjingChina",
"National Astronomical Observatories\nChinese Academy of Science\n100101BeijingChina",
"Institute for Frontiers in Astronomy and Astrophysics\nBeijing Normal University\n102206BeijingChina",
"School of Astronomy and Space Science\nUniversity of Chinese Academy of Sciences\n100049BeijingChina",
"South-Western Institute for Astronomy Research\nYunnan University\n650500KunmingChina",
"Shanghai Key Lab for Astrophysics\nShanghai Normal University\n200234ShanghaiChina"
] | [] | With mock strong gravitational lensing images, we investigate the performance of broken power-law (BPL) model on the mass reconstruction of galaxy-scale lenses. An end-to-end test is carried out, including the creation of mock strong lensing images, the subtraction of lens light, and the reconstruction of lensed images. Based on these analyses, we can reliably evaluate how accurate the lens mass and source light distributions can be measured. We notice that, based on lensed images alone, only the Einstein radii (R E ) or the mean convergence within them can be well determined, with negligible bias (typically < 1%) and controllable uncertainty. Away from the Einstein radii, the radial and mean convergence profiles can hardly be constrained unless well-designed priors are applied to the BPL model. We find that, with rigid priors, the BPL model can clearly outperform the singular power-law models by recovering the lens mass distributions with small biases out to several Einstein radii (e.g., no more than 5% biases for the mean convergence profiles within 3 R E ). We find that the source light reconstructions are sensitive to both lens light contamination and lens mass models, where the BPL model with rigid priors still performs best when there is no lens light contamination. It is shown that, by correcting for the projection effect, the BPL model is capable of estimating the aperture and luminosity weighted line-of-sight velocity dispersions to an accuracy of ∼ 6%. These results further highlight the great potential of the BPL model in strong lensing related studies. | null | [
"https://export.arxiv.org/pdf/2302.04651v1.pdf"
] | 256,697,392 | 2302.04651 | d759e901c4abf20c3352d8a94983492be7084231 |
Mass Reconstruction of Galaxy-scale Strong Gravitational Lenses Using Broken Power-law Model
Draft version February 10, 2023
Wei Du
Shanghai Key Lab for Astrophysics
Shanghai Normal University
200234ShanghaiChina
Liping Fu
Shanghai Key Lab for Astrophysics
Shanghai Normal University
200234ShanghaiChina
Yiping Shu
Purple Mountain Observatory
Chinese Academy of Science
210023NanjingChina
Ran Li
National Astronomical Observatories
Chinese Academy of Science
100101BeijingChina
Institute for Frontiers in Astronomy and Astrophysics
Beijing Normal University
102206BeijingChina
School of Astronomy and Space Science
University of Chinese Academy of Sciences
100049BeijingChina
Zuhui Fan
South-Western Institute for Astronomy Research
Yunnan University
650500KunmingChina
Chenggang Shu
Shanghai Key Lab for Astrophysics
Shanghai Normal University
200234ShanghaiChina
Mass Reconstruction of Galaxy-scale Strong Gravitational Lenses Using Broken Power-law Model
Keywords: dark matter - galaxies: halos - galaxies: kinematics and dynamics - gravitational lensing: strong
With mock strong gravitational lensing images, we investigate the performance of the broken power-law (BPL) model for the mass reconstruction of galaxy-scale lenses. An end-to-end test is carried out, including the creation of mock strong lensing images, the subtraction of lens light, and the reconstruction of lensed images. Based on these analyses, we can reliably evaluate how accurately the lens mass and source light distributions can be measured. We notice that, based on lensed images alone, only the Einstein radii (R_E) or the mean convergence within them can be well determined, with negligible bias (typically < 1%) and controllable uncertainty. Away from the Einstein radii, the radial and mean convergence profiles can hardly be constrained unless well-designed priors are applied to the BPL model. We find that, with rigid priors, the BPL model can clearly outperform the singular power-law models by recovering the lens mass distributions with small biases out to several Einstein radii (e.g., no more than 5% biases for the mean convergence profiles within 3 R_E). We find that the source light reconstructions are sensitive to both lens light contamination and lens mass models, where the BPL model with rigid priors still performs best when there is no lens light contamination. It is shown that, by correcting for the projection effect, the BPL model is capable of estimating the aperture and luminosity weighted line-of-sight velocity dispersions to an accuracy of ~6%. These results further highlight the great potential of the BPL model in strong lensing related studies.
INTRODUCTION
Strong gravitational lensing (SL) has proven to be an important tool for learning about the universe because of its sensitivity to the geometry of the universe and the matter distribution therein, for example, by constraining cosmological parameters (Jullo et al. 2010; Collett et al. 2012; Cao et al. 2015; Linder 2016), testing gravity (Koopmans et al. 2009; Collett et al. 2018), and measuring the mass distributions of intervening objects (Shu et al. 2008; Coe et al. 2012; Gomer & Williams 2020; Chen et al. 2022). In addition, SL can magnify distant galaxies and help us look into their properties in more detail (e.g., Marshall et al. 2007; Newton et al. 2011).
* E-mail: [email protected]
† E-mail: [email protected]
In recent years, one of the most attention-getting measurements from SL observations is the constraint on the Hubble constant $H_0$ using time-delay cosmography (Suyu et al. 2013; Bonvin et al. 2017; Wong et al. 2020; Shajib et al. 2020). For instance, the $H_0$ Lenses in COSMOGRAIL's Wellspring (H0LiCOW) collaboration found $H_0 = 73.3^{+1.7}_{-1.8}$ km s$^{-1}$ Mpc$^{-1}$ from a joint analysis of six time-delay systems (Wong et al. 2020), in 3.1σ tension with Planck observations of the cosmic microwave background. Based on a single time-delay system with two sets of multiple images at different redshifts, Shajib et al. (2020) reported a 3.9 percent measurement of $H_0$. In light of these reported precisions, we have high hopes of achieving 1 percent precision in $H_0$ using tens of time-delay systems.
However, there is much debate over these reported high precisions of $H_0$ measurements, because there exist many lensing degeneracies that make it very difficult to accurately recover the mass distributions along the line-of-sight (LoS) (e.g., Schneider & Sluse 2013; Xu et al. 2016; Sonnenfeld 2018; Kochanek 2020, 2021; Millon et al. 2020). Inaccurate reconstruction of the lensing mass distributions, especially of the main lens, which dominates the lensing potential along the LoS, may lead to uncontrollable systematics in the $H_0$ estimation.
Among the lensing degeneracies, the most famous one is the mass-sheet degeneracy (MSD, Falco et al. 1985), which indicates that the lensed images produced by a convergence profile $\kappa(\theta)+\kappa_{\rm ext}$ can be totally recovered by a transformed profile $(1-\bar{\kappa}_{\rm ext})\,\kappa(\theta)/(1-\kappa_{\rm ext})+\bar{\kappa}_{\rm ext}$ with a corresponding rescaling of the source plane, where $\kappa(\theta)$, $\kappa_{\rm ext}$, and $\bar{\kappa}_{\rm ext}$ represent the convergence profile of the main lens, the actual external convergence, and a guess about $\kappa_{\rm ext}$, respectively. Since the external convergence is not a direct observable, it is usually very hard to quantify with high accuracy (Treu et al. 2009; Suyu et al. 2010; Guimarães & Sodré 2011; Tihhonova et al. 2020), thus leading to large uncertainties in the determination of mass distributions and hence of $H_0$.
The MSD, as we know, is a special case of the source-position transformation (SPT, Schneider & Sluse 2014). The SPT refers to the fact that different combinations of lens mass and source light distributions can produce very similar or indistinguishable lensed images (Schneider & Sluse 2013; Unruh et al. 2017). Schneider & Sluse (2013) illustrated the impact of the SPT on the determination of $H_0$, and showed that the predicted $H_0$ can deviate by ~20% if the lensed images produced by a composite lens are fitted by the singular power-law (SPL) model.
In addition to the MSD or SPT, there are more lensing degeneracies, such as the monopole degeneracy in regions without lensed images (Saha 2000;Liesenborgs & De Rijcke 2012) and the local degeneracies just around the lensed images (Gorenstein et al. 1988;Wagner 2018).
These degeneracies can bring about more uncertainties in the determination of lens mass distributions as well as $H_0$. Actually, not only $H_0$, but many other quantities relevant to SL analyses, e.g., cosmological distance ratios and the parameterized post-Newtonian parameter $\gamma_{\rm PPN}$ (e.g., Schwab et al. 2010), are vulnerable to the above-mentioned lensing degeneracies.
In view of these degeneracies, a question arises about what quantities can be determined faithfully by lensed images. Usually, the Einstein radius is deemed to be such a quantity (Schneider et al. 2006; Treu 2010). Its measurement error is typically of the order of a few percent (Bolton et al. 2008; Shu et al. 2015, 2016; Shajib et al. 2021; Etherington et al. 2022). However, there are also works showing surprising results. Recently, based on simulated SL systems with mass distributions inferred from SDSS-MaNGA stellar dynamics data (Li et al. 2019), Cao et al. (2022) concluded that the Einstein radius can be recovered with 0.1% accuracy. On the other hand, Mukherjee et al. (2018) found a large scatter of ~20% in Einstein radius estimation when comparing SIE model fittings to direct convergence fittings, where the mock lensed images were created using galaxies from the EAGLE simulations (Schaye et al. 2015).
The other quantity, which can be constrained independently of lens mass models, is
\[
\xi_2 = \frac{R_{\rm E}\,\alpha''_{\rm E}}{1-\kappa_{\rm E}},
\]
where $\alpha''_{\rm E}$ is the second-order radial derivative of the deflection profile at the Einstein radius $R_{\rm E}$ and $\kappa_{\rm E}$ is the convergence at $R_{\rm E}$ (Sonnenfeld 2018; Kochanek 2020, 2021; Birrer 2021). For axisymmetric or moderately elliptical lenses, $R_{\rm E}$ and $\xi_2$ are the two parameters that lensed images can constrain reliably. Sonnenfeld (2018) argued that, in order to avoid over-constraining the lens mass distributions, a lens model should have at least 3 degrees of freedom in the radial direction. The SPL model with only two radial parameters should be abandoned if higher accuracy is needed in the determination of cosmological parameters, as suggested by Kochanek (2020).
Nevertheless, only when lens mass distributions are reconstructed more accurately can the relevant cosmological parameters be estimated more reliably. In SL analyses, a lens model is preferred if its deflection field can be computed analytically. Many spherical models meet this requirement (see the list of lens models in Keeton 2001), while only a handful of elliptical models have analytic expressions for the deflections, e.g., the softened power-law (Wertz & Surdej 2014), three-dimensional broken power-law (BPL, Du et al. 2020), and two-dimensional BPL (O'Riordan et al. 2021) models. The commonly adopted singular isothermal ellipsoid (SIE, Kassiola & Kovner 1993; Keeton & Kochanek 1998) and SPL models (Tessore & Metcalf 2015) are special cases of the above-mentioned elliptical models.
Among the analytical lens mass models, the BPL model proposed by Du et al. (2020) is a flexible model with four degrees of freedom in the radial direction which can describe not only the mass distributions of lenses with flat cores, but also with steep cusps. Furthermore, it can also fit well the well-known NFW (Navarro et al. 1997) and Einasto (Einasto 1965) profiles within sufficiently large radii. In this paper, we concentrate on this BPL model and, based on simulated SL systems, look into how accurate the lens mass distributions can be recovered by fitting the lensed images.
The rest of this paper is organized as follows. In Section 2, we briefly review the basics about BPL model. In Section 3, we show the creation of mock lensing observations, which are used to evaluate the BPL model as well as the SIE and SPL models. We describe in detail the extraction and reconstruction of lensed images in Section 4 and investigate the necessary priors in lens mass modeling. Results are presented in Section 5. Conclusion and discussions are given in the last section.
THE BPL MODEL
In this section, we briefly review the basics of the BPL model, including its density profiles and deflections. We also show the formalism for estimating the line-of-sight velocity dispersions (LOSVDs) observed with single-fiber spectroscopy, i.e., the aperture and luminosity (AL) weighted LOSVDs, which provide important dynamical information for calibrating the lensing mass distributions.
The volume and surface density profiles
The volume density profile of the BPL model is expressed by
\[
\rho(r) = \begin{cases} \rho_c\,(r/r_c)^{-\alpha_c}, & r \le r_c, \\ \rho_c\,(r/r_c)^{-\alpha}, & r \ge r_c, \end{cases} \tag{1}
\]
where α c and α are respectively the inner and outer slopes, and ρ c is the volume density at break radius r c . By integrating ρ(r) along the line-of-sight, one obtains the surface mass density profile Σ(R), and then has the convergence profile κ(R) = Σ(R)/Σ crit with
\[
\Sigma_{\rm crit} = \frac{c^2}{4\pi G}\,\frac{D_s}{D_d\,D_{ds}}, \tag{2}
\]
known as the critical surface mass density in lensing analyses, where D d , D s , and D ds are the angular diameter distances to the lens deflector, to the source, and from the deflector to the source, respectively. As shown by Du et al. (2020), κ(R) can be expressed in terms of two parts as
\[
\kappa(R) = \kappa_1(R) + \kappa_2(R), \tag{3}
\]
with
\[
\kappa_1(R) = \frac{3-\alpha}{2}\left(\frac{b}{R}\right)^{\alpha-1} \tag{4}
\]
corresponding to a single power-law part and
\[
\kappa_2(R) = \frac{3-\alpha}{B(\alpha)}\left(\frac{b}{r_c}\right)^{\alpha-1} \bar{z}\left[ F\!\left(\frac{\alpha_c}{2},1;\frac{3}{2};\bar{z}^2\right) - F\!\left(\frac{\alpha}{2},1;\frac{3}{2};\bar{z}^2\right)\right] \tag{5}
\]
a complementary part within $r_c$, which is a mass deficit for $\alpha_c < \alpha$ or a mass surplus for $\alpha_c > \alpha$, where $B(\alpha) = {\rm Beta}\!\left(\frac{1}{2},\frac{\alpha-1}{2}\right)$, $\bar{z} = \sqrt{1 - R^2/r_c^2}$, $F(\,)$ denotes the Gauss hypergeometric function, and $b$ is a scale radius defined by
\[
b^{\alpha-1} = \frac{B(\alpha)}{\Sigma_{\rm crit}}\,\frac{2}{3-\alpha}\,\rho_c\,r_c^{\alpha}. \tag{6}
\]
Note that $\kappa_2(R)$ is zero when $R \ge r_c$ or $\alpha_c = \alpha$.
In order to describe surface mass distributions which are elliptically symmetric, the circular radius $R$ in $\kappa(R)$ can be generalized to the elliptical radius $R_{\rm el} = \sqrt{q x^2 + y^2/q}$, where $q$ is the axis ratio of the elliptical isodensities. In this case, the area enclosed by $R_{\rm el}$ is $\pi R_{\rm el}^2$, independent of $q$. In view of this advantage, we can thus define an effective Einstein radius of a lens as the elliptical radius within which the mean convergence is unity.
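For reference, the circular ($q = 1$) convergence of Equations (3)-(6) can be evaluated directly with SciPy's Gauss hypergeometric function. The following Python sketch is ours (the function and argument names are not from the paper); it assumes R > 0 and that all radii are expressed in the same units.

import numpy as np
from scipy.special import beta, hyp2f1

def bpl_convergence(R, b, rc, alpha, alpha_c):
    """Circular BPL convergence kappa(R) = kappa_1(R) + kappa_2(R), Eqs. (3)-(6)."""
    R = np.atleast_1d(np.asarray(R, dtype=float))
    Balpha = beta(0.5, (alpha - 1.0) / 2.0)        # B(alpha) = Beta(1/2, (alpha-1)/2)
    k1 = 0.5 * (3.0 - alpha) * (b / R) ** (alpha - 1.0)
    k2 = np.zeros_like(R)
    inside = R < rc                                # kappa_2 vanishes for R >= rc
    zbar = np.sqrt(1.0 - (R[inside] / rc) ** 2)
    k2[inside] = ((3.0 - alpha) / Balpha * (b / rc) ** (alpha - 1.0) * zbar
                  * (hyp2f1(alpha_c / 2.0, 1.0, 1.5, zbar ** 2)
                     - hyp2f1(alpha / 2.0, 1.0, 1.5, zbar ** 2)))
    return k1 + k2

For alpha_c = alpha the complementary term vanishes and the profile reduces to a singular power law, which provides a simple sanity check.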
The deflection angles
In complex notation, the lens equation is
\[
z_s = z - \alpha(z), \tag{7}
\]
which relates the true source position $z_s = x_s + i y_s$ on the source plane to its observed position $z = x + i y$ on the lens plane (please do not confuse the complex numbers $z$ in this subsection with the redshift symbols in other sections), where $\alpha(z) = \alpha_x + i\alpha_y$ is the scaled deflection angle caused by the intervening lens (Schneider et al. 2006; Tessore & Metcalf 2015). If the lens mass distribution is elliptically symmetric, the deflection angle $\alpha(z)$ at position $z$ can be evaluated by
\[
\alpha^*(z) = \frac{2}{z}\int_0^{R_{\rm el}} \frac{\kappa(R)\,R\,{\rm d}R}{\sqrt{1-\zeta^2 R^2}}, \tag{8}
\]
where the symbol $*$ denotes the complex conjugate, $\zeta^2 = (1/q - q)/z^2$, and $R_{\rm el}$ is the elliptical radius of the ellipse passing through the position $z$ (Bourassa & Kantowski 1975; Bray 1984). As presented in Du et al. (2020), by substituting Equation (3) into Equation (8), we have the deflections $\alpha^*(z) = \alpha_1^*(z) + \alpha_2^*(z)$ for the BPL model, with
\[
\alpha_1^*(z) = \frac{R_{\rm el}^2}{z}\left(\frac{b}{R_{\rm el}}\right)^{\alpha-1} F\!\left(\frac{1}{2},\frac{3-\alpha}{2};\frac{5-\alpha}{2};\zeta^2 R_{\rm el}^2\right) \tag{9}
\]
for the power-law part $\kappa_1$ and
\[
\alpha_2^*(z) = \frac{r_c^2}{z}\,\frac{3-\alpha}{B(\alpha)}\left(\frac{b}{r_c}\right)^{\alpha-1}\left[ \frac{2}{3-\alpha_c}\,F\!\left(\frac{3-\alpha_c}{2},\,C\right) - \frac{2}{3-\alpha}\,F\!\left(\frac{3-\alpha}{2},\,C\right) - S_0(\alpha,\alpha_c,\bar{z}_{\rm el},C) \right] \tag{10}
\]
for the complementary part $\kappa_2$, where $C = r_c^2\,\zeta^2$, $\bar{z}_{\rm el} = \sqrt{1 - R_{\rm el}^2/r_c^2}$, and the functions $F$ and $S_0$ can be written in terms of the Gauss hypergeometric functions$^1$. Please refer to subsection 2.2 in Du et al. (2020) for detailed expressions of $F$ and $S_0$. Note that $S_0$ disappears when $R_{\rm el} \ge r_c$ or $\alpha_c = \alpha$.
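A minimal sketch of the power-law part of the deflection, Equations (8)-(9), is given below. It is ours, not the authors' implementation: it assumes the lens axes are aligned with the coordinate axes, z ≠ 0, and a SciPy version recent enough for hyp2f1 to accept complex arguments (cf. footnote 1).

import numpy as np
from scipy.special import hyp2f1

def bpl_deflection_pl(x, y, b, alpha, q):
    """Complex deflection of the power-law part kappa_1, Eq. (9)."""
    z = x + 1j * y
    Rel = np.sqrt(q * x ** 2 + y ** 2 / q)        # elliptical radius through (x, y)
    zeta2 = (1.0 / q - q) / z ** 2                # zeta^2 as defined after Eq. (8)
    astar = (Rel ** 2 / z) * (b / Rel) ** (alpha - 1.0) \
            * hyp2f1(0.5, (3.0 - alpha) / 2.0, (5.0 - alpha) / 2.0, zeta2 * Rel ** 2)
    return np.conj(astar)                         # Eq. (9) gives alpha*, so conjugate back

For alpha = 2 this should reproduce the familiar SIE deflection, a convenient numerical cross-check.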
The AL-weighted LOSVDs
In addition to testing the accuracy of mass measurements of galaxy-scale lenses, we are also interested in the accuracy of AL-weighted LOSVDs predicted from the reconstructed mass distributions, which can help us calibrate the lens mass distributions.
As we know, based on the spherical Jeans equation, the AL-weighted LOSVD for a galaxy with constant velocity anisotropy can be modeled as
\[
\sigma^2 = \frac{\int_0^{\infty} {\rm d}R\, R\, w(R) \int_{-\infty}^{\infty} {\rm d}Z\, j(r)\,\left(1-\beta R^2/r^2\right)\sigma_r^2(r)}{\int_0^{\infty} {\rm d}R\, R\, w(R) \int_{-\infty}^{\infty} {\rm d}Z\, j(r)}, \tag{11}
\]
where $\sigma_r^2(r)$
is the radial velocity dispersion for stars, β denotes the global velocity anisotropy, j(r) is the 3D luminosity density profile, and w(R) is a weighting function accounting for the fiber size and seeing effect for ground-based spectroscopic observations. By assuming w(R) follows a Gaussian distribution (Schwab et al. 2010), i.e.,
\[
w(R) \approx \exp\left(-\frac{R^2}{2\sigma_{\rm fib}^2}\right), \tag{12}
\]
where σ fib is a function of seeing and fiber size, Du et al. (2020) found that Equation (11) can be transformed into
\[
\sigma^2 = \frac{\int_0^{\infty} {\rm d}r\, r^2 j(r)\,\sigma_r^2(r)\left[\Phi\!\left(1,\tfrac{3}{2};-\tfrac{r^2}{2\sigma_{\rm fib}^2}\right) - \tfrac{2\beta}{3}\,\Phi\!\left(2,\tfrac{5}{2};-\tfrac{r^2}{2\sigma_{\rm fib}^2}\right)\right]}{\int_0^{\infty} {\rm d}r\, r^2 j(r)\,\Phi\!\left(1,\tfrac{3}{2};-\tfrac{r^2}{2\sigma_{\rm fib}^2}\right)} \tag{13}
\]
with only 1D integrals, where Φ(a 1 , a 2 ; −x) is the Kummer's confluent hypergeometric function. We adopt the power-law Sérsic (PL-Sérsic) profile to describe the light (or stellar mass) density profiles (Terzić & Graham 2005), which is written as
\[
j(r) = \begin{cases} j_c\,(r/r_c)^{-\alpha_c}, & r \le r_c, \\ j_0\,(r/s)^{-u}\exp\left[-\left(r/s\right)^{\nu}\right], & r \ge r_c, \end{cases} \tag{14}
\]
where
\[
j_0 = j_c \left(\frac{r_c}{s}\right)^{u} \exp\left[\left(\frac{r_c}{s}\right)^{\nu}\right], \tag{15}
\]
$j_c$ is the luminosity density at the break radius $r_c$, $s = R_{\rm eff}/k^n$ is a scale radius defined by the 2D effective radius $R_{\rm eff}$ of the Sérsic profile and the Sérsic index $n$ ($k$ here is a function of $n$, and its expression can be found in Ciotti & Bertin 1999 and MacArthur et al. 2003), $\nu = 1/n$, and $u = 1 - 0.6097\nu + 0.054635\nu^2$ (Lima Neto et al. 1999; Márquez et al. 2001).

1 In this paper, the Gauss hypergeometric function is computed using the function scipy.special.hyp2f1 in Python; the latest version of the SciPy library is recommended, since it solves the divergence problem of hyp2f1 in some regions.
Given the BPL mass model and the PL-Sérsic light profile, the radial velocity dispersion $\sigma_r^2(r)$ can be calculated analytically. Please refer to Equations (41)-(45) in Du et al. (2020) for the analytical expressions of $\sigma_r^2(r)$, where the PL-Sérsic profile is assumed to have the same break radius $r_c$ and inner slope $\alpha_c$ as the BPL model.
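Given tabulated j(r) and σ_r²(r), the 1D form of Equation (13) reduces to straightforward quadrature with Kummer's confluent hypergeometric function from SciPy. The sketch below is ours (argument names are assumptions) and presumes the radial grid extends far enough for the integrands to have decayed.

import numpy as np
from scipy.special import hyp1f1
from scipy.integrate import simpson

def al_weighted_sigma2(r, j, sig2_r, beta_ani, sigma_fib):
    """AL-weighted LOSVD^2 of Eq. (13) by quadrature on a radial grid r."""
    arg = -r ** 2 / (2.0 * sigma_fib ** 2)
    phi1 = hyp1f1(1.0, 1.5, arg)                  # Phi(1, 3/2; -r^2/2sigma_fib^2)
    phi2 = hyp1f1(2.0, 2.5, arg)                  # Phi(2, 5/2; -r^2/2sigma_fib^2)
    num = simpson(r ** 2 * j * sig2_r * (phi1 - 2.0 * beta_ani / 3.0 * phi2), x=r)
    den = simpson(r ** 2 * j * phi1, x=r)
    return num / den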
MOCK LENSING OBSERVATIONS
In this section, we first display some basics about observed SL systems which are used for reference, and then briefly describe the Illustris simulation from which a sample of galaxies are selected as mock lenses. In the third subsection, we show the generation of hundreds of mock SL systems with single or multiple exposures.
Observational data
In order to produce realistic lensing images, we refer to the lenses detected by the Sloan Lens ACS (SLACS) and the SLACS for the mass (S4TM) surveys (Bolton et al. 2008;Shu et al. 2015). The SL candidates for these two surveys are first identified by taking advantage of the galaxy spectra from SDSS. And then, follow-up imaging observations are performed with the Advanced Camera for Surveys (ACS) aboard the Hubble Space Telescope (HST).
By visually inspecting the high-resolution HST images, the SL candidates are classified into different classes, where the ones with clear and definite lensing features are termed "grade-A" lenses. In the following analyses, we use the grade-A systems with only one dominant lens for reference. In total, we have 63 SLACS lenses (20 with one exposure and 43 with multiple exposures) and 38 S4TM lenses (all with one exposure), most of which are elliptical galaxies. Hereafter, we refer to both the SLACS and S4TM surveys simply as the SLACS surveys. Shu et al. (2015) estimated the stellar masses of these lenses by scaling single stellar population models to fit their HST F814W photometry. It is thus expected that a scaling relation may exist between the observed total number of photon-excited electrons $N_e$ and the stellar masses $M_\star$ for these lenses. By adopting the favored stellar masses inferred with the Chabrier stellar initial mass function (Chabrier 2003), we find that $N_e$ can be connected with $M_\star$ simply by
\[
N_e \approx \frac{0.2\,M_\star}{4\pi D_L^2\,(1+z)^{0.5}}\ {\rm e^-/s}, \tag{16}
\]
where $D_L$ is the luminosity distance in units of Mpc and $M_\star$ is in units of $M_\odot$. Using this scaling relation, we can thus assign a total number of electrons to a lens galaxy to mimic HST-like observations in an easy way. Figure 1 illustrates some basic information about the 101 lenses (63 SLACS and 38 S4TM lenses). The left panel shows the redshift distribution of the lenses. The middle panel presents the stellar mass distributions of the real lenses (black histogram) and mock lenses (red histogram). The last panel illustrates the fit of Equation (16) to the ratio of $N_e$ and $M_\star$ as a function of redshift.
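For completeness, the mock-photometry step implied by Equation (16) is a one-liner; the function below is a sketch with our own naming, taking M⋆ in units of M⊙ and D_L in Mpc.

import math

def electrons_per_sec(mstar_msun, DL_mpc, z):
    """Total detected e-/s assigned to a mock lens via the empirical Eq. (16)."""
    return 0.2 * mstar_msun / (4.0 * math.pi * DL_mpc ** 2 * (1.0 + z) ** 0.5)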
The Illustris simulation
The Illustris project runs a set of large-scale hydrodynamic simulations accounting for various kinds of baryonic physics, including gas cooling; star formation and evolution; feedback from active galactic nuclei, supernovae, and supermassive black holes; and so on (Vogelsberger et al. 2014; Nelson et al. 2015). It assumes a flat ΛCDM cosmology with $\Omega_m = 0.2726$, $\Omega_\Lambda = 0.7274$, $\Omega_b = 0.0456$, $\sigma_8 = 0.809$, $n_s = 0.963$, and $h = 0.704$. Among the simulation runs, the one with the highest resolution is named the Illustris-1 simulation, from which we extract massive galaxies to generate mock SL systems.
Specifically, the Illustris-1 simulation follows $1820^3$ dark matter particles and initially $1820^3$ hydrodynamical cells in a periodic box of size 75 $h^{-1}$ Mpc. The mass of a dark matter particle is $\sim 6.26 \times 10^6\,M_\odot$, and the baryonic matter has an initial mass resolution of $\sim 1.26 \times 10^6\,M_\odot$. At the end of the simulation, the minimum mass of a baryonic matter particle reaches a few times $10^4\,M_\odot$. The simulation assigns different softening lengths to different types of particles. For example, the softening length is fixed to a comoving value of 1 $h^{-1}$ kpc for dark matter particles and is larger than 0.5 $h^{-1}$ kpc in physical scale for stars and black holes. For gas cells, an adaptive softening length is defined according to the fiducial cell size, with a minimum equal to that used for the collisionless baryonic particles.
Generation of mock observations
Using the Illustris-1 simulation at redshift zero, we extract 5343 galaxies with stellar mass larger than $10^{10}\,h^{-1}\,M_\odot$. These galaxies are artificially put at redshift $z_d \simeq 0.178$, which is close to the median redshift of the SLACS lenses (see the vertical dashed line shown in the left panel of Figure 1). At this redshift, 1″ corresponds to 3 kpc for the cosmology adopted by the Illustris project. Based on the thin lens approximation, we project the 3D mass distributions of the galaxies along the x-direction of the simulation box onto the 2D lens plane at $z_d \simeq 0.178$ to analyze their lensing effect. In the process of projection, all the matter components are considered, including the dark matter, stars, gas, and black holes. We assume the source plane is at redshift $z_s = 0.6$, which is around the median redshift of the background sources of the SLACS lensing systems. In this case, the critical surface mass density is $\Sigma_{\rm crit} \simeq 4.0 \times 10^{15}\,M_\odot/{\rm Mpc}^2$ for the Illustris-adopted cosmology.
We define the "true" Einstein radius for each galaxy as the radius of a circle within which the mean convergence is unity. Based on these true Einstein radii, we find that only a small fraction of the galaxies can produce noticeable lensed images. For example, there are only ~350 galaxies whose Einstein radii are larger than 0″.5. With the purpose of generating SL systems whose statistics are similar to those of the SLACS survey, we choose 283 galaxies with Einstein radii in the range of 0″.6 to 2″ as the main sample for further analyses, of which about 85% are elliptical galaxies.
Based on the 283 galaxies, we first pixelize their projected overall and stellar mass distributions using the triangular shaped cloud algorithm (Hockney & Eastwood 1981), respectively, with a resolution of 0″.05. We then get 283 convergence maps by scaling the projected overall mass density distributions with $\Sigma_{\rm crit}$. By resorting to Equation (16) and assuming a constant mass-to-light ratio, the light distributions in units of e−/s can thus be derived for the lenses from their stellar mass distributions.
As we know, the deflection angle can be expressed as a convolution product between the convergence map $\kappa(\vec{x})$ and the kernel $\vec{x}/|\vec{x}|^2$. Given a surface mass distribution sampled on a regular grid, it is easy to derive the deflection field in Fourier space by applying the convolution theorem. For the 283 galaxy lenses, their deflection angles at the grid points are evaluated in this way, where the FoV of the convergence map for each lens is large enough to cover its entire halo with zero-padding. By tracing the light rays back to the source plane based on the lens equation, we can construct lensed images of background sources. SL images without photometric noise can thus be obtained by directly adding together the light distributions of the foreground galaxies and the lensed images. In this paper, we assume each background source is a faint and compact galaxy which follows a 2D Sérsic profile
\[
I(R) = I_0 \exp\left[-k\left(\frac{R}{R_{\rm eff}}\right)^{\nu}\right]
\]
with $I_0 = 0.5$ e−/s/pixel, $R_{\rm eff} = 0″.15$ (about 1 kpc at redshift $z_s = 0.6$), and Sérsic index $n = 1$. In this case, the corresponding apparent magnitude is ~23.5 for the background sources. The properties of the source galaxies considered here are roughly consistent with those found by Newton et al. (2011).
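A minimal FFT-based evaluation of the deflection field from a pixelized convergence map, as described above, might look as follows. This sketch is ours: it adopts α(x) = (1/π) ∫ κ(x′)(x − x′)/|x − x′|² d²x′, zero-pads the map to suppress the periodicity of the discrete convolution, and indexes arrays as [y, x].

import numpy as np

def deflection_from_kappa(kappa, pix):
    """Deflection maps (alpha_x, alpha_y) from a square convergence map."""
    n = kappa.shape[0]
    npad = 2 * n                                  # zero-padding against wrap-around
    kpad = np.zeros((npad, npad))
    kpad[:n, :n] = kappa
    off = np.fft.fftfreq(npad) * npad * pix       # signed pixel offsets, FFT ordering
    X, Y = np.meshgrid(off, off, indexing='xy')
    R2 = X ** 2 + Y ** 2
    R2[0, 0] = 1.0                                # dummy; the self-term is zeroed below
    kern_x, kern_y = X / R2, Y / R2
    kern_x[0, 0] = kern_y[0, 0] = 0.0
    fk = np.fft.fft2(kpad)
    norm = pix ** 2 / np.pi                       # pixel area times the 1/pi prefactor
    ax = np.real(np.fft.ifft2(fk * np.fft.fft2(kern_x))) * norm
    ay = np.real(np.fft.ifft2(fk * np.fft.fft2(kern_y))) * norm
    return ax[:n, :n], ay[:n, :n]

Ray tracing then follows from the lens equation, beta = theta - alpha, evaluated on the image-plane grid.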
More specifically, the position angle of each source galaxy is randomly assigned in the range of 0 to π, and its axis ratio is in the range of 0.2 to 1. We randomly place a source galaxy on the source plane around the center of the foreground lens with a scatter of 0″.2 in both the x and y directions. A lensed image is kept as a candidate if its magnification is larger than 5 and the number of pixels in the feature mask is more than 500 after a certain number of trials. The feature mask is defined in the absence of photometric noise: a pixel in the lensing image is marked as a masked pixel if its light intensity is 5% brighter than that of the lens itself. It is found that the feature mask defined in this way is roughly consistent with (or somewhat larger than) the region visually identified for the lensing features. Note that, in practice, we cannot determine the feature mask so straightforwardly, because we do not know exactly the true lensed images due to the existence of lens light contamination.
With the aim of producing more realistic lensing images, photometric noise is considered, e.g., a sky background of 0.11 e−/s and the Poisson noise on each pixel. The effect of the point spread function (PSF) is also taken into account by randomly applying the PSFs of SLACS images to the mock images. Bad pixels due to cosmic rays are added according to the bad pixel distributions in HST images. We also account for a readout noise of 5 e− for each pixel. By default, the exposure time is 420 s for each lensing image, corresponding to the single exposure time of the SLACS survey. For comparison, we also generate SL images with four exposures (2200 s in total). The left panels of Figure 2 show two examples of mock SL images with a single exposure.
It should be mentioned that, owing to the existence of large flat cores in the Illustris galaxies, arising from the softening effect in numerical simulations, most of the pure lensed images in our mock catalog exhibit lensing features in or toward the lens centers. This may not be the case for real SL images. Furthermore, due to the contamination of the foreground lens light, the central images may disappear or can hardly be detected in the lensing images. The absence of central images may preclude accurate measurements of the inner density profiles.
Based on the kinematics of stellar particles, Du et al. (2020) have calculated the AL-weighted LOSVDs for the Illustris galaxies adopted here, where the galaxies are assumed to be at redshift $z_d = 0.178$ and observed by SDSS-like spectroscopy with a fiber radius of 1″.5 and in a seeing condition of 1″.69. We take the AL-weighted LOSVDs evaluated by Du et al. (2020) directly as the true values in the following analyses.
LENSING MASS RECONSTRUCTION
This section includes two parts. One is for the subtraction of foreground lens light distributions using the B-spline technique, and the other is for the reconstruction of lensed images, where priors on mass distributions are investigated in detail.
Subtraction of lens galaxy light
For most galaxy-scale lenses, the Einstein radii are typically smaller than the half-light radii, making it difficult to clearly identify the relatively faint lensed images. In order to subtract the foreground light distributions, parametric light profiles, e.g., Sérsic or double Sérsic profiles (Bolton et al. 2008; Shajib et al. 2019; Birrer et al. 2020; Etherington et al. 2022), are usually adopted as fitting models. However, these profiles with a limited number of parameters may result in undesired residuals, especially for light distributions with complex angular structures.
In this paper, we use the B-spline technique, which is a more general model first investigated by , to fit the foreground light distributions. The B-spline fitting model can be written as
\[
I_{\rm bsp}(R_{\rm el},\theta) = \sum_{m,k}\left[b_{mk}\cos(m\theta) + c_{mk}\sin(m\theta)\right] f_k(R_{\rm el}), \tag{17}
\]
where $f_k(R_{\rm el})$ are the radial basis functions of the elliptical radius $R_{\rm el}$, and $m$ denotes the multipole order in the θ-direction. $b_{mk}$ and $c_{mk}$ are the coefficients, which can be determined by fitting the light distributions in a least-squares sense.
For the B-spline technique, the lensing features need to be estimated and masked in advance so as to obtain unbiased fits to the lens light distributions (see Section 3.3 for the definition of the feature mask). After masking the estimated lensing features, we first fit the monopole term (i.e., m = 0) of the light distributions to estimate their centers, ellipticities, and position angles. With the centers, ellipticities, and position angles fixed, the light distributions are fitted again by adding higher-order even multipoles with m = [2, 4, 6, 8]. If there are obvious odd features left in the residual for a lens after subtracting the even multipoles, we further add the odd terms with m = [1, 3, 5, 7] and fit the lens light distribution once more. If the extracted lensing features cannot be clearly recognized no matter what types of multipole terms are included, the corresponding lens system is discarded.
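Because Equation (17) is linear in the coefficients b_mk and c_mk once the geometry (center, q, position angle) and the radial knots are fixed, each fitting stage reduces to linear least squares. The sketch below is our own construction (the cubic B-spline radial bases and all names are assumptions) and only illustrates how such a design matrix can be built on the unmasked pixels.

import numpy as np
from scipy.interpolate import BSpline

def fit_bspline_light(x, y, img, mask, q, phi, knots, m_list=(0, 2, 4, 6, 8)):
    """Least-squares fit of Eq. (17): angular multipoles times radial B-splines."""
    xr = np.cos(phi) * x + np.sin(phi) * y        # rotate to the ellipse frame
    yr = -np.sin(phi) * x + np.cos(phi) * y
    Rel = np.sqrt(q * xr ** 2 + yr ** 2 / q)
    theta = np.arctan2(yr, xr)
    k = 3                                          # cubic radial bases
    t = np.r_[[knots[0]] * k, knots, [knots[-1]] * k]
    nbas = len(t) - k - 1
    cols = []
    for jb in range(nbas):
        c = np.zeros(nbas)
        c[jb] = 1.0
        fj = np.nan_to_num(BSpline(t, c, k, extrapolate=False)(Rel))
        for m in m_list:
            cols.append(np.cos(m * theta) * fj)
            if m > 0:
                cols.append(np.sin(m * theta) * fj)
    A = np.stack(cols, axis=-1)[mask]             # design matrix on unmasked pixels
    coef, *_ = np.linalg.lstsq(A, img[mask], rcond=None)
    return coef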
Finally, we have 257 "grade-A" lenses left for the single-exposure catalog, where 18 lenses show clearer lensing features when the odd multipole terms are included in the B-spline fittings. For the mock catalog with four exposures, almost the same systems are left, indicating that more exposures may not help us identify more "grade-A" galaxy-scale lenses, because of the significant contamination from the foreground lens light. We thus use these 257 lenses, with both single and multiple exposures, in the following analyses. Figure 2 shows two examples of B-spline fittings to the single-exposure images. The first column displays the SL images with the foreground lens light distributions. It is noticed that, for galaxy-scale lenses, the foreground lens light may blur the lensed images significantly and make it difficult to predefine the feature mask. The second column shows the residuals after subtracting the monopole (m = 0) term of the light distributions. We can see obvious angular structures for these two lenses. By further subtracting the even terms, as shown in the third column, the angular structures almost disappear for the first lens. However, there still exist noticeable odd features for the second lens, which can be largely reduced by further accounting for the odd terms in the B-spline fitting, as demonstrated by the fourth column. For reference, we show in the last column the true lensing features convolved with the PSF. It is found that, for these two examples, the extracted lensing features are fairly consistent with the true ones.
Reconstruction of lensed images
4.2.1. χ 2 definition
In this paper, we use the forward modeling to reconstruct the lensed images, i.e., modeling the lens mass distributions by the BPL model and the source light distributions by the Sérsic profile. For comparisons, the SIE and SPL lens mass models are also investigated.
The χ 2 for the fittings to the extracted lensing features is defined according to
\[
\chi^2 = \frac{\sum_{i=1}^{N}\left(I_{\rm data} - I_{\rm bsp} - I_{\rm model}\right)_i^2\, w_i^2}{\frac{1}{N}\sum_{j=1}^{N}\left(I_{\rm data} - I_{\rm bsp}\right)_j^2\, w_j^2}, \tag{18}
\]
where $I_{\rm data}$, $I_{\rm bsp}$, and $I_{\rm model}$ denote the light distribution of the "observed" lensing image, the B-spline fit to the foreground light, and the model of the lensed image convolved with the PSF, respectively. The index $i$ indicates the i-th pixel in a suitable field of view, and $j$ runs over the pixels adopted for the B-spline fittings, which exclude the pixels in the feature mask. The weight $w$ is the reciprocal of the Poisson noise error, which is proportional to the square root of the total exposure time in a pixel. Note that, for single-exposure images, null weights are assigned to the bad pixels. In order to avoid possible over- or under-fitting of the B-spline technique, the denominator in Equation (18) is applied to scale the weight for each pixel systematically and to make the $\chi^2$ definition reasonable. We employ the Python module emcee, an affine-invariant Markov chain Monte Carlo (MCMC) ensemble sampler, to estimate the optimal values of the parameters (Foreman-Mackey et al. 2013). The natural logarithm of the posterior probability used for emcee is defined as
\[
\ln P(\Theta|I_{\rm data}) = -\frac{1}{2}\chi^2 + \ln p(\Theta) + {\rm Const.}, \tag{19}
\]
where p(Θ) is the prior function for a set of parameters.
Figure 2. The second and third columns present the residual images after subtracting the B-spline fittings with monopole and even multipole terms, respectively. The fourth column shows the residual images after further subtracting the possible odd multipole terms. The rightmost column displays the intrinsic lensing features convolved with the PSF. Note that the resolution of all these maps is 0″.05, and the bad pixels are caused by cosmic rays.

In order to avoid possible local maxima in parameter space, we run the MCMC sampling twice. The first run aims to find an initial guess of the parameters for the
second run, which are taken to be the values resulting in the maximum posterior probability in the first run. For both runs, a sufficiently large number of burn-in steps with 500 walkers is set up to make sure the remaining samplings are in equilibrium. After the second run, we finally have 200,000 accepted values saved for each parameter.
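The two-stage sampling described above can be reproduced schematically with emcee. In the toy sketch below (ours), a trivial two-parameter χ² stands in for Equation (18); the 500 walkers, the restart around the first-run optimum, and the 200,000 retained samples follow the text.

import numpy as np
import emcee

rng = np.random.default_rng(1)
xgrid = np.linspace(0.0, 1.0, 50)                 # toy data for a stand-in chi^2
fake = 1.3 * xgrid + 0.7 + 0.05 * rng.standard_normal(50)

def log_posterior(theta):
    """ln P of Eq. (19): -chi^2/2 plus a flat (box) prior."""
    if np.any(np.abs(theta) > 10.0):
        return -np.inf
    a, b = theta
    chi2 = np.sum(((fake - (a * xgrid + b)) / 0.05) ** 2)
    return -0.5 * chi2

ndim, nwalkers = 2, 500                           # 500 walkers, as in the text
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(rng.standard_normal((nwalkers, ndim)), 500)     # first run
best = sampler.flatchain[np.argmax(sampler.flatlnprobability)]   # MAP-like guess
sampler.reset()
p1 = best + 1e-3 * rng.standard_normal((nwalkers, ndim))         # restart near it
sampler.run_mcmc(p1, 400)                         # 500 x 400 = 200,000 samples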
Priors
As reviewed in the introduction section, there are many lensing degeneracies, which can lead to large uncertainties in the measurement of lens mass distributions but leave the lensed images and flux ratios almost invariant. The lensed images can provide only limited information about the lens mass distribution in the annulus encompassing them. In regions without clear lensing features, the estimation of the mass distribution is just an interpolation or extrapolation based on a lens mass model (Kochanek 2020). In other words, when we choose a lens mass model, for example, with only one or two free parameters in the radial direction, rigid priors have already been applied, and the significance of systematics strongly depends on the consistency between the adopted lens model and the true mass distributions.
The BPL profile has more than three degrees of freedom in the radial direction. In principle, it is likely to result in more reasonable constraints on lens mass distributions. However, we find in the next section that, if we do not impose priors on the radial profile of the BPL model, the biases or uncertainties for the mass distributions within Einstein radii could still be significant, depending on the configuration and signal-to-noise ratio of the extracted lensed images. Therefore, it is essential to add reasonable priors to certain parameters. In this work, we pay special attention to the priors on the central black hole mass m b , axis ratio q and the inner density profile (r c and α c ) for the BPL model. These quantities are constrained by using stellar mass (or light) distributions.
Central black holes can have detectable effects on the formation of central images (Mao et al. 2001) and are closely related to the velocity dispersions (Gebhardt et al. 2000). However, due to the lack of central images or the low signal-to-noise ratio of the lensed images in the central region, it is usually very hard to determine the mass of central black holes based solely on SL observations. To account for the effect of central black holes, we resort to the relation between the central black hole mass $m_b$ and the total stellar mass $M_\star$ of galaxies (Häring & Rix 2004; Kormendy & Ho 2013; Reines & Volonteri 2015), which is parameterized by
\[
\log(m_b/M_\odot) = a + b\,\log(M_\star/10^{11}\,M_\odot). \tag{20}
\]
The left panel of Figure 3 illustrates the fittings to the relation between $m_b$ and $M_\star$ for the 5343 Illustris galaxies. The solid and dashed lines are for the elliptical and disk galaxies, respectively. For clarity, we only plot the data points for the elliptical galaxies. Note that the galaxy types are defined according to the Sérsic indices and stellar dynamical properties of the galaxies (please refer to the details in Du et al. 2020). We find that, for elliptical galaxies, the best-fit values are a = 8.22 and b = 1.33, comparable with observations (e.g., Sijacki et al. 2015). For disk galaxies, a = 7.83 and b = 1.38. The intrinsic scatters around the best-fit lines are about 0.34 and 0.29 dex for elliptical and disk galaxies, respectively. It should be mentioned that, in the practical fitting, we do not take into account the scatter of the black hole mass for each galaxy but fix it to the value on the best-fit relation.

Figure 3. Left: the black solid line shows the best fit to the red pluses, corresponding to ~1300 elliptical galaxies; for reference, the dotted line shows the best fit for disk galaxies. Right: comparison between the axis ratios of the overall mass and stellar mass distributions. The dashed lines indicate the artificially defined upper and lower limits of the axis ratios $q$ of the overall mass distributions given the axis ratios $q_\star$ of the stellar mass distributions.
The second non-negligible quantity is the ellipticity, which by inspection has an evident degeneracy with other lens model parameters, e.g., the amplitude $b$ and the density profile slopes (see also Fig. B.1 in Millon et al. 2020, for example). In the right panel of Figure 3, we present the correlation between the axis ratios of the overall mass and light (i.e., stellar mass) distributions for elliptical galaxies. For disk galaxies, we find a very similar scatter diagram with only a slight shift. These axis ratios are estimated by simultaneously fitting the corresponding surface mass and light distributions in a FoV of 14″ × 14″ using the BPL and PL-Sérsic profiles with the same inner density slope and break radius. The two dashed lines show the upper ($q = 0.75\,q_\star + 0.4$) and lower ($q = 0.75\,q_\star + 0.1$) limits of the axis ratios $q$ of the overall mass distributions given the axis ratios $q_\star$ of the stellar mass distributions, which are adopted to confine the ellipticities of the lens mass distributions in the model fittings.
The third prior concerns the inner density profile, which is hard to constrain if there are no identifiable lensed images in the inner or central region. In order to make the lens mass modeling more efficient and accurate, we propose constraining the inner part of the mass distribution with the light distribution, in view of the fact that baryonic matter dominates the overall mass in the core region of galaxies. To implement this idea, we first fit the lens light distributions with the PL-Sérsic profile. In addition, we assume that the lens mass and light distributions share the same break radius and inner slope.
At the break radius r_c of the PL-Sérsic profile, the logarithmic slope of the Sérsic part can be written as
α_j = − d ln j(r)/d ln r |_{r = r_c} = u + ν (r_c/s)^ν.    (21)
The break radius of the PL-Sérsic profile can thus be expressed, using ν = 1/n, by
r_c = s [(α_j − u) n]^n.    (22)
This indicates that, from the point of view of model fitting, we may artificially define a break radius using the Sérsic part of the PL-Sérsic profile, where the logarithmic slope of the Sérsic part equals α_j. The aim now is to find a break radius for each lens using the lens light distribution, within which the inner slopes are assumed to be the same for the lens mass and light distributions. Based on a series of tests, we find that the break radius r_c,2.3 defined by α_j = 2.3 is a fairly good choice for distinguishing the inner part from the outer part of the mock lenses.
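A minimal sketch of Equation (22), assuming the PL-Sérsic parameterization with ν = 1/n; the argument names are ours.

```python
def break_radius(s, n, u, alpha_j=2.3):
    """Break radius of Equation (22): the radius at which the
    logarithmic slope of the deprojected Sersic part equals alpha_j.
    s is the Sersic scale radius, n the Sersic index, and u the
    inner power-law slope of the PL-Sersic profile (nu = 1/n)."""
    return s * ((alpha_j - u) * n) ** n
```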
We now present a test of 3D fittings to the volume density profiles of ∼ 5000 Illustris galaxies. Specifically, for each Illustris galaxy, we first fit its 3D light distribution within the 90% light radius using the PL-Sérsic profile to get the break radius r_c = r_c,2.3 and the inner slope α_c = α_c,2.3. With r_c and α_c fixed to the values given by the light profile fitting, the volume mass density profile is then fitted by the BPL model over the same radii as those used for the light profile fitting. In Figure 4, we present the distributions of the slopes (inner and outer) in the left panel and of the break radii in the right panel for the Illustris
galaxies. It is found that, with the definition of r_c,2.3, the inner part of the mass distributions can be clearly distinguished from the outer part. It is also demonstrated that the outer slopes α are around 2 and almost independent of galaxy type. The scatters of α are ∼ 0.13 for elliptical galaxies and ∼ 0.23 for disk galaxies. These results are in agreement with those found by the 2D "joint" fittings with FoV 6″ × 6″ shown in Figure 6 of Du et al. (2020).
In addition to the priors mentioned above, for the BPL model, we also examine the necessity of limiting the outer slopes, for example, confining them to the range α = 1.8 ∼ 2.2. For the other parameters in the model fittings, uniform priors are applied with sufficiently large ranges.
Note that, in this paper, we single out individual Illustris galaxies for our analyses. Therefore, no large-scale structures are included, and the effects of external convergence and shear are negligible. Even if a shear component exists due to correlated substructures or angular structures, its effect can be mimicked to some extent by ellipticity because of the well-known degeneracy between ellipticity and shear (Kassiola & Kovner 1993; Luhtaru et al. 2021).
Also note that, for the SIE and SPL models, we do not account for central black holes or the above-mentioned limitations on axis ratios. For the BPL model, there are two cases. One is denoted as "BPL-rigid", with the above-mentioned priors for the black hole mass (m_b), axis ratio (q), break radius (r_c), inner (α_c) and outer (α) slopes. The other is denoted as "BPL-free" for comparison, which retains the prior on the axis ratio but does not have strong priors on the black hole mass, break radius and slopes. Table 1 lists the specific priors on the radial parameters of the BPL model, of which the SIE and SPL models are two special cases.
Table 1. Priors on the radial parameters.

Model        α           α_c        r_c         m_b
SIE          2           0          0           0
SPL          1.2 ∼ 2.8   0          0           0
BPL-free     1.2 ∼ 2.8   0 ∼ 2.8    0 ∼ 1.″5    0 ∼ 10 m_b
BPL-rigid    1.8 ∼ 2.2   α_c,2.3    r_c,2.3     m_b
Note — The parameter b, which is not listed above, is set to 0.″1 ∼ 3.″5 for all the models. The symbol m_b in the last column denotes the black hole mass inferred from the total stellar mass of a lens galaxy via Equation (20).
It is noticeable that there are two degrees of freedom in the radial direction for the BPL-rigid and SPL models, while there is one for the SIE model and five for the BPL-free model.
With the addition of the axis ratios, position angles and two position parameters for a center, there are in total seven free parameters for a background source, and five, six, nine and six free parameters for the SIE, SPL, BPL-free and BPL-rigid lens mass models, respectively. We find that, for the SIE and SPL models, the MCMC fitting usually takes ∼ 8 CPU hours to reconstruct a SL image, while it takes about 24 and 20 CPU hours for the BPL-free and BPL-rigid models, respectively.
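The text does not name the sampler; the sketch below assumes an emcee-style ensemble sampler (Foreman-Mackey et al. 2013, cited in the references) with a Gaussian image-plane likelihood. The helpers in_prior_bounds, forward_model_image and initial_guess, as well as the data object, are hypothetical placeholders, not routines defined in this paper.

```python
import numpy as np
import emcee

def log_prob(theta, data):
    """Placeholder posterior: uniform priors within bounds plus an
    image-plane chi^2 from a forward-modelled lensed image."""
    if not in_prior_bounds(theta):            # hypothetical helper
        return -np.inf
    model = forward_model_image(theta, data)  # hypothetical helper
    return -0.5 * np.sum(((data.image - model) / data.sigma) ** 2)

ndim = 13            # e.g., 7 source + 6 BPL-rigid lens parameters
nwalkers = 64
p0 = initial_guess() + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(data,))
sampler.run_mcmc(p0, 5000, progress=True)
```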
RESULTS
In this section, we investigate the performance of the BPL model as well as its special cases (e.g., the SIE and SPL models) in reconstructing the lensed images, lens mass and source light distributions. We also examine the accuracy of estimating AL-weighted LOSVDs.
Lensed image and lens mass reconstructions
Individual examples
As an example, Figure 5 shows the reconstruction of the lensed images and radial convergence profiles of three mock SL systems with a single exposure. We notice that the SIE and SPL models can reproduce the lensed images around the Einstein radius quite well, but cannot accurately recover the lensing features toward the center or the inner density profiles. The BPL-free model, which is more flexible in the radial direction, succeeds in recovering the lensed images and radial convergence profiles of the lens 152864 (the number here is the subhalo id in the Illustris-1 simulation), but fails for the other two lenses.
We show the results of the BPL-rigid model fittings in the fifth column of Figure 5. Remarkably, for all three SL systems, the lensed images and radial convergence profiles can be recovered with high fidelity, although the relevant χ² values may not be the smallest. These results illustrate the necessity of adding priors to the radial density profiles, especially in the inner region where the central images, if they exist, may be too faint to be detected due to the significant contamination from the lens light distributions. The BPL model has the ability to find possible central images by using the lens light distributions to constrain the inner density profiles.
We know that image quality can be improved by multiple exposures. However, by inspecting the lensed images extracted from the SL images with four exposures, we find that the contamination from lens light cannot be effectively reduced by increasing the exposure time for galaxy-scale lenses. The extracted lensing features for multiple exposures are almost the same as those for a single exposure, and so are the reconstructed lensed images and convergence profiles, although the lensing images with more exposures suffer less from cosmic rays and Poisson noise.
An ideal case is that the foreground lens light is perfectly subtracted from the SL images. We investigate this case based on the mock SL images with four exposures (2200 s in total). Similar to Figure 5, Figure 6 shows the relevant results, where the first column displays the lensed images left after subtracting the true lens light directly. Compared to Figure 5, the extracted lensed images become clearer, although there is still Poisson noise from the lens light. Figure 6 shows that, even if there is no lens light contamination, it is impossible for the SIE and SPL models to recover the lensed images and convergence profiles in the central region. The BPL-free model can now reconstruct well the lensed images and convergence profiles of the lens 132701, but still fails for the lens with id = 1, whose central image disappears in the noise. When looking at the accuracy of the mass measurements rather than the χ² values of the image reconstructions, the BPL-rigid model with strong radial priors still performs best.
In both Figures 5 and 6, the blue (red) shaded bands show the ±3σ error ranges of the reconstructed radial (mean) convergence profiles. It is demonstrated that, compared to their deviations from the true profiles, the uncertainties estimated from the model fittings are subdominant in most cases. We thus do not pay attention to the precision or statistical errors of the relevant quantities for each lens system, but focus on the statistics of the best-fit values or reconstructions.
Einstein radii
For an elliptical lens, the Einstein radius inferred from a model fitting is defined as the elliptical radius R_E,el within which the mean convergence satisfies κ̄ = 1. This definition of the elliptical Einstein radius avoids the azimuthal integral required to evaluate the circular Einstein radius. For a smooth elliptical mass model, a little bias may exist between the elliptical and circular Einstein radii, but it is negligible in practice, as shown in Figure 7, because of the complexity of the true mass distributions, e.g., angular structures and the variation of ellipticity with radius. Figure 7 shows the comparisons of the best-fit Einstein radii R_E,el and the directly measured Einstein radii R_E defined by a circle, for different model fittings and noise levels. It is shown that, independent of the lens models and noise levels, the Einstein radius can always be estimated with negligible bias (typically subpercent). However, the uncertainties in the R_E estimation are model dependent.
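Operationally, R_E,el can be found by a one-dimensional root search, as in the sketch below; it assumes mean_kappa is a user-supplied callable returning the mean convergence within the elliptical radius (in arcsec), and that this function decreases monotonically with radius, as for lens-like profiles.

```python
def elliptical_einstein_radius(mean_kappa, r_lo=1e-3, r_hi=10.0,
                               tol=1e-6):
    """Solve mean_kappa(R_el) = 1 by bisection, where mean_kappa(R_el)
    is the mean convergence within the elliptical radius R_el and is
    assumed to decrease monotonically with R_el."""
    while r_hi - r_lo > tol:
        r_mid = 0.5 * (r_lo + r_hi)
        if mean_kappa(r_mid) > 1.0:   # still supercritical: move out
            r_lo = r_mid
        else:                         # subcritical: move in
            r_hi = r_mid
    return 0.5 * (r_lo + r_hi)
```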
In each panel, the red histograms for the multiple exposures are very similar to the black histograms for the single exposure. Their scatters are almost the same, indicating the limitation of using more exposures to get a better inference of the Einstein radius. This is mainly because the light contamination from the main lens itself cannot be effectively reduced by more exposures. If we artificially subtract the true lens light, as shown by the blue histograms and their 68% central confidence intervals, the accuracy of the R_E estimation can be largely improved, to 2 ∼ 5%, depending on the adopted lens mass model.
Also focusing on the black and red histograms, we find that the scatters for the SIE, SPL and BPL-free models are typically greater than 5% and can even reach up to ∼ 9%. For the BPL-rigid model with the inner density profile predetermined, the fractional uncertainty in the estimated Einstein radius can be smaller than 5%. In any case, we find that the fractional uncertainty in the R_E estimation is unlikely to be better than 2%, even for the idealized case without lens light contamination.

Figure 5. Lensed image and lens mass reconstructions for three single-exposure mock SL systems with id = 1 (top two rows), 132701 (middle two rows) and 152864 (bottom two rows). The leftmost and rightmost columns present the extracted and true lensed images in a FoV of 6″ × 6″, respectively. The middle four columns from left to right display the reconstructed images (odd rows), along with the corresponding reconstructed convergence profiles (even rows), based on the SIE, SPL, BPL-free and BPL-rigid models, respectively. The reduced χ² values for the reconstructed images are shown at the top-left corner of the corresponding panels. In the panels showing convergence profiles, the true radial convergence (κ) and mean convergence (κ̄) profiles are shown by black lines, where κ(R_el) < κ̄(R_el) and R_el is the elliptical radius. The blue and red shaded bands indicate the ±3σ error bands around the best-fit profiles. Also shown are the best-fit values relevant to the radial profiles.
Radial convergence profiles
We now inspect the accuracy of the reconstructed radial convergence profiles over a wide range of radii, from 0.1 R_E to 10 R_E. Figure 8 shows the statistical results for the mean convergence profiles as a function of elliptical radius. It is noticeable that, close to the Einstein radius, both the biases and the scatters reach their minimum regardless of the lens mass model or noise level. Away from the Einstein radius, the scatters become larger and larger, indicating the difficulty of recovering the mass distributions in these regions.
It is also evident that the biases and scatters for the SIE and SPL models are rather large within the Einstein radius; they can be greatly reduced by using the more flexible BPL model. Especially for the BPL model with rigid priors, the biases are typically less than 5% within R_E, and less than 2% at R_E. The BPL-free model works well within R_E only when the lens light distributions are perfectly subtracted.
Outside the Einstein radius, a remarkable result is that the biases for the SIE model are typically less than 10% and the scatters are also not too significant. The SPL model still produces large scatters away from the Einstein radius. The BPL-free model is not good at constraining the convergence profiles at radii larger than R_E. By adding strong priors to the radial parameters, the BPL-rigid model can recover the mean convergence profiles within 3 R_E quite well, with biases less than 5% and controllable scatters.
It is worth noting that the large biases and scatters within R_E for the SIE and SPL models may not hold for real galaxy lenses, which lack large, flat cores. Nonetheless, the first two columns of Figure 8 demonstrate the failure of the SIE and SPL models to reconstruct the more flexible lens mass distributions. In addition to the mean convergence profiles, we are also interested in the radial convergence profiles, especially the convergence κ_E at the Einstein radius. In Figure 9, we exhibit the deviations of the reconstructed convergence profiles from the true profiles. We realize again that, without reasonable priors or additional information on the mass distributions, it is impossible to constrain the radial convergence profiles over a wide range of radii based solely on lensed images. As shown in the last column of Figure 9, the BPL-rigid model outperforms all the other models: within R_E and 3 R_E, the biases are typically no more than 5% and 10%, respectively. Figure 9 also shows that the scatters in the κ_E measurements are quite large, especially for the SPL model fittings. Large uncertainties in κ_E hamper accurate measurements of H_0 from time-delay cosmography, because of the well-known scaling relation H_0 ∝ 1 − κ_E (Kochanek 2020, 2021). The fractional error in H_0 can be roughly evaluated by f_H0 = (1 − κ_E,fit)/(1 − κ_E,T) − 1, where κ_E,fit and κ_E,T are the fitted and true convergences at R_E, respectively. By calculating f_H0, we find that its median reaches ∼ 20% for the SIE, SPL and BPL-free models, let alone the much larger scatters. For the BPL-rigid model, the bias in H_0 is less than 6%. Taking the scatters into consideration as well, we conclude that, even for the BPL-rigid model, H_0 can probably only be constrained to an accuracy of ∼ 10% for a single SL time-delay system.
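As a worked example of this error propagation (the function name is ours):

```python
def h0_fractional_error(kappa_fit, kappa_true):
    """Fractional error in H0 induced by a biased convergence at the
    Einstein radius, via the scaling H0 ∝ 1 - kappa_E."""
    return (1.0 - kappa_fit) / (1.0 - kappa_true) - 1.0

# Example: kappa_E underestimated by 0.05 near kappa_E ~ 0.75
print(h0_fractional_error(0.70, 0.75))  # -> 0.2, i.e., a 20% bias in H0
```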
Background source light reconstructions
Figure 10 shows the source light reconstructions for the three SL systems with id = 1, 132701 and 152864, whose mass reconstructions are shown in Figures 5 and 6 as examples. The top panels display the source light properties estimated in the case of a single exposure, while the bottom panels are for multiple exposures with perfect removal of the lens light. By looking at the ellipses, it is noticed that the centers, ellipticities and position angles of the reconstructed source light distributions are highly correlated between the different lens mass models, although large offsets may exist compared to the true values. Statistically, we find that the correlation coefficients for the centers, ellipticities and position angles predicted by the different lens mass models are all greater than 0.8. There are also weak correlations for the parameters I_0 and n between the different lens model fittings to the same lensed images, as shown by the parameter values listed in each panel.
By comparing the corresponding ellipses in the bottom panels to those in the top panels, we realize that the improvement in image quality cannot effectively reduce the center offsets of the source galaxies. However, if there is no visible contamination from the lens light, the amplitude I_0 and the Sérsic index n become more consistent with the input values.

Figure 11 exhibits the statistical distributions of the differences between the estimated and input source parameter values. The black histograms are for the single-exposure cases, while the red are for multiple exposures without lens light contamination. We do not plot the histograms for multiple exposures with significant lens light contamination, which are very similar to the histograms for the single-exposure cases. It is now clearer that, although there are significant uncertainties, there are no systematic biases for the centers, position angles and ellipticities of the source galaxies.
For the single-exposure cases, negative biases exist for the central light intensity I_0 and the Sérsic index n, demonstrating the significant effect of lens light contamination on the determination of the source light density profiles. These biases in I_0 and n disappear when the lens light is perfectly modeled and subtracted.
Among the measurements of the source parameters, the most complicated one is the inference of the effective radius, which is sensitive not only to the lens light contamination but also to the lens mass models. The lens light contamination may spoil the lensed images, while the lens mass models may introduce biases in the mass distributions around the Einstein radius and thus in the magnifications. It is shown that, for the single-exposure cases, the SIE, SPL and BPL-free models coincidentally give unbiased estimates of the effective radii. However, for the cases without lens light contamination, the effective radii for the SIE and BPL-free models tend to be larger, as indicated by the red histograms in the fourth column of Figure 11. The bias in R_eff for the SPL model remains almost unchanged, since the reconstructed mass distributions for the SPL model instead have smaller biases around the Einstein radii. By contrast, when there is no lens light contamination, the BPL-rigid model can provide a more reliable and accurate estimate of R_eff as well as of the other source galaxy parameters.
We now pay attention to the large uncertainty in the source center determinations. As indicated by Figures 10 and 11, there are cases with large displacements of the source centers. For example, the lens with id = 1 and a single exposure shows a source center displacement of ∼ 0.″41. We look for the cause of the large offsets of the source centers by comparing the true and reconstructed lens mass distributions out to the outskirts of the lens halos, and also the deflection fields. As expected, we find that many lenses show complexity of the mass distributions in the inner or outer regions, which may induce additional deflections around the Einstein radii compared to the estimated elliptical mass distributions.
In this paper, we define the additional deflections as the difference between the true and reconstructed deflections. It is noticed that the additional deflections in the central region of a lens can be approximated by a constant vector (∆α_x,0, ∆α_y,0), which is the average of the additional deflections within the Einstein radius. Figure 12 shows the strong correlation between (∆α_x,0, ∆α_y,0) and (∆x_cen, ∆y_cen) for the BPL-rigid modeling of the lensed images without lens light contamination. It is demonstrated that the center offsets of the source galaxies can be largely explained by the existence of constant deflection angles around the centers of the lenses. For the case with id = 1, we find that the amplitude of its additional constant deflection is ∼ 0.″38, in line with its source center offset.
AL-weighted LOSVDs
AL-weighted LOSVDs, denoted as σ_* = ⟨σ²⟩^{1/2} hereafter, can provide complementary constraints on the lens mass distributions and may help us break the MSD. However, in order to implement this idea, we need to ensure that there is no bias from the modeling itself. For the BPL model, it has been found that a velocity dispersion bias exists for direct fittings to surface mass distributions (Du et al. 2020). This velocity dispersion bias can be systematically corrected by b_σ ≃ 1.015 q^{−0.07}, with q being the axis ratio of the lens light distribution.
In this subsection, we look at the consistency between the predicted and true AL-weighted LOSVDs, where the former is evaluated based on the reconstructed lens mass and directly fitted lens light density profiles. For the SIE and SPL lens mass models, the Sérsic profile is adopted for the lens light fittings, whereas for the BPL models, the PL-Sérsic profile is adopted. More specifically, for the BPL-free modeling, the inner density profiles of the lens light distributions are determined by the lens mass reconstructions. However, for the BPL-rigid modeling, the inner slopes and break radii of the lens mass distributions are fixed to α_c,2.3 and r_c,2.3, respectively, which are given by the fittings to the lens light distributions. Also note that, in the estimation of the AL-weighted LOSVD of each lens, the global velocity anisotropy is adopted and assumed to be known. Figure 13 displays the comparisons of the predicted and true AL-weighted LOSVDs, where the top two rows correspond to the single-exposure cases and the bottom two rows are for the four exposures with perfect subtraction of the lens light. We do not present the results for the multiple-exposure cases with lens light contamination, which are very similar to those for the single-exposure cases. We can see from the black histograms and the black scatter diagrams that there are positive biases in the σ_* estimations for all the lens mass models investigated in this paper. The biases are typically larger than 5% and have a certain dependence on the lens mass models. We find that the uncertainties in the σ_* estimations are more vulnerable to the lens mass models. For instance, the uncertainty for the SPL model can reach up to 30%, while it is only about 5% for the BPL-rigid model.
It is also found that the improvement in the lensed image quality has little effect on the systematic biases in the σ_* predictions but may reduce the uncertainties significantly for some lens mass models. For example, the fractional uncertainty for the BPL-free model can be reduced from 14% to 8%. In contrast, we notice that both the uncertainties and the biases for the BPL-rigid model are quite stable, basically insensitive to the lens light contamination. After correcting for the projection effect in the BPL-rigid model fittings, i.e., dividing the predicted σ_* by b_σ for each lens, the finally obtained σ_* can match its true value to within 6%. It is well known that, by comparing the true and predicted velocity dispersions, an effective external convergence can be estimated according to κ_ext = 1 − σ²_*,true/σ²_*,pred. We thus expect that, for a SL system, it is possible to infer its external convergence to within 12% accuracy using the BPL-rigid model.
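The two corrections above combine into a few lines of code; the sketch below assumes the σ_* values are in the same units and that the global velocity anisotropy is already accounted for in σ_*,pred, as in our procedure.

```python
def sigma_bias(q_light):
    """Velocity dispersion bias of the BPL fits (Du et al. 2020)."""
    return 1.015 * q_light ** (-0.07)

def external_convergence(sigma_true, sigma_pred, q_light=None):
    """Effective external convergence from the mismatch between the
    true and predicted AL-weighted LOSVDs; optionally correct the
    prediction for the projection bias b_sigma first."""
    if q_light is not None:
        sigma_pred = sigma_pred / sigma_bias(q_light)
    return 1.0 - (sigma_true / sigma_pred) ** 2
```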
CONCLUSION AND DISCUSSIONS
The BPL model proposed by Du et al. (2020) is a lens mass model with four free parameters in the radial direction. In this paper, we examine the performance of the BPL model on the lens mass and source light reconstructions, as well as the predictions of AL-weighted LOSVDs.
In order to quantify the uncertainties in the relevant quantities, an end-to-end test is performed, starting with the creation of mock SL observations with various sources of noise, e.g., sky background, cosmic rays, PSF smearing and readout noise. About 100 SLACS lenses are adopted as references for calibrating the noise levels and lens statistics. We find a scaling relation between the total number of photon-excited electrons N_e and the total stellar mass M_* of the SLACS lenses in HST F814W photometry, which is used to help us transform the stellar mass of Illustris galaxies into lens light distributions in a simple way. The background sources are assumed to be at redshift z_s = 0.6 with an apparent magnitude of ∼ 23.5, following a Sérsic profile with an effective radius of 0.″15 and Sérsic index n = 1. We assume two different exposure times (420 s for a single exposure and 2200 s for four exposures) for each SL system.
Given the mock SL observations, we first pay attention to the extraction of the lensed images, where the B-spline technique is used to fit the lens light distributions by masking the identifiable lensing features. For the B-spline technique, a fitting strategy with two or three steps is investigated, with the first step fitting the monopole term of the lens light distributions and the second step fitting the even multipoles with m = [2, 4, 6, 8]. The third step, fitting the odd multipoles, is run only if there are obvious odd features. We finally have ∼ 260 "grade-A" lenses left, with both single and multiple exposures, for further analyses.
We employ forward modeling to reproduce the lensed images, along with the lens mass and source light distributions. Four lens mass models are investigated, which are in fact special cases of the BPL model: the SIE, SPL, BPL-free and BPL-rigid models. The SIE model has a slope of 2, while the SPL model has a single free slope. The difference between the BPL-free and BPL-rigid models is whether or not strong priors are applied to the radial profiles of the BPL model.
We use ∼ 5000 Illustris galaxies to evaluate the relations between overall mass and stellar mass (or light) density profiles, which are adopted as priors to shrink the parameter space in lens mass model fittings. The scaling relations between black hole mass and stellar mass are derived for elliptical and disk galaxies, respectively. We find the axis ratios of surface mass distributions can be confined to a limited range by the axis ratios of surface light distributions. Because baryons dominate the mass in the central region of galaxies, we assume there exists a break radius within which the overall mass and light distributions have the same inner density slopes. We find that this break radius can be well approximated by a radius r c,2.3 at which the logarithmic slope of the 3D light distribution is ∼ 2.3. In practice, we evaluate r c,2.3 and α c,2.3 by fitting the 2D light distributions with the PL-Sérsic profile.
We additionally inspect the model fittings to the four-exposure cases with the lens light perfectly subtracted. In total, we have three noise levels (i.e., the single and four exposures with the lens light distributions fitted by the B-spline technique, and the four exposures with perfect subtraction of the lens light) and four lens mass models (i.e., SIE, SPL, BPL-free and BPL-rigid) for each "grade-A" system.
By looking at the lensed image reconstructions, we find that, if there are no obvious central images identified, the extracted lensing features around the Einstein radii can always be recovered fairly well, almost independently of the lens mass models. Based solely on the χ² values of the image fittings, it is hard to judge which lens mass model performs best in reconstructing the lens mass distributions. If central images are evident in the extracted lensed images, the BPL models outperform the singular power-law models in both the lensed image and lens mass reconstructions. On the other hand, if central images are submerged in the lens light, the BPL-rigid model is capable of finding the missing central images.

Figure 13. Comparisons of the predicted AL-weighted LOSVDs σ_*,pred to their true values σ_*,true. From left to right, the columns correspond to the SIE, SPL, BPL-free and BPL-rigid models, respectively. The top two rows show the results for the single-exposure cases, while the bottom two rows are for the multiple exposures without lens light contamination. The scatter diagrams present the one-to-one comparisons, with the green lines indicating the identity lines. The histograms show the corresponding distributions of the ratio σ_*,pred/σ_*,true, where the vertical dashed lines show the medians of the distributions and the green lines mark the position without bias. The numbers represent the medians along with the 68% confidence intervals. In the rightmost panels, the red pluses and histograms correspond to the results after accounting for the velocity dispersion bias b_σ for the BPL models.
We investigate the accuracy of Einstein radius measurements by comparing the elliptical Einstein radii R E,el inferred from model fittings with the "true" circular Einstein radii R E . Although different definitions of Einstein radius may lead to some inconsistency, we find that the inferred Einstein radius R E,el is a fairly good estimate of R E . It is shown that the Einstein radius can be measured with negligible bias, regardless of lens mass models or noise levels which however have large effects on the uncertainties in R E estimation. The BPL-rigid model, which is found to be more accurate than the SIE, SPL and BPL-free models, can determine the Einstein radius to an accuracy of 5% or better, depending on the quality of extracted lensed images.
An accurate measurement of the Einstein radius implies an accurate measurement of the mass enclosed within it. This is indeed the case, as shown in Figure 8. All the lens mass models investigated in this paper can measure the radial mean convergence around the Einstein radius with negligible biases and controllable uncertainties. However, away from the Einstein radius, the mean convergence profiles are hard to constrain unless rigid priors are added to restrict the BPL model. We demonstrate that the BPL-rigid model can recover the mean convergence profiles quite well, with biases less than ∼ 5% within 3 R_E and less than ∼ 10% within 10 R_E.
As for the radial convergence profiles, we find the biases at Einstein radius for the SIE, SPL and BPL-free models are typically larger than 10%, along with much larger uncertainties. For the BPL-rigid model, the bias of the convergence at Einstein radius is typically less than 4% with ∼ 10% scatters. It is difficult to constrain the radial convergence profiles over a broad range of radii based solely on lensed images. Among the lens mass models investigated, we find that the BPL-rigid model is the most successful one in recovering the radial convergence profiles, with biases typically no more than 5% and 10% within R E and 3R E , respectively.
We also inspect the source light reconstructions in detail. It is demonstrated that, despite significant uncertainties, there are basically no systematic biases in the estimation of centers, position angles and axis ratios of the source galaxies. We find that lens light contamination can significantly bias the estimation of the radial profile of source light distributions, e.g., the central light intensity I 0 , effective radius R eff and Sérsic index n. The biases in I 0 and n can disappear when the lens light contaminations are insignificant. However, the inference of R eff is also sensitive to lens mass models in addition to the lens light contaminations. We realize that, to a large extent, the center offsets of source galaxies can be attributed to the existence of constant deflection angles in the central region of lenses, which are mainly caused by the complexity of lens mass distributions, e.g., deviations from smoothness and elliptical symmetry.
We look into the consistency between the predicted and true AL-weighted LOSVDs. It is found that there exist positive biases (≥ 5%) in the estimation of the AL-weighted LOSVDs, and this positive bias cannot be reduced by improving the quality of the lensed images. For the BPL-rigid model, we notice that the positive bias can be effectively eliminated by accounting for the projection effect, leading to an uncertainty of no more than 6% in the prediction of the AL-weighted LOSVDs. We thus conclude that, with the BPL-rigid model, it may be feasible to evaluate the external convergence to within 12% accuracy for a SL system by comparing its true AL-weighted LOSVD with that predicted from the reconstructed lens mass distribution.
In short, we notice that a good fit to the lensed images does not necessarily indicate a good measurement of the lens mass or source light distributions. With suitable priors, the BPL model can significantly outperform the singular power-law models in the reconstruction of the lensed images and lens mass distributions, as well as in the prediction of the AL-weighted LOSVDs. In any case, we find that the Einstein radius cannot be constrained statistically to better than 2% by the smooth lens mass models investigated in this paper. Because of the large uncertainty of the convergence measurement at the Einstein radius, the fractional error in H_0 is unlikely to be much smaller than 10% for a single SL time-delay system.
Finally, there are some issues to mention. One is about the mock lenses that are picked out from the Illustris galaxies. As we know, the Illustris galaxies have large flat cores due to the softening effect and are much larger than observed galaxies (Bottrell et al. 2017; Xu et al. 2017). The existence of flat cores largely increases the probability of forming central images and enlarges the deviations of the mass distributions from the SIE and SPL models in the central region. So some of the relevant uncertainties reported in this paper for the SIE and SPL models may be overestimated.
Because the mock lenses are too large, lens light contamination may be too significant in our mock SL images. As shown in Figure 7, there are a few outliers whose Einstein radii are remarkably underestimated or overestimated. We find that most of them exhibit lensing features inconsistent with the true ones, demonstrating the significant effect of lens light contamination or the failure of the B-spline technique in extracting the lensed images in these cases. This also indicates that the criteria for defining "grade-A" lenses need to be improved. Nonetheless, as the actual SL sample size increases, there will be more lenses identified with smaller Einstein radii, which, as expected, will suffer more from lens light contamination. In a sense, we have addressed the significant effect of lens light contamination and demonstrated the success of the BPL-rigid model in dealing with SL systems with obvious lens light contamination.
There are some simplifications in generating and analyzing the mock images. For example, we do not take into consideration the lensing effects of the environment and cosmological large-scale structures, which would complicate the data analyses and the lens mass model fittings. We neglect the redshift distributions of the foreground lenses and background sources. When applying the B-spline technique to fit the lens light, feature masks are defined automatically by pixels with more than 5% luminosity excess, which in practice cannot be recognized so easily. In the prediction of the AL-weighted LOSVDs, the velocity anisotropy parameter is assumed to be constant and known for each lens. These simplifications or assumptions may, to some extent, have improved the measurement accuracies of the lens mass distributions or the AL-weighted LOSVDs.
Priors are essential for the BPL model to ensure the accuracy of lens mass reconstructions. We inspect the priors using the Illustris galaxies at a fixed redshift, while the priors are most likely redshift and wavelength dependent. There may exist more precise scaling relations between overall mass and stellar mass (or lens light) distributions. It is thus worthwhile to investigate the priors in more detail.
Figure 1. Some basic information about the 101 reference lenses (63 SLACS and 38 S4TM lenses). Left: the distribution of lens redshift z_d. The vertical dashed line indicates the median redshift of the lenses. Middle: the normalized stellar mass function of the 101 real lenses (black histogram) and the 257 "grade-A" mock lenses (red histogram). Right: the ratio of N_e and M_* as a function of redshift, with the red line corresponding to the prediction from Equation (16).
Figure 2. Two examples of extracting lensing features based on the B-spline technique. Shown here for each lens is the central region in the FoV of 6″ × 6″. The top and bottom rows are for the lenses with id = 1 and 268410, respectively. The left panels show the lensing images with lens light distributions. The shadows mark the estimated region of the lensed images.
Figure 3. Left: relation between the central black hole mass m_b and the total stellar mass M_*. The black solid line shows the best fit to the red pluses corresponding to ∼ 1300 elliptical galaxies. For reference, the dotted line shows the best fit for disk galaxies. Right: comparison between the axis ratios of the overall mass and stellar mass distributions. The dashed lines indicate the artificially defined upper and lower limits of the axis ratio q for the overall mass distribution given the axis ratio q_* for the stellar mass distribution.
Figure 4. Left panel: distributions of the inner slopes α_c and outer slopes α of the mass density profiles, where α_c are estimated from PL-Sérsic profile fittings to the 3D light distributions with break radius r_c,2.3. Right panel: distribution of r_c,2.3. The two vertical dotted lines in the right panel mark the softening lengths in the Illustris simulation for the stellar (0.5 h⁻¹ kpc) and dark matter particles (1 h⁻¹ kpc), respectively. In both panels, the red and blue histograms are for the elliptical and disk galaxies, respectively. The black histograms are for all the galaxies.
Figure 6. Similar to Figure 5, but for multiple exposures with perfect subtraction of the foreground lens light distributions.
Figure 7. Distributions of the ratios between the best-fit elliptical Einstein radii R_E,el and their true values R_E. In each panel, the black, red and blue histograms are for the cases with a single exposure, four exposures and four exposures with perfect subtraction of the lens light, respectively. For each histogram, also shown is its median value along with the 68% confidence interval.
Figure 8. Comparison of the reconstructed mean convergence profiles κ̄_fit and the true ones κ̄_T as a function of the elliptical radius R_el scaled by the true Einstein radius R_E. The lens mass models used for the image reconstructions are indicated by the titles, with (1), (4) and (4+) signifying the single exposure, four exposures and four exposures with perfect subtraction of the lens light, respectively. We show 20 gray lines in each panel, which correspond to 20 randomly selected SL systems. The red lines show the median trend of κ̄_fit/κ̄_T for the 257 grade-A lenses. The error bars show the 68% central confidence intervals at the corresponding radii. The vertical dotted line in each panel marks the Einstein radius, at which the median and 1σ error of κ̄_fit/κ̄_T are presented.
Figure 9. Similar to Figure 8, but for the radial convergence profiles κ(R_el).
Figure 10. Source light reconstructions for three SL systems, from left to right with id = 1, 132701 and 152864, respectively. The top panels show the results for a single exposure, while the bottom panels are for multiple exposures without lens light contamination. In each panel, the ellipses show the isophotes at the effective radii of the corresponding source light distributions (black for the true and colored for the reconstructed), and the dots mark their centers. Also displayed are the Sérsic profile values of I_0, R_eff and n corresponding to the ellipses.
Figure 11. Histograms of the differences between the estimated and true Sérsic profile parameter values for the source light distributions. From top to bottom, the panels are for the SIE, SPL, BPL-free and BPL-rigid models, respectively. From left to right, the relevant parameters are, respectively, the central light intensity I_0, the source center (x_cen, y_cen), the effective radius R_eff, the position angle φ, the axis ratio q and the Sérsic index n. The black histograms represent the cases for a single exposure, while the red are for multiple exposures without lens light contamination. The numbers in each panel denote the medians along with the central 68% confidence intervals of the corresponding histograms, where the medians are marked by vertical dashed lines. The vertical green bands indicate the positions of zero deviation. Note that the true values of I_0, R_eff and n are 0.5 e⁻/s/pixel, 0.″15 and 1.0, respectively.
Figure 12. Comparisons between the estimated constant deflections (∆α_x,0, ∆α_y,0) and the source center offsets (∆x_cen, ∆y_cen) for the BPL-rigid modeling of the lensed images without lens light contamination. The left and right panels are for the x- and y-directions, respectively. The red pluses denote the case with id = 1.
We would like to thank the Laohu high-performance computing (HPC) cluster supported by the National Astronomical Observatories, Chinese Academy of Sciences, which was utilized for part of the MCMC runs. The authors also acknowledge Beijing PARATERA Tech CO., Ltd. (https://www.paratera.com/) for providing HPC resources that have contributed to the research results reported within this paper.
Birrer, S., Shajib, A. J., Galan, A., et al. 2020, A&A, 643, A165
Birrer, S. 2021, ApJ, 919, 38
Bolton, A. S., Burles, S., Koopmans, L. V. E., Treu, T., & Moustakas, L. A. 2006, ApJ, 638, 703
Bolton, A. S., Rappaport, S., & Burles, S. 2006, PhRvD, 74, 061501
Bolton, A. S., Burles, S., Koopmans, L. V. E., et al. 2008, ApJ, 682, 964
Bonvin, V., Courbin, F., Suyu, S. H., et al. 2017, MNRAS, 465, 4914
Bottrell, C., Torrey, P., Simard, L., et al. 2017, MNRAS, 467, 2879
Bourassa, R. R., & Kantowski, R. 1975, ApJ, 195, 13
Bray, I. 1984, MNRAS, 208, 511
Cao, S., Biesiada, M., Gavazzi, R., et al. 2015, ApJ, 806, 185
Cao, X., Li, R., Nightingale, J. W., et al. 2022, Research in Astronomy and Astrophysics, 22, 025014
Chabrier, G. 2003, PASP, 115, 763
Chen, G. C.-F., Fassnacht, C. D., Suyu, S. H., et al. 2022, MNRAS, 513, 2349
Ciotti, L., & Bertin, G. 1999, A&A, 352, 447
Coe, D., Umetsu, K., Zitrin, A., et al. 2012, ApJ, 757, 22
Collett, T. E., Auger, M. W., Belokurov, V., et al. 2012, MNRAS, 424, 2864
Collett, T. E., Oldham, L. J., Smith, R. J., et al. 2018, Science, 360, 1342
Du, W., Zhao, G.-B., Fan, Z., et al. 2020, ApJ, 892, 62
Einasto, J. 1965, Trudy Astrofizicheskogo Instituta Alma-Ata, 5, 87
Etherington, A., Nightingale, J. W., Massey, R., et al. 2022, MNRAS, 517, 3275
Falco, E. E., Gorenstein, M. V., & Shapiro, I. I. 1985, ApJL, 289, L1
Foreman-Mackey, D., Hogg, D. W., Lang, D., et al. 2013, PASP, 125, 306
Gebhardt, K., Bender, R., Bower, G., et al. 2000, ApJL, 539, L13
Genel, S., Vogelsberger, M., Springel, V., et al. 2014, MNRAS, 445, 175
Gomer, M., & Williams, L. L. R. 2020, JCAP, 2020, 045
Gorenstein, M. V., Falco, E. E., & Shapiro, I. I. 1988, ApJ, 327, 693
Guimarães, A. C. C., & Sodré, L. 2011, ApJ, 728, 33
Häring, N., & Rix, H.-W. 2004, ApJL, 604, L89
Hockney, R. W., & Eastwood, J. W. 1981, Computer Simulations Using Particles (New York: McGraw-Hill)
Jullo, E., Natarajan, P., Kneib, J.-P., et al. 2010, Science, 329, 924
Kassiola, A., & Kovner, I. 1993, ApJ, 417, 450
Keeton, C. R., & Kochanek, C. S. 1998, ApJ, 495, 157
Keeton, C. R. 2001, astro-ph/0102341
Kochanek, C. S. 2020, MNRAS, 493, 1725
Kochanek, C. S. 2021, MNRAS, 501, 5021
Koopmans, L. V. E., Barnabe, M., Bolton, A., et al. 2009, astro2010: The Astronomy and Astrophysics Decadal Survey, 2010, 159
Kormendy, J., & Ho, L. C. 2013, ARA&A, 51, 511
Li, R., Li, H., Shao, S., et al. 2019, MNRAS, 490, 2124
Liesenborgs, J., & De Rijcke, S. 2012, MNRAS, 425, 1772
Lima Neto, G. B., Gerbal, D., & Márquez, I. 1999, MNRAS, 309, 481
Linder, E. V. 2016, PhRvD, 94, 083510
Luhtaru, R., Schechter, P. L., & de Soto, K. M. 2021, ApJ, 915, 4
Márquez, I., Lima Neto, G. B., Capelato, H., et al. 2001, A&A, 379, 767
MacArthur, L. A., Courteau, S., & Holtzman, J. A. 2003, ApJ, 582, 689
Mao, S., Witt, H. J., & Koopmans, L. V. E. 2001, MNRAS, 323, 301
Marshall, P. J., Treu, T., Melbourne, J., et al. 2007, ApJ, 671, 1196
Millon, M., Galan, A., Courbin, F., et al. 2020, A&A, 639, A101
Navarro, J. F., Frenk, C. S., & White, S. D. M. 1997, ApJ, 490, 493
Nelson, D., Pillepich, A., Genel, S., et al. 2015, Astronomy and Computing, 13, 12
Newton, E. R., Marshall, P. J., Treu, T., et al. 2011, ApJ, 734, 104
Mukherjee, S., Koopmans, L. V. E., Metcalf, R. B., et al. 2018, MNRAS, 479, 4108
O'Riordan, C. M., Warren, S. J., & Mortlock, D. J. 2021, MNRAS, 501, 3687
Reines, A. E., & Volonteri, M. 2015, ApJ, 813, 82
Saha, P. 2000, AJ, 120, 1654
Schaye, J., Crain, R. A., Bower, R. G., et al. 2015, MNRAS, 446, 521
Schneider, P., & Sluse, D. 2013, A&A, 559, A37
Schneider, P., & Sluse, D. 2014, A&A, 564, A103
Schneider, P., Kochanek, C., & Wambsganss, J. 2006, Gravitational Lensing: Strong, Weak and Micro (Berlin Heidelberg: Springer-Verlag)
Schwab, J., Bolton, A. S., & Rappaport, S. A. 2010, ApJ, 708, 750
Shajib, A. J., Birrer, S., Treu, T., et al. 2019, MNRAS, 483, 5649
Shajib, A. J., Birrer, S., Treu, T., et al. 2020, MNRAS, 494, 6072
Shajib, A. J., Treu, T., Birrer, S., et al. 2021, MNRAS, 503, 2380
Shu, C., Zhou, B., Bartelmann, M., et al. 2008, ApJ, 685, 70
Shu, Y., Bolton, A. S., Brownstein, J. R., et al. 2015, ApJ, 803, 71
Shu, Y., Bolton, A. S., Mao, S., et al. 2016, ApJ, 833, 264
Sijacki, D., Vogelsberger, M., Genel, S., et al. 2015, MNRAS, 452, 575
Sonnenfeld, A. 2018, MNRAS, 474, 4648
Suyu, S. H., Marshall, P. J., Auger, M. W., et al. 2010, ApJ, 711, 201
Suyu, S. H., Auger, M. W., Hilbert, S., et al. 2013, ApJ, 766, 70
Terzić, B., & Graham, A. W. 2005, MNRAS, 362, 197
Tessore, N., & Metcalf, R. B. 2015, A&A, 580, A79
Tihhonova, O., Courbin, F., Harvey, D., et al. 2020, MNRAS, 498, 1406
Treu, T., Gavazzi, R., Gorecki, A., et al. 2009, ApJ, 690, 670
Treu, T. 2010, ARA&A, 48, 87
Unruh, S., Schneider, P., & Sluse, D. 2017, A&A, 601, A77
Vogelsberger, M., Genel, S., Springel, V., et al. 2014, MNRAS, 444, 1518
Wagner, J. 2018, A&A, 620, A86
Wertz, O., & Surdej, J. 2014, MNRAS, 437, 1051
Wong, K. C., Suyu, S. H., Chen, G. C.-F., et al. 2020, MNRAS, 498, 1420
Xu, D., Sluse, D., Schneider, P., et al. 2016, MNRAS, 456, 739
Xu, D., Springel, V., Sluse, D., et al. 2017, MNRAS, 469, 1824
[
"New method for determining the light travel time in static, spherically symmetric spacetimes. Calculation of the terms of order G 3",
"New method for determining the light travel time in static, spherically symmetric spacetimes. Calculation of the terms of order G 3"
] | [
"Bernard Linet [email protected] \nLaboratoire de Mathématiques et Physique Théorique\nUMR 7350\nCNRS\nFédération Denis Poisson\n\nUniversité François Rabelais\nF-37200ToursFrance\n",
"Pierre Teyssandier [email protected] \nUMR 8630\nSYRTE\nCNRS\nUPMC\nObservatoire de Paris\n61 avenue de l'ObservatoireF-75014ParisFrance\n"
] | [
"Laboratoire de Mathématiques et Physique Théorique\nUMR 7350\nCNRS\nFédération Denis Poisson",
"Université François Rabelais\nF-37200ToursFrance",
"UMR 8630\nSYRTE\nCNRS\nUPMC\nObservatoire de Paris\n61 avenue de l'ObservatoireF-75014ParisFrance"
] | [] | A new iterative method for calculating the travel time of a photon as a function of the spatial positions of the emitter and the receiver in the field of a static, spherically symmetric body is presented. The components of the metric are assumed to be expressible in power series in m/r, with m being half the Schwarzschild radius of the central body and r a radial coordinate. The procedure exclusively works for a light ray which may be described as a perturbation in powers of G of a Minkowskian null geodesic, with G being the Newtonian gravitational constant. It is shown that the expansion of the travel time of a photon along such a ray only involves elementary integrals whatever the order of approximation. An expansion of the impact parameter in power series of G is also obtained. The method is applied to explicitly calculate the perturbation expansions of the light travel time and the impact parameter up to the third order. The full expressions yielding the terms of order G 3 are new. The expression of the travel time confirms the existence of a third-order enhanced term when the emitter and the receiver are in conjunction relative to the central body. This term is shown to be necessary for determining the post-Newtonian parameter γ at a level of accuracy of 10 −8 with light rays grazing the Sun. | 10.1088/0264-9381/30/17/175008 | [
"https://arxiv.org/pdf/1304.3683v3.pdf"
] | 119,242,774 | 1304.3683 | 2859093c8cab73c98a42d67b11208a959de7bedf |
New method for determining the light travel time in static, spherically symmetric spacetimes. Calculation of the terms of order G 3
Bernard Linet [email protected]
Laboratoire de Mathématiques et Physique Théorique
UMR 7350
CNRS
Fédération Denis Poisson
Université François Rabelais
F-37200ToursFrance
Pierre Teyssandier [email protected]
UMR 8630
SYRTE
CNRS
UPMC
Observatoire de Paris
61 avenue de l'ObservatoireF-75014ParisFrance
New method for determining the light travel time in static, spherically symmetric spacetimes. Calculation of the terms of order G 3
arXiv:1304.3683v3 [gr-qc] 9 Apr 2014
PACS numbers: 04.20.-q, 04.25.-g, 04.80.Cc, 95.10.Jk
A new iterative method for calculating the travel time of a photon as a function of the spatial positions of the emitter and the receiver in the field of a static, spherically symmetric body is presented. The components of the metric are assumed to be expressible in power series in m/r, with m being half the Schwarzschild radius of the central body and r a radial coordinate. The procedure exclusively works for a light ray which may be described as a perturbation in powers of G of a Minkowskian null geodesic, with G being the Newtonian gravitational constant. It is shown that the expansion of the travel time of a photon along such a ray only involves elementary integrals whatever the order of approximation. An expansion of the impact parameter in power series of G is also obtained. The method is applied to explicitly calculate the perturbation expansions of the light travel time and the impact parameter up to the third order. The full expressions yielding the terms of order G 3 are new. The expression of the travel time confirms the existence of a third-order enhanced term when the emitter and the receiver are in conjunction relative to the central body. This term is shown to be necessary for determining the post-Newtonian parameter γ at a level of accuracy of 10 −8 with light rays grazing the Sun.
Introduction
Determining the travel time of a photon as a function of the positions of the emitter and the receiver for a given time of emission (or reception) is a crucial problem in many tests of general relativity. Indeed, such a function, which is called a time transfer function, is relevant for modelling not only experiments involving the measurement of a time delay or the comparison of distant clocks [1, 2], but also the bending of light and highly accurate astrometry [3][4][5].
The aim of this paper is to determine the time transfer function in the exterior space of a static, spherically symmetric body as an asymptotic expansion in powers of the Newtonian gravitational constant G. So we neglect the multipole structure, the rotation and the dynamical aspects occurring in realistic models. This limitation is justified since it is currently accepted that modelling the post-linear regime of the solar-system experiments which can be planned in the foreseeable future only requires taking into account the mass of the Sun. Moreover, the influence of the cosmological constant is neglected.
We concentrate on a class of metric theories of gravity in which it is possible to suppose that the photons are propagating in a region where the components of the metric are analytic expansions in powers of m/r, where the monopole term m is half the Schwarzschild radius of the central body and r an isotropic radial coordinate. The metric is thus characterized by an infinity of dimensionless constants generalizing the well-known post-Newtonian parameters β and γ. In fact, we restrict our attention to the case where the photon follows a path that we call a quasi-Minkowskian light ray (see [5]), that is a null geodesic described as a perturbation in powers of G of a Minkowskian null segment. The corresponding time transfer function is then represented by a generalized post-Minkowskian expansion in powers of G. For the sake of brevity, a term of order G n will be said to be of order n.
The first-order term in the expansion of the time transfer function is the well-known Shapiro time delay [6], which can be obtained by different reasonings, some of them involving only elementary calculations (see, e.g., [7] or [1]). Nevertheless, calculating the higher-order terms is a more difficult problem. As far as we know, two distinct approaches are currently available.
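For orientation, the first-order (Shapiro) term of the time transfer function takes the standard form below in isotropic coordinates, where x_A and x_B are the positions of the emitter and the receiver, r_A = |x_A|, r_B = |x_B|, and γ is the usual post-Newtonian parameter; this is the well-known expression recalled here for reference, not a result of the present paper.

```latex
% First-order time transfer function (Shapiro time delay)
\mathcal{T}(\boldsymbol{x}_A,\boldsymbol{x}_B)
  = \frac{R}{c}
  + \frac{(1+\gamma)\,m}{c}\,
    \ln\!\left(\frac{r_A + r_B + R}{r_A + r_B - R}\right)
  + O(G^2),
\qquad R = |\boldsymbol{x}_B - \boldsymbol{x}_A| .
```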
a) The method based on integrating the null geodesic equations. This procedure has been applied up to order G^2 within the framework of the parametrized post-post-Newtonian formalism in [8], and then reconsidered in the case of the exterior Schwarzschild metric for a class of quasi-Galilean coordinate systems of interest in celestial mechanics [9, 10]. The problem has recently been revisited in [11] for a three-parameter family of static, spherically symmetric spacetimes in the context of the Gaia mission. However, the approach developed in [9][10][11] has the drawback of being indirect, since the results are deduced from a solution which corresponds to a light ray emitted at infinity in a given direction.
b) The methods which are natively adapted to the generic case where both the emitter and the receiver of the light rays are located at finite distances from the origin of the spatial coordinates. These methods are based either on an iterative determination of the Synge (bi-scalar) world function (see [12] for the Schwarzschild metric and [3] for a more general and simpler approach), or on an iterative integration of the Hamilton-Jacobi (or eikonal) equation satisfied by the time transfer function (see [13], and [14] for a more recent treatment). The case n = 2 has been successfully solved with both variants.
The calculations required by the procedures mentioned in a) and b) quickly become unwieldy as n increases. For this reason, we present here a new method enabling a systematic determination of the time transfer function up to any given order n. This approach relies on the null geodesic equations written in spherical variables. The calculation of the time transfer function is elementary when the light ray is radial. Consequently, a specific method for determining the time transfer function is needed only in the case where the ray is non-radial. The starting point is the fact that for a non-radial ray which is not passing through a pericentre between its emission and its reception, the light travel time is given by an integral involving only one unknown quantity, namely the impact parameter of the ray. This expression transforms into an integro-differential equation when the property of the impact parameter to be a derivative of the time transfer function is taken into account (see [5]). We show that this equation can be solved by an iterative procedure and that each perturbation term involved in the time transfer function is expressible as a sum of elementary integrals which are easy to calculate with any symbolic computer program. A theorem of analyticity of the perturbation terms proved in the present paper allows us to extend the results to the more general cases where the light ray is passing through a pericentre. Since the property of analyticity we use is inferred from the above-mentioned Hamilton-Jacobi equation, it may be pointed out that our procedure is a hybrid of the above-mentioned methods.
In order to illustrate the convenience of the new method, we carry out the explicit calculation of the time transfer function up to the order G 3 . The full expression of the third-order term that we obtain is new. This expression markedly improves the result previously found in [14], which only yields a partial expression describing the asymptotic form of the time transfer function when the emitter and the receiver tend to be in conjunction. Let us emphasize that our formula must not be confused with the expression of order G 3 previously obtained in [15] and [16]. Indeed, the expressions of the light travel time obtained in these works involve the radial coordinate of the pericentre of the ray without explicitly calculating this quantity as a function of the spatial positions of the emitter and the receiver.
It may be argued that the most accurate projects for testing general relativity, like SAGAS [17], ODYSSEY [18], LATOR [19] or ASTROD [20], are generally considered as only requiring a knowledge of the propagation of light up to the second order (see, e.g., [21] and references therein). However, the occurrence of a so-called 'enhanced term', that is the possibility for an effect of order n + 1 to be greater than some contributions of order n must be faced, as it is pointed out in [14]. The necessity to be cautious has been recently shown for the Gaia mission. This mission, indeed, is currently tackled within the linearized, post-Minkowskian regime [22,23] or the usual post-Newtonian approximation [24,25]. Nevertheless, an apparent discrepancy between the standard approach and the numerical integration of the null geodesic equations has required an in-depth discussion of an enhanced term of order G 2 (see [11] and [5]). For this reason, it seems to us that exploring new systematic procedures enabling the calculation of the time transfer function at any order is fully justified. Applying our results to experimental projects like SAGAS largely confirms this analysis. Indeed, we prove that the third-order term in the expansion of the time transfer function gives rise to an enhanced contribution in a solar conjunction. We show that this contribution must be taken into account in the attempts to determine the parameter γ with an accuracy of 10 −8 .
The paper is organized as follows. Section 2 lists the notations and conventions we use. In section 3, the general assumptions on the metric are stated. Section 4 is devoted to the expansion in a series in powers of G of the time transfer function associated with a quasi-Minkowskian light ray. Section 5 yields a recurrence relation satisfied by the perturbation terms involved in the expansion of the time transfer function. A fundamental property of analyticity is established for these terms in section 6. The impact parameter of any non-radial quasi-Minkowskian light ray is shown to be expressible as a series in powers of G in section 7. This feature is the basis of the new iterative procedure proposed in this paper for determining the time transfer function at any order n. This procedure is implemented in section 8. A simplification is carried out in section 9. The perturbation expansion of the time transfer function is explicitly calculated up to the third order in G in section 10. The appearance of enhanced terms at each order is shown in section 11. The relevance of these terms is discussed for some solar system projects in section 12. The impact parameter of the corresponding light ray is obtained as a function of the positions of the emitter and the receiver up to the order G 3 in section 13. The impact parameter of a ray emitted at infinity in an arbitrary direction and observed at a given point is derived in section 14. Concluding remarks are given in section 15. An appendix yields some hints for the hand calculation of the third-order term in the time transfer function.
General assumptions, notations and conventions
Our general assumptions, notations and conventions are the following.
• Spacetime is assumed to be a static, spherically symmetric manifold (V_4, g). We suppose that there exists a region D_h in which the metric g is regular, asymptotically flat and may be interpreted as the gravitational field of a central body having a mass M. We put m = GM/c².
• We assume that D_h may be entirely covered by a single quasi-Cartesian system of coordinates (x⁰, x^i) adapted to the symmetries of the metric. We put x⁰ = ct, with t being a time coordinate, and x = (x^i).
• Greek indices run from 0 to 3, and latin indices run from 1 to 3.
• The signature adopted for the metric is + − −−.
• Any bold italic letter refers to an (ordered) triple: (a₁, a₂, a₃) = (a_i) = a. All the triples are regarded as 3-vectors of the ordinary Euclidean space.
• Given two triples a and b, a.b denotes the Euclidean scalar product a_i b_i, with Einstein's convention on repeated indices being used.
• |a| denotes the formal Euclidean norm of the triple a: |a| = (a.a)^{1/2}. If |a| = 1, a is conventionally called a unit (Euclidean) 3-vector.
• a × b is the triple obtained by the usual rule giving the exterior product of two vectors of the Euclidean space.
• Given a bi-scalar function F(x, y), ∇_x F(x, y) and ∇_y F(x, y) denote the gradients of F with respect to x and y, respectively.
Generalized post-Minkowskian expansion of the metric
For convenience, the coordinates (x 0 , x) are chosen so that the metric takes an isotropic form in the domain of regularity D h :
\[ ds^2 = A(r)\,(dx^0)^2 - B^{-1}(r)\,\delta_{ij}\,dx^i dx^j, \tag{1} \]
where r = |x|. Using the corresponding spherical coordinates (r, ϑ, ϕ), one has
\[ \delta_{ij}\,dx^i dx^j = dr^2 + r^2 d\vartheta^2 + r^2\sin^2\vartheta\, d\varphi^2. \]
The light rays of the metric (1) are also the light rays of any metric d\tilde{s}^2 conformal to (1). This feature enables us to carry out our calculations for a metric containing only one potential. We choose d\tilde{s}^2 = A^{-1}(r)\,ds^2, that is
\[ d\tilde{s}^2 = (dx^0)^2 - U(r)\,\delta_{ij}\,dx^i dx^j, \tag{2} \]
where U is defined by
\[ U(r) = \frac{1}{A(r)B(r)}. \tag{3} \]
The metric (1) is considered as a generalization of the exterior Schwarzschild metric, which may be written in the form
\[ ds^2 = \frac{\left(1 - \frac{m}{2r}\right)^2}{\left(1 + \frac{m}{2r}\right)^2}\,(dx^0)^2 - \left(1 + \frac{m}{2r}\right)^4\delta_{ij}\,dx^i dx^j \tag{4} \]
in the region outside the event horizon located at r = m/2. So we henceforth assume that there exists a value r_h > 0 of the radial coordinate such that the domain of regularity D_h is the region outside the sphere of radius r_h. If there exists at least one event horizon, we must take for r_h the value of r on the outer horizon. By analogy with general relativity we consider that r_h ∼ m and we suppose that whatever r > r_h, A(r) and B^{-1}(r) are positive functions represented by analytical expansions as follows:
\[ A(r) = 1 - \frac{2m}{r} + 2\beta\,\frac{m^2}{r^2} - \frac{3}{2}\beta_3\,\frac{m^3}{r^3} + \beta_4\,\frac{m^4}{r^4} + \sum_{n=5}^{\infty}(-1)^n\,\frac{n}{2^{n-2}}\,\beta_n\,\frac{m^n}{r^n}, \tag{5a} \]
\[ B^{-1}(r) = 1 + 2\gamma\,\frac{m}{r} + \frac{3}{2}\varepsilon\,\frac{m^2}{r^2} + \frac{1}{2}\gamma_3\,\frac{m^3}{r^3} + \frac{1}{16}\gamma_4\,\frac{m^4}{r^4} + \sum_{n=5}^{\infty}(\gamma_n - 1)\,\frac{m^n}{r^n}, \tag{5b} \]
where the coefficients β, β₃, …, β_n, γ, ε, γ₃, …, γ_n, … are generalized post-Newtonian parameters chosen so that
\[ \beta = \gamma = \varepsilon = 1, \qquad \beta_n = \gamma_n = 1 \;\;\text{for}\;\; n \ge 3 \tag{6} \]
in general relativity. It results from (3), (5a) and (5b) that the potential U(r) occurring in (2) may be written as
\[ U(r) = 1 + 2(1+\gamma)\,\frac{m}{r} + \sum_{n=2}^{\infty} 2\kappa_n\,\frac{m^n}{r^n}, \tag{7} \]
for r > r h , with the coefficients κ n being constants which can be expressed in terms of the generalized post-Newtonian parameters involved in the expansions of A(r) and B(r). Taking into account a notation already introduced in [5], namely
\[ \kappa = 2(1+\gamma) - \beta + \frac{3}{4}\,\varepsilon, \tag{8} \]
κ 2 and κ 3 are given by
\[ \kappa_2 = \kappa, \qquad \kappa_3 = 2\kappa - 2\beta(1+\gamma) + \frac{3}{4}\,\beta_3 + \frac{1}{4}\,\gamma_3. \tag{9} \]
In general relativity, we have
\[ \kappa_2 = \kappa = \frac{15}{4}, \qquad \kappa_3 = \frac{9}{2}. \tag{10} \]
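As a quick consistency check (not part of the original derivation), the relations (8)-(10) can be verified with a computer algebra system. The following Python/SymPy sketch expands U = 1/(AB) in powers of x = m/r, using (5a) and (5b) truncated at the order needed, and matches the coefficients 2κ_n appearing in (7):

import sympy as sp

x, beta, beta3, gamma, eps, gamma3 = sp.symbols('x beta beta_3 gamma epsilon gamma_3')

# Expansions (5a)-(5b) truncated at the order needed for kappa_2 and kappa_3
A = 1 - 2*x + 2*beta*x**2 - sp.Rational(3, 2)*beta3*x**3
Binv = 1 + 2*gamma*x + sp.Rational(3, 2)*eps*x**2 + sp.Rational(1, 2)*gamma3*x**3

U = sp.series(Binv/A, x, 0, 4).removeO()   # U = 1/(A B) = B^{-1} A^{-1}, see (3)

kappa2 = sp.expand(U.coeff(x, 2)/2)        # coefficient of (m/r)^2 in (7) is 2*kappa_2
kappa3 = sp.expand(U.coeff(x, 3)/2)        # coefficient of (m/r)^3 in (7) is 2*kappa_3

kappa = 2*(1 + gamma) - beta + sp.Rational(3, 4)*eps          # equation (8)
print(sp.simplify(kappa2 - kappa))                            # 0, i.e. kappa_2 = kappa
print(sp.simplify(kappa3 - (2*kappa - 2*beta*(1 + gamma)
      + sp.Rational(3, 4)*beta3 + sp.Rational(1, 4)*gamma3))) # 0, equation (9)

# General relativity: beta = gamma = eps = beta_3 = gamma_3 = 1 gives (10)
gr = {beta: 1, gamma: 1, eps: 1, beta3: 1, gamma3: 1}
print(kappa2.subs(gr), kappa3.subs(gr))    # 15/4 and 9/2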
Time transfer function for a quasi-Minkowskian light ray
Let x A and x B be two points located in D h . Throughout this paper, we generically consider a photon emitted at x A and received at x B . The time of emission and the time of reception of this photon are denoted by t A and t B , respectively. It is assumed that the light ray followed by the photon is a null geodesic path Γ which is entirely lying in the domain of regularity D h . As it has been emphasized in introduction, it would be of primordial interest for modelling a lot of relativistic effects to determine the time transfer function associated with Γ, that is, the expression giving the travel time t B − t A as a function of x A and x B :
t B − t A = T Γ (x A , x B ).(11)
In practice, however, the problem is extremely complicated since there exists in general an infinite set of light rays emitted at x A at a given time t A and passing through x B (see, e.g., [26,27] for the Schwarzschild metric and [28] for a larger class of spacetimes). So, in this paper, we restrict our attention to the special class of null geodesic paths we have called the quasi-Minkowskian light rays in [5]. This means that in what follows, the path covered by the photon is assumed to be entirely confined in D h and to be described by parametric equations of the form
\[ x^0 = ct_A + \xi\,|x_B - x_A| + \sum_{n=1}^{\infty} X^0_{(n)}(x_A, x_B, \xi), \tag{12} \]
\[ x = z(\xi) + \sum_{n=1}^{\infty} X_{(n)}(x_A, x_B, \xi), \tag{13} \]
where ξ is the affine parameter varying on the range 0 ≤ ξ ≤ 1, z(ξ) is defined by
z(ξ) = x A + ξ(x B − x A )(14)
and the functions X 0 (n) and X (n) are terms of order n obeying the boundary conditions
\[ X^0_{(n)}(x_A, x_B, 0) = 0, \tag{15} \]
\[ X_{(n)}(x_A, x_B, 0) = X_{(n)}(x_A, x_B, 1) = 0. \tag{16} \]
According to a notation already introduced in [5], such a null geodesic path will be denoted by Γ s (x A , x B ) ‡. For the sake of brevity, the time transfer function associated with Γ s (x A , x B ) will be henceforth denoted by T (x A , x B ) or simply by T . Setting ξ = 1 in (12), it may be seen that this function can be expanded in power series of G as follows:
\[ T(x_A, x_B) = \frac{|x_B - x_A|}{c} + \sum_{n=1}^{\infty} T^{(n)}(x_A, x_B), \tag{17} \]
where T (n) stands for the term of order n.
Expansion (17) is easy to determine when x A and x B are linked by a radial null geodesic entirely lying in D h . In this case, indeed, it is immediately deduced from (2) that the expression of T is given by the exact formula
\[ T(r_A, r_B) = \mathrm{sgn}(r_B - r_A)\,\frac{1}{c}\int_{r_A}^{r_B}\sqrt{U(r)}\,dr, \tag{18} \]
where r A = |x A | and r B = |x B |. Substituting for U(r) from (7) into (18) shows that T may be expanded as follows:
\[ T(r_A, r_B) = \frac{|r_B - r_A|}{c} + \sum_{n=1}^{\infty} T^{(n)}(r_A, r_B), \tag{19} \]
where the first three perturbation terms are given by
\[ T^{(1)}(r_A, r_B) = \frac{(1+\gamma)m}{c}\,\ln\frac{r_B}{r_A}, \tag{20} \]
\[ T^{(2)}(r_A, r_B) = \left[\kappa - \frac{1}{2}(1+\gamma)^2\right]\frac{m^2}{r_A r_B}\,\frac{|r_B - r_A|}{c}, \tag{21} \]
\[ T^{(3)}(r_A, r_B) = \left[\frac{1}{2}\kappa_3 - (1+\gamma)\kappa + \frac{1}{2}(1+\gamma)^3\right]\frac{m^3}{r_A r_B}\left(\frac{1}{r_A} + \frac{1}{r_B}\right)\frac{|r_B - r_A|}{c}. \tag{22} \]

‡ In a static spacetime, the mention of the initial time t_A may be omitted.
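The radial expansion (19)-(22) lends itself to a simple numerical cross-check (not part of the paper). The Python sketch below, with the GR values (10), a potential (7) truncated at order (m/r)³, and merely indicative values of m, r_A and r_B, compares the quadrature of (18) with the sum of the first three perturbation terms:

import numpy as np
from scipy.integrate import quad

c = 299792458.0            # speed of light (m/s)
m = 1476.6                 # half the Schwarzschild radius of the Sun (m), assumed value
gamma, kap2, kap3 = 1.0, 15/4, 9/2        # GR values, equation (10)
rA, rB = 6.96e8, 1.496e11                  # illustrative radial coordinates (m)

U = lambda r: 1 + 2*(1 + gamma)*m/r + 2*kap2*(m/r)**2 + 2*kap3*(m/r)**3  # (7), truncated

# T - |r_B - r_A|/c obtained from (18) by quadrature
dT = quad(lambda r: (np.sqrt(U(r)) - 1)/c, rA, rB)[0]

T1 = (1 + gamma)*m/c*np.log(rB/rA)                                   # (20)
T2 = (kap2 - (1 + gamma)**2/2)*m**2/(rA*rB)*(rB - rA)/c              # (21)
T3 = (kap3/2 - (1 + gamma)*kap2 + (1 + gamma)**3/2) \
     * m**3/(rA*rB)*(1/rA + 1/rB)*(rB - rA)/c                        # (22)

print(dT, T1 + T2 + T3)    # the two numbers agree within the quadrature tolerance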
Determining the right-hand side of (17) is much more complicated when Γ_s(x_A, x_B) is not a radial geodesic. As it has been recalled in the introduction, the perturbation terms T^(n) might be obtained by an iterative integration of the null geodesic equations. Indeed, taking into account that d\tilde{s}^2 = 0 along a null geodesic, it results from (2) and (13) that the time transfer function is given by
\[ T(x_A, x_B) = \frac{1}{c}\int_0^1 \sqrt{U(r(\xi))}\;\Bigl|\,x_B - x_A + \sum_{n=1}^{\infty}\frac{dX_{(n)}(x_A, x_B, \xi)}{d\xi}\Bigr|\; d\xi, \tag{23} \]
where the integral is taken along Γ s (x A , x B ). Taking into account the boundary conditions (16), it may be inferred from (23) that each function T (n) is theoretically calculable if the perturbations terms X (1) , . . . , X (n−1) involved in (13) are determined by solving the null geodesic equations. This procedure is cumbersome, however. So we develop a different method, based on a property of analyticity of the functions T (n) which can be deduced from a recurrence relation (see sections 5 and 6).
Recurrence relation satisfied by the functions T (n)
It follows from a general result established in [13] that T (x A , x) satisfies a Hamilton-Jacobi equation which is equivalent to the eikonal equation
\[ c^2\,|\nabla_x T(x_A, x)|^2 = U(r) \tag{24} \]
when the metric is given by (2). Replacing T (x A , x) in (24) by its expansion in powers of G and U(r) by (7), and then applying the formulae already found in [13], we get a proposition as follows.
Proposition 1 The perturbation terms T (n) involved in expansion (17) may be written in the form
\[ T^{(n)}(x_A, x_B) = \frac{1}{c}\,|x_B - x_A|\,F^{(n)}(x_A, x_B), \tag{25} \]
where the functions F (n) are determined by the recurrence relation
\[ F^{(1)}(x_A, x_B) = (1+\gamma)\,m\int_0^1\frac{d\xi}{|z(\xi)|}, \tag{26} \]
\[ F^{(n)}(x_A, x_B) = \kappa_n\, m^n\int_0^1\frac{d\xi}{|z(\xi)|^n} - \frac{c^2}{2}\sum_{p=1}^{n-1}\int_0^1\left[\nabla_x T^{(p)}(x_A, x)\cdot\nabla_x T^{(n-p)}(x_A, x)\right]_{x=z(\xi)} d\xi \tag{27} \]
for n ≥ 2, with z(ξ) being defined by (14) §.
The recurrence relation explicitly given in proposition 1 shows that Γ_s(x_A, x_B) is unique provided that expansion (17) is an admissible representation of the time transfer function. However, determining the most general conditions under which our construction is valid remains an open problem. According to [13], (26) and (27) are inferred from an integro-differential equation involving the analytic expansion of the metric along the straight segment joining x_A and x_B. Consequently, we shall henceforth assume that the expression
\[ \frac{|x_B - x_A|}{c}\Bigl[1 + \sum_{p=1}^{n} F^{(p)}(x_A, x_B)\Bigr] \]
constitutes a reliable approximation of the time transfer function as long as the straight segment joining x_A and x_B does not intersect the hypersurface r = r_h, a condition expressed by the inequality
\[ |z(\xi)| > r_h \quad\text{for}\quad 0 \le \xi \le 1. \tag{28} \]

§ Let us emphasize that the integrals involved in (26) and (27) are taken along the straight segment described by the parametric equation x = z(ξ), 0 ≤ ξ ≤ 1.
This condition is largely satisfied for a star observed in the solar system, as well as in any foreseeable test of general relativity in the vicinity of the Sun. It has been shown in [13] that a recurrence relation equivalent to the one stated by proposition 1 enables one to carry out the calculation of T^(1) and T^(2). As matters stand, however, we do not know whether (27) allows explicit computations for n ≥ 3. Going deeper into this question is beyond the scope of this paper. Proposition 1 is used here only for proving a property of analyticity which is indispensable for justifying our new procedure.
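As a simple illustration of the first step of the recurrence (not part of the paper), the Python sketch below, with arbitrarily chosen endpoints, evaluates F^(1) from (26) by numerical quadrature along the straight segment (14) and compares |x_B − x_A|F^(1)/c with the closed form (62) derived in section 8:

import numpy as np
from scipy.integrate import quad

c, m, gamma = 299792458.0, 1476.6, 1.0     # illustrative solar values
xA = np.array([8.0e10, 2.0e9, 0.0])        # illustrative emission point (m)
xB = np.array([-5.0e10, 3.0e9, 0.0])       # illustrative reception point (m)
R = np.linalg.norm(xB - xA)

z = lambda xi: xA + xi*(xB - xA)           # straight segment (14)
F1 = (1 + gamma)*m*quad(lambda xi: 1.0/np.linalg.norm(z(xi)), 0, 1)[0]   # (26)
T1_rec = R*F1/c                            # equation (25) with n = 1

rA, rB = np.linalg.norm(xA), np.linalg.norm(xB)
T1_log = (1 + gamma)*m/c*np.log((rA + rB + R)/(rA + rB - R))             # (62)
print(T1_rec, T1_log)                      # identical up to the quadrature accuracy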
Analyticity of the functions T (n)
Let us begin with proving the following lemma.
Lemma 1
The functions F^(n) recursively determined by (26) and (27) are analytic in x_A and x_B, except when x_A and x_B are such that n_B = −n_A, with n_A and n_B being defined as
\[ n_A = \frac{x_A}{r_A}, \qquad n_B = \frac{x_B}{r_B}. \tag{29} \]
Proof of lemma 1. The proposition is obviously true for n = 1, since the integrand 1/|z(ξ)| in (26) is analytic in x_A and x_B for any ξ such that 0 ≤ ξ ≤ 1, provided that n_B ≠ −n_A. Suppose now the validity of lemma 1 for F^(1), …, F^(n). Assuming p to be such that 1 ≤ p ≤ n, and then substituting z(ξ) for x into ∇_x T^(p)(x_A, x), it is immediately inferred from (25) that
\[ c\left[\nabla_x T^{(p)}(x_A, x)\right]_{x=z(\xi)} = N_{AB}\, F^{(p)}(x_A, z(\xi)) + \xi\,|x_B - x_A|\left[\nabla_x F^{(p)}(x_A, x)\right]_{x=z(\xi)}, \tag{30} \]
where N AB is defined by
\[ N_{AB} = \frac{x_B - x_A}{|x_B - x_A|}. \tag{31} \]
Using (30) leads to
\[ c^2\left[\nabla_x T^{(p)}(x_A, x)\cdot\nabla_x T^{(n+1-p)}(x_A, x)\right]_{x=z(\xi)} = F^{(p)}(x_A, z(\xi))\,F^{(n+1-p)}(x_A, z(\xi)) + \xi\,(x_B - x_A)\cdot\left[F^{(p)}(x_A, x)\,\nabla_x F^{(n+1-p)}(x_A, x) + F^{(n+1-p)}(x_A, x)\,\nabla_x F^{(p)}(x_A, x)\right]_{x=z(\xi)} + \xi^2\,|x_B - x_A|^2\left[\nabla_x F^{(p)}(x_A, x)\cdot\nabla_x F^{(n+1-p)}(x_A, x)\right]_{x=z(\xi)}. \tag{32} \]
It follows from our assumption that the right-hand side of (32) is a sum of functions which are analytic in x_A and x_B for any ξ such that 0 ≤ ξ ≤ 1, except if n_B = −n_A. Each integral
\[ \int_0^1\left[\nabla_x T^{(p)}(x_A, x)\cdot\nabla_x T^{(n+1-p)}(x_A, x)\right]_{x=z(\xi)} d\xi \]
is therefore analytic if n_B ≠ −n_A. The same property is obviously possessed by the integral ∫₀¹ dξ/|z(ξ)|^{n+1}. Lemma 1 is thus proved by recurrence. Since |x_B − x_A| is analytic except if x_B = x_A, we can state the proposition below.
Proposition 2 The functions T (n) involved in expansion (17) are analytic in x A and x B when both the following conditions are met:
a) x_B ≠ x_A; b) n_B ≠ −n_A.
The importance of this property will clearly appear in section 8. It is worth noting that the second condition in proposition 2 is automatically fulfilled when inequality (28) is satisfied. This fact explains why the condition b) is never explicitly involved in the assumptions of the propositions enunciated below.
Relation between T(x_A, x_B) and the impact parameter of Γ_s(x_A, x_B)
Null geodesic equations
Let Γ be an arbitrary non-radial null geodesic path of the metric (2). We suppose that Γ is confined in the region D_h and described by parametric equations x^α = x^α(ζ), where ζ is an arbitrarily chosen affine parameter. We choose the spherical coordinates (r, ϑ, φ) so that ϑ = π/2 for any point of this path. Denoting by l_α the covariant components of the vector tangent to Γ, an equation as follows
\[ l_0\, dx^0 + l_r\, dr + l_\varphi\, d\varphi = 0 \tag{33} \]
is satisfied along Γ since l^α is a null vector. Owing to the symmetries of the metric, we have
\[ l_0 = E, \tag{34} \]
\[ l_\varphi = -J, \tag{35} \]
with E and J being constants of the motion. For convenience, the affine parameter is chosen in such a way that E > 0. Furthermore, it is always possible to suppose J > 0 without lack of generality when calculating the time transfer function in a static, spherically symmetric spacetime. Then the quantity defined as
\[ b = \frac{J}{E} \tag{36} \]
is the impact parameter of the light ray (see, e.g., [29] and [5])∥. It may be noted that b = 0 would correspond to a radial null geodesic.

Since d\tilde{s}^2 = 0 along Γ, it follows from (34), (35) and (36) that
\[ l_r = -\varepsilon\,\frac{E}{r}\,\sqrt{r^2 U(r) - b^2}, \tag{37} \]
where ε = 1 when r is an increasing function of time and ε = −1 when r is a decreasing function of time¶. Substituting for l_r from (37) into (33), and then dividing throughout by E, we obtain a relation enabling us to determine the light travel time by an integration along Γ, namely
\[ dx^0 = b\, d\varphi + \frac{\varepsilon}{r}\,\sqrt{r^2 U(r) - b^2}\; dr. \tag{38} \]

∥ b is an intrinsic quantity attached to Γ since the constants of the motion E and J are themselves coordinate-independent quantities.
Our procedure for calculating explicitly each function T (n) rests on the property reminded in the next subsection that the impact parameter b can be determined as a function of x A and x B by taking the partial derivative of T with respect to the cosine of the angle formed by x A and x B .
Expansion of the impact parameter as a series in powers of G
Let [ϕ A , ϕ B ] be the range of the angular function ϕ(t) along a quasi-Minkowskian light ray Γ s (x A , x B ). For the sake of brevity, we shall frequently use a notation as follows
\[ \mu = n_A\cdot n_B = \cos(\varphi_B - \varphi_A). \tag{39} \]
Using this notation, the time transfer function may be considered as a function of r A , r B and µ:
T (x A , x B ) = T (r A , r B , µ).
It is then possible to enunciate the following proposition.
Proposition 3. Let x_A and x_B be two points in D_h such that both the conditions n_A ≠ n_B and (28) are fulfilled. The impact parameter b of a quasi-Minkowskian light ray joining x_A and x_B may be expanded in powers of G as follows:
\[ b = r_c\Bigl[1 + \sum_{n=1}^{\infty} q_n\Bigl(\frac{m}{r_c}\Bigr)^{n}\Bigr], \tag{40} \]
where r_c is the usual Euclidean distance between the origin O of the spatial coordinates and the straight line passing through x_A and x_B, namely
\[ r_c = \frac{r_A r_B}{|x_B - x_A|}\,|n_A\times n_B|, \tag{41} \]
and the quantities q n are functions of x A and x B given by
\[ q_n = -\frac{c}{r_c}\left(\frac{r_c}{m}\right)^{n}\sqrt{1-\mu^2}\;\frac{\partial T^{(n)}(r_A, r_B, \mu)}{\partial\mu}. \tag{42} \]
Proof of proposition 3. Noting that |n_A × n_B| = √(1−μ²), it is immediately inferred from equation (13) in [5] that the impact parameter of Γ_s(x_A, x_B) may be rewritten in the form
\[ b = -c\,\sqrt{1-\mu^2}\;\frac{\partial T(r_A, r_B, \mu)}{\partial\mu}. \tag{43} \]

¶ The sign of ε in equation (37) is changed if and only if the photon passes through a pericentre or an apocentre. The passage through an apocentre corresponds to an extreme relativistic case.
Substituting for T from (17) into (43) directly leads to the expansion given by (40). The zeroth-order term is easily derived from the elementary formula
\[ |x_B - x_A| = \sqrt{r_A^2 - 2 r_A r_B\,\mu + r_B^2}. \tag{44} \]
Indeed, using this expression and taking (41) into account yield
\[ \frac{\partial|x_B - x_A|}{\partial\mu} = -\frac{r_c}{\sqrt{1-\mu^2}}. \tag{45} \]
We shall see in the next section that the expression of the time transfer function corresponding to a quasi-Minkowskian light ray can be straightforwardly deduced from proposition 3.
Implementation of the method
If Γ_s(x_A, x_B) passes through a pericentre x_P, the integration of (38) requires the determination of the value of the radial variable at x_P as a function of x_A and x_B. The calculation of the time transfer function is very complicated for such a configuration. Fortunately, owing to the analytic extension theorem, it follows from proposition 2 that it is sufficient to determine the expression of each term T^(n) as a function of x_A and x_B in an arbitrarily chosen open subset of the domain of analyticity. For this reason, the calculations of the functions T^(n) are henceforth carried out under the assumption that x_A and x_B fulfil the following conditions:
a) The radial variable r along a quasi-Minkowskian null geodesic joining x A and x B is an increasing function of t:
dr dt > 0, t A ≤ t ≤ t B .(46)
b) An inequality as follows
N AB .n A > 0(47)
is satisfied, with N AB being defined by (31). These conditions considerably simplify the calculations. Indeed, (46) eliminates the occurrence of any pericentre (or apocentre) between the emission and the reception of light and (47) implies that the projection of the origin O on the straight line passing through x A and x B lies outside the straight segment linking x A and x B . One has therefore
r c < r A ≤ r ≤ r B(48)
for any point of Γ s (x A , x B ). These inequalities ensure that condition (28) is met, since r h < r A for any point x A located in D h . Under these assumptions, integrating (38) along Γ s (x A , x B ) is straightforward since the range of the angular function ϕ(r) between the emission and the reception of the photon is given by
ϕ B − ϕ A = arccos µ.(49)
Noting that in this case ε = 1, it may be seen that the time transfer function is then related to the impact parameter b by an equation as follows
\[ T(x_A, x_B) = \frac{1}{c}\left[b\arccos\mu + \int_{r_A}^{r_B}\frac{1}{r}\,\sqrt{r^2 U(r) - b^2}\; dr\right]. \tag{50} \]
Since b is a function of x_A and x_B determined by (43), (50) has to be regarded as an integro-differential equation satisfied by T. In order to solve this integro-differential equation by an iterative procedure, let us substitute (7) for U and (40) for b. Expanding √(r²U(r) − b²)/r in a power series in m/r_c, rearranging the terms and introducing the notation
\[ s = \sqrt{r^2 - r_c^2}, \tag{51} \]
we get an expression as follows for T
\[ T(x_A, x_B) = \frac{1}{c}\left(r_c\arccos\mu + \int_{r_A}^{r_B}\frac{s}{r}\,dr\right) + \frac{1}{c}\sum_{n=1}^{\infty}\left(\frac{m}{r_c}\right)^{n}\left[r_c\, q_n\arccos\mu + \int_{r_A}^{r_B}\left(U_n - \frac{r_c^2\, q_n}{r s}\right) dr\right], \tag{52} \]
where each U n is a function of r which may be written in the form
\[ U_1 = (1+\gamma)\,\frac{r_c}{s}, \tag{53} \]
\[ U_n = \sum_{k=0}^{3n-4} U_{kn}(q_1,\dots,q_{n-1})\,\frac{r_c^{3n-k-2}\, r^{k-n+1}}{s^{2n-1}} \tag{54} \]
for n ≥ 2, with the quantities U kn (q 1 , . . . , q n−1 ) being polynomials in q 1 , . . . , q n−1 . Noting that
\[ r_c\arccos\mu + \int_{r_A}^{r_B}\frac{s}{r}\,dr = |x_B - x_A| \tag{55} \]
when conditions (46) and (47) are met+, and that
\[ \arccos\mu = \int_{r_A}^{r_B}\frac{r_c}{r s}\,dr, \tag{56} \]
(17) is immediately recovered from (52), with each perturbation term being given by
\[ T^{(n)}(x_A, x_B) = \frac{1}{c}\left(\frac{m}{r_c}\right)^{n}\int_{r_A}^{r_B} U_n\, dr. \tag{57} \]
As it has been explained in the beginning of this section, the expression of T^(n) as a function of x_A and x_B derived from (57) can be regarded as valid even when conditions (46) and (47) are not met. In this sense, (57) constitutes the main ingredient of the procedure developed in this paper.
The fact that the coefficient q n is not involved in U n and the property for each coefficient q k to be proportional to a derivative of the function T (k) imply that T (n) can be determined when the sequence of functions T (1) , . . . , T (n−1) is known. Moreover, it follows from (54) that all the integrations involved in the right-hand side of (57) are elementary and can be carried out with any symbolic computer program. Consequently, our procedure enables us to perform the explicit calculation of T (n) as a function of x A and x B whatever the order n. + Note that (55) is just (50) written in the case where the gravitational field vanishes, i.e., m = 0.
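For instance, the radial integrals of the form ∫ r^{k−n+1} s^{−(2n−1)} dr occurring in (57) through (54) are indeed elementary and well within reach of a computer algebra system. A Python/SymPy sketch for n = 2 (the values of k are illustrative and avoid the singular k = 0 integrand):

import sympy as sp

r, rc = sp.symbols('r r_c', positive=True)
s = sp.sqrt(r**2 - rc**2)

# integrands r**(k - n + 1)/s**(2n - 1) with n = 2, i.e. r**(k-1)/s**3
for k in (1, 2, 3):
    F = sp.integrate(r**(k - 1)/s**3, r)   # indefinite integral, closed form
    print(k, sp.simplify(F))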
The problem is treated here in a detailed manner up the third order. So, (53) must be supplemented by the expressions of U 2 and U 3 , namely
\[ U_2 = -\frac{\kappa\, r_c^4}{r s^3} + (1+\gamma)\, q_1\,\frac{r_c^3}{s^3} + \left[2\kappa - (1+\gamma)^2 - q_1^2\right]\frac{r_c^2\, r}{2 s^3}, \tag{58} \]
\[ U_3 = \frac{\kappa_3\, r_c^7}{r^2 s^5} - \frac{\kappa\, q_1 r_c^6}{r s^5} - \left[2\kappa_3 - (1+\gamma)(\kappa + q_1^2 - q_2)\right]\frac{r_c^5}{s^5} + \left[2\kappa - 3(1+\gamma)^2 - q_1^2 + 2q_2\right]\frac{q_1 r_c^4\, r}{2 s^5} + \left[2\kappa_3 - (1+\gamma)(2\kappa - q_1^2 - 2q_2) + (1+\gamma)^3\right]\frac{r_c^3\, r^2}{2 s^5} - \frac{q_1 q_2\, r_c^2\, r^3}{s^5}. \tag{59} \]
The expression of T (1) is immediately inferred from (53) and (57). Noting that
\[ \sqrt{r_A^2 - r_c^2} = r_A\, N_{AB}\cdot n_A, \tag{60a} \]
\[ \sqrt{r_B^2 - r_c^2} = r_B\, N_{AB}\cdot n_B \tag{60b} \]
when conditions (46) and (47) are met, we get
\[ T^{(1)}(x_A, x_B) = \frac{(1+\gamma)m}{c}\,\ln\frac{r_B + N_{AB}\cdot x_B}{r_A + N_{AB}\cdot x_A}. \tag{61} \]
As it could be expected, we recover the well-known Shapiro time delay expressed in isotropic coordinates (see, e.g., [7]). For determining q 1 , it is preferable to rewrite (61) in the more elegant form (see, e.g., [1])
\[ T^{(1)}(x_A, x_B) = \frac{(1+\gamma)m}{c}\,\ln\frac{r_A + r_B + |x_B - x_A|}{r_A + r_B - |x_B - x_A|}. \tag{62} \]
Substituting for T (1) from (62) into (42) written for n = 1, and then using (45), it is easily seen that
\[ q_1 = \frac{(1+\gamma)\, r_c}{1 + n_A\cdot n_B}\left(\frac{1}{r_A} + \frac{1}{r_B}\right). \tag{63} \]
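The relation (42) between q₁ and the μ-derivative of T^(1) can also be checked numerically. In the Python sketch below (with illustrative near-conjunction values), a finite-difference derivative of (62) is compared with the closed form (63):

import numpy as np

c, m, gamma = 299792458.0, 1476.6, 1.0
rA, rB, mu = 7.48e12, 1.496e11, -0.9       # illustrative geometry

def T1(mu):
    R = np.sqrt(rA**2 + rB**2 - 2*rA*rB*mu)          # |x_B - x_A|, equation (44)
    return (1 + gamma)*m/c*np.log((rA + rB + R)/(rA + rB - R))   # (62)

R = np.sqrt(rA**2 + rB**2 - 2*rA*rB*mu)
rc = rA*rB*np.sqrt(1 - mu**2)/R                       # equation (41)

h = 1e-7
dT1 = (T1(mu + h) - T1(mu - h))/(2*h)                 # numerical dT^(1)/dmu
q1_num = -(c/m)*np.sqrt(1 - mu**2)*dT1                # equation (42) with n = 1
q1_cf = (1 + gamma)*rc/(1 + mu)*(1/rA + 1/rB)         # equation (63)
print(q1_num, q1_cf)                                  # equal to finite-difference accuracy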
Taking into account this determination of q 1 , it would be possible to carry out the calculation of T (2) via (58). Then, q 2 could be derived from (42) taken for n = 2. Consequently, T (3) could be deduced from (59). However, we shall see in the next section that the method can be simplified by making use of the differential equation governing the variation of the angular coordinate along the light ray. In particular, it turns out that determining q 2 is not indispensable for calculating T (3) .
Simplification of the procedure by using a constraint equation
Equations (35) and (37) are equivalent to the geodesic equations
\[ \frac{d\varphi}{d\zeta} = \frac{J}{r^2 U(r)}, \tag{64} \]
\[ \frac{dr}{d\zeta} = \varepsilon\,\frac{E}{r U(r)}\,\sqrt{r^2 U(r) - b^2}. \tag{65} \]
Eliminating the affine parameter ζ between (64) and (65) leads to
\[ \frac{d\varphi}{dr} = \frac{\varepsilon\, b}{r\,\sqrt{r^2 U(r) - b^2}}. \tag{66} \]
Since ε = 1 when conditions (46) and (47) are met, integrating (66) along Γ_s(x_A, x_B) and taking (49) into account lead to
\[ \arccos\mu = \int_{r_A}^{r_B}\frac{b\; dr}{r\,\sqrt{r^2 U(r) - b^2}}. \tag{67} \]
Since b is the function of x_A and x_B determined by (43), (67) implicitly determines b as a function of x_A and x_B. So it may be expected that this equation implies some conditions on the coefficients q_n which may be used to simplify the calculations.

Replacing U by (7) and b by (40) into (67), it may be seen that
\[ \arccos\mu = \int_{r_A}^{r_B}\frac{r_c}{r s}\, dr + \sum_{n=1}^{\infty}\left(\frac{m}{r_c}\right)^{n}\int_{r_A}^{r_B} W_n\, dr, \tag{68} \]
where the W_n's are functions of r which may be written in the form
\[ W_1 = -(1+\gamma)\,\frac{r_c^3}{s^3} + q_1\,\frac{r_c^2\, r}{s^3}, \tag{69} \]
\[ W_n = \sum_{k=0}^{3(n-1)} W_{kn}(q_1,\dots,q_{n-1})\,\frac{r_c^{3n-k}\, r^{k-n+1}}{s^{2n+1}} + q_n\,\frac{r_c^2\, r}{s^3} \tag{70} \]
for n ≥ 2, with the terms W_{kn}(q_1, …, q_{n−1}) being polynomials in q_1, …, q_{n−1}. Taking into account (56), it is immediately seen that (68) reduces to
\[ \sum_{n=1}^{\infty}\left(\frac{m}{r_c}\right)^{n}\int_{r_A}^{r_B} W_n\, dr = 0. \tag{71} \]
Since (71) holds whatever m, it is clear that (68) is equivalent to the infinite set of equations
\[ \int_{r_A}^{r_B} W_n\, dr = 0, \qquad n = 1, 2, \dots \tag{72} \]
The set of constraint equations (72) may be systematically used for simplifying our problem. Let us consider the functions U * n defined as
\[ U_1^* = U_1, \tag{73} \]
\[ U_n^* = U_n + \sum_{p=1}^{n-1} k_{pn}\, W_p \tag{74} \]
for n ≥ 2, where the k pn 's are arbitrary quantities which do not depend on r. Taking into account (72), it is immediately seen that
\[ \int_{r_A}^{r_B} U_n\, dr = \int_{r_A}^{r_B} U_n^*\, dr. \tag{75} \]
Hence T (n) may be rewritten in the form
\[ T^{(n)}(x_A, x_B) = \frac{1}{c}\left(\frac{m}{r_c}\right)^{n}\int_{r_A}^{r_B} U_n^*\, dr. \tag{76} \]
Of course, the remark formulated just after (57) might be reproduced here. It is easily seen that a judicious choice of the quantities k_{pn} enables one to shorten the expressions involved in (76) when n ≥ 2. Until n = 3, only the expression of q₁ is needed. Indeed, for n = 2, setting k_{12} = q₁/2 removes the term in q₁² and leads to
\[ U_2^* = -\frac{\kappa\, r_c^4}{r s^3} + (1+\gamma)\, q_1\,\frac{r_c^3}{2 s^3} + \left[2\kappa - (1+\gamma)^2\right]\frac{r_c^2\, r}{2 s^3}. \tag{77} \]
For n = 3, choosing k_{13} = q₂ and k_{23} = 0 removes the terms involving q₂. Then U₃* reduces to
\[ U_3^* = \frac{\kappa_3\, r_c^7}{r^2 s^5} - \frac{\kappa\, q_1 r_c^6}{r s^5} - \left[2\kappa_3 - (1+\gamma)(\kappa + q_1^2)\right]\frac{r_c^5}{s^5} + \left[2\kappa - 3(1+\gamma)^2 - q_1^2\right]\frac{q_1 r_c^4\, r}{2 s^5} + \left[2\kappa_3 - (1+\gamma)(2\kappa - q_1^2) + (1+\gamma)^3\right]\frac{r_c^3\, r^2}{2 s^5}. \tag{78} \]
It is thus proved that owing to the constraint equation (67), only the determination of q 1 is required for calculating the functions T (2) and T (3) .
Remark. It may be pointed out that the coefficients q n could be directly inferred from the constraint equation without differentiating the functions T (n) with respect to µ. Indeed, it follows from (69), (70) and (72) that
\[ q_1 = \frac{1+\gamma}{r_c}\;\frac{r_A\sqrt{r_B^2 - r_c^2} - r_B\sqrt{r_A^2 - r_c^2}}{\sqrt{r_B^2 - r_c^2} - \sqrt{r_A^2 - r_c^2}}, \tag{79} \]
\[ q_n = -\frac{1}{r_c}\;\frac{\sqrt{r_A^2 - r_c^2}\,\sqrt{r_B^2 - r_c^2}}{\sqrt{r_B^2 - r_c^2} - \sqrt{r_A^2 - r_c^2}}\;\sum_{k=0}^{3(n-1)} W_{kn}(q_1,\dots,q_{n-1})\, r_c^{3n-k-1}\int_{r_A}^{r_B}\frac{r^{k-n+1}}{s^{2n+1}}\, dr \tag{80} \]
for n ≥ 2. Equation (80) shows that q n can be determined once q 1 , . . . , q n−1 are known.
It is easily checked that (79) is equivalent to (63). Indeed, noting that
\[ \sqrt{r_B^2 - r_c^2} - \sqrt{r_A^2 - r_c^2} = |x_B - x_A|, \tag{81} \]
and then taking into account (41), (60a) and (60b), it may be seen that (79) transforms into
\[ q_1 = (1+\gamma)\,\frac{N_{AB}\cdot n_B - N_{AB}\cdot n_A}{|n_A\times n_B|}. \tag{82} \]
Substituting r_A n_A for x_A and r_B n_B for x_B into the numerator of the right-hand side of (31) yields
\[ N_{AB}\cdot n_B - N_{AB}\cdot n_A = \frac{(r_A + r_B)(1 - n_A\cdot n_B)}{|x_B - x_A|}. \tag{83} \]
Finally, substituting for N_{AB}·n_B − N_{AB}·n_A from (83) into (82), and then noting that (41) is equivalent to
\[ \frac{1}{|x_B - x_A|} = \frac{r_c}{r_A r_B}\;\frac{1}{|n_A\times n_B|}, \]
it is immediately seen that (63) is recovered.
Time transfer function up to the third order
We are now in a position to determine the perturbation terms involved in the expansion of the time transfer function up to the order G 3 . The term T (1) has been already treated in section 8. For n = 2 and n = 3, it follows from (77) and (78) that T (n) may be written in the form
\[ T^{(n)}(x_A, x_B) = \frac{1}{c}\left(\frac{m}{r_c}\right)^{n}\sum_{k=0}^{\sigma(n)} U^*_{kn}(q_1)\, r_c^{3n-k-2}\int_{r_A}^{r_B}\frac{r^{k-n+1}}{s^{2n-1}}\, dr, \tag{84} \]
where σ(2) = 2 and σ(3) = 4, with the coefficients U*_{kn} being polynomials in q₁. The integrals occurring in the right-hand side of (84) are elementary and can be expressed in terms of r_A, r_B, r_c, √(r_A² − r_c²) and √(r_B² − r_c²). For the explicit calculations, it is convenient to write (60a) and (60b) in the form
\[ \sqrt{r_A^2 - r_c^2} = \frac{r_A(r_B\mu - r_A)}{|x_B - x_A|}, \tag{85a} \]
\[ \sqrt{r_B^2 - r_c^2} = \frac{r_B(r_B - r_A\mu)}{|x_B - x_A|}. \tag{85b} \]
Using (41), (63), (85a) and (85b), it may be seen that T (2) and T (3) can be expressed in terms of r A r B , 1/r A + 1/r B , |x B − x A | and µ. It has been already emphasized that the explicit calculations can be performed with any symbolic computer program. Of course, a simple hand calculation is also possible. For n = 2, the result is straightforwardly obtained. For n = 3, however, the calculations are somewhat lengthy and tedious. For this reason, some hints concerning this case are delivered in the appendix. We have seen in section 8 that the expressions thus obtained can be considered as valid even when conditions (46) and (47) are not fulfilled. So we can formulate the following proposition.
Proposition 4. Let x_A and x_B be two points in D_h such that both the conditions n_A ≠ n_B and (28) are met. Then T^(2) and T^(3) are given by
\[ T^{(2)}(x_A, x_B) = \frac{m^2}{r_A r_B}\,\frac{|x_B - x_A|}{c}\left[\kappa\,\frac{\arccos(n_A\cdot n_B)}{|n_A\times n_B|} - \frac{(1+\gamma)^2}{1 + n_A\cdot n_B}\right], \tag{86} \]
\[ T^{(3)}(x_A, x_B) = \frac{m^3}{r_A r_B}\left(\frac{1}{r_A} + \frac{1}{r_B}\right)\frac{|x_B - x_A|}{c\,(1 + n_A\cdot n_B)}\left[\bigl(\kappa_3 - (1+\gamma)\kappa\bigr)\,\frac{\arccos(n_A\cdot n_B)}{|n_A\times n_B|} + \frac{(1+\gamma)^3}{1 + n_A\cdot n_B}\right], \tag{87} \]
where the coefficients κ and κ 3 are determined by (8) and (9), respectively.
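For a concrete feel of the orders of magnitude, the Python sketch below evaluates (62), (86) and (87) with the GR values of the parameters for a ray grazing the Sun between a point at about 50 au and a point at 1 au (the coordinates and solar values are illustrative):

import numpy as np

c, m = 299792458.0, 1476.6                 # assumed solar value of m (m)
gamma, kap, kap3 = 1.0, 15/4, 9/2          # GR values

def T123(xA, xB):
    rA, rB = np.linalg.norm(xA), np.linalg.norm(xB)
    nA, nB = xA/rA, xB/rB
    R = np.linalg.norm(xB - xA)
    mu = nA @ nB
    sin = np.linalg.norm(np.cross(nA, nB))
    T1 = (1 + gamma)*m/c*np.log((rA + rB + R)/(rA + rB - R))                 # (62)
    T2 = m**2/(rA*rB)*R/c*(kap*np.arccos(mu)/sin - (1 + gamma)**2/(1 + mu))  # (86)
    T3 = m**3/(rA*rB)*(1/rA + 1/rB)*R/(c*(1 + mu)) \
         * ((kap3 - (1 + gamma)*kap)*np.arccos(mu)/sin
            + (1 + gamma)**3/(1 + mu))                                       # (87)
    return T1, T2, T3

xA = np.array([0.0, 7.48e12, 0.0])         # ~50 au
xB = np.array([6.96e8, -1.496e11, 0.0])    # ~1 au, ray grazing the Sun
print(T123(xA, xB))   # roughly 1.6e-4 s, -1.8e-8 s and 3e-11 s; T^(2) is dominated
                      # here by its negative enhanced part (see section 11)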
The formula obtained for T (3) is new * . Concerning T (2) , it appears that (86) coincides with the expression previously obtained by completely different methods in [3], [13] and [14]♯. This concordance confirms the reliability of the procedure presented here.
In the case of a radial null geodesic, it is immediately inferred from
\[ \lim_{n_B\to n_A}\frac{\arccos(n_A\cdot n_B)}{|n_A\times n_B|} = 1 \tag{88} \]
that the system (20)-(22) is recovered from (62), (86) and (87). This agreement constitutes another confirmation of the validity of the new procedure. In general relativity, the expressions of T^(1), T^(2) and T^(3) are obtained by setting γ = 1, κ = 15/4 and κ₃ = 9/2.

* We have given the expression of T^(3) without demonstration in [30].
♯ See also [11], where the expression of T (2) is obtained in harmonic coordinates by an integration of the null geodesic equations.
Enhanced terms in T (1) , T (2) and T (3)
In the present work, the time transfer function T is obtained in the form of an asymptotic expansion in power series in G (or m) provided that condition (28) is met. However, it is clear that the physical reliability of this expansion requires that inequalities as follow
T (n) (x A , x B ) ≪ T (n−1) (x A , x B )(89)
are satisfied for any n ≥ 1, with T^(0)(x_A, x_B) being conventionally defined as
\[ T^{(0)}(x_A, x_B) = \frac{|x_B - x_A|}{c}. \]
The results obtained in the previous section enable us to find the conditions ensuring inequalities (89) for n = 1, 2, 3. It is clear that the magnitude of the functions given by (62), (86) and (87) may be extremely large when points x A and x B are located in almost opposite directions. This behaviour corresponds to the 'enhanced terms' determined up to G 2 for the light deflection in [11] and up to G 3 for the time transfer function in [14]. Indeed, it is straightforwardly derived from (41) that
\[ \frac{1}{1 + n_A\cdot n_B} \sim \frac{2\, r_A^2 r_B^2}{(r_A + r_B)^2\, r_c^2} \tag{90} \]
when 1 + n_A·n_B → 0. Using this relation to eliminate 1 + n_A·n_B, the following proposition is easily deduced from (62), (86) and (87).

Proposition 5. When x_A and x_B tend to be located in opposite directions (i.e. 1 + n_A·n_B → 0), the first three perturbation terms in the time transfer function are enhanced according to the asymptotic expressions
\[ T^{(1)}_{enh}(x_A, x_B) \sim \frac{(1+\gamma)\, m}{c}\,\ln\frac{4\, r_A r_B}{r_c^2}, \tag{91} \]
\[ T^{(2)}_{enh}(x_A, x_B) \sim \frac{m^2}{c\, r_c}\left[\kappa\pi - \frac{2(1+\gamma)^2\, r_A r_B}{(r_A + r_B)\, r_c}\right], \tag{92} \]
\[ T^{(3)}_{enh}(x_A, x_B) \sim \frac{2 m^3}{c\, r_c^2}\left[\pi\bigl(\kappa_3 - (1+\gamma)\kappa\bigr)\,\frac{r_A r_B}{(r_A + r_B)\, r_c} + 2(1+\gamma)^3\,\frac{r_A^2 r_B^2}{(r_A + r_B)^2\, r_c^2}\right]. \tag{93} \]

These expressions confirm the formulae obtained in [14] by a different method. It is worth noticing that, at least up to G³, γ is the only post-Newtonian parameter involved in the enhanced terms. When x_A and x_B tend to be located in opposite directions, the asymptotic behaviour of each function T^(n)_enh is such that (94) holds. For n = 3, the formula (94) is straightforwardly derived from (92) and (93) (the symbol ≲ could be replaced by ∼). For n = 1, the formula results from the fact that ln x < x for any x > 0. Lastly, for n = 2, (94) obviously follows from the fact that ln(4r_A r_B/r_c²) → ∞ when 1 + n_A·n_B → 0.

It results from (94) that inequalities (89) are satisfied for n = 1, 2, 3 as long as the zeroth-order distance of closest approach r_c is such that a condition as follows
\[ \frac{2m\, r_A r_B}{(r_A + r_B)\, r_c^2} \ll 1 \tag{95} \]
is fulfilled. This inequality coincides with the condition ensuring the validity of the asymptotic expansions obtained in [14]. It may be expected that (95) is sufficient to ensure inequality (89) at any order. Combined with (90), (95) means that our results are reliable when x_A and x_B tend to be located in opposite directions as long as an inequality as follows
\[ \pi - \arccos(n_A\cdot n_B) \gg \left[\frac{2m\,(r_A + r_B)}{r_A r_B}\right]^{1/2} \tag{96} \]
is satisfied. Such a condition clearly indicates that our procedure is not appropriate for the case of a gravitational lensing configuration.
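The asymptotic expression (92) can be compared directly with the exact formula (86) as μ → −1. A minimal Python sketch (with illustrative distances):

import numpy as np

c, m = 299792458.0, 1476.6
gamma, kap = 1.0, 15/4                     # GR values
rA, rB = 7.48e12, 1.496e11                 # illustrative distances (m)

for one_plus_mu in (1e-4, 1e-5, 1e-6):
    mu = -1 + one_plus_mu
    R = np.sqrt(rA**2 + rB**2 - 2*rA*rB*mu)
    sin = np.sqrt(1 - mu**2)
    rc = rA*rB*sin/R                                               # (41)
    T2 = m**2/(rA*rB)*R/c*(kap*np.arccos(mu)/sin
                           - (1 + gamma)**2/(1 + mu))              # exact, (86)
    T2_enh = m**2/(c*rc)*(kap*np.pi
                          - 2*(1 + gamma)**2*rA*rB/((rA + rB)*rc)) # asymptotic, (92)
    print(one_plus_mu, T2, T2_enh)         # the two values converge as mu -> -1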
Application to some solar system experiments
Condition (95) is fulfilled in experiments performed with photons exchanged between a spacecraft in the outer solar system and a ground station. Indeed, noting that
\[ \frac{m\, r_B}{r_c^2} < \frac{2m\, r_A r_B}{(r_A + r_B)\, r_c^2} < \frac{2m\, r_B}{r_c^2} \]
holds if r_A > r_B, replacing m by half the Schwarzschild radius of the Sun, m⊙, and then putting r_B = 1 au, we find that the inequalities
\[ 4.56\times 10^{-4}\,\frac{R_\odot^2}{r_c^2} < \frac{2 m_\odot\, r_A r_B}{(r_A + r_B)\, r_c^2} < 9.12\times 10^{-4}\,\frac{R_\odot^2}{r_c^2} \tag{97} \]
hold if r_A > r_B, with R⊙ denoting the radius of the Sun. We put R⊙ = 6.96 × 10⁸ m. The other numerical parameters of the Sun used throughout this section are taken from [31].

The formulae (91)-(93) enable us to discuss the relevance of the terms T^(n)_enh in a proposed mission like SAGAS, for instance. Indeed, this project plans to measure the parameter γ up to an accuracy reaching 10⁻⁸ with light rays travelling between a spacecraft moving in the outer solar system and the Earth. For r_A = 50 au and r_B = 1 au, the travel time of a ray passing in close proximity to the Sun (conjunction) is about 2.54 × 10⁴ s. It follows from (91) that T^(1) is decreasing from 158 µs to 126 µs when r_c varies from R⊙ to 5R⊙. As a consequence, reaching an accuracy of 10⁻⁸ on the measurement of γ requires to determine the light travel time with an accuracy of 0.7 ps. The numerical values of the respective contributions of T^(1)_S, T^(1)_J2, T^(2)_enh, T^(2)_κ and T^(3)_enh are indicated in table 1. It is clear that the contribution of the enhanced term of order G³ is larger than 2 ps when r_c < 2R⊙. The same order of magnitude for T^(3)_enh may be expected in other proposed missions like ODYSSEY, LATOR or ASTROD.

The above discussion also reveals that an experiment like SAGAS would enable to determine the post-post-Newtonian parameter κ with a relative precision amounting to 7 × 10⁻³. In the solar system, indeed, the term proportional to κ in (86) yields the asymptotic contribution
\[ T^{(2)}_\kappa(x_A, x_B) \sim \frac{\kappa\pi\, m_\odot^2}{c\, r_c} \tag{98} \]
when (90) holds. For a ray grazing the Sun (r_c = R⊙), one has T^(2)_κ ≈ 123 ps if κ = 15/4. Hence the conclusion.

Table 1. Numerical values in ps of the main stationary contributions to the light travel time in the solar system for various values of r_c/R⊙ (columns: r_c/R⊙, T^(1)_S, T^(1)_J2, T^(2)_enh, T^(2)_κ, T^(3)_enh; the numerical entries are not reproduced here). In each case, r_A = 50 au and r_B = 1 au. The parameters γ and κ are taken as γ = 1 and κ = 15/4, respectively. For the numerical estimates of |T^(1)_S| and T^(1)_J2, the light ray is assumed to propagate in the equatorial plane of the Sun. The dynamical effects due to the planetary perturbations are not taken into account.

Before closing this study, it is worthy of note that the first-order contribution T^(1)_S to the time transfer function due to the gravitomagnetic effect of the solar rotation may be compared with the third-order enhanced term. Indeed, it is easily inferred from equation (62) in [2] that for a ray travelling in the equatorial plane of the Sun
\[ T^{(1)}_S(x_A, x_B) \sim \frac{2(1+\gamma)\, G S_\odot}{c^4\, r_c} \tag{99} \]
when (90) is checked, with S⊙ being the angular momentum of the Sun. According to helioseismology, we can take S⊙ ≈ 2 × 10⁴¹ kg m² s⁻¹ (see, e.g., [32]). So, in the case where r_c = R⊙, we have |T^(1)_S(x_A, x_B)| ≈ 10 ps. Furthermore, the contribution T^(1)_J2
due to the solar quadrupole moment J 2⊙ must also be considered for rays grazing the Sun. Using equation (24) in [4] for a ray travelling in the equatorial plane gives
\[ T^{(1)}_{J_2}(x_A, x_B) \sim \frac{(1+\gamma)\, m_\odot}{c}\,\frac{J_{2\odot}\, R_\odot^2}{r_c^2}. \tag{100} \]
Taking J_{2⊙} ≈ 2 × 10⁻⁷ and putting r_c = R⊙, (100) leads to T^(1)_J2 ≈ 2 ps††.
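The orders of magnitude quoted in this section are easily reproduced. The Python sketch below evaluates (91), (98) and (93) for r_A = 50 au, r_B = 1 au and several values of r_c, with the GR values of the parameters:

import numpy as np

c, m = 299792458.0, 1476.6                 # assumed value of m_sun (m)
gamma, kap, kap3 = 1.0, 15/4, 9/2
au, Rsun = 1.495978707e11, 6.96e8
rA, rB = 50*au, 1*au
p = rA*rB/(rA + rB)

for rc in (Rsun, 2*Rsun, 5*Rsun):
    T1 = (1 + gamma)*m/c*np.log(4*rA*rB/rc**2)                 # (91)
    T2_kappa = kap*np.pi*m**2/(c*rc)                           # (98)
    T3 = 2*m**3/(c*rc**2)*(np.pi*(kap3 - (1 + gamma)*kap)*p/rc
                           + 2*(1 + gamma)**3*(p/rc)**2)       # (93)
    print(rc/Rsun, T1*1e6, T2_kappa*1e12, T3*1e12)   # microseconds / ps / ps

# T1 runs from ~158 to ~126 microseconds between R_sun and 5 R_sun;
# T2_kappa ~ 123 ps at R_sun; the third-order enhanced term is ~30 ps at R_sun
# and drops to ~2 ps near r_c = 2 R_sun, consistently with the discussion above.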
Impact parameter up to the third order
The coefficient q₁ has been previously inferred from (62) in section 8. Substituting for T^(2) from (86) into (42), and then taking into account that r_c|x_B − x_A|/(r_A r_B) = √(1−μ²), we get
\[ q_2 = \kappa - \frac{1}{|x_B - x_A|^2}\left[\frac{\kappa\arccos\mu}{\sqrt{1-\mu^2}}\, Q_{AB} + \frac{(1+\gamma)^2}{1+\mu}\Bigl(|x_B - x_A|^2 - Q_{AB}\Bigr)\right], \tag{101} \]
where Q AB is defined as
\[ Q_{AB} = |x_B - x_A|^2\,\mu - r_A r_B\,(1-\mu^2). \tag{102} \]
Then, substituting for T (3) from (87) into (42) and using (102), we obtain
\[ q_3 = \frac{r_A + r_B}{|x_B - x_A|^3}\,\frac{\sqrt{1-\mu}}{\sqrt{1+\mu}}\Bigl\{\kappa_3\Bigl[|x_B - x_A|^2 - Q_{AB}\Bigr] - (1+\gamma)\kappa\Bigl[|x_B - x_A|^2 + \frac{|x_B - x_A|^2(1-\mu) - Q_{AB}}{\sqrt{1-\mu^2}}\arccos\mu\Bigr] + (1+\gamma)^3\,\frac{|x_B - x_A|^2(2-\mu) - Q_{AB}}{1+\mu}\Bigr\}. \tag{103} \]

†† It may be pointed out that the time delay due to the cosmological constant is much smaller than the contribution of T^(3) in any solar system experiment, since Λr_A³/(9c) < 10⁻²² s for r_A < 100 au with Λ ≈ 10⁻⁵² m⁻².
The dimensionless coefficients q n can be expressed in terms of the sine (or cosine) of the angles formed by n A , n B and N AB . Noting that
\[ r_c\left(\frac{1}{r_A} + \frac{1}{r_B}\right) = |N_{AB}\times n_A| + |N_{AB}\times n_B| \tag{104} \]
and that (102) may be written in the form
\[ Q_{AB} = |x_B - x_A|^2\,(N_{AB}\cdot n_A)(N_{AB}\cdot n_B), \tag{105} \]
a proposition as follows is straightforwardly inferred from (63), (101) and (103).
Proposition 6 Under the assumption of proposition 4, the coefficients q 1 , q 2 and q 3 involved in the expansion of the impact parameter b of a quasi-Minkowskian light ray joining x A and x B are given by
\[ q_1(x_A, x_B) = (1+\gamma)\,\frac{|N_{AB}\times n_A| + |N_{AB}\times n_B|}{1 + n_A\cdot n_B}, \tag{106} \]
\[ q_2(x_A, x_B) = \kappa\left[1 - \frac{(N_{AB}\cdot n_A)(N_{AB}\cdot n_B)}{|n_A\times n_B|}\arccos(n_A\cdot n_B)\right] - (1+\gamma)^2\,\frac{1 - (N_{AB}\cdot n_A)(N_{AB}\cdot n_B)}{1 + n_A\cdot n_B}, \tag{107} \]
\[ q_3(x_A, x_B) = \frac{|N_{AB}\times n_A| + |N_{AB}\times n_B|}{1 + n_A\cdot n_B}\Bigl\{\kappa_3\bigl[1 - (N_{AB}\cdot n_A)(N_{AB}\cdot n_B)\bigr] - (1+\gamma)\kappa\left[1 + \frac{1 - n_A\cdot n_B - (N_{AB}\cdot n_A)(N_{AB}\cdot n_B)}{|n_A\times n_B|}\arccos(n_A\cdot n_B)\right] + (1+\gamma)^3\,\frac{2 - n_A\cdot n_B - (N_{AB}\cdot n_A)(N_{AB}\cdot n_B)}{1 + n_A\cdot n_B}\Bigr\}. \tag{108} \]
Equations (106) and (107) are identical to the expressions obtained in [5]. We remark that the first-order expression of b yielded by (40) and (106) coincides with the Euclidean norm of the vector d' given by equation (62) in [11]. On the other hand, the formula (108) is new and completes the implementation of our method up to the third order.
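A Python sketch evaluating the expansion (40) with the coefficients (106)-(108) (GR values, illustrative near-conjunction geometry); note that b then exceeds r_c by roughly a thousand kilometres for a ray grazing the Sun:

import numpy as np

m, gamma, kap, kap3 = 1476.6, 1.0, 15/4, 9/2
xA = np.array([0.0, 7.48e12, 0.0])         # illustrative emission point (m)
xB = np.array([6.96e8, -1.496e11, 0.0])    # illustrative reception point (m)

rA, rB = np.linalg.norm(xA), np.linalg.norm(xB)
nA, nB = xA/rA, xB/rB
N = (xB - xA)/np.linalg.norm(xB - xA)
mu, sin = nA @ nB, np.linalg.norm(np.cross(nA, nB))
cA, cB = N @ nA, N @ nB
rc = rA*rB*sin/np.linalg.norm(xB - xA)                       # (41)

sA, sB = np.linalg.norm(np.cross(N, nA)), np.linalg.norm(np.cross(N, nB))
q1 = (1 + gamma)*(sA + sB)/(1 + mu)                           # (106)
q2 = kap*(1 - cA*cB/sin*np.arccos(mu)) \
     - (1 + gamma)**2*(1 - cA*cB)/(1 + mu)                    # (107)
q3 = (sA + sB)/(1 + mu)*(kap3*(1 - cA*cB)
     - (1 + gamma)*kap*(1 + (1 - mu - cA*cB)/sin*np.arccos(mu))
     + (1 + gamma)**3*(2 - mu - cA*cB)/(1 + mu))              # (108)

b = rc*(1 + q1*(m/rc) + q2*(m/rc)**2 + q3*(m/rc)**3)          # (40)
print(rc, b, b - rc)   # b - rc is of the order of 10^6 m here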
To finish, it may be easily seen that condition (95) ensures that |q_n| ≪ |q_{n−1}| holds for n = 1, 2, 3, with q₀ being conventionally defined by q₀ = 1.
Case of a light ray emitted at infinity
A quasi-Minkowskian light ray coming from infinity in an initial direction defined by a given unit vector N_e and observed at a given point x_B is a relevant limiting case for modelling a lot of astrometric measurements. According to a notation introduced in [5], such a ray is denoted by Γ_s(N_e, x_B). The corresponding null geodesic is assumed to be a perturbation in powers of G of the straight segment defined by the parametric equations
\[ x^0_{(0)}(\lambda) = ct_B + \lambda r_c, \qquad x_{(0)}(\lambda) = \lambda r_c\, N_e + x_B, \qquad -\infty < \lambda \le 0, \tag{110} \]
where
r c = r B |N e × n B |.(111)
In order to ensure that condition (28) is satisfied for any point x A of Γ s (N e , x B ), N e and x B must be supposed to satisfy the condition
|λr c N e + x B | > r h when − ∞ < λ ≤ 0.(112)
This condition means that the straight segment coming from infinity in the direction N e and ending at x B is entirely lying in D h . Then the following proposition can be stated.
Proposition 7
Let N e be a unit vector and x B a point in D h fulfilling condition (112). The impact parameter of a quasi-Minkowskian light ray emitted at infinity in the direction N e and arriving at x B is given by expansion (40), where r c is expressed by (111) and the coefficients q 1 , q 2 and q 3 are yielded by
\[ q_1(N_e, x_B) = (1+\gamma)\,\frac{|N_e\times n_B|}{1 - N_e\cdot n_B}, \tag{113} \]
\[ q_2(N_e, x_B) = \kappa\left[1 + (N_e\cdot n_B)\,\frac{\pi - \arccos(N_e\cdot n_B)}{|N_e\times n_B|}\right] - (1+\gamma)^2\,\frac{1 + N_e\cdot n_B}{1 - N_e\cdot n_B}, \tag{114} \]
\[ q_3(N_e, x_B) = \frac{|N_e\times n_B|}{1 - N_e\cdot n_B}\Bigl\{\kappa_3\,(1 + N_e\cdot n_B) + 2(1+\gamma)^3\,\frac{1 + N_e\cdot n_B}{1 - N_e\cdot n_B} - (1+\gamma)\kappa\left[1 + (1 + 2N_e\cdot n_B)\,\frac{\pi - \arccos(N_e\cdot n_B)}{|N_e\times n_B|}\right]\Bigr\}. \tag{115} \]
Proof of proposition 7 Let x A be a point lying on Γ s (N e , x B ). It is clear that the part of Γ s (N e , x B ) joining x A and x B coincides with a quasi-Minkowskian null geodesic path Γ s (x A , x B ). So, the impact parameters of Γ s (N e , x B ) and Γ s (x A , x B ) are equal. As a consequence, the coefficients q 1 , q 2 and q 3 can be obtained as functions of N e and x B by taking the limit of equations (106)-(108) when x A recedes towards the source of the light ray at infinity, i.e. when r A → ∞, n A → −N e and N AB → N e . Taking into account that arccos n A .n B → π − arccos N e .n B when n A → −N e , we get the system of equations (113)-(115). QED.
The expression found for q 3 is new, whereas the expressions obtained for q 1 and q 2 coincide with previous results found in [5].
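The limiting process used in the proof can also be checked numerically. The Python sketch below compares (106)-(107) evaluated for a very distant emission point with the limiting forms (113)-(114) (the distance L is illustrative):

import numpy as np

gamma, kap = 1.0, 15/4
Ne = np.array([0.0, -1.0, 0.0])            # emission direction at infinity
xB = np.array([6.96e8, -1.496e11, 0.0])    # illustrative reception point (m)
nB = xB/np.linalg.norm(xB)
cB, sB = Ne @ nB, np.linalg.norm(np.cross(Ne, nB))

q1_inf = (1 + gamma)*sB/(1 - cB)                                          # (113)
q2_inf = kap*(1 + cB*(np.pi - np.arccos(cB))/sB) \
         - (1 + gamma)**2*(1 + cB)/(1 - cB)                               # (114)

# same coefficients from (106)-(107) with a very distant emission point
L = 1.0e18                                 # illustrative distance (m)
xA = xB - L*Ne
rA = np.linalg.norm(xA); nA = xA/rA
N = (xB - xA)/np.linalg.norm(xB - xA)      # tends to N_e as L grows
mu = nA @ nB
sin = np.linalg.norm(np.cross(nA, nB))
cA2, cB2 = N @ nA, N @ nB
sA2, sB2 = np.linalg.norm(np.cross(N, nA)), np.linalg.norm(np.cross(N, nB))
q1 = (1 + gamma)*(sA2 + sB2)/(1 + mu)                                     # (106)
q2 = kap*(1 - cA2*cB2/sin*np.arccos(mu)) \
     - (1 + gamma)**2*(1 - cA2*cB2)/(1 + mu)                              # (107)

print(q1, q1_inf)    # agree to several digits for large L
print(q2, q2_inf)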
Conclusion
This paper is devoted to the study of the time transfer function T and the impact parameter b corresponding to a photon travelling along a quasi-Minkowskian light ray in a static, spherically symmetric spacetime. The main results are the following: a) The system of equations (25)-(27) enabling one, at least in principle, to determine the perturbation terms T^(n) involved in the expansion of the time transfer function in power series of G. Such a system appreciably simplifies the approach developed in [13] for static, spherically symmetric spacetimes.
b) The demonstration of the analyticity of the functions T (n) when the emission and reception points are neither coincident, nor located in diametrically opposite directions.
c) The replacement of the recurrence method outlined in proposition 1 by an iterative procedure for solving the integro-differential equation satisfied by the time transfer function. This procedure presents the great advantage that only elementary integrations which can be performed with any symbolic computer program are required whatever the order of approximation. The legitimacy of this approach essentially rests on the property of analyticity of the functions T (n) .
d) The explicit calculation of the time transfer function and the impact parameter up to the third-order terms in G. The new results brought by equations (87) and (108) illustrate the efficiency of the procedure. The expressions obtained for the impact parameter up to the third order are extended to the case of a light ray emitted at infinity in a given direction. The ability to recover the expressions of T (1) and T (2) found in previous works confirms the reliability of the method developed in this paper. e) A new derivation of the enhanced terms up to the order G 3 , obtained from our full expressions of T (1) , T (2) and T (3) . It is shown that the third-order enhanced term must be taken into account for determining γ at a level of accuracy of 10 −8 . Surprisingly, for light rays grazing the Sun, this term is found to be larger than the first-order Lense-Thirring effect due to the solar rotation.
Finally, it may be noted that in accordance with equations (40) and (41) in [3], our formulae would allow one to derive the triples characterizing the direction of a light ray at its points of emission and reception up to the third order; these triples would enable one to determine the frequency shifts. Furthermore, the calculations developed here could be extended to light rays propagating in the equatorial plane of an axisymmetric, rotating body (a Kerr spacetime, e.g.).
Appendix. Hints for the calculation of T^(3)

To begin with, it may be noted that (78) can be rewritten in the form (A.1). It is easily seen that the first integrals resulting from (A.1) reduce to (A.2). The calculation of the three other integrals is more lengthy. For the sake of brevity, it is convenient to put
\[ R = |x_B - x_A|, \qquad p = r_A r_B. \]
Taking into account (85a) and (85b), and then using relations as follows
\[ r_B^3(r_B - r_A\mu)^3 - r_A^3(r_B\mu - r_A)^3 = R^2\bigl[R^4 + 3pR^2\mu - 3p^2(1-\mu^2)\bigr], \tag{A.5} \]
\[ q_1^2 = (1+\gamma)^2\,\frac{R^2 + 2p(1+\mu)}{R^2}\;\frac{1-\mu}{1+\mu}, \tag{A.6} \]
we are led to
\[ \int_{r_A}^{r_B}\left\{(1+\gamma)\bigl[(1+\gamma)^2 + q_1^2\bigr]\frac{r^2}{2 s^5} - \bigl[3(1+\gamma)^2 + q_1^2\bigr]\frac{q_1 r_c\, r}{2 s^5} + (1+\gamma)\, q_1^2\,\frac{r_c^2}{s^5}\right\} dr = \frac{1}{3\, r_A r_B}\left(\frac{1}{r_A} + \frac{1}{r_B}\right)\frac{|x_B - x_A|}{(1+\mu)^2}\;\frac{(1+\gamma)^3\, I}{\bigl[(r_B - r_A\mu)(r_B\mu - r_A)\bigr]^3}, \tag{A.7} \]
where I is given by the lengthy expression
\[ I = R^2\bigl[R^2 + p(1-\mu^2)\bigr]\bigl[R^2(1+\mu+\mu^2) - p(1+2\mu-\mu^2-2\mu^3)\bigr] - (1-\mu^2)\bigl[R^2(2+\mu) + p(1-\mu^2)\bigr]\bigl[R^4 + 3pR^2\mu - 3p^2(1-\mu^2)\bigr] + (1-\mu)^2\bigl[R^2 + 2p(1+\mu)\bigr]\bigl[R^4(1+2\mu) - pR^2(1-3\mu-4\mu^2) - 3p^2(1+\mu-\mu^2-\mu^3)\bigr]. \tag{A.8} \]
Taking into account the relation
\[ (r_B - r_A\mu)(r_B\mu - r_A) = R^2\mu - p(1-\mu^2), \tag{A.9} \]
it is easily checked that
\[ \frac{I}{\bigl[(r_B - r_A\mu)(r_B\mu - r_A)\bigr]^3} = 3, \tag{A.10} \]
a result which proves to be spectacularly simple in spite of the apparent complexity of the right-hand side of (A.8). Equations (A.2), (A.7) and (A.10) yield (87).
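The identities (A.5), (A.9) and (A.10) can be verified with Python/SymPy (a sketch, with R and p as defined above):

import sympy as sp

rA, rB, mu = sp.symbols('r_A r_B mu', positive=True)
p = rA*rB
R2 = rA**2 + rB**2 - 2*p*mu          # R^2 = |x_B - x_A|^2, equation (44)

lhs5 = rB**3*(rB - rA*mu)**3 - rA**3*(rB*mu - rA)**3
rhs5 = R2*(R2**2 + 3*p*R2*mu - 3*p**2*(1 - mu**2))
print(sp.simplify(sp.expand(lhs5 - rhs5)))             # 0, identity (A.5)

lhs9 = (rB - rA*mu)*(rB*mu - rA)
rhs9 = R2*mu - p*(1 - mu**2)
print(sp.expand(lhs9 - rhs9))                          # 0, identity (A.9)

I = (R2*(R2 + p*(1 - mu**2))*(R2*(1 + mu + mu**2) - p*(1 + 2*mu - mu**2 - 2*mu**3))
     - (1 - mu**2)*(R2*(2 + mu) + p*(1 - mu**2))*(R2**2 + 3*p*R2*mu - 3*p**2*(1 - mu**2))
     + (1 - mu)**2*(R2 + 2*p*(1 + mu))
       *(R2**2*(1 + 2*mu) - p*R2*(1 - 3*mu - 4*mu**2) - 3*p**2*(1 + mu - mu**2 - mu**3)))
print(sp.simplify(I - 3*rhs9**3))                      # 0, identity (A.10)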
Acknowledgements

We thank one of the anonymous referees for having suggested to discuss the relevance of our results for modelling solar system experiments.

References
[1] Blanchet L, Salomon C, Teyssandier P and Wolf P 2001 Astron. Astrophys. 370 320
[2] Linet B and Teyssandier P 2002 Phys. Rev. D 66 024045
[3] Le Poncin-Lafitte C, Linet B and Teyssandier P 2004 Class. Quantum Grav. 21 4463
[4] Le Poncin-Lafitte C and Teyssandier P 2008 Phys. Rev. D 77 044029
[5] Teyssandier P 2012 Class. Quantum Grav. 29 245010
[6] Shapiro I I 1964 Phys. Rev. Lett. 13 789
[7] Will C M 1993 Theory and Experiment in Gravitational Physics 2nd edn (Cambridge: Cambridge University Press)
[8] Richter G W and Matzner R A 1983 Phys. Rev. D 28 3007
[9] Brumberg V A 1987 Kinematics Phys. Celest. Bodies 3 6
[10] Brumberg V A 1991 Essential Relativistic Celestial Mechanics (Bristol: Adam Hilger)
[11] Klioner S A and Zschocke S 2010 Class. Quantum Grav. 27 075015
[12] John R W 1975 Exp. Tech. Phys. 23 127
[13] Teyssandier P and Le Poncin-Lafitte C 2008 Class. Quantum Grav. 25 145020
[14] Ashby N and Bertotti B 2010 Class. Quantum Grav. 27 145013
[15] Sarmiento A F 1982 Gen. Rel. Grav. 14 793
[16] Keeton C R and Petters A O 2005 Phys. Rev. D 72 104006
[17] Wolf P et al 2009 Exp. Astron. 23 651
[18] Christophe B et al 2009 Exp. Astron. 23 529
[19] Turyshev S G et al 2009 Exp. Astron. 27 27
[20] Braxmaier C et al 2012 Exp. Astron. 34 181
[21] Minazzoli O and Chauvineau B 2011 Class. Quantum Grav. 28 085010
[22] Kopeikin S M and Schäfer G 1999 Phys. Rev. D 60 124002
[23] Kopeikin S M and Mashhoon B 2002 Phys. Rev. D 65 064025
[24] Klioner S A 2003 Astron. J. 125 1580
[25] Crosta M 2011 Class. Quantum Grav. 28 235013
[26] Darwin C 1959 Proc. Roy. Soc. London A 249 180
[27] Luminet J-P 1979 Astron. Astrophys. 75 228
[28] Giannoni F, Masiello A and Piccione P 1999 Class. Quantum Grav. 16 731
[29] Chandrasekhar S 1983 The Mathematical Theory of Black Holes (New York: Oxford University Press)
[30] Teyssandier P and Linet B 2012 Proc. GREAT-ESF Workshop on QSO Astrophysics, Fundamental Physics and Astrometric Cosmology in the Gaia Era (6-9 June 2011, Porto) Mem. S. A. It. 83 1024
[31] IERS Conventions 2010 IERS Technical Note No 36 ed G Petit and B Luzum (Frankfurt am Main: Verlag des Bundesamts für Kartographie und Geodäsie) 179 pp
[32] Komm R, Howe R, Durney B R and Hill F 2003 Astrophys. J. 586 650
| [] |
[
"Observation of acoustic spatiotemporal vortices",
"Observation of acoustic spatiotemporal vortices"
] | [
"Hongliang Zhang \nInterdisciplinary Center of Quantum Information\nDepartment of Physics\nState Key Laboratory of Modern Optical Instrumentation, and Zhejiang Province Key Laboratory of Quantum Technology and Device\nZhejiang University\n310027HangzhouChina\n",
"Yeyang Sun 1# \nInterdisciplinary Center of Quantum Information\nDepartment of Physics\nState Key Laboratory of Modern Optical Instrumentation, and Zhejiang Province Key Laboratory of Quantum Technology and Device\nZhejiang University\n310027HangzhouChina\n",
"Junyi Huang \nInterdisciplinary Center of Quantum Information\nDepartment of Physics\nState Key Laboratory of Modern Optical Instrumentation, and Zhejiang Province Key Laboratory of Quantum Technology and Device\nZhejiang University\n310027HangzhouChina\n",
"Bingjun Wu ",
"Zhaoju Yang [email protected]**[email protected] \nInterdisciplinary Center of Quantum Information\nDepartment of Physics\nState Key Laboratory of Modern Optical Instrumentation, and Zhejiang Province Key Laboratory of Quantum Technology and Device\nZhejiang University\n310027HangzhouChina\n",
"Konstantin Y Bliokh \nCluster for Pioneering Research\nTheoretical Quantum Physics Laboratory\nRIKEN\nWako-shi351-0198SaitamaJapan\n\nCentre of Excellence ENSEMBLE3 Sp\n01-919WarsawPoland\n\nDonostia International Physics Center (DIPC)\nDonostia-San Sebastián 20018Spain\n",
"Zhichao Ruan \nInterdisciplinary Center of Quantum Information\nDepartment of Physics\nState Key Laboratory of Modern Optical Instrumentation, and Zhejiang Province Key Laboratory of Quantum Technology and Device\nZhejiang University\n310027HangzhouChina\n"
] | [
"Interdisciplinary Center of Quantum Information\nDepartment of Physics\nState Key Laboratory of Modern Optical Instrumentation, and Zhejiang Province Key Laboratory of Quantum Technology and Device\nZhejiang University\n310027HangzhouChina",
"Interdisciplinary Center of Quantum Information\nDepartment of Physics\nState Key Laboratory of Modern Optical Instrumentation, and Zhejiang Province Key Laboratory of Quantum Technology and Device\nZhejiang University\n310027HangzhouChina",
"Interdisciplinary Center of Quantum Information\nDepartment of Physics\nState Key Laboratory of Modern Optical Instrumentation, and Zhejiang Province Key Laboratory of Quantum Technology and Device\nZhejiang University\n310027HangzhouChina",
"Interdisciplinary Center of Quantum Information\nDepartment of Physics\nState Key Laboratory of Modern Optical Instrumentation, and Zhejiang Province Key Laboratory of Quantum Technology and Device\nZhejiang University\n310027HangzhouChina",
"Cluster for Pioneering Research\nTheoretical Quantum Physics Laboratory\nRIKEN\nWako-shi351-0198SaitamaJapan",
"Centre of Excellence ENSEMBLE3 Sp\n01-919WarsawPoland",
"Donostia International Physics Center (DIPC)\nDonostia-San Sebastián 20018Spain",
"Interdisciplinary Center of Quantum Information\nDepartment of Physics\nState Key Laboratory of Modern Optical Instrumentation, and Zhejiang Province Key Laboratory of Quantum Technology and Device\nZhejiang University\n310027HangzhouChina"
] | [] | Vortices in fluids and gases have piqued the interest of human for centuries. Development of classical-wave physics and quantum mechanics highlighted wave vortices characterized by phase singularities and topological charges. In particular, vortex beams have found numerous applications in modern optics and other areas. Recently, optical spatiotemporal vortex states exhibiting the phase singularity both in space and time have been reported. Here, we report the first generation of acoustic spatiotemporal vortex pulses. We utilize an acoustic meta-grating with mirror-symmetry breaking as the spatiotemporal vortex generator. In the momentum−frequency domain, we unravel that the transmission spectrum functions exhibit a topological phase transition where the vortices with opposite topological charges are created or annihilated in pairs. Furthermore, with the topological textures of the nodal lines, these vortices are robust and exploited to generate spatiotemporal vortex pulse against structural perturbations and disorder. Our work paves the way for studies and applications of spatiotemporal structured waves in acoustics and other wave systems.Introduction.Wave vortices, i.e., structures with the wavefield intensity vanishing in the center and the phase winding around, are of enormous importance for various areas of physics. They are essential parts of almost any structured waves: atomic orbitals and superfluids in quantum mechanics, complex wave interference from ocean waves to nanophotonics and metamaterials, etc. Cylindrical vortex beams have been generated and found applications in electromagnetic [1-8], sound [9-17], elastic [18], electron [19-21], neutron[22], and atom[23]waves. Such states contain on-axis vortex lines and carry intrinsic orbital angular momentum (OAM) along their propagation direction.Recently, there was a great rise of interest in spatiotemporal vortex pulses (STVPs), which are generalizations of usual 'spatial' vortex states to the space-time domain and the OAM tilted | null | [
"https://export.arxiv.org/pdf/2303.10549v2.pdf"
] | 257,632,194 | 2303.10549 | f3d5df900dbfb9f38a9579e6c72319a49534dbba |
Observation of acoustic spatiotemporal vortices
Hongliang Zhang
Interdisciplinary Center of Quantum Information
Department of Physics
State Key Laboratory of Modern Optical Instrumentation, and Zhejiang Province Key Laboratory of Quantum Technology and Device
Zhejiang University
310027HangzhouChina
Yeyang Sun 1#
Interdisciplinary Center of Quantum Information
Department of Physics
State Key Laboratory of Modern Optical Instrumentation, and Zhejiang Province Key Laboratory of Quantum Technology and Device
Zhejiang University
310027HangzhouChina
Junyi Huang
Interdisciplinary Center of Quantum Information
Department of Physics
State Key Laboratory of Modern Optical Instrumentation, and Zhejiang Province Key Laboratory of Quantum Technology and Device
Zhejiang University
310027HangzhouChina
Bingjun Wu
Zhaoju Yang [email protected]**[email protected]
Interdisciplinary Center of Quantum Information
Department of Physics
State Key Laboratory of Modern Optical Instrumentation, and Zhejiang Province Key Laboratory of Quantum Technology and Device
Zhejiang University
310027HangzhouChina
Konstantin Y Bliokh
Cluster for Pioneering Research
Theoretical Quantum Physics Laboratory
RIKEN
Wako-shi351-0198SaitamaJapan
Centre of Excellence ENSEMBLE3 Sp
01-919WarsawPoland
Donostia International Physics Center (DIPC)
Donostia-San Sebastián 20018Spain
Zhichao Ruan
Interdisciplinary Center of Quantum Information
Department of Physics
State Key Laboratory of Modern Optical Instrumentation, and Zhejiang Province Key Laboratory of Quantum Technology and Device
Zhejiang University
310027HangzhouChina
Observation of acoustic spatiotemporal vortices
# These authors contributed equally to this work. *
Vortices in fluids and gases have piqued the interest of humans for centuries. The development of classical-wave physics and quantum mechanics highlighted wave vortices characterized by phase singularities and topological charges. In particular, vortex beams have found numerous applications in modern optics and other areas. Recently, optical spatiotemporal vortex states exhibiting the phase singularity both in space and time have been reported. Here, we report the first generation of acoustic spatiotemporal vortex pulses. We utilize an acoustic meta-grating with mirror-symmetry breaking as the spatiotemporal vortex generator. In the momentum−frequency domain, we reveal that the transmission spectrum functions exhibit a topological phase transition where the vortices with opposite topological charges are created or annihilated in pairs. Furthermore, with the topological textures of the nodal lines, these vortices are robust and are exploited to generate spatiotemporal vortex pulses against structural perturbations and disorder. Our work paves the way for studies and applications of spatiotemporal structured waves in acoustics and other wave systems. Introduction. Wave vortices, i.e., structures with the wavefield intensity vanishing in the center and the phase winding around, are of enormous importance for various areas of physics. They are essential parts of almost any structured wave: atomic orbitals and superfluids in quantum mechanics, complex wave interference from ocean waves to nanophotonics and metamaterials, etc. Cylindrical vortex beams have been generated and found applications in electromagnetic [1-8], sound [9-17], elastic [18], electron [19-21], neutron [22], and atom [23] waves. Such states contain on-axis vortex lines and carry intrinsic orbital angular momentum (OAM) along their propagation direction. Recently, there has been a great rise of interest in spatiotemporal vortex pulses (STVPs), which are generalizations of usual 'spatial' vortex states to the space-time domain and the OAM tilted
with respect to the propagation direction [24-40]. This conforms with the rapidly growing field of space-time structured waves, allowing manipulation in both spatial and temporal degrees of freedom [41,42]. In the simplest case, STVPs are flying doughnut-shaped pulses with the vortex line and OAM orthogonal to their propagation direction. Until now, such states have been generated only in optics, although theoretically they were also discussed for quantum matter and acoustic waves [33].
Here, we report the first generation of acoustic STVPs for sound waves in air. Our STVP generator is based on a meta-grating with spatial mirror-symmetry breaking, which can be further controlled by a synthetic parameter. Through mapping the transmission spectra of the meta-grating as a function of the synthetic parameter, we show that there exist vortices in the momentum−frequency domain that are created and annihilated together in pairs at a critical point by mirror-symmetry breaking. In contrast to diffraction gratings with fork-like dislocations, which are used for the generation of spatial vortex beams in the first and higher orders of diffraction [1,3,4,19], this method uses the zeroth-order transmitted field and realizes simultaneous control in both spatial and temporal domains. Importantly, similar to the topological textures in electronic and optical systems [43,44], these vortices associated with the nodal lines are robust and can be exploited to generate STVPs with topological protection against structural disorder. Our results open an avenue for spatiotemporal vortex generation and applications in acoustics and other areas of wave physics [7,11,32,45-49].
Breaking spatial mirror symmetry for generating STVPs. Our STVP generator is based on spatial mirror-symmetry breaking [32]. Figure 1a schematically displays the spatial-symmetry analysis of the spatiotemporal vortex generation [the detailed geometry is in Supplementary Material (SM) Sec. I]. Here, a spatiotemporal Gaussian pulse impinges on the structure along x = 0 (indicated by the white dashed line). Without loss of generality, assume that a meta-grating possesses mirror symmetry about the plane x = 0; the phase distribution of the transmitted pulse must then also be symmetric about the mirror plane, and thus there is no phase singularity. Therefore, the necessary condition for generating STVPs with nonzero winding numbers is mirror-symmetry breaking, which provides an asymmetric modulation for sound in both spatial and temporal domains simultaneously.
To realize the asymmetric spatiotemporal modulation, we design the meta-grating (Fig. 1a) with a unit cell consisting of four air blocks of different sizes, as indicated by the dashed boxes in SM Fig. S1. In the meta-grating, all cells are connected by a middle air channel (yellow areas in Fig. 1a). Initially, all four air blocks are symmetric about the axis x = 0. To break the spatial mirror symmetry, we introduce a synthetic parameter α, which describes the different x-directional shifts from the center, Δx_i = r_i α, where r_i is the shifting ratio given in Supplementary Table S1 for the i-th block and i = 1, 2, 3, 4. Thus, α controls the degree of mirror asymmetry of the grating.
This asymmetric modulation stemming from mirror-symmetry breaking can be illustrated by the transmission spectrum function T(k_x, ω) between the incident and transmitted waves, where ω is the angular frequency of a plane wave and k_x is the wavevector component along the structure interface. Here only the zeroth diffraction order is considered in the operating-frequency range. For the mirror-symmetric case of α = 0, as expected, the transmission spectrum function T(k_x, ω) exhibits a symmetric spectrum about k_x = 0 for both the phase and the amplitude distributions (Fig. 1b). On the other hand, by breaking the mirror symmetry, the case of α = 0.5 shows that the transmission spectrum function T(k_x, ω) has two vortices with winding numbers ℓ = +1 (white circle) and ℓ = −1 (black circle), respectively (Fig. 1c). Correspondingly, these two phase singularities coincide with zero-value transmission at the centers of the vortices. Furthermore, for α = 0.7, the two vortices are further separated with a larger strength of mirror-symmetry breaking (Fig. 1d).
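To make the vortex bookkeeping concrete, the winding number of a sampled complex transmission map can be read off numerically by accumulating wrapped phase differences around each grid plaquette. The following is a minimal sketch (our own construction; the function name and the toy field are assumptions, not the paper's code):

```python
import numpy as np

def plaquette_charges(T):
    """Topological charge on each 2x2 plaquette of a sampled complex field T.

    The wrapped phase differences accumulated around a plaquette sum to
    ~ +/-2*pi at a vortex core and to ~0 elsewhere.
    """
    phi = np.angle(T)
    wrap = lambda d: (d + np.pi) % (2 * np.pi) - np.pi
    circ = (wrap(phi[1:, :-1] - phi[:-1, :-1])     # up the left edge
            + wrap(phi[1:, 1:] - phi[1:, :-1])     # across the top edge
            + wrap(phi[:-1, 1:] - phi[1:, 1:])     # down the right edge
            + wrap(phi[:-1, :-1] - phi[:-1, 1:]))  # back along the bottom edge
    return np.rint(circ / (2 * np.pi)).astype(int)

# Toy transmission map with a single first-order vortex off the grid nodes.
kx, w = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(-1, 1, 201), indexing="ij")
T = (kx - 0.205) + 1j * (w + 0.115)        # zero amplitude + 2*pi phase winding
q = plaquette_charges(T)
print("total charge:", q.sum())             # -> 1
print("vortex cell index:", tuple(np.argwhere(q != 0)[0]))
```

On a measured map, the same detector would flag the white- and black-circled vortices of Fig. 1.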
The vortices associated with the transmission spectrum function are analogous to fundamental 'charges' in the k_x−ω domain, because they are always created or annihilated together in pairs of opposite charges. To clearly illustrate the creation and annihilation, Fig. 1e shows the parameter spectrum, which takes the form of nodal lines in the dimension extended by the asymmetry parameter α. When a plane with a fixed value of α has intersections with the nodal lines, there must be two vortices with opposite handedness appearing in the cross-section plane. Therefore, the total topological charge of the vortices is a conserved quantity of zero. Through numerical simulations, we find that the critical value of α is α_c = 0.40 for our designed meta-grating.
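The pair creation at a critical parameter value can be mimicked with a deliberately simple toy model (hypothetical, chosen only to reproduce the nodal-line phenomenology, not the actual meta-grating response): its zeros sit at k_x = 0, ω = ±√(α − α_c) once α exceeds α_c, so the vortex count jumps from zero to a +1/−1 pair while the net charge stays zero.

```python
import numpy as np

def vortex_counts(T):
    """Count +1 and -1 phase vortices on a sampled complex field."""
    p = np.angle(T)
    wrap = lambda d: (d + np.pi) % (2 * np.pi) - np.pi
    circ = (wrap(p[1:, :-1] - p[:-1, :-1]) + wrap(p[1:, 1:] - p[1:, :-1])
            + wrap(p[:-1, 1:] - p[1:, 1:]) + wrap(p[:-1, :-1] - p[:-1, 1:]))
    q = np.rint(circ / (2 * np.pi)).astype(int)
    return int((q == 1).sum()), int((q == -1).sum()), int(q.sum())

def T_toy(kx, w, a, ac=0.40):
    # Zeros at kx = 0, w = +/-sqrt(a - ac) once a > ac: a vortex/antivortex pair.
    return kx + 1j * (w**2 - (a - ac))

kx, w = np.meshgrid(np.linspace(-1, 1, 401), np.linspace(-1, 1, 401), indexing="ij")
for a in (0.2, 0.6, 0.9):
    plus, minus, net = vortex_counts(T_toy(kx, w, a))
    print(f"a = {a}: {plus} vortex(+1), {minus} vortex(-1), net charge {net}")
# a = 0.2 -> no vortices; a > 0.4 -> one +1/-1 pair; net charge is always 0
```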
To experimentally demonstrate the topological phase transition, we fabricate three meta-gratings (SM Fig. S3) and measure the transmission spectrum function (SM Sec. II). Figures 2a-c show the measured phase distributions of the transmission spectrum function for α = 0, 0.5, 0.7, and the corresponding amplitude distributions are shown in Figs. 2d-f. For the mirror-symmetric case (Figs. 2a and d), there is no phase singularity in the measured phase distribution of the transmission spectrum function. By breaking the mirror symmetry with α = 0.5, two vortices are created with winding numbers +1 and −1, marked by the white and black circles in Fig. 2b, and hence two zero-value transmissions occur at ω/2π = 8.31 kHz and ω/2π = 8.23 kHz in Fig. 2e, respectively. For the case of α = 0.7, as shown in Figs. 2c and f, the two vortices with opposite winding numbers are further separated, at ω/2π = 8.31 kHz and ω/2π = 8.04
kHz. The measured results agree that the topological phase transition appears at the critical point of α_c = 0.40. The creation of the vortices confirms that the asymmetric modulation by breaking mirror symmetry indeed induces topological charges in the (k_x, ω) domain.
Topologically protected generation of STVPs.
Owing to the asymmetric modulation, the phase singularity of the transmission spectrum function located at (k_x0, ω₀) in the (k_x, ω) domain can be directly transferred into the spatiotemporal domain. Considering an incident Gaussian wave pulse with central angular frequency ω₀ and transverse wavevector component k_x0, the transmitted wave packet can be determined to be a STVP with a nonzero winding number, opposite to that of the vortex in the (k_x, ω) domain (see SM Sec. IV). We also note that the nodal line in the space of k_x, ω, α is mathematically analogous to many topological textures in other nodal-line topological physical systems [43,44]. Because of the topological robustness of the nodal lines, the corresponding vortices with winding numbers ℓ = ±1 are stable. When the vortices are distanced from the critical points of the topological phase transition, small changes to the meta-grating geometry in real space can be treated as perturbations. The strength of such topological protection for the vortex at (k_x0, ω₀) can be evaluated by the distance to its nearest vortex, Δ = √((ω_n − ω₀)² + c²(k_n − k_x0)²), where ω_n and k_n are the frequency and the wavevector component of the nearest vortex, and c is the sound speed.
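As a small worked example, the protection distance can be evaluated directly; here c is taken to be the speed of sound in air (our reading of the prefactor), and the "nearest vortex" values are purely illustrative placeholders:

```python
import numpy as np

def protection_distance(w0, kx0, wn, kn, c=343.0):
    """Distance in the (kx, w) plane between a vortex at (kx0, w0) and its
    nearest neighbour at (kn, wn); the sound speed c matches the units."""
    return float(np.hypot(wn - w0, c * (kn - kx0)))

w0 = 2 * np.pi * 8.02e3                # vortex frequency used in the experiment
kx0 = 0.01 * w0 / 343.0                # kx0 = 0.01 k0, with k0 = w0 / c
wn, kn = w0 + 2 * np.pi * 300.0, kx0   # hypothetical neighbour 300 Hz away
print(f"Delta ~ {protection_distance(w0, kx0, wn, kn):.0f} rad/s")
```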
We next experimentally demonstrate the topologically protected generation of acoustic STVPs, schematically displayed in Fig. 3a. Here, we choose the meta-grating with the asymmetry parameter α = 1.0, because such a meta-grating has a relatively strong topological protection. We first experimentally investigate the transmission spectrum function (SM Sec. V), which exhibits a vortex at ω₀/2π = 8.02 kHz and k_x0 = 0.01k₀ (k₀ is the wavenumber in air). Since k_x0 ≪ k₀, it can therefore be used for normally incident pulses. Using an arc-like linear transducer array driven by electric signals with oscillatory Gaussian envelopes at the central carrier frequency ω₀, we produce forward-propagating Gaussian pulses with a full waist of 145 mm, a duration of 3.2 ms, and a diffraction Rayleigh range of about 383 mm (the detailed experimental setup and measurement are given in SM Sec. VI and Fig. S6).
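As a quick consistency check of the quoted beam parameters (assuming a sound speed of 343 m/s and the standard Gaussian-beam relation z_R = πw₀²/λ, with w₀ half of the quoted full waist), the numbers reproduce the stated ~383 mm Rayleigh range:

```python
import numpy as np

c = 343.0            # assumed sound speed in air, m/s
f0 = 8.02e3          # carrier frequency, Hz
lam = c / f0         # wavelength, ~42.8 mm
w0 = 0.145 / 2       # waist radius = half of the quoted 145 mm full waist
zR = np.pi * w0**2 / lam
print(f"lambda = {lam * 1e3:.1f} mm, Rayleigh range = {zR * 1e3:.0f} mm")  # ~386 mm
```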
We first consider the unperturbed meta-grating as the experimental sample (Fig. 3b). We numerically simulate the transmitted pulse envelope at the propagation distances of 64.4 mm, 78.8 mm, 93.2 mm, and 107.6 mm, which are separated by λ/3 from each other (λ is the center wavelength of the pulse), as indicated by the red dashed lines in Fig. 3a. Here the transmitted acoustic wave is ψ(r, t) = A(r, t)e^{−iω₀t}, where A(r, t) is the transmitted pulse envelope. The simulation results are depicted in Figs. 3d-g, where the HSV color and the brightness represent the phase and amplitude of the pulse envelope, respectively. We then measure the transmitted pulse envelopes, as shown in Figs. 3h-k. The experimental measurements of the transmitted pulse envelope exhibit the vanishing amplitude and the whirling dislocated phase in the spatiotemporal domain, corresponding to the winding number ℓ = −1. Furthermore, the phase distributions at different distances show that the phase of the STVP rotates around the center along with the pulse propagation, which agrees well with the central wavelength of the pulse.
Using the experimental data and numerical propagation of the field, we calculate the real-time amplitude of the pulse and the momentum density at different instants of time, which shows the propagation evolution and diffraction of the generated STVP (Fig. 4). At the early stage of the pulse generation, Fig. 4a shows that the pressure amplitude increases as the pulse propagates, and the directions of the momentum density indicate the compression and decompression of air. However, a zero-value amplitude shows up at t = −0.4 ms, which corresponds to the vortex center (Fig. 4b). Figs. 4c-d further show that the vortex rolls and diffracts along with the propagation. Since there is an ongoing theoretical debate in the community about the intrinsic OAM carried by a STVP [33,35,40], the calculation of the intrinsic OAM carried by the generated STVP is beyond the present study and needs more detailed theoretical and experimental investigation [SM Sec. VII].
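Arrow maps like those in Fig. 4 can be post-processed from a monochromatic complex pressure field p, for which the time-averaged momentum density is Im(p*∇p)/(2ρ₀ωc²). A minimal sketch with an artificial vortex field (our own code, not the authors' processing pipeline):

```python
import numpy as np

def momentum_density(p, dx, dz, rho0=1.2, w=2 * np.pi * 8.02e3):
    """Time-averaged acoustic momentum density ~ Im(p* grad p) / (2 rho0 w c^2)
    for a monochromatic complex pressure field p sampled on an (x, z) grid."""
    c = 343.0
    gx = np.gradient(p, dx, axis=0)
    gz = np.gradient(p, dz, axis=1)
    pref = 1.0 / (2 * rho0 * w * c**2)
    return pref * np.imag(np.conj(p) * gx), pref * np.imag(np.conj(p) * gz)

# Toy field: an on-axis vortex p ~ (x + i z) * Gaussian, just to exercise the code.
x, z = np.meshgrid(np.linspace(-0.05, 0.05, 101), np.linspace(-0.05, 0.05, 101),
                   indexing="ij")
p = (x + 1j * z) * np.exp(-(x**2 + z**2) / 0.02**2)
Px, Pz = momentum_density(p, x[1, 0] - x[0, 0], z[0, 1] - z[0, 0])
print(f"at (+x, 0): Px = {Px[75, 50]:.2e}, Pz = {Pz[75, 50]:.2e}")  # arrows circulate about the core
```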
To demonstrate the topological robustness of the STVP generation, we perturb the meta-grating structure by randomly placing fifteen photopolymer-resin particles of different shapes, with sizes of about 0.8-1 cm, as shown in Fig. 3c. Moreover, as a regular perturbation to the grating, we also add one more block in the unit cell. The transmission spectrum function of the perturbed meta-grating still exhibits the same phase singularity, with a slightly shifted position at ω₀/2π = 7.56 kHz and k_x0 = 0.02k₀ (SM Fig. S5). Adjusting the incident pulse to the perturbed central frequency, we measured the transmitted pulse envelopes at the propagation distances of 65.3 mm, 80.6 mm, 95.9 mm, and 111.2 mm, respectively. Figs. 3l-o clearly show a STVP quite similar to that in the unperturbed case.
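The robustness statement can also be illustrated numerically: a first-order phase vortex in a sampled complex map survives a smooth random perturbation, merely shifting its core while keeping its winding number. A toy check (our construction, unrelated to the actual grating simulation):

```python
import numpy as np

def winding(field, i, j, r=6):
    """Phase circulation (in units of 2*pi) around a square loop centred at (i, j)."""
    path = ([(i - r, j + s) for s in range(-r, r)] +
            [(i + s, j + r) for s in range(-r, r)] +
            [(i + r, j - s) for s in range(-r, r)] +
            [(i - s, j - r) for s in range(-r, r)])
    ph = np.angle(np.array([field[p] for p in path]))
    d = np.diff(np.concatenate([ph, ph[:1]]))
    return int(round(((d + np.pi) % (2 * np.pi) - np.pi).sum() / (2 * np.pi)))

rng = np.random.default_rng(1)
kx, w = np.meshgrid(np.linspace(-1, 1, 301), np.linspace(-1, 1, 301), indexing="ij")
T = (kx - 0.1) + 1j * (w + 0.2)            # clean first-order vortex
a, b = rng.standard_normal(2)
T_pert = T + 0.15 * (a * kx + b * w)       # smooth "structural" perturbation

for name, F in (("clean", T), ("perturbed", T_pert)):
    i, j = np.unravel_index(np.argmin(np.abs(F)), F.shape)
    print(f"{name}: zero near kx={kx[i, j]:+.2f}, w={w[i, j]:+.2f}, winding {winding(F, i, j)}")
```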
Discussion.
In summary, our experimental results demonstrate the topologically protected generation of acoustic STVPs in two spatial dimensions and one temporal dimension, using a 1D periodic meta-grating. On the one hand, acoustic STVPs open an avenue for acoustic space-time-structured waves, so far mostly studied in optics [41,42]. On the other hand, our new method for generating STVPs can find applications in acoustics, optics, and other types of waves. One can expect that, by designing 2D metasurfaces with an additional spatial dimension, one can synthesize full-dimensional (3+1)D spatiotemporal acoustic vortices, such as vortices with arbitrarily tilted OAM [30,31] or toroidal vortices. In general, due to drastic geometric and physical differences from conventional monochromatic vortex beams, STVPs can bring novel functionalities to acoustic/optical manipulation of particles, information transfer, and other applications [7,11,32,45-49].
Furthermore, similarly to the image processing of edge detection in the spatial domain [50-62], our STVP generator, based on the phase singularity (vortex) in the momentum-frequency domain, operates as a first-order differentiator in both spatial and temporal domains. This allows efficient extraction of the space-time boundary information in the incident sound wavepacket (SM Sec. VIII), which can find useful applications in sonar and sensing. In our experiment, the frequency bandwidth with a near-linear dependence of the transmission amplitude near the vortex center, which provides the first-order differentiation, is about 431.7 Hz.
S1. Details of the geometry parameters of the acoustic spatiotemporal vortex pulse generator
Assume that there exists a spatiotemporal vortex pulse (STVP) with winding number ℓ generated by the meta-grating. Due to the existence of mirror symmetry about the axis x = 0, there must also exist an STVP with winding number −ℓ that is generated together with the STVP of winding number ℓ. Therefore, according to the uniqueness theorem of wave equations, only the wave solution with winding number ℓ = 0 is allowed. As a result, without loss of generality, the necessary condition for generating STVPs with a nonzero winding number is mirror-symmetry breaking, which provides simultaneous control of sound in both spatial and temporal domains.
S2. Experimental setup and methods to measure the transmission spectrum function
The experimental setup is shown in Fig. S2(a). The displacement platform (LINBOU NFS03) and the data acquisition module are integrated into a PC. The meta-grating and the sound absorber are placed between two glass plates, as shown in Fig. 3(a).
To measure the transmission spectrum function in the frequency domain (as shown in Fig. S2(b)), we use 10 transducers to form a rectangular source in the spatial domain, which ensures that the spatial spectrum of the incident field is sufficiently wide but does not overlap with the non-zero diffraction orders. The incident (transmitted) acoustic wave ψ_in(out)(r, t) = A_in(out)(r, t)e^{−iω₀t}
is collected by two microphones, where A_in(out)(r, t) is the pulse envelope of the input (transmitted) waves. One microphone is fixed in the acoustic field as a reference probe, and the other probes the acoustic distribution through the displacement platform as a measurement probe. In order to obtain the transmission spectrum function of the signal, we obtain the spatial distribution of the sound field by sweeping the field, and use cross-power spectrum methods to process the data from the measurement probe and the reference probe.
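The cross-power spectrum step can be sketched with standard signal-processing tools: the complex transfer function between the reference and measurement probes is estimated as the ratio of the cross-spectral density to the reference auto-spectral density. A minimal sketch (the sampling rate, delay, and gain below are assumed placeholders, not the experimental values):

```python
import numpy as np
from scipy.signal import csd, welch

fs = 96_000                       # assumed sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
ref = np.random.default_rng(0).standard_normal(t.size)   # reference probe
meas = 0.5 * np.roll(ref, 24)                             # toy "transmitted" signal

f, P_rm = csd(ref, meas, fs=fs, nperseg=4096)   # cross-power spectrum
_, P_rr = welch(ref, fs=fs, nperseg=4096)       # reference auto-spectrum
H = P_rm / P_rr                                 # complex transfer function H(f)

i = np.argmin(np.abs(f - 8_020))                # inspect near the carrier
print(f"|H| = {np.abs(H[i]):.2f}, phase = {np.angle(H[i]):.2f} rad at {f[i]:.0f} Hz")
```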
T(ρ, χ) = f(ρ)(C₁e^{iℓχ} + C₂e^{−iℓχ}),
where ℓ is the winding number of the vortex, and C₁ and C₂ are two constant parameters, respectively. We note that when the phase singularity is anisotropic, both C₁ and C₂ are nonzero.
The overall topological charge is ℓ if |C₁| is larger than |C₂|, and −ℓ otherwise. Suppose that an incident pulse with zero topological charge and spectrum Ã_in(ρ, χ) impinges on the meta-grating; the transmitted pulse is then obtained by multiplying Ã_in(ρ, χ) by T(ρ, χ) inside the inverse transform above. We use a curved transducer array to simultaneously generate a series of pulses with spatiotemporal Gaussian envelopes at the center frequency ω₀/2π = 7.56 × 10³ Hz, as shown in Fig. S6(a). The curved transducer array is arranged along a Gaussian-function profile, and the gap between the transducers in the x direction is 1 cm. We put the sample 50 cm away from the transducer array. The distribution of A_in(x, t) is shown in Fig. S6(b). The height shows the amplitude and the color indicates the phase distribution, respectively. Figures S6(c, d) show the amplitude distribution of the envelope along (c) t = 0 and (d) x = 0.
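The S4 recipe — Fourier-decompose the incident envelope, multiply by a vortex-carrying transmission function, and transform back — can be checked numerically. The sketch below uses the idealized transfer function T ∝ κ + iΩ (an ℓ = 1 spectral vortex), not the measured one; grid sizes and widths are arbitrary assumptions:

```python
import numpy as np

N = 256
x = np.linspace(-0.3, 0.3, N)      # transverse coordinate (arbitrary scale)
t = np.linspace(-0.01, 0.01, N)    # local time (arbitrary scale)
X, Tt = np.meshgrid(x, t, indexing="ij")
A_in = np.exp(-(X / 0.07) ** 2 - (Tt / 0.0025) ** 2)   # Gaussian envelope

# Sideband spectrum of the envelope (kappa, Omega around the carrier).
A_k = np.fft.fftshift(np.fft.fft2(A_in))
kap = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, x[1] - x[0]))
Om = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, t[1] - t[0]))
K, W = np.meshgrid(kap, Om, indexing="ij")

T_vortex = K / np.abs(kap).max() + 1j * W / np.abs(Om).max()  # l = +1 spectral vortex
A_out = np.fft.ifft2(np.fft.ifftshift(A_k * T_vortex))

ratio = np.abs(A_out[N // 2, N // 2]) / np.abs(A_out).max()
print(f"core/max amplitude = {ratio:.3f}")   # near zero: doughnut-shaped STVP
```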
S7. Discussion about the calculation of the intrinsic transverse OAM
We note that there is an ongoing theoretical debate about the value of the transverse OAM carried by STVPs in Refs. [33,35,40]. Depending on the definition of the intrinsic OAM with respect to the photon/phonon probability or energy centroid, this intrinsic OAM (normalized per photon/phonon) equals L = ℓ(γ + γ⁻¹)/2 in Ref. [33] and L = ℓγ/2 in Ref. [40], where γ is the ellipticity of the STVP shape in the (z,x) plane. For γ ≃ 7.6 for the STVP in our experiment, the two theoretical values for the intrinsic OAM are about 3.9 and 3.8, respectively. Imperfections of the STVP intensity profile as compared to the ideal elliptical shape make little difference in this case.
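For the record, the two quoted values follow directly from these definitions with ℓ = 1 and γ ≃ 7.6 (assuming the formulas as written above); a one-line check:

```python
ell, gamma = 1, 7.6
print(round(ell * (gamma + 1 / gamma) / 2, 2))   # 3.87 -> "about 3.9" (Ref. [33])
print(round(ell * gamma / 2, 2))                 # 3.8 (Ref. [40])
```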
S8. Simulation of detecting sharp changes of pulse envelopes
To demonstrate the spatiotemporal differentiator, we consider the perturbed meta-grating as shown in Fig. 3c; the corresponding transmission spectrum function is shown in Fig. S5. Here, ω₀ is the frequency at the center wavelength λ₀. The results indicate that the structure is a spatiotemporal differentiator, which can be applied to detect sharp changes of pulse envelopes, similar to the image processing of edge detection by spatial differentiators in real space (Refs. [32,50,55,60]).
To demonstrate the spatiotemporal boundary extraction, we simulate an incident pulse with the amplitude modulated as a submarine shape in the spatiotemporal domain in Fig. S7(a); the phases of the incident pulse envelope are binary, taking only 0 or π, as in Fig. S7
Fig. 1|
1|Generation of acoustic spatiotemporal vortex pulses and their topological nature. a, Sketch of a meta-grating generating a spatiotemporal vortex pulse for airborne sound by breaking mirror symmetry; the asymmetric structure is necessary to create the phase singularity in the spatiotemporal domain. b-d, Numerical simulations of the phase (above) and the amplitude (below) of the transmission spectrum function upon breaking the mirror symmetry of the meta-grating. The synthetic parameter α controls the deformation of symmetry breaking, and α = 0, 0.5, 0.7 correspond to b-d, respectively. The vortices with winding numbers +1 and −1 are indicated by white and black circles, respectively. e, Nodal lines in the dimension extended by the asymmetry parameter α, emerging at the critical value α_c = 0.40; the vortices with winding numbers +1 and −1 are shown by the green and purple lines, respectively.
Fig. 2|
2|Experimental measurement of the phase (a-c) and amplitude (d-f) of the transmission spectrum function for the asymmetry parameters α = 0, 0.5, 0.7, respectively. The vortices with winding numbers +1 and −1 are indicated by white and black circles, respectively.
Fig. 3|
3|Experimental measurement of STVP generation with the acoustic meta-grating. a, Sketch of the experimental setup with an incident Gaussian-profile pulse in both spatial and temporal domains. b, Experimental sample of the acoustic STVP generator with the asymmetry parameter α = 1.0. c, Schematics of the perturbed case, where fifteen particles of different shapes such as sphere, pyramid, cube, and ring are randomly placed, with additional small truncations. d-g, Simulation results of the transmitted pulse envelopes at positions separated by λ/3 from each other (λ is the center wavelength of the pulse). h-k, Experimental measurements of the transmitted pulse envelopes at the corresponding positions of d-g, respectively. l-o, Experimental measurement of topologically protected STVP generation with perturbation, at the corresponding positions away from the sample, respectively.
Fig. 4|
4|Acoustic STVP propagation in real space. a-f, The pressure amplitude distribution (colormap) and the momentum density (arrows) of the transmitted wave, shown at different instants of time.
Fig. S1
S1 Schematic of the air channels in the meta-grating shown in Fig. 1a of the main text. The block marked in yellow is fixed, and the other four blocks shift toward the left or right. Table S1: Structure parameters and shifting ratios r_i for the four rectangular blocks in Fig. S1. A block shifts toward the right (left) when r_i is positive (negative). The structure of the STVP generator consists of a fixed symmetric air unit and four air blocks which are deformed to break mirror symmetry. The fixed part is marked in yellow in Fig. S1. For α = 0, the four blocks are spatially mirror-symmetric with respect to the fixed part. When the four blocks are shifted to the left or right, the mirror symmetry of the structure is gradually broken. The offsets of the four blocks relative to the axis of symmetry are defined as Δx_i = r_i α, where r_i is the shifting ratio, different for each block, and α is a synthetic parameter characterizing the degree of asymmetry. The specific values of the parameters for the four blocks and their corresponding shifting ratios are given in Table S1.
A data acquisition module (Brüel & Kjaer 3160-A-042-R) is used to collect the acoustic-field data and control the output waveform. Two microphones (Brüel & Kjaer 4193-L-004) are connected to the data acquisition module and used to measure the acoustic field. We use a power amplifier (Brüel & Kjaer 2735) to amplify the input signal.
Fig. S2
S2 The experimental setup and the measuring process. (a) The experimental setup. (b) The measuring process. S3. Diagram of the experimental structures with different shifting ratios. Figs. S3(a-c) show the experimental samples, which were fabricated with 3D printing technology, for α = 0, 0.5, 0.7, respectively.
Fig. S3
S3 (a-c) The experimental samples of the meta-gratings for α = 0, 0.5, 0.7. S4. The detailed derivation of the STVP generation. In order to specifically depict the transformation between the incident and transmitted pulses while propagating through the STVP generator, we decompose the incident (transmitted) pulse envelope into a series of plane waves by the Fourier transform A_in(out)(x, t) = ∬ Ã_in(out)(κ, Ω) e^{i(κx − Ωt)} dκ dΩ, where κ = k_x − k_x0 is the wavevector component shifted along the x direction, and Ω = ω − ω₀ is the sideband angular frequency measured from the center one ω₀. In polar coordinates, the expression is converted to A_in(out)(r, φ) = ∫₀^{2π} ∫₀^{∞} Ã_in(out)(ρ, χ) e^{iρr cos(χ + φ)} ρ dρ dχ, where ρ = √(κ² + Ω²), χ = arctan(Ω/κ), r = √(x² + t²), and φ = arctan(t/x). According to the restriction of the winding number around the vortex, within a small enough region the transmission spectrum function of the STVP generator can be written as T(ρ, χ), which without loss of generality can be expressed as T(ρ, χ) = f(ρ)(C₁e^{iℓχ} + C₂e^{−iℓχ}).
Fig. S6
S6 Experimental setup and the incident pulse, which has Gaussian-like profiles in both the spatial and temporal domains. (a) Experimental setup for measuring the STVP. (b) Amplitude and phase distribution of the envelope of the incident pulse. (c, d) Amplitude distribution of the envelope along (c) t = 0 and (d) x = 0.
The phase distribution of T exhibits that the vortex in the k_x−ω domain still survives, which leads to a zero amplitude at ω₀/2π = 7.56 × 10³ Hz and k_x0 = 0.02k₀. Furthermore, |T| exhibits a good linear dependence on ω/ω₀ at k_x = k_x0 and on k_x/k₀ at ω = ω₀ within a certain bandwidth, and the phase shifts by π at the minima occurring at ω = ω₀ and k_x = k_x0, respectively, which indicates that the structure still enables first-order differentiation in both the spatial and temporal domains. Around ω = ω₀ and k_x = k_x0, the transmission spectrum function has the form T ≈ C_t(ω/ω₀ − 0.598) + C_s(k_x/k₀ − 0.012), where C_t and C_s are two complex numbers.
(b), thus without phase singularities. We simulate the pulse transmitted through the perturbed meta-grating with the central frequency at ω₀/2π = 7.56 × 10³ Hz and the central transverse wavevector component 0.02k₀. Figs. S7(c) and S7(d) correspond to the amplitude and phase distributions of the transmitted pulse envelope, respectively. Moreover, the phase distribution of the transmitted pulse exhibits the generation of a large number of adjacent STVPs in the spatiotemporal domain. As the generated STVPs interfere with each other, constructive interference occurs at the sharp changes of the incident pulse envelope in both spatial and temporal domains, while destructive interference takes place where the amplitudes vary only slightly.
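A numerical caricature of this boundary extraction (our construction: an idealized first-order transfer function C_tΩ + C_sκ applied to a binary-amplitude rectangle standing in for the torpedo/submarine shape; the coefficients are assumptions):

```python
import numpy as np

N = 256
x = np.linspace(-0.3, 0.3, N)
t = np.linspace(-0.01, 0.01, N)
X, Tt = np.meshgrid(x, t, indexing="ij")
A_in = ((np.abs(X) < 0.12) & (np.abs(Tt) < 0.004)).astype(float)  # binary target

A_k = np.fft.fft2(A_in)
kap = 2 * np.pi * np.fft.fftfreq(N, x[1] - x[0])
Om = 2 * np.pi * np.fft.fftfreq(N, t[1] - t[0])
K, W = np.meshgrid(kap, Om, indexing="ij")

Ct = 1.0 / np.abs(Om).max()        # assumed coefficients, unit-normalized,
Cs = 1.0j / np.abs(kap).max()      # in quadrature between the two terms
A_out = np.fft.ifft2((Ct * W + Cs * K) * A_k)

interior = np.abs(A_out[N // 2, N // 2])
edge = np.abs(A_out[np.argmin(np.abs(x - 0.12)), N // 2])
print(f"interior = {interior:.3f}, edge = {edge:.3f}")   # edge >> interior
```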
Fig. S7
S7 Spatiotemporal boundary extraction for an arbitrary amplitude-modulated spatiotemporal pulse. (a) Amplitude and (b) phase distributions of an incident pulse envelope shaped as a torpedo. The phases of the incident pulse envelope are binary, taking only 0 or π, without phase singularities. (c) Amplitude and (d) phase distributions of the transmitted pulse envelope.
Laser beams with screw dislocations in their wavefronts. V Y Bazhenov, M V Vasnetsov, M S Soskin, Jetp. Lett. 52429V. Y. Bazhenov, M. V. Vasnetsov, and M. S. Soskin, Laser beams with screw dislocations in their wavefronts, Jetp. Lett. 52, 429 (1990).
Orbital angular momentum of light and the transformation of Laguerre-Gaussian laser modes. L Allen, M W Beijersbergen, R J C Spreeuw, J P Woerdman, Phys. Rev. A. 458185L. Allen, M. W. Beijersbergen, R. J. C. Spreeuw, and J. P. Woerdman, Orbital angular momentum of light and the transformation of Laguerre-Gaussian laser modes, Phys. Rev. A 45, 8185 (1992).
J P Torres, L Torner, Twisted photons: applications of light with orbital angular momentum. John Wiley & SonsJ. P. Torres and L. Torner, Twisted photons: applications of light with orbital angular momentum (John Wiley & Sons, 2011).
D L Andrews, M Babiker, The angular momentum of light. Cambridge University PressD. L. Andrews and M. Babiker, The angular momentum of light (Cambridge University Press, 2012).
Orbital angular momentum: origins, behavior and applications. A M Yao, M J Padgett, Adv. Opt. Photonics. 3161A. M. Yao and M. J. Padgett, Orbital angular momentum: origins, behavior and applications, Adv. Opt. Photonics 3, 161 (2011).
Advances in optical angular momentum. S Franke-Arnold, L Allen, M Padgett, Laser Photonics Rev. 2299S. Franke-Arnold, L. Allen, and M. Padgett, Advances in optical angular momentum, Laser Photonics Rev. 2, 299 (2008).
Optical vortices 30 years on: OAM manipulation from topological charge to multiple singularities. Y Shen, X Wang, Z Xie, C Min, X Fu, Q Liu, M Gong, X Yuan, Light Sci. Appl. 890Y. Shen, X. Wang, Z. Xie, C. Min, X. Fu, Q. Liu, M. Gong, and X. Yuan, Optical vortices 30 years on: OAM manipulation from topological charge to multiple singularities, Light Sci. Appl. 8, 90 (2019).
Spontaneous generation and active manipulation of real-space optical vortices. D Kim, A Baucour, Y.-S Choi, J Shin, M.-K Seo, Nature. 61148D. Kim, A. Baucour, Y.-S. Choi, J. Shin, and M.-K. Seo, Spontaneous generation and active manipulation of real-space optical vortices, Nature 611, 48 (2022).
An acoustical helicoidal wave transducer with applications for the alignment of ultrasonic and underwater systems. B T Hefner, P L Marston, J. Acoust. Soc. Am. 1063313B. T. Hefner and P. L. Marston, An acoustical helicoidal wave transducer with applications for the alignment of ultrasonic and underwater systems, J. Acoust. Soc. Am. 106, 3313 (1999).
Transfer of Angular Momentum to Matter from Acoustical Vortices in Free Space. K Volke-Sepúlveda, A O Santillán, R R Boullosa, Phys. Rev. Lett. 10024302K. Volke-Sepúlveda, A. O. Santillán, and R. R. Boullosa, Transfer of Angular Momentum to Matter from Acoustical Vortices in Free Space, Phys. Rev. Lett. 100, 024302 (2008).
A review on acoustic vortices: Generation, characterization, applications and perspectives. S Guo, Z Ya, P Wu, M Wan, J. Appl. Phys. 132210701S. Guo, Z. Ya, P. Wu, and M. Wan, A review on acoustic vortices: Generation, characterization, applications and perspectives, J. Appl. Phys. 132, 210701 (2022).
Convert Acoustic Resonances to Orbital Angular Momentum. X Jiang, Y Li, B Liang, J Cheng, L Zhang, Phys. Rev. Lett. 11734301X. Jiang, Y. Li, B. Liang, J.-c. Cheng, and L. Zhang, Convert Acoustic Resonances to Orbital Angular Momentum, Phys. Rev. Lett. 117, 034301 (2016).
Making sound vortices by metasurfaces. L Ye, C Qiu, J Lu, K Tang, H Jia, M Ke, S Peng, Z Liu, Aip Adv. 685007L. Ye, C. Qiu, J. Lu, K. Tang, H. Jia, M. Ke, S. Peng, and Z. Liu, Making sound vortices by metasurfaces, Aip Adv. 6, 085007 (2016).
Generation of acoustic helical wavefronts using metasurfaces. H Esfahlani, H Lissek, J R Mosig, Phys. Rev. B. 9524312H. Esfahlani, H. Lissek, and J. R. Mosig, Generation of acoustic helical wavefronts using metasurfaces, Phys. Rev. B 95, 024312 (2017).
Generation of topologically diverse acoustic vortex beams using a compact metamaterial aperture. C J Naify, C A Rohde, T P Martin, M Nicholas, M D Guild, G J Orris, Appl. Phys. Lett. 108223503C. J. Naify, C. A. Rohde, T. P. Martin, M. Nicholas, M. D. Guild, and G. J. Orris, Generation of topologically diverse acoustic vortex beams using a compact metamaterial aperture, Appl. Phys. Lett. 108, 223503 (2016).
Broadband and stable acoustic vortex emitter with multi-arm coiling slits. X Jiang, J Zhao, S Liu, B Liang, X Zou, J Yang, C.-W Qiu, J.-C Cheng, Appl. Phys. Lett. 108203501X. Jiang, J. Zhao, S.-l. Liu, B. Liang, X.-y. Zou, J. Yang, C.-W. Qiu, and J.-c. Cheng, Broadband and stable acoustic vortex emitter with multi-arm coiling slits, Appl. Phys. Lett. 108, 203501 (2016).
Particle manipulation with acoustic vortex beam induced by a brass plate with spiral shape structure. T Wang, M Ke, W Li, Q Yang, C Qiu, Z Liu, Appl. Phys. Lett. 109123506T. Wang, M. Ke, W. Li, Q. Yang, C. Qiu, and Z. Liu, Particle manipulation with acoustic vortex beam induced by a brass plate with spiral shape structure, Appl. Phys. Lett. 109, 123506 (2016).
Elastic orbital angular momentum transfer from an elastic pipe to a fluid. G J Chaplain, J M De Ponti, T A Starkey, Commun. Phys. 5279G. J. Chaplain, J. M. De Ponti, and T. A. Starkey, Elastic orbital angular momentum transfer from an elastic pipe to a fluid, Commun. Phys. 5, 279 (2022).
Theory and applications of free-electron vortex states. K Y Bliokh, Phys. Rep. 6901K. Y. Bliokh et al., Theory and applications of free-electron vortex states, Phys. Rep. 690, 1 (2017).
Production and application of electron vortex beams. J Verbeeck, H Tian, P Schattschneider, Nature. 467301J. Verbeeck, H. Tian, and P. Schattschneider, Production and application of electron vortex beams, Nature 467, 301 (2010).
Direct observation of vortices in an electron fluid. A Aharon-Steinberg, Nature. 60774A. Aharon-Steinberg et al., Direct observation of vortices in an electron fluid, Nature 607, 74 (2022).
Controlling neutron orbital angular momentum. C W Clark, R Barankov, M G Huber, M Arif, D G Cory, D A Pushin, Nature. 525504C. W. Clark, R. Barankov, M. G. Huber, M. Arif, D. G. Cory, and D. A. Pushin, Controlling neutron orbital angular momentum, Nature 525, 504 (2015).
Vortex beams of atoms and molecules. A Luski, Science. 3731105A. Luski et al., Vortex beams of atoms and molecules, Science 373, 1105 (2021).
A P Sukhorukov, V V Yangirova, Proc. SPIE. 5949594906A. P. Sukhorukov and V. V. Yangirova, in Proc. SPIE 5949, p. 594906 (2005).
Spatiotemporal vortex beams and angular momentum. K Y Bliokh, F Nori, Phys. Rev. A. 8633824K. Y. Bliokh and F. Nori, Spatiotemporal vortex beams and angular momentum, Phys. Rev. A 86, 033824 (2012).
N Jhajj, I Larkin, E W Rosenthal, S Zahedpour, J K Wahlstrand, H M Milchberg, Spatiotemporal Optical Vortices. 631037N. Jhajj, I. Larkin, E. W. Rosenthal, S. Zahedpour, J. K. Wahlstrand, and H. M. Milchberg, Spatiotemporal Optical Vortices, Phys. Rev. X 6, 031037 (2016).
Generation of spatiotemporal optical vortices with controllable transverse orbital angular momentum. A Chong, C Wan, J Chen, Q Zhan, Nat. Photonics. 14350A. Chong, C. Wan, J. Chen, and Q. Zhan, Generation of spatiotemporal optical vortices with controllable transverse orbital angular momentum, Nat. Photonics 14, 350 (2020).
Free-space propagation of spatiotemporal optical vortices. S W Hancock, S Zahedpour, A Goffin, H M Milchberg, Optica. 61547S. W. Hancock, S. Zahedpour, A. Goffin, and H. M. Milchberg, Free-space propagation of spatiotemporal optical vortices, Optica 6, 1547 (2019).
Spatiotemporal Vortex Pulses: Angular Momenta and Spin-Orbit Interaction. K Y Bliokh, Phys. Rev. Lett. 126243601K. Y. Bliokh, Spatiotemporal Vortex Pulses: Angular Momenta and Spin-Orbit Interaction, Phys. Rev. Lett. 126, 243601 (2021).
Spatiotemporal optical vortices with arbitrary orbital angular momentum orientation by astigmatic mode converters. Y Zang, A Mirando, A Chong, Nanophotonics. 11745Y. Zang, A. Mirando, and A. Chong, Spatiotemporal optical vortices with arbitrary orbital angular momentum orientation by astigmatic mode converters, Nanophotonics 11, 745 (2022).
Engineering arbitrarily oriented spatiotemporal optical vortices using transmission nodal lines. H Wang, C Guo, W Jin, A Y Song, S Fan, Optica. 8966H. Wang, C. Guo, W. Jin, A. Y. Song, and S. Fan, Engineering arbitrarily oriented spatiotemporal optical vortices using transmission nodal lines, Optica 8, 966 (2021).
Spatiotemporal Differentiators Generating Optical Vortices with Transverse Orbital Angular Momentum and Detecting Sharp Change of Pulse Envelope. J Huang, J Zhang, T Zhu, Z Ruan, Laser Photonics Rev. 162100357J. Huang, J. Zhang, T. Zhu, and Z. Ruan, Spatiotemporal Differentiators Generating Optical Vortices with Transverse Orbital Angular Momentum and Detecting Sharp Change of Pulse Envelope, Laser Photonics Rev. 16, 2100357 (2022).
Orbital angular momentum of optical, acoustic, and quantum-mechanical spatiotemporal vortex pulses. K Y Bliokh, Phys. Rev. A. 10731501K. Y. Bliokh, Orbital angular momentum of optical, acoustic, and quantum-mechanical spatiotemporal vortex pulses, Phys. Rev. A 107, L031501 (2023).
Spatiotemporal optical differentiation and vortex generation with metal-dielectric-metal multilayers. L L Doskolovich, A I Kashapov, E A Bezus, D A Bykov, Phys. Rev. A. 10633523L. L. Doskolovich, A. I. Kashapov, E. A. Bezus, and D. A. Bykov, Spatiotemporal optical differentiation and vortex generation with metal-dielectric-metal multilayers, Phys. Rev. A 106, 033523 (2022).
Mode Structure and Orbital Angular Momentum of Spatiotemporal Optical Vortex Pulses. S W Hancock, S Zahedpour, H M Milchberg, Phys. Rev. Lett. 127193901S. W. Hancock, S. Zahedpour, and H. M. Milchberg, Mode Structure and Orbital Angular Momentum of Spatiotemporal Optical Vortex Pulses, Phys. Rev. Lett. 127, 193901 (2021).
Stable knot-like structures in classical field theory. L Faddeev, A J Niemi, Nature. 38758L. Faddeev and A. J. Niemi, Stable knot-like structures in classical field theory, Nature 387, 58 (1997).
Second-harmonic generation of spatiotemporal optical vortices and conservation of orbital angular momentum. S W Hancock, S Zahedpour, H M Milchberg, Optica. 8594S. W. Hancock, S. Zahedpour, and H. M. Milchberg, Second-harmonic generation of spatiotemporal optical vortices and conservation of orbital angular momentum, Optica 8, 594 (2021).
C Wan, Y Shen, A Chong, Q Zhan, Scalar optical hopfions. 222C. Wan, Y. Shen, A. Chong, and Q. Zhan, Scalar optical hopfions, eLight 2, 22 (2022).
Plasmonic Generation of Spatiotemporal Optical Vortices. A I Kashapov, E A Bezus, D A Bykov, L L Doskolovich, Photonics. 10109A. I. Kashapov, E. A. Bezus, D. A. Bykov, and L. L. Doskolovich, Plasmonic Generation of Spatiotemporal Optical Vortices, Photonics 10, 109 (2023).
M A Porras, arXiv:2301.09105Transverse orbital angular momentum of spatiotemporal optical vortices. M. A. Porras, Transverse orbital angular momentum of spatiotemporal optical vortices, arXiv:2301.09105 (2023).
M Yessenov, L A Hall, K L Schepler, A F Abouraddy, Space-time wave packets. 14455M. Yessenov, L. A. Hall, K. L. Schepler, and A. F. Abouraddy, Space-time wave packets, Adv. Opt. Photonics 14, 455 (2022).
Y Shen, arXiv:2210.11273Roadmap on spatiotemporal light fields. Y. Shen et al., Roadmap on spatiotemporal light fields, arXiv:2210.11273 (2022).
Topological nodal line semimetals*. C Fang, H Weng, X Dai, Z Fang, Chin. Phys. B. 25117106C. Fang, H. Weng, X. Dai, and Z. Fang, Topological nodal line semimetals*, Chin. Phys. B 25, 117106 (2016).
Weyl points and line nodes in gyroid photonic crystals. L Lu, L Fu, J D Joannopoulos, M Soljačić, Nat. Photonics. 7294L. Lu, L. Fu, J. D. Joannopoulos, and M. Soljačić, Weyl points and line nodes in gyroid photonic crystals, Nat. Photonics 7, 294 (2013).
A revolution in optical manipulation. D G Grier, Nature. 424810D. G. Grier, A revolution in optical manipulation, Nature 424, 810 (2003).
Terabit free-space data transmission employing orbital angular momentum multiplexing. J Wang, Nat. Photonics. 6488J. Wang et al., Terabit free-space data transmission employing orbital angular momentum multiplexing, Nat. Photonics 6, 488 (2012).
Observation of Orbital Angular Momentum Transfer from Bessel-Shaped Acoustic Vortices to Diphasic Liquid-Microparticle Mixtures. Z Hong, J Zhang, B W Drinkwater, Phys. Rev. Lett. 114214301Z. Hong, J. Zhang, and B. W. Drinkwater, Observation of Orbital Angular Momentum Transfer from Bessel-Shaped Acoustic Vortices to Diphasic Liquid-Microparticle Mixtures, Phys. Rev. Lett. 114, 214301 (2015).
High-speed acoustic communication by multiplexing orbital angular momentum. C Shi, M Dubois, Y Wang, X Zhang, Proc. Natl. Acad. Sci. 1147250C. Shi, M. Dubois, Y. Wang, and X. Zhang, High-speed acoustic communication by multiplexing orbital angular momentum, Proc. Natl. Acad. Sci. 114, 7250 (2017).
Generating reconfigurable acoustic orbital angular momentum with double-layer acoustic metasurface. Z Li, Y Lei, K Guo, Z Guo, J. Appl. Phys. 13374901Z. Li, Y. Lei, K. Guo, and Z. Guo, Generating reconfigurable acoustic orbital angular momentum with double-layer acoustic metasurface, J. Appl. Phys. 133, 074901 (2023).
Plasmonic computing of spatial differentiation. T Zhu, Y Zhou, Y Lou, H Ye, M Qiu, Z Ruan, S Fan, Nat. Commun. 815391T. Zhu, Y. Zhou, Y. Lou, H. Ye, M. Qiu, Z. Ruan, and S. Fan, Plasmonic computing of spatial differentiation, Nat. Commun. 8, 15391 (2017).
Photonic crystal slab Laplace operator for image differentiation. C Guo, M Xiao, M Minkov, Y Shi, S Fan, Optica. 5251C. Guo, M. Xiao, M. Minkov, Y. Shi, and S. Fan, Photonic crystal slab Laplace operator for image differentiation, Optica 5, 251 (2018).
Nonlocal Metasurfaces for Optical Signal Processing. H Kwon, D Sounas, A Cordaro, A Polman, A Alù, Phys. Rev. Lett. 121173004H. Kwon, D. Sounas, A. Cordaro, A. Polman, and A. Alù, Nonlocal Metasurfaces for Optical Signal Processing, Phys. Rev. Lett. 121, 173004 (2018).
Two-Dimensional Edge Detection by Guided Mode Resonant Metasurface. A Saba, M R Tavakol, P Karimi-Khoozani, A Khavasi, IEEE Photon. Technol. Lett. 30853A. Saba, M. R. Tavakol, P. Karimi-Khoozani, and A. Khavasi, Two-Dimensional Edge Detection by Guided Mode Resonant Metasurface, IEEE Photon. Technol. Lett. 30, 853 (2018).
High-Index Dielectric Metasurfaces Performing Mathematical Operations. A Cordaro, H Kwon, D Sounas, A F Koenderink, A Alù, A Polman, Nano Lett. 198418A. Cordaro, H. Kwon, D. Sounas, A. F. Koenderink, A. Alù, and A. Polman, High-Index Dielectric Metasurfaces Performing Mathematical Operations, Nano Lett. 19, 8418 (2019).
Generalized Spatial Differentiation from the Spin Hall Effect of Light and Its Application in Image Processing of Edge Detection. T Zhu, Phys. Rev. Appl. 1134043T. Zhu et al., Generalized Spatial Differentiation from the Spin Hall Effect of Light and Its Application in Image Processing of Edge Detection, Phys. Rev. Appl. 11, 034043 (2019).
Optical edge detection based on high-efficiency dielectric metasurface. J Zhou, H Qian, C.-F Chen, J Zhao, G Li, Q Wu, H Luo, S Wen, Z Liu, Proc. Natl. Acad. Sci. Natl. Acad. Sci11611137J. Zhou, H. Qian, C.-F. Chen, J. Zhao, G. Li, Q. Wu, H. Luo, S. Wen, and Z. Liu, Optical edge detection based on high-efficiency dielectric metasurface, Proc. Natl. Acad. Sci. 116, 11137 (2019).
Optical phase mining by adjustable spatial differentiator. T Zhu, J Huang, Z Ruan, Adv. Photon. 216001T. Zhu, J. Huang, and Z. Ruan, Optical phase mining by adjustable spatial differentiator, Adv. Photon. 2, 016001 (2020).
Flat optics for image differentiation. Y Zhou, H Zheng, I I Kravchenko, J Valentine, Nat. Photonics. 14316Y. Zhou, H. Zheng, I. I. Kravchenko, and J. Valentine, Flat optics for image differentiation, Nat. Photonics 14, 316 (2020).
Photonic Spin-Multiplexing Metasurface for Switchable Spiral Phase Contrast Imaging. P Huo, Nano Lett. 202791P. Huo et al., Photonic Spin-Multiplexing Metasurface for Switchable Spiral Phase Contrast Imaging, Nano Lett. 20, 2791 (2020).
Topological optical differentiator. T Zhu, C Guo, J Huang, H Wang, M Orenstein, Z Ruan, S Fan, Nat. Commun. 12680T. Zhu, C. Guo, J. Huang, H. Wang, M. Orenstein, Z. Ruan, and S. Fan, Topological optical differentiator, Nat. Commun. 12, 680 (2021).
Analogue computing with metamaterials. F Zangeneh-Nejad, D L Sounas, A Alù, R Fleury, Nat. Rev. Mater. 6207F. Zangeneh-Nejad, D. L. Sounas, A. Alù, and R. Fleury, Analogue computing with metamaterials, Nat. Rev. Mater. 6, 207 (2021).
Fundamental limit for gain and resolution in analog optical edge detection. P Karimi, A Khavasi, S S Mousavi Khaleghi, Opt. Express. 28898P. Karimi, A. Khavasi, and S. S. Mousavi Khaleghi, Fundamental limit for gain and resolution in analog optical edge detection, Opt. Express 28, 898 (2020).
| [] |
[
"Social Influence Dialogue Systems: A Survey of Datasets and Models For Social Influence Tasks",
"Social Influence Dialogue Systems: A Survey of Datasets and Models For Social Influence Tasks"
] | [
"Kushal Chawla \nUniversity of Southern\nCalifornia\n",
"Weiyan Shi \nColumbia University\n\n",
"Jingwen Zhang [email protected] \nUniversity of California Davis\n\n",
"Gale Lucas \nUniversity of Southern\nCalifornia\n",
"Zhou Yu \nColumbia University\n\n",
"Jonathan Gratch [email protected] \nUniversity of Southern\nCalifornia\n"
] | [
"University of Southern\nCalifornia",
"Columbia University\n",
"University of California Davis\n",
"University of Southern\nCalifornia",
"Columbia University\n",
"University of Southern\nCalifornia"
] | [
"Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics"
] | Dialogue systems capable of social influence such as persuasion, negotiation, and therapy, are essential for extending the use of technology to numerous realistic scenarios. However, existing research primarily focuses on either task-oriented or open-domain scenarios, a categorization that has been inadequate for capturing influence skills systematically. There exists no formal definition or category for dialogue systems with these skills and data-driven efforts in this direction are highly limited. In this work, we formally define and introduce the category of social influence dialogue systems that influence users' cognitive and emotional responses, leading to changes in thoughts, opinions, and behaviors through natural conversations. We present a survey of various tasks, datasets, and methods, compiling the progress across seven diverse domains. We discuss the commonalities and differences between the examined systems, identify limitations, and recommend future directions. This study serves as a comprehensive reference for social influence dialogue systems to inspire more dedicated research and discussion in this emerging area. | null | [
"https://www.aclanthology.org/2023.eacl-main.53.pdf"
] | 256,231,532 | 2210.05664 | cb5211cd4e8b1463f6fe3f5714d19fbfc1b2653b |
Social Influence Dialogue Systems: A Survey of Datasets and Models For Social Influence Tasks
May 2-6, 2023
Kushal Chawla
University of Southern
California
Weiyan Shi
Columbia University
Jingwen Zhang [email protected]
University of California Davis
Gale Lucas
University of Southern
California
Zhou Yu
Columbia University
Jonathan Gratch [email protected]
University of Southern
California
Social Influence Dialogue Systems: A Survey of Datasets and Models For Social Influence Tasks
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics
the 17th Conference of the European Chapter of the Association for Computational Linguistics, May 2-6, 2023
Dialogue systems capable of social influence such as persuasion, negotiation, and therapy, are essential for extending the use of technology to numerous realistic scenarios. However, existing research primarily focuses on either task-oriented or open-domain scenarios, a categorization that has been inadequate for capturing influence skills systematically. There exists no formal definition or category for dialogue systems with these skills and data-driven efforts in this direction are highly limited. In this work, we formally define and introduce the category of social influence dialogue systems that influence users' cognitive and emotional responses, leading to changes in thoughts, opinions, and behaviors through natural conversations. We present a survey of various tasks, datasets, and methods, compiling the progress across seven diverse domains. We discuss the commonalities and differences between the examined systems, identify limitations, and recommend future directions. This study serves as a comprehensive reference for social influence dialogue systems to inspire more dedicated research and discussion in this emerging area.
Introduction
Consider a human user who signs up to interact with a persuasive dialogue system that motivates them to engage in physical exercise. The system: 1) uses social cues like small talk and empathy, useful for providing continued support, and 2) employs persuasive strategies to convince the user who, at least in the short term, is reluctant to indulge in exercise. Does such a system fit the definition of a task-oriented dialogue system, which is traditionally designed to assist users in completing their tasks, such as restaurant or flight booking (Zhang et al., 2020c)? Although the system is task-oriented or goal-oriented per se, the task here goes beyond the traditional definition of assisting users, given the possible misalignment between the goals of the system and the user. Clearly, this system is also not open-domain. Although conversations involve social open-ended interactions, there is still a concrete goal of persuading the user towards a healthier habit. (* Equal contribution; ** Co-supervised project.)
Scenarios similar to above are ubiquitous in everyday life, including games (Peskov et al., 2020), social platforms (Tan et al., 2016), and therapeutic interactions (Tanana et al., 2016). Dialogue systems for these applications require a core function in human communication, that is, social influence (Cialdini and Goldstein, 2004;Cialdini, 2009), which involves influencing users' cognitive and emotional responses, leading to changes in thoughts, opinions, and behaviors through natural conversations. This goes beyond what is captured by traditional task definitions in the dialogue community which primarily focus on task completion and social companionship.
Despite numerous independent efforts in identifying and analyzing various social influence scenarios, there is a lack of common understanding around social influence in AI research which inhibits a systematic study in this space. Further, data-driven efforts for dialogue systems in this space are highly limited. To this end, we introduce the category of social influence dialogue systems (Section 2), providing a comprehensive literature review and discussing future directions.
Developing these systems holds importance in AI research for multiple reasons. Tackling these tasks not only involves AI but also aspects of game theory, communication, linguistics, and social psychology, making them an ideal testbed for interdisciplinary AI research. Most importantly, they reflect AI's general ability to consider their partners' inputs, tailor the communication strategies, personalize the responses, and lead the conversation actively.
We design a taxonomy for existing social influence dialogue datasets, studying their task structure (symmetric vs asymmetric) and context (local vs global). We also organize them by their domains: games, multi-issue bargaining, social good, e-commerce, therapy and support, argumentation, conversational recommendations, and miscellaneous tasks (Section 3). We further design a taxonomy of existing methods, helping readers comprehend the progress and reflect on future directions. We organize them based on the system strategy, language generation, partner model, architecture, learning process, and the use of pretrained language models (Section 4). Finally, we identify key challenges and provide recommendations for future work (Section 5).
Over the years, research in task-oriented and open-domain dialogues has benefited from a myriad of survey efforts Zhang et al., 2020c;Ni et al., 2021). We instead focus on dialogue systems with social influence capabilities and present a thorough review across various domains. We hope that our work serves as a timely entry point for interested researchers to take this area further, inspiring dedicated effort and discussion on social influence in the dialogue community.
Social Influence Dialogue Systems
"Social influence is a fact of everyday life" (Gass, 2015). It is the change in thoughts, feelings, attitudes, or behaviors resulting from interaction with an individual or a group (Rashotte, 2007). Influence is measured by quantifiable proxies of the observed change, like the interest to indulge in physical exercise before or after the interaction with a system, or the final deal in a negotiation as opposed to one person taking it all. Social influence dialogue systems act interactively and influence their partners in decision-making and behavioral contexts (Zhang et al., 2020a;Lee et al., 2020). This calls for an active role by the system, distinguishing them from other well-studied scenarios, such as purely task-oriented, where systems passively assist their partners to complete tasks, and opendomain, that target social companionship. Key social influence tasks include persuasion , aiming to change users' attitudes or behaviors, and negotiation, aiming to change the users' perspective to achieve a common ground (Lewis et al., 2017). Conceptual overview: Figure 1 distinguishes between the kinds of conversational content in social influence interactions. The task-oriented content focuses on influencing for a domain-specific goal, like persuading for donation, bargaining with tradeoffs, or encouraging healthier habits. These interactions may also contain social content, such as small talk, empathy, or self-disclosure. The task-oriented content provides a context for social interactions. Depending on the task, social content is optional, but if present, can in turn build rapport and enhance user-system relationship for improved task outcomes (Liao et al., 2021). Connections with task-oriented and opendomain systems: Similar to a task-oriented or an open-domain scenario, social influence dialogue can also be seen as a sequential decision making process with the goal of maximizing the expected reward Gao et al., 2018). Our proposed category is not meant to be disjoint from these traditional categories. However, it still uniquely brings together the tasks that capture social influence, which is fundamentally absent from how we primarily define dialogue tasks in the community. Defining a new category that captures social influence dialogue would foster a dedicated effort towards this important aspect of real-world conversations.
Task-oriented scenarios focus on collaborative information exchange for a common goal of task completion. In social influence tasks, the goals of the system and the user can be different and even conflicting, leading to collaborative or noncollaborative interactions. Further, the goals can go beyond the current task (e.g. multiple therapy interactions, repeated negotiations), leading to social interactions for long-term relationships. If a scenario involves the system's goal to influence its partner, we consider it under social influence in this paper. For instance, He et al. (2018) studied buyerseller price negotiations. The task of the buyer is to negotiate for a reasonable price (arguably making it task-oriented), but achieving it requires social influence skills of engaging in trade-offs and building a rapport with the seller so as to reach an agreement. Measures of Success: The above discussion indicates that a comprehensive evaluation of social influence systems must draw from both task-oriented and open-domain dialogue research. Since there exist surveys that discuss the evaluation in these settings (Deriu et al., 2021;Li et al., 2021), we don't cover them here in detail. However, we define three essential axes for evaluation: 1) Linguistic Performance, or the system's linguistic sophistication based on automatic (e.g. perplexity, BLEU) and human (e.g. fluency, consistency, coherency) evaluation. 2) Influence Outcome, or the ability to influence defined by objective goals like the negotiated price or weight loss after therapy. 3) Partner Perception, or the subjective evaluation of the user, for instance, the user's satisfaction, likeness towards the system, and interest in interacting again. In a buyer-seller negotiation, if the seller hates the buyer in the end, no matter how favorable the deal is for the buyer, one might argue that this is still a failed negotiation for the buyer. Hence, we encourage future work to take all three dimensions into account collectively.
Social Influence Across Diverse Application Areas
We now illustrate social influence across numerous domains and application areas. In total, we curated 22 datasets from prior work that capture social influence in various forms, spanning 12 publication venues, 4 languages, and 7 application domains (see Appendix A for details on the compilation process). In general, the datasets capture the following information about an interaction: the non-conversational context for the participants (e.g. negotiation preferences or other role-specific information), the conversation between them, and outcome assessment. Optionally, some datasets also gather participant demographics and personality traits, utterance-level annotations, and subjective evaluations via post-surveys.
To understand the structural similarities and differences between these datasets, we design a taxonomy with two primary dimensions: Task Structure (Symmetric vs Asymmetric) and Context Definition (Global vs Local). Task Structure captures whether the participant roles are defined in a symmetric or an asymmetric manner. For instance, a typical multi-issue negotiation is symmetric, in the sense that both parties have their own preferences and goals based on which they actively try to reach a favorable agreement (Lewis et al., 2017). On the other hand, a counseling session between a therapist and a patient is asymmetric, where the therapist attempts to emotionally support the patient by employing social influence skills (Althoff et al., 2016). Context Definition relates to whether the input context before each interaction is defined globally or locally. For instance, the PersuasionForGood dataset globally defines the context of persuasion for charity donation, which is kept the same throughout. On the contrary, in a typical debate, although the rules are defined globally, the conversation topic and arguments are local and can vary for each conversation (Durmus and Cardie, 2019). We present this categorization in Table 1. We further categorize the datasets according to their Domain, Source, and the number of parties, and provide key statistics and the available metadata in Appendix B. We now briefly discuss the datasets in each domain.
Games: Strategy games involve social influence dynamics of trust and deception. Diplomacy captures deception in long-lasting relationships, where players forge and break alliances to dominate Europe (Peskov et al., 2020). Catan revolves around the trade of resources for acquiring roads, settlements, and cities (Asher et al., 2016;Boritchev and Amblard, 2021). The players have access to only a subset of resources that they would need, which encourages strategic influence and trade.
Multi-Issue Bargaining Tasks (MIBT): MIBT is a tractable closed-domain abstraction of a typical negotiation (Fershtman, 1990). It is based on a fixed set of issues, each with a predefined priority for each player, which essentially governs the goals of the players. If the priorities of the players align, this leads to competitive negotiations, where each party attempts to convince their partner with trade-offs and persuasive arguments. If they don't, this allows cooperative interactions where the negotiators try to find optimal divisions that benefit everyone. DealOrNoDeal (Lewis et al., 2017) involves negotiations over three issues: books, balls, and hats (a toy scoring example is sketched below). Other datasets define a more grounded scenario, such as symmetric CaSiNo (Chawla et al., 2021b) negotiations between two campsite neighbors and asymmetric JobInterview (Yamaguchi et al., 2021) negotiations between recruiters and applicants.

Social Good: Social influence is critical for social good applications. The tactics must be personalized, using knowledge that is both relevant and appealing. PersuasionForGood involves asymmetric interactions led by a persuader who attempts to convince the other participant to donate to charity by employing a variety of tactics. For instance, Logical Appeal uses reason and evidence to support the argument, while Emotional Appeal elicits specific emotions.

E-commerce: These tasks are typically asymmetric. A buyer influences the seller towards a reasonable price, while the seller tries to maximize their own profit. An effective system must combine price-related reasoning with language realization. CraigslistBargain (He et al., 2018) involves open-ended price negotiations with rich influence strategies like embellishments, side offers, emotional appeals, and the use of world knowledge. Another example is the customer support interactions in the AntiScam dataset (Li et al., 2020), where users defend themselves against attackers who try to steal sensitive personal information with convincing arguments.
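As a concrete illustration of the MIBT structure described above, the following sketch encodes a DealOrNoDeal-style scenario and scores an agreed split; the item counts and per-player point values are invented for illustration.

```python
# A minimal sketch of a DealOrNoDeal-style multi-issue bargaining scenario.
# Item counts and per-player point values below are illustrative only.

ITEM_COUNTS = {"book": 2, "hat": 1, "ball": 3}

# Each player privately values the issues differently (their "priorities").
VALUES = {
    "alice": {"book": 3, "hat": 4, "ball": 0},
    "bob":   {"book": 1, "hat": 0, "ball": 3},
}

def score(player, allocation):
    """Points a player earns from the items allocated to them."""
    return sum(VALUES[player][item] * n for item, n in allocation.items())

# An agreed split: Alice takes the hat and one book; Bob takes the rest.
alice_share = {"book": 1, "hat": 1, "ball": 0}
bob_share = {item: ITEM_COUNTS[item] - alice_share[item] for item in ITEM_COUNTS}

print(score("alice", alice_share))  # 7
print(score("bob", bob_share))      # 10
```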
Therapy & Support: Effective therapy using social influence aids in the treatment of mental and substance use disorders, along with changing undesirable behaviors like unhealthy diets. A counselor needs to be adaptive and personalized, should understand the core issues, and should facilitate a change in the patient's perspective (Althoff et al., 2016). In SMS counseling, Althoff et al. (2016) found that linguistic influence, like pushing the conversation in the desired direction, is associated with perspective change. Similar scenarios were captured in other datasets as well (Demasi et al., 2019; Liang et al., 2021). Tanana et al. (2016) collected the Motivational Interviewing dataset, where the goal is to elicit and explore the patient's own motivations for behavior change. EmpatheticDialogues (Rashkin et al., 2019) captured empathetic support interactions, which have been associated with rapport and better task outcomes (Kim et al., 2004; Norfolk et al., 2007; Fraser et al., 2018).
Argumentation: In addition to factuality and social proof, a convincing argument must also consider intensity, valence, authoritativeness, and framing (Chaiken, 1987; Althoff et al., 2014). Tan et al. (2016) released the ChangeMyView logs from Reddit, involving discussions on numerous controversial topics. Other datasets include Debate Dot Org (DDO) debates on diverse topics (Durmus and Cardie, 2019), congressional proceedings (Thomas et al., 2006), and court hearings (Fornaciari and Poesio, 2012; D.-N.-M. et al., 2012; Ji et al., 2020).

Conversational Recommendation: Everyday scenarios naturally hold potential for influence via recommendations, for instance, a movie fan persuading their friends to watch a movie that they adore. Li et al. (2018) and Dodge et al. (2016) collected movie recommendation datasets. Instead of guiding the conversation towards a specific movie, the goal is simply to provide recommendations based on facts and personal experiences. Nevertheless, they still provide interesting examples of scenarios that can involve social influence.
Miscellaneous: The Target-Guided dataset (Tang et al., 2019) was constructed from the PersonaChat corpus (Zhang et al., 2018). Instead of being open-ended, the Target-Guided scenario defines a concrete goal of naturally guiding the conversation to a designated target subject, thereby making it a social influence setting.
Methodological Progress
Having summarized the datasets that capture social influence, we now discuss the modeling approaches developed for social influence dialogue systems. Most domains have seen efforts in analyzing human dialogue behaviors and their impact on task outcomes. Examples include analyzing deception in games (Peskov et al., 2020), the impact of persuasive strategies and dialogue acts on charity donations (Wang et al., 2019), cooperative and non-cooperative strategies in MIBT (Chawla et al., 2021b), the use of emotion expression for predicting partner perceptions (Chawla et al., 2021a), and studying semantic categories of persuasive arguments on web forums (Egawa et al., 2019). In addition, researchers have targeted various domain-specific subtasks that can be crucial for the eventual development of dialogue systems in this space. This involves research in lie detection methods (Yeh and Ku, 2021; Yu et al., 2015), discourse parsing (Shi and Huang, 2019; Ouyang et al., 2021), strategy prediction (Chawla et al., 2021b), breakdown detection (Yamaguchi et al., 2021), outcome prediction (Sinha and Dasgupta, 2021; Chawla et al., 2020; Dutt et al., 2020), and argument mining (Dutta et al., 2022).
Research that directly targets the development of dialogue systems in this space is still nascent. Among other challenges, like limited cross-cultural diversity and relatively small dataset sizes, social influence dialogue settings pose a unique challenge: an average human often exhibits sub-optimal strategic behaviors in social influence tasks (Wunderle, 2007; Babcock and Laschever, 2009). This means that standard seq2seq approaches trained on these collected datasets using supervised learning are fundamentally insufficient for developing effective dialogue systems with influence capabilities. Hence, prior work has paid special attention to the system strategy, employing different ways to model the strategy and language together.
We design a taxonomy of methods developed for social influence tasks, helping readers comprehend the progress and reflect on future directions. We organize the methods based on the system strategy, language generation, partner model, architecture, learning process, and the use of pretrained language models. We present annotations for all the surveyed methods in Table 2 and briefly discuss the common categories below.
Strategy Representation
Implicit: The most obvious way to represent the system strategy is implicitly, without any intended decoupling between system strategy and response realization. This corresponds to the usual sequence-to-sequence framework that has been a standard baseline for the methods developed in this space. An important example is the work by Lewis et al. (2017), among the first to train end-to-end dialogue models that exhibit social influence. The authors employed a neural network based on GRUs, with one unit encoding the negotiation context, one encoding the dialogue utterances, and two recurrent units generating the output agreement in a bidirectional manner.

Latent vectors: Yarats and Lewis (2018) explored latent vectors to decouple utterance semantics from its linguistic aspects. Their hierarchical approach first constructs a latent vector from the input message, which is then used for response generation and planning. These latent vectors are trained to maximize the likelihood of future dialogue messages and actions, which enables the decoupling between semantics and realization.

Dialogue Acts (DAs): Dialogue acts, such as greeting, proposing an offer, agreeing, or disagreeing, are effective at capturing a high-level structure of the dialogue flow in social influence settings, reducing the model strategy to first predicting the dialogue act for the next response. The use of DAs makes it convenient to apply reinforcement learning approaches (Zhang et al., 2020b; Yang et al., 2021), while also aiding a modular dialogue system design (He et al., 2018).

Semantic Strategies: The structural properties expressed by DAs are insufficient for capturing semantics like emotion, small talk, and appeal. To better incorporate them, researchers have relied on additional utterance-level annotations grounded in prior theories in social influence contexts (Wang et al., 2019; Chawla et al., 2021b). These strategies have been used in conjunction with DAs (Zhou et al., 2019; Joshi et al., 2020).
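To illustrate the DA route to strategy representation, the toy sketch below reduces planning to predicting the next dialogue act from the previous one with a bigram model; the tag set and training sequences are invented, and real systems would use neural classifiers over the full dialogue history.

```python
# A minimal sketch of DA-based strategy planning: predict the next
# dialogue act from the previous one with a bigram model. The tag set
# and the toy training sequences are invented for illustration.
from collections import Counter, defaultdict

train_sequences = [
    ["greet", "propose", "counter", "agree"],
    ["greet", "propose", "disagree", "counter", "agree"],
    ["propose", "counter", "counter", "agree"],
]

bigrams = defaultdict(Counter)
for seq in train_sequences:
    for prev, nxt in zip(seq, seq[1:]):
        bigrams[prev][nxt] += 1

def next_act(prev_act):
    """Most likely next dialogue act given the previous one."""
    counts = bigrams.get(prev_act)
    return counts.most_common(1)[0][0] if counts else "greet"

print(next_act("propose"))  # counter
print(next_act("counter"))  # agree
```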
Language Generation
An important aspect of the system design is an effective way to realize the language, that is, to generate the next response so that it portrays the desired strategic behaviors. Borrowing from task-oriented and open-domain research, existing dialogue models for social influence use a variety of methods to generate the final system response.
Templates and retrieval methods: Predefined templates and response retrieval from the training data simplify the generation pipeline, improving controllability and modularity. He et al. (2018) used templates in their generator, which are later filled by retrieving similar responses from the data. This allowed the authors to explore supervised and reinforcement learning at the level of DAs for the influence strategy of the system.

Conditional Generation: Text generation methods result in more diverse responses but negatively impact controllability and interpretability. Prior work relies on autoregressive text generation conditioned on the dialogue history, the non-conversational context, and additional annotations. These models either follow an encoder-decoder design (Lewis et al., 2017; Li et al., 2020; Joshi et al., 2020) or a decoder-only design (Li et al., 2020). A useful future direction is to combine generation with retrieval for knowledge-grounded settings like argumentation; similar methods have been explored for other NLP tasks like open-domain question answering and question generation (Lewis et al., 2020).
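As a minimal sketch of such conditional generation, the snippet below prefixes the dialogue history with a strategy tag and decodes with a pretrained seq2seq model; the tag name is hypothetical, and an off-the-shelf checkpoint like the one used here would first need fine-tuning on annotated dialogue data (with the tag added as a special token) to produce sensible responses.

```python
# A minimal sketch of strategy-conditioned generation with a pretrained
# seq2seq model. Prefixing the input with a strategy tag is one simple
# conditioning scheme; the tag name and model choice are assumptions,
# not the setup of any specific paper surveyed here.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

history = "Buyer: Would you take $60 for the bike?"
strategy = "<counter-offer>"  # hypothetical control tag for the next response

inputs = tokenizer(f"{strategy} {history}", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```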
Partner Modeling
Partner modeling refers to inferring the mental states of the partner based on the conversation, for example, understanding the cause that the persuadee cares about in the PersuasionForGood context, or inferring the priorities of the partner in DealOrNoDeal negotiations. Building an accurate partner model is essential in social influence settings for guiding the decision-making of the system (Baarslag et al., 2013; Zhang et al., 2020b). Hence, we discuss various ways in which prior work tackles partner modeling.

Implicit: A majority of the efforts do not explicitly model the behavior of the partner; instead, this behavior implicitly guides the next response of the sequence-to-sequence dialogue system pipeline.

Explicit: Other work infers partner attributes directly, for instance, opponent modeling in negotiation dialogues (Chawla et al., 2022), although this approach is yet to be used in an end-to-end system.
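A minimal sketch of explicit partner modeling, framed as text classification over the partner's utterances, is given below; the CaSiNo-style utterances and priority labels are invented for illustration.

```python
# A minimal sketch of explicit partner modeling: classify which issue
# the partner seems to value most from their utterance. The training
# utterances and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

utterances = [
    "I really need the firewood to keep my kids warm",
    "Water is essential for us, we can't hike without it",
    "We brought nothing to eat, food is our top concern",
    "Extra wood would help, the nights get very cold",
]
top_priority = ["firewood", "water", "food", "firewood"]

partner_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
partner_model.fit(utterances, top_priority)

# With this little data the prediction is illustrative only.
print(partner_model.predict(["my children will be freezing at night"]))
```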
Training
Architecture Choices: One crucial aspect is the architecture design: end-to-end (Lewis et al., 2017; Radford et al., 2019) vs. modular (He et al., 2018). While end-to-end methods improve diversity and need less manual effort, a modularized design enhances controllability and explainability. Perhaps this is why modular methods are popular in large-scale models (Hadfi et al., 2021). Improving the control of desired variables such as topics, strategy, or emotion in end-to-end methods is an open area of research and is yet to be explored for social influence dialogue systems.

Supervised Learning (SL) and Reinforcement Learning (RL): Zhou et al. (2019) used SL to train a hierarchical encoder-decoder for generating the next response and used Finite State Transducers (FSTs) to encode the historic sequence of DAs and persuasive strategies into the model, showing improvements in negotiation and persuasion tasks. The performance was later improved by Joshi et al. (2020), who replaced the FSTs with Graph Neural Networks to better model the interdependencies. Others have relied on RL to explicitly optimize the model on task-specific objective outcomes. While SL trains the model to mimic average human behavior, RL techniques, such as those based on REINFORCE (Williams, 1992), allow the system to explore its own strategies in the wild while being guided by one or more overall reward metrics. Lewis et al. (2017) used RL in negotiations, with the final points scored in the agreed deal as the reward (a minimal sketch of such an update appears below). More recent work employed RL to incorporate simplistic partner models into the decision-making process of the dialogue system, showing improvements in negotiation tasks (Zhang et al., 2020b; Yang et al., 2021).
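The following sketch shows the shape of such a REINFORCE update, with the points scored in the agreed deal as the episode reward; the toy linear policy, episode data, and baseline are stand-ins for a real dialogue model.

```python
# A minimal sketch of a REINFORCE-style update for a negotiation policy,
# in the spirit of Lewis et al. (2017): the reward is the points scored
# in the agreed deal. The policy network and episode data are toy stand-ins.
import torch

policy = torch.nn.Linear(16, 8)          # toy policy over 8 dialogue acts
optimizer = torch.optim.SGD(policy.parameters(), lr=0.1)

state = torch.randn(5, 16)               # 5 decision points in one dialogue
actions = torch.randint(0, 8, (5,))      # acts the agent actually took
reward, baseline = 7.0, 5.0              # deal points vs. a running average

log_probs = torch.log_softmax(policy(state), dim=-1)
chosen = log_probs[torch.arange(5), actions]

# REINFORCE: push up the log-probability of the taken actions in
# proportion to how much the episode reward exceeded the baseline.
loss = -(reward - baseline) * chosen.sum()
optimizer.zero_grad()
loss.backward()
optimizer.step()
```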
Multi-tasking and Pretraining: Limited efforts have also explored multi-tasking and pretrained language models for social influence dialogue systems, which provide promising ways to deal with the challenge of insufficient training data. Liu (2021) trained a sequence-to-sequence transformer on a mix of the Cornell Movie Dialogue corpus (Danescu-Niculescu-Mizil and Lee, 2011) and psychotherapy data. Li et al. (2020) fine-tuned the GPT model (Radford et al., 2018), while employing multi-tasking to incorporate intents and slots for both the human and the system. Wu et al. (2021) recently introduced ARDM, which uses GPT-2 (Radford et al., 2019) to separately encode the utterances of the human and the dialogue system, reducing the reliance on additional annotations.
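A minimal sketch of the multi-tasking idea: the response-generation loss is combined with an auxiliary intent-classification loss, loosely in the spirit of Li et al. (2020); all tensors, heads, and the mixing weight are illustrative stand-ins.

```python
# A minimal sketch of multi-task fine-tuning for a dialogue model:
# combine a response-generation loss with an auxiliary intent loss.
# All shapes, heads, and the mixing weight are illustrative stand-ins.
import torch

hidden = torch.randn(4, 32, requires_grad=True)      # pooled encoder states
lm_logits = torch.randn(4, 100, requires_grad=True)  # toy vocabulary of 100

intent_head = torch.nn.Linear(32, 6)                 # 6 hypothetical intents
lm_targets = torch.randint(0, 100, (4,))
intent_targets = torch.randint(0, 6, (4,))

lm_loss = torch.nn.functional.cross_entropy(lm_logits, lm_targets)
intent_loss = torch.nn.functional.cross_entropy(intent_head(hidden), intent_targets)

loss = lm_loss + 0.5 * intent_loss  # jointly optimize both objectives
loss.backward()
```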
Discussion and Recommendations
The past few years have seen exciting progress in social influence dialogue systems. However, building sophisticated and practically useful systems remains a challenging endeavor. Several limitations still exist that must be addressed. To guide future work, we now discuss the key challenges and provide our recommendations.

Need for unifying the efforts: One challenge in this space has been the lack of large-scale datasets for model training. Social influence tasks are complex for crowdsourcing workers to understand and participate in. Hence, prior work used extensive instructions and tutorials, making the studies expensive and time-consuming (Chawla et al., 2021b). To address this, we recommend that researchers aim for a more unified view of the efforts in social influence. First, this would encourage researchers to adopt the best practices from other social influence scenarios. For instance, most datasets miss out on user attributes like demographics and personality, which are crucial in social influence scenarios (Stuhlmacher and Walters, 1999; Bogaert et al., 2008). Most datasets also ignore the partner's perception after the interaction is over. This can result in misleading conclusions about model performance, where models perform well objectively but hurt the relationship with their partners, negatively impacting practical utility (Aydogan et al., 2020).
Secondly, a holistic outlook will promote transfer learning and domain adaptation. Our taxonomy for datasets (Table 1) governs the way systems must be modeled and trained. Task structure is crucial to understand whether the model can learn from the utterances of all parties or just one. Further, understanding the context definition guides how it must be encoded. Hence, one interesting future direction is joint training on datasets with a similar structure and context definition.

Finally, progress in task-oriented and open-domain systems can inspire more unified modeling for social influence tasks involving multiple skills in the same interaction (e.g., a combination of negotiation and persuasion tactics, as is common in realistic scenarios). Roller et al. (2020) blend various open-domain tasks to address multiple challenges together (e.g., persona-based, knowledge-enriched, etc.). Hosseini-Asl et al. (2020) concatenate structured and unstructured data in task-oriented dialogues, unifying task-oriented dialogue system building as a single sequence generation problem. Future work should explore similar unified approaches for social influence settings as well, especially since these tasks follow a common conceptual foundation (Figure 1), with similar evaluation and theoretical principles (Cialdini, 2009).
To encourage this unified view, we encapsulate our insights from this survey effort in a theoretical framework, which is presented in Appendix C. The framework covers key components for designing a social influence dialogue task, including system attributes, target audience, underlying modeling techniques, and evaluation mechanisms.

Theory integration: Most modeling efforts are based on crowdsourced datasets. Since crowdsourcing workers may not exhibit optimal strategies, supervised training on these datasets is fundamentally insufficient to build an effective system for applications like pedagogy (teaching social skills to students). Unfortunately, this holds regardless of how the system strategy and partner model are designed. Further, using RL to optimize on objective rewards is also not expected to be enough to reliably learn complex influence capabilities, especially when the reward is restrictive.
To address this, we recommend tapping into the vast amount of research effort in the social sciences and psychology on building theories for social influence (Cameron, 2009; Giles, 2016; Lewicki et al., 2016; Cialdini and Goldstein, 2004). Instead of solely relying on the collected data, future work should consider leveraging fundamentals from this research to guide the dialogue policy. Previous works have studied resistance to social influence (Knowles and Linn, 2004; Dal Cin et al., 2004; Petty and Cacioppo, 1977; Ahluwalia, 2000). Rucker et al. (2004) found that people resist persuasion differently depending on their beliefs, suggesting that the social influence process should be personalized. One can also employ politeness theory (Brown and Levinson, 1978) and model the participants' face acts to better understand users in social influence contexts (Dutt et al., 2020).
Task Evaluation: Another key limitation of existing work is the lack of a comprehensive evaluation. Prior work largely focused on objective metrics, which provide only a limited view of model performance. A comprehensive evaluation is challenging since it must consider partner perception along with objective outcomes. Building user simulators could potentially alleviate this problem (Li et al., 2016; Jain et al., 2018). Most existing simulators are developed for task-oriented systems that follow a certain agenda. Future research should study how to use partner modeling to build social influence user simulators for more efficient and accurate task evaluation (He et al., 2018; Yang et al., 2020). For instance, one could design different user personalities and simulate the change in the user's beliefs, opinions, and attitudes accordingly (Yang et al., 2021).
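As a minimal sketch of simulator-based evaluation, the snippet below runs a persuasion policy against a simulated user whose willingness to donate drifts with each appeal; the persona parameters and update rule are invented for illustration.

```python
# A minimal sketch of evaluating a persuasion system against a simulated
# user whose willingness to donate drifts with each persuasive appeal.
# The update rule and parameters are invented for illustration.
import random

class SimulatedUser:
    def __init__(self, persuadability=0.2):
        self.persuadability = persuadability
        self.willingness = 0.1  # initial inclination to donate, in [0, 1]

    def react(self, strategy):
        # This simulated persona responds more to emotional appeals.
        boost = {"emotional_appeal": 1.5, "logical_appeal": 1.0}.get(strategy, 0.5)
        self.willingness = min(1.0, self.willingness + self.persuadability * 0.1 * boost)

def run_episode(system_policy, user, turns=10):
    for _ in range(turns):
        user.react(system_policy())
    return user.willingness  # proxy for the influence outcome

random.seed(0)
policy = lambda: random.choice(["emotional_appeal", "logical_appeal", "greeting"])
print(run_episode(policy, SimulatedUser()))
```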
Multimodal systems: Being a core function of human communication, social influence occurs not just through text but through all possible modalities. Schulman and Bickmore (2009) showed that embodied agents achieve better persuasion results than text-only agents. Other studies have recognized the importance of emotion in social influence tasks (Asai et al., 2020; Chawla et al., 2021a). Nguyen et al. (2021) proposed a speech dataset in debates and studied the influence of spoken tactics on persuasiveness across genders. Given these findings, we encourage interdisciplinary efforts in the future to explore the development of multimodal social influence agents.
Knowledge-enriched systems: Social influence tasks often involve constantly changing world knowledge such as organization facts and news. Often, the system's internal state (e.g., a change of task setting from one set of products to a different set) needs to be updated, and retraining the entire system after the initial development is costly. Recent work has proposed augmenting the dialogue system with internet-search ability to generate more factual and up-to-date responses in open-domain dialogues (Komeili et al., 2021). Future efforts in this direction will benefit social influence dialogue systems as well.
Conclusions
We introduced the category of social influence dialogue systems that aim to influence their partners through dialogue. We presented a survey of the recent prior work in this space, compiling datasets and methods across diverse application domains. We pointed out key limitations in existing methodologies and proposed promising directions for designing more sophisticated systems in the future. Our survey reveals that although substantial progress has been made, this is still an emerging research area. We hope our work inspires more dedicated interdisciplinary effort and discussion, which is necessary for making progress in this space.
Broader Impact and Ethical Considerations
Social influence is ubiquitous in everyday life. Research on how we use influence in all aspects of our lives spans a number of fields, including social psychology, communication, consumer behavior, behavioral change, and behavioral economics. This research has led to crucial findings about the strategies of social influence and how they impact our decision-making. Over the past few decades, research has accumulated and demonstrated the effectiveness of using various strategies across contexts and domains. Prominent examples include Cialdini's core principles of social influence from social psychology: reciprocity, commitment and consistency, social proof, liking and attractiveness, authority, and scarcity (Cialdini, 2009). Further, communication strategies used in persuasion and general social influence contexts include credibility appeals, two-sided argumentation, emotional tactics, and appeals to social norms, among others (Cameron, 2009; O'keefe, 2015). First, these well-studied principles can guide the development of effective dialogue systems with influence capabilities. In fact, many of the strategies found in the datasets developed for social influence tasks (Section 3) directly map to the principles laid out by Cialdini, for instance, credibility and emotional appeals in the PersuasionForGood dataset and reciprocity observed in the CaSiNo negotiation dataset (Chawla et al., 2021b). Second, research in social influence dialogue systems provides novel datasets on human-human and human-machine communication, and therefore holds great potential to advance theories of human cognition and influence processes (Gratch et al., 2015). The datasets and subsequent analyses can further contribute new theoretical insights to social influence research.
Although dialogue systems have already been used in a number of applications involving chatbots and AI assistants, advancements in social influence dialogue systems can help to bridge the gap between our existing task definitions and a number of other real-world applications. For instance, realistic customer support interactions often involve active behaviors from both the support agent and the user, where the agent uses social cues for improved customer satisfaction and retention, while the user attempts to address their queries. These settings naturally involve aspects of social influence, unlike traditional task-oriented definitions where the dialogue system plays a passive role in assisting the human user. As discussed earlier, social influence dialogue systems can positively help to advance other areas as well. In the therapy domain, these systems can assist in various psychological treatments, such as by increasing the willingness to disclose (Lucas et al., 2014). In pedagogy, they can help to make social skills training more accessible (Johnson et al., 2019).
While we think about these applications, it is crucial to also lay out proper ethical guidelines to avoid any misuse of these systems. Primary concerns are around the use of deception (e.g. in Diplomacy and other negotiation tasks), emotional appeals (e.g. in persuasion), and behavior change (e.g. in conversational recommendations).
To mitigate possible misuse scenarios or unintended harms, we now lay out a few ethical guidelines, which also apply to dialogue research in general. First, rigorous attempts must be made to ensure that the data collection, design processes, and evaluations strictly abide by the guidelines and regulations laid out by the relevant Institutional Review Board (IRB). Second, the research team needs to develop a thorough plan to monitor and understand the behaviors of the developed systems before deployment. This includes identifying the goals of the dialogue system, potential toxic language use, and any discriminatory behaviors. Third, investment in improved data collection practices, along with explainable and controllable dialogue systems, can help identify these issues early on and allow intervention to avoid them. Fourth, we argue that transparency is key.
All stakeholders must be made aware of the goals and design objectives of the system, along with any known misbehaviors or potential risks. The users must also be informed of any data collected during the deployment phase. Lastly, we believe that continuous monitoring of dialogue systems is necessary to ensure that the system performs consistently and does not drift into unexpected conditions that may incur offensive or discriminatory actions. We hope that our work promotes a more systematic study of social influence dialogue systems, which in turn will help to tackle the ethical concerns in a more principled way.
Limitations
Literature Search: We presented a survey of efforts in social influence dialogue systems. Although every attempt was made to provide the readers with a comprehensive overview of the research in this space, our work does not claim exhaustiveness in the covered literature, and it is likely that we missed other relevant research in this space.

Intention for influence: The datasets and tasks covered in this literature review focus on scenarios where social influence is intentional by design. However, social influence can also be unintentional; that is, interactions between humans and machines can have an unintended influence on the attitudes, behaviors, or feelings of the human user (Gass, 2015). Examples include changes in topic preferences after interacting with a system on a variety of topics, or incorporating biases after interacting with a biased system. As we continue to make unprecedented progress towards AI systems that interact with humans via natural means of communication, we must also take into account the unintended influence on the users of the underlying technology. We hope that our work motivates researchers to study these effects methodically in the future.
Acknowledgments
We would like to thank our colleagues at the University of Southern California, Columbia University, and the University of California Davis, along with fellow researchers with whom we interacted at conferences, for all their comments and helpful discussions that have shaped this project. We also thank the anonymous reviewers for their valuable time and feedback. Our research was, in part, sponsored by the Army Research Office and was accomplished under Cooperative Agreement Number W911NF-20-2-0053. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Office or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
References
Rohini Ahluwalia. 2000. Examination of psychological processes underlying resistance to persuasion. Journal of Consumer Research, 27(2):217-232.
Tim Althoff, Kevin Clark, and Jure Leskovec. 2016. Large-scale analysis of counseling conversations: An application of natural language processing to mental health. Transactions of the Association for Computational Linguistics, 4:463-476.
Tim Althoff, Cristian Danescu-Niculescu-Mizil, and Dan Jurafsky. 2014. How to ask for a favor: A case study on the success of altruistic requests. In Proceedings of the International AAAI Conference on Web and Social Media, volume 8, pages 12-21.
Sara Asai, Koichiro Yoshino, Seitaro Shinagawa, Sakriani Sakti, and Satoshi Nakamura. 2020. Emotional speech corpus for persuasive dialogue system. In Proceedings of The 12th Language Resources and Evaluation Conference, pages 491-497.
Nicholas Asher, Julie Hunter, Mathieu Morey, Benamara Farah, and Stergos Afantenos. 2016. Discourse structure and dialogue acts in multiparty dialogue: the STAC corpus. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 2721-2727.
Reyhan Aydogan, Tim Baarslag, Katsuhide Fujita, Johnathan Mell, Jonathan Gratch, Dave de Jonge, Yasser Mohammad, Shinji Nakadai, Satoshi Morinaga, Hirotaka Osawa, et al. 2020. Challenges and main results of the automated negotiating agents competition (ANAC) 2019. In Multi-agent systems and agreement technologies, pages 366-381. Springer.
Tim Baarslag, Mark Hendrikx, Koen Hindriks, and Catholijn Jonker. 2013. Predicting the performance of opponent models in automated negotiation. In 2013 IEEE/WIC/ACM International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies (IAT), volume 2, pages 59-66. IEEE.
Linda Babcock and Sara Laschever. 2009. Women don't ask: Negotiation and the gender divide. Princeton University Press.
Sandy Bogaert, Christophe Boone, and Carolyn Declerck. 2008. Social value orientation and cooperation in social dilemmas: A review and conceptual model. British Journal of Social Psychology, 47(3):453-480.
Maria Boritchev and Maxime Amblard. 2021. DinG - a corpus of transcriptions of real-life, oral, spontaneous multi-party dialogues between French-speaking players of Catan. In Journées du GdR LIFT.
Penelope Brown and Stephen C Levinson. 1978. Universals in language usage: Politeness phenomena. In Questions and politeness: Strategies in social interaction, pages 56-311. Cambridge University Press.
Kenzie A Cameron. 2009. A practitioner's guide to persuasion: An overview of 15 selected persuasion theories, models and frameworks. Patient Education and Counseling, 74(3):309-317.
Shelly Chaiken. 1987. The heuristic model of persuasion. In Social influence: the Ontario symposium, volume 5, pages 3-39.
Kushal Chawla, Rene Clever, Jaysa Ramirez, Gale Lucas, and Jonathan Gratch. 2021a. Towards emotion-aware agents for negotiation dialogues. In 2021 9th International Conference on Affective Computing and Intelligent Interaction (ACII), pages 1-8. IEEE.
Kushal Chawla, Gale Lucas, Jonathan May, and Jonathan Gratch. 2020. Exploring early prediction of buyer-seller negotiation outcomes. arXiv preprint arXiv:2004.02363.
Kushal Chawla, Gale M Lucas, Jonathan May, and Jonathan Gratch. 2022. Opponent modeling in negotiation dialogues by related data adaptation. arXiv preprint arXiv:2205.00344.
Kushal Chawla, Jaysa Ramirez, Rene Clever, Gale Lucas, Jonathan May, and Jonathan Gratch. 2021b. CaSiNo: A corpus of campsite negotiation dialogues for automatic negotiation systems. arXiv preprint arXiv:2103.15721.
R. B. Cialdini. 2009. Influence: Science and Practice, fifth ed. Pearson/Allyn & Bacon, Boston, MA.
Robert B Cialdini and Noah J Goldstein. 2004. Social influence: Compliance and conformity. Annual Review of Psychology, 55(1):591-621.
Cristian Danescu-Niculescu-Mizil, Lillian Lee, Bo Pang, and Jon Kleinberg. 2012. Echoes of power: Language effects and power differences in social interaction. In Proceedings of the 21st International Conference on World Wide Web, pages 699-708.
Sonya Dal Cin, Mark P Zanna, and Geoffrey T Fong. 2004. Narrative persuasion and overcoming resistance. Resistance and Persuasion, 2:175-191.
Cristian Danescu-Niculescu-Mizil and Lillian Lee. 2011. Chameleons in imagined conversations: A new approach to understanding coordination of linguistic style in dialogs. In Proceedings of the 2nd Workshop on Cognitive Modeling and Computational Linguistics, pages 76-87.
Orianna Demasi, Marti A Hearst, and Benjamin Recht. 2019. Towards augmenting crisis counselor training by improving message retrieval. In Proceedings of the Sixth Workshop on Computational Linguistics and Clinical Psychology, pages 1-11.
Jan Deriu, Alvaro Rodrigo, Arantxa Otegi, Guillermo Echegoyen, Sophie Rosset, Eneko Agirre, and Mark Cieliebak. 2021. Survey on evaluation methods for dialogue systems. Artificial Intelligence Review, 54(1):755-810.
David DeVault, Johnathan Mell, and Jonathan Gratch. 2015. Toward natural turn-taking in a virtual human negotiation agent. In AAAI Spring Symposia. Citeseer.
Jesse Dodge, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander H Miller, Arthur Szlam, and Jason Weston. 2016. Evaluating prerequisite qualities for learning end-to-end dialog systems. In 4th International Conference on Learning Representations, ICLR 2016.
Sebastian Duerr and Peter A Gloor. 2021. Persuasive natural language generation - a literature review. arXiv preprint arXiv:2101.05786.
Esin Durmus and Claire Cardie. 2019. A corpus for modeling user and language effects in argumentation on online debating. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.
Ritam Dutt, Rishabh Joshi, and Carolyn Penstein Rosé. 2020. Keeping up appearances: Computational modeling of face acts in persuasion oriented discussions. arXiv preprint arXiv:2009.10815.
Subhabrata Dutta, Jeevesh Juneja, Dipankar Das, and Tanmoy Chakraborty. 2022. Can unsupervised knowledge transfer from social discussions help argument mining? arXiv preprint arXiv:2203.12881.
Ryo Egawa, Gaku Morio, and Katsuhide Fujita. 2019. Annotating and analyzing semantic role of elementary units and relations in online persuasive arguments. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 422-428.
Chaim Fershtman. 1990. The importance of the agenda in bargaining. Games and Economic Behavior, 2(3):224-238.
Tommaso Fornaciari and Massimo Poesio. 2012. DeCour: a corpus of deceptive statements in Italian courts. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 1585-1590.
Jamie Fraser, Ioannis Papaioannou, and Oliver Lemon. 2018. Spoken conversational AI in video games: Emotional dialogue management increases user engagement. In Proceedings of the 18th International Conference on Intelligent Virtual Agents, pages 179-184.
Jianfeng Gao, Michel Galley, and Lihong Li. 2018. Neural approaches to conversational AI. In The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, pages 1371-1374.
Robert H Gass. 2015. Social influence, sociology of. In International Encyclopedia of the Social and Behavioral Sciences, pages 348-354. Elsevier.
Howard Giles. 2016. Communication accommodation theory: Negotiating personal relationships and social identities across contexts. Cambridge University Press.
Jonathan Gratch, David DeVault, Gale M Lucas, and Stacy Marsella. 2015. Negotiation as a challenge problem for virtual humans. In International Conference on Intelligent Virtual Agents, pages 201-215. Springer.
Rafik Hadfi, Jawad Haqbeen, Sofia Sahab, and Takayuki Ito. 2021. Argumentative conversational agents for online discussions. Journal of Systems Science and Systems Engineering, 30(4):450-464.
He He, Derek Chen, Anusha Balakrishnan, and Percy Liang. 2018. Decoupling strategy and generation in negotiation dialogues. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2333-2343.
Ehsan Hosseini-Asl, Bryan McCann, Chien-Sheng Wu, Semih Yavuz, and Richard Socher. 2020. A simple language model for task-oriented dialogue. Advances in Neural Information Processing Systems, 33:20179-20191.
Minlie Huang, Xiaoyan Zhu, and Jianfeng Gao. 2020. Challenges in building intelligent open-domain dialog systems. ACM Transactions on Information Systems (TOIS), 38(3):1-32.
Alankar Jain, Florian Pecune, Yoichi Matsuyama, and Justine Cassell. 2018. A user simulator architecture for socially-aware conversational agents. In Proceedings of the 18th International Conference on Intelligent Virtual Agents, pages 133-140.
Jiun-Hao Jhan, Chao-Peng Liu, Shyh-Kang Jeng, and Hung-Yi Lee. 2021. CheerBots: Chatbots toward empathy and emotion using reinforcement learning. arXiv preprint arXiv:2110.03949.
Changzhen Ji, Xin Zhou, Yating Zhang, Xiaozhong Liu, Changlong Sun, Conghui Zhu, and Tiejun Zhao. 2020. Cross copy network for dialogue generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1900-1910.
Emmanuel Johnson, Gale Lucas, Peter Kim, and Jonathan Gratch. 2019. Intelligent tutoring system for negotiation skills training. In International Conference on Artificial Intelligence in Education, pages 122-127. Springer.
Rishabh Joshi, Vidhisha Balachandran, Shikhar Vashishth, Alan Black, and Yulia Tsvetkov. 2020. DialoGraph: Incorporating interpretable strategy-graph networks into negotiation dialogues. In International Conference on Learning Representations.
Sung Soo Kim, Stan Kaplowitz, and Mark V Johnston. 2004. The effects of physician empathy on patient satisfaction and compliance. Evaluation & the Health Professions, 27(3):237-251.
Eric S Knowles and Jay A Linn. 2004. Resistance and persuasion. Psychology Press.
Mojtaba Komeili, Kurt Shuster, and Jason Weston. 2021. Internet-augmented dialogue generation. arXiv preprint arXiv:2107.07566.
Yi-Chieh Lee, Naomi Yamashita, and Yun Huang. 2020. Designing a chatbot as a mediator for promoting deep self-disclosure to a real mental health professional. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW1):1-27.
Roy J Lewicki, Bruce Barry, and David M Saunders. 2016. Essentials of negotiation. McGraw-Hill.
Mike Lewis, Denis Yarats, Yann Dauphin, Devi Parikh, and Dhruv Batra. 2017. Deal or no deal? End-to-end learning of negotiation dialogues. In EMNLP.
Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33:9459-9474.
Raymond Li, Samira Ebrahimi Kahou, Hannes Schulz, Vincent Michalski, Laurent Charlin, and Chris Pal. 2018. Towards deep conversational recommendations. Advances in Neural Information Processing Systems, 31.
Xinmeng Li, Wansen Wu, Long Qin, and Quanjun Yin. 2021. How to evaluate your dialogue models: A review of approaches. arXiv preprint arXiv:2108.01369.
Xiujun Li, Zachary C Lipton, Bhuwan Dhingra, Lihong Li, Jianfeng Gao, and Yun-Nung Chen. 2016. A user simulator for task-completion dialogues. arXiv preprint arXiv:1612.05688.
Yu Li, Kun Qian, Weiyan Shi, and Zhou Yu. 2020. End-to-end trainable non-collaborative dialog system. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8293-8302.
Kai-Hui Liang, Patrick L Lange, Yoo Jung Oh, Jingwen Zhang, Yoshimi Fukuoka, and Zhou Yu. 2021. Evaluation of in-person counseling strategies to develop physical activity chatbot for women. In Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 32-44.
Wang Liao, Jingwen Zhang, Yoo Jung Oh, and Nicholas A Palomares. 2021. Linguistic accommodation enhances compliance to charity donation: The role of interpersonal communication processes in mediated compliance-gaining conversations. Journal of Computer-Mediated Communication, 26(3):167-185.
Houjun Liu. 2021. Towards automated psychotherapy via language modeling. arXiv preprint arXiv:2104.10661.
Gale M Lucas, Jonathan Gratch, Aisha King, and Louis-Philippe Morency. 2014. It's only a computer: Virtual humans increase willingness to disclose. Computers in Human Behavior, 37:94-100.
Huyen Nguyen, Ralph Vente, David Lupea, Sarah Ita Levitan, and Julia Hirschberg. 2021. Acoustic-prosodic, lexical and demographic cues to persuasiveness in competitive debate speeches. Proc. Interspeech 2021, pages 1034-1038.
Jinjie Ni, Tom Young, Vlad Pandelea, Fuzhao Xue, Vinay Adiga, and Erik Cambria. 2021. Recent advances in deep learning based dialogue systems: A systematic survey. arXiv preprint arXiv:2105.04387.
Tim Norfolk, Kamal Birdi, and Deirdre Walsh. 2007. The role of empathy in establishing rapport in the consultation: a new model. Medical Education, 41(7):690-697.
Daniel J O'keefe. 2015. Persuasion: Theory and research. Sage Publications.
Siru Ouyang, Zhuosheng Zhang, and Hai Zhao. 2021. Dialogue graph modeling for conversational machine reading. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3158-3169.
Denis Peskov, Benny Cheng, Ahmed Elgohary, Joe Barrow, Cristian Danescu-Niculescu-Mizil, and Jordan Boyd-Graber. 2020. It takes two to lie: One to lie, and one to listen. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3811-3854.
Richard E Petty and John T Cacioppo. 1977. Forewarning, cognitive responding, and resistance to persuasion. Journal of Personality and Social Psychology, 35(9):645.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners.
Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards empathetic open-domain conversation models: A new benchmark and dataset. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5370-5381.
Lisa Rashotte. 2007. Social influence. The Blackwell Encyclopedia of Sociology.
Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M Smith, et al. 2020. Recipes for building an open-domain chatbot. arXiv preprint arXiv:2004.13637.
Derek D Rucker, Zakary L Tormala, and Richard E Petty. 2004. Individual differences in resistance to persuasion: The role of beliefs and meta-beliefs. Resistance and Persuasion, page 83.
Daniel Schulman and Timothy Bickmore. 2009. Persuading users through counseling dialogue with a conversational agent. In Proceedings of the 4th International Conference on Persuasive Technology, pages 1-8.
Weiyan Shi, Kun Qian, Xuewei Wang, and Zhou Yu. 2019. How to build user simulators to train RL-based dialog systems. arXiv preprint arXiv:1909.01388.
Zhouxing Shi and Minlie Huang. 2019. A deep sequential model for discourse parsing on multi-party dialogues. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7007-7014.
Manjira Sinha and Tirthankar Dasgupta. 2021. Predicting success of a persuasion through joint modeling of utterance categorization. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pages 3423-3427.
Alice F Stuhlmacher and Amy E Walters. 1999. Gender differences in negotiation outcome: A meta-analysis. Personnel Psychology, 52(3):653-677.
Chenhao Tan, Vlad Niculae, Cristian Danescu-Niculescu-Mizil, and Lillian Lee. 2016. Winning arguments: Interaction dynamics and persuasion strategies in good-faith online discussions. In Proceedings of the 25th International Conference on World Wide Web, pages 613-624.
Michael Tanana, Kevin A Hallgren, Zac E Imel, David C Atkins, and Vivek Srikumar. 2016. A comparison of natural language processing methods for automated coding of motivational interviewing. Journal of Substance Abuse Treatment, 65:43-50.
A Literature Compilation
In this section, we provide details about how the literature was curated for our survey. We hope this helps the overall reproducibility and also guides similar studies in the future. We followed a simple two-stage process. First, we compiled the relevant datasets that capture various forms of social influence across diverse domains (presented in Section 3) and then, we compiled the techniques developed on these datasets (presented in Section 4).
Step I - Datasets: Our objective was to gather datasets that (by design) capture forms of social influence. We primarily focused on dialogue interactions but also include datasets based on transcripts from multimodal interactions. Given the large breadth of research in this space across a number of domains, our collection is not exhaustive but is rather restricted to the following sources. We surveyed the past 6 years of *ACL conference proceedings. We then covered several online repositories of dialogue data to capture datasets published at other venues, including ParlAI (https://github.com/facebookresearch/ParlAI), Huggingface (https://huggingface.co./docs/datasets/index), NLP-Progress (http://nlpprogress.com/english/dialogue.html), and Convokit (https://convokit.cornell.edu/documentation/datasets.html). Further, we revisited several recent surveys in dialogue systems and Natural Language Generation (NLG) research (Zhang et al., 2020c; Ni et al., 2021; Duerr and Gloor, 2021). Datasets that were categorized as task-oriented or open-domain in these surveys but also contain some aspects of social influence have been included in our discussion. As discussed in Section 4, we also include datasets that have not been directly used for designing dialogue systems but rather for various Natural Language Understanding (NLU) subtasks that can be crucial for the eventual development of dialogue systems in this space. Finally, we also reviewed the citation graphs of the collected papers from Google Scholar. Overall, we ended up with 22 dataset papers, spanning 12 publication venues, 4 languages, and 7 application domains.
Step II - Methods: Compiling the methodological progress was based on the models developed on the curated datasets. For this purpose, we simply reviewed the citations of all the dataset papers using Google Scholar.

Figure 2: A theoretical model for the development of dialogue systems for social influence tasks. Curved arrows represent forward relations and the straight arrow represents the feedback. I. Task Specifications: key properties that define the task in consideration and are captured by the collected dataset; II. Chatbot Characteristics and User Backgrounds: attributes for the agent design and target audience; III. Chatbot Capacity: the desirable capabilities of the system; IV. Chatbot Design & Techniques: the modeling techniques to develop the dialogue system; and V. Evaluation Mechanisms: metrics to evaluate system performance.
Figure 1: A conceptual overview.
Table 1: Categorization of social influence dialogue corpora. This list is non-exhaustive, and also covers datasets that have enabled research into various sub-tasks and analyses that can eventually be useful for dialogue systems in the respective domains. MIBT: Multi-Issue Bargaining Task. Key statistics and associated metadata are in Appendix B.
Table 2: Categorization of methods (non-exhaustive) for social influence dialogue. We only cover papers that explicitly design a dialogue system. NLG: Natural Language Generation, PLM: Pretrained Language Model, MIBT: Multi-Issue Bargaining Task, E-Com: E-Commerce, DA: Dialogue Act, Enc: Encoder, Dec: Decoder, SL: Supervised Learning, RL: Reinforcement Learning. Methods that use RL usually apply it in conjunction with SL.
B Datasets

A comprehensive list of the available datasets for investigating social influence in dialogues is provided in Table 3. For each dataset, we mention the application domain, source, and key statistics, as well as the available metadata and annotations apart from the conversation logs. Note that not all datasets listed have been directly used for designing end-to-end dialogue systems; instead, they have enabled research into various sub-tasks and analyses that can eventually be useful for dialogue systems in this area. Please refer to Section 3 in the main paper for a detailed discussion about these datasets and to Section 4 for information about various methods developed using them.

C Five Stages for Designing Social Influence Dialogue Systems

We develop a five-stage framework to summarize our recommendations for future work. These stages cover key decisions in the design of a dialogue system in this space, encouraging a holistic understanding of the system characteristics, target audience, underlying modeling techniques, and evaluation mechanisms. These steps are inspired by a behavior change model in healthcare research (Zhang et al., 2020a). We adapt this model to make it suitable for general social influence tasks in NLP. We present these steps in Figure 2.
Targetguided open-domain conversation. Jianheng Tang, Tiancheng Zhao, Chenyan Xiong, Xiaodan Liang, Eric Xing, Zhiting Hu, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsJianheng Tang, Tiancheng Zhao, Chenyan Xiong, Xiao- dan Liang, Eric Xing, and Zhiting Hu. 2019. Target- guided open-domain conversation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5624-5634.
Get out the vote: determining support or opposition from congressional floor-debate transcripts. Matt Thomas, Bo Pang, Lillian Lee, Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing. the 2006 Conference on Empirical Methods in Natural Language ProcessingMatt Thomas, Bo Pang, and Lillian Lee. 2006. Get out the vote: determining support or opposition from congressional floor-debate transcripts. In Proceed- ings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 327-335.
Persuasion for good: Towards a personalized persuasive dialogue system for social good. Xuewei Wang, Weiyan Shi, Richard Kim, Yoojung Oh, Sijia Yang, Jingwen Zhang, Zhou Yu, Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsXuewei Wang, Weiyan Shi, Richard Kim, Yoojung Oh, Sijia Yang, Jingwen Zhang, and Zhou Yu. 2019. Per- suasion for good: Towards a personalized persuasive dialogue system for social good. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5635-5649.
Simple statistical gradientfollowing algorithms for connectionist reinforcement learning. J Ronald, Williams, Machine learning. 83Ronald J Williams. 1992. Simple statistical gradient- following algorithms for connectionist reinforcement learning. Machine learning, 8(3):229-256.
Alternating recurrent dialog model with large-scale pre-trained language models. Qingyang Wu, Yichi Zhang, Yu Li, Zhou Yu, Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main VolumeQingyang Wu, Yichi Zhang, Yu Li, and Zhou Yu. 2021. Alternating recurrent dialog model with large-scale pre-trained language models. In Proceedings of the 16th Conference of the European Chapter of the Asso- ciation for Computational Linguistics: Main Volume, pages 1292-1301.
How to negotiate in the middle east. Military review. William Wunderle, 8733William Wunderle. 2007. How to negotiate in the mid- dle east. Military review, 87(2):33.
Dialogue act-based breakdown detection in negotiation dialogues. Atsuki Yamaguchi, Kosui Iwasa, Katsuhide Fujita, Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume. the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main VolumeAtsuki Yamaguchi, Kosui Iwasa, and Katsuhide Fujita. 2021. Dialogue act-based breakdown detection in negotiation dialogues. In Proceedings of the 16th Conference of the European Chapter of the Associ- ation for Computational Linguistics: Main Volume, pages 745-757.
Improving dialog systems for negotiation with personality modeling. Runzhe Yang, Jingxiao Chen, Karthik Narasimhan, arXiv:2010.09954arXiv preprintRunzhe Yang, Jingxiao Chen, and Karthik Narasimhan. 2020. Improving dialog systems for negotia- tion with personality modeling. arXiv preprint arXiv:2010.09954.
Improving dialog systems for negotiation with personality modeling. Runzhe Yang, Jingxiao Chen, Karthik Narasimhan, Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language ProcessingLong Papers1Runzhe Yang, Jingxiao Chen, and Karthik Narasimhan. 2021. Improving dialog systems for negotiation with personality modeling. In Proceedings of the 59th An- nual Meeting of the Association for Computational Linguistics and the 11th International Joint Confer- ence on Natural Language Processing (Volume 1: Long Papers), pages 681-693.
Hierarchical text generation and planning for strategic dialogue. Denis Yarats, Mike Lewis, PMLRInternational Conference on Machine Learning. Denis Yarats and Mike Lewis. 2018. Hierarchical text generation and planning for strategic dialogue. In In- ternational Conference on Machine Learning, pages 5591-5599. PMLR.
Lying through one's teeth: A study on verbal leakage cues. Min-Hsuan Yeh, Lun-Wei Ku, Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. the 2021 Conference on Empirical Methods in Natural Language ProcessingMin-Hsuan Yeh and Lun-Wei Ku. 2021. Lying through one's teeth: A study on verbal leakage cues. In Pro- ceedings of the 2021 Conference on Empirical Meth- ods in Natural Language Processing, pages 4504- 4510.
Detecting deceptive groups using conversations and network analysis. Dian Yu, Yulia Tyshchuk, Ji Heng, William Wallace, 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing. ACLDian Yu, Yulia Tyshchuk, Heng Ji, and William Wallace. 2015. Detecting deceptive groups using conversa- tions and network analysis. In 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Nat- ural Language Processing, ACL-IJCNLP 2015, pages 857-866. Association for Computational Linguistics (ACL).
Artificial intelligence chatbot behavior change model for designing artificial intelligence chatbots to promote physical activity and a healthy diet. Jingwen Zhang, Jung Yoo, Patrick Oh, Zhou Lange, Yoshimi Yu, Fukuoka, Journal of medical Internet research. 22922845Jingwen Zhang, Yoo Jung Oh, Patrick Lange, Zhou Yu, Yoshimi Fukuoka, et al. 2020a. Artificial intelli- gence chatbot behavior change model for designing artificial intelligence chatbots to promote physical ac- tivity and a healthy diet. Journal of medical Internet research, 22(9):e22845.
Personalizing dialogue agents: I have a dog, do you have pets too?. Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, Jason Weston, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsLong Papers1Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Per- sonalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204-2213.
Learning goal-oriented dialogue policy with opposite agent awareness. Zheng Zhang, Lizi Liao, Xiaoyan Zhu, Tat-Seng Chua, Zitao Liu, Yan Huang, Minlie Huang, Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing. the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language ProcessingZheng Zhang, Lizi Liao, Xiaoyan Zhu, Tat-Seng Chua, Zitao Liu, Yan Huang, and Minlie Huang. 2020b. Learning goal-oriented dialogue policy with opposite agent awareness. In Proceedings of the 1st Confer- ence of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th Interna- tional Joint Conference on Natural Language Pro- cessing, pages 122-132.
Zheng Zhang, Ryuichi Takanobu, Qi Zhu, MinLie Huang, and XiaoYan Zhu. 2020c. Recent advances and challenges in task-oriented dialog systems. Science China Technological Sciences. Zheng Zhang, Ryuichi Takanobu, Qi Zhu, MinLie Huang, and XiaoYan Zhu. 2020c. Recent advances and challenges in task-oriented dialog systems. Sci- ence China Technological Sciences, pages 1-17.
Augmenting non-collaborative dialog systems with explicit semantic and strategic dialog history. Yiheng Zhou, Yulia Tsvetkov, Alan W Black, Zhou Yu, International Conference on Learning Representations. Yiheng Zhou, Yulia Tsvetkov, Alan W Black, and Zhou Yu. 2019. Augmenting non-collaborative dialog sys- tems with explicit semantic and strategic dialog his- tory. In International Conference on Learning Rep- resentations.
Neuro-Symbolic Execution: The Feasibility of an Inductive Approach to Symbolic Execution
Shiqi Shen, Soundarya Ramesh, Shweta Shinde, Abhik Roychoudhury, Prateek Saxena
National University of Singapore
ABSTRACT

Symbolic execution is a powerful technique for program analysis. However, it has a number of limitations in practical applicability: the path explosion problem encumbers scalability, implementations are language-specific, complex dependencies are hard to handle, and the theories supported by the underlying satisfiability checkers have limited expressiveness. Often, relationships between variables of interest are not expressible directly as purely symbolic constraints. To this end, we present a new approach, neuro-symbolic execution, which learns an approximation of the relationship as a neural net. It features a constraint solver that can solve mixed constraints, involving both symbolic expressions and a neural network representation. To do so, we envision such constraint solving as a procedure combining SMT solving and gradient-based optimization. We demonstrate the utility of neuro-symbolic execution in constructing exploits for buffer overflows. We report success on 13 of 14 programs which have difficult constraints, known to require specialized extensions to symbolic execution. In addition, our technique solves 100% of the given neuro-symbolic constraints in 73 programs from standard verification and invariant synthesis benchmarks.
INTRODUCTION
Symbolic execution is a code analysis technique which reasons about sets of input values that drive the program to a specified state [52]. Certain inputs are marked as symbolic and the analysis gathers symbolic constraints on these values, by analyzing the operations along a path of a program. Satisfying solutions to these constraints are concrete values that cause the program to execute the analyzed path leading to a particular state of interest. Manipulating these constraints allows one to reason about the reachability of different paths and states, thereby serving to guide search in the execution space efficiently. Symbolic execution, especially its mixed-dynamic variant, has been widely used in computer security. Its prime application over the last decade has been in white-box fuzzing, with the goal of discovering software vulnerabilities [42,43,82]. More broadly, it has been used for patching [30,74], invariant discovery [46], and verification to prove the absence of vulnerabilities [24,48]. Off-the-shelf symbolic execution tools targeting languages such as C/C++ [86], JavaScript [1,58], Python [2,20], and executable binary code [23] are available.
Symbolic analysis is a powerful technique; however, it has a number of limitations in practical applicability. First, symbolic analysis is classically designed as a deductive procedure, requiring complete modeling of the target language (e.g., C vs. x64). A set of logical rules specific to the target language describes how to construct symbolic constraints for operations in that language [49,83].
As new languages emerge, such symbolic analysis needs to be re-implemented for each language. More importantly, if a certain functionality of a program is unavailable for analysis, either because it is implemented in a language different from the target language or because it is accessible as a closed, proprietary service, then such functionality cannot be analyzed.
Second, the reasoning about symbolic constraints is limited to the expressiveness of theories supported by underlying satisfiability checkers (e.g., SAT / SMT solvers) [10]. Symbolic analysis typically uses quantifier-free and decidable theories in first-order logic, and satisfiability solvers have well-known limits [4]. For instance, nonlinear arithmetic over reals is not well supported in existing solvers, and string support is relatively new and still an area of active research [37,95]. When program functionality does not fall within the supported theories, analysis either precludes such functionality altogether, or encodes it abstractly using supported theories (e.g., arrays, bit-vectors, or uninterpreted functions).
Third, symbolic analyses often enumeratively analyze multiple paths in a program. Complex control flows and looping structures are well known to be missed by state-of-the-art implementations, which have attracted best-effort extensions to the basic technique that do not offer generality [84]. In particular, dynamic symbolic execution is known to suffer from scalability issues in long-running loops containing a large number of acyclic paths across iterations, owing to loop unrolling and path explosion [19,91].

Neuro-Symbolic Execution

In this paper, we aim to improve the expressiveness of symbolic execution to reason about parts of the code that are not expressible in the theories supported by the symbolic language (including its SMT theories), are too complex, or are simply unavailable in analyzable form. We present a technique called neuro-symbolic execution, which accumulates two types of constraints: standard symbolic constraints (derived deductively) and neural constraints (learned inductively). Neural constraints capture relations between program variables of code that are not expressible directly as purely symbolic constraints. The representation of these constraints is chosen to be a neural network (or neural net) in this work. Constraints including both symbolic and neural constraints are called neuro-symbolic.

Our procedure infers and manipulates neural constraints using only two generic interfaces, namely learn and check satisfaction. The first interface learns a neural network given concrete values of variables and an objective function to optimize. The second interface checks for satisfiability: given an output value for a neural network, finding whether an input evaluates to it. Both of these can be instantiated by many different procedures; we present a specific set of algorithms in this work for concreteness. We believe the general framework can be extended to other machine learning models which can implement such interfaces.
Our choice of representation via neural networks is motivated by two observations. First, neural nets can approximate or represent a large category of functions, as implied by the universal approximation theorem [36,47]; and in practice, an explosion of empirical results is showing that they are learnable for many practical functions [7,44]. Although specialized training algorithms are continuously on the rise [53,76], we expect that neural networks will prove effective in learning approximations to several useful functions we encounter in practice. Second, neural nets are a differentiable representation, often trained using optimization methods such as gradient descent [79]. This differentiability allows for efficient analytical techniques to check for satisfiability of neural constraints and produce satisfying assignments of values to variables [45,73], analogous to the role of SMT solvers for purely symbolic constraints. One of the core technical contributions of this work is a procedure to solve neuro-symbolic constraints: checking satisfiability and finding assignments for variables involved in neural and symbolic constraints simultaneously, with good empirical accuracy on the benchmarks tested.
Inductive synthesis of symbolic constraints usable in symbolic analyses has been attempted in prior work [35,70,71]. One notable difference is that our neural constraints are a form of unstructured learning, i.e., they approximate a large class of functions and do not aim to print out constraints in a symbolic form amenable to SMT reasoning. Prior constraint synthesis works pre-determine a fixed template or structure of symbolic constraints, for instance, octagonal inequalities [71], low-degree polynomial equalities over integers [35], and so on. Each such template-based learning comes with a specialized learning procedure and either resorts to standard SMT solvers for solving constraints or has hand-crafted procedures specialized to each template type. As a result, these techniques have found limited applicability in widely used symbolic execution analyses. As a side note, when the code being approximated does not fall within the chosen template structure, prior works resort to brute-force enumeration of templates to fit the samples.
Applications & Results
Neuro-symbolic execution has the ability to reason about purely symbolic constraints, purely neural constraints, and mixed neuro-symbolic constraints. This approach has a number of possible future applications, including but not limited to: (a) analyzing protocol implementations without analyzable code [29]; (b) analyzing code with complex dependency structures [92]; and (c) analyzing systems that embed neural networks directly as sub-components [13].
To anchor our proposal, we focus on the core technique of neuro-symbolic execution through the lens of one application: finding exploits for buffer overflows. In this setting, we show that neuro-symbolic execution can be used to synthesize neural constraints from parts of a program to which the analysis only has black-box executable access. The program logic can have complex dependencies and control structure, and the technique does not need to know the operational semantics of the target language. We show that for many real programs, our procedure can learn moderately accurate models, incorporate them with symbolic memory-safety conditions, and solve them to uncover concrete exploits.
Tool. We build a prototype tool (called NeuEx) to perform neuro-symbolic execution of C programs, where the analyst specifies which parts of the code to treat as a black box, together with a (symbolic) memory unsafety condition that captures an exploit. NeuEx uses standard training algorithms to learn a neural net which approximates the black-box functionality and conjoins it with the other symbolic constraints. Next, NeuEx employs a new procedure to solve the symbolic and neural constraints simultaneously, yielding satisfying assignments with high probability. The tool is constructive, in that it produces concrete values for free variables in the constraints, which can be tested as candidate exploits.
Results. Our main empirical results are two-fold. First, we select a benchmark which has difficult constraints, known to require specialized extensions to symbolic execution. We show that NeuEx finds exploits for 13 of the 14 programs in the benchmark. Our results are comparable to binary-level symbolic execution tools [84], with little knowledge of the semantics of the target code and the specific language. The second empirical experiment analyzes two benchmarks used in prior works on invariant synthesis for verification and program synthesis [70,71]. They comprise 73 programs with 82 loops and 259 input variables in total. Given the neuro-symbolic constraints, NeuEx successfully solves 100% of the neuro-symbolic constraints for these benchmarks.
Contributions. We make the following contributions:
• Neuro-Symbolic Constraints. NeuEx represents the relationship between variables of code as a neural net, without knowledge of the code's semantics or language, and then conjoins it with symbolic constraints.
• Neuro-Symbolic Constraint Solving. NeuEx treats constraint solving as a search problem, encoding symbolic constraints as an objective function for optimization alongside the neural net to check their satisfiability.
• Evaluation. NeuEx successfully constructs exploits for 13 out of 14 vulnerable programs, which is comparable to binary-level symbolic execution [84]. In addition, NeuEx solves 100% of the given neuro-symbolic constraints over 73 programs comprising 259 input variables in total.
OVERVIEW
Symbolic execution provides a tool that is useful in a variety of security-related applications. In this work, we focus on the challenges within symbolic execution and present a solution that is general across various kinds of programs.

Motivation and Challenges

We outline a set of challenges posed to symbolic execution with the help of a real-world example from an HTTP server.

Motivating Example. Consider the simplified example of parsing an HTTP request shown in Figure 1. The code extracts the fields (e.g., uri and version) from the request and constructs a new message for further processing. Our goal is to check whether there exists any buffer overflow in this program and, if so, to find the exploit that triggers the overflow. As shown in Figure 1, on Lines 4-5, the function process_request takes one input input and checks whether input starts with 'GET '. On Lines 6-14, it finds the URI and version from input by searching for the delimiters ' ' and '\n' separately. Then, the function checks whether the program supports the request on Lines 15-16 based on the version. Finally, it concatenates the version and URI with the delimiter ',' into a buffer msgbuf on Lines 17-22. There exists a buffer overflow on Lines 21-22, as the pointer ptr may exceed the boundary of msgbuf.
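Figure 1 itself is not reproduced in this text; the following minimal C sketch is reconstructed from the description above (the exact version check and parsing details are assumptions, it omits error handling, and its line numbers do not match the paper's figure):

    #include <string.h>

    void process_request(char *input) {           /* simplified, no error handling */
        char msgbuf[100];
        char *uri, *version, *ptr;
        int uri_len = 0, ver_len = 0, i;

        if (strncmp(input, "GET ", 4) != 0)       /* Lines 4-5: must start with "GET " */
            return;
        uri = input + 4;
        while (uri[uri_len] != ' ')               /* Lines 6-14: URI ends at ' ',      */
            uri_len++;                            /* version ends at '\n'              */
        version = uri + uri_len + 1;
        while (version[ver_len] != '\n')
            ver_len++;
        if (strncmp(version, "HTTP/1.", 7) != 0)  /* Lines 15-16: version check (assumed form) */
            return;
        ptr = msgbuf;                             /* Lines 17-22: build URI "," version "\0";  */
        for (i = 0; i < uri_len; i++)             /* ptr overruns msgbuf[99] whenever          */
            *ptr++ = uri[i];                      /* uri_len + ver_len > 98                    */
        *ptr++ = ',';
        for (i = 0; i < ver_len; i++)
            *ptr++ = version[i];
        *ptr = '\0';
    }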
Challenge 1: Complex Dependency. To discover this buffer overflow via purely symbolic analysis, the technique has to reason about a complex dependency structure between the input and the variables of interest. Assume that the analyst has some knowledge of the input format, namely that the input has two fields, URI and version, separated by ' ' and '\n', and knows the allocated size of msgbuf (which is 100). By analyzing the program, the analyst knows that the vulnerable condition for msgbuf is ptr > 99, which leads to the buffer overflow. Note that the path executed to reach the vulnerability point on Lines 21-22 involves updates to a number of variables (on Lines 8 and 13) which do not have a direct dependency chain (rather a sequence of control dependencies) on the target variable ptr. Specifically, uri_len and ver_len are dependent on input, which in turn controls ptr and the iterations of the vulnerable loop. Further, the relationship between uri_len, ver_len, and ptr involves reasoning over the conditional statements on Lines 4 and 15, which may lead to the termination of the function. Therefore, without specialized heuristics (e.g., loop-extension [84]), state-of-the-art solvers resort to enumeration [17]. For example, KLEE enumerates the characters of input around ' ' and '\n' until the input passes the check on Line 15 and ver_len+uri_len > 98.
The unavailability of source code is another challenge for capturing complex dependencies between variables, especially when functions are implemented as remote calls or library calls written in a different language. For example, symbolic execution may abort on calls to native Java methods and unmanaged code in .NET, as the symbolic values flow outside the boundary of the target code [6]. To handle this challenge, symbolic execution has to hard-code models for these unknown function calls, which requires considerable manual expertise. Even though symbolic execution tools often provide hand-crafted models for analyzing system calls, these do not precisely capture all behaviors (e.g., the failure of system calls) [17]. Thus, the constraints generated by purely symbolic execution cannot capture the real behavior of such functions, which leads to failures in vulnerability detection.
Challenge 2: Lack of Expressiveness. Additional challenges can arise in such analysis due to the complexity of the constraints and the lack of a back-end theory for solving them. As shown in Figure 1, the function is equivalent to a replacement based on regular expressions: it replaces a request of the form "GET " URI " " Version "\n"* with a message of the form URI "," Version "\0" on Lines 4-22. The complex relationship between the input and the target buffer makes it infeasible for symbolic execution to capture it. Moreover, even if the regular expression is successfully extracted, the symbolic engine may not be able to solve it, as the embedded SAT/SMT solver cannot express certain theories (e.g., string replacement and non-linear arithmetic). Although prior works have targeted these theories, current support for non-linear real and integer arithmetic is still in its infancy [4].
Our Approach
To address the above challenges, we propose a new approach with two main insights: (1) leveraging the high representation capability of neural nets to learn constraints when symbolic execution is unable to capture them; (2) encoding the symbolic constraints into a neural constraint and leveraging optimization algorithms to solve the neuro-symbolic constraints as a search problem.
NeuEx departs from the purist view that all variable dependencies and relations should be expressible precisely in a symbolic form. Instead, NeuEx treats the entire code on Lines 4-22 as a black box, and inductively learns a neural network, an approximate representation of the logic mapping the variables of interest to the target variables. The constraint represented by the neural network is termed a neural constraint. This neural constraint, say N, can represent relationships that may or may not be representable as symbolic constraints. Our approach thus creates a neuro-symbolic constraint, which includes both symbolic and neural constraints. Such neural constraint learning addresses the first challenge above, as it learns the constraints from test data rather than source code.
Revisiting the example in Figure 1, the neuro-symbolic constraints capturing the vulnerability at the last control location on Line 22 are as follows.
    uri_length = strlen(input_uri)              (1)
    ver_length = strlen(input_version)          (2)
    ptr > 99                                    (3)
    N : {uri_length, ver_length} → {ptr}        (4)
where uri_length is the length of the uri field input_uri and ver_length is the length of the version field input_version in input. The first two constraints are symbolic constraints over the input fields uri_length and ver_length. The third symbolic constraint captures the vulnerable condition for msgbuf. The last constraint is a neural constraint capturing the relationship between the variables uri_length and ver_length and the variable ptr accessing the vulnerable buffer msgbuf.
To the best of our knowledge, our approach is the first to train a neural net as a constraint and to solve symbolic and neural constraints together. In our approach, we design an intermediate language, termed the neuro-symbolic constraint language. Table 1 presents the syntax of the neuro-symbolic constraint language supported by NeuEx, which is expressive enough to model various constraints specified in many real applications, such as string and arithmetic constraints.

Table 1: Operators of the neuro-symbolic constraint language. Logical (⊖): ∨ | ∧. Conditional (⊗): == | ≠ | > | ≥ | < | ≤. Arithmetic (⊘): + | − | * | /.
Given the learned neuro-symbolic constraints, we seek values of the variables of interest that satisfy all the constraints. There exist multiple approaches to solve neuro-symbolic constraints. One naive way is to solve the neural and symbolic constraints separately. For example, consider the neuro-symbolic constraints in Equations 1-4. We first solve the three symbolic constraints with SAT/SMT solvers and then discover an input_uri where uri_length=10, an input_version where ver_length=20, and a ptr whose value is 100. Then, we feed the values of uri_length, ver_length, and ptr to the neural constraint to check whether they satisfy the learned relationship. For the above case, the neural constraint produces an output such as 32 for ptr when uri_length=10 and ver_length=20. Although this is a valid satisfiability result for the neural constraint, ptr=100 is not satisfiable for the current input_uri and input_version. This discrepancy arises because we solve these two types of constraints individually, without considering the inter-dependency of variables across them. Alternatively, one could resort to enumeration over the values of these three variables. However, this would require a long time to discover the exploit.
This inspires our design of neuro-symbolic constraint solving. NeuEx's solving precedence is purely symbolic, purely neural, and mixed constraints, in that order. Solving pure constraints is straightforward [33,79]. The main technical novelty in our design is that NeuEx treats mixed constraint solving as a search problem and utilizes optimization algorithms to search for satisfying solutions. To solve the mixed constraints simultaneously, NeuEx converts symbolic constraints to a loss function (or objective function), which is then used to guide the optimization, thereby enabling the conjunction of symbolic and neural constraints.
DESIGN
NeuEx is the first tool to solve neuro-symbolic constraints. We first explain the NeuEx setup and the building blocks we use in our approach. Then, we present the core constraint solver of NeuEx along with various optimization strategies.
Overview
Symbolic execution is a generic technique to automatically construct inputs required to drive a program's execution to a specific point in the code. To this end, a typical symbolic execution framework takes in a program and a vulnerable condition for which we want to test the program. The analyst using the framework also needs to mark the variables of interest as symbolic. Typically, all the input variables are marked symbolic, irrespective of their type. Further, environment variables, implicit inputs, user events, and storage devices can also be marked as symbolic, based on the use case [17,18,83]. The framework then generates a set of test inputs to execute an execution path in the program. The analyst can aid this process by providing hints, such as an input grammar, which they know beforehand.
At each branch in the execution, the framework logs the symbolic constraints collected so far as the path conditions required to reach this code point. Specifically, the logical conjunction of all the symbolic constraints gives us the path constraints that have to be satisfied by the input to reach this code point. Invoking a constraint solver on the symbolic path constraints yields a concrete input that drives the program to this execution point. The framework may also negate the symbolic constraints to explore other paths in the program, or introduce a feedback loop which uses the concrete input values returned by the constraint solver as new inputs in order to increase path coverage. Figure 2 shows how NeuEx interacts with one such symbolic engine. It takes in symbolic constraint formulas in conjunctive normal form and returns concrete values for each symbolic variable in the constraint formula if the path is feasible; otherwise, it returns UNSAT, which implies that the path is infeasible. There are various solvers which support a wide range of theories, including but not limited to linear and non-linear arithmetic over integers, booleans, bit-vectors, arrays, and strings.
However, there does not exist any theory for solving neural constraints in conjunction with symbolic constraints. A symbolic execution may need to solve neural constraints for programs which invoke a neural network for parts of the execution. For example, if a web application uses a face recognition module before granting access to a critical feature, a traditional symbolic framework will not be able to get past it. Furthermore, symbolic execution is well known to fare badly for complex pieces of code involving loops [84]. Thus, whenever the symbolic engine's default constraint solver is not able to find a solution and reaches its timeout, the framework can pause its execution and automatically trigger an alternative mechanism. This is where a neural constraint solver comes into play. If the framework is armed with a neural constraint solver such as NeuEx, it can model parts of the program as a black box and invoke the neural counterpart to solve the constraints. Specifically, the framework can dispatch all the symbolic constraints it has collected so far, along with the piece of code it wants to treat as a black box. NeuEx in turn first adds all the symbolic constraints to its neuro-symbolic constraints and then queries its constraint solver to produce concrete inputs or return UNSAT. In fact, any piece of code can be modeled in terms of neural constraints to leverage NeuEx. NeuEx is generic in design, as it can plug in any symbolic execution engine of choice. It only requires the symbolic execution tool to provide two interfaces: one for outputting the symbolic constraints and the other for querying the SAT/SMT solvers, as shown in Figure 2. Table 1 shows the grammar that NeuEx's constraint solver can reason about. For our example in Figure 1, we want to infer the relations between the input HTTP request and the index variable accessing the vulnerable buffer msgbuf. So the symbolic framework will pass the following constraints to NeuEx:
    uri_length = strlen(input_uri) ∧ ver_length = strlen(input_version)
        ∧ ptr > 99 ∧ N : {uri_length, ver_length} → {ptr}        (5)
Building Blocks
NeuEx's core engine solves neuro-symbolic constraints such as Equation 5 using its custom constraint solver, detailed in Section 3.3. It relies on two existing techniques: a SAT/SMT solver and a gradient-based neural solver. These solvers, referred to as SymSolv and NeuSolv respectively, form the basic building blocks of NeuEx.
SymSolv. NeuEx's symbolic constraint solver takes in first-order quantifier-free formulas over multiple theories (e.g., the empty theory, the theory of linear arithmetic, and strings) and returns UNSAT or concrete values as output. It internally employs the Z3 Theorem Prover [33] as an SMT solver to solve both arithmetic and string symbolic constraints.

Figure 3: NeuEx's DAG representation. S represents a symbolic constraint; N represents a neural constraint; V represents a variable. The dotted rectangles represent connected components of the DAG. (The figure shows constraints S1 over V1, V2; N1 over V3, V4; and S2, N2, S3 over V5-V9, grouped into components G1, G2, and G3.)
NeuSolv. For solving purely neural constraints, NeuSolv takes in the neural net and the associated loss function and generates the expected values that the output variables should have. NeuEx considers neural constraint solving as a search problem and uses a gradient-based search algorithm to search for satisfiable results. The gradient-based search algorithm searches for the minimum of a given loss function L(X), where X is an n-dimensional vector [79]. The loss function can be any differentiable function that monitors the error between the objective and the current predictions. Consider the example in Figure 1. The objective of NeuEx is to check whether the index ptr overruns the boundary of msgbuf. Hence, the error is the distance between the value of ptr leading to the buffer overflow and the value of ptr on Line 22 given by the function process_request on the current input. By minimizing the error, NeuEx can discover the input closest to the exploit. To minimize the error, the gradient-based search algorithm starts with a random input X_0, which is the initial state of NeuSolv. For every enumeration i, it computes the derivative ∇L(X_i) of L at the input X_i and then updates X_i accordingly. This is based on the observation that the derivative of a function always points toward the nearest local valley. The updated input X_{i+1} is defined as:
    X_{i+1} = X_i − ϵ∇L(X_i)        (6)
where ϵ is the learning rate that controls how much the input is updated. The gradient-based search algorithm keeps updating the input until it reaches a local minimum. To avoid non-termination, we set the maximum number of enumerations to M_e; if this is exceeded, NeuSolv stops and returns the current result. Note that the gradient-based search algorithm can only find a local minimum, since it stops when the error increases. If the loss function is non-convex with multiple local minima, the local minimum found may not be the global minimum. Moreover, it may find different local minima from different initial states. Thus, NeuEx executes the search algorithm multiple times with different initial states in order to find the global minimum of L(X).
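As an illustration, the core of this search loop can be sketched in a few lines of C; loss and grad stand for a user-supplied loss function L(X) and its gradient (both assumed, not part of NeuEx's published interface), and the loop stops at a local minimum or after M_e enumerations:

    #define N_DIM    2        /* dimension of X, e.g., {uri_length, ver_length} */
    #define MAX_ENUM 10000    /* M_e: maximum number of enumerations            */

    double loss(const double x[N_DIM]);                  /* L(X), assumed given  */
    void   grad(const double x[N_DIM], double g[N_DIM]); /* dL/dX, assumed given */

    /* Updates x in place following Equation 6; returns the final loss. */
    double neusolv(double x[N_DIM], double eps) {
        double g[N_DIM];
        double prev = loss(x);
        for (int e = 0; e < MAX_ENUM; e++) {
            grad(x, g);
            for (int i = 0; i < N_DIM; i++)
                x[i] -= eps * g[i];          /* X_{i+1} = X_i - eps * grad L(X_i) */
            double cur = loss(x);
            if (cur >= prev)                 /* error stopped decreasing:         */
                break;                       /* a local minimum has been reached  */
            prev = cur;
        }
        return prev;
    }

Since L may be non-convex, NeuEx restarts such a loop from several random initial values of x to escape poor local minima, as described above.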
Algorithm 1: Neuro-symbolic constraint solving. S_p denotes the purely symbolic constraints; N_p the purely neural constraints; S_m and N_m the symbolic and neural constraints in the mixed components.

    1: function Solve(S_p, N_p, S_m, N_m)
    2:    assign1, assign2 ← ∅;
    3:    (X, assign1) ← SymSolv(S_p);
    4:    if X == UNSAT then
    5:        go to UNSAT
    6:    end if
    7:    cnt ← 0;
    8:    while cnt < MAX_TRIAL1 do
    9:        (X, assign2) ← NeuSolv(N_p);
    10:       if X == SAT then
    11:           go to 16
    12:       end if
    13:       cnt ← cnt+1;
    14:   end while
    15:   go to UNSAT
    16:   assign ← Union(assign1, assign2);
    17:   ConflictDB ← ∅; trial_cnt ← 0;
    18:   while trial_cnt < MAX_TRIAL2 do
    19:       ConflictConsts ← CreateConflictConsts(ConflictDB);
    20:       (X, assign3) ← SymSolv(S_m, ConflictConsts);
    21:       if X == UNSAT then
    22:           go to UNSAT
    23:       end if
    24:       NeuralConsts ← PartialAssign(N_m, assign3); cnt ← 0;
    25:       while cnt < MAX_TRIAL1 do
    26:           (X, assign4) ← NeuSolv(NeuralConsts);
    27:           if X == SAT then
    28:               assign2 ← Union(assign3, assign4);
    29:               go to SAT
    30:           end if
    31:           cnt ← cnt+1;
    32:       end while
    33:       trial_cnt ← trial_cnt+1;
    34:       ConflictDB ← ConflictDB ∪ assign3;
    35:   end while
    36:   trial_cnt ← 0; F ← Encode(S_m, N_m);
    37:   while trial_cnt < MAX_TRIAL1 do
    38:       assign2 ← NeuSolv(F);
    39:       X ← CheckSAT(assign2, S_m);
    40:       if X == SAT then
    41:           go to SAT
    42:       end if
    43:       trial_cnt ← trial_cnt+1;
    44:   end while
    45:   UNSAT:
    46:       return (False, ∅);
    47:   SAT:
    48:       return (True, Union(assign, assign2));
    49: end function
Constraint Solver
We propose a constraint solver to solve the neuro-symbolic constraints with the help of SymSolv and NeuSolv. If the solver returns SAT, the neuro-symbolic constraints are guaranteed to be satisfiable; however, within a given timeout it is not guaranteed to decide satisfiability in all cases. Algorithm 1 presents the precise procedure for neuro-symbolic constraint solving.
DAG Generation. NeuEx takes the neuro-symbolic constraints and generates a directed acyclic graph (DAG) between the constraints and their variables. Each vertex of the DAG represents a variable or a constraint, and an edge indicates that the variable is involved in the constraint. For example, Figure 3 shows the generated DAG for the constraints
    (V1 op1 V2) ∧ (V3 op2 V4) ∧ (V5 op3 V6) ∧ (V6 op3 V7 op4 V8) ∧ (V8 op5 V9)

where op_k can be any operator.
Next, NeuEx partitions the DAG into connected components by breadth-first search [16]. Consider the example shown in Figure 3: the five constraints are partitioned into three connected components, G1, G2, and G3. NeuEx topologically sorts the components based on the type of constraints to schedule the solving sequence. Specifically, it clusters the components with only one kind of constraint as pure components (e.g., G1 and G2) and the components including both kinds as mixed components (e.g., G3). It further sub-categorizes pure components into purely symbolic (e.g., G1) and purely neural (e.g., G2) components.
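One simple way to realize this partitioning is union-find over variable indices, an equivalent alternative to the BFS mentioned above; the data layout below is illustrative, not NeuEx's actual one:

    #define MAX_VARS 1024

    /* parent[] implements union-find over variable indices; constraints
       that share a variable end up in the same connected component. */
    static int parent[MAX_VARS];

    static int find(int v) {
        return parent[v] == v ? v : (parent[v] = find(parent[v]));
    }

    /* const_vars[c] lists the variable indices of constraint c, and
       n_in_const[c] is its length. Afterwards, find(const_vars[c][0])
       is the component id of constraint c. */
    void partition(int n_vars, int n_consts,
                   const int *const_vars[], const int n_in_const[]) {
        for (int v = 0; v < n_vars; v++)
            parent[v] = v;
        for (int c = 0; c < n_consts; c++)
            for (int k = 1; k < n_in_const[c]; k++)
                parent[find(const_vars[c][k])] = find(const_vars[c][0]);
    }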
NeuEx assigns solving precedence to pure components first and mixed components second. NeuEx solves the mixed constraints at the end because these constraints have different representations and are hence time-consuming to solve. Thus, in our example, NeuEx first solves S1 and N1 and then checks the satisfiability of S2 ∧ N2 ∧ S3.
Pure Constraint Solving. For pure constraints, we first apply SymSolv to solve purely symbolic constraints on Line 3 and then handle purely neural constraints using NeuSolv on Line 9. Note that the order of these two kinds of constraints does not affect the result. We solve purely symbolic constraints first because SymSolv is fast, while the search algorithm for neural constraints requires numerous iterations and may not terminate. So, if SymSolv reports UNSAT for the purely symbolic constraints, the whole set of neuro-symbolic constraints is UNSAT, as all the constraints are conjunctive. Such early UNSAT detection speeds up the satisfiability checking. If both solvers output SAT, NeuEx continues with the process of solving the mixed constraints.
Mixed Constraint Solving I. NeuEx obtains the symbolic constraints from mixed components (e.g., S2 and S3) by cutting the edges between the neural constraints and their variables. Then, NeuEx invokes SymSolv to check their satisfiability on Line 20. If the solver returns UNSAT, NeuEx goes to the UNSAT state; otherwise, NeuEx collects the concrete values of the variables used in these symbolic constraints. Then, NeuEx plugs these concrete values into the neural constraints on Line 24. For example, in Figure 3, if the satisfiability result of S2 ∧ S3 is <t5, t6, t8, t9> for the variables <V5, V6, V8, V9>, NeuEx partially assigns V6 and V8 in N2 to be t6 and t8, respectively. Now, we have the partially assigned neural constraint N2' from N2. All that remains is to search for a value of V7 satisfying N2'.
To solve such a partially assigned neural constraint, NeuEx employs NeuSolv on Line 26. If NeuSolv outputs SAT, NeuEx goes to the SAT state. In the SAT state, NeuEx terminates and returns SAT with the combination of the satisfiability results for all the constraints. If NeuSolv outputs UNSAT, NeuEx considers the satisfiability result of the symbolic constraints as a counterexample and derives a conflict clause on Line 19. Specifically, in our example, NeuEx creates the new conflict clause (V5 ≠ t5) ∨ (V6 ≠ t6) ∨ (V8 ≠ t8) ∨ (V9 ≠ t9). Then NeuEx adds this clause (Line 34) and queries SymSolv with these new symbolic constraints (Line 20). This method of adding conflict clauses is similar to backtracking in the DPLL algorithm [32]. Although the conflict clause learning approach used in NeuEx is simple, NeuEx is generic enough to adopt other advanced strategies for constraint solving [60,66,87].
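Concretely, such a conflict clause can be serialized in SMT-LIB form before re-querying the solver; the helper below is an illustrative sketch (not NeuEx's actual interface), using the standard distinct predicate:

    #include <stdio.h>

    /* Emit (assert (or (distinct V5 t5) (distinct V6 t6) ...)) for a
       counterexample assignment vars[i] = vals[i]. */
    void emit_conflict_clause(FILE *out, const char **vars,
                              const long *vals, int n) {
        fprintf(out, "(assert (or");
        for (int i = 0; i < n; i++)
            fprintf(out, " (distinct %s %ld)", vars[i], vals[i]);
        fprintf(out, "))\n");
    }

For the example above, this would emit (assert (or (distinct V5 t5) (distinct V6 t6) (distinct V8 t8) (distinct V9 t9))).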
The above mixed constraint solving keeps executing the backtracking procedure until it finds no new counterexample. Consider the example in Figure 1. NeuEx first finds an input where uri_length=10, ver_length=30, and ptr=100. However, the result generated in this trial does not satisfy the neural constraint. NeuEx then transforms this counterexample into a conflict clause and goes to the next trial to discover a new result. But this trial-and-error can be very expensive. For the example in Section 2, mixed constraint solving takes more than 5000 trials in the worst case, even after augmenting the constraints with the additional information that the value of ptr is 100. To speed up mixed solving, NeuEx limits the number of trials to a threshold value.
Specifically, if we do not have a SAT decision after mixed constraint solving I within k iterations (users can adapt k according to their applications), NeuEx applies an alternative strategy where we combine the symbolic constraints and neural constraints together. There exist two possible strategies: transforming neural constraints to symbolic constraints, or the other way around. However, collapsing neural constraints to symbolic constraints incurs massive encoding clauses. For example, merely encoding a small binarized neural network generates millions of variables and millions of clauses [67]. Thus, we transform the mixed constraints into purely neural constraints and solve them together.
Mixed Constraint Solving II. NeuEx collapses symbolic constraints to neural constraints by encoding the symbolic constraints as a loss function on Line 36. This ensures the symbolic and neural constraints are in the same form. For example, in Figure 3, NeuEx transforms the constraints S2 and S3 into a loss function of N2.
Once the symbolic constraints are encoded into neural constraints, NeuEx applies NeuSolv to minimize the loss function on Line 38. The main intuition behind this approach is to guide the search with the help of the encoded symbolic constraints. The loss function measures the distance between the current result and a satisfiability result of the symbolic constraints. The search algorithm gives us a candidate value for the satisfiability checking of the neural constraints. However, the candidate value generated by minimizing the distance may not always satisfy the symbolic constraints, since the search algorithm only tries to minimize the loss rather than exactly force the satisfiability of the symbolic constraints. To weed out such cases, NeuEx checks the satisfiability of the symbolic constraints by plugging in the candidate value and querying SymSolv on Line 39. If the result is SAT, NeuEx goes to the SAT state. Otherwise, NeuEx continues executing Approach II with a different initial state of the search algorithm. For example, in Figure 3, NeuEx changes the initial value of V7 for every iteration. Note that each iteration in Approach I has to execute sequentially, because the addition of the conflict clause forces serialization. As opposed to this, each trial in Approach II is independent and thus embarrassingly parallelizable.
To avoid non-termination, NeuEx sets the maximum number of trials for mixed constraint solving II to M_t, which can be configured independently of our constraint solver. Empirically, we notice that mixed constraint solving II is always able to find the satisfiability result for complex constraints before hitting the threshold of 10.
Encoding Mixed Constraints
NeuEx's mixed constraints take up most of the time during solving. We reduce this overhead by transforming them into purely neural constraints. Specifically, NeuEx encodes the symbolic constraints S(X) as a loss function L(X) such that:

    S(X) = S(min_X(L(X)))        (7)
Next, NeuEx uses this loss function along with the neural constraints and applies NeuSolv to minimize the loss function of the entire set of mixed constraints. This encoding has two main advantages. First, it is straightforward to encode symbolic constraints into a loss function. Second, there exist gradient-based search algorithms for minimizing the loss function, which speeds up constraint solving in NeuEx.
Generic Encoding. As long as we have a loss function for the symbolic constraints, we can apply NeuSolv to solve the mixed constraints. Given the grammar of symbolic constraints shown in Table 1, there exist six types of symbolic constraints and two kinds of combinations of two symbolic constraints, based on their logical operators. Table 2 describes the loss function for all forms of symbolic constraints. Taking a = b as an example, the loss function L = abs(a − b) achieves its minimum value 0 when a = b, where a and b can be arbitrary expressions. Thus, minimizing the loss function L is equivalent to solving the symbolic constraint. Similar logic explains the equivalence between the other kinds of symbolic constraints and their loss functions. These are not the only possible loss functions for these constraints: any function satisfying Equation 7 can be used as a loss function, and the same encoding mechanism can be applied to other constraints. Note that there are three special requirements for the encoding mechanism.
Non-Zero Gradient Until SAT. The derivative of the loss function should not be zero until we find a satisfying result. For example, when we encode a < b, the derivative of the loss function should not be equal to zero when a = b; otherwise, NeuSolv would stop searching and return an unsatisfiable result. To guarantee this, we add a small positive value α and adapt the loss function to L = max(a − b + α, 0) for the constraint a < b, and similarly for a > b and a ≠ b. Taking the motivating example shown in Section 2, the loss function is L = max(99 − ptr + 0.5, 0), where α = 0.5.

Table 2: Transforming symbolic constraints into the corresponding loss function. a and b represent arbitrary expressions. S1 and S2 represent arbitrary symbolic constraints. L represents the loss function used for neural constraint solving. L_S1 and L_S2 represent the loss functions for the symbolic constraints S1 and S2, respectively. α represents a small positive value. β represents a small real value.

  Symbolic Constraint    Loss Function (L)
  S1 : a < b             L = max(a − b + α, 0)
  S1 : a > b             L = max(b − a + α, 0)
  S1 : a ≤ b             L = max(a − b, 0)
  S1 : a ≥ b             L = max(b − a, 0)
  S1 : a = b             L = abs(a − b)
  S1 : a ≠ b             L = max(−1, −abs(a − b + β))
  S1 ∧ S2                L = L_S1 + L_S2
  S1 ∨ S2                L = min(L_S1, L_S2)
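To make the encoding concrete, the following is a minimal sketch of the Table 2 encodings in Python. NeuEx itself realizes these as differentiable operations inside the TensorFlow graph rather than as plain Python floats, and the α and β values here are illustrative.

```python
ALPHA, BETA = 0.5, 0.5   # small positive / small real offsets (illustrative)

def lt(a, b):  return max(a - b + ALPHA, 0)           # a < b
def gt(a, b):  return max(b - a + ALPHA, 0)           # a > b
def le(a, b):  return max(a - b, 0)                   # a <= b
def ge(a, b):  return max(b - a, 0)                   # a >= b
def eq(a, b):  return abs(a - b)                      # a == b
def ne(a, b):  return max(-1, -abs(a - b + BETA))     # a != b, bounded below by -1
def land(l1, l2): return l1 + l2                      # loss of S1 AND S2
def lor(l1, l2):  return min(l1, l2)                  # loss of S1 OR S2

# The motivating constraint 99 < ptr from Section 2 then becomes:
ptr_loss = lambda ptr: lt(99, ptr)                    # max(99 - ptr + 0.5, 0)
```

Each function attains its minimum exactly on satisfying assignments, which is why conjunctions can simply add losses while disjunctions take the minimum.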
Fixed Lower Bound on Loss Function. The loss function for each constraint needs a fixed lower bound, so that NeuSolv does not minimize the loss of only one constraint within a conjunction. For instance, we should not encode a ≠ b as L = −abs(a − b + β), since this loss function can go to negative infinity, where β is a small real value. If the constraint is a ≠ b ∧ c < d, where c and d can be arbitrary expressions, NeuSolv may then minimize only the loss function for a ≠ b, because the loss function for a ≠ b ∧ c < d is the sum of the loss functions for a ≠ b and c < d; it may thus never find a result satisfying both symbolic constraints. To avoid this, we add a lower bound and adjust the loss function to L = max(−1, −abs(a − b + β)). This lower bound ensures that every loss function has a finite global minimum.
Generality of Encoding. NeuSolv can only be applied to differentiable loss functions, because it requires computing the derivatives of the loss function. Thus, NeuEx needs to transform the expressions a and b in Table 2 into differentiable functions. The encoding mechanism for expressions is generic: as long as NeuEx can transform an expression into a differentiable function, any encoding mechanism can be plugged into NeuEx for neuro-symbolic constraint solving.
Optimizations
NeuEx applies five optimization strategies to reduce the computation time for neuro-symbolic constraint solving.
Single Variable Update. Given the set of input variables of a neural constraint, NeuEx updates only one variable per enumeration in NeuSolv. To select the variable, NeuEx computes the derivative with respect to each variable and sorts the absolute values of the derivatives; the updated variable is the one with the largest absolute derivative. The reason is that each derivative measures only the influence of changing a single variable on the value of the loss function, not the joint influence of multiple variables, so updating several variables simultaneously may increase the loss. Moreover, updating one variable per iteration allows the search engine to perform the minimum number of mutations on the initial input, which helps prevent the input from becoming invalid.
Type-based Update. To ensure the input remains valid, NeuEx adapts the update strategy to the types of variables. If the variable is an integer, NeuEx first binarizes the value of the derivative and then updates the variable with the binarized value. If the variable is a float, NeuEx updates the variable with the actual derivative. Both rules are sketched below.
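The following sketch combines the single-variable and type-based update rules. Here, grad_fn (returning ∂L/∂x for every input variable) and the is_int type mask are assumed to be supplied by the surrounding solver.

```python
import numpy as np

def update_step(x, grad_fn, is_int, lr=0.01):
    """One NeuSolv enumeration step (a sketch, not NeuEx's exact code)."""
    g = grad_fn(x)
    i = int(np.argmax(np.abs(g)))      # pick the variable with the largest |dL/dx|
    if is_int[i]:
        x[i] -= np.sign(g[i])          # integer: step by the binarized derivative
    else:
        x[i] -= lr * g[i]              # float: step by the actual derivative
    return x                           # only this one variable was updated
```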
Caching. NeuEx stores the updated result of each enumeration in NeuSolv. Since the search algorithm is deterministic, the same input, neural constraints, and loss function always produce the same final result. Thus, to avoid unnecessary recomputation, NeuEx keeps an update history and checks whether the current input is cached; if so, NeuEx reuses the previous result, and otherwise it keeps searching for a new input.
SAT Checking Per Enumeration. To speed up the solving procedure, NeuEx verifies the satisfiability of the variables after each enumeration in NeuSolv. Once the symbolic constraints are satisfied, NeuSolv terminates and returns SAT to NeuEx. The rationale is that a satisfying assignment need not be the global minimum of the loss: for example, every assignment except those with a = b satisfies the constraint a ≠ b. Hence, NeuEx does not wait until the loss function is fully minimized, but checks the updated result at every iteration, as sketched below.
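A sketch of NeuSolv's inner loop with the caching and per-enumeration SAT check wired together; update_step is the rule sketched earlier, and sym_sat is an assumed exact check of the symbolic constraints on the current assignment.

```python
def neusolv(x, grad_fn, is_int, sym_sat, max_enum=10000):
    cache = {}                              # update history of this deterministic search
    for _ in range(max_enum):               # at most M_e enumerations
        key = tuple(x)
        if key in cache:
            x = cache[key]                  # reuse a previously computed update
        else:
            x_new = update_step(list(x), grad_fn, is_int)
            cache[key] = x_new
            x = x_new
        if sym_sat(x):                      # check satisfiability every iteration,
            return "SAT", x                 # not only at the global loss minimum
    return "UNKNOWN", x
```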
Parallelization. NeuEx executes NeuSolv with different initial inputs in parallel, since the trials for solving mixed constraints are independent. This parallelization reduces the time needed to find the global minimum of the loss function.
NEURAL CONSTRAINT LEARNING
We have described the constraint solver for neuro-symbolic constraints; it remains to discuss how NeuEx obtains the neural constraints. In this section, we present the design of NeuEx's neural constraint learning engine.
Given a program, the choice of network architecture is key to learning any neural constraint. In this paper, we use a multilayer perceptron (MLP), which consists of multiple layers of nodes where each node is connected to all nodes in the previous layer [80]; nodes within the same layer share no connections. We select this architecture because it is a suitable choice for fixed-length inputs. More specialized architectures (e.g., CNNs [55,57] and RNNs [63,64]) are more efficient for data with particular structure, and NeuEx gives users the flexibility to plug in additional network architectures.
The choice of activation function also plays a significant role in neural constraint inference. We considered multiple activation functions (e.g., Sigmoid and Tanh) and ultimately selected the rectified linear unit (ReLU), because ReLU yields sparse representations and reduces the likelihood of vanishing gradients [39,61]. In other words, a neural network with ReLU has a higher chance of converging than one with the other activation functions.
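For concreteness, the following is a minimal sketch of such an MLP in TensorFlow's Keras API. The layer widths and training settings are our illustrative choices, not NeuEx's actual configuration.

```python
import tensorflow as tf

def build_constraint_net(n_in, n_out):
    """MLP mapping n_in input variables to n_out output variables."""
    return tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(n_in,)),
        tf.keras.layers.Dense(64, activation="relu"),  # ReLU hidden layers
        tf.keras.layers.Dense(n_out),                  # linear outputs for regression
    ])

model = build_constraint_net(3, 2)     # e.g., N : {a, b, cnt} -> {c, d}
model.compile(optimizer="adam", loss="mse")
# Training with the early-stopping regularization described next:
# model.fit(X_train, Y_train, validation_data=(X_test, Y_test),
#           callbacks=[tf.keras.callbacks.EarlyStopping(patience=5)])
```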
In addition, to ensure the generality of the neural constraints, we implement an early-stopping mechanism, a regularization approach that reduces over-fitting [93]. It stops the learning procedure when the currently learned neural constraint behaves worse on unseen test executions than the previous one. As the unseen test executions are never used for learning the neural constraint, performance on them is a fair measure of the generality of the learned neural constraints.

Table 3: NeuEx finds the exploits for 13 out of 14 programs in the buffer overflow benchmark. LD represents the number of branches whose conditions depend on loop counts rather than input arguments, which indicates how complex the program is for symbolic execution to analyze.
NeuEx can use any machine learning approach, optimization algorithm (e.g., momentum gradient descent [76] and AdaGrad [34]), and regularization solution (e.g., dropout [89] and Tikhonov regularization [90]) to learn the neural constraints. As machine learning advances, NeuEx can adopt new architectures and learning approaches for neural constraint inference.
EVALUATION
We implement NeuEx in Python and Google TensorFlow [3] with a total of 1808 lines of code for training the neural constraints and solving the neuro-symbolic constraints. Our evaluation highlights two features of NeuEx: (a) it generates exploits for 13/14 vulnerable programs; (b) it solves 100% of the given neuro-symbolic constraints for each loop.
Experimental Setup. To evaluate NeuEx, we configure the maximum number of enumerations of NeuSolv, M_e, to be 10000, after which NeuSolv terminates (discussed in Section 3.1). The larger the maximum number of enumerations, the better NeuEx performs on neural constraint solving. Our experiments are performed on a server with 40-core Intel Xeon 2.6GHz CPUs and 64 GB of RAM.
Effectiveness in Exploit Generation
To evaluate the effectiveness of NeuEx in exploit generation, we select 14 vulnerable programs with buffer overflows from open-source network servers (e.g., BIND, Sendmail, and WuFTP) [96]. We choose this benchmark because it comprises multiple loops and various complex control and data dependencies, which are challenging for symbolic execution to handle (discussed in Section 2). To measure the complexity of each problem, we use the number of branches in the vulnerable path whose conditions depend on loop counts rather than input arguments; this metric is also used in [84]. Table 3 reports the complexity of each program along with the result of exploit generation.
To show the effectiveness of neuro-symbolic constraint learning and solving, for each program we mark the code from the beginning of the program to the location accessing the buffers to be represented as neural constraints. We then mark all inputs and all buffer lengths in the program as symbolic by default. In cases where we know the input format, we provide it as additional information in the form of program annotations (e.g., specific input field values). In our example from Section 2, to analyze the program that takes HTTP requests as input, NeuEx marks the uri and version fields as well as the lengths of all the buffers as symbolic. NeuEx randomly initializes the symbolic input arguments for each program, executes the program, and collects the values of the variables of interest; for our experiments, we collect up to 100000 samples of such executions, as sketched below. 80% of these samples are used for learning the neural constraints, while the remaining 20% are used for evaluating the accuracy of the learned neural constraints. To obtain the vulnerable conditions, we manually analyze the source code and set them as symbolic constraints.
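A sketch of this data-collection step follows, assuming a hypothetical harness run_program that executes the instrumented target on concrete inputs and records the variables of interest (e.g., the final and allocated buffer lengths); the input fields and size bounds are illustrative.

```python
import random

samples = []
for _ in range(100000):
    # Random concrete values for the symbolic input fields (illustrative).
    uri = bytes(random.randrange(256) for _ in range(random.randrange(1, 512)))
    version = bytes(random.randrange(256) for _ in range(8))
    observed = run_program(uri, version)        # concrete, black-box execution
    samples.append(((uri, version), observed))  # input -> observed variable values

split = int(0.8 * len(samples))
train, test = samples[:split], samples[split:]  # 80% for learning, 20% for accuracy
```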
Using the above steps, our experiments show that NeuEx is able to find the correct exploit for 13 out of 14 programs in the benchmark. Next, we compare the efficiency of NeuEx on buffer overflow exploit generation with an existing method called Loop-Extended Symbolic Execution (LESE) [84], a dynamic symbolic execution based tool. It is a heuristic-based approach that hard-codes the relationship between loop counts and inputs. We reproduce LESE's machine configuration for a fair comparison. Our experiments show that NeuEx requires at most two hours to find the exploits on this setup, whereas LESE requires more than five hours. Thus, NeuEx's performance is comparable to LESE's for exploit generation.
In addition, the time that NeuEx spends in exploit generation does not depend on the complexity of the target code, as NeuEx is a black-box approach to neural constraint learning. For example, the time spent analyzing the program Sendmail1, with one loop-dependent branch, is the same as the time spent on the program Sendmail3, with 18 loop-dependent branches.

Finding 1: NeuEx is able to find the correct exploit for 13 out of 14 programs.
To check whether NeuEx learns the correct constraints, we manually analyze the weights of the trained neural constraints (discussed in Appendix A.1). We find that NeuEx is able to learn neural constraints representing the correct variable relationships. For example, in the program Sendmail7, NeuEx not only learns that the final length of the vulnerable buffer (*rr)→rr_u.rr_txt is controlled by the txtlen field of the DNS response, which is the 49th element of the input, but also that the allocated size of the vulnerable buffer is determined by the size field, which consists of the 47th and 48th elements of the DNS response. For the programs for which NeuEx successfully generates exploits, we manually analyze all the neural constraints and find that they all precisely represent the variable relationships in the source code.
Finding 2: NeuEx learns the correct neural constraint to represent the variable relationships in the source code.

Figure 4: Type distribution of the symbolic constraints in NLA and HOLA. T1 represents the constraints with ≥ or ≤ operators; T2 represents the constraints with > or < operators; T3 represents the constraints with == or ≠ operators; T4 represents the constraints with ∧ or ∨ operators.

NeuEx reaches its timeout for exploit generation in only one program (Sendmail6), where the buffer overflow is caused by an integer overflow. NeuEx fails to generate the exploit because the neural network treats integers as real values and is not aware of the programmatic behavior that integers wrap around once they exceed the maximum value representable by their bit width. For example, to capture a 32-bit integer overflow, NeuEx needs to know the rule of integer overflow, namely that the value becomes negative if it is larger than 0x7FFFFFFF on x86. To address this, we can explicitly add this rule as part of the symbolic constraints for all integer types and then solve the resulting neuro-symbolic constraints.
Micro-Benchmarking of NeuEx
We ask three empirical questions with our micro-benchmarks:
(1) How fast does NeuEx solve a given neuro-symbolic constraint? (2) What is the accuracy of the neural constraints learned by NeuEx? (3) What is the influence of learning and solving on the overall efficiency of NeuEx? For this, we use two benchmarks, namely HOLA and NLA, which together comprise 73 programs with 82 loops and 259 input variables. These two benchmarks are widely used for invariant synthesis [46,70,71], which is useful for formal verification. We select them because they exhibit various kinds of loop invariants, and capturing such invariants is a known challenge for symbolic execution. To this end, we evaluate NeuEx's ability to reach the post-condition of the loops in these benchmarks. For each program, we mark the loop to be represented by neural constraints. In each loop, NeuEx needs to (1) learn the loop invariant N, (2) get the symbolic invariant of the loop guard S from the symbolic execution engine, and (3) solve N ∧ ¬S. Consider the example in Figure 5. NeuEx first learns the neural constraint N : {a, b, cnt} → {c, d} representing the loop invariant on Line 5. Then, it gets the loop guard c > d on Line 3 from the symbolic execution engine. Finally, it solves the neuro-symbolic constraint N ∧ c ≤ d. For each loop in our benchmarks, we mark all the input arguments (e.g., a and b) as well as the loop count as symbolic. If the loop count is not an explicit variable, NeuEx adds an implicit count incremented on each iteration to capture the number of iterations of the loop. Figure 4 shows the type distribution of the negation of the loop guards in the NLA and HOLA benchmarks, which covers all kinds of constraints expressed in Table 2.
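As a concrete illustration of step (3) for the Figure 5 example, the following sketch reuses a Keras-style model (as in the earlier MLP snippet, an assumption on our part) standing in for N : {a, b, cnt} → {c, d}, and encodes the negated loop guard c ≤ d as a hinge loss over the network's predicted outputs.

```python
import tensorflow as tf

def task_loss(model, a, b, cnt):
    """Loss for N AND (c <= d); a, b, cnt are scalar tf.Variables."""
    x = tf.stack([a, b, cnt])[tf.newaxis, :]  # batch of one input assignment
    c, d = tf.unstack(model(x)[0])            # predicted values at loop exit
    return tf.maximum(c - d, 0.0)             # Table 2 encoding of c <= d

# Gradients of task_loss with respect to a, b, and cnt then drive
# NeuSolv's search for a satisfying assignment.
```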
Effectiveness of Neuro-Symbolic Constraint Solving. Recall that NeuSolv randomly sets an initial state when it begins the gradient-based optimization of a loss function. If it fails to find a satisfiability result before the timeout, NeuEx restarts the search from a different initial state, because the search depends on the initial state (discussed in Section 3.1). We call each search attempt from a new initial state a trial. Thus, to evaluate how fast NeuEx solves a given neuro-symbolic constraint, we use the number of trials that NeuSolv takes as the metric: the fewer trials NeuEx needs, the faster the neuro-symbolic constraint solving. The T_NS columns in Table 4 and Table 5 show the number of trials NeuEx required to solve the given neuro-symbolic constraints for each loop in the NLA and HOLA benchmarks. From these results, we find that NeuEx successfully solves 100% of the given neuro-symbolic constraints with a maximum of three trials. Among the 82 loops, NeuEx solves 95% of the neuro-symbolic constraints with only one trial. This result indicates that NeuEx can efficiently solve various kinds of neuro-symbolic constraints.
Finding 3: NeuEx is effective in neuro-symbolic constraint solving for 100% of constraints with a maximum of three trials.
NeuEx needs more than one trial for 4 of the 82 loops, for two main reasons. First, the current timeout value is not sufficient for solving the constraints in two cases (programs 14 and 40 in the HOLA benchmark). To address this, we can either increase the timeout or restart the search with a new initial state. We experiment with both options and report that the latter solves the constraints faster: for example, in program 40, NeuEx solves the given neuro-symbolic constraints within 2 trials, whereas a single trial still reaches the timeout even when the timeout is increased three-fold. For the remaining two loops, NeuEx fails because of the inefficiency of the gradient-based search in NeuSolv; for example, in program fermat2, NeuSolv gets stuck at a saddle point. To address this, we can apply the trust region algorithm [88] or cubic regularization [68], which use second-order derivatives to find and avoid saddle points.
Accuracy of Neural Constraint Learning. To measure the effectiveness of neural constraint learning, we compute the learning accuracy Acc, defined as Acc = M_R / M, where M_R is the number of (unseen) test executions for which the learned neural constraints predict the right outputs and M is the total number of tested executions. The higher the accuracy, the more precise the learned neural constraints. For the 82 loops in our benchmarks, NeuEx achieves more than 80% accuracy for 66 neural constraints. For example, NeuEx achieves 97% accuracy for learning the second loop invariant in the program hard, which contains multiple multiplications and divisions.

Table 4: Evaluation results of NeuEx's constraint solving on the NLA benchmark. P represents the program name; Type shows the type of symbolic constraints; T_NS shows the number of trials NeuSolv takes for solving the given neuro-symbolic constraints; T_NE represents the number of trials NeuEx needs to reach the post-condition of the loop. '-' represents that NeuEx reaches its timeout before reaching the post-condition.

  P           Type T_NS T_NE | P          Type T_NS T_NE | P        Type T_NS T_NE | P     Type T_NS T_NE
  cohendiv    T2   1    1    | dijkstra_2 T3   2    -    | prod4br  T4   1    1    | geo3  T1   1    1
  divbin_1    T2   1    1    | freire1    T1   1    1    | knuth    T4   1    1    | ps2   T1   1    1
  divbin_2    T3   1    5    | freire2    T1   1    1    | fermat1  T3   1    1    | ps3   T1   1    1
  mannadiv    T3   1    1    | cohencu    T2   1    1    | fermat2  T3   3    3    | ps4   T1   1    1
  hard_1      T2   1    1    | egcd       T3   1    2    | lcm1     T3   1    1    | ps5   T1   1    1
  hard_2      T3   1    5    | egcd2      T3   1    1    | lcm2     T3   1    4    | ps6   T1   1    1
  sqrt1       T1   1    1    | egcd3      T3   1    5    | geo1     T1   1    1    |
  dijkstra_1  T1   1    1    | prodbin    T3   1    1    | geo2     T1   1    1    |

Table 5: Evaluation results of NeuEx's constraint solving on the HOLA benchmark. P represents the program name; Type shows the type of symbolic constraints; T_NS shows the number of trials NeuSolv takes for solving the given neuro-symbolic constraints; T_NE represents the number of trials NeuEx needs to reach the post-condition of the loop.

  P     Type T_NS T_NE | P     Type T_NS T_NE | P     Type T_NS T_NE | P     Type T_NS T_NE
  01    T1   1    1    | 12_1  T1   1    1    | 24    T1   1    1    | 36_2  T1   1    1
  02    T1   1    1    | 12_2  T2   1    1    | 25    T1   1    1    | 37    T1   1    1
  03    T1   1    1    | 13    T1   1    1    | 26    T1   1    1    | 38    T1   1    1
  04    T1   1    2    | 14    T2   3    3    | 27    T1   1    1    | 39    T3   1    1
  05    T1   1    1    | 15    T1   1    1    | 28_1  T1   1    1    | 40_1  T1   1    1
  06    T1   1    1    | 16    T3   1    2    | 28_2  T3   1    1    | 40_2  T1   1    1
  07    T1   1    1    | 17    T1   1    1    | 29    T1   1    1    | 41    T2   1    1
  08    T1   1    1    | 18    T1   1    1    | 31    T1   1    1    | 42    T1   1    1
  09_1  T1   1    1    | 19    T1   1    1    | 32    T1   1    1    | 43    T1   1    1
  09_2  T1   1    1    | 20    T1   1    1    | 33    T1   1    1    | 44    T2   2    2
  09_3  T1   1    1    | 21    T1   1    1    | 34    T1   1    1    | 45_1  T1   1    1
  09_4  T1   1    1    | 22    T1   1    1    | 35    T1   1    1    | 45_2  T1   1    1
  10    T1   1    1    | 23    T1   1    1    | 36_1  T1   1    1    | 46    T1   1    1
Finding 4: NeuEx achieves more than 80% learning accuracy for 66/82 neural constraints.
Combined (Learning + Solving) Efficiency. Solving a neuro-symbolic task (here, reaching the post-condition) involves two steps: inferring the constraints and solving them. So far in our micro-benchmarks, we have evaluated these two steps independently of each other. For completeness, we now present an experimental analysis of how the two steps affect the overall efficiency of NeuEx in performing a given task.
NeuEx successfully solves 71 out of 82 end-to-end tasks in total. Table 6 shows the contribution of each step to solving a neuro-symbolic task. When both steps are successful, NeuEx succeeds in solving 96.8% of the tasks (top left cell). However, when only NeuEx's solving is unsuccessful (4 cases in the bottom left cell), it always fails to complete the task. This shows that task solving depends directly on constraint solving, and it justifies our focus on improving the efficiency of neuro-symbolic constraint solving in our constraint solver. Ideally, NeuEx must always learn the constraints accurately as well as solve the constraints successfully in order to guarantee post-condition reachability. However, we notice that even when learning is inaccurate, NeuEx is still able to solve 66.7% of the tasks (top right cell). This is because NeuEx is at least able to learn the trend of certain variables involved in the constraints, if not the precise constraints. Consider the example in Figure 5. If the neural constraint learns c = a + 4×cnt² ∧ d = b + cnt, NeuEx finds the satisfying assignment a = 2, b = 5, cnt = 1, c = 6 and d = 6. Even though the neural constraint does not capture the precise loop invariant c = a + cnt × (2×b + 1 + cnt)/2 ∧ d = b + cnt, it at least captures that the value of c increases as cnt² increases. This partial learning aids NeuEx in solving the task and finding a = 2, b = 5 and cnt = 1. Thus, we conclude that although learning is important, it does not affect task solving as drastically as constraint solving does. This highlights the importance of effectiveness in constraint solving.

Table 6: Effect of constraint learning and solving on NeuEx's overall efficiency. We classify constraint learning as a success when accuracy ≥ 80% and as a failure otherwise. We classify constraint solving as a success when NeuEx solves the given constraints with one trial and as a failure otherwise. We classify task solving as a success when the concrete values generated with one trial reach the post-condition and as a failure otherwise. Each cell value gives the number of loops that succeed in task solving out of the total loops in that category.
Finding 5: Constraint solving affects NeuEx's effectiveness more significantly than constraint learning.
RELATED WORK
NeuEx is a new design point in constraint synthesis and constraint solving. In this section, we discuss the problems of existing symbolic execution tools to show how NeuEx handles them, and we describe how NeuEx differs from existing constraint synthesis work.
Symbolic Execution
Symbolic execution [51] has been used for program verification [31], software testing [17,51], and program repair via specification inference [69]. In the last decade, we have witnessed increased adoption of dynamic symbolic execution [41], in which symbolic execution is used to partition the input space, with the goal of achieving increased behavioral coverage. The computed input partitions are often defined by program paths: all inputs tracing the same path belong to the same partition. Thus, the test generation achieved by dynamic symbolic execution suffers from the path explosion problem, which can be exacerbated by the presence of complex control flows, including long-running loops (which affect the scalability of dynamic symbolic execution since it involves loop unrolling) and external libraries. NeuEx, however, does not suffer from path explosion, as it learns the constraints directly from test executions.
Tackling path explosion is a major challenge in symbolic execution. Boonstoppel et al. suggest pruning redundant paths during symbolic execution tree construction [14]. Veritesting alternates between dynamic symbolic execution and static symbolic execution to mitigate path explosion [9]. The other predominant way of tackling the path explosion problem is to summarize the behavior of code fragments in a program [5,8,40,56,75,85]. Simply speaking, a summarization technique provides an approximation of the behavior of certain fragments of a program to keep the scalability of symbolic execution manageable. Such an approximation of behaviors is also useful when certain code fragments, such as remote calls and libraries written in a different language, are not available for analysis.
Among past approaches supporting the approximation of the behavior of (parts of) a program, the use of function summaries has been studied by Godefroid [40]; such function summaries can also be computed on demand [5]. Kuznetsov et al. present a selective technique for merging dynamic states: it merges two dynamic symbolic execution runs based on an estimate of the difficulty of solving the resulting Satisfiability Modulo Theory (SMT) constraints [56]. Veritesting supports dynamic symbolic execution with static symbolic execution, thereby alleviating path explosion due to factors such as loop unrolling [8]. The works of [75,85] suggest grouping paths based on similar symbolic expressions in variables and using such symbolic expressions as dynamic summaries to group paths.
Constraint Synthesis
To support the summarization of program behaviors, the other core technical primitive we can use is constraint synthesis. In our work, we propose a new constraint synthesis approach that uses neural networks to learn constraints that are infeasible for symbolic execution to derive. The major difference from previous solutions is that NeuEx requires no pre-defined constraint templates and can learn any kind of relationship between variables.
Over the last decade, there have been two lines of work in constraint synthesis: white-box and black-box approaches. White-box constraint inference relies on a combination of light-weight techniques such as abstract interpretation [11, 26-28, 65, 77, 78], interpolation [22,50,62], or the IC3 model checking algorithm [15]. Although some white-box approaches can provide sound and complete constraints [25], they depend on the availability of source code and a human-specified semantics of the source language. Constructing these tools has required considerable manual expertise to achieve precision, and many of these techniques can be highly computationally intensive.
To handle the unavailability of source code, there is also a rich class of work on reverse engineering from dynamic executions [35, 38, 46, 70-72, 81]. Such works can be used to generate summaries of observed behavior from test executions. These summaries are not guaranteed to be complete; on the other hand, such incomplete summaries can be obtained from tests alone, so the source code of the code fragment being summarized need not be available. Daikon [35] is one of the earlier works proposing the synthesis of potential invariants from values observed in test executions; the invariants supported in Daikon take the form of linear relations among program variables. DIG extends Daikon to enable the dynamic discovery of non-linear polynomial invariants via a combination of techniques including equation solving and polyhedral reasoning [71]. Krishna et al. use decision trees, a machine learning technique, to learn inductive constraints from good and bad test executions [54].
NeuEx devises a new gradient-based constraint solver and is the first work to support solving the conjunction of neural and SMT constraints. A similar gradient-based approach is also used in Angora [21], albeit for a completely different purpose: it treats the predicates of branches as a black-box function that is not differentiable, and it computes the changes in the predicates by directly mutating the value of each variable in order to find the direction in which to change the variables. Similarly, Li et al. use the number of satisfied primitive constraints in a path condition as the target function for optimization and apply the RACOS algorithm [94] to optimize this non-differentiable function for complementing symbolic execution [59]. In contrast, NeuEx learns a differentiable function representing the behavior of the program from test cases, encodes the symbolic constraints into a differentiable function embedded alongside the neural constraints, and computes the derivative with respect to each variable for updating.
A recent work [12] suggests combining neural reasoning and symbolic reasoning, albeit for an entirely different purpose: automated repair of student programming assignments. In contrast, our proposed neuro-symbolic execution solves neural and symbolic constraints together and can be seen as a general-purpose testing and analysis engine for programs.
CONCLUSIONS
To our knowledge, NeuEx is the first work to utilize neural networks to learn constraints from values observed in test executions without pre-defined templates. NeuEx offers a new design point that simultaneously solves both symbolic constraints and neural constraints effectively, which can be used to complement symbolic execution. It achieves good performance in both neuro-symbolic constraint solving and exploit generation for buffer overflows.
Figure 1: A simplified example that parses the HTTP request and constructs a new message.
Figure 2: Workflow for NeuEx. The circled numbers represent the order of operations. '*' indicates that the corresponding operations may be performed multiple times. SymSolv represents the symbolic constraint solver. NeuSolv represents the neuro-symbolic constraint solver.
Algorithm 1: function NeuCL(S, N) ▷ S: Symbolic constraint list
Figure 5: A simple function with one loop.
Table 1: The grammar of the neuro-symbolic constraint language supported by NeuEx.

  Neuro-Symbolic Constraint   NS      ::=  N ∧ S
  Neural constraint           N       ::=  V_I^n → V_O^n
  Symbolic constraint         S       ::=  e1 ⊖ e2 | e
  Variable                    StrVar  ::=  ConstStr | StrVar • StrVar
                              NumVar  ::=  ConstNum | NumVar ⊘ NumVar
  Expression                  e       ::=  contains(StrVar, StrVar)
                                        |  strstr(StrVar, StrVar) ⊗ NumVar
                                        |  strlen(StrVar) ⊗ NumVar
                                        |  NumVar ⊗ NumVar
  Logical

1 '*' matches as many characters as possible.
2 input_uri and input_version are the contents of the fields from the input generated based on knowledge of the input, which is different from URI and version in Figure 1.
ACKNOWLEDGMENTS

We thank Marcel Böhme for participating in the initial discussion of the project. We also thank Shruti Tople, Shin Hwei Tan, and Xiang Gao for useful feedback on earlier drafts of this paper. This research is supported by a research grant from DSO, Singapore. All opinions expressed in this paper are solely those of the authors.

A APPENDIX

A.1 Neural Constraint Analysis

We analyze the learned neural constraints by inspecting the trained weights and biases of the neural network. Given a set of variables as input to the neural network, if an input variable is not related to the output variable, the weight between the input and output variable is zero; otherwise, it is larger than zero. For example, the length of the vulnerable buffer in the program Bind1 is controlled by the dlen field, which is the 43rd byte of the DNS queries, because the weight for this input variable is 0.99, the largest absolute value compared with the other fields.
2018. Jalangi2: Dynamic analysis framework for JavaScript. https://github.com/Samsung/jalangi2.
2018. PyExZ3: Python Exploration with Z3. https://github.com/thomasjball/PyExZ3.
Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016. TensorFlow: A System for Large-Scale Machine Learning. In OSDI, Vol. 16. 265-283.
Building bridges between symbolic computation and satisfiability checking. Erika Ábrahám, Proceedings of the 2015 ACM on International Symposium on Symbolic and Algebraic Computation. the 2015 ACM on International Symposium on Symbolic and Algebraic ComputationACMErika Ábrahám. 2015. Building bridges between symbolic computation and satisfiability checking. In Proceedings of the 2015 ACM on International Symposium on Symbolic and Algebraic Computation. ACM, 1-6.
Demand driven compositional symbolic execution. S Anand, P Godefroid, N Tillman, International Conference on Tools and Algortihms for Construction and Analysis of Systems (TACAS). S. Anand, P. Godefroid, and N. Tillman. 2008. Demand driven compositional symbolic execution. In International Conference on Tools and Algortihms for Construction and Analysis of Systems (TACAS).
Type-dependence analysis and program transformation for symbolic execution. Saswat Anand, Alessandro Orso, Mary Jean Harrold, International Conference on Tools and Algorithms for the Construction and Analysis of Systems. SpringerSaswat Anand, Alessandro Orso, and Mary Jean Harrold. 2007. Type-dependence analysis and program transformation for symbolic execution. In International Conference on Tools and Algorithms for the Construction and Analysis of Systems. Springer, 117-133.
Learning polynomials with neural networks. Alexandr Andoni, Rina Panigrahy, Gregory Valiant, Li Zhang, International Conference on Machine Learning. Alexandr Andoni, Rina Panigrahy, Gregory Valiant, and Li Zhang. 2014. Learn- ing polynomials with neural networks. In International Conference on Machine Learning. 1908-1916.
Enhancing Symbolic Execution with Veritesting. T Avgerinos, A Rebert, S K Cha, D Brumley, Proceedings of International Conference on Software Engineering (ICSE). International Conference on Software Engineering (ICSE)T. Avgerinos, A. Rebert, S.K. Cha, and D. Brumley. 2014. Enhancing Symbolic Execution with Veritesting. In Proceedings of International Conference on Software Engineering (ICSE).
Enhancing symbolic execution with veritesting. Thanassis Avgerinos, Alexandre Rebert, Sang Kil Cha, David Brumley, Proceedings of the 36th International Conference on Software Engineering. the 36th International Conference on Software EngineeringACMThanassis Avgerinos, Alexandre Rebert, Sang Kil Cha, and David Brumley. 2014. Enhancing symbolic execution with veritesting. In Proceedings of the 36th Inter- national Conference on Software Engineering. ACM, 1083-1094.
A Survey of Symbolic Execution Techniques. Roberto Baldoni, Emilio Coppa, Daniele Cono D'elia, Camil Demetrescu, Irene Finocchi, ACM Comput. Surv. 5150Roberto Baldoni, Emilio Coppa, Daniele Cono D'Elia, Camil Demetrescu, and Irene Finocchi. 2018. A Survey of Symbolic Execution Techniques. ACM Comput. Surv. 51, 3, Article 50 (2018).
Path invariants. Dirk Beyer, A Thomas, Rupak Henzinger, Andrey Majumdar, Rybalchenko, Acm Sigplan Notices. ACM42Dirk Beyer, Thomas A Henzinger, Rupak Majumdar, and Andrey Rybalchenko. 2007. Path invariants. In Acm Sigplan Notices, Vol. 42. ACM, 300-309.
Neuro-Symbolic Program Corrector for Introductory Programming Assignments. S Bhatia, P Kohli, R Singh, International Conference on Software Engineering (ICSE). S. Bhatia, P. Kohli, and R. Singh. 2018. Neuro-Symbolic Program Corrector for Introductory Programming Assignments. In International Conference on Software Engineering (ICSE).
End to end learning for self-driving cars. Mariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, D Lawrence, Mathew Jackel, Urs Monfort, Jiakai Muller, Zhang, arXiv:1604.07316arXiv preprintMariusz Bojarski, Davide Del Testa, Daniel Dworakowski, Bernhard Firner, Beat Flepp, Prasoon Goyal, Lawrence D Jackel, Mathew Monfort, Urs Muller, Jiakai Zhang, et al. 2016. End to end learning for self-driving cars. arXiv preprint arXiv:1604.07316 (2016).
RWset: Attacking path explosion in constraint-based test generation. P Boonstoppel, C Cadar, D Engler, International Conference on Tools and Algortihms for Construction and Analysis of Systems (TACAS). P. Boonstoppel, C. Cadar, and D. Engler. 2008. RWset: Attacking path explosion in constraint-based test generation. In International Conference on Tools and Algortihms for Construction and Analysis of Systems (TACAS).
SAT-based model checking without unrolling. R Aaron, Bradley, International Workshop on Verification, Model Checking, and Abstract Interpretation. SpringerAaron R Bradley. 2011. SAT-based model checking without unrolling. In Inter- national Workshop on Verification, Model Checking, and Abstract Interpretation. Springer, 70-87.
Breadth-first search. Alan Bundy, Lincoln Wallen, Catalogue of Artificial Intelligence Tools. SpringerAlan Bundy and Lincoln Wallen. 1984. Breadth-first search. In Catalogue of Artificial Intelligence Tools. Springer, 13-13.
KLEE: Unassisted and Automatic Generation of High-Coverage Tests for Complex Systems Programs. Cristian Cadar, Daniel Dunbar, Dawson R Engler, Proceedings of the USENIX Symposium on Operating System Design and Implementation. the USENIX Symposium on Operating System Design and Implementation8Cristian Cadar, Daniel Dunbar, Dawson R Engler, et al. 2008. KLEE: Unassisted and Automatic Generation of High-Coverage Tests for Complex Systems Pro- grams. Proceedings of the USENIX Symposium on Operating System Design and Implementation 8, 209-224.
Cristian Cadar, Vijay Ganesh, Peter M. Pawlowski, David L. Dill, and Dawson R. Engler. 2006. EXE: Automatically Generating Inputs of Death. In Proceedings of the 13th ACM Conference on Computer and Communications Security (CCS '06). ACM, New York, NY, USA, 322-335. https://doi.org/10.1145/1180405.1180445
Symbolic execution for software testing: three decades later. Cristian Cadar, Koushik Sen, Commun. ACM. 56Cristian Cadar and Koushik Sen. 2013. Symbolic execution for software testing: three decades later. Commun. ACM 56, 2 (2013), 82-90.
A NICE way to test OpenFlow applications. Marco Canini, Daniele Venzano, Peter Peresini, Dejan Kostic, Jennifer Rexford, Proceedings of the 9th USENIX Symposium on Networked Systems Design and Implementation (NSDI). the 9th USENIX Symposium on Networked Systems Design and Implementation (NSDI)Marco Canini, Daniele Venzano, Peter Peresini, Dejan Kostic, and Jennifer Rex- ford. 2012. A NICE way to test OpenFlow applications. In Proceedings of the 9th USENIX Symposium on Networked Systems Design and Implementation (NSDI).
Peng Chen, Hao Chen, arXiv:1803.01307Angora: Efficient Fuzzing by Principled Search. arXiv preprintPeng Chen and Hao Chen. 2018. Angora: Efficient Fuzzing by Principled Search. arXiv preprint arXiv:1803.01307 (2018).
Counterexample-guided polynomial loop invariant generation by lagrange interpolation. Yu-Fang Chen, Chih-Duo Hong, Bow-Yaw Wang, Lijun Zhang, International Conference on Computer Aided Verification. SpringerYu-Fang Chen, Chih-Duo Hong, Bow-Yaw Wang, and Lijun Zhang. 2015. Counterexample-guided polynomial loop invariant generation by lagrange in- terpolation. In International Conference on Computer Aided Verification. Springer, 658-674.
S2E: A platform for in-vivo multi-path analysis of software systems. Vitaly Chipounov, Volodymyr Kuznetsov, George Candea, ACM SIGPLAN Notices. 46Vitaly Chipounov, Volodymyr Kuznetsov, and George Candea. 2011. S2E: A platform for in-vivo multi-path analysis of software systems. ACM SIGPLAN Notices 46, 3 (2011), 265-278.
Using symbolic execution for verifying safety-critical systems. Alberto Coen-Porisini, Giovanni Denaro, Carlo Ghezzi, Mauro Pezzé, In ACM SIGSOFT Software Engineering Notes. 26ACMAlberto Coen-Porisini, Giovanni Denaro, Carlo Ghezzi, and Mauro Pezzé. 2001. Using symbolic execution for verifying safety-critical systems. In ACM SIGSOFT Software Engineering Notes, Vol. 26. ACM, 142-151.
Linear invariant generation using non-linear constraint solving. Sriram Michael A Colón, Sankaranarayanan, B Henny, Sipma, International Conference on Computer Aided Verification. SpringerMichael A Colón, Sriram Sankaranarayanan, and Henny B Sipma. 2003. Lin- ear invariant generation using non-linear constraint solving. In International Conference on Computer Aided Verification. Springer, 420-432.
Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints. Patrick Cousot, Radhia Cousot, Proceedings of the 4th ACM SIGACT-SIGPLAN symposium on Principles of programming languages. the 4th ACM SIGACT-SIGPLAN symposium on Principles of programming languagesACMPatrick Cousot and Radhia Cousot. 1977. Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints. In Proceedings of the 4th ACM SIGACT-SIGPLAN symposium on Principles of programming languages. ACM, 238-252.
The ASTRÉE analyzer. Patrick Cousot, Radhia Cousot, Jérôme Feret, Laurent Mauborgne, Antoine Miné, David Monniaux, Xavier Rival, European Symposium on Programming. SpringerPatrick Cousot, Radhia Cousot, Jérôme Feret, Laurent Mauborgne, Antoine Miné, David Monniaux, and Xavier Rival. 2005. The ASTRÉE analyzer. In European Symposium on Programming. Springer, 21-30.
Automatic discovery of linear restraints among variables of a program. Patrick Cousot, Nicolas Halbwachs, Proceedings of the 5th ACM SIGACT-SIGPLAN symposium on Principles of programming languages. the 5th ACM SIGACT-SIGPLAN symposium on Principles of programming languagesACMPatrick Cousot and Nicolas Halbwachs. 1978. Automatic discovery of linear restraints among variables of a program. In Proceedings of the 5th ACM SIGACT- SIGPLAN symposium on Principles of programming languages. ACM, 84-96.
Discoverer: Automatic Protocol Reverse Engineering from Network Traces. Weidong Cui, Jayanthkumar Kannan, Helen J Wang, USENIX Security Symposium. Weidong Cui, Jayanthkumar Kannan, and Helen J Wang. 2007. Discoverer: Automatic Protocol Reverse Engineering from Network Traces.. In USENIX Security Symposium. 1-14.
On test repair using symbolic execution. Brett Daniel, Tihomir Gvero, Darko Marinov, Proceedings of the 19th international symposium on Software testing and analysis. the 19th international symposium on Software testing and analysisACMBrett Daniel, Tihomir Gvero, and Darko Marinov. 2010. On test repair using sym- bolic execution. In Proceedings of the 19th international symposium on Software testing and analysis. ACM, 207-218.
Formal Program Verification using Symbolic Execution. R B Dannenberg, G W Ernst, IEEE Transactions on Software Engineering. 8R.B. Dannenberg and G.W. Ernst. 1982. Formal Program Verification using Symbolic Execution. IEEE Transactions on Software Engineering 8 (1982). Issue 1.
A machine program for theorem-proving. Martin Davis, George Logemann, Donald Loveland, Commun. ACM. 5Martin Davis, George Logemann, and Donald Loveland. 1962. A machine pro- gram for theorem-proving. Commun. ACM 5, 7 (1962), 394-397.
Leonardo de Moura and Nikolaj Bjørner. 2008. Z3: An Efficient SMT Solver. In Tools and Algorithms for the Construction and Analysis of Systems, C. R. Ramakrishnan and Jakob Rehof (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 337-340.
Adaptive subgradient methods for online learning and stochastic optimization. John Duchi, Elad Hazan, Yoram Singer, Journal of Machine Learning Research. 12John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research 12, Jul (2011), 2121-2159.
The Daikon system for dynamic detection of likely invariants. D Michael, Jeff H Ernst, Philip J Perkins, Stephen Guo, Carlos Mccamant, Pacheco, S Matthew, Chen Tschantz, Xiao, Science of Computer Programming. 69Michael D Ernst, Jeff H Perkins, Philip J Guo, Stephen McCamant, Carlos Pacheco, Matthew S Tschantz, and Chen Xiao. 2007. The Daikon system for dynamic detection of likely invariants. Science of Computer Programming 69, 1-3, 35-45.
On the approximate realization of continuous mappings by neural networks. Ken-Ichi Funahashi, Neural networks. 2Ken-Ichi Funahashi. 1989. On the approximate realization of continuous map- pings by neural networks. Neural networks 2, 3 (1989), 183-192.
HAMPI: A string solver for testing, analysis and vulnerability detection. Vijay Ganesh, Adam Kieżun, Shay Artzi, J Philip, Pieter Guo, Michael Hooimeijer, Ernst, International Conference on Computer Aided Verification. SpringerVijay Ganesh, Adam Kieżun, Shay Artzi, Philip J Guo, Pieter Hooimeijer, and Michael Ernst. 2011. HAMPI: A string solver for testing, analysis and vulnerability detection. In International Conference on Computer Aided Verification. Springer, 1-19.
ICE: A robust framework for learning invariants. Pranav Garg, Christof Löding, P Madhusudan, Daniel Neider, International Conference on Computer Aided Verification. SpringerPranav Garg, Christof Löding, P Madhusudan, and Daniel Neider. 2014. ICE: A ro- bust framework for learning invariants. In International Conference on Computer Aided Verification. Springer, 69-87.
Deep sparse rectifier neural networks. Xavier Glorot, Antoine Bordes, Yoshua Bengio, Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics. the Fourteenth International Conference on Artificial Intelligence and StatisticsXavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Deep sparse rectifier neural networks. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics. 315-323.
Compositional Dynamic Test Generation. Patrice Godefroid, Proceedings of 34th Symposium on Principles of Programming Languages (POPL). 34th Symposium on Principles of Programming Languages (POPL)Patrice Godefroid. 2007. Compositional Dynamic Test Generation. In Proceedings of 34th Symposium on Principles of Programming Languages (POPL).
DART: Directed Automated Random Testing. Patrice Godefroid, Nils Klarlund, Koushik Sen, Proceedings of International Symposium on Programming Language Design and Implementation (PLDI). International Symposium on Programming Language Design and Implementation (PLDI)Patrice Godefroid, Nils Klarlund, and Koushik Sen. 2005. DART: Directed Auto- mated Random Testing. In Proceedings of International Symposium on Program- ming Language Design and Implementation (PLDI).
SAGE: whitebox fuzzing for security testing. Patrice Godefroid, Y Michael, David Levin, Molnar, Commun. ACM. 55Patrice Godefroid, Michael Y Levin, and David Molnar. 2012. SAGE: whitebox fuzzing for security testing. Commun. ACM 55, 3 (2012), 40-44.
Automated whitebox fuzz testing. Patrice Godefroid, Y Michael, David A Levin, Molnar, NDSS. 8Patrice Godefroid, Michael Y Levin, David A Molnar, et al. 2008. Automated whitebox fuzz testing.. In NDSS, Vol. 8. 151-166.
A continuum among logarithmic, linear, and exponential functions, and its potential to improve generalization in neural networks. B Luke, Michael S Godfrey, Gashler, Knowledge Discovery, Knowledge Engineering and Knowledge Management (IC3K). IEEE17th International Joint Conference onLuke B Godfrey and Michael S Gashler. 2015. A continuum among logarithmic, linear, and exponential functions, and its potential to improve generalization in neural networks. In Knowledge Discovery, Knowledge Engineering and Knowledge Management (IC3K), 2015 7th International Joint Conference on, Vol. 1. IEEE, 481-486.
Explaining and harnessing adversarial examples. J Ian, Jonathon Goodfellow, Christian Shlens, Szegedy, International Conference on Learning Representations. Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In International Conference on Learning Representations.
Invgen: An efficient invariant generator. Ashutosh Gupta, Andrey Rybalchenko, International Conference on Computer Aided Verification. SpringerAshutosh Gupta and Andrey Rybalchenko. 2009. Invgen: An efficient invariant generator. In International Conference on Computer Aided Verification. Springer, 634-640.
Approximation capabilities of multilayer feedforward networks. Kurt Hornik, Neural networks. 4Kurt Hornik. 1991. Approximation capabilities of multilayer feedforward net- works. Neural networks 4, 2 (1991), 251-257.
TRACER: A symbolic execution tool for verification. Joxan Jaffar, Vijayaraghavan Murali, Jorge A Navas, Andrew E Santosa, International Conference on Computer Aided Verification. SpringerJoxan Jaffar, Vijayaraghavan Murali, Jorge A Navas, and Andrew E Santosa. 2012. TRACER: A symbolic execution tool for verification. In International Conference on Computer Aided Verification. Springer, 758-766.
SymDroid: Symbolic execution for Dalvik bytecode. Jinseong Jeon, K Kristopher, Jeffrey S Micinski, Foster, Technical ReportJinseong Jeon, Kristopher K Micinski, and Jeffrey S Foster. 2012. SymDroid: Symbolic execution for Dalvik bytecode. Technical Report.
A practical and complete approach to predicate refinement. Ranjit Jhala, Kenneth L Mcmillan, International Conference on Tools and Algorithms for the Construction and Analysis of Systems. SpringerRanjit Jhala and Kenneth L McMillan. 2006. A practical and complete approach to predicate refinement. In International Conference on Tools and Algorithms for the Construction and Analysis of Systems. Springer, 459-473.
Symbolic Execution and Program Testing. J C King, Commun. ACM. 19J.C. King. 1976. Symbolic Execution and Program Testing. Commun. ACM 19 (1976). Issue 7.
Symbolic execution and program testing. C James, King, Commun. ACM. 19James C King. 1976. Symbolic execution and program testing. Commun. ACM 19, 7 (1976), 385-394.
Adam: A method for stochastic optimization. P Diederik, Jimmy Kingma, Ba, International Conference on Learning Representations. Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic opti- mization. In International Conference on Learning Representations.
Learning invariants using decision trees. Siddharth Krishna, Christian Puhrsch, Thomas Wies, arXiv:1501.04725arXiv preprintSiddharth Krishna, Christian Puhrsch, and Thomas Wies. 2015. Learning invari- ants using decision trees. arXiv preprint arXiv:1501.04725 (2015).
Imagenet classification with deep convolutional neural networks. Alex Krizhevsky, Ilya Sutskever, Geoffrey E Hinton, Advances in neural information processing systems. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classifica- tion with deep convolutional neural networks. In Advances in neural information processing systems. 1097-1105.
Efficient state merging in symbolic execution. V Kuznetsov, J Kinder, S Bucur, G Candea, Proceedings of the 33rd ACM SIGPLAN Conference on Programming Language Design and Implementation. the 33rd ACM SIGPLAN Conference on Programming Language Design and ImplementationPLDIV. Kuznetsov, J. Kinder, S. Bucur, and G. Candea. 2012. Efficient state merging in symbolic execution. In Proceedings of the 33rd ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI).
Face recognition: A convolutional neural-network approach. Steve Lawrence, Lee Giles, Ah Chung Tsoi, Andrew D Back, IEEE transactions on neural networks. 8Steve Lawrence, C Lee Giles, Ah Chung Tsoi, and Andrew D Back. 1997. Face recognition: A convolutional neural-network approach. IEEE transactions on neural networks 8, 1 (1997), 98-113.
SymJS: automatic symbolic testing of JavaScript web applications. Guodong Li, Esben Andreasen, Indradeep Ghosh, Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering. the 22nd ACM SIGSOFT International Symposium on Foundations of Software EngineeringACMGuodong Li, Esben Andreasen, and Indradeep Ghosh. 2014. SymJS: automatic symbolic testing of JavaScript web applications. In Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering. ACM, 449-459.
Symbolic execution of complex program driven by machine learning based constraint solving. Xin Li, Yongjuan Liang, Hong Qian, Yi-Qi Hu, Lei Bu, Yang Yu, Xin Chen, Xuandong Li, Proceedings of the 31st IEEE/ACM International Conference on Automated Software Engineering. the 31st IEEE/ACM International Conference on Automated Software EngineeringACMXin Li, Yongjuan Liang, Hong Qian, Yi-Qi Hu, Lei Bu, Yang Yu, Xin Chen, and Xuandong Li. 2016. Symbolic execution of complex program driven by ma- chine learning based constraint solving. In Proceedings of the 31st IEEE/ACM International Conference on Automated Software Engineering. ACM, 554-559.
Jia Hui Liang, Vijay Ganesh, Pascal Poupart, and Krzysztof Czarnecki. 2016. Exponential Recency Weighted Average Branching Heuristic for SAT Solvers. In AAAI. 3434-3440.
Rectifier nonlinearities improve neural network acoustic models. L Andrew, Maas, Y Awni, Andrew Y Hannun, Ng, Proc. icml. icml30Andrew L Maas, Awni Y Hannun, and Andrew Y Ng. 2013. Rectifier nonlinearities improve neural network acoustic models. In Proc. icml, Vol. 30. 3.
Interpolation and SAT-based Model Checking. Ken Mcmillan, International Conference on Computer Aided Verification. Ken McMillan. 2003. Interpolation and SAT-based Model Checking. In Interna- tional Conference on Computer Aided Verification.
LR Medsker and LC Jain. 2001. Recurrent neural networks. Design and Applications 5 (2001).
Recurrent neural network based language model. Tomáš Mikolov, Martin Karafiát, Lukáš Burget, Jan Černockỳ, Sanjeev Khudanpur, Eleventh Annual Conference of the International Speech Communication Association. Tomáš Mikolov, Martin Karafiát, Lukáš Burget, Jan Černockỳ, and Sanjeev Khu- danpur. 2010. Recurrent neural network based language model. In Eleventh Annual Conference of the International Speech Communication Association.
Weakly relational numerical abstract domains. Antoine Miné, Ecole Polytechnique XPh.D. DissertationAntoine Miné. 2004. Weakly relational numerical abstract domains. Ph.D. Disser- tation. Ecole Polytechnique X.
Chaff: Engineering an efficient SAT solver. Matthew W Moskewicz, F Conor, Ying Madigan, Lintao Zhao, Sharad Zhang, Malik, Proceedings of the 38th annual Design Automation Conference. the 38th annual Design Automation ConferenceACMMatthew W Moskewicz, Conor F Madigan, Ying Zhao, Lintao Zhang, and Sharad Malik. 2001. Chaff: Engineering an efficient SAT solver. In Proceedings of the 38th annual Design Automation Conference. ACM, 530-535.
Nina Narodytska, Leonid Shiva Prasad Kasiviswanathan, Mooly Ryzhyk, Toby Sagiv, Walsh, arXiv:1709.06662Verifying properties of binarized deep neural networks. arXiv preprintNina Narodytska, Shiva Prasad Kasiviswanathan, Leonid Ryzhyk, Mooly Sagiv, and Toby Walsh. 2017. Verifying properties of binarized deep neural networks. arXiv preprint arXiv:1709.06662 (2017).
Cubic regularization of Newton method and its global performance. Yurii Nesterov, T Boris, Polyak, Mathematical Programming. 1081Yurii Nesterov and Boris T Polyak. 2006. Cubic regularization of Newton method and its global performance. Mathematical Programming 108, 1 (2006), 177-205.
Semfix: Program repair via semantic analysis. Hoang Duong Thien Nguyen, Dawei Qi, Proceedings of the 2013 International Conference on Software Engineering. the 2013 International Conference on Software EngineeringIEEE PressAbhik Roychoudhury, and Satish ChandraHoang Duong Thien Nguyen, Dawei Qi, Abhik Roychoudhury, and Satish Chan- dra. 2013. Semfix: Program repair via semantic analysis. In Proceedings of the 2013 International Conference on Software Engineering. IEEE Press, 772-781.
Counterexample-guided approach to finding numerical invariants. Thanhvu Nguyen, Timos Antonopoulos, Andrew Ruef, Michael Hicks, Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering. the 2017 11th Joint Meeting on Foundations of Software EngineeringACMThanhVu Nguyen, Timos Antonopoulos, Andrew Ruef, and Michael Hicks. 2017. Counterexample-guided approach to finding numerical invariants. In Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering. ACM, 605- 615.
DIG: a dynamic invariant generator for polynomial and array invariants. Thanhvu Nguyen, Deepak Kapur, Westley Weimer, Stephanie Forrest, ACM Transactions on Software Engineering and Methodology (TOSEM). 2330Thanhvu Nguyen, Deepak Kapur, Westley Weimer, and Stephanie Forrest. 2014. DIG: a dynamic invariant generator for polynomial and array invariants. ACM Transactions on Software Engineering and Methodology (TOSEM) 23, 4, 30.
. Saswat Padhi, Todd Millstein, arXiv:1707.02029Data-Driven Loop Invariant Inference with Automatic Feature Synthesis. arXiv preprintSaswat Padhi and Todd Millstein. 2017. Data-Driven Loop Invariant Inference with Automatic Feature Synthesis. arXiv preprint arXiv:1707.02029 (2017).
The limitations of deep learning in adversarial settings. Nicolas Papernot, Patrick Mcdaniel, Somesh Jha, Matt Fredrikson, Ananthram Berkay Celik, Swami, Security and Privacy (EuroS&P). Nicolas Papernot, Patrick McDaniel, Somesh Jha, Matt Fredrikson, Z Berkay Celik, and Ananthram Swami. 2016. The limitations of deep learning in adversarial settings. In Security and Privacy (EuroS&P), 2016 IEEE European Symposium on. IEEE, 372-387.
Automatically patching errors in deployed software. H Jeff, Sunghun Perkins, Sam Kim, Saman Larsen, Jonathan Amarasinghe, Michael Bachrach, Carlos Carbin, Frank Pacheco, Stelios Sherwood, Greg Sidiroglou, Sullivan, Proceedings of the ACM SIGOPS 22nd symposium on Operating systems principles. the ACM SIGOPS 22nd symposium on Operating systems principlesACMJeff H Perkins, Sunghun Kim, Sam Larsen, Saman Amarasinghe, Jonathan Bachrach, Michael Carbin, Carlos Pacheco, Frank Sherwood, Stelios Sidiroglou, Greg Sullivan, et al. 2009. Automatically patching errors in deployed software. In Proceedings of the ACM SIGOPS 22nd symposium on Operating systems principles. ACM, 87-102.
Path Exploration using Symbolic Output. D Qi, H D Nguyen, A Roychoudhury, ACM Transactions on Software Engineering and Methodology (TOSEM). 22Issue 4.D. Qi, H.D.T Nguyen, and A. Roychoudhury. 2013. Path Exploration using Symbolic Output. ACM Transactions on Software Engineering and Methodology (TOSEM) 22 (2013). Issue 4.
On the momentum term in gradient descent learning algorithms. Ning Qian, Neural networks. 12Ning Qian. 1999. On the momentum term in gradient descent learning algorithms. Neural networks 12, 1 (1999), 145-151.
Automatic generation of polynomial invariants of bounded degree using abstract interpretation. Enric Rodríguez, -Carbonell , Deepak Kapur, Science of Computer Programming. 64Enric Rodríguez-Carbonell and Deepak Kapur. 2007. Automatic generation of polynomial invariants of bounded degree using abstract interpretation. Science of Computer Programming 64, 1 (2007), 54-75.
Generating all polynomial invariants in simple loops. Enric Rodríguez, -Carbonell , Deepak Kapur, Journal of Symbolic Computation. 42Enric Rodríguez-Carbonell and Deepak Kapur. 2007. Generating all polynomial invariants in simple loops. Journal of Symbolic Computation 42, 4 (2007), 443-476.
An overview of gradient descent optimization algorithms. Sebastian Ruder, abs/1609.04747CoRRSebastian Ruder. 2016. An overview of gradient descent optimization algorithms. CoRR, abs/1609.04747 (2016).
Learning internal representations by error propagation. Geoffrey E David E Rumelhart, Ronald J Hinton, Williams, California Univ San Diego La Jolla Inst for Cognitive ScienceTechnical ReportDavid E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. 1985. Learning internal representations by error propagation. Technical Report. California Univ San Diego La Jolla Inst for Cognitive Science.
Dynamic inference of likely data preconditions over predicates by tree learning. Sriram Sankaranarayanan, Swarat Chaudhuri, Franjo Ivančić, Aarti Gupta, Proceedings of the 2008 international symposium on Software testing and analysis. the 2008 international symposium on Software testing and analysisACMSriram Sankaranarayanan, Swarat Chaudhuri, Franjo Ivančić, and Aarti Gupta. 2008. Dynamic inference of likely data preconditions over predicates by tree learning. In Proceedings of the 2008 international symposium on Software testing and analysis. ACM, 295-306.
A symbolic execution framework for javascript. Prateek Saxena, Devdatta Akhawe, Steve Hanna, Feng Mao, Stephen Mccamant, Dawn Song, Security and Privacy (SP). Prateek Saxena, Devdatta Akhawe, Steve Hanna, Feng Mao, Stephen McCamant, and Dawn Song. 2010. A symbolic execution framework for javascript. In Security and Privacy (SP), 2010 IEEE Symposium on. IEEE, 513-528.
A Symbolic Execution Framework for JavaScript. Prateek Saxena, Devdatta Akhawe, Steve Hanna, Feng Mao, Stephen Mccamant, Dawn Song, 10.1109/SP.2010.38Proceedings of the 2010 IEEE Symposium on Security and Privacy (SP '10). the 2010 IEEE Symposium on Security and Privacy (SP '10)Washington, DC, USAIEEE Computer SocietyPrateek Saxena, Devdatta Akhawe, Steve Hanna, Feng Mao, Stephen McCamant, and Dawn Song. 2010. A Symbolic Execution Framework for JavaScript. In Proceedings of the 2010 IEEE Symposium on Security and Privacy (SP '10). IEEE Computer Society, Washington, DC, USA, 513-528. https://doi.org/10.1109/SP. 2010.38
Loop-extended symbolic execution on binary programs. Prateek Saxena, Pongsin Poosankam, Stephen Mccamant, Dawn Song, Proceedings of the eighteenth international symposium on Software testing and analysis. the eighteenth international symposium on Software testing and analysisACMPrateek Saxena, Pongsin Poosankam, Stephen McCamant, and Dawn Song. 2009. Loop-extended symbolic execution on binary programs. In Proceedings of the eighteenth international symposium on Software testing and analysis. ACM, 225- 236.
multiSE: Multi-path Symbolic Execution. K Sen, G Necula, L Gong, W Choi, International Symposium on Foundations of Software Engineering. K. Sen, G. Necula, L. Gong, and W. Choi. 2015. multiSE: Multi-path Symbolic Execution. In International Symposium on Foundations of Software Engineering.
CIVL: the concurrency intermediate verification language. F Stephen, Manchun Siegel, Ziqing Zheng, Timothy K Luo, Andre V Zirkel, John G Marianiello, Edenhofner, B Matthew, Michael S Dwyer, Rogers, Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. the International Conference for High Performance Computing, Networking, Storage and AnalysisACM61Stephen F Siegel, Manchun Zheng, Ziqing Luo, Timothy K Zirkel, Andre V Marianiello, John G Edenhofner, Matthew B Dwyer, and Michael S Rogers. 2015. CIVL: the concurrency intermediate verification language. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis. ACM, 61.
GRASP-a new search algorithm for satisfiability. Marques João, Silva, A Karem, Sakallah, Proceedings of the 1996 IEEE/ACM international conference on Computer-aided design. the 1996 IEEE/ACM international conference on Computer-aided designIEEE Computer SocietyJoão P Marques Silva and Karem A Sakallah. 1997. GRASP-a new search algorithm for satisfiability. In Proceedings of the 1996 IEEE/ACM international conference on Computer-aided design. IEEE Computer Society, 220-227.
Newton's method with a model trust region modification. C Danny, Sorensen, SIAM J. Numer. Anal. 19Danny C Sorensen. 1982. Newton's method with a model trust region modifica- tion. SIAM J. Numer. Anal. 19, 2 (1982), 409-426.
Dropout: A simple way to prevent neural networks from overfitting. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, Ruslan Salakhutdinov, The Journal of Machine Learning Research. 15Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research 15, 1 (2014), 1929-1958.
On the solution of ill-posed problems and the method of regularization. Andrei Nikolaevich, Tikhonov , Doklady Akademii Nauk. 151Andrei Nikolaevich Tikhonov. 1963. On the solution of ill-posed problems and the method of regularization. In Doklady Akademii Nauk, Vol. 151. Russian Academy of Sciences, 501-504.
Fitness-guided path exploration in dynamic symbolic execution. Tao Xie, Nikolai Tillmann, Jonathan De Halleux, Wolfram Schulte, Dependable Systems & Networks, 2009. DSN'09. IEEE/IFIP International Conference on. IEEE. Tao Xie, Nikolai Tillmann, Jonathan de Halleux, and Wolfram Schulte. 2009. Fitness-guided path exploration in dynamic symbolic execution. In Dependable Systems & Networks, 2009. DSN'09. IEEE/IFIP International Conference on. IEEE, 359-368.
Proteus: computing disjunctive loop summary via path dependency analysis. Xiaofei Xie, Bihuan Chen, Yang Liu, Wei Le, Xiaohong Li, Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering. the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software EngineeringACMXiaofei Xie, Bihuan Chen, Yang Liu, Wei Le, and Xiaohong Li. 2016. Proteus: computing disjunctive loop summary via path dependency analysis. In Proceed- ings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering. ACM, 61-72.
On early stopping in gradient descent learning. Yuan Yao, Lorenzo Rosasco, Andrea Caponnetto, Constructive Approximation. 26Yuan Yao, Lorenzo Rosasco, and Andrea Caponnetto. 2007. On early stopping in gradient descent learning. Constructive Approximation 26, 2 (2007), 289-315.
Derivative-Free Optimization via Classification. Yang Yu, Hong Qian, Yi-Qi Hu, AAAI. 16Yang Yu, Hong Qian, and Yi-Qi Hu. 2016. Derivative-Free Optimization via Classification.. In AAAI, Vol. 16. 2286-2292.
Z3-str: A z3-based string solver for web application analysis. Yunhui Zheng, Xiangyu Zhang, Vijay Ganesh, Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering. the 2013 9th Joint Meeting on Foundations of Software EngineeringACMYunhui Zheng, Xiangyu Zhang, and Vijay Ganesh. 2013. Z3-str: A z3-based string solver for web application analysis. In Proceedings of the 2013 9th Joint Meeting on Foundations of Software Engineering. ACM, 114-124.
Testing static analysis tools using exploitable buffer overflows from open source code. Misha Zitser, Richard Lippmann, Tim Leek, ACM SIGSOFT Software Engineering Notes. ACM29Misha Zitser, Richard Lippmann, and Tim Leek. 2004. Testing static analysis tools using exploitable buffer overflows from open source code. In ACM SIGSOFT Software Engineering Notes, Vol. 29. ACM, 97-106.
| [
"https://github.com/thomasjball/"
] |
[
"Polymer Crowding and Shape Distributions in Polymer-Nanoparticle Mixtures",
"Polymer Crowding and Shape Distributions in Polymer-Nanoparticle Mixtures"
] | [
"Wei Kang Lim \nDepartment of Physics\nNorth Dakota State University\n58108-6050FargoNDUSA\n",
"Alan R Denton \nDepartment of Physics\nNorth Dakota State University\n58108-6050FargoNDUSA\n"
] | [
"Department of Physics\nNorth Dakota State University\n58108-6050FargoNDUSA",
"Department of Physics\nNorth Dakota State University\n58108-6050FargoNDUSA"
] | [] | Macromolecular crowding can influence polymer shapes, which is important for understanding the thermodynamic stability of polymer solutions and the structure and function of biopolymers (proteins, RNA, DNA) under confinement. We explore the influence of nanoparticle crowding on polymer shapes via Monte Carlo simulations and free-volume theory of a coarse-grained model of polymer-nanoparticle mixtures. Exploiting the geometry of random walks, we model polymer coils as effective penetrable ellipsoids, whose shapes fluctuate according to the probability distributions of the eigenvalues of the gyration tensor. Accounting for the entropic cost of a nanoparticle penetrating a larger polymer coil, we compute the crowding-induced shift in the shape distributions, radius of gyration, and asphericity of ideal polymers in a theta solvent. With increased nanoparticle crowding, we find that polymers become more compact (smaller, more spherical), in agreement with predictions of free-volume theory. Our approach can be easily extended to nonideal polymers in good solvents and used to model conformations of biopolymers in crowded environments. | 10.1063/1.4895612 | [
"https://arxiv.org/pdf/1410.6559v1.pdf"
] | 19,124,666 | 1410.6559 | 53553f33f5b46a361b6bd44a8ddf108f708678b8 |
Polymer Crowding and Shape Distributions in Polymer-Nanoparticle Mixtures
24 Oct 2014
Wei Kang Lim and Alan R. Denton
Department of Physics, North Dakota State University, Fargo, ND 58108-6050, USA

arXiv:1410.6559; DOI: 10.1063/1.4895612
Macromolecular crowding can influence polymer shapes, which is important for understanding the thermodynamic stability of polymer solutions and the structure and function of biopolymers (proteins, RNA, DNA) under confinement. We explore the influence of nanoparticle crowding on polymer shapes via Monte Carlo simulations and free-volume theory of a coarse-grained model of polymer-nanoparticle mixtures. Exploiting the geometry of random walks, we model polymer coils as effective penetrable ellipsoids, whose shapes fluctuate according to the probability distributions of the eigenvalues of the gyration tensor. Accounting for the entropic cost of a nanoparticle penetrating a larger polymer coil, we compute the crowding-induced shift in the shape distributions, radius of gyration, and asphericity of ideal polymers in a theta solvent. With increased nanoparticle crowding, we find that polymers become more compact (smaller, more spherical), in agreement with predictions of free-volume theory. Our approach can be easily extended to nonideal polymers in good solvents and used to model conformations of biopolymers in crowded environments.
I. INTRODUCTION
Polymers are commonly confined within biological systems and other soft materials [1]. Confinement can result from geometric boundaries, as in thin films and porous media, or from crowding by other species, as in nanocomposite materials and cellular environments. Within the nucleoplasm and cytoplasm of eukaryotic cells, for example, an assortment of macromolecules (proteins, RNA, DNA, etc.) share a tightly restricted space, occupying from 20% to 40% of the total volume [2,3]. In this crowded milieu, smaller molecules exclude volume to larger, softer biopolymers, constraining conformations and influencing folding pathways. Macromolecular crowding, because of its profound influence on the structure, and hence function, of biopolymers, has been intensely studied over the past three decades [4][5][6][7][8][9][10][11][12].
It is well established that crowding can significantly modify polymer conformations. The asymmetric shapes of folded and denatured states of biopolymers, in particular, are known to respond sensitively to the presence of crowding agents [13][14][15][16][17][18][19]. The shape distribution of a protein or RNA, for example, can vary with crowder concentration, which in turn, can affect the biopolymer's function. Polymer shapes are also important in determining the nature of depletion-induced effective interactions between colloids and nanoparticles, thereby influencing thermodynamic stability of colloid-polymer mixtures against demixing. Direct measurements [20] show, for example, that rodlike and spherical depletants induce significantly different interactions between colloids. Confinement and crowding effects are thus of practical concern for their impact on the properties of polymer-nanoparticle composite materials [12,[21][22][23][24][25][26][27] and for their role in diseases associated with protein aggregation [28].
Fundamental interest in polymer shapes dates to the dawn of polymer science. Already 80 years ago, Kuhn [29] recognized that macromolecules in solution are fluctuating objects, whose shapes are far from spherical, and that a linear polymer chain, when viewed in its principal-axis frame of reference, resembles a significantly elongated, flattened (bean-shaped) ellipsoid. The close analogy between polymers and random walks has inspired many mathematical and statistical mechanical studies to analyze sizes and shapes of random walks [30][31][32][33][34][35][36][37][38][39][40][41][42][43]. Such studies validate Kuhn's insight and reveal broad distributions of radius of gyration and shape, as characterized by the eigenvalues of the gyration tensor.
In the case of colloidal particles larger than polymer radii of gyration (colloid limit), polymer depletion and induced effective attraction between colloids are relatively well understood phenomena [44][45][46][47][48]. The opposite case, in which smaller colloids (nanoparticles) can easily penetrate larger polymers (protein limit), has been studied more recently by theory [49][50][51], simulation [52][53][54][55], and experiment [56][57][58]. Previous studies, while analyzing depletion-induced interactions and demixing phase behavior, have not directly addressed the response of polymer shape to crowding. The purpose of this paper is to explore the influence of nanoparticle crowding on the shapes of polymers in polymer-nanoparticle mixtures.
In the next section, we define our model of a polymernanoparticle mixture. In Sec. III, we describe our simulation method and outline the free-volume theory, relegating details to an appendix. In Sec. IV, we present results from our simulations for the shape distributions of crowded polymers and compare with theoretical predictions. Finally, in Sec. V, we summarize and suggest possible extensions of our approach for future work.
II. MODELS
A. Polymer-Nanoparticle Mixtures
We model a mixture of nanoparticles and nonadsorbing polymers using a generalization of the Asakura-Oosawa-Vrij (AOV) model of colloid-polymer mixtures [44,45]. The original AOV model represents the particles as hard (impenetrable) spheres, interacting via a hard-sphere pair potential,
$$v_{nn}(r) = \begin{cases} \infty, & r < 2R_n \\ 0, & r \geq 2R_n, \end{cases} \qquad (1)$$
and the polymers as effective spheres of fixed size (radius of gyration) that are mutually ideal (noninteracting), but impenetrable to the particles. While the neglect of polymer-polymer interactions is justified for polymers in a theta solvent [59], the effective-sphere approximation ignores aspherical conformations and shape fluctuations of polymer coils. Moreover, the assumption of hard polymer-particle interactions is physically reasonable only for particles much larger than the polymers. In order to study the influence of nanoparticle crowding on polymer shapes, we generalize the AOV model by allowing nanoparticles to penetrate polymers and by representing the polymers as ellipsoids that fluctuate in size and shape. Following Schmidt and Fuchs [60], we attribute to each overlapping polymer-nanoparticle pair an average free energy cost ε, which accounts for the loss in conformational entropy of the coil. For a hard sphere penetrating an ideal polymer coil, in a theta solvent at temperature T, polymer field theory predicts ε = 3k_BT/q, where q = R_p/R_n is the ratio of the polymer radius of gyration R_p to the nanoparticle radius R_n [61,62]. An obvious refinement of this model would allow the overlap free energy to vary with the nanoparticle's position relative to the polymer center. Such effective interaction energy profiles have been computed from Monte Carlo simulations of polymers on a lattice [63]. Alternatively, the overlap free energy profile could be derived from an approximation for the monomer density in the ellipsoidal polymer model [43] (see below). In the current study, however, for conceptual simplicity and computational efficiency, we neglect this level of spatial resolution. Furthermore, since the nanoparticles in our model are chemically inert and act only to limit the free volume available to the polymers, we assume that the theta temperature of the solution is independent of nanoparticle concentration.
B. Penetrable Polymer Model
The size and shape of a polymer coil can be characterized by the gyration tensor, defined by
$$\mathbf{T} = \frac{1}{N}\sum_{i=1}^{N} \mathbf{r}_i \mathbf{r}_i, \qquad (2)$$
where r_i denotes the position of the i-th of N segments, relative to the center of mass. Any particular conformation has a radius of gyration defined by
$$R_p = \left(\frac{1}{N}\sum_{i=1}^{N} r_i^2\right)^{1/2} = \sqrt{\Lambda_1 + \Lambda_2 + \Lambda_3}, \qquad (3)$$
where Λ_1, Λ_2, Λ_3 are the eigenvalues of T. For reference, the gyration tensor is related to the moment of inertia tensor I, familiar from classical mechanics of rigid bodies, via T = R_p^2 1 − I, where 1 is the unit tensor. The root-mean-square (rms) radius of gyration, which is experimentally measurable, is given by
$$R_g = \sqrt{\langle R_p^2\rangle} = \sqrt{\langle\Lambda_1\rangle + \langle\Lambda_2\rangle + \langle\Lambda_3\rangle}, \qquad (4)$$
where the angular brackets represent an ensemble average over conformations. Now, if the average in Eq. (4) is defined relative to a fixed (laboratory) frame of reference, then the average tensor is symmetric, has equal eigenvalues, and describes a sphere. If instead the average is performed in a frame of reference that rotates with the polymer's principal axes, the coordinate axes being labelled to preserve the order of the eigenvalues by magnitude (Λ_1 > Λ_2 > Λ_3), then the average tensor is asymmetric and describes an anisotropic object [38,39]. In other words, viewed from the laboratory frame, the average shape of a random walk is spherical, but viewed from the principal-axis frame, the average shape is aspherical [29]. In fact, in the principal-axis frame, the average shape is a significantly elongated (prolate), flattened ellipsoid with principal radii along the three independent axes in the approximate ratio 3.4 : 1.6 : 1 [29,35,36]. Each eigenvalue of the gyration tensor is proportional to the square of the respective principal radius of the general ellipsoid that best fits the shape of the polymer, an arbitrary point (x, y, z) on the surface of the ellipsoid satisfying
$$\frac{x^2}{\Lambda_1} + \frac{y^2}{\Lambda_2} + \frac{z^2}{\Lambda_3} = 3. \qquad (5)$$
This ellipsoid serves as a gross representation of the tertiary structure of a biopolymer.
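To make Eqs. (2)-(5) concrete, the following short Python sketch (our illustration, not the authors' Java/Open Source Physics code; NumPy is assumed) generates one ideal freely-jointed chain, builds its gyration tensor, and extracts the ordered eigenvalues, the radius of gyration, and the principal radii of the best-fit ellipsoid:

import numpy as np

def chain_gyration(N=1000, l=1.0, seed=0):
    """One ideal freely-jointed chain: N steps of fixed length l in random directions."""
    rng = np.random.default_rng(seed)
    steps = rng.normal(size=(N, 3))
    steps *= l / np.linalg.norm(steps, axis=1)[:, None]
    r = np.cumsum(steps, axis=0)                 # segment positions
    r -= r.mean(axis=0)                          # measure relative to the center of mass
    T = r.T @ r / N                              # gyration tensor, Eq. (2)
    lam = np.sort(np.linalg.eigvalsh(T))[::-1]   # Lambda_1 > Lambda_2 > Lambda_3
    R_p = np.sqrt(lam.sum())                     # radius of gyration, Eq. (3)
    semi_axes = np.sqrt(3.0 * lam)               # principal radii of the ellipsoid, Eq. (5)
    return lam, R_p, semi_axes

lam, R_p, semi_axes = chain_gyration()
print(lam, R_p, semi_axes)

A single conformation fluctuates strongly; only after averaging over many independent chains do the semi-axis ratios approach the 3.4 : 1.6 : 1 quoted above.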
The shape of an ideal, freely-jointed polymer coil of N segments of length l, modeled as a soft Gaussian ellipsoid [42], has a normalized probability distribution that is well approximated by the analytical ansatz of Eurich and Maass [43]:
$$P_r(\lambda_1, \lambda_2, \lambda_3) = \prod_{i=1}^{3} P_{ir}(\lambda_i), \qquad (6)$$
where λ_i ≡ Λ_i/(N l^2) are scaled (dimensionless) eigenvalues and
$$P_{ir}(\lambda_i) = \frac{(a_i d_i)^{n_i - 1}\,\lambda_i^{-n_i}}{2K_i}\,\exp\!\left(-\frac{\lambda_i}{a_i} - \frac{d_i^2 a_i}{\lambda_i}\right), \qquad (7)$$
with fitting parameters K 1 = 0.094551, K 2 = 0.0144146, K 3 = 0.0052767, a 1 = 0.08065, a 2 = 0.01813, a 3 = 0.006031, d 1 = 1.096, d 2 = 1.998, d 3 = 2.684, n 1 = 1/2, n 2 = 5/2, and n 3 = 4. The assumption of independent eigenvalues underlying the factorization ansatz of Eq. (6) is not exact, since an extension of a random walk in one direction affects the probability of an extension in an orthogonal direction. Nevertheless, conformations that significantly violate the ansatz are rare for random walks sufficiently long to model real polymers. It should be noted that the ellipsoidal polymer model has also been extended to block copolymers [64].
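As a quick illustration of Eqs. (6) and (7), the sketch below (ours; SciPy is assumed for the quadrature check) evaluates the single-eigenvalue distributions with the fitting parameters quoted above and verifies numerically that each is normalized:

import numpy as np
from scipy.integrate import quad

# Fitting parameters of the ansatz, Eq. (7), for i = 1, 2, 3
K = (0.094551, 0.0144146, 0.0052767)
a = (0.08065, 0.01813, 0.006031)
d = (1.096, 1.998, 2.684)
n = (0.5, 2.5, 4.0)

def P_ir(lam, i):
    """Reservoir distribution of the i-th scaled eigenvalue (i = 0, 1, 2)."""
    pref = (a[i] * d[i])**(n[i] - 1.0) * lam**(-n[i]) / (2.0 * K[i])
    return pref * np.exp(-lam / a[i] - d[i]**2 * a[i] / lam)

for i in range(3):
    norm, _ = quad(P_ir, 0.0, np.inf, args=(i,))
    print(i + 1, norm)   # each integral should come out close to 1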
In modeling mixtures of polymers and nanoparticles, it is convenient to consider the system to be in osmotic equilibrium with a reservoir of pure polymer, which fixes the polymer chemical potential. A key parameter that defines the system is the ratio, q_r ≡ R_g^r/R_n, of the rms radius of gyration of polymer in the reservoir R_g^r to the nanoparticle radius. Expressed in terms of the scaled eigenvalues, the ratio of the rms radius of gyration in the system [Eq. (4)] to its counterpart in the reservoir [R_g^r = l\sqrt{N/6}] is given by
$$\frac{R_g}{R_g^r} = \sqrt{6\left(\langle\lambda_1\rangle + \langle\lambda_2\rangle + \langle\lambda_3\rangle\right)}. \qquad (8)$$
Similarly, the principal radii are related to the scaled eigenvalues according to
$$R_i = R_g^r\,\sqrt{18\,\lambda_i}, \qquad i = 1, 2, 3. \qquad (9)$$
The broad eigenvalue distributions described by Eq. (7) imply significant fluctuations in size (R g ) and shape (λ i ) of the polymer [see Fig. (3) below]. The deviation of a polymer's average shape from a perfect sphere can be quantified by an asphericity parameter [38,39]
$$A = 1 - 3\,\frac{\langle \lambda_1\lambda_2 + \lambda_1\lambda_3 + \lambda_2\lambda_3 \rangle}{\langle (\lambda_1 + \lambda_2 + \lambda_3)^2 \rangle}. \qquad (10)$$
By this definition, a spherical object with all eigenvalues equal has A = 0, while an elongated object, with one eigenvalue much larger than the other two, has A ≃ 1.
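Because the ansatz of Eq. (6) factorizes, cross moments such as ⟨λ_1λ_2⟩ reduce to products of single-eigenvalue moments, so the reservoir values of Eqs. (8) and (10) follow from one-dimensional quadratures. Continuing our sketch above:

def moment(i, k):
    """k-th moment of the i-th scaled eigenvalue under Eq. (7)."""
    val, _ = quad(lambda x: x**k * P_ir(x, i), 0.0, np.inf)
    return val

m1 = [moment(i, 1) for i in range(3)]
m2 = [moment(i, 2) for i in range(3)]
cross = m1[0]*m1[1] + m1[0]*m1[2] + m1[1]*m1[2]   # independent eigenvalues
mean_sq_sum = sum(m2) + 2.0 * cross               # <(lambda_1 + lambda_2 + lambda_3)^2>
A = 1.0 - 3.0 * cross / mean_sq_sum               # asphericity, Eq. (10)
Rg_ratio = (6.0 * sum(m1))**0.5                   # Eq. (8); ~1 in the reservoir,
print(A, Rg_ratio)                                # up to the accuracy of the fitted ansatz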
In the next section, we describe computational methods for calculating the shape distribution, radius of gyration, and asphericity of polymers crowded by nanoparticles.
III. COMPUTATIONAL METHODS
A. Monte Carlo Simulations
To explore the influence of nanoparticle crowding on polymer conformations, we have developed a Monte Carlo (MC) method for simulating mixtures of hard nanoparticles and ideal polymers, whose uncrowded shape distribution follows Eq. (7). In the canonical ensemble, the temperature, particle numbers (N n nanoparticles, N p polymers), and volume V are fixed. Trial moves include displacements of nanoparticles and, for the polymers, displacements, rotations, and shape changes. In the standard Metropolis algorithm [65][66][67], a trial move from an old to a new configuration, due to displacement of any particle or rotation of a polymer, is accepted with probability
$$P_{\rm config}({\rm old} \to {\rm new}) = \min\left\{\exp(-\beta\Delta U),\, 1\right\}, \qquad (11)$$
where β = 1/(k B T ) and ∆U is the associated change in potential energy.
Overlaps of hard-sphere nanoparticles are easily detected and are, of course, automatically rejected. Polymer-nanoparticle overlaps, on the other hand, are harder to identify, because of the nontrivial calculation required to determine the shortest distance between the surface of a sphere and that of a general ellipsoid [68]. To avoid the computational overhead of this calculation, we here restrict our investigations to cases in which the nanoparticles are much smaller than the rms radius of gyration of the polymers (q r ≫ 1). In this limit, we can accurately approximate the volume excluded by a polymer to a nanoparticle, whose true shape is an ellipsoid coated by a shell of uniform thickness R n , by a larger ellipsoid, whose principal radii are extended by R n . Thus, we approximate the overlap criterion by
$$\left(\frac{x}{\sqrt{3\Lambda_1} + R_n}\right)^2 + \left(\frac{y}{\sqrt{3\Lambda_2} + R_n}\right)^2 + \left(\frac{z}{\sqrt{3\Lambda_3} + R_n}\right)^2 < 1, \qquad (12)$$
where (x, y, z) here represent the coordinates of the vector joining the centers of the sphere and ellipsoid.
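Criterion (12) translates directly into code. In the sketch below (ours), the center-to-center vector is assumed to be already expressed in the polymer's principal-axis frame and wrapped by the periodic boundary conditions:

def overlaps(dr, Lam, R_n):
    """Approximate sphere-ellipsoid overlap test, Eq. (12).
    dr  : center-to-center vector in the principal-axis frame
    Lam : gyration-tensor eigenvalues (Lambda_1, Lambda_2, Lambda_3)
    R_n : nanoparticle radius
    """
    semi = np.sqrt(3.0 * np.asarray(Lam)) + R_n   # principal radii extended by R_n
    return float(np.sum((np.asarray(dr) / semi)**2)) < 1.0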
In the event that a trial move results in a change in the number ∆N pn of polymer-nanoparticle overlaps, then ∆U = ε∆N pn . Thus, any displacement or rotation that reduces, or leaves unchanged, the number of overlaps is automatically accepted, while a move that creates new overlaps is accepted only with a probability equal to the Boltzmann factor for ∆U . For trial rotations, we define the orientation of a polymer by a unit vector u, aligned with the long (λ 1 ) axis of the ellipsoid at polar angle θ and azimuthal angle φ, and generate a new (trial) direction u ′ via
$$\mathbf{u}' = \frac{\mathbf{u} + \tau\mathbf{v}}{|\mathbf{u} + \tau\mathbf{v}|}, \qquad (13)$$
where v is a unit vector with random orientation and τ is a tolerance that determines the magnitude of the trial rotation [66]. To confirm even sampling of orientations, we checked that histograms of cos θ and φ for a free (i.e., uncrowded) polymer were flat. A trial change in shape of an ellipsoidal polymer coil, from an old shape λ old to a new shape λ new = λ old + ∆λ, is accepted with probability
$$P_{\rm shape}(\lambda_{\rm old} \to \lambda_{\rm new}) = \min\left\{\frac{P_r(\lambda_{\rm new})}{P_r(\lambda_{\rm old})}\, e^{-\beta\Delta U},\, 1\right\}, \qquad (14)$$
where λ ≡ (λ 1 , λ 2 , λ 3 ) collectively denotes the eigenvalues and P r (λ) is the reservoir polymer shape distribution [Eqs. (6) and (7)]. Thus, a trial shape change is accepted with a probability equal to the Boltzmann factor for the change in potential energy multiplied by the ratio of the new to the old shape probabilities. Through trial changes in gyration tensor eigenvalues, a polymer explores the landscape of possible shapes in the presence of crowders and evolves toward a new equilibrium shape distribution.
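The two polymer-specific trial moves, Eqs. (13) and (14), can be sketched as follows (ours, not the authors' Java code; count_overlaps stands for a user-supplied function that counts polymer-nanoparticle overlaps via Eq. (12) for a given shape, beta_eps is the reduced overlap energy βε, and P_ir is the ansatz defined above):

def trial_rotation(u, tau, rng):
    """Eq. (13): perturb the long-axis direction u with a random unit vector v."""
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)
    u_new = u + tau * v
    return u_new / np.linalg.norm(u_new)

def P_r(lam):
    """Reservoir shape distribution, Eqs. (6) and (7)."""
    return P_ir(lam[0], 0) * P_ir(lam[1], 1) * P_ir(lam[2], 2)

def trial_shape_move(lam, dlam, beta_eps, count_overlaps, rng):
    """Eq. (14): Metropolis shape change with energy eps per overlap."""
    lam_new = lam + dlam * rng.uniform(-1.0, 1.0, size=3)
    if np.any(lam_new <= 0.0):
        return lam                                # reject unphysical shapes outright
    d_beta_U = beta_eps * (count_overlaps(lam_new) - count_overlaps(lam))
    acc = (P_r(lam_new) / P_r(lam)) * np.exp(-d_beta_U)
    return lam_new if rng.uniform() < acc else lam

Comparing a uniform random number against acc implements the min{acc, 1} acceptance rule, since any acc > 1 is accepted automatically.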
One MC step of a simulation consists of a trial displacement of every nanoparticle, followed by a trial displacement, rotation, and shape change of every polymer. To maximize computational efficiency, we chose tolerances of 0.01 σ n for trial displacements, τ = 0.001 for trial rotations, and for trial shape (eigenvalue) changes, ∆λ 1 = 0.01, ∆λ 2 = 0.003, and ∆λ 3 = 0.001. To facilitate extensions and portability of our simulation methods, we coded our MC algorithm in the Java programming language within the Open Source Physics library [69,70], exploiting the numerical and visualization classes of the library. The simulations thus run on any platform, with a convenient graphical user interface, and so may have both scientific and pedagogical value.
B. Free-Volume Theory of Crowding
For the model polymer-nanoparticle mixtures described in Sec. II, Denton et al. [71] recently developed a free-volume theory, which generalizes the theory of Lekkerkerker et al. [72] from incompressible, spherical polymers to compressible, aspherical polymers. To guide our choice of parameters, check for consistency, and test the theory, we compare our simulation results with theoretical predictions. As outlined in Appendix A, the theory predicts a crowded-polymer shape probability distribution of the form
$$P(\lambda; \phi_n) = P_r(\lambda)\,\frac{\alpha(\lambda; \phi_n)}{\alpha_{\rm eff}(\phi_n)}, \qquad (15)$$
where the free-volume fraction α(λ; φ_n) is the fraction of the total volume accessible to a polymer, whose ellipsoidal shape is characterized by the eigenvalues λ = (λ_1, λ_2, λ_3), amidst nanoparticles of volume fraction φ_n = (4π/3) n_n R_n^3 (number density n_n), and
$$\alpha_{\rm eff}(\phi_n) \equiv \int d\lambda\, P_r(\lambda)\,\alpha(\lambda; \phi_n) \qquad (16)$$
is an effective polymer free-volume fraction, expressed as an average of α(λ; φ_n) over polymer shapes in the reservoir. In practice, we adopt the ansatz for P_r(λ) described in Sec. II B and compute α(λ; φ_n) by implementing the generalized scaled-particle theory of Oversteegen and Roth [73]. From Eq. (15), the probability distribution for a single eigenvalue is obtained by integrating over the other two eigenvalues. For example,
$$P_1(\lambda_1; \phi_n) = \int_0^\infty d\lambda_2 \int_0^\infty d\lambda_3\, P(\lambda; \phi_n). \qquad (17)$$
In calculating the rms radius of gyration [Eq. (8)] and asphericity [Eq. (10)], mean values of functions of eigenvalues f(λ) are defined as averages with respect to P(λ):
$$\langle f \rangle = \int d\lambda\, P(\lambda)\, f(\lambda). \qquad (18)$$
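Equations (15)-(18) amount to a reweighting recipe: sample shapes from the reservoir distribution and weight each sample by its free-volume fraction. A minimal sketch (ours; sample_reservoir_shapes stands for a user-supplied sampler of Eqs. (6)-(7), and alpha for an implementation of the scaled-particle result, Eq. (A8)):

def crowded_average(f, alpha, sample_reservoir_shapes, phi_n, n_samples=100000):
    """Estimate <f> under P(lambda; phi_n) of Eq. (15) by reweighting reservoir samples."""
    lams = sample_reservoir_shapes(n_samples)          # shapes drawn from P_r
    w = np.array([alpha(lam, phi_n) for lam in lams])  # free-volume weights
    alpha_eff = w.mean()                               # Eq. (16)
    fvals = np.array([f(lam) for lam in lams])
    return (w * fvals).mean() / alpha_eff, alpha_eff   # Eqs. (15) and (18)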
In the next section, we present numerical results from MC simulations and free-volume theory that characterize the shapes of ideal polymers in crowded environments.
IV. RESULTS AND DISCUSSION
To investigate how the shapes of ideal polymers respond to crowding, we simulated compressible, penetrable polymers, immersed in a fluid of smaller, hard-sphere nanoparticles (protein limit), modeled as described in Sec. II, and using the MC method outlined in Sec. III A.
Confining the system to a cubic box of fixed size with periodic boundary conditions applied to opposite faces, we initialized the nanoparticles on the sites of a cubic lattice and the polymers at interstitial sites. For illustration, a snapshot of the simulation cell is shown in Fig. 2. Each run consisted of an initial equilibration stage of 5 × 10^4 MC steps, followed by a data collection stage of 10^7 steps. We monitored the total overlap energy and shape distributions and confirmed that the averages of these diagnostics were stable after the equilibration stage. Our results represent averages over 10^4 independent configurations (spaced by intervals of 10^3 steps) from each of five independent runs (total of 5 × 10^4 configurations), with statistical uncertainties computed from standard deviations of the five runs. Most of our simulations were performed for systems of N_n = 216 nanoparticles. To rule out finite-size effects, however, we repeated several runs for larger systems (up to N_n = 1728) and confirmed that the results are independent of system size to within statistical fluctuations.

Figure 3 shows the probability distributions for the eigenvalues of the gyration tensor, representing the shape of the best-fit ellipsoid, for one polymer amidst N_n = 216 nanoparticles, with the reservoir rms radius of gyration equal to five times the nanoparticle radius (q_r = 5). At this large size ratio, our approximation for the polymer-nanoparticle overlap criterion [Eq. (12)] is quite accurate. With increasing nanoparticle volume fraction, from φ_n = 0 (reservoir) to φ_n = 0.3, the shape distributions progressively shift toward smaller eigenvalues, reflecting compression of the polymer along all three principal axes. The greatest fractional shift occurs, however, in the two largest eigenvalues (λ_1 and λ_2), implying that the best-fit ellipsoids tend to become less elongated.

Figure 4 shows the probability distributions for a polymer twice as large (q_r = 10). Doubling the size ratio, while still avoiding significant finite-size effects, required doubling the simulation box length, and thus increasing eight-fold the number of nanoparticles (N_n = 1728). As a rough guide, the simulation box must be large enough that the long axis of the polymer cannot span a significant fraction of the box length. Otherwise, correlations between a polymer and its own images can cause spurious effects. To minimize computational time for the larger system, we reduced the run length to 10^6 MC steps, without a significant change in results. Our runs of 10^7 steps proved, therefore, to be conservatively long. For the same nanoparticle concentration, the shape distributions of the larger polymer are considerably more shifted relative to the reservoir distributions. This trend is easily explained by considering the average free energy cost Ē_pn of polymer-nanoparticle overlaps. Neglecting correlations, the average number of overlaps scales as φ_n q^3, while the penetration energy scales as q^{-1}. Thus, the average overlap energy scales as Ē_pn ∼ φ_n q^2, i.e., the crowding effect increases with the square of the size ratio.
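Restated in one line (our back-of-the-envelope version, using ε = 3k_BT/q from Sec. II A and ⟨N_pn⟩ ∼ n_n R_p^3 ∼ φ_n q^3):
$$\bar{E}_{pn} \approx \varepsilon\,\langle N_{pn}\rangle \sim \frac{3k_BT}{q}\,\phi_n q^3 = 3k_BT\,\phi_n q^2 .$$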
Also shown in Figs. 3 and 4 are the shape distributions predicted by the free-volume theory, described in Sec. III B and the Appendix. In this limit of dilute polymer concentration, theory and simulation are evidently in close agreement at lower nanoparticle concentrations. As the polymer becomes increasingly crowded, however, slight quantitative deviations emerge, particularly for the largest eigenvalue λ 1 of the gyration tensor at q r = 5. These small deviations result from the mean-field theory's neglect of polymer-nanoparticle correlations and from approximations inherent in scaled-particle theory.
From the polymer shape (eigenvalue) distributions, we have computed the rms radius of gyration [Eq. (8)] and asphericity [Eq. (10)] of a single crowded polymer as functions of nanoparticle concentration. As shown in Figs. 5 and 6, an ideal polymer responds to crowding by contracting in size (decreasing R g ) and becoming more spherical in shape (decreasing A). Thus, with increasing nanoparticle volume fraction, the polymer progressively compactifies. Increasing the size ratio from q r = 5 to q r = 10 enhances the crowding effect, for reasons explained above, the polymer becoming even smaller and more spherical for a given nanoparticle concentration. Figures 5 and 6 also show that the free-volume theory again accurately captures the trends in size and shape. Nevertheless, small gaps between theory and simulation are apparent, and these quantitative deviations grow with increasing nanoparticle concentration. The theory's slight, but consistent, underprediction of both R g and A is due mainly to the underprediction of λ 1 . To emphasize the distinction between ellipsoidal and spherical polymer models, Fig. 5 also shows, for comparison, free-volume theory predictions for a spherical, compressible polymer model [74,75]. Clearly ellipsoidal polymers, being free to distort their shape, have significantly larger radii of gyration in crowded environments than polymers that are constrained to remain spherical.
To explore crowding at higher polymer concentrations, we increased the polymer volume fraction to φ_p ≡ (4π/3) n_p (R_g^r)^3 ≃ 0.5, with N_p = 8 polymers now sharing the simulation box with N_n = 216 nanoparticles at size ratio q_r = 5. These conditions actually place the system in a part of the phase diagram that is thermodynamically unstable toward polymer-nanoparticle demixing [54,57]. Bulk phase separation is prevented only by the constraints of the NVT ensemble and the relatively small system size. As illustrated in Fig. 7, the simulated shape distributions do not substantially differ from those for a single polymer (Fig. 3). Interestingly, this behavior differs from that observed in simulations of the spherical, compressible, ideal polymer model [75] in the colloid limit (q_r = 1), where polymer compression reversed with increasing crowding. This reversal, caused by polymer clustering and shielding, a correlation effect neglected by the mean-field free-volume theory, is not observed here in the protein limit.
In closing this section, we briefly discuss the relation of our approach to experiments and other modeling approaches. Recent studies that applied small-angle neutron scattering to polystyrene chains in the presence of various molecular crowding agents [22][23][24], and to deuterated PEG amidst the polysaccharide crowder Ficoll 70 [76,77], reported substantial crowding-induced polymer compression. Although the polymers in these experiments were nonideal and relatively close in size to the crowders, our results for ideal polymers and larger size ratios are at least qualitatively consistent with these observations.
The role of crowding in native-denatured transitions of real polypeptides was recently modeled by Minton [6]. Applying an effective two-state model of proteins [5], Minton calculated excluded-volume interactions between unfolded proteins and macromolecular cosolutes, modeled as hard spheres or rods. Taking as input the radius of gyration probability distributions of four real proteins, computed by Goldenberg [13] via Monte Carlo simulations that include steric interactions between nonadjacent amino acid residues, Minton calculated chemical potentials and radii of gyration of unfolded proteins as a function of cosolute concentration. He concluded that long-range intramolecular steric interactions significantly increase the radii of gyration of unfolded polypeptides in crowded environments. Our approach can potentially complement Minton's by incorporating knowledge of both the size and shape of the uncrowded polymer.
V. CONCLUSIONS
In summary, we have investigated the influence of crowding on polymer shapes in a coarse-grained model of polymer-nanoparticle mixtures. The ideal polymer coils are modeled here as effective ellipsoids that fluctuate in shape according to the probability distributions of the eigenvalues of the gyration tensor of a random walk. The nanoparticles are modeled as hard spheres that can penetrate the polymers with a free energy penalty varying inversely with the polymer-to-nanoparticle size ratio q r . For this model, we performed both Monte Carlo simulations, incorporating novel trial moves that change the polymer shape, and free-volume theory calculations. In the protein limit, for size ratios of q r = 5 and 10, we computed the shape distributions, radius of gyration, and asphericity of ideal polymers induced by crowding of hard-sphere nanoparticles. Relative to uncrowded polymers, we observed significant shifts in polymer shape, which grow with increasing nanoparticle concentration and size ratio. Our results demonstrate that ideal polymers become more compact when crowded by smaller, hard nanoparticles, in good agreement with predictions of free-volume theory. The methods and results presented here significantly extend the scope of previous studies of colloid-polymer mixtures in which the polymers were modeled as compressible spheres [74,75].
For future work, we envision several intriguing directions in which our approach may be extended. While the present paper focuses on the influence of nanoparticles on polymers, one could, conversely, study the impact of polymers on effective interactions between nanoparticles. In particular, by simulating a pair of nanoparticles in a bath of shape-fluctuating polymers, the depletion-induced potential of mean force between nanoparticles could be computed and compared with simulations of more microscopic models [53], as well as with predictions of polymer field theory [78] and density-functional theory [79,80], in the protein limit.
Our model can be refined by replacing the step-function polymer-nanoparticle overlap energy profile with a more realistic, continuous profile based on the monomer density profile [43] or on molecular simulations [63]. Furthermore, by replacing the shape distribution of an ideal (non-self-avoiding) random walk with that of a nonideal (self-avoiding) walk [41,81,82], the model can be extended from ideal polymers in theta solvents to real polymers in good solvents. Such extensions can include biopolymers in aqueous solutions, such as unfolded proteins, whose persistence lengths can be sensitive to excluded-volume interactions [6], and whose uncrowded size distributions can be independently computed [13]. For a single biopolymer in a crowded environment, our computational methods can be directly applied, given as input the requisite shape distribution [18,19,83]. Simulating solutions of multiple self-avoiding polymers would require incorporating polymer-polymer interactions [68,84]. It is important to note, however, that our Monte Carlo approach, while efficiently sampling polymer conformations, does not accurately represent time scales for distinct molecular motions: diffusion, rotation, and shape fluctuations. Therefore, our methods, while finding equilibrium shapes of crowded polymers, cannot describe dynamical processes, such as folding and unfolding.
Beyond adding realism to the polymer model, our approach can also be extended to mixtures of polymers with nonspherical [83] or charged [85,86] crowders, or to other crowded environments, such as confinement within a vesicle [87], or two-dimensional confinement, e.g., of DNA adsorbed onto lipid membranes [88,89]. Finally, for all of these systems, it would be interesting to explore the influence of polymer shape degrees of freedom on bulk thermodynamic properties, including the demixing transition between polymer-rich and polymer-poor phases, by implementing our Monte Carlo methods in either the Gibbs ensemble [75] or the grand canonical ensemble [90,91].

ACKNOWLEDGMENTS

We thank Sylvio May, Emmanuel Mbamala, Ben Lu, Matthias Schmidt, and James Polson for discussions. This work was supported by the National Science Foundation (Grant No. DMR-1106331) and by the Donors of the American Chemical Society Petroleum Research Fund (Grant No. PRF 44365-AC7).

Appendix A: Free-Volume Theory

Here, we outline in greater detail the theory sketched in Sec. III B. In the semi-grand ensemble, a fixed number N_n of nanoparticles are confined to a volume V, while the polymers can exchange with a reservoir of polymer that maintains constant polymer chemical potential µ_p in the system. At a given temperature, the thermodynamic state is characterized by the nanoparticle number density, n_n = N_n/V, and the polymer number density in the reservoir, n_p^r ∝ exp(βµ_p) (ideal polymer). The polymer number density in the system, n_p = N_p/V, which depends on the nanoparticle density, is determined by chemical equilibrium between the system and reservoir.
The free-volume theory, a generalization of the theory first proposed by Lekkerkerker et al. [72] for the AOV model of colloid-polymer mixtures, can be derived by separating the Helmholtz free energy density, f = f id + f ex , into an ideal-gas contribution f id and an excess contribution f ex due to interparticle interactions. The excess free energy density consists of a hard-sphere nanoparticle contribution f hs (φ n ) and a polymer contribution f p , which depends on polymer-nanoparticle interactions. In a mean-field approximation, the polymer excess free energy density is equated to that of ideal polymers confined to the free volume (not excluded by the nanoparticles).
For shape-fluctuating polymers, the free energy must be averaged over shape degrees of freedom and supplemented by a conformational free energy. Assuming that a polymer of a given shape (i.e., eigenvalues λ) has the same conformational entropy in the system as in the reservoir, namely k_B ln P_r(λ), the polymer excess free energy density is approximated by
$$\beta f_p(\phi_n, \phi_p) = -n_p \int d\lambda\, P(\lambda; \phi_n)\, \ln[P_r(\lambda)\,\alpha(\lambda; \phi_n)], \qquad ({\rm A1})$$
where P(λ; φ_n) and α(λ; φ_n) are the probability distribution and free-volume fraction, respectively, of polymer coils of shape λ amidst nanoparticles of volume fraction φ_n ≡ (4π/3) n_n R_n^3. The ideal-gas free energy density is given exactly by
$$\beta f_{\rm id}(\phi_n, \phi_p) = n_p \int d\lambda\, P(\lambda; \phi_n)\left\{\ln[\phi_p P(\lambda; \phi_n)] - 1\right\} + n_n(\ln\phi_n - 1), \qquad ({\rm A2})$$
where φ_p ≡ (4π/3) n_p (R_g^r)^3 is the effective polymer volume fraction in the system and R_g^r is the rms radius of gyration in the reservoir.
Equating chemical potentials of ideal polymers of a given shape in the system and reservoir now implies
$$n_p(\phi_n)\, P(\lambda; \phi_n) = n_p^r\, P_r(\lambda)\,\alpha(\lambda; \phi_n). \qquad ({\rm A3})$$
Integrating over λ and using the normalization of P(λ; φ_n) yields
$$n_p(\phi_n) = n_p^r\,\alpha_{\rm eff}(\phi_n), \qquad ({\rm A4})$$
where α_eff is an effective polymer free-volume fraction,
$$\alpha_{\rm eff}(\phi_n) \equiv \int d\lambda\, P_r(\lambda)\,\alpha(\lambda; \phi_n), \qquad ({\rm A5})$$
defined as an average of the free-volume fraction over polymer shapes in the reservoir. The corresponding shape distribution of crowded polymers is
$$P(\lambda; \phi_n) = P_r(\lambda)\,\frac{\alpha(\lambda; \phi_n)}{\alpha_{\rm eff}(\phi_n)}. \qquad ({\rm A6})$$
Note that in the dilute nanoparticle limit (φ_n → 0), the free-volume fraction α → 1 and the shape distribution reduces to that of the reservoir: P(λ) → P_r(λ). Collecting the various contributions, the total free energy density may be expressed as
$$\beta f(\phi_n, \phi_p^r) = n_n(\ln\phi_n - 1) + \beta f_{\rm hs}(\phi_n) + n_p^r\,\alpha_{\rm eff}(\phi_n)\,(\ln\phi_p^r - 1), \qquad ({\rm A7})$$
where now φ_p^r ≡ (4π/3) n_p^r (R_g^r)^3 is the effective polymer volume fraction in the reservoir.
For the polymer free-volume fraction, we adopt the accurate geometry-based approximation of Oversteegen and Roth [73], which generalizes scaled-particle theory [92] from spheres to arbitrary shapes by using fundamental-measures density-functional theory [93][94][95] to separate thermodynamic properties of the crowders (nanoparticles) from geometric properties of the depletants (polymers). The result is
$$\alpha(\lambda; \phi_n) = (1 - \phi_n)\,\exp[-\beta(p v_p + \gamma a_p + \kappa c_p)], \qquad ({\rm A8})$$
where p, γ, and κ are the bulk pressure, surface tension at a planar hard wall, and bending rigidity of the nanoparticles, while v_p, a_p, and c_p are the volume, surface area, and integrated mean curvature of a polymer. For a spherical polymer, v_p = (4π/3)R_p^3, a_p = 4πR_p^2, and c_p = R_p. A general ellipsoidal polymer, with principal radii R_1, R_2, R_3, has volume v_p = (4π/3)R_1R_2R_3, while a_p and c_p are numerically evaluated from the principal radii. The thermodynamic properties of hard-sphere nanoparticles are accurately approximated by the Carnahan-Starling expressions [73,96]:
$$\beta f_{\rm hs} = n_n\,\frac{\phi_n(4 - 3\phi_n)}{(1 - \phi_n)^2}, \qquad \beta p = \frac{3\phi_n}{4\pi R_n^3}\,\frac{1 + \phi_n + \phi_n^2 - \phi_n^3}{(1 - \phi_n)^3},$$
$$\beta\gamma = \frac{3}{4\pi R_n^2}\left[\frac{\phi_n(2 - \phi_n)}{(1 - \phi_n)^2} + \ln(1 - \phi_n)\right], \qquad \beta\kappa = \frac{3\phi_n}{R_n(1 - \phi_n)}. \qquad ({\rm A9})$$
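A compact sketch of Eqs. (A8) and (A9) (ours; exact for a spherical polymer, whereas for a general ellipsoid the measures a_p and c_p must first be evaluated numerically from the principal radii):

import numpy as np

def cs_nanoparticle_props(phi_n, R_n):
    """Carnahan-Starling expressions of Eq. (A9); returns (beta*p, beta*gamma, beta*kappa)."""
    beta_p = 3.0*phi_n/(4.0*np.pi*R_n**3) * (1.0 + phi_n + phi_n**2 - phi_n**3)/(1.0 - phi_n)**3
    beta_gamma = 3.0/(4.0*np.pi*R_n**2) * (phi_n*(2.0 - phi_n)/(1.0 - phi_n)**2 + np.log(1.0 - phi_n))
    beta_kappa = 3.0*phi_n/(R_n*(1.0 - phi_n))
    return beta_p, beta_gamma, beta_kappa

def alpha_spt(v_p, a_p, c_p, phi_n, R_n):
    """Free-volume fraction of a depletant with measures (v_p, a_p, c_p), Eq. (A8)."""
    bp, bg, bk = cs_nanoparticle_props(phi_n, R_n)
    return (1.0 - phi_n) * np.exp(-(bp*v_p + bg*a_p + bk*c_p))

# Example: spherical polymer of radius R_p, where the measures are closed-form
R_p, R_n, phi_n = 5.0, 1.0, 0.2
print(alpha_spt(4.0*np.pi*R_p**3/3.0, 4.0*np.pi*R_p**2, R_p, phi_n, R_n))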
FIG. 1. Model of polymer-nanoparticle mixtures. Polymers are penetrable ellipsoids that can fluctuate in size and shape. Nanoparticles are hard spheres of fixed size.

FIG. 2. Snapshot of a simulation of N_n = 216 nanoparticles (blue spheres) and one polymer (red ellipsoid) in a cubic box. The polymer rms radius of gyration in the reservoir equals five times the nanoparticle radius (q_r = 5).

FIG. 3. Probability distributions for the eigenvalues (a) λ_1, (b) λ_2, and (c) λ_3 of the gyration tensor of a polymer coil, modeled as an ideal, freely-jointed chain. Monte Carlo simulation data (symbols) are compared with predictions of free-volume theory (solid curves) for a single ellipsoidal polymer, with rms radius of gyration in the reservoir equal to five times the nanoparticle radius (q_r = 5), amidst N_n = 216 nanoparticles with volume fraction φ_n = 0.1 (triangles), 0.2 (squares), and 0.3 (circles). Also shown are the reservoir distributions (dashed curves), in the absence of nanoparticles (φ_n = 0).

FIG. 4. Same as Fig. 3, but for larger polymer (q_r = 10). Notice the changes in scale.

FIG. 5. Root-mean-square radius of gyration of a polymer vs. nanoparticle volume fraction [Eq. (8)]. Monte Carlo simulation data (symbols) are compared with predictions of free-volume theory for ellipsoidal polymer (solid curves) and spherical polymer (dashed curves). Results are shown for reservoir polymer-to-nanoparticle size ratio q_r = 5 (circles) and q_r = 10 (squares). (Error bars are smaller than symbols.)

FIG. 6. Asphericity of an ellipsoidal polymer vs. nanoparticle volume fraction [Eq. (10)]. Monte Carlo simulation data (symbols) are compared with predictions of free-volume theory (curves). Results are shown for reservoir polymer-to-nanoparticle size ratio q_r = 5 (circles) and q_r = 10 (squares). (Error bars are smaller than symbols.) As crowding increases, the polymer becomes more compact (less aspherical).

FIG. 7. Same as Fig. 3, but for higher polymer concentration (N_p = 8, φ_p ≃ 0.5).
[1] A. P. Minton, J. Biol. Chem. 276, 10577 (2001).
[2] J. R. C. van der Maarel, Introduction to Biopolymer Physics (World Scientific, Singapore, 2008).
[3] R. Phillips, J. Kondev, and J. Theriot, Physical Biology of the Cell (Garland Science, New York, 2009).
[4] A. P. Minton, Biopolymers 20, 2093 (1981).
[5] A. P. Minton, Biophys. J. 78, 101 (2000).
[6] A. P. Minton, Biophys. J. 88, 971 (2005).
[7] R. J. Ellis, Current Opin. Struct. Biol. 11, 114 (2001).
[8] K. Richter, M. Nessling, and P. Lichter, J. Cell Sci. 120, 1673 (2007).
[9] K. Richter, M. Nessling, and P. Lichter, Biochim. Biophys. Acta 1783, 2100 (2008).
[10] A. H. Elcock, Current Opin. Struct. Biol. 20, 1 (2010).
[11] R. Hancock, in Genome Organization and Function in the Cell Nucleus, edited by K. Rippe (Wiley-VCH, Weinheim, 2012), pp. 169-184.
[12] A. R. Denton, in New Models of the Cell Nucleus: Crowding and Entropic Forces and Phase Separation and Fractals, edited by R. Hancock and K. W. Jeon (Academic Press, UK, 2013), pp. 27-72.
[13] D. P. Goldenberg, J. Mol. Biol. 326, 1615 (2003).
[14] R. I. Dima and D. Thirumalai, J. Phys. Chem. B 108, 6564 (2004).
[15] M. Cheung, D. Klimov, and D. Thirumalai, Proc. Natl. Acad. Sci. 102, 4753 (2005).
[16] E. Chen, A. Christiansen, Q. Wang, M. S. Cheung, D. S. Kliger, and P. Wittung-Stafshede, Biochem. 51, 9836 (2012).
[17] A. Linhananta, G. Amadei, and T. Miao, J. Phys.: Conf. Ser. 341, 012009 (2012).
[18] N. A. Denesyuk and D. Thirumalai, J. Am. Chem. Soc. 133, 11858 (2011).
[19] N. A. Denesyuk and D. Thirumalai, Biophys. Rev. 5, 225 (2013).
[20] K. Lin, J. C. Crocker, A. C. Zeri, and A. G. Yodh, Phys. Rev. Lett. 87, 088301 (2001).
[21] A. I. Nakatani, W. Chen, R. G. Schmidt, G. V. Gordon, and C. C. Han, Polymer 42, 3713 (2001).
[22] T. Kramer, R. Schweins, and K. Huber, J. Chem. Phys. 123, 014903 (2005).
[23] T. Kramer, R. Schweins, and K. Huber, Macromol. 38, 151 (2005).
[24] T. Kramer, R. Schweins, and K. Huber, Macromol. 38, 9783 (2005).
[25] A. C. Balazs, T. Emrick, and T. P. Russell, Science 314, 1107 (2006).
[26] M. E. Mackay, A. Tuteja, P. M. Duxbury, C. J. Hawker, B. Van Horn, Z. Guan, G. Chen, and R. S. Krishnan, Science 311, 1740 (2006).
[27] K. Nusser, S. Neueder, G. J. Schneider, M. Meyer, W. Pyckhout-Hintzen, L. Willner, A. Radulescu, and D. Richter, Macromol. 43, 9837 (2010).
[28] A. Stradner, G. Foffi, N. Dorsaz, G. Thurston, and P. Schurtenberger, Phys. Rev. Lett. 99, 198103 (2007).
[29] W. Kuhn, Kolloid-Zeitschrift 68, 2 (1934).
[30] M. Fixman, J. Chem. Phys. 36, 306 (1962).
[31] P. J. Flory and S. Fisk, J. Chem. Phys. 44, 2243 (1966).
[32] P. J. Flory, Statistical Mechanics of Chain Molecules (Wiley, New York, 1969).
[33] H. Yamakawa, Modern Theory of Polymer Solutions (Harper & Row, New York, 1970).
[34] H. Fujita and T. Norisuye, J. Chem. Phys. 52, 1115 (1970).
[35] K. Šolc, J. Chem. Phys. 55, 335 (1971).
[36] K. Šolc, Macromol. 6, 378 (1973).
[37] D. N. Theodorou and U. W. Suter, Macromol. 18, 1206 (1985).
[38] J. Rudnick and G. Gaspari, J. Phys. A: Math. Gen. 19, L191 (1986).
[39] J. Rudnick and G. Gaspari, Science 237, 384 (1987).
[40] M. Bishop and C. J. Saltiel, J. Chem. Phys. 88, 6594 (1988).
[41] S. J. Sciutto, J. Phys. A: Math. Gen. 29, 5455 (1996).
[42] M. Murat and K. Kremer, J. Chem. Phys. 108, 4340 (1998).
[43] F. Eurich and P. Maass, J. Chem. Phys. 115, 7655 (2001).
[44] S. Asakura and F. Oosawa, J. Chem. Phys. 22, 1255 (1954).
[45] A. Vrij, Pure & Appl. Chem. 48, 471 (1976).
[46] P. N. Pusey, "Colloidal suspensions," in Liquids, Freezing and Glass Transition, Les Houches Session 51, Vol. 2, edited by J.-P. Hansen, D. Levesque, and J. Zinn-Justin (North-Holland, Amsterdam, 1991), pp. 763-931.
[47] R. A. L. Jones, Soft Condensed Matter (Oxford, Oxford, 2002).
[48] M. Fuchs and K. S. Schweizer, J. Phys.: Condens. Matter 14, R239 (2002).
[49] R. P. Sear, Phys. Rev. E 56, 4463 (1997).
[50] R. P. Sear, Phys. Rev. Lett. 86, 4696 (2001).
[51] R. P. Sear, Phys. Rev. E 66, 051401 (2002).
[52] P. G. Bolhuis, A. A. Louis, and J.-P. Hansen, Phys. Rev. Lett. 89, 128302 (2002).
[53] P. G. Bolhuis, E. J. Meijer, and A. A. Louis, Phys. Rev. Lett. 90, 068304 (2003).
[54] A. Moncho-Jordá, A. A. Louis, P. G. Bolhuis, and R. Roth, J. Phys.: Condens. Matter 15, S3429 (2003).
[55] M. S. Cheung, Current Opin. Struct. Biol. 23, 1 (2013).
[56] Y. Hennequin, M. Evens, C. M. Quilliet Angulo, and J. S. van Duijneveldt, J. Chem. Phys. 123, 054906 (2005).
[57] Z. Zhang and J. S. van Duijneveldt, Langmuir 22, 63 (2006).
[58] K. J. Mutch, J. S. van Duijneveldt, and J. Eastoe, Soft Matter 3, 155 (2007).
[59] P. G. de Gennes, Scaling Concepts in Polymer Physics (Cornell, Ithaca, 1979).
[60] M. Schmidt and M. Fuchs, J. Chem. Phys. 117, 6308 (2002).
[61] E. Eisenriegler, A. Hanke, and S. Dietrich, Phys. Rev. E 54, 1134 (1996).
[62] A. Hanke, E. Eisenriegler, and S. Dietrich, Phys. Rev. E 59, 6853 (1999).
[63] A. Pelissetto and J.-P. Hansen, Macromol. 39, 9571 (2006).
[64] F. Eurich, A. Karatchentsev, J. Baschnagel, W. Dieterich, and P. Maass, J. Chem. Phys. 127, 134905 (2007).
[65] K. Binder, Monte Carlo and Molecular Dynamics Simulations in Polymer Science (Oxford, New York, 1995).
[66] D. Frenkel and B. Smit, Understanding Molecular Simulation, 2nd ed. (Academic, London, 2001).
[67] K. Binder and D. W. Heermann, Monte Carlo Simulation in Statistical Physics: An Introduction, 5th ed. (Springer, Berlin, 2010).
[68] D. Frenkel and B. M. Mulder, Mol. Phys. 55, 1171 (1985).
[69] H. Gould, J. Tobochnik, and W. Christian, Introduction to Computer Simulation Methods (Addison Wesley, 2006).
[70] W. Christian, Open Source Physics: A User's Guide with Examples (Addison Wesley, 2006).
[71] A. R. Denton, E. Mbamala, and S. May, unpublished.
[72] H. N. W. Lekkerkerker, W. C. K. Poon, P. N. Pusey, A. Stroobants, and P. B. Warren, Europhys. Lett. 20, 559 (1992).
[73] S. M. Oversteegen and R. Roth, J. Chem. Phys. 122, 214502 (2005).
[74] A. R. Denton and M. Schmidt, J. Phys.: Condens. Matter 14, 12051 (2002).
[75] B. Lu and A. R. Denton, J. Phys.: Condens. Matter 23, 285102 (2011).
[76] C. Le Coeur, B. Demé, and S. Longeville, Phys. Rev. E 79, 031910 (2009).
[77] C. Le Coeur, J. Teixeira, P. Busch, and S. Longeville, Phys. Rev. E 81, 061914 (2010).
[78] E. Eisenriegler, A. Bringer, and R. Maassen, J. Chem. Phys. 118, 8093 (2003).
[79] J. Forsman and C. E. Woodward, J. Chem. Phys. 131, 044903 (2009).
[80] H. Wang, C. E. Woodward, and J. Forsman, J. Chem. Phys. 140, 194903 (2014).
[81] D. Lhuilier, J. Phys. 49, 705 (1988).
[82] L. Schäfer, Excluded Volume Effects in Polymer Solutions as Explained by the Renormalization Group (Springer, Berlin, 1999).
[83] A. Kudlay, M. S. Cheung, and D. Thirumalai, J. Phys. Chem. B 116, 8513 (2012).
[84] M. P. Allen, G. T. Evans, D. Frenkel, and B. M. Mulder, "Hard convex body fluids," in Advances in Chemical Physics, Vol. 86, edited by I. Prigogine and S. A. Rice (Wiley, New York, 1993), pp. 1-164.
[85] A. R. Denton and M. Schmidt, J. Chem. Phys. 122, 244911 (2005).
. A Fortini, M Dijkstra, R Tuinier, J. Phys.: Condens. Matter. 177783A. Fortini, M. Dijkstra, and R. Tuinier, J. Phys.: Con- dens. Matter 17, 7783 (2005).
. M Fošnarič, A Iglič, D M Kroll, S May, Soft Matter. 93976M. Fošnarič, A. Iglič, D. M. Kroll, and S. May, Soft Matter 9, 3976 (2013).
. Y Fang, J Yang, J. Phys. Chem. B. 101441Y. Fang and J. Yang, J. Phys. Chem. B 101, 441 (1997).
. B Maier, J O Rädler, Macromol. 337185B. Maier and J. O. Rädler, Macromol. 33, 7185 (2000).
. R L C Vink, J Horbach, J. Chem. Phys. 1213253R. L. C. Vink and J. Horbach, J. Chem. Phys. 121, 3253 (2004).
. R L C Vink, J Horbach, J. Phys.: Condens. Matter. 163807R. L. C. Vink and J. Horbach, J. Phys.: Condens. Mat- ter 16, S3807 (2004).
. J L Lebowitz, E Helfand, E Praestgaard, J. Chem. Phys. 43774J. L. Lebowitz, E. Helfand, and E. Praestgaard, J. Chem. Phys. 43, 774 (1964).
. Y Rosenfeld, Phys. Rev. Lett. 63980Y. Rosenfeld, Phys. Rev. Lett. 63, 980 (1989).
. Y Rosenfeld, M Schmidt, H Löwen, P Tarazona, Phys. Rev. E. 554245Y. Rosenfeld, M. Schmidt, H. Löwen, and P. Tarazona, Phys. Rev. E 55, 4245 (1997).
. M Schmidt, H Löwen, J M Brader, R Evans, Phys. Rev. Lett. 851934M. Schmidt, H. Löwen, J. M. Brader, and R. Evans, Phys. Rev. Lett 85, 1934 (2000).
J.-P Hansen, I R Mcdonald, Theory of Simple Liquids. LondonElsevier3rd ed.J.-P. Hansen and I. R. McDonald, Theory of Simple Liq- uids, 3rd ed. (Elsevier, London, 2006).
| [] |
[
"CFLOWNETS: CONTINUOUS CONTROL WITH GENERATIVE FLOW NETWORKS",
"CFLOWNETS: CONTINUOUS CONTROL WITH GENERATIVE FLOW NETWORKS"
] | [
"Yinchuan Li \nHuawei Noah's Ark Lab\nBeijingChina\n",
"Shuang Luo [email protected]@tju.edu.cn \nZhejiang University\nHuangzhouChina\n",
"Haozhi Wang \nTianjin University\nTianjinChina\n",
"Jianye Hao [email protected] \nHuawei Noah's Ark Lab\nBeijingChina\n\nTianjin University\nTianjinChina\n"
] | [
"Huawei Noah's Ark Lab\nBeijingChina",
"Zhejiang University\nHuangzhouChina",
"Tianjin University\nTianjinChina",
"Huawei Noah's Ark Lab\nBeijingChina",
"Tianjin University\nTianjinChina"
] | [] | Generative flow networks (GFlowNets), as an emerging technique, can be used as an alternative to reinforcement learning for exploratory control tasks. GFlowNet aims to generate distribution proportional to the rewards over terminating states, and to sample different candidates in an active learning fashion. GFlowNets need to form a DAG and compute the flow matching loss by traversing the inflows and outflows of each node in the trajectory. No experiments have yet concluded that GFlowNets can be used to handle continuous tasks. In this paper, we propose generative continuous flow networks (CFlowNets) that can be applied to continuous control tasks. First, we present the theoretical formulation of CFlowNets. Then, a training framework for CFlowNets is proposed, including the action selection process, the flow approximation algorithm, and the continuous flow matching loss function. Afterward, we theoretically prove the error bound of the flow approximation. The error decreases rapidly as the number of flow samples increases. Finally, experimental results on continuous control tasks demonstrate the performance advantages of CFlowNets compared to many reinforcement learning methods, especially regarding exploration ability. | 10.48550/arxiv.2303.02430 | [
"https://export.arxiv.org/pdf/2303.02430v1.pdf"
] | 257,365,137 | 2303.02430 | 43737655c34a6f2a1446d1574e7830560c34bd2f |
CFLOWNETS: CONTINUOUS CONTROL WITH GENERATIVE FLOW NETWORKS
Yinchuan Li
Huawei Noah's Ark Lab
BeijingChina
Shuang Luo [email protected]@tju.edu.cn
Zhejiang University
HuangzhouChina
Haozhi Wang
Tianjin University
TianjinChina
Jianye Hao [email protected]
Huawei Noah's Ark Lab
BeijingChina
Tianjin University
TianjinChina
CFLOWNETS: CONTINUOUS CONTROL WITH GENERATIVE FLOW NETWORKS
Published as a conference paper at ICLR 2023
Generative flow networks (GFlowNets), as an emerging technique, can be used as an alternative to reinforcement learning for exploratory control tasks. GFlowNet aims to generate distribution proportional to the rewards over terminating states, and to sample different candidates in an active learning fashion. GFlowNets need to form a DAG and compute the flow matching loss by traversing the inflows and outflows of each node in the trajectory. No experiments have yet concluded that GFlowNets can be used to handle continuous tasks. In this paper, we propose generative continuous flow networks (CFlowNets) that can be applied to continuous control tasks. First, we present the theoretical formulation of CFlowNets. Then, a training framework for CFlowNets is proposed, including the action selection process, the flow approximation algorithm, and the continuous flow matching loss function. Afterward, we theoretically prove the error bound of the flow approximation. The error decreases rapidly as the number of flow samples increases. Finally, experimental results on continuous control tasks demonstrate the performance advantages of CFlowNets compared to many reinforcement learning methods, especially regarding exploration ability.
INTRODUCTION
As an emerging technology, generative flow networks (GFlowNets) (Bengio et al., 2021a;b) can make up for the shortcomings of reinforcement learning (Kaelbling et al., 1996;Sutton & Barto, 2018) on exploratory tasks. Specifically, based on the Bellman equation (Sutton & Barto, 2018), reinforcement learning is usually trained to maximize the expectation of future rewards; hence the learned policy is more inclined to sample action sequences with higher rewards. In contrast, the training goal of GFlowNets is to define a distribution proportional to the rewards over terminating states, i.e., the parent states of the final states, rather than generating a single high-reward action sequence (Bengio et al., 2021a). This is more like sampling different candidates in an active learning setting (Bengio et al., 2021b), thus better suited for exploration tasks.
GFlowNets construct the state transitions of trajectories into a directed acyclic graph (DAG) structure. Each node in the graph corresponds to a different state, and actions correspond to transitions between different states, that is, edges connecting different nodes in the graph. For discrete tasks, the number of nodes in this graph structure is finite, and each edge corresponds to one discrete action. However, in real environments, the state and action spaces are continuous for many tasks, such as quadrupedal locomotion (Kohl & Stone, 2004), autonomous driving (Kiran et al., 2021; Shalev-Shwartz et al., 2016; Pan et al., 2017), or dexterous in-hand manipulation (Andrychowicz et al., 2020). Moreover, the reward distributions of these environments may be multimodal, requiring more diverse exploration. The needs of these environments closely match the strengths of GFlowNets. Bengio et al. (2021b) propose an idea for adapting GFlowNets to continuous tasks by replacing sums with integrals for continuous variables, and suggest the use of integrable densities and the detailed balance (DB) or trajectory balance (TB) (Malkin et al., 2022) criteria to obtain tractable training objectives, which can avoid some integration operations. However, this idea has not been verified experimentally.
In this paper, we propose generative Continuous Flow Networks, named CFlowNets for short, for continuous control tasks, to generate policies that are proportional to continuous reward functions. Applying GFlowNets to continuous control tasks is exceptionally challenging. In generative flow networks, the transition probability is defined as the ratio of the action flow to the state flow. For discrete state and action spaces, we can form a DAG and compute the state flow by traversing a node's incoming and outgoing flows. In contrast, for continuous tasks it is impossible to traverse all state-action pairs and their corresponding rewards. To address this issue, we use importance sampling to approximate the integrals over inflows and outflows in the flow-matching constraint, where a deep neural network predicts the parent nodes of each state in the sampled trajectory. The main contributions of this paper are summarized as follows:
Main Contributions: 1) We extend the theoretical formulation and flow matching theorem of previous GFlowNets to continuous scenarios and, based on this, present a loss function for training CFlowNets; 2) We propose an efficient way to sample actions with probabilities approximately proportional to the output of the flow network, together with a flow sampling approach that approximates the continuous inflows and outflows, which allows us to construct a continuous flow matching loss; 3) We theoretically analyze the error bound between the sampled flows and the true inflows/outflows, and show that the tail becomes small as the number of flow samples increases; 4) We conduct experiments on continuous control tasks to demonstrate that CFlowNets can outperform current state-of-the-art RL algorithms, especially in terms of exploration capability. To the best of our knowledge, our work is the first to empirically demonstrate the effectiveness of flow networks on continuous control tasks. The code is available at http://gitee.com/mindspore/models/tree/master/research/gflownets/cflownets

PRELIMINARIES

MARKOV DECISION PROCESS

A stochastic, discrete-time, sequential decision task can be described as a Markov decision process (MDP), canonically formulated by the tuple
$$\mathcal{M} = \langle \mathcal{S}, \mathcal{A}, P, R, \gamma \rangle. \tag{1}$$
In this process, $\mathcal{S}$ represents the state space of the environment. At each time step, the agent receives a state $s$ and selects an action $a$ from the action space $\mathcal{A}$. This results in a transition to the next state $s'$ according to the state transition function $P(s'|s,a): \mathcal{S} \times \mathcal{A} \times \mathcal{S} \to [0,1]$. The agent then receives a reward $r$ given by the reward function $R(s,a): \mathcal{S} \times \mathcal{A} \to \mathbb{R}$. A stochastic policy $\pi$ maps each state to a distribution over actions $\pi(\cdot|s)$ and gives the probability $\pi(a|s)$ of choosing action $a$ in state $s$. The agent interacts with the environment by executing the policy $\pi$, obtaining admissible trajectories $\{(s_t, a_t, r_t, s_{t+1})\}_{t=1}^{n}$, where $n$ is the trajectory length. The goal of the agent is to maximize the discounted return $\mathbb{E}_{s_{0:n}, a_{0:n}}\left[\sum_{t=0}^{\infty} \gamma^t r_t \mid s_0 = s, a_0 = a, \pi\right]$, where the expectation is taken over the distribution of trajectories and $\gamma \in [0, 1)$ is the discount factor.
GENERATIVE FLOW NETWORK
GFlowNet sees the MDP as a flow network. Define $s' = T(s, a)$ as the state transition and $F(s)$ as the total flow going through $s$. Define an edge/action flow $F(s, a) = F(s \to s')$ as the flow through an edge $s \to s'$. The training process of vanilla GFlowNets needs to sum the flows of the parents and children of each node (state), which relies on the state and action spaces being discrete. The framework is optimized by the following flow consistency equations:
$$\sum_{s,a:\, T(s,a)=s'} F(s, a) = R(s') + \sum_{a' \in \mathcal{A}(s')} F(s', a'), \tag{2}$$
which means that for any node $s'$, the incoming flow equals the outgoing flow, both being the total flow $F(s')$ of node $s'$.
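To make the discrete flow consistency objective concrete, the following is a minimal sketch of the per-node squared mismatch that Eq. (2) induces, assuming the edge flows are produced by some network; the function and variable names are illustrative, not the exact GFlowNet implementation.

```python
import torch

def flow_consistency_loss(inflow_edge_flows, outflow_edge_flows, reward):
    """Squared mismatch of Eq. (2) at one node s': summed incoming edge flows
    must equal R(s') plus the summed outgoing edge flows."""
    inflow = torch.stack(inflow_edge_flows).sum()    # sum over parents (s, a) with T(s, a) = s'
    outflow = torch.stack(outflow_edge_flows).sum()  # sum over a' in A(s')
    return (inflow - reward - outflow) ** 2
```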
CFLOWNETS: THEORETICAL FORMULATION
Consider a continuous task with tuple $(\mathcal{S}, \mathcal{A})$, where $\mathcal{S}$ denotes the continuous state space and $\mathcal{A}$ denotes the continuous action space. Define a trajectory $\tau = (s_1, \ldots, s_n)$ in this continuous task as a sequence of sampled elements of $\mathcal{S}$ such that every transition $a_t: s_t \to s_{t+1}$ is in $\mathcal{A}$. Further, we define an acyclic trajectory $\tau = (s_1, \ldots, s_n)$ as a trajectory that satisfies the acyclic constraint: $\forall s_m \in \tau, s_k \in \tau$ with $m \neq k$, we have $s_m \neq s_k$. Denote $s_0$ and $s_f$ respectively as the initial state and the final state of the continuous task $(\mathcal{S}, \mathcal{A})$; we define a complete trajectory as any sampled acyclic trajectory from $(\mathcal{S}, \mathcal{A})$ starting in $s_0$ and ending in $s_f$. Correspondingly, a transition $s \to s_f$ into the final state is called a terminating transition, and $F(s \to s_f)$ is a terminating flow.
A trajectory flow $F(\tau): \mathcal{T} \to \mathbb{R}^+$ is defined as any nonnegative function defined on the set $\mathcal{T}$ of complete trajectories. For each trajectory $\tau$, the associated flow $F(\tau)$ contains the number of particles (Bengio et al., 2021b) sharing the same path $\tau$. In addition, the tuple $(\mathcal{S}, \mathcal{A}, F)$ is called a continuous flow network. Let $T(s, a) = s'$ indicate an action $a$ that makes a transition from state $s$ to state $s'$. Then we make the following assumptions.

Assumption 1. Assume that the continuous task $(\mathcal{S}, \mathcal{A})$ is an "acyclic" task, which means that arbitrarily sampled trajectories $\tau$ are acyclic, i.e.,
$$s_i \neq s_j, \quad \forall s_i, s_j \in \tau = (s_0, \ldots, s_n), \ i \neq j.$$
Assumption 2. Assume the flow function F (s, a) is Lipschitz continuous, i.e.,
$$|F(s, a) - F(s, a')| \leq L \|a - a'\|, \quad a, a' \in \mathcal{A}, \tag{3}$$
$$|F(s, a) - F(s', a)| \leq L \|s - s'\|, \quad s, s' \in \mathcal{S}, \tag{4}$$
where $L$ is a constant.

Assumption 3. Assume that for any state pair $(s_t, s_{t+1})$, there is a unique action $a_t$ such that $T(s_t, a_t) = s_{t+1}$, i.e., taking action $a_t$ in $s_t$ is the only way to get to $s_{t+1}$. Hence we can define $s_t := g(s_{t+1}, a_t)$, where $g(\cdot)$ is a transition function. We further assume that actions are translation actions.
The necessity and rationality of Assumptions 1-3 are analyzed in the appendix. Under Assumption 1, we define the parent set $\mathcal{P}(s_t)$ of a state $s_t$ as the set containing all direct parents of $s_t$ that can make a direct transition to $s_t$, i.e., $\mathcal{P}(s_t) = \{s \in \mathcal{S}: T(s, a \in \mathcal{A}) = s_t\}$. Similarly, we define the child set $\mathcal{C}(s_t)$ of a state $s_t$ as the set containing all direct children of $s_t$ reachable by a direct transition from $s_t$, i.e., $\mathcal{C}(s_t) = \{s \in \mathcal{S}: T(s_t, a \in \mathcal{A}) = s\}$. Then we have the following continuous flow definitions, where Assumptions 2-3 make these integrals integrable and meaningful.

Definition 1 (Continuous State Flow). The continuous state flow $F(s): \mathcal{S} \to \mathbb{R}$ is the integral of the complete trajectory flows passing through the state:
$$F(s) = \int_{\tau:\, s \in \tau} F(\tau) \, d\tau. \tag{5}$$
Definition 2 (Continuous Inflows). For any state s t , its inflows are the integral of flows that can reach state s t , i.e.,
$$\int_{s \in \mathcal{P}(s_t)} F(s \to s_t) \, ds = \int_{s:\, T(s,a)=s_t} F(s, a) \, ds = F(s_t) = \int_{a:\, T(s,a)=s_t} F(s, a) \, da, \tag{6}$$
where $a: s \to s_t$ and $s = g(s_t, a)$ since Assumption 3 holds.

Definition 3 (Continuous Outflows). For any state $s_t$, the outflows are the integral of flows passing through state $s_t$ with all possible actions $a \in \mathcal{A}$, i.e.,
$$\int_{s \in \mathcal{C}(s_t)} F(s_t \to s) \, ds = F(s_t) = \int_{a \in \mathcal{A}} F(s_t, a) \, da. \tag{7}$$
Based on the above definitions, we can define the transition probability $P(s \to s'|s)$ of an edge $s \to s'$ as a special case of the conditional probability introduced in Bengio et al. (2021b). In particular, the forward transition probability is given by
$$P_F(s_{t+1}|s_t) := P(s_t \to s_{t+1}|s_t) = \frac{F(s_t \to s_{t+1})}{F(s_t)}. \tag{8}$$
Similarly, the backwards transition probability is given by
$$P_B(s_t|s_{t+1}) := P(s_t \to s_{t+1}|s_{t+1}) = \frac{F(s_t \to s_{t+1})}{F(s_{t+1})}. \tag{9}$$
For any trajectory sampled from a continuous task (S, A), we have
$$\forall \tau = (s_1, \ldots, s_n), \quad P_F(\tau) := \prod_{t=1}^{n-1} P_F(s_{t+1}|s_t), \tag{10}$$
$$\forall \tau = (s_1, \ldots, s_n), \quad P_B(\tau) := \prod_{t=1}^{n-1} P_B(s_t|s_{t+1}), \tag{11}$$
and we further have
$$\forall s \in \mathcal{S} \setminus \{s_f\}, \ \int_{s' \in \mathcal{C}(s)} P_F(s'|s) \, ds' = 1 \quad \text{and} \quad \forall s \in \mathcal{S} \setminus \{s_0\}, \ \int_{s' \in \mathcal{P}(s)} P_B(s'|s) \, ds' = 1. \tag{12}$$

Given any trajectory $\tau = (s_0, \ldots, s_n, s)$ that starts in $s_0$ and ends in $s$, a Markovian flow (Bengio et al., 2021b) is defined as a flow that satisfies $P(s \to s'|\tau) = P(s \to s'|s) = P_F(s'|s)$, and the corresponding flow network $(\mathcal{S}, \mathcal{A}, F)$ is called a Markovian flow network (Bengio et al., 2021b). Then, we present Theorem 1, proved in Appendix B.1, which is an extension of Proposition 19 in Bengio et al. (2021b) to continuous scenarios.

Theorem 1 (Continuous Flow Matching Condition). Consider a non-negative function $\hat{F}(s, a)$ taking a state $s \in \mathcal{S}$ and an action $a \in \mathcal{A}$ as inputs. Then $\hat{F}$ corresponds to a flow if and only if the following continuous flow matching conditions are satisfied:
$$\forall s' > s_0: \ \hat{F}(s') = \int_{s \in \mathcal{P}(s')} \hat{F}(s \to s') \, ds = \int_{s:\, T(s,a)=s'} \hat{F}(s, a: s \to s') \, ds,$$
$$\forall s' < s_f: \ \hat{F}(s') = \int_{s'' \in \mathcal{C}(s')} \hat{F}(s' \to s'') \, ds'' = \int_{a \in \mathcal{A}} \hat{F}(s', a) \, da. \tag{13}$$

Furthermore, $\hat{F}$ uniquely defines a Markovian flow $F$ matching $\hat{F}$ such that
$$F(\tau) = \frac{\prod_{t=1}^{n+1} \hat{F}(s_{t-1} \to s_t)}{\prod_{t=1}^{n} \hat{F}(s_t)}. \tag{14}$$
Theorem 1 means that any non-negative function satisfying the flow matching conditions determines a unique flow. Therefore, for sparse reward environments, i.e., $R(s) = 0$, $\forall s \neq s_f$, we can obtain the target flow by training a flow network that satisfies the flow matching conditions. Such learning machines are called CFlowNets, and we have the following continuous loss function:
$$\mathcal{L}(\tau) = \sum_{s_t = s_1}^{s_f} \left( \int_{s_{t-1} \in \mathcal{P}(s_t)} F(s_{t-1} \to s_t) \, ds_{t-1} - R(s_t) - \int_{s_{t+1} \in \mathcal{C}(s_t)} F(s_t \to s_{t+1}) \, ds_{t+1} \right)^2.$$
However, this continuous loss function obviously cannot be applied directly in practice. Next, we propose a method to approximate the continuous loss function based on sampled trajectories in order to obtain the flow model.
OVERALL FRAMEWORK
The overall framework of CFlowNets is shown in Figure 1, including the environment interaction, flow sampling, and training procedures. During the environment interaction phase (left part of Figure 1), we sample an action probability buffer based on the forward propagation of CFlowNets. We name this the action selection procedure, detailed in Section 4.2. After acquiring an action, the agent interacts with the environment to update the state, and this process repeats for several steps until a complete trajectory is sampled. Once a buffer of complete trajectories is available, we randomly sample K actions and compute the child states to approximately calculate the outflows. For the inflows, we use these sampled actions together with the current state as the input to a deep neural network G that estimates the parent states. Based on these, we can approximately determine the inflows. We name this process the flow matching approximation procedure (middle part of Figure 1), detailed in Section 4.3. Finally, based on the approximate inflows and outflows, we can train a CFlowNet with the continuous flow matching loss function (right part of Figure 1), detailed in Section 4.4. The pseudocode is provided in Appendix C.
ACTION SELECTION PROCEDURE
Starting from an empty set, CFlowNets aim to obtain complete trajectories $\tau = (s_0, s_1, \ldots, s_f) \in \mathcal{T}$ by iteratively sampling $a_t \sim \pi(a_t|s_t) = F(s_t, a_t)/F(s_t)$, with tuples $\{(s_t, a_t, r_t, s_{t+1})\}_{t=0}^{f}$. However, it is difficult to sample trajectories strictly according to the corresponding probability of $a_t$: since the actions are continuous, we cannot obtain the exact action probability distribution function from the flow network $F(s_t, a_t)$. To solve this problem, at each state $s_t$ we first uniformly sample $M$ actions from $\mathcal{A}$ and generate an action probability buffer $P = \{F(s_t, a_i)\}_{i=1}^{M}$, which is used as an approximation of the action probability distribution. Then we sample an action from $P$ according to the corresponding probabilities of all actions; obviously, actions with larger $F(s_t, a_i)$ will be sampled with higher probability. In this way, we approximately sample actions from a continuous distribution according to their corresponding probabilities.

Remark 1. After training, for tasks that require a larger reward, we can select the action with the maximum flow output in $P$ during testing to obtain a relatively higher reward. How the output of the flow model is used is flexible, and we can adjust it for different tasks.
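The following is a minimal PyTorch sketch of this action selection procedure, assuming a box-shaped action space and a flow_net that maps a batch of (state, action) pairs to non-negative edge flows; all names and shapes are illustrative assumptions.

```python
import torch

def select_action(flow_net, state, action_low, action_high, M=1000, greedy=False):
    """Sample an action with probability approximately proportional to F(s_t, a)."""
    dim = action_low.numel()
    # Uniformly sample M candidate actions from the box-shaped action space A.
    candidates = torch.rand(M, dim) * (action_high - action_low) + action_low
    states = state.expand(M, -1)                 # state has shape (1, state_dim)
    with torch.no_grad():
        flows = flow_net(states, candidates).squeeze(-1)  # buffer P = {F(s_t, a_i)}
    if greedy:                                   # test-time variant from Remark 1
        return candidates[flows.argmax()]
    probs = flows.clamp_min(1e-8)                # guard against zero outputs
    idx = torch.multinomial(probs / probs.sum(), 1).item()
    return candidates[idx]
```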
FLOW MATCHING APPROXIMATION
Once a batch of trajectories $\mathcal{B}$ is available, to satisfy the flow conditions, we require that for any node $s_t$, the inflows $\int_{a:\, T(s,a)=s_t} F(s, a)\,da$ equal the outflows $\int_{a \in \mathcal{A}} F(s_t, a)\,da$, which is the total flow $F(s_t)$ of node $s_t$. However, we obviously cannot directly calculate the continuous inflows and outflows to enforce the flow matching condition. An intuitive idea is to discretize the inflows and outflows based on a reasonable approximation and match the discretized flows. To do this, we sample $K$ actions independently and uniformly from the continuous action space $\mathcal{A}$ and calculate the corresponding $F(s_t, a_k)$, $k = 1, \ldots, K$, as the outflows, i.e., we use the following approximation:
$$\int_{a \in \mathcal{A}} F(s_t, a) \, da \approx \frac{\mu(\mathcal{A})}{K} \sum_{k=1}^{K} F(s_t, a_k), \tag{15}$$
where µ(A) denotes the measure of the continuous action space A.
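As an illustration of Eq. (15), here is a minimal sketch of the Monte Carlo outflow estimate, assuming a box action space so that μ(A) is simply the product of side lengths; names are illustrative.

```python
import torch

def estimate_outflow(flow_net, state, action_low, action_high, K=100):
    """Monte Carlo estimate of the outflow integral, Eq. (15)."""
    dim = action_low.numel()
    # K actions sampled independently and uniformly from A.
    actions = torch.rand(K, dim) * (action_high - action_low) + action_low
    flows = flow_net(state.expand(K, -1), actions).squeeze(-1)  # F(s_t, a_k)
    mu_A = torch.prod(action_high - action_low)                 # measure of the box A
    return mu_A / K * flows.sum()
```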
By contrast, an approximation of the inflows is more difficult since we must find the parent states first. To solve this problem, we construct a deep neural network G (named the "retrieval" neural network), parameterized by $\phi$, with $(s_{t+1}, a_t)$ as the input and $s_t$ as the output, and train this network on $\mathcal{B}$ with the MSE loss. That is, we want to use G to fit the function $g(\cdot)$. The network G is usually easy to train since we consider tasks that satisfy Assumption 3, and we can obtain a high-precision network G through simple pre-training. As training progresses, we can also occasionally update G based on the sampled trajectories to maintain accuracy. Then, the inflows can be calculated approximately as:
$$\int_{a:\, T(g(s_t,a),\, a)=s_t} F(g(s_t, a), a) \, da \approx \frac{\mu(\mathcal{A})}{K} \sum_{k=1}^{K} F(G_\phi(s_t, a_k), a_k). \tag{16}$$
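A minimal sketch of the retrieval network G_φ and the corresponding inflow estimate of Eq. (16); the architecture, training step, and names are illustrative assumptions rather than the paper's exact implementation.

```python
import torch
import torch.nn as nn

class RetrievalNet(nn.Module):
    """G_phi: predicts the parent state s_t from (s_{t+1}, a_t)."""
    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim))

    def forward(self, next_state, action):
        return self.net(torch.cat([next_state, action], dim=-1))

def retrieval_mse_step(G, optimizer, s, a, s_next):
    """One pre-training step on transitions (s_t, a_t, s_{t+1}) from B."""
    loss = ((G(s_next, a) - s) ** 2).mean()
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()

def estimate_inflow(flow_net, G, state, sampled_actions, mu_A):
    """Monte Carlo estimate of the inflow integral, Eq. (16)."""
    K = sampled_actions.shape[0]
    parents = G(state.expand(K, -1), sampled_actions)  # G_phi(s_t, a_k)
    flows = flow_net(parents, sampled_actions).squeeze(-1)
    return mu_A / K * flows.sum()
```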
Next, by assuming in Assumption 2 that the flow function $F(s, a)$ is Lipschitz continuous, we can provide a non-asymptotic analysis of the error between the sampled inflows/outflows and the true inflows/outflows. Theorem 2 establishes the error bound between the sampled outflows (resp. inflows) and the actual outflows (resp. inflows) in tail form and shows that the tail decreases exponentially. Furthermore, the tail gets much smaller as $K$ increases, which means the sampled outflows (resp. inflows) are a good estimate of the actual outflows (resp. inflows).

Theorem 2. Let $\{a_k\}_{k=1}^K$ be sampled independently and uniformly from the continuous action space $\mathcal{A}$. Assume $G_\phi$ can optimally output the actual state $s_t$ given $(s_{t+1}, a_t)$. For any bounded continuous action $a \in \mathcal{A}$ and any state $s_t \in \mathcal{S}$, we have
$$P\left( \left| \frac{\mu(\mathcal{A})}{K} \sum_{k=1}^{K} F(s_t, a_k) - \int_{a \in \mathcal{A}} F(s_t, a) \, da \right| \geq t \right) \leq 2 \exp\left( -\frac{K t^2}{2 (L \mu(\mathcal{A}) \, \mathrm{diam}(\mathcal{A}))^2} \right) \tag{17}$$
and
$$P\left( \left| \frac{\mu(\mathcal{A})}{K} \sum_{k=1}^{K} F(G_\phi(s_t, a_k), a_k) - \int_{a:\, T(s,a)=s_t} F(s, a) \, da \right| \geq t \right) \leq 2 \exp\left( -\frac{K t^2}{2 \left( L \mu(\mathcal{A}) (\mathrm{diam}(\mathcal{A}) + \mathrm{diam}(\mathcal{S})) \right)^2} \right), \tag{18}$$
where L is the Lipschitz constant, diam(A) denotes the diameter of the action space A and diam(S) denotes the diameter of the state space S.
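As an informal sanity check on the K-dependence of these bounds (not part of the paper's analysis), the following toy snippet estimates the integral of an illustrative Lipschitz flow on A = [0, 1], where μ(A) = 1, and shows the Monte Carlo error shrinking roughly as 1/√K.

```python
import math
import torch

torch.manual_seed(0)
F = lambda a: torch.exp(-a ** 2)                        # illustrative Lipschitz flow on [0, 1]
true_integral = math.sqrt(math.pi) / 2 * math.erf(1.0)  # closed form of the integral

for K in (10, 100, 1000, 10000):
    # 200 repetitions of the mu(A)/K * sum estimator (mu(A) = 1 here).
    errs = torch.stack([(F(torch.rand(K)).mean() - true_integral).abs()
                        for _ in range(200)])
    print(f"K={K:5d}  mean |error| = {errs.mean().item():.5f}")
```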
LOSS FUNCTION
Based on (15) and (16), the continuous loss function can be approximated by
$$\mathcal{L}_\theta(\tau) = \sum_{s_t = s_1}^{s_f} \left( \sum_{k=1}^{K} F_\theta(G_\phi(s_t, a_k), a_k) - \lambda R(s_t) - \sum_{k=1}^{K} F_\theta(s_t, a_k) \right)^2, \tag{19}$$
where $\theta$ denotes the parameters of the flow network $F(\cdot)$ and $\lambda = K/\mu(\mathcal{A})$. Note that in many tasks we cannot obtain $\mu(\mathcal{A})$ exactly. For such tasks, we can simply set $\lambda$ to 1 and then adjust the reward shaping to ensure the convergence of the algorithm.
It is noteworthy that the magnitudes of the state flows at different locations in the trajectory may not match; for example, the initial node flow is likely to be larger than the ending node flow. To solve this problem, inspired by the log-scale loss introduced for GFlowNets (Bengio et al., 2021a), we can modify (19) into:
$$\mathcal{L}_\theta(\tau) = \sum_{s_t = s_1}^{s_f} \left( \log\!\left[ \epsilon + \sum_{k=1}^{K} \exp F^{\log}_\theta(G_\phi(s_t, a_k), a_k) \right] - \log\!\left[ \epsilon + \lambda R(s_t) + \sum_{k=1}^{K} \exp F^{\log}_\theta(s_t, a_k) \right] \right)^2, \tag{20}$$
where $\epsilon$ is a hyper-parameter that helps to trade off small versus large flows and helps avoid the numerical problem of taking the logarithm of tiny flows. Note that Theorem 2 cannot be used to guarantee the unbiasedness of (20), because $\log \mathbb{E}(x) \neq \mathbb{E} \log(x)$. But experiments show that this approximation works well.
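A minimal PyTorch sketch of the per-state term of the log-scale loss in Eq. (20), assuming the network outputs log edge flows; epsilon, λ, and all names are illustrative assumptions.

```python
import torch

def log_flow_matching_loss(log_flow_net, retrieval_net, state, reward,
                           sampled_actions, lam=1.0, eps=1.0):
    """Log-scale continuous flow matching loss for a single state s_t, Eq. (20)."""
    K = sampled_actions.shape[0]
    states = state.expand(K, -1)
    parents = retrieval_net(states, sampled_actions)              # G_phi(s_t, a_k)
    log_in = log_flow_net(parents, sampled_actions).squeeze(-1)   # log F of inflow edges
    log_out = log_flow_net(states, sampled_actions).squeeze(-1)   # log F of outflow edges
    inflow = torch.log(eps + log_in.exp().sum())
    outflow = torch.log(eps + lam * reward + log_out.exp().sum())
    return (inflow - outflow) ** 2
```

When ε can be folded into the edge terms, torch.logsumexp over the log flows is a numerically safer alternative to the explicit exp-sum used above.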
RELATED WORKS
Generative Flow Networks. Generative flow networks are proposed to enhance exploration capabilities by generating a distribution proportional to the rewards over terminating states (Bengio et al., 2021b;a). Since the network samples actions based on the distribution of the corresponding rewards, rather than focusing only on actions that maximize rewards as reinforcement learning does, it performs well on tasks with more diverse reward distributions, and has been successfully applied to molecule generation (Bengio et al., 2021a; Malkin et al., 2022), discrete probabilistic modeling (Zhang et al., 2022b), structure learning (Deleu et al., 2022), causal discovery (Li et al., 2022), and graph neural networks (Li et al., 2023). The connection between deep generative models and GFlowNets is discussed in Zhang et al. (2022a) through the lens of Markov trajectory learning. In Bengio et al. (2021b), an idea is proposed for adapting GFlowNets to continuous tasks by replacing sums with integrals for continuous variables. Malkin et al. (2022) and Bengio et al. (2021b) propose detailed balance (DB) and trajectory balance (TB) objectives, which use parametric forward and backward policies in the objective function. These new objective functions do not require evaluating the flow model on multiple parents of a state, which is more efficient, especially for high-dimensional environments. Malkin et al. (2022) and Bengio et al. (2021b) mention that these objective functions can also be used in continuous scenarios by replacing the policy likelihoods in the objective with probability densities. A possible disadvantage is that it is not easy to estimate $P_F$ and $P_B$ in a continuous environment, since the state space is much larger than in a discrete scenario, and a small error in modeling the probability densities can greatly affect the final performance. How to combine DB and TB with CFlowNets will be worthwhile future work.
Continuous Reinforcement Learning. Policy gradient algorithms are widely used for reinforcement learning problems with continuous action spaces. The deterministic policy gradient (DPG) (Silver et al., 2014) algorithm is an actor-critic (Grondman et al., 2012; Rosenstein et al., 2004) method that uses an estimate of the learned value $Q(s, a)$ to train a deterministic policy $\mu: \mathcal{S} \to \mathcal{A}$ parameterized by $\theta^\mu$. Compared with CFlowNets, the policy is updated by applying the chain rule to the expected return $J$ from the start distribution with respect to the policy parameters:
$$\nabla_{\theta^\mu} J \approx \mathbb{E}_{\mathcal{D}}\left[ \nabla_{\theta^\mu} Q(s, a \mid \theta^Q)\big|_{a=\mu(s|\theta^\mu)} \right] = \mathbb{E}_{\mathcal{D}}\left[ \nabla_a Q(s, a \mid \theta^Q)\big|_{a=\mu(s)} \, \nabla_{\theta^\mu} \mu(s \mid \theta^\mu) \right], \tag{21}$$
where D is the replay buffer. The policy aims to maximize the expectation of future rewards, which are estimated by Q-learning. In this setting, the trajectories generated by the policy may be relatively homogeneous. However, the training goal of CFlowNets is to define a distribution proportional to the rewards over terminating states, resulting in more diverse trajectories that are beneficial for exploring the environment.
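For contrast with the flow matching objective, here is a minimal sketch of how the deterministic policy gradient update of Eq. (21) is typically implemented in practice; the networks and optimizer are assumed given, and autograd performs the chain rule.

```python
import torch

def dpg_actor_step(actor, critic, actor_optimizer, states):
    """One deterministic policy gradient step: ascend Q(s, mu(s)) per Eq. (21)."""
    actor_loss = -critic(states, actor(states)).mean()  # maximize E_D[Q(s, mu(s))]
    actor_optimizer.zero_grad()
    actor_loss.backward()
    actor_optimizer.step()
```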
Later, deep DPG (DDPG) (Lillicrap et al., 2015) improved DPG; it has good sample efficiency but suffers from extreme brittleness and hyperparameter sensitivity, making it difficult to extend to complex, high-dimensional tasks. To improve DDPG, twin delayed DDPG (TD3) (Fujimoto et al., 2018) adopts an actor-critic framework and considers the interaction between the value update and the function approximation error in the policy. There are also some policy gradient (Sutton et al., 1999; Kohl & Stone, 2004; Khadka & Tumer, 2018) based algorithms that can be adapted to continuous tasks, such as proximal policy optimization (PPO) (Schulman et al., 2017), asynchronous advantage actor-critic (A3C) (Stooke & Abbeel, 2018), and the importance weighted actor-learner architecture (IMPALA) (Espeholt et al., 2018). PPO retains the benefits of trust region policy optimization (Schulman et al., 2015) while enabling multiple batches of data to be updated together; it is therefore simpler to implement, more general, and has lower sample complexity.

Recently, the phasic policy gradient (PPG) (Cobbe et al., 2021) was proposed to decouple the training of the policy and value function while keeping their feature sharing; PPG optimizes each objective with an appropriate level of sample reuse to improve sample efficiency. Most of these improved policy gradient methods aim at maximizing reward, so none of them is better suited than CFlowNets for exploration tasks.

Furthermore, some maximum-entropy (Pitis et al., 2020; Haarnoja et al., 2018a; Hazan et al., 2019; Yarats et al., 2021) based reinforcement learning algorithms can also be adapted to continuous tasks, such as soft actor-critic (SAC) (Haarnoja et al., 2018b). By maximizing the expected reward and entropy, the actor network of SAC can successfully complete tasks while acting as randomly as possible. The differences between CFlowNets and SAC are: 1) SAC selects actions via a Gaussian policy, which is less expressive than using a general unnormalized action p.d.f. $F(s, a)$; 2) in the general case, SAC learns to be proportional to the long-term return, generating a trajectory distribution satisfying $p(\tau) \propto R(\tau)$, where $R(\tau)$ is the return of $\tau$, whereas CFlowNets consider all possible trajectories that lead to a terminal state $s_f$ and learn a policy that generates $s_f$ with $p(s_f) \propto R(s_f)$.
EXPERIMENTS
To demonstrate the effectiveness of the proposed CFlowNets, we conduct experiments on several continuous control tasks with sparse rewards, including Point-Robot-Sparse, Reacher-Goal-Sparse, and Swimmer-Sparse. Visualizations of these environments are shown in Figures 7, 8 and 9. We compare CFlowNets with several state-of-the-art baseline RL algorithms: DDPG (Lillicrap et al., 2015), TD3 (Fujimoto et al., 2018), PPO (Schulman et al., 2017), and SAC (Haarnoja et al., 2018b). More implementation details are provided in Appendix D.

Figure 2 illustrates the distributions of the learned policies for CFlowNets and the RL algorithms. All curves are max-min normalized. The gray curve is the ground-truth reward distribution generated by the agent's different actions when it is at coordinates (7, 7), indicating that the optimal action there is to go right or up. The red curve shows the flow network output of CFlowNets under different actions, indicating that CFlowNets fit the reward distribution very well. In contrast, the other reinforcement learning algorithms have difficulty fitting the actual reward distribution.

Figures 3(a)-(c) show the number of valid-distinctive trajectories explored as training progresses in the Point-Robot-Sparse, Reacher-Goal-Sparse, and Swimmer-Sparse environments, respectively. After a certain number of training epochs, 10000 trajectories are collected. A valid-distinctive trajectory is defined as one whose reward is above a threshold δr and whose MSE to every other counted trajectory is greater than another threshold δmse; that is, if two trajectories both have high returns but are close to each other with a small MSE, we count them as only one valid-distinctive exploration. δr in Point-Robot-Sparse, Reacher-Goal-Sparse, and Swimmer-Sparse is set to 0.5, -0.2, and 5.0, respectively; δmse is set to 0.02, 4.0, and 1.0, respectively. As can be seen from the figure, DDPG, TD3 and PPO have the worst exploration ability: only one valid-distinctive trajectory is generated. SAC explores better at the beginning of training, but its exploration decreases as training progresses and gradually converges. In contrast, the exploration ability of CFlowNets is outstanding: the number of explored trajectories far exceeds that of the other algorithms, and the exploration ability remains stable as training progresses.
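A minimal sketch of how the valid-distinctive count could be computed from collected trajectories, assuming fixed-length trajectories stored as tensors; this follows the definition above and is not the paper's exact evaluation script.

```python
import torch

def count_valid_distinctive(trajectories, returns, delta_r, delta_mse):
    """Count trajectories with return above delta_r whose pairwise MSE to every
    previously kept trajectory exceeds delta_mse."""
    kept = []
    for traj, ret in zip(trajectories, returns):
        if ret <= delta_r:
            continue                                   # not a valid trajectory
        if all(((traj - k) ** 2).mean() > delta_mse for k in kept):
            kept.append(traj)                          # distinctive exploration
    return len(kept)
```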
Figures 3(d)-(f) show the rewards during training in the Point-Robot-Sparse, Reacher-Goal-Sparse, and Swimmer-Sparse environments, respectively. The shaded region represents the 95% confidence interval across 5 runs. Figure 3(d) and Figure 3(e) show that CFlowNets have the fastest and most stable upward trend, and the final reward is ahead of the other algorithms by a large margin. In contrast, CFlowNets do not perform as well as the other algorithms in Figure 3(f). Since the rewards in Point-Robot-Sparse and Reacher-Goal-Sparse are more evenly distributed, these two tasks are more inclined towards exploration; CFlowNets have better exploration ability and hence converge stably. As for Swimmer-Sparse, its reward distribution is relatively steep, and sampling near the maximum reward achieves faster convergence, so it is reasonable for CFlowNets to perform worse than RL on this task in terms of reward. However, even in this environment, CFlowNets still maintain good exploration ability.
CONCLUSION
In this paper, we propose generative continuous flow networks to enhance exploration in continuous control tasks. The theoretical formulation of CFlowNets is first presented. Then, a training framework for CFlowNets is proposed, including the action selection process, the flow approximation algorithm, and the continuous flow matching loss function. Theoretical analysis shows that the error of the flow approximation decreases rapidly as the number of flow samples increases. Experimental results on continuous control tasks illustrate the performance advantages of CFlowNets compared to many reinforcement learning methods. In particular, the exploration ability of CFlowNets far exceeds that of other state-of-the-art reinforcement learning algorithms.

Limitations: Similar to GFlowNets, CFlowNets aim to sample actions according to the flow network, rather than selecting actions that maximize rewards. Therefore, CFlowNets are more suitable for exploration-biased tasks and do not perform as well as reinforcement learning on tasks whose sole aim is to maximize reward. Of course, the purpose of CFlowNets is not to completely replace reinforcement learning, but to supplement it, giving a new option for continuous control tasks.

Future work: Future work will study how to combine CFlowNets with the DB and TB objective functions to improve training efficiency.

A DISCUSSIONS

A.1 WHY IS ASSUMPTION 1 NECESSARY AND REASONABLE?

Necessity: For most environments, it is difficult to generate cycles when sampling a trajectory in a continuous space, since $\forall t$, $\mu(\{s_0, \ldots, s_t\}) = 0$ and $\mu(\mathcal{S}) = \mu(\mathcal{S} \setminus \{s_0, \ldots, s_t\})$; that is, the probability of $s_{t+1} \in \{s_0, \ldots, s_t\}$ is very small. However, cycles can arise when certain environments have special constraints. For example, consider a simple pendulum task (see Figure 4), where the action is to control the pendulum to rotate from the previous position to the next position by a certain angle. In this task, it is difficult for the pendulum to rotate to exactly the same position in continuous space. However, if a wall is added to the task, the pendulum can easily reach the same position repeatedly (see Figure 5), i.e., a cycle occurs. Therefore, we still need the acyclic assumption to keep the theory and performance of CFlowNets guaranteed.

Rationality: This assumption is reasonable because, for many continuous environments, it is difficult to form cycles in trajectories without special constraints. Even for tasks prone to forming cycles, we can directly add the time step to the state space to satisfy this assumption.
A.2 WHY IS ASSUMPTION 2 NECESSARY AND REASONABLE?
Necessity: This assumption is mainly used to guarantee the existence of flow-related integrals, and to ensure that Theorem 2 holds.
Rationality:
We justify this assumption based on simulations. As shown in Figure 6, we calculate $|F(s,a) - F(s,a')| / \|a - a'\|$ and $|F(s,a) - F(s',a)| / \|s - s'\|$ for each sampled tuple $(s, a, a')$ and $(s, s', a)$ to analyze the Lipschitz constant, respectively. Their accumulated maximum Lipschitz constants are shown in Figures 6(a) and (b), respectively. Clearly, there exists a finite Lipschitz constant for our flow network. In addition, Lipschitz continuity is a common assumption for neural networks; to give some quick examples, Du et al. (2019), Jacot et al. (2018), Allen-Zhu et al. (2019), and Alistarh et al. (2018) all use this assumption to prove the convergence of algorithms.
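A minimal sketch of this empirical Lipschitz check, assuming access to batches of sampled states and paired actions; names are illustrative.

```python
import torch

def empirical_lipschitz(flow_net, states, actions, actions_alt):
    """Largest observed ratio |F(s, a) - F(s, a')| / ||a - a'|| over sample tuples."""
    with torch.no_grad():
        f = flow_net(states, actions).squeeze(-1)
        f_alt = flow_net(states, actions_alt).squeeze(-1)
    ratios = (f - f_alt).abs() / (actions - actions_alt).norm(dim=-1).clamp_min(1e-8)
    return ratios.max().item()  # take a running max over batches for the accumulated curve
```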
A.3 WHY IS ASSUMPTION 3 NECESSARY AND REASONABLE?
Necessity: This assumption is used in Definition 2 and enables the retrieval neural network to fit the function $g(s, a)$. While there is a one-to-one correspondence between state transitions and actions in most environments, there are still some special cases where, given a state pair $(s, s')$, there can be an infinite number of actions. For example, in Pendulum-with-Wall (Figure 5), after reaching the wall, continuing to increase the action will not further change the state $s'$. In addition, a special case of the translation action could be $T(s, a) = s + a$, or one could use the special linear group, such that Definitions 2 and 3 hold. The translation action is used to ensure that there is no Jacobian term in the continuous flow definition.

Rationality: This assumption is a property of many environments and is therefore reasonable. For environments that do not satisfy it, we can try to satisfy it by adding more information to the state. For example, we can add the duration of the action to the state space of the Pendulum-with-Wall task in Figure 5: even if the action keeps increasing after reaching the wall, the position information will not change, but the duration will increase, so that state transitions and actions correspond one-to-one. The worst case is that we cannot change the environment to satisfy the assumption. In this case, we mainly need to address the problem that the output of the retrieval neural network G cannot be multi-valued when the input is fixed. One conjecture is that we may alleviate this problem by adding some small random noise to the input, but this idea has not been tested.
B.1 PROOF OF THEOREM 1

Theorem 1 (Continuous Flow Matching Condition). Consider a non-negative function $\hat{F}(s, a)$ taking a state $s \in \mathcal{S}$ and an action $a \in \mathcal{A}$ as inputs. Then $\hat{F}$ corresponds to a flow if and only if the following continuous flow matching conditions are satisfied:
$$\forall s' > s_0: \ \hat{F}(s') = \int_{s \in \mathcal{P}(s')} \hat{F}(s \to s') \, ds = \int_{s:\, T(s,a)=s'} \hat{F}(s, a: s \to s') \, ds,$$
$$\forall s' < s_f: \ \hat{F}(s') = \int_{s'' \in \mathcal{C}(s')} \hat{F}(s' \to s'') \, ds'' = \int_{a \in \mathcal{A}} \hat{F}(s', a) \, da. \tag{22}$$

Furthermore, $\hat{F}$ uniquely defines a Markovian flow $F$ matching $\hat{F}$ such that
$$F(\tau) = \frac{\prod_{t=1}^{n+1} \hat{F}(s_{t-1} \to s_t)}{\prod_{t=1}^{n} \hat{F}(s_t)}. \tag{23}$$

Proof. The proof is an extension of that of Proposition 19 in Bengio et al. (2021b) to the continuous case. We first prove the necessity part. Given a flow network, for non-initial and non-final nodes on a trajectory, the set of complete trajectories passing through state $s'$ is the union of the sets of trajectories going through $s \to s'$ for all $s \in \mathcal{P}(s')$, and also the union of the sets of trajectories going through $s' \to s''$ for all $s'' \in \mathcal{C}(s')$, i.e.,
$$\{\tau \in \mathcal{T}: s' \in \tau\} = \bigcup_{s \in \mathcal{P}(s')} \{\tau \in \mathcal{T}: s \to s' \in \tau\} = \bigcup_{s'' \in \mathcal{C}(s')} \{\tau \in \mathcal{T}: s' \to s'' \in \tau\}.$$
Therefore,
$$F(s') = \int_{\tau:\, s' \in \tau} F(\tau) \, d\tau = \int_{s \in \mathcal{P}(s')} \int_{\tau:\, s \to s' \in \tau} F(\tau) \, d\tau \, ds = \int_{s \in \mathcal{P}(s')} F(s \to s') \, ds,$$
and similarly $F(s') = \int_{s'' \in \mathcal{C}(s')} F(s' \to s'') \, ds''$. This finishes the necessity part. Next we show sufficiency. Let $\hat{Z} = \hat{F}(s_0)$ be the partition function and $\hat{P}_F$ be the forward probability function; then there exists a unique Markovian flow $F$ with forward transition probability function $P_F = \hat{P}_F$ and partition function $\hat{Z}$ according to Proposition 18 in Bengio et al. (2021b), and such that
$$F(\tau) = \hat{Z} \prod_{t=1}^{n+1} \hat{P}_F(s_t|s_{t-1}) = \frac{\prod_{t=1}^{n+1} \hat{F}(s_{t-1} \to s_t)}{\prod_{t=1}^{n} \hat{F}(s_t)}, \tag{24}$$
where $s_{n+1} = s_f$. In addition, according to Lemma 1, we have
$$\int_{\tau \in \mathcal{T}_{0,s}} \hat{P}_B(\tau) \, d\tau = \int_{\tau \in \mathcal{T}_{0,s}} \prod_{s_t \to s_{t+1} \in \tau} \hat{P}_B(s_t|s_{t+1}) \, d\tau = 1.$$

Lemma 1. Consider a continuous task $(\mathcal{S}, \mathcal{A})$ with the transition probabilities defined in (8) and (9). Define $\mathcal{T}_{s,f}$ and $\mathcal{T}_{0,s}$ as the sets of trajectories sampled from the continuous task starting in $s$ and ending in $s_f$, and starting in $s_0$ and ending in $s$, respectively. Then we have
$$\forall s \in \mathcal{S} \setminus \{s_f\}, \quad \int_{\tau \in \mathcal{T}_{s,f}} P_F(\tau) \, d\tau = 1, \tag{25}$$
$$\forall s \in \mathcal{S} \setminus \{s_0\}, \quad \int_{\tau \in \mathcal{T}_{0,s}} P_B(\tau) \, d\tau = 1. \tag{26}$$
Thus, we have for $s' \neq s_0$:
$$F(s') = \hat{Z} \int_{\tau \in \mathcal{T}_{0,s'}} \prod_{(s_t \to s_{t+1}) \in \tau} \hat{P}_F(s_{t+1}|s_t) \, d\tau = \hat{Z} \frac{\hat{F}(s')}{\hat{F}(s_0)} \int_{\tau \in \mathcal{T}_{0,s'}} \prod_{(s_t \to s_{t+1}) \in \tau} \hat{P}_B(s_t|s_{t+1}) \, d\tau = \hat{F}(s'). \tag{27}$$
Combining (27) with $P_F = \hat{P}_F$ yields $\forall s \to s' \in \mathcal{A}$, $F(s \to s') = \hat{F}(s \to s')$. Finally, according to Proposition 16 in Bengio et al. (2021b), for any Markovian flow $F'$ matching $\hat{F}$ on states and edges, we have $F'(\tau) = F(\tau)$, which shows the uniqueness property. This completes the proof.
B.2 PROOF OF THEOREM 2
Theorem 2. Let $\{a_k\}_{k=1}^K$ be sampled independently and uniformly from the continuous action space $\mathcal{A}$. Assume $G_\phi$ can optimally output the actual state $s_t$ given $(s_{t+1}, a_t)$. For any bounded continuous action $a \in \mathcal{A}$ and any state $s_t \in \mathcal{S}$, we have
$$P\left( \left| \frac{\mu(\mathcal{A})}{K} \sum_{k=1}^{K} F(s_t, a_k) - \int_{a \in \mathcal{A}} F(s_t, a) \, da \right| \geq t \right) \leq 2 \exp\left( -\frac{K t^2}{2 (L \mu(\mathcal{A}) \, \mathrm{diam}(\mathcal{A}))^2} \right) \tag{28}$$
and
$$P\left( \left| \frac{\mu(\mathcal{A})}{K} \sum_{k=1}^{K} F(G_\phi(s_t, a_k), a_k) - \int_{a:\, T(s,a)=s_t} F(s, a) \, da \right| \geq t \right) \leq 2 \exp\left( -\frac{K t^2}{2 \left( L \mu(\mathcal{A}) (\mathrm{diam}(\mathcal{A}) + \mathrm{diam}(\mathcal{S})) \right)^2} \right), \tag{29}$$
where L is the Lipschitz constant, diam(A) denotes the diameter of the action space A and diam(S) denotes the diameter of the state space S.
Proof. First, we show that the expectation of sample outflow is the true outflow and the expectation of sample inflow is the true inflow in Lemma 2.
Lemma 2. Let $\{a_k\}_{k=1}^K$ be sampled independently and uniformly from the continuous action space $\mathcal{A}$. Assume $G_\phi$ can optimally output the actual state $s_t$ given $(s_{t+1}, a_t)$. Then for any state $s_t \in \mathcal{S}$, we have
$$\mathbb{E}\left[ \frac{\mu(\mathcal{A})}{K} \sum_{k=1}^{K} F(s_t, a_k) \right] = \int_{a \in \mathcal{A}} F(s_t, a) \, da \tag{30}$$
and
$$\mathbb{E}\left[ \frac{\mu(\mathcal{A})}{K} \sum_{k=1}^{K} F(G_\phi(s_t, a_k), a_k) \right] = \int_{a:\, T(s,a)=s_t} F(s, a) \, da, \tag{31}$$
where s = g(s t , a).
Then, define the following terms:
$$\Gamma_k = \frac{\mu(\mathcal{A})}{K} F(s_t, a_k) - \frac{1}{K} \int_{a \in \mathcal{A}} F(s_t, a) \, da = \frac{1}{K} \int_{a \in \mathcal{A}} \left[ F(s_t, a_k) - F(s_t, a) \right] da \tag{32}$$
and
$$\Lambda_k = \frac{\mu(\mathcal{A})}{K} F(G_\phi(s_t, a_k), a_k) - \frac{1}{K} \int_{a:\, T(s,a)=s_t} F(s, a) \, da \tag{33}$$
$$= \frac{1}{K} \int_{a:\, T(s,a)=s_t} \left[ F(G_\phi(s_t, a_k), a_k) - F(s, a) \right] da, \tag{34}$$
where $s = g(s_t, a)$.
Note that the variables $\{\Gamma_k\}_{k=1}^K$ are independent and $\mathbb{E}[\Gamma_k] = 0$, $k = 1, \ldots, K$, according to Lemma 2. So the following equations hold:
$$P\left( \left| \frac{\mu(\mathcal{A})}{K} \sum_{k=1}^{K} F(s_t, a_k) - \int_{a \in \mathcal{A}} F(s_t, a) \, da \right| \geq t \right) = P\left( \left| \sum_{k=1}^{K} \Gamma_k \right| \geq t \right) \tag{35}$$
and
$$P\left( \left| \frac{\mu(\mathcal{A})}{K} \sum_{k=1}^{K} F(G_\phi(s_t, a_k), a_k) - \int_{a:\, T(s,a)=s_t} F(s, a) \, da \right| \geq t \right) = P\left( \left| \sum_{k=1}^{K} \Lambda_k \right| \geq t \right). \tag{36}$$
Since $F(s, a)$ is a Lipschitz function, we have
$$|\Gamma_k| \leq \frac{1}{K} \int_{a \in \mathcal{A}} \left| F(s_t, a_k) - F(s_t, a) \right| da \leq \frac{L}{K} \int_{a \in \mathcal{A}} \|a_k - a\| \, da \leq \frac{L \mu(\mathcal{A}) \, \mathrm{diam}(\mathcal{A})}{K}. \tag{37}$$
Together with Assumption 3, i.e., for any pair $(s, a)$ satisfying $T(s, a) = s_t$, $a$ is unique if we fix $s$, we have
$$|\Lambda_k| \leq \frac{1}{K} \int_{a:\, T(s,a)=s_t} \left| F(G_\phi(s_t, a_k), a_k) - F(s, a) \right| da$$
$$\leq \frac{1}{K} \int_{a:\, T(s,a)=s_t} \left| F(G_\phi(s_t, a_k), a_k) - F(s, a_k) \right| + \left| F(s, a_k) - F(s, a) \right| \, da$$
$$\leq \frac{1}{K} \int_{a:\, T(s,a)=s_t} L \|G_\phi(s_t, a_k) - s\| + L \|a_k - a\| \, da \leq \frac{L \mu(\mathcal{A}) \left( \mathrm{diam}(\mathcal{A}) + \mathrm{diam}(\mathcal{S}) \right)}{K}. \tag{38}$$
Lemma 3 (Hoeffding's inequality, Vershynin (2018)). Let $x_1, \ldots, x_K$ be independent random variables. Assume the variables $\{x_k\}_{k=1}^K$ are bounded in the interval $[T_l, T_r]$. Then for any $t > 0$, we have
$$P\left( \left| \sum_{k=1}^{K} (x_k - \mathbb{E} x_k) \right| \geq t \right) \leq 2 \exp\left( -\frac{2 t^2}{K (T_r - T_l)^2} \right). \tag{39}$$
Incorporating $T_r = \frac{L}{K} \mu(\mathcal{A}) \mathrm{diam}(\mathcal{A})$ and $T_l = -\frac{L}{K} \mu(\mathcal{A}) \mathrm{diam}(\mathcal{A})$ into Lemma 3 with (37), and incorporating $T_r = \frac{L}{K} \mu(\mathcal{A}) (\mathrm{diam}(\mathcal{A}) + \mathrm{diam}(\mathcal{S}))$ and $T_l = -\frac{L}{K} \mu(\mathcal{A}) (\mathrm{diam}(\mathcal{A}) + \mathrm{diam}(\mathcal{S}))$ into Lemma 3 with (38), we complete the proof.
B.3 PROOF OF LEMMA 1
Lemma 1. Consider a continuous task $(\mathcal{S}, \mathcal{A})$ with the transition probabilities defined in (8) and (9). Define $\mathcal{T}_{s,f}$ and $\mathcal{T}_{0,s}$ as the sets of trajectories sampled from the continuous task starting in $s$ and ending in $s_f$, and starting in $s_0$ and ending in $s$, respectively. Then we have
$$\forall s \in \mathcal{S} \setminus \{s_f\}, \quad \int_{\tau \in \mathcal{T}_{s,f}} P_F(\tau) \, d\tau = 1, \tag{40}$$
$$\forall s \in \mathcal{S} \setminus \{s_0\}, \quad \int_{\tau \in \mathcal{T}_{0,s}} P_B(\tau) \, d\tau = 1. \tag{41}$$

Proof. We show by strong induction that (40) holds, mainly following the proof of Lemma 5 in Bengio et al. (2021b); extending to (41) is trivial. Define $d$ as the maximum trajectory length in $\mathcal{T}_{s,f}$, $s \neq s_f$. We have:

Base case: If $d = 1$, then $\int_{\tau \in \mathcal{T}_{s,f}} P_F(\tau) \, d\tau = P_F(s \to s_f) = 1$ holds by noting $\mathcal{T}_{s,f} = \{(s \to s_f)\}$.

Induction step: Consider $d > 1$; by noting (12) we have
$$\int_{\tau \in \mathcal{T}_{s,f}} P_F(\tau) \, d\tau = \int_{s' \in \mathcal{C}(s)} \int_{\tau \in \mathcal{T}_{s \to s', f}} P_F(\tau) \, d\tau \, ds' \tag{42}$$
$$= \int_{s' \in \mathcal{C}(s)} \int_{\tau \in \mathcal{T}_{s', f}} P_F(s'|s) P_F(\tau) \, d\tau \, ds' \tag{43}$$
$$= \int_{s' \in \mathcal{C}(s)} P_F(s'|s) \, ds' \int_{\tau \in \mathcal{T}_{s', f}} P_F(\tau) \, d\tau = 1, \tag{44}$$
where the last equality follows by the induction hypotheses.
B.4 PROOF OF LEMMA 2
Lemma 2. Let $\{a_k\}_{k=1}^K$ be sampled independently and uniformly from the continuous action space $\mathcal{A}$. Assume $G_\phi$ can optimally output the actual state $s_t$ given $(s_{t+1}, a_t)$. Then for any state $s_t \in \mathcal{S}$, we have
$$\mathbb{E}\left[ \frac{\mu(\mathcal{A})}{K} \sum_{k=1}^{K} F(s_t, a_k) \right] = \int_{a \in \mathcal{A}} F(s_t, a) \, da \tag{45}$$
and
$$\mathbb{E}\left[ \frac{\mu(\mathcal{A})}{K} \sum_{k=1}^{K} F(G_\phi(s_t, a_k), a_k) \right] = \int_{a:\, T(s,a)=s_t} F(s, a) \, da, \tag{46}$$
where $s = g(s_t, a)$.
Proof. Since $\{a_k\}_{k=1}^K$ are sampled independently and uniformly from the continuous action space $\mathcal{A}$, we have
$$\mathbb{E}\left[ F(s_t, a_k) \right] = \frac{1}{\mu(\mathcal{A})} \int_{a \in \mathcal{A}} F(s_t, a) \, da. \tag{47}$$
Therefore, we obtain
$$\mathbb{E}\left[ \frac{\mu(\mathcal{A})}{K} \sum_{k=1}^{K} F(s_t, a_k) \right] = \frac{\mu(\mathcal{A})}{K} \sum_{k=1}^{K} \mathbb{E}\left[ F(s_t, a_k) \right] \tag{48}$$
$$= \int_{a \in \mathcal{A}} F(s_t, a) \, da. \tag{49}$$
Since Assumption 3 holds, i.e., for any pair $(s, a)$ satisfying $T(s, a) = s_t$, $a$ is unique if we fix $s$, we have
$$\mathbb{E}\left[ F(G_\phi(s_t, a_k), a_k) \right] = \frac{1}{\mu(\mathcal{A})} \int_{a:\, T(s,a)=s_t} F(s, a) \, da,$$
where $s = g(s_t, a)$. Therefore, we get
$$\mathbb{E}\left[ \frac{\mu(\mathcal{A})}{K} \sum_{k=1}^{K} F(G_\phi(s_t, a_k), a_k) \right] = \frac{\mu(\mathcal{A})}{K} \sum_{k=1}^{K} \mathbb{E}\left[ F(G_\phi(s_t, a_k), a_k) \right] = \int_{a:\, T(s,a)=s_t} F(s, a) \, da.$$
Then we complete the proof.
C PSEUDOCODE OF CFLOWNETS
For clarity, we show pseudocode for CFlowNets in Algorithm 1.
Algorithm 1 Generative Continuous Flow Networks (CFlowNets) Algorithm
Initialize: Flow network θ; a pretrained retrieval network G_φ; and empty buffers D and P
1: repeat
2:   Set t = 0, s = s_0
3:   while s ≠ terminal and t < T do
4:     Uniformly sample M actions {a_i}_{i=1}^M from action space A
5:     Compute edge flow F_θ(s_t, a_i) for each a_i ∈ {a_i}_{i=1}^M to generate P
6:     Sample a_t ∼ P and execute a_t in the environment to obtain r_{t+1} and s_{t+1}
7:     t = t + 1
8:   end while
9:   Store episodes {(s_t, a_t, r_t, s_{t+1})}_{t=1}^T in replay buffer D
10:  [Optional] Fine-tune retrieval network G_φ based on D
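For concreteness, here is a condensed Python sketch of the sampling loop of Algorithm 1 combined with the loss of Eq. (20), reusing the select_action and log_flow_matching_loss sketches above; the environment API and all hyper-parameters are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def train_cflownet(env, flow_net, retrieval_net, optimizer,
                   M=1000, K=100, T=50, episodes=1000):
    """Sample a trajectory with the action selection procedure, then take one
    gradient step on the continuous flow matching loss."""
    for _ in range(episodes):
        s, t, episode = env.reset(), 0, []
        while t < T:
            # If flow_net outputs log flows, select_action should sample
            # proportionally to the exponentiated outputs.
            a = select_action(flow_net, s, env.action_low, env.action_high, M)
            s_next, r, done = env.step(a)         # illustrative environment API
            episode.append((s, a, r, s_next))
            s, t = s_next, t + 1
            if done:
                break
        # K actions for the flow matching approximation (resampled once per
        # update here for brevity; the paper samples them per state).
        acts = torch.rand(K, env.action_low.numel()) \
            * (env.action_high - env.action_low) + env.action_low
        loss = sum(log_flow_matching_loss(flow_net, retrieval_net, s_next, r, acts)
                   for (_, _, r, s_next) in episode)
        optimizer.zero_grad(); loss.backward(); optimizer.step()
```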
Both Reacher-Goal-Sparse and Swimmer-Sparse are adapted from OpenAI Gym's MuJoCo environment. In the Reacher-Goal-Sparse, "Reacher" is a two-jointed robotic arm. The goal of the agent is to reach a randomly generated target by moving the robot's end effector. Figure 8 shows the movement process of the robotic arm. By adjusting the torque applied at the hinge joint, the end effector can gradually approach the target. In the Swimmer-Spars, the "swimmer" is suspended in a two-dimensional pool, and the goal is to move as fast as possible towards the right or left. Figure 9 shows the shape change process of the robot during motion. By taking the action that applies torque on the rotors and using the fluids friction, the robot can swim faster. We set the maximum number of steps to 50 for these two environments. For Reacher-Goal-Sparse, when the last step is reached, the environment returns a reward that measures how far the agent is from the randomly generated target. The closer the agent is to the target, the greater the reward. For Swimmer-Sparse, the farther to the left or right from the starting point, the greater the reward returned.
D.2 ADDITIONAL ANALYSIS

Figure 10 shows the average reward and reward distribution of different algorithms on the Point-Robot-OneGoal-Sparse task, where an agent needs to navigate to a specific location. Figure 10(a) indicates that CFlowNets obtain the highest average return compared to the other RL-based algorithms. In Figure 10(b), all algorithms are able to fit the reward distribution well under the one-goal setting, while CFlowNets achieve a better fit. Note that RL algorithms can also learn the reward distribution in this task, since maximizing the reward is the optimal policy in the single-goal case, and the policy is not difficult to learn.

In Figure 11, we provide the action reward distributions of different algorithms with 2e4 total timesteps on Point-Robot-Sparse at Point (4,8), Point (8,4), and Point (7,7), respectively. Note that, unlike Figure 2, where the total number of timesteps is 1e5, here we show results with 2e4 total timesteps, since we found that DDPG overfits after 1e5 timesteps in this task; we therefore show results without overfitting for a fairer comparison. We can see that, at every point, the policy of CFlowNets better matches the real reward distribution. For example, at points (4,8) and (8,4), CFlowNets tend to choose actions that guide the agent towards (5, 10) and (10, 5), respectively. For a location between the two goals (point (7,7)), there are two directions that allow the agent to reach goals with high rewards. In contrast, the policies learned by the RL algorithms only occasionally match the true reward distribution at a certain point, and cannot stably match it at every point. This also shows that the policies learned by the RL algorithms are relatively simple. CFlowNets learn more diverse policies that reach different goals with high rewards, while the other methods usually find one goal instead of all potentially high-reward locations. Figure 12 and Figure 13 show the trajectories produced by different algorithms. In the Point-Robot-OneGoal-Sparse task, the trajectories of DDPG, TD3, and PPO are single, while SAC can select actions from its policy probability distribution, so different trajectories are obtained. In contrast, CFlowNets find more diverse trajectories and also find the highest-reward goal (thickened red trajectory), which means that CFlowNets better explore the region near the goal. In the Point-Robot-Sparse task, the RL-based algorithms seek only one goal, whereas CFlowNets find all goals.

It is worth noting that in Figure 13(e), the density of CFlowNets' sampled trajectories is not as high near the maximum reward as in Figure 12(e); rather, it is denser on the diagonal. This is because, at most positions, the probabilities of choosing to go up and to the right are both relatively high, so their combination makes it easier to move in the diagonal direction. In addition, the reward on the line between the two goals is not small. When sampling actions with probability given by the output of the flow model, many trajectories are therefore likely to reach the diagonal. Figure 14 shows the true reward distribution of Point-Robot-Sparse, where the reward is higher in the areas near the two goals and on the line between them.
D.3 EXPERIMENT RESULTS ON HIGHWAY-PARKING-SPARSE
We evaluate the performance of CFlowNets on Highway-Parking-Sparse, an ego-vehicle control task. As shown in Figure 15, the goal is to make the ego-vehicle park in a given space with the appropriate orientation by adjusting its controller. The dimension of the vehicle observation is 18, consisting of the distance between the vehicle and the parking space, the vehicle speed, the triangular heading information, the goal the agent should attempt to achieve, and the goal it currently achieves. The action space includes control over the throttle and steering angle, and the reward function is set as the distance between the ego-vehicle and the parking space. Figure 16 shows the average reward and the number of valid-distinctive trajectories explored as training progresses for the different algorithms, which illustrates that the performance of CFlowNets is more promising than that of the other RL-based algorithms. Even for this higher-dimensional continuous task, CFlowNets achieve very competitive reward results (outperforming DDPG, TD3, and SAC), while achieving much better exploration performance than the RL-based algorithms.
D.4 BASELINES
We compare our proposed CFlowNets to the following baselines:
• Deep Deterministic Policy Gradient (DDPG) (Lillicrap et al., 2015). https://github.com/sfujim/TD3/blob/master/DDPG.py
• Twin Delayed Deep Deterministic Policy Gradient (TD3) (Fujimoto et al., 2018). https://github.com/sfujim/TD3
• Soft Actor-Critic (SAC) (Haarnoja et al., 2018b). https://github.com/denisyarats/pytorch_sac/
• Proximal Policy Optimization (PPO) (Schulman et al., 2017). https://github.com/DLR-RM/stable-baselines3/blob/master/stable_baselines3/ppo/ppo.py
D.5 HYPER-PARAMETER
We provide the hyper-parameters of all compared methods under different environments in Table 1, Table 2, Table 3, Table 4, and Table 5.
As for "Total Timesteps", "Start Traning Timestep", "Max Episode Length", "Actor Network Hidden Layers", "Critic Network Hidden Layers", "Optimizer", "Learning Rate", and "Discount Fac-tor", we set them the same for all algorithms for a fair comparison. As for these specific parameters for baseline algorithms, we remain them the same as those in the original code to achieve good performance. As for these specific parameters of our CFlowNets, we set the number of sample flows to 100 and the action probability buffer size to 1000 to tradeoff the performance and computational load. Note that CFlowNets dose not require as large a replay buffer size as other RL algorithms, since the exploration ability of CFlowNets is better than that of others. And a good policy can already be learned from a small replay buffer. This is also an advantage of CFlowNets compared to RL based algorithms.
Figure 1: Overall framework of CFlowNets. Left: During the environment interaction phase, we sample actions to update states with probabilities proportional to the reward according to CFlowNet. Middle: We randomly sample actions to approximately calculate the inflows and outflows, where a DNN is used to estimate the parent states. Right: Continuous flow matching loss is used to train the CFlowNet based on making inflows equal to outflows or reward.

Figure 2: Reward distributions on the Point-Robot-Sparse task.

Figure 3: Comparison results of CFlowNets, DDPG, TD3, SAC and PPO on Point-Robot-Sparse, Reacher-Goal-Sparse, and Swimmer-Sparse tasks. Top: Number of valid-distinctive trajectories generated under 10000 explorations. Bottom: The average reward of different methods.

Figure 4: Pendulum. It is difficult for the state to be completely consistent in this continuous space.

Figure 5: Pendulum-with-Wall. The state becomes consistent when reaching the wall.

Figure 6: Accumulated maximum Lipschitz constant of the flow network F(s, a).
, we calculate |F (s,a)−F (s,a )| a−a and |F (s,a)−F (s ,a)| s−s of each sample tuple (s, a, a ) and (s, s , a) to analysis the Lipschitz constant, respectively. Their accumulated maximum Lipschitz constants are shown in Figures 6 (a) and (b), respectively. Clearly, there exists a finite Lipschitz constant for our flow network. In addition, Lipschitz continuous is a common assumption of neural networks, just some quick examples: Du et al. (2019); Jacot et al. (2018); Allen-Zhu et al. (2019);
Theorem 1 (Continuous Flow Matching Condition, restated). Consider a non-negative function $F(s,a)$ taking a state $s\in\mathcal{S}$ and an action $a\in\mathcal{A}$ as inputs. Then $F$ corresponds to a flow if and only if the following continuous flow matching conditions are satisfied:
$$\forall s' > s_0:\ F(s') = \int_{s\in\mathcal{P}(s')} F(s\to s')\,\mathrm{d}s = \int_{s:\,T(s,a)=s'} F(s, a: s\to s')\,\mathrm{d}s,$$
$$\forall s' < s_f:\ F(s') = \int_{s''\in\mathcal{C}(s')} F(s'\to s'')\,\mathrm{d}s'' = \int_{a\in\mathcal{A}} F(s', a)\,\mathrm{d}a.$$
$$F(s') = \int_{\tau:\, s'\in\tau} F(\tau)\,\mathrm{d}\tau = \int_{s\in\mathcal{P}(s')} \int_{\tau:\, s\to s'\in\tau} F(\tau)\,\mathrm{d}\tau\,\mathrm{d}s = \int_{s\in\mathcal{P}(s')} F(s\to s')\,\mathrm{d}s.$$
is trivial. Define $d$ as the maximum trajectory length in $\mathcal{T}_{s,f}$, $s \ne s_f$; we have the base case: if $d = 1$, then $\int_{\tau\in\mathcal{T}_{s,f}} P_F(\tau)\,\mathrm{d}\tau = P_F(s\to s_f) = 1$ holds, noting that $\mathcal{T}_{s,f} = \{(s\to s_f)\}$.
$$F(G_\phi(s_t, a_k), a_k), \qquad \int_{s:\,T(s,a)=s_t} F(s,a)\,\mathrm{d}s,$$
where the retrieval network $G_\phi$ provides parent states so that the terms $F(G_\phi(s_t, a_k), a_k)$ stand in for the inflow integral.
Figure 7: Visualization of Point-Robot-Sparse task.
Figure 8: Visualization of Reacher-Goal-Sparse task.
Figure 10: The average reward and reward distributions of CFlowNets, DDPG, TD3, SAC and PPO on Point-Robot-OneGoal-Sparse task.
Figure 11: The reward distributions of different points on Point-Robot-Sparse task.
Figure 12: Sampled trajectories on Point-Robot-OneGoal-Sparse task.
Figure 13: Sampled trajectories on Point-Robot-Sparse task.
Figure 14: Reward distributions on Point-Robot-Sparse task.
Figure 12 and Figure 13 show the trajectory visualizations produced by different algorithms. In the Point-Robot-OneGoal-Sparse task, the trajectories of DDPG, TD3, and PPO are single, while SAC can select actions from the policy probability distribution, so different trajectories can be obtained. In contrast, CFlowNets found more diverse trajectories and also found the highest-reward goal (thickened red trajectory), which means that CFlowNets can better explore the region near the goal. In the Point-Robot-Sparse task, the RL-based algorithms seek only one goal. However, CFlowNets can find all goals.
Figure 16: The average reward and number of valid-distinctive trajectories generated under 10000 explorations of CFlowNets, DDPG, TD3, and SAC on Highway-Parking-Sparse.
Dinghuai Zhang, Ricky T. Q. Chen, Nikolay Malkin, and Yoshua Bengio. Unifying generative models with GFlowNets. arXiv preprint arXiv:2209.02606, 2022a.
Dinghuai Zhang, Nikolay Malkin, Zhen Liu, Alexandra Volokhova, Aaron Courville, and Yoshua Bengio. Generative flow networks for discrete probabilistic modeling. arXiv preprint arXiv:2202.01361, 2022b.
A DISCUSSIONS
A.1 WHY IS ASSUMPTION 1 NECESSARY AND REASONABLE?
Table 1: Hyper-parameters of CFlowNets under different environments.
Table 4: Hyper-parameters of SAC under different environments.

                              Point-Robot-Sparse   Reacher-Goal-Sparse   Swimmer-Sparse
Total Timesteps               100,000              100,000               100,000
Start Training Timestep       4,000                7,500                 7,500
Max Episode Length            12                   50                    50
Actor Network Hidden Layers   [256,256]            [256,256]             [256,256]
Critic Network Hidden Layers  [256,256]            [256,256]             [256,256]
Optimizer                     Adam                 Adam                  Adam
Learning Rate                 0.0003               0.0003                0.0003
Batchsize                     1024                 1024                  1024
Discount Factor               0.99                 0.99                  0.99
Replay Buffer Size            100,000              100,000               100,000
Target Update Interval        1                    1                     1
Table 5: Hyper-parameters of PPO under different environments.

                              Point-Robot-Sparse   Reacher-Goal-Sparse   Swimmer-Sparse
Total Timesteps               100,000              100,000               100,000
Max Episode Length            12                   50                    50
Policy Network Hidden Layers  [64,64]              [64,64]               [64,64]
Value Network Hidden Layers   [64,64]              [64,64]               [64,64]
Optimizer                     Adam                 Adam                  Adam
Learning Rate                 0.0003               0.0003                0.0003
Batchsize                     64                   64                    64
Discount Factor               0.99                 0.99                  0.99
GAE Parameter                 0.95                 0.95                  0.95
Timesteps per Update          2048                 2048                  2048
Number of Epochs              10                   10                    10
Clipping Parameter            0.2                  0.2                   0.2
Value Loss Coefficient        0.5                  0.5                   0.5
CFLOWNETS: TRAINING FRAMEWORK
For continuous tasks, it is usually difficult to access all state-action pairs to compute the continuous inflows and outflows. In the following, we propose the CFlowNets training framework to address this problem, which includes an action sampling process and a flow matching approximation process. CFlowNets can then be trained based on an approximate flow matching loss function, as sketched below.
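The following is a hedged PyTorch sketch of what such an approximate flow-matching loss can look like. It is not the paper's exact implementation: `F_net` (a nonnegative edge-flow network), `G_net` (the parent-retrieval network $G_\phi$), the uniform action-sampling range, and the omission of terminal-reward terms are all illustrative assumptions.

```python
import torch

def approx_flow_matching_loss(F_net, G_net, states, K=100,
                              action_dim=2, eps=1e-8):
    """Approximate continuous flow-matching loss (sketch).

    Assumes F_net(s, a) returns nonnegative edge flows of shape (B, K)
    and G_net(s', a) returns the parent state s with T(s, a) = s'.
    Terminal reward terms are omitted for brevity.
    """
    B = states.shape[0]
    a = 2 * torch.rand(B, K, action_dim) - 1           # K sampled actions
    s_next = states.unsqueeze(1).expand(B, K, states.shape[-1])
    parents = G_net(s_next, a)                         # states flowing into s'
    inflow = F_net(parents, a).sum(dim=1)              # sampled-integral proxies
    outflow = F_net(s_next, a).sum(dim=1)
    return ((torch.log(inflow + eps) - torch.log(outflow + eps)) ** 2).mean()
```

Working in log space, as above, is the standard GFlowNet design choice: it keeps the loss well scaled when flows span several orders of magnitude.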
A commonly used reward-shaping method is to multiply the reward by a constant, adjusting the reward to an appropriate range to ensure better convergence. Therefore, after setting λ to 1, a reasonable reward-shaping operation can also compensate for the error introduced by λ.
Dan Alistarh, Torsten Hoefler, Mikael Johansson, Nikola Konstantinov, Sarit Khirirat, and Cédric Renggli. The convergence of sparsified gradient methods. Advances in Neural Information Processing Systems, 31, 2018.
Zeyuan Allen-Zhu, Yuanzhi Li, and Zhao Song. A convergence theory for deep learning via overparameterization. In International Conference on Machine Learning, pp. 242-252. PMLR, 2019.
OpenAI: Marcin Andrychowicz, Bowen Baker, Maciek Chociej, Rafal Jozefowicz, Bob McGrew, Jakub Pachocki, Arthur Petron, Matthias Plappert, Glenn Powell, Alex Ray, et al. Learning dexterous in-hand manipulation. The International Journal of Robotics Research, 39(1):3-20, 2020.
Emmanuel Bengio, Moksh Jain, Maksym Korablyov, Doina Precup, and Yoshua Bengio. Flow network based generative models for non-iterative diverse candidate generation, 2021a.
Yoshua Bengio, Tristan Deleu, Edward J. Hu, Salem Lahlou, Mo Tiwari, and Emmanuel Bengio. GFlowNet foundations, 2021b.
Karl W Cobbe, Jacob Hilton, Oleg Klimov, and John Schulman. Phasic policy gradient. In International Conference on Machine Learning, pp. 2020-2027. PMLR, 2021.
Tristan Deleu, António Góis, Chris Emezue, Mansi Rankawat, Simon Lacoste-Julien, Stefan Bauer, and Yoshua Bengio. Bayesian structure learning with generative flow networks. arXiv preprint arXiv:2202.13903, 2022.
Simon Du, Jason Lee, Haochuan Li, Liwei Wang, and Xiyu Zhai. Gradient descent finds global minima of deep neural networks. In International Conference on Machine Learning, pp. 1675-1685. PMLR, 2019.
Lasse Espeholt, Hubert Soyer, Remi Munos, Karen Simonyan, Vlad Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, et al. IMPALA: Scalable distributed deep-RL with importance weighted actor-learner architectures. In International Conference on Machine Learning, pp. 1407-1416. PMLR, 2018.
Scott Fujimoto, Herke Hoof, and David Meger. Addressing function approximation error in actor-critic methods. In International Conference on Machine Learning, pp. 1587-1596. PMLR, 2018.
Ivo Grondman, Lucian Busoniu, Gabriel AD Lopes, and Robert Babuska. A survey of actor-critic reinforcement learning: Standard and natural policy gradients. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 42(6):1291-1307, 2012.
Tuomas Haarnoja, Kristian Hartikainen, Pieter Abbeel, and Sergey Levine. Latent space policies for hierarchical reinforcement learning. In International Conference on Machine Learning, pp. 1851-1860. PMLR, 2018a.
Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. In International Conference on Machine Learning, pp. 1861-1870. PMLR, 2018b.
Elad Hazan, Sham Kakade, Karan Singh, and Abby Van Soest. Provably efficient maximum entropy exploration. In International Conference on Machine Learning, pp. 2681-2691. PMLR, 2019.
Arthur Jacot, Franck Gabriel, and Clément Hongler. Neural tangent kernel: Convergence and generalization in neural networks. Advances in Neural Information Processing Systems, 31, 2018.
Moksh Jain, Emmanuel Bengio, Alex Hernandez-Garcia, Jarrid Rector-Brooks, Bonaventure FP Dossou, Chanakya Ajit Ekbote, Jie Fu, Tianyu Zhang, Michael Kilgour, Dinghuai Zhang, et al. Biological sequence design with GFlowNets. In International Conference on Machine Learning, pp. 9786-9801. PMLR, 2022.
Leslie Pack Kaelbling, Michael L Littman, and Andrew W Moore. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237-285, 1996.
Shauharda Khadka and Kagan Tumer. Evolution-guided policy gradient in reinforcement learning. Advances in Neural Information Processing Systems, 31, 2018.
B Ravi Kiran, Ibrahim Sobh, Victor Talpaert, Patrick Mannion, Ahmad A Al Sallab, Senthil Yogamani, and Patrick Pérez. Deep reinforcement learning for autonomous driving: A survey. IEEE Transactions on Intelligent Transportation Systems, 2021.
Nate Kohl and Peter Stone. Policy gradient reinforcement learning for fast quadrupedal locomotion. In IEEE International Conference on Robotics and Automation (ICRA'04), vol. 3, pp. 2619-2624. IEEE, 2004.
Wenqian Li, Yinchuan Li, Shengyu Zhu, Yunfeng Shao, Jianye Hao, and Yan Pang. GFlowCausal: Generative flow networks for causal discovery. arXiv preprint arXiv:2210.08185, 2022.
Wenqian Li, Yinchuan Li, Zhigang Li, Jianye Hao, and Yan Pang. DAG matters! GFlowNets enhanced explainer for graph neural networks. In International Conference on Learning Representations, 2023.
Timothy P Lillicrap, Jonathan J Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, and Daan Wierstra. Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971, 2015.
Nikolay Malkin, Moksh Jain, Emmanuel Bengio, Chen Sun, and Yoshua Bengio. Trajectory balance: Improved credit assignment in GFlowNets. arXiv preprint arXiv:2201.13259, 2022.
Xinlei Pan, Yurong You, Ziyan Wang, and Cewu Lu. Virtual to real reinforcement learning for autonomous driving. arXiv preprint arXiv:1704.03952, 2017.
Silviu Pitis, Harris Chan, Stephen Zhao, Bradly Stadie, and Jimmy Ba. Maximum entropy gain exploration for long horizon multi-goal reinforcement learning. In International Conference on Machine Learning, pp. 7750-7761. PMLR, 2020.
Michael T Rosenstein, Andrew G Barto, Jennie Si, Andy Barto, Warren Powell, and Donald Wunsch. Supervised actor-critic reinforcement learning. In Learning and Approximate Dynamic Programming: Scaling Up to the Real World, pp. 359-380, 2004.
John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. Trust region policy optimization. In International Conference on Machine Learning, pp. 1889-1897. PMLR, 2015.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017.
Shai Shalev-Shwartz, Shaked Shammah, and Amnon Shashua. Safe, multi-agent, reinforcement learning for autonomous driving. arXiv preprint arXiv:1610.03295, 2016.
David Silver, Guy Lever, Nicolas Heess, Thomas Degris, Daan Wierstra, and Martin Riedmiller. Deterministic policy gradient algorithms. In International Conference on Machine Learning, pp. 387-395. PMLR, 2014.
Adam Stooke and Pieter Abbeel. Accelerated methods for deep reinforcement learning. arXiv preprint arXiv:1803.02811, 2018.
Richard S Sutton and Andrew G Barto. Reinforcement Learning: An Introduction. MIT Press, 2018.
Richard S Sutton, David McAllester, Satinder Singh, and Yishay Mansour. Policy gradient methods for reinforcement learning with function approximation. Advances in Neural Information Processing Systems, 12, 1999.
Roman Vershynin. High-Dimensional Probability: An Introduction with Applications in Data Science, volume 47. Cambridge University Press, 2018.
Denis Yarats, Rob Fergus, Alessandro Lazaric, and Lerrel Pinto. Reinforcement learning with prototypical representations. In International Conference on Machine Learning, pp. 11920-11931. PMLR, 2021.
| [] |
[
"Repair of Reed-Solomon Codes in the Presence of Erroneous Nodes",
"Repair of Reed-Solomon Codes in the Presence of Erroneous Nodes"
] | [
"Stanislav Kruglik ",
"Gaojun Luo ",
"Wilton Kim ",
"Shubhransh Singhvi [email protected] \nSignal Processing & Communications Research Center\nInternational Institute of Information Technology\nHyderabadIndia\n",
"Han Mao Kiah ",
"San Ling ",
"Huaxiong Wang [email protected] ",
"\nSchool of Physical and Mathematical Sciences\nNanyang Technological University\nSingapore\n"
] | [
"Signal Processing & Communications Research Center\nInternational Institute of Information Technology\nHyderabadIndia",
"School of Physical and Mathematical Sciences\nNanyang Technological University\nSingapore"
] | [] | We consider the repair scheme of Guruswami-Wootters for the Reed-Solomon code and ask: can we correctly repair a failed node in the presence of erroneous nodes? Equivalently, we consider the collection of downloaded traces as a code and investigate its code-distance properties. We propose three lower bounds on its minimum distance and study methods to efficiently correct errors close to these bounds. | 10.48550/arxiv.2305.03442 | [
"https://export.arxiv.org/pdf/2305.03442v1.pdf"
] | 258,546,728 | 2305.03442 | 37cc4b6885f55166feae6128f9f0f9edeae60a65 |
Repair of Reed-Solomon Codes in the Presence of Erroneous Nodes
Stanislav Kruglik
Gaojun Luo
Wilton Kim
Shubhransh Singhvi [email protected]
Signal Processing & Communications Research Center
International Institute of Information Technology
HyderabadIndia
Han Mao Kiah
San Ling
Huaxiong Wang [email protected]
School of Physical and Mathematical Sciences
Nanyang Technological University
Singapore
Repair of Reed-Solomon Codes in the Presence of Erroneous Nodes
We consider the repair scheme of Guruswami-Wootters for the Reed-Solomon code and ask: can we correctly repair a failed node in the presence of erroneous nodes? Equivalently, we consider the collection of downloaded traces as a code and investigate its code-distance properties. We propose three lower bounds on its minimum distance and study methods to efficiently correct errors close to these bounds.
I. INTRODUCTION
Distributed storage becomes more popular as data volumes increase exponentially in all industries. The data can be represented as $\mathbf{x} \in \mathbb{F}^k$ for some finite field $\mathbb{F}$. To protect against erasures, the data is encoded into $\mathbf{c} = (c_1, \dots, c_n)$ and each $c_i$ is kept on server $i$. One important performance metric of distributed storage is the total amount of information downloaded to perform the recovery, called the repair bandwidth, which was introduced in [1]. Reed-Solomon codes [2], [3] are widely used as they allow one to recover $\mathbf{x}$ by utilizing $k$ available servers. However, this approach is not optimal for repairing a single erasure, as we need to download $k$ code symbols to recover one code symbol.
Many studies have been conducted to improve the repair bandwidth [4], [5]. The pioneering work of Guruswami and Wootters [6] revisits the widely used Reed-Solomon codes and shows that the repair bandwidth for a single erasure can be decreased dramatically when more than $k$ nodes are available. Roughly speaking, in this scheme, instead of downloading $k$ symbols in $\mathbb{F}$, we download $n-1$ sub-symbols, called traces, in a base field $\mathbb{B}$. Using certain parity-check equations (see Section II-A for details), we then recover the failed node.
Later, the Guruswami-Wootters repair scheme, or the trace repair framework, was extended to different scenarios in a number of works [7]-[14]. All of these studies, however, assume that all available nodes provide correct information.
In this paper, we consider the case where nodes can provide wrong information, and we attempt to answer the following question: is it possible to correctly repair a node with low bandwidth in the presence of erroneous nodes? Previously, the problem of erroneous trace correction in Reed-Solomon repair was solved in the high sub-packetization regime [15]. Our approach, on the other hand, is applicable to any sub-packetization level. Furthermore, applications extend beyond coding for distributed storage. In the case of secret sharing schemes based on Reed-Solomon codes (i.e., Shamir's secret sharing scheme [16]), our methods allow shareholders to successfully obtain a secret in the presence of malicious shareholders.
A. Motivating Example
Let $\mathbb{F} = \mathrm{GF}(16)$. Set $n = 16$ and $k = 2$, and consider a $[16, 2, 15]$ RS code. We can then correct five errors and four erasures. Hence, in the classical approach, when there are at most five erroneous nodes, we download $16 - 4 = 12$ symbols from any twelve available nodes to repair a failed node. In other words, the repair bandwidth in this case is $12 \times 4 = 48$ bits.
On the other hand, we consider $\mathbb{F}$ as an extension field of $\mathbb{B} = \mathrm{GF}(4)$. Then the Guruswami-Wootters repair scheme [6] allows us to repair a failed node by downloading 15 traces (see Theorem 2). Later, we show that the traces form a $\mathbb{B}$-linear code with minimum distance 11 (see Theorem 3). Therefore, using these 15 traces, we are able to correct five errors. Here, our repair bandwidth is only $15 \times 2 = 30$ bits.
B. Our Contributions
In the spirit of Guruswami and Wootters, our primary objective is simply to understand what can be done for Reed-Solomon codes. Specifically, we focus on the Guruswami-Wootters repair scheme (which we review in Section II-A) and ask: can we correctly repair a failed node in the presence of erroneous nodes? Equivalently, we consider the collection of traces as a $\mathbb{B}$-linear code $\mathsf{T}$ and ask what the minimum distance of this code is. In Section III, we first show that this code is in fact a subcode of a generalized Reed-Solomon code. Hence, we are able to apply efficient decoding algorithms like Berlekamp-Welch to correct errors. This gives us a lower bound when $k$ is small. For larger values of $k$, we construct additional parity-check equations over $\mathbb{F}$ and use lifted decoding to correct errors (see Section III-A). Finally, we use character sums to provide another lower bound. We remark that similar techniques were used in [17]-[22], but most of these works focus on polynomial trace codes, while we consider more general rational trace codes. To efficiently correct errors close to these bounds, we modify the famous Guruswami-Sudan list-decoding algorithm in Section IV. Finally, in Section V, we compare the various bounds obtained in the paper.
II. PRELIMINARIES
Let $[n]$ denote the set of integers $\{1,\dots,n\}$. Let $\mathbb{B}$ be the finite field with $p^m$ elements and $\mathbb{F}$ its extension of degree $t \ge 1$. So, $|\mathbb{F}| = p^{mt}$ and $|\mathbb{B}| = p^m$. We refer to the elements of $\mathbb{F}$ as symbols and to the elements of $\mathbb{B}$ as sub-symbols. We use $\mathbb{F}[x]$ to denote the ring of polynomials over the finite field $\mathbb{F}$.
An $\mathbb{F}$-linear $[n,k]$ code $\mathcal{C}$ is a $k$-dimensional subspace of $\mathbb{F}^n$. We denote the dual of the code $\mathcal{C}$ by $\mathcal{C}^\perp$; for each $\mathbf{c} = (c_1,\dots,c_n) \in \mathcal{C}$ and $\mathbf{c}^\perp = (c_1^\perp,\dots,c_n^\perp) \in \mathcal{C}^\perp$, it holds that $\sum_{i=1}^n c_i c_i^\perp = 0$. We denote the minimum distance of $\mathcal{C}$ by $d(\mathcal{C})$; the Singleton bound states that $d(\mathcal{C}) \le n-k+1$ (see, for example, [17]). Codes that attain this bound are called maximum-distance separable (MDS) codes, and in this work we focus on the following class of MDS codes.
Definition 1. Let $A \subseteq \mathbb{F}$. The Reed-Solomon code $\mathrm{RS}(A,k)$ of dimension $k$ with evaluation points $A$ is defined as
$$\mathrm{RS}(A,k) \triangleq \{(f(\alpha))_{\alpha\in A} : f \in \mathbb{F}[x],\ \deg(f(X)) \le k-1\},$$
while the generalized Reed-Solomon code of dimension $k$ with evaluation points $A \subseteq \mathbb{F}$ and multiplier vector $\boldsymbol{\lambda} \in \mathbb{F}^n \setminus \{\mathbf{0}\}$ is defined as
$$\mathrm{GRS}(A,k,\boldsymbol{\lambda}) \triangleq \{(\lambda_\alpha f(\alpha))_{\alpha\in A} : f \in \mathbb{F}[x],\ \deg(f(X)) \le k-1\}.$$
Clearly, the generalized Reed-Solomon code with multiplier vector $\boldsymbol{\lambda} = (1,\dots,1)$ is a Reed-Solomon code of the same length and dimension. It is well known (see [17]) that the dual of $\mathrm{RS}(A,k)$ is $\mathrm{GRS}(A, |A|-k, \boldsymbol{\lambda})$ for $\boldsymbol{\lambda} = (\lambda_\alpha)_{\alpha\in A}$, where
$$\lambda_j = \frac{1}{\prod_{\alpha_i \in A\setminus\{\alpha_j\}} (\alpha_j - \alpha_i)}. \quad (1)$$
Note that when $A = \mathbb{F}$, we have $\lambda_\alpha = 1$ for all $\alpha \in A$.
If it is clear from the context, we use $f(x)$ to denote the polynomial of degree at most $k-1$ corresponding to $\mathrm{RS}(A,k)$ and $r(x)$ to denote the polynomial of degree at most $|A|-k-1$ corresponding to the dual codeword in $\mathcal{C}^\perp$.
A. Trace Repair Framework
In this section, we discuss the trace repair framework for recovering a single erased node. The main idea of the trace repair framework is to recover a symbol in $\mathbb{F}$ by using sub-symbols in $\mathbb{B}$. Without loss of generality, let us assume that $f(0)$ is erased. Let $A \subseteq \mathbb{F}\setminus\{0\}$ be the set of evaluation points.
We consider the trace function $\mathrm{Tr}\colon \mathbb{F}\to\mathbb{B}$ defined as
$$\mathrm{Tr}(x) = \sum_{i=0}^{t-1} x^{|\mathbb{B}|^i}, \quad \text{for all } x\in\mathbb{F}. \quad (2)$$
Clearly, $\mathrm{Tr}(x)$ is a polynomial in $x$ of degree $p^{mt-m}$. Next, we discuss how this trace function helps us in the recovery. We regard $\mathbb{F}$ as a $\mathbb{B}$-linear vector space of dimension $t$, and let $\{u_1,\dots,u_t\}$ be a basis of $\mathbb{F}$ over $\mathbb{B}$. Furthermore, there exists a trace-dual basis $\{\hat u_1,\dots,\hat u_t\}$ for $\mathbb{F}$ such that $\mathrm{Tr}(u_i \hat u_j) = 1$ if $i=j$, and $\mathrm{Tr}(u_i \hat u_j) = 0$ otherwise. The following result plays a crucial role in our evaluation framework.
Proposition 1 ([23, Ch. 2]). Let $\{u_1,\dots,u_t\}$ be a $\mathbb{B}$-basis of $\mathbb{F}$. Then there exists a trace-dual basis $\{\hat u_1,\dots,\hat u_t\}$, and we can write each element $x\in\mathbb{F}$ as
$$x = \sum_{i=1}^{t} \mathrm{Tr}(u_i x)\, \hat u_i.$$
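For the motivating example $\mathbb{F} = \mathrm{GF}(16)$ over $\mathbb{B} = \mathrm{GF}(4)$, equation (2) reads $\mathrm{Tr}(x) = x + x^4$. The following hedged Python sketch checks that this map indeed lands in the $\mathrm{GF}(4)$ subfield. The concrete modulus $x^4 + x + 1$ for $\mathrm{GF}(16)$ is our own illustrative choice, not one fixed by the paper.

```python
# GF(16) = GF(2)[x]/(x^4 + x + 1), elements as 4-bit ints; addition is XOR.
def gf16_mul(a, b):
    r = 0
    for _ in range(4):
        if b & 1:
            r ^= a
        b >>= 1
        carry = a & 0x8
        a = (a << 1) & 0xF
        if carry:
            a ^= 0x3          # reduce by x^4 = x + 1
    return r

def gf16_pow4(a):
    a2 = gf16_mul(a, a)
    return gf16_mul(a2, a2)

for x in range(16):
    t = x ^ gf16_pow4(x)      # Tr(x) = x + x^4, since |B| = 4 and t = 2
    # The image must lie in the subfield GF(4) = {y : y^4 = y}.
    assert gf16_pow4(t) == t
print("Tr maps GF(16) into the GF(4) subfield")
```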
This means that, in order to recover $f(0)$, we need to determine $\mathrm{Tr}(u_i \lambda_0 f(0))$ for all $i=1,\dots,t$ by downloading certain information from the remaining nodes. To do so, we consider
$$p_i(x) = \frac{\mathrm{Tr}(u_i x)}{x}, \quad \text{for all } i=1,\dots,t. \quad (3)$$
One can check that $p_i$ is a polynomial of degree $p^{mt-m}-1$ and that $p_i(0) = u_i$. If $p^{mt-m}-1 \le |A|-k$, then the following parity-check equations hold:
$$u_i \lambda_0 f(0) = \sum_{\alpha\in A} p_i(\alpha)\lambda_\alpha f(\alpha). \quad (4)$$
Applying the trace function to both sides of (4), we obtain
$$\mathrm{Tr}(u_i\lambda_0 f(0)) = \sum_{\alpha\in A} \mathrm{Tr}(p_i(\alpha)\lambda_\alpha f(\alpha)) = \sum_{\alpha\in A} \mathrm{Tr}(u_i\alpha)\,\mathrm{Tr}\!\left(\frac{\lambda_\alpha f(\alpha)}{\alpha}\right). \quad (5)$$
Therefore, it suffices to download $\mathrm{Tr}(\lambda_\alpha f(\alpha)/\alpha)$ from node $\alpha$. This motivates us to study the following code.
Definition 2. The repair-trace code with evaluation points $A\subseteq\mathbb{F}\setminus\{0\}$ is defined as
$$\mathsf{T}(A,k) \triangleq \{(\mathrm{Tr}(\lambda_\alpha f(\alpha)/\alpha))_{\alpha\in A} : f\in\mathbb{F}[x],\ \deg(f(X))\le k-1\}. \quad (6)$$
Remark 3. It is possible that $|\mathsf{T}(A,k)| < |\mathbb{F}|^k$. That is, in the definition of $\mathsf{T}$, two distinct polynomials $f$ and $g$ (of degree at most $k-1$) may correspond to the same codeword, i.e., $\mathrm{Tr}(\lambda_\alpha f(\alpha)/\alpha) = \mathrm{Tr}(\lambda_\alpha g(\alpha)/\alpha)$ for all $\alpha\in A$. Nevertheless, $\mathsf{T}(A,k)$ is a $\mathbb{B}$-linear code.
The above technique is summarized in the theorem below.
Theorem 2 ([6, Guruswami-Wootters]). If $|A| \ge p^{mt-m}-1+k$, then given $\mathbf{c}\in\mathsf{T}(A,k)$, we can efficiently compute $f(0)$.
Our main task is to determine the minimum distance of $\mathsf{T}(A,k)$. If the distance is $d$, then we are able to correct $\lfloor (d-1)/2\rfloor$ errors. In addition, we investigate algorithms that correct this number of errors efficiently.
B. Main Results
In this conference paper, to simplify our exposition, we focus on the case where the data is stored on a full-length Reed-Solomon code of length $n = p^{mt}$, dimension $k$ and code rate $R$. Hence, $\lambda_\alpha = 1$ for all $\alpha$. Then the repair-trace code is simply $\mathsf{T}(A,k) = \{(\mathrm{Tr}(f(\alpha)/\alpha))_{\alpha\in A} : \deg(f(X))\le k-1\}$, and we summarize our results in the following theorem.
Theorem 3. Consider the full-length Reed-Solomon code and let $0$ be the failed node $f(0)$. Let $d$ be the minimum distance of the corresponding repair-trace code $\mathsf{T}(A,k)$ with $A = \mathbb{F}\setminus\{0\}$. The following bounds on $d$ hold:
(i) (Degree Bound). If $k \le p^m$, then $d \ge p^{mt}-1-\Delta \triangleq d_1$, where
$$\Delta \triangleq \begin{cases} (k-1)p^{mt-m}, & \text{when } k \ge 2,\\ p^{mt-m}-1, & \text{when } k = 1.\end{cases} \quad (7)$$
(ii) (Lifted Decoding). If $k \le p^{mt}-p^{mt-m}$, then $d \ge \left\lfloor \frac{p^{mt}-k}{p^{mt-m}} \right\rfloor \triangleq d_2$.
(iii) (Character Sum Bound). If $k < 1 + \frac{p^{mt}-1}{\sqrt{p^{mt}}}$, then $d \ge d_3$, where
$$d_3 \triangleq \begin{cases} \frac{p^m-p}{p^m}\left(p^{mt}-1-(k-1)\sqrt{p^{mt}}\right), & \text{when } m \ge 2,\\[2pt] \frac{p-1}{p}\left(p^t-1-(k-1)\sqrt{p^t}\right), & \text{when } m = 1.\end{cases} \quad (8)$$
We can efficiently correct up to the distances promised by Theorem 3(i) and (ii). For Theorem 3(iii), we modify the famous Guruswami-Sudan algorithm to correct errors close to the character sum bound. We note that the results of Theorem 3(i) and (ii) can be generalized to non-full-length Reed-Solomon codes.
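The three bounds are easy to tabulate. The hedged helper below evaluates them for given parameters; the floor in $d_2$ follows the range of $\ell$ in Lemma 5, and the function is ours, not the paper's.

```python
import math

def theorem3_bounds(p, m, t, k):
    """Evaluate the lower bounds d1, d2, d3 of Theorem 3 for the
    full-length code, where |A| = p^(mt) - 1."""
    q, n = p ** (m * t), p ** (m * t) - 1
    delta = (k - 1) * p ** (m * t - m) if k >= 2 else p ** (m * t - m) - 1
    d1 = n - delta
    d2 = (q - k) // p ** (m * t - m)
    factor = (p ** m - p) / p ** m if m >= 2 else (p - 1) / p
    d3 = factor * (n - (k - 1) * math.sqrt(q))
    return d1, d2, d3

# The (p, m, t) = (5, 1, 2) setting of Table I, for a few dimensions k:
for k in range(1, 6):
    print(k, theorem3_bounds(5, 1, 2, k))
```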
III. LOWER BOUNDS FOR THE MINIMUM DISTANCE
In this section, we prove Theorem 3. Recall that $A$ is a set of nonzero elements of $\mathbb{F}$. First, we consider the code
$$\mathsf{T}_1 \triangleq \left\{\left(\alpha^{p^{mt-m}} c_\alpha\right)_{\alpha\in A} : \mathbf{c}\in\mathsf{T}(A,k)\right\}$$
and show that it is a subcode of some generalized Reed-Solomon code.
Proposition 4. Let $\Delta$ be defined as in (7). If $\Delta < |A|$, then $\mathsf{T}_1 \subseteq \mathrm{GRS}(A, \Delta+1, \boldsymbol{\mu}_1)$ for some $\boldsymbol{\mu}_1$.
Proof. We note that $\mathbf{c}^* \in \mathsf{T}_1$ can be represented as
$$(F(\alpha))_{\alpha\in A} = \left(\alpha^{p^{mt-m}}\,\mathrm{Tr}\!\left(\frac{f(\alpha)}{\alpha}\right)\right)_{\alpha\in A} = \left(\alpha^{p^{mt-m}}\sum_{i=0}^{t-1}\left(\frac{f(\alpha)}{\alpha}\right)^{p^{mi}}\right)_{\alpha\in A} = \left(\alpha^{p^{mt-m}-1}f(\alpha) + \cdots + f(\alpha)^{p^{mt-m}}\right)_{\alpha\in A},$$
where $F(x)$ is a polynomial of degree at most $\max(k+p^{mt-m}-2, \dots, (k-1)p^{mt-m})$. This finishes the proof.
Since $\mathsf{T}_1$ is equivalent to the repair-trace code $\mathsf{T}(A,k)$ (since $\alpha \ne 0$ for all $\alpha\in A$), the minimum distance of $\mathsf{T}(A,k)$ is at least $|A|-\Delta$, and we obtain Theorem 3(i). We note that we can efficiently correct up to the promised distance using any Reed-Solomon bounded-distance decoder (see [17]).
A. Lifted Decoding
Proposition 4 applies only when $k$ is at most $\sqrt{n}$. In this section, we study the other extreme and look for bounds that apply when $k$ is large. In fact, we demonstrate that $\mathsf{T}(A,k)$ is a subcode of a generalized Reed-Solomon code with a different set of parameters and minimum distance at least $p^m(1-R)$. To this end, we form the following set of parity-check equations, similar to (5).
Lemma 5. For $2 \le \ell \le \left\lfloor \frac{p^{mt}-k}{p^{mt-m}} \right\rfloor$, we have
$$\sum_{\alpha\in A} \alpha^\ell\, \mathrm{Tr}\!\left(\frac{f(\alpha)}{\alpha}\right) = 0.$$
Proof. Let $\{u_1,\dots,u_t\}$ be the basis of $\mathbb{F}$ over $\mathbb{B}$ and $\{\eta_1,\dots,\eta_t\}$ be its trace-dual basis. We have the following codewords of the dual of $\mathrm{RS}(A\cup\{0\}, k)$:
$$r_i^{(\ell)}(x) \triangleq \frac{\mathrm{Tr}(u_i x^\ell)}{x}, \quad (9)$$
for all $i=1,\dots,t$ and $\ell = 2,\dots,\left\lfloor \frac{p^{mt}-k}{p^{mt-m}} \right\rfloor$. It is clear that $r_i^{(\ell)}(0) = 0$ and that the polynomial $r_i^{(\ell)}(x)$ has degree at most $\ell p^{mt-m}-1 \le p^{mt}-k-1$ for all $i$ and $\ell$. Then we have the following parity-check equations for the code $\mathrm{RS}(A\cup\{0\},k)$:
$$r_i^{(\ell)}(0)f(0) + \sum_{\alpha\in A} r_i^{(\ell)}(\alpha)f(\alpha) = 0.$$
Following the definition of $r_i^{(\ell)}(x)$, we have
$$\sum_{\alpha\in A} \frac{f(\alpha)\,\mathrm{Tr}(u_i\alpha^\ell)}{\alpha} = 0. \quad (10)$$
Applying the trace function to both sides of (10) and employing the fact that $\mathrm{Tr}(a\,\mathrm{Tr}(b)) = \mathrm{Tr}(b\,\mathrm{Tr}(a)) = \mathrm{Tr}(a)\mathrm{Tr}(b)$, we have
$$\sum_{\alpha\in A} \mathrm{Tr}(u_i\alpha^\ell)\,\mathrm{Tr}\!\left(\frac{f(\alpha)}{\alpha}\right) = 0.$$
Utilizing the linearity of the trace function, we have
$$\mathrm{Tr}\!\left(\sum_{\alpha\in A} u_i\alpha^\ell\,\mathrm{Tr}\!\left(\frac{f(\alpha)}{\alpha}\right)\right) = \mathrm{Tr}\!\left(u_i \sum_{\alpha\in A} \alpha^\ell\,\mathrm{Tr}\!\left(\frac{f(\alpha)}{\alpha}\right)\right) = 0.$$
Consequently,
$$\sum_{i=1}^{t} \eta_i\, \mathrm{Tr}\!\left(u_i \sum_{\alpha\in A} \alpha^\ell\,\mathrm{Tr}\!\left(\frac{f(\alpha)}{\alpha}\right)\right) = \sum_{\alpha\in A} \alpha^\ell\,\mathrm{Tr}\!\left(\frac{f(\alpha)}{\alpha}\right) = 0.$$
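Lemma 5 can be sanity-checked numerically. The hedged sketch below works in the degenerate case $m = t = 1$, where $\mathbb{F} = \mathbb{B} = \mathrm{GF}(p)$ and the trace is the identity map; the field size and polynomial are arbitrary choices of ours.

```python
# Parity check of Lemma 5 over GF(7): sum over nonzero alpha of
# alpha^ell * f(alpha) / alpha, for ell in the lemma's range.
p, k = 7, 3
f = lambda x: (2 * x * x + 3 * x + 5) % p   # arbitrary f with deg f <= k - 1

ell_max = p - k                              # floor((p^mt - k)/p^(mt-m)) here
for ell in range(2, ell_max + 1):
    check = sum(pow(a, ell, p) * f(a) * pow(a, p - 2, p)  # a^(p-2) = a^(-1)
                for a in range(1, p)) % p
    print(ell, check)                        # every check prints 0
```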
Then the following proposition is immediate from Lemma 5.
Proposition 6. $\mathsf{T}(A,k) \subseteq \mathrm{GRS}\!\left(A,\ p^{mt} - \left\lfloor \frac{p^{mt}-k}{p^{mt-m}} \right\rfloor,\ \boldsymbol{\mu}_2\right)$ for some multiplier $\boldsymbol{\mu}_2$.
Proof. From Lemma 5, it is clear that the parity-check matrix of the code $\mathsf{T}$ contains the rows
$$H = \begin{pmatrix} \alpha_1^2 & \alpha_2^2 & \cdots & \alpha_n^2 \\ \alpha_1^3 & \alpha_2^3 & \cdots & \alpha_n^3 \\ \vdots & \vdots & & \vdots \\ \alpha_1^{\ell} & \alpha_2^{\ell} & \cdots & \alpha_n^{\ell} \end{pmatrix}$$
for $\ell = \left\lfloor \frac{p^{mt}-k}{p^{mt-m}} \right\rfloor$ and $A = \{\alpha_1,\dots,\alpha_n\}$. The dual of the code generated by $H$ is a $\mathrm{GRS}(A, |A|-\ell+1, \boldsymbol{\mu}_2)$ for some multiplier $\boldsymbol{\mu}_2$. Therefore $\mathsf{T}(A,k)$ is a subcode of the latter, and we obtain the proposition. Consequently, every nonzero codeword of $\mathsf{T}(A,k)$ has weight at least $\left\lfloor \frac{p^{mt}-k}{p^{mt-m}} \right\rfloor$, and the statement of Theorem 3(ii) follows.
B. Character Sum Bound
In this subsection, we prove Theorem 3(iii) by modifying the proof of [17, Theorem 5.4] for the two cases $m = 1$ and $m > 1$. Before we proceed, let us provide a short overview of character sums; we refer the reader to [23], [24] for more details. Assume that $\omega = e^{2i\pi/p}$ is a primitive $p$-th root of complex unity. It is well known that for any $x\in\mathbb{B}$,
$$\sum_{a\in\mathbb{B}\setminus\{0\}} \omega^{ax} = \begin{cases} p-1, & \text{if } x = 0,\\ -1, & \text{otherwise}. \end{cases} \quad (11)$$
For any element $a$ of $\mathbb{F}$, we can define an additive character as the function $\chi_a(x) = \omega^{\mathrm{AbsTr}(ax)}$, where $x\in\mathbb{F}$ and $\mathrm{AbsTr}(\cdot)$ is the trace function from $\mathbb{F}$ to the finite field with $p$ elements. The character defined by $\chi_0(x) = 1$ is called trivial, while all other characters are called non-trivial. The additive character $\chi_1(x)$ is said to be canonical. It is well known that the additive characters of $\mathbb{F}$ form a group isomorphic to the additive group of $\mathbb{F}$, and the following property holds:
$$\chi_{a+b}(x) = \chi_a(x)\chi_b(x). \quad (12)$$
The orthogonality relation of additive characters is given by
$$\sum_{x\in\mathbb{F}} \chi_a(x) = \begin{cases} 0, & \text{if } a \ne 0,\\ p^{mt}, & \text{if } a = 0. \end{cases} \quad (13)$$
In the same way, for any element $a$ of the multiplicative group of $\mathbb{F}$, we can define a multiplicative character as the function $\Psi_a(g^k) = e^{2i\pi a k/(p^{mt}-1)}$, where $g$ is a fixed primitive element of $\mathbb{F}$. The character defined by $\Psi_0(x) = 1$ is called trivial, while all other characters are called non-trivial. It is well known that the multiplicative characters of $\mathbb{F}$ form a group of order $p^{mt}-1$ isomorphic to the multiplicative group of $\mathbb{F}$, and the following property holds:
$$\Psi_{a+b}(x) = \Psi_a(x)\Psi_b(x). \quad (14)$$
Our further derivations rely on an upper bound for the absolute value of the following non-degenerate sum:
$$S(\chi_a, \Psi_b; \phi, \varphi) = \sum_{x\in\mathbb{F}\setminus S} \chi_a(\phi(x))\,\Psi_b(\varphi(x)), \quad (15)$$
where $S$ denotes the set of poles of the rational functions $\phi(x)$ and $\varphi(x)$ over $\mathbb{F}$. The non-degeneracy property means that $a\phi(x) \ne h(x)^p - h(x) + c$ and $\varphi \ne c\,h(x)^{p^{mt}-1}$ for any $h(x)$ and $c\in\mathbb{F}$. Indeed, if $a\phi(x) = h(x)^p - h(x) + c$ and $\varphi(x) = c\,h(x)^{p^{mt}-1}$, then $\chi_a(\phi(x))$ and $\Psi_b(\varphi(x))$ are constant for each $x\in\mathbb{F}\setminus S$. Essentially, we have the following generalization of the Weil estimate, proved by Castro and Moreno for rational functions $\phi(x)$ and $\varphi(x)$ in [25], in the notation of [26] and [27].
Proposition 7 ([27, Lemma 2.1]). Let $\phi(x), \varphi(x)$ be rational functions over $\mathbb{F}$, let $\chi_a$ be a non-trivial additive character of $\mathbb{F}$, and let $\Psi_b$ be a non-trivial multiplicative character of $\mathbb{F}$. Let $S$ be the set of poles of the functions $\phi$ and $\varphi$ in $\mathbb{F}$. Further, let $l$ be the number of distinct zeros and non-infinite poles of $\phi$. Let $l_1$ be the number of all poles of $\varphi$ and $l_0$ the sum of their multiplicities. Let $l_2$ be the number of non-infinite poles of $\varphi$ which are zeros or poles of $\phi$. Then
$$|S(\chi_a,\Psi_b;\phi,\varphi)| = \left|\sum_{x\in\mathbb{F}\setminus S} \chi_a(\phi)\Psi_b(\varphi)\right| \le (l + l_0 + l_1 - l_2 - 2)\sqrt{p^{mt}}. \quad (16)$$
By setting $\varphi(x) = 1$ and $\phi(x) = f(x)/x$, so that $a\phi(x) \ne h(x)^p - h(x) + c$ for any $h(x)\in\mathbb{F}[x]$ and $c\in\mathbb{F}$, we obtain the following estimate:
$$\left|\sum_{x\in\mathbb{F}\setminus\{0\}} \chi_a\!\left(\frac{f(x)}{x}\right)\right| \le (k-1)\sqrt{p^{mt}}. \quad (17)$$
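The estimate (17) is easy to probe numerically. The hedged sketch below does so in the degenerate case $m = t = 1$, where $\mathrm{AbsTr}$ is the identity and $\chi_1(y) = e^{2\pi i y/p}$; the prime and the polynomial are arbitrary choices of ours.

```python
import cmath

p, k = 101, 3
f = lambda x: (x * x + 3 * x + 7) % p      # deg f = k - 1 = 2

# Character sum over GF(p)\{0} of chi_1(f(x)/x); pow(x, p-2, p) = x^(-1).
S = sum(cmath.exp(2j * cmath.pi * ((f(x) * pow(x, p - 2, p)) % p) / p)
        for x in range(1, p))
print(abs(S), "<=", (k - 1) * p ** 0.5)    # empirical sum vs. Weil-type bound
```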
Proposition 8. If $A = \mathbb{F}\setminus\{0\}$ and $m = 1$, then every nonzero word in $\mathsf{T}(A,k)$ has weight at least
$$\frac{p-1}{p}\left(|A| - (k-1)\sqrt{p^t}\right). \quad (18)$$
Proof. We distinguish between two cases.
Case 1: $f(x) = x\,h(x)^p - x\,h(x) + xb$ for some $h\in\mathbb{F}[x]$ and $b\in\mathbb{F}$. In this case,
$$c_j = \mathrm{Tr}\!\left(\frac{f(\alpha_j)}{\alpha_j}\right) = \mathrm{Tr}(h(\alpha_j)^p) - \mathrm{Tr}(h(\alpha_j)) + \mathrm{Tr}(b) = \mathrm{Tr}(h(\alpha_j))^p - \mathrm{Tr}(h(\alpha_j)) + \mathrm{Tr}(b) = \mathrm{Tr}(b).$$
In other words, $\mathbf{c}$ is a multiple of the all-ones vector.
Case 2: $f(x) \ne x\,h(x)^p - x\,h(x) + xb$ for any $h\in\mathbb{F}[x]$ and $b\in\mathbb{F}$. In this case, we can form the non-degenerate sum and apply the estimate (16). For the $p$-th root of complex unity, we can write
$$\sum_{j=1}^{p^t-1} \sum_{a\in\mathbb{B}\setminus\{0\}} \omega^{a c_j} = (p-1)(p^t-1-w(\mathbf{c})) - w(\mathbf{c}) = (p-1)(p^t-1) - p\,w(\mathbf{c}), \quad (19)$$
where $w(\mathbf{c})$ is the Hamming weight of the codeword $\mathbf{c}$. Utilizing the fact that $\omega^{a\,\mathrm{Tr}(f(x)/x)}$ for $a\in\mathbb{B}\setminus\{0\}$ is the non-trivial additive character $\chi_a(f(x)/x)$, we have
$$\left|\sum_{a\in\mathbb{B}\setminus\{0\}} \sum_{j=1}^{p^t-1} \omega^{a c_j}\right| = \left|\sum_{a\in\mathbb{B}\setminus\{0\}} \sum_{x\in\mathbb{F}\setminus\{0\}} \chi_a\!\left(\frac{f(x)}{x}\right)\right| \le \sum_{a\in\mathbb{B}\setminus\{0\}} \left|\sum_{x\in\mathbb{F}\setminus\{0\}} \chi_a\!\left(\frac{f(x)}{x}\right)\right|. \quad (20)$$
Applying the estimate (16), we have
$$(p-1)(p^t-1) - p\,w(\mathbf{c}) \le (p-1)(k-1)\sqrt{p^t}. \quad (21)$$
Combining the two cases, we obtain the proposition statement.
Proposition 9. If $A = \mathbb{F}\setminus\{0\}$ and $m > 1$, then every nonzero word in $\mathsf{T}(A,k)$ has weight at least
$$\frac{p^m-p}{p^m}\left(|A| - (k-1)\sqrt{p^{mt}}\right). \quad (22)$$
Proof. Let $\mathbf{c} = \left(\mathrm{Tr}\!\left(\frac{f(\alpha)}{\alpha}\right)\right)_{\alpha\in A}$ be a codeword of $\mathsf{T}(A,k)$. Let $\lambda_1$ be the canonical additive character of $\mathbb{B}$. By the orthogonality relation of additive characters, we deduce that
$$w(\mathbf{c}) = p^{mt}-1 - \#\left\{\alpha\in A : \mathrm{Tr}\!\left(\tfrac{f(\alpha)}{\alpha}\right) = 0\right\} = p^{mt}-1 - \frac{1}{p^m}\sum_{\alpha\in A}\sum_{a\in\mathbb{B}} \lambda_1\!\left(a\,\mathrm{Tr}\!\left(\tfrac{f(\alpha)}{\alpha}\right)\right) = p^{mt}-1 - \frac{1}{p^m}\sum_{a\in\mathbb{B}}\sum_{\alpha\in A} \chi_a\!\left(\tfrac{f(\alpha)}{\alpha}\right) = \frac{(p^{mt}-1)(p^m-1)}{p^m} - \frac{1}{p^m}\sum_{a\in\mathbb{B}\setminus\{0\}}\sum_{\alpha\in A} \chi_a\!\left(\tfrac{f(\alpha)}{\alpha}\right).$$
From the above equation, we have
$$\sum_{a\in\mathbb{B}\setminus\{0\}}\sum_{\alpha\in A} \chi_a\!\left(\frac{f(\alpha)}{\alpha}\right) = (p^{mt}-1)(p^m-1) - p^m w(\mathbf{c}). \quad (23)$$
We distinguish between two cases.
Case 1: $\frac{a f(x)}{x} = h(x)^p - h(x) + b$ for some $a\in\mathbb{B}\setminus\{0\}$, $h(x)\in\mathbb{F}[x]$ and $b\in\mathbb{F}$. In this case, the number of such $a$ is at most $p-1$; let $\mathcal{B}$ be the collection of such $a$. In fact, if $\frac{a_1 f(x)}{x} = h(x)^p - h(x) + b$ for $a_1\in\mathcal{B}$, then for $a_2$ from the same set it holds that $\frac{a_2 f(x)}{x} = a_2 a_1^{-1}\left(h(x)^p - h(x) + b\right)$. Hence, $\chi_{a_2}\!\left(\frac{f(x)}{x}\right)$ is a constant for each $x\in A$ when $a_2 a_1^{-1}$ belongs to the finite field with $p$ elements. Utilizing the estimate (17), we have
$$\left|\sum_{a\in\mathbb{B}\setminus\{0\}}\sum_{\alpha\in A}\chi_a\!\left(\tfrac{f(\alpha)}{\alpha}\right)\right| \le \sum_{a\in\mathbb{B}\setminus\{0\}}\left|\sum_{\alpha\in A}\chi_a\!\left(\tfrac{f(\alpha)}{\alpha}\right)\right| \le (p-1)\,\#A + \sum_{a\in\mathbb{B}\setminus(\{0\}\cup\mathcal{B})}\left|\sum_{\alpha\in A}\chi_a\!\left(\tfrac{f(\alpha)}{\alpha}\right)\right| \le (p-1)(p^{mt}-1) + (p^m-p)(k-1)\sqrt{p^{mt}}.$$
By (23), we obtain that $w(\mathbf{c}) \ge \frac{(p^m-p)\left((p^{mt}-1)-(k-1)\sqrt{p^{mt}}\right)}{p^m}$.
Case 2: $f(x) \ne x\,h(x)^p - x\,h(x) + xb$ for any $a\in\mathbb{B}\setminus\{0\}$, $h(x)\in\mathbb{F}[x]$ and $b\in\mathbb{F}$. Using a method analogous to Case 1, we deduce that
$$w(\mathbf{c}) \ge \frac{(p^m-1)\left((p^{mt}-1)-(k-1)\sqrt{p^{mt}}\right)}{p^m}. \quad (24)$$
Taking the minimum over the two cases, we obtain the proposition statement.
Combining Proposition 8 and Proposition 9, we obtain Theorem 3(iii).
IV. MODIFIED GURUSWAMI-SUDAN ALGORITHM
In this section, we study efficient bounded-distance decoders for $\mathsf{T}(A,k)$. Formally, we fix an integer $e$, a codeword $\mathbf{c}\in\mathsf{T}(A,k)$, and a word $\mathbf{y}\in\mathbb{F}_p^{|A|}$ such that $\mathbf{c}$ and $\mathbf{y}$ differ in at most $e$ positions. The input to the bounded-distance decoder is $\mathbf{y}$, and our task is to find $\mathbf{c}$ in polynomial time. Note that for the bounded-distance decoder to succeed, it is necessary that $2e+1$ be at most the minimum distance of $\mathsf{T}(A,k)$.
Recall the values of $d_1$ and $d_2$ in Theorem 3(i) and (ii). In both proofs (that is, Propositions 4 and 6), we demonstrated that $\mathsf{T}(A,k)$ (or an equivalent code) is a subcode of some GRS code. Hence, we can apply any bounded-distance decoder for Reed-Solomon codes, like the Berlekamp-Welch algorithm, and correct any $e$ errors, where $e$ is at most $\lfloor(\max\{d_1,d_2\}-1)/2\rfloor$. Therefore, it remains to find an efficient bounded-distance decoder that corrects $\lfloor(d'-1)/2\rfloor$ errors, where $d'$ is a lower bound on the minimum distance of $\mathsf{T}(A,k)$. One such lower bound is the value $d_3$ given in Theorem 3(iii). To this end, we modify the famous Guruswami-Sudan list-decoding algorithm to perform this task. Unfortunately, we are unable to guarantee that we can correct up to $\lfloor(d_3-1)/2\rfloor$ errors. Nevertheless, we find some numerical examples where we are close to these values (see Tables I and II).
First, we recall the following restatement of the Guruswami-Sudan algorithm due to Koetter and Vardy [28, Theorem 2].
Theorem 10 (Guruswami-Sudan [29]). Fix $\Delta \le n$ and $\mu$. Set $\delta$ to be the smallest integer such that
$$N_{1,\Delta}(\delta) \triangleq \left(\left\lfloor\frac{\delta}{\Delta}\right\rfloor+1\right)\left(\delta+1-\frac{\Delta}{2}\left\lfloor\frac{\delta}{\Delta}\right\rfloor\right) > \frac{n\mu(\mu+1)}{2}.$$
Next, set $e = n - \lfloor\delta/\mu\rfloor$. Given $\mathbf{y}\in\mathbb{F}^n$, we can find all polynomials $F(X)$ of degree at most $\Delta$ such that $F(\alpha_i) \ne y_i$ in at most $e$ positions. Let $\mathcal{F}$ be the set of these polynomials. Furthermore, we can find all such $F(X)$ in polynomial time (in $n$, $\mu$ and $|\mathbb{F}|$).
Next, we describe a simple procedure that allows us to correct errors for $\mathsf{T}(A,k)$.
Modified Guruswami-Sudan Decoder.
INPUT: An integer $e$ as defined in Theorem 10 (note that $e$ is determined by some integer $\mu$) and $\mathbf{y}\in\mathbb{F}_p^n$.
OUTPUT: $\mathcal{L}\subseteq\mathsf{T}(A,k)$ such that every $\mathbf{c}\in\mathcal{L}$ differs from $\mathbf{y}$ in at most $e$ positions.
(Step 1) We apply the Guruswami-Sudan algorithm over the field $\mathbb{F}_q$ with $\Delta$ as defined in (7). Hence, after this step, we have a set of polynomials $\mathcal{F}$.
(Step 2) For each $F(X)\in\mathcal{F}$, we determine whether the word $\mathbf{c} \triangleq \left(F(\alpha_i)/\alpha_i^{p^{mt-m}}\right)_{i\in[n]}$ belongs to $\mathsf{T}(A,k)$. We add $\mathbf{c}$ to $\mathcal{L}$ if and only if it belongs to $\mathsf{T}(A,k)$.
Proposition 11. Let $e$ be as defined earlier. Suppose $\mathsf{T}(A,k)$ has minimum distance at least $d'$. If $e \le \lfloor(d'-1)/2\rfloor$, the set $\mathcal{L}$ returned by the modified Guruswami-Sudan decoder has size at most one.
Proof. The fact that $|\mathcal{L}|$ is at most one follows directly from standard coding arguments. Suppose otherwise that $\mathcal{L}$ comprises two words $\mathbf{c}_1$ and $\mathbf{c}_2$ that differ from $\mathbf{y}$ in at most $e$ positions. Then the Hamming distance of $\mathbf{c}_1$ and $\mathbf{c}_2$ is at most $2e$, contradicting the distance property of $\mathsf{T}(A,k)$.
Thus, it remains to show that Step 2 can be performed efficiently. Since $\mathsf{T}(A,k)$ is an $\mathbb{F}_p$-linear code, there exists a parity-check matrix $H$ over $\mathbb{F}_p$. Therefore, determining whether $\mathbf{c}$ belongs to $\mathsf{T}(A,k)$ is equivalent to checking that $\mathbf{c}H^T = \mathbf{0}$. This completes the proof.
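Step 2's membership test is a plain syndrome computation. The toy sketch below illustrates it over a small prime field; the specific matrix and vectors are our own illustrative choices.

```python
import numpy as np

def in_code(c, H, p):
    """Membership test for an F_p-linear code with parity-check matrix H:
    c is a codeword iff c H^T = 0, i.e. every parity check vanishes mod p."""
    return bool(np.all((H @ c) % p == 0))

# Toy parity check over GF(5): the single check c0 + 2*c1 + 3*c2 = 0.
H = np.array([[1, 2, 3]])
print(in_code(np.array([1, 3, 1]), H, 5))   # 1 + 6 + 3 = 10 = 0 mod 5 -> True
print(in_code(np.array([1, 1, 1]), H, 5))   # 6 mod 5 = 1 != 0         -> False
```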
V. NUMERICAL RESULTS
In this section, we compare the numbers of correctable errors corresponding to the different lower bounds on the minimum distance given in Theorem 3. In Tables I and II, we set $(p,m,t) = (5,1,2)$ and $(p,m,t) = (2,4,3)$, respectively, and vary the parameter $k$. In addition, we also determine the number of errors that the modified Guruswami-Sudan algorithm corrects according to Proposition 11. We see that for moderate values of $k$, the modified Guruswami-Sudan algorithm corrects beyond the bounds promised by the degree and lifted-decoding bounds. Unfortunately, in most cases we fall short of the character sum bound, and it would be interesting to decode efficiently close to the latter bound. For completeness, in Table I we also compute the exact minimum distance of the repair-trace code.
Finally, to further justify our approach, we compare the repair bandwidth of our approach with the repair bandwidth of the classical approach. Specifically, we consider an $\mathrm{RS}(A,k)$ code with distance $n-k+1$ that corrects $e$ errors and $s$ erasures whenever $2e+s \le n-k$. Therefore, in the classical approach, in the presence of $e$ erroneous helper nodes, we need to download at least $n-(n-k-2e) = k+2e$ symbols to repair any failed node. In other words, the bandwidth is $(k+2e)\log_2 p^{mt}$ bits.
On the other hand, suppose that $\mathsf{T}(A,k)$ has minimum distance $d^*$. Again, we have that $\mathsf{T}(A,k)$ corrects $e$ errors and $s$ erasures whenever $2e+s \le d^*-1$. Then, repeating the same computation as before, we obtain the bandwidth $(n-d^*+2e)\log_2 p^m$ bits. In Fig. 1, we consider the case $p=5$, $m=1$, $t=2$ and $n=25$. We then vary the number of erroneous helper nodes and determine the corresponding bandwidth (according to our estimates of the minimum distance). We see that when the number of erroneous helper nodes is moderate, our approach saves repair bandwidth.
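The comparison is simple arithmetic to reproduce. In the hedged sketch below, `d_star` is a placeholder estimate of the minimum distance of $\mathsf{T}(A,k)$ (here the degree bound $d_1 = 14$ for $k = 3$); substitute whichever bound applies to your parameters.

```python
import math

p, m, t, n, k = 5, 1, 2, 25, 3
d_star = 14                               # placeholder: degree bound d1 for k = 3

for e in range(0, 6):                     # number of erroneous helper nodes
    classical = (k + 2 * e) * math.log2(p ** (m * t))   # (k + 2e) symbols
    trace = (n - d_star + 2 * e) * math.log2(p ** m)    # (n - d* + 2e) traces
    print(e, classical, trace)
```

Running this shows the crossover behavior: for $e = 0$ the classical download of $k$ symbols is cheaper, while for larger $e$ the trace scheme wins.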
VI. CONCLUSION
We investigate the Reed-Solomon repair problem in the presence of erroneous information from helper nodes under the Guruswami-Wootters scheme. We consider the collection of downloaded traces as a code and investigate its code-distance properties. We propose three lower bounds on its minimum distance and a modification of the famous Guruswami-Sudan algorithm that efficiently corrects errors close to these bounds. However, this is just the tip of the iceberg, and we point out several open questions: Is it possible to generalize this approach to repair schemes based on subspace polynomials [8], [10], [13]? Do all of our results hold for non-full-length Reed-Solomon codes? How do these results compare to the parameters of existing polynomial trace codes?
ACKNOWLEDGEMENTS. This research/project is supported by the National Research
[1] A. G. Dimakis, P. B. Godfrey, Y. Wu, M. J. Wainwright, and K. Ramchandran, "Network coding for distributed storage systems," IEEE Transactions on Information Theory, vol. 56, no. 9, pp. 4539-4551, 2010.
[2] I. S. Reed and G. Solomon, "Polynomial codes over certain finite fields," Journal of the Society for Industrial and Applied Mathematics, vol. 8, no. 2, pp. 300-304, 1960.
[3] T. X. Dinh, L. Y. Nhi Nguyen, L. J. Mohan, S. Boztas, T. T. Luong, and S. H. Dau, "Practical considerations in repairing Reed-Solomon codes," in 2022 IEEE International Symposium on Information Theory (ISIT), 2022, pp. 2607-2612.
[4] S. B. Balaji, M. N. Krishnan, M. Vajha, V. Ramkumar, B. Sasidharan, and P. V. Kumar, "Erasure coding for distributed storage: An overview," Science China Information Sciences, vol. 61, no. 10, p. 100301, 2018.
[5] S. Liu and F. Oggier, "An overview of coding for distributed storage systems," in Network Coding and Subspace Designs. Cham: Springer International Publishing, 2018, pp. 363-383.
[6] V. Guruswami and M. Wootters, "Repairing Reed-Solomon codes," IEEE Transactions on Information Theory, vol. 63, no. 9, pp. 5684-5698, 2017.
[7] I. Duursma and H. Dau, "Low bandwidth repair of the RS(10,4) Reed-Solomon code," in 2017 Information Theory and Applications Workshop (ITA), 2017, pp. 1-10.
[8] H. Dau and O. Milenkovic, "Optimal repair schemes for some families of full-length Reed-Solomon codes," in 2017 IEEE International Symposium on Information Theory (ISIT), 2017, pp. 346-350.
[9] H. Dau, I. M. Duursma, H. M. Kiah, and O. Milenkovic, "Repairing Reed-Solomon codes with multiple erasures," IEEE Transactions on Information Theory, vol. 64, no. 10, pp. 6567-6582, 2018.
[10] S. H. Dau, T. X. Dinh, H. M. Kiah, T. T. Luong, and O. Milenkovic, "Repairing Reed-Solomon codes via subspace polynomials," IEEE Transactions on Information Theory, vol. 67, no. 10, pp. 6395-6407, 2021.
[11] N. Shutty and M. Wootters, "Low-bandwidth recovery of linear functions of Reed-Solomon-encoded data," in 13th Innovations in Theoretical Computer Science Conference (ITCS 2022), ser. Leibniz International Proceedings in Informatics (LIPIcs), vol. 215, 2022, 117:1-117:19.
[12] R. Con and I. Tamo, "Nonlinear repair of Reed-Solomon codes," IEEE Transactions on Information Theory, vol. 68, no. 8, pp. 5165-5177, 2022.
[13] A. Berman, S. Buzaglo, A. Dor, Y. Shany, and I. Tamo, "Repairing Reed-Solomon codes evaluated on subspaces," IEEE Transactions on Information Theory, vol. 68, no. 10, pp. 6505-6515, 2022.
[14] H. M. Kiah, W. Kim, S. Kruglik, S. Ling, and H. Wang, "Explicit low-bandwidth evaluation schemes for weighted sums of Reed-Solomon-coded symbols," in 2023 IEEE International Symposium on Information Theory (ISIT), 2023.
[15] Z. Chen, M. Ye, and A. Barg, "Repair of RS codes with optimal access and error correction," in 2020 IEEE International Symposium on Information Theory (ISIT), 2020, pp. 589-594.
[16] A. Shamir, "How to share a secret," Communications of the ACM, vol. 22, no. 11, pp. 612-613, 1979.
[17] R. M. Roth, Introduction to Coding Theory. Cambridge University Press, 2006.
[18] C. Li, Q. Yue, and F. Li, "Hamming weights of the duals of cyclic codes with two zeros," IEEE Transactions on Information Theory, vol. 60, no. 7, pp. 3895-3902, 2014.
[19] K. Ding and C. Ding, "A class of two-weight and three-weight codes and their applications in secret sharing," IEEE Transactions on Information Theory, vol. 61, no. 11, pp. 5835-5842, 2015.
[20] C. Tang, N. Li, Y. Qi, Z. Zhou, and T. Helleseth, "Linear codes with two or three weights from weakly regular bent functions," IEEE Transactions on Information Theory, vol. 62, no. 3, pp. 1166-1176, 2016.
[21] G. Luo and X. Cao, "Constructions of optimal binary locally recoverable codes via a general construction of linear codes," IEEE Transactions on Communications, vol. 69, no. 8, pp. 4987-4997, 2021.
[22] G. Luo and S. Ling, "Application of optimal p-ary linear codes to alphabet-optimal locally repairable codes," Designs, Codes and Cryptography, vol. 90, no. 5, pp. 1271-1287, 2022.
[23] R. Lidl and H. Niederreiter, Finite Fields, 2nd ed., ser. Encyclopedia of Mathematics and its Applications. Cambridge University Press, 1996.
[24] P. Charpin, A. Pott, and A. Winterhof, Eds., Character Sums and Polynomials. Berlin, Boston: De Gruyter, 2013.
[25] F. N. Castro and C. J. Moreno, "Mixed exponential sums over finite fields," Proceedings of the American Mathematical Society, vol. 128, no. 9, pp. 2529-2537, 2000.
[26] T. Cochrane and C. Pinner, "Using Stepanov's method for exponential sums involving rational functions," Journal of Number Theory, vol. 116, no. 2, pp. 270-292, 2006.
[27] A. R. Booker, S. D. Cohen, N. Leong, and T. Trudgian, "Primitive elements with prescribed traces," Finite Fields and Their Applications, vol. 84, p. 102094, 2022.
[28] R. Koetter and A. Vardy, "Algebraic soft-decision decoding of Reed-Solomon codes," IEEE Transactions on Information Theory, vol. 49, no. 11, pp. 2809-2825, 2003.
[29] V. Guruswami and M. Sudan, "Improved decoding of Reed-Solomon and algebraic-geometric codes," in Proceedings 39th Annual Symposium on Foundations of Computer Science, 1998, pp. 28-37.
| [] |
[
"Operads, homotopy theory and higher categories in algebraic quantum field theory",
"Operads, homotopy theory and higher categories in algebraic quantum field theory"
] | [
"Marco Benini [email protected] \nDipartimento di Matematica\nUniversità di Genova\nVia Dodecaneso 3516146GenovaItaly\n\nINFN\nSezione di Genova\nVia Dodecaneso 3316146GenovaItaly\n",
"Alexander Schenkel [email protected] \nSchool of Mathematical Sciences\nUniversity of Nottingham\nNG7 2RDUniversity Park, NottinghamUnited Kingdom\n"
] | [
"Dipartimento di Matematica\nUniversità di Genova\nVia Dodecaneso 3516146GenovaItaly",
"INFN\nSezione di Genova\nVia Dodecaneso 3316146GenovaItaly",
"School of Mathematical Sciences\nUniversity of Nottingham\nNG7 2RDUniversity Park, NottinghamUnited Kingdom"
] | [] | This chapter provides a non-technical overview and motivation for the recent interactions between algebraic quantum field theory (AQFT) and rather abstract mathematical disciplines such as operads, model categories and higher categories. | null | [
"https://export.arxiv.org/pdf/2305.03372v1.pdf"
] | 258,547,079 | 2305.03372 | dfca5867ea90b40bd311a4f2c491ba996dd0e189 |
Operads, homotopy theory and higher categories in algebraic quantum field theory
5 May 2023
Marco Benini [email protected]
Dipartimento di Matematica
Università di Genova
Via Dodecaneso 3516146GenovaItaly
INFN
Sezione di Genova
Via Dodecaneso 3316146GenovaItaly
Alexander Schenkel [email protected]
School of Mathematical Sciences
University of Nottingham
NG7 2RDUniversity Park, NottinghamUnited Kingdom
Operads, homotopy theory and higher categories in algebraic quantum field theory
5 May 2023 April 2023algebraic quantum field theoryoperadsmodel categorieshigher category theory MSC 2020: 81Txx18Mxx18Nxx
This chapter provides a non-technical overview and motivation for the recent interactions between algebraic quantum field theory (AQFT) and rather abstract mathematical disciplines such as operads, model categories and higher categories.
Introduction
Algebraic quantum field theory (in short, AQFT) is a well-developed and time-honored mathematical framework in which one can define and study quantum field theories (QFTs) on the Minkowski spacetime [28] or, more generally, on globally hyperbolic Lorentzian manifolds [19]. AQFT has already had many interesting and fruitful interactions with a broad range of mathematical disciplines, including operator algebras, functional analysis, Lorentzian geometry and also category theory. We refer the reader to the book [18] for an overview, as well as to the other chapters of this volume that cover different aspects of AQFT.
As the title suggests, the focus of this chapter is on the relatively recent interactions between AQFT and rather abstract areas of pure mathematics such as operad theory, homotopy theory (implemented through model categories) and higher category theory. The main goal is to illustrate the motivations for such interactions and the achievements they lead to, which can be concisely, yet incompletely, summarized as follows:
• Via operad theory, one can identify and describe the fundamental algebraic structures underlying AQFT. This leads to a plethora of universal constructions that are, among other things, useful to (i) deeply understand the time-slice axiom (which, informally speaking, encodes the concept of time-evolution in AQFT), leading to classification results for AQFTs in low dimensions, (ii) construct new AQFTs out of old ones, with examples given by local-to-global extensions (refining Fredenhagen's universal algebra [24]) or the extraction of the chiral observables of a 2-dimensional conformal AQFT (refining Rehren's work [35]), and (iii) compare AQFT with other axiomatizations of QFT, such as factorization algebras à la Costello and Gwilliam [22,23].
• Via abstract homotopy theory, see e.g. [31,30], one can develop model categories whose objects are, for instance, cochain complex valued AQFTs and whose weak equivalences are spacetime-wise quasi-isomorphisms. This is the natural habitat in which examples of gauge-theoretic AQFTs that are constructed in terms of the BRST/BV formalism [25,26] live. Furthermore, abstract homotopy theory is also needed to develop a gauge-theoretic generalization of retarded and advanced Green's operators, which in particular opens up new avenues to extend the well-established free (i.e. non-interacting) AQFT constructions from [1,2] to gauge-theoretic and also higher gauge-theoretic examples. Finally, when combined with operad theory, insights can be obtained about higher homotopical phenomena in the underlying algebraic structure of AQFTs (or a lack thereof).
• Via higher category theory, one can introduce a vastly generalized concept of AQFT that does not rely on the existence of suitable function algebras on the classical moduli spaces of fields. Instead of algebras, such AQFTs assign linear categories or dg-categories, with examples given by quantizations (in the sense of derived algebraic geometry [37,20]) of quasi-coherent modules on the derived moduli stacks of fields. In contrast to the usual BRST/BV formalism, this approach is non-perturbative in the sense that it carries information about the gauge group and not only its Lie algebra. Unfortunately, constructing examples of such (dg-)category valued AQFTs is rather involved and presently only simple toy-models are understood.
This chapter will substantiate and explain in more detail the items above. Our presentation is intentionally informal, which will unavoidably lead to omissions. To compensate for these shortcomings, we will provide wherever appropriate references in which precise statements and the relevant technical details can be found. We do not expect readers to be familiar with operads, homotopy theory or higher categories, even though some background will be helpful to more easily follow the narrative of this chapter. A detailed introduction to these subjects is beyond the scope of this chapter. However, we shall explain the main ideas informally and where useful pictorially.
Operads
The basic idea: An operad is an object that captures n-to-1 operations together with their behavior under composition. One of the simplest and most illustrative examples is given by the unital associative operad uAs. This operad is generated by a 0-to-1 operation (the unit) and a 2-to-1 operation (the multiplication), which the source visualizes as tree diagrams (not reproduced in this extraction). (2.1a)
These operations should compose in a unital and associative fashion, expressed in the source through equalities of composed trees, (2.1b)
where the 1-to-1 operation in the unitality conditions is the identity operation. All operads appearing in this chapter will be symmetric operads, i.e. there is an action of the permutation group Σ n on the set of n-to-1 operations that is compatible in a suitable sense with compositions, see e.g. [38] for the details.
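To spell out the relations that the diagrams in (2.1) encode (a standard symbolic rendering in our notation, not reproduced from the source): writing η ∈ uAs(0) for the unit generator, μ ∈ uAs(2) for the multiplication and ∘_i for the operadic partial compositions, the conditions (2.1b) read
μ ∘_1 μ = μ ∘_2 μ , μ ∘_1 η = id = μ ∘_2 η ,
where id ∈ uAs(1) is the identity operation, so that μ is associative and unital in the operadic sense.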
Operads can be generalized in many ways, and the most relevant generalization for us is to decorate the inputs and outputs with colors. The resulting concept is that of colored (symmetric) operads, which for brevity we shall simply call operads. The most basic example here is a category C: The objects c ∈ C determine the colors and the morphisms f : c → c′ can be thought of as 1-to-1 operations. Note that there are no n-to-1 operations, for n ≠ 1, when regarding a category C as an operad.
Given any (colored symmetric) operad O, one can form its category Alg_O(T) of algebras with values in a symmetric monoidal category (T, ⊗, I, γ), e.g. that of vector spaces Vec_K over a field K. An algebra over the operad O should be thought of as a representation of the abstract operations described by O in terms of morphisms in T. More precisely, an algebra A ∈ Alg_O(T) consists of a family of objects A(c) ∈ T, for all colors c ∈ O, together with a family of T-morphisms
A(φ) : A(c_1) ⊗ ⋯ ⊗ A(c_n) ⟶ A(c′) , (2.2)
for all n-to-1 operations φ : (c 1 , . . . , c n ) → c ′ in O, that is compatible with compositions, identities and permutation actions. Not surprisingly, an algebra A ∈ Alg uAs (T) over the unital associative operad is a unital associative algebra in T, with unit I → A and multiplication A ⊗ A → A respectively given by representing the 0-to-1 and 2-to-1 operations from (2.1). Note that an algebra A ∈ Alg C (T) over the operad determined by a category C is a functor A : C → T.
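Equivalently (a standard reformulation, not spelled out in the source): every family of objects A(c) ∈ T determines an endomorphism operad End_A with n-to-1 operations
End_A((c_1, …, c_n); c′) := Hom_T( A(c_1) ⊗ ⋯ ⊗ A(c_n), A(c′) ) ,
and an O-algebra structure on this family is precisely an operad morphism O → End_A, φ ↦ A(φ).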
The AQFT operad: AQFTs are typically defined on the category Loc_m of connected m-dimensional oriented and time-oriented globally hyperbolic Lorentzian manifolds, with morphisms given by orientation and time-orientation preserving causal isometric embeddings, see e.g. [18]. This category comes with distinguished classes of (tuples of) morphisms that capture important Lorentzian geometric features. For the definition of AQFTs and hence of the AQFT operad, the following two classes are essential:
(i) A pair of morphisms f_1 : M_1 → N ← M_2 : f_2 to a common target is called causally disjoint if their images can not be connected by a causal curve, i.e. J_N(f_1(M_1)) ∩ f_2(M_2) = ∅, where J_N(S) := J_N^+(S) ∪ J_N^-(S) ⊆ N denotes the union of the causal future and past of a subset S ⊆ N. We often denote causally disjoint morphisms by f_1 ⊥ f_2.
(ii) A morphism f : M → N in Loc_m is called a Cauchy morphism if its image f(M) ⊆ N contains a Cauchy surface of N. We denote the class of Cauchy morphisms by W ⊆ Mor Loc_m.
The usual definition of an m-dimensional AQFT is in terms of a functor A : Loc_m → Alg_uAs(T) to a category of unital associative algebras that satisfies the Einstein causality and time-slice axioms. This can be reformulated in a more elegant and conceptual way as follows (see [9, Theorem 2.9] for the details): An AQFT is given by the assignment
M ⟼ A(M) ∈ Alg_uAs(T) (2.3a)
of a unital associative algebra to each spacetime M ∈ Loc m , together with the assignment
A(M_1) ⊗ ⋯ ⊗ A(M_n) ⟶ A(N) (2.3b)
of a unital associative algebra morphism to each tuple (f_i : M_i → N)_{i=1,…,n} of mutually causally disjoint Loc_m-morphisms, i.e. f_i ⊥ f_j for all i ≠ j. The latter assignment is required to be compatible with compositions of tuples, unital with respect to the identities id : M → M, and equivariant with respect to permutation actions. The AQFT is said to satisfy the time-slice axiom if it assigns to each Cauchy morphism an Alg_uAs(T)-isomorphism, i.e.
A(f) : A(M) ≅ A(N) . (2.3c)
We would like to stress that this definition of AQFTs comes with two different kinds of multiplications: (1) On every spacetime M ∈ Loc_m we have an associative and unital multiplication on A(M) ∈ Alg_uAs(T), which we denote by a · a′ ∈ A(M), for all a, a′ ∈ A(M). This multiplication has its origins in quantum theory and it is physically interpreted as the (associative but noncommutative) product of quantum observables. (2) For every pair of causally disjoint spacetime embeddings f_1 : M_1 → N ← M_2 : f_2 we have another multiplication A(M_1) ⊗ A(M_2) → A(N), a_1 ⊗ a_2 ↦ a_1 • a_2, that takes quantum observables from causally disjoint M_1 and M_2 and produces a quantum observable in A(N). The origin of this multiplication lies in Lorentzian geometry (not in quantum physics!) and it should be interpreted as a kind of "composition of causally disjoint subsystems". The fact that • is by hypothesis an algebra morphism with respect to · then implies Einstein causality via the following Eckmann-Hilton-type argument: For all a_1 ∈ A(M_1) and a_2 ∈ A(M_2), we compute
A(f_1)(a_1) · A(f_2)(a_2) = (a_1 • 𝟙_2) · (𝟙_1 • a_2) = (a_1 · 𝟙_1) • (𝟙_2 · a_2) = (𝟙_1 · a_1) • (a_2 · 𝟙_2) = (𝟙_1 • a_2) · (a_1 • 𝟙_2) = A(f_2)(a_2) · A(f_1)(a_1) , (2.4)
i.e. spacelike separated quantum observables commute.
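A useful byproduct of the same interchange computation (our observation, in the notation of (2.4)) is that the two multiplications agree where both are defined:
A(f_1)(a_1) · A(f_2)(a_2) = (a_1 • 𝟙_2) · (𝟙_1 • a_2) = (a_1 · 𝟙_1) • (𝟙_2 · a_2) = a_1 • a_2 ,
so the "composition of causally disjoint subsystems" is completely determined by the spacetime-wise products and the maps A(f_i).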
From this description it becomes rather obvious that there exists an operad whose algebras in the symmetric monoidal category T are precisely AQFTs. This operad takes the form
O_{(Loc_m,⊥)}[W⁻¹] := ( P_{(Loc_m,⊥)} ⊗_BV uAs )[W⁻¹] . (2.5)
The first factor P_{(Loc_m,⊥)} denotes the operad describing "compositions of causally disjoint subsystems" (2.3b). The second factor uAs is the unital associative operad from (2.1) and it describes the spacetime-wise multiplication of quantum observables. The symbol ⊗_BV is the Boardman-Vogt tensor product of operads, which takes care of the compatibility conditions between the two kinds of operations. The AQFT operad (2.5) generalizes in a straightforward way to the case where (Loc_m, ⊥) is replaced by an orthogonal category (C, ⊥), see e.g. [12, Definition 3.4], and W ⊆ Mor C is a class of morphisms in C. We denote the category of algebras over the operad O_{(C,⊥)} by
AQFT(C, ⊥) := Alg_{O_{(C,⊥)}}(T) (2.6a)
and the category of algebras over the localized operad O_{(C,⊥)}[W⁻¹] by
AQFT(C, ⊥)^W := Alg_{O_{(C,⊥)}[W⁻¹]}(T) . (2.6b)
Note that AQFT(C, ⊥)^∅ = AQFT(C, ⊥), so the first definition is a special case of the second one. More surprisingly, we shall see in Proposition 2.4 below that the second definition is covered by the first one if one replaces (C, ⊥) by the localized orthogonal category (C[W⁻¹], ⊥_W). Arranging orthogonal categories, orthogonal functors and their natural transformations into a 2-category Cat^⊥, see [4, Definition 2.1], allows us to upgrade the assignment (C, ⊥) ↦ O_{(C,⊥)} to a 2-functor O : Cat^⊥ → Op to the 2-category of operads, morphisms and transformations, see [4, Proposition 2.6]. From this, one obtains that the assignment (C, ⊥) ↦ AQFT(C, ⊥) of AQFT categories (2.6) is contravariantly 2-functorial. In particular, each orthogonal functor F : (C, ⊥_C) → (D, ⊥_D), i.e. F(f_1) ⊥_D F(f_2) for all f_1 ⊥_C f_2, defines a pullback functor
F* : AQFT(D, ⊥_D) ⟶ AQFT(C, ⊥_C) (2.7)
at the level of AQFT categories. We shall illustrate below that this is useful for passing between and comparing different flavors of AQFTs.
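Concretely (a standard unpacking, in our notation): the pullback F*B of B ∈ AQFT(D, ⊥_D) acts as
(F*B)(M) = B(F(M)) for all objects M ∈ C ,
and its structure morphisms for a mutually orthogonal tuple (f_i : M_i → N) are those of B for the image tuple (F(f_i) : F(M_i) → F(N)), which is mutually orthogonal precisely because F is an orthogonal functor.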
Operadic Kan extensions: Suppose that the symmetric monoidal category T is closed (i.e. internal homs exist) and bicomplete (i.e. all small limits and colimits exist). One then might ask if the functor (2.7) admits a left and/or a right adjoint, which in the context of ordinary category theory are known as left/right Kan extensions. In contrast to categories, which we think of as describing 1-to-1 operations, operads have a built-in asymmetry as they describe nto-1 operations but no 1-to-n operations for n ≥ 2. This favors the existence of a left adjoint over a right adjoint to the functor (2.7). The following well-known existence result for operadic left Kan extensions (see e.g. [12, Theorem 2.11] for a spelled out proof) has, as we shall see below, interesting consequences for AQFT.
Proposition 2.1. Suppose that T is a bicomplete closed symmetric monoidal category. Then, for every orthogonal functor F : (C, ⊥ C ) → (D, ⊥ D ), the pullback functor (2.7) admits a left adjoint, i.e. we have an adjunction
F_! : AQFT(C, ⊥_C) ⇄ AQFT(D, ⊥_D) : F* . (2.8)
In stark contrast to this, F* does not in general admit a right adjoint.
Example 2.2. The most basic and intuitive example is given by considering the orthogonal functor j : (Loc_m^⋄, ⊥) → (Loc_m, ⊥) that describes the full orthogonal subcategory embedding of topologically trivial spacetimes (i.e. diffeomorphic to R^m, hence looking like diamonds ⋄) into the category of all spacetimes. In this case the operadic left Kan extension j_! : AQFT(Loc_m^⋄, ⊥) → AQFT(Loc_m, ⊥) extends AQFTs that are defined only on diamonds to all spacetimes. This is a refinement of Fredenhagen's universal algebra construction [24], see [12, Section 5] for a detailed comparison. ▽
Remark 2.3. Even though operadic right Kan extensions often do not exist, there are some important and interesting exceptions. For instance, orbifoldization of AQFTs [12, Section 4.5] and the extraction of the chiral observables of a 2-dimensional conformal AQFT [5, Section 5] are formalized by right adjoints. △
Classification results in low dimensions: Operadic techniques are also vital to obtain a deeper understanding of the time-slice axiom. Let (C, ⊥) be an orthogonal category and L : C → C[W⁻¹] a localization functor for the underlying category C at a set W ⊆ Mor C of morphisms, which we think of as an abstraction of Cauchy morphisms. We upgrade L to an orthogonal functor L : (C, ⊥) → (C[W⁻¹], ⊥_W) by endowing the localized category with the push-forward orthogonality relation ⊥_W, i.e. the minimal orthogonality relation such that
L(f 1 ) ⊥ W L(f 2 ) for all f 1 ⊥ f 2 .
Using (2-)functoriality of the assignment of AQFT operads, we obtain an operad morphism O_L : O_{(C,⊥)} → O_{(C[W⁻¹],⊥_W)}, which leads to the following result.
Proposition 2.4. Pullback along the operad morphism O_L defines an equivalence of categories
AQFT(C, ⊥)^W ≃ AQFT(C[W⁻¹], ⊥_W) . (2.9)
This proposition paves the way to classification results for AQFTs in low dimensions.
Example 2.5. This example is based on [4, Appendix A].
The localization of the orthogonal category (Loc 1 , ∅) of 1-dimensional connected spacetimes at all Cauchy morphisms is given by
( Loc_1[W⁻¹], ∅ ) ≃ ( BR, ∅ ) , (2.10)
where BR denotes the groupoid with a single object, denoted by ⋆, and morphisms given by the additive group (R, +, 0). Using Proposition 2.4, one then finds that an AQFT on (Loc_1, ∅) that satisfies the time-slice axiom is equivalent to the datum
A(⋆) with R-action, i.e. an object of Fun( BR, Alg_uAs(T) ) , (2.11)
of a single algebra endowed with an R-action, which we interpret as time-evolution. Hence, 1-dimensional AQFTs describe precisely the same content as ordinary quantum mechanics. ▽
Example 2.6. The previous example can be generalized to 2-dimensional conformal AQFTs that are defined on the orthogonal category (CLoc_2, ⊥), see [5, Section 3]. The result is that an AQFT on (CLoc_2, ⊥) that satisfies the time-slice axiom is equivalent to the datum
A(M) ⟶ A(C) , with actions Emb(R)² ↷ A(M) , Diff(S¹)² ↷ A(C) , and arrows labelled by Emb(M, C) , (2.12)
of two algebras, one for the Minkowski spacetime A(M) and one for the flat cylinder A(C), together with the displayed actions of the embedding monoid Emb(R)², of the diffeomorphism group Diff(S¹)² and of the conformal embeddings Emb(M, C). (These actions have to be compatible with causal disjointness ⊥.) This relates 2-dimensional conformal AQFT and representation theoretic approaches to conformal QFTs. ▽
Comparison with factorization algebras: Factorization algebras [22,23] provide a different axiomatization of QFT that is applicable to various geometric contexts, such as topological, holomorphic, Riemannian, and also Lorentzian geometry. In the Lorentzian context, the general definition of Costello and Gwilliam can be refined slightly to the more appropriate variant of time-orderable prefactorization algebras [8], which emphasize time-orderability of spacetime regions instead of the coarser concept of disjointness. The relevant geometric definitions are as follows:
(iii) A tuple (f_i : M_i → N)_{i=1,…,n} of Loc_m-morphisms is called time-ordered if J_N^+(f_i(M_i)) ∩ f_j(M_j) = ∅ for all i < j.
(iv) A tuple is called time-orderable if there exists a permutation ρ ∈ Σ_n (called time-ordering permutation) such that the permuted tuple (f_{ρ(i)} : M_{ρ(i)} → N)_{i=1,…,n} is time-ordered.
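For instance (our illustration): any causally disjoint pair f_1 ⊥ f_2 is time-ordered in either order, since J_N(f_1(M_1)) ∩ f_2(M_2) = ∅ in particular implies J_N^+(f_i(M_i)) ∩ f_j(M_j) = ∅ for both orderings of the indices. Time-orderability is thus genuinely weaker than mutual causal disjointness, so there are more time-orderable tuples than causally disjoint ones.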
A time-orderable prefactorization algebra (tPFA) on Loc m is given by the assignment
M ⟼ F(M) ∈ T (2.13a)
of an object in T to each spacetime M ∈ Loc m , together with the assignment
F(M_1) ⊗ ⋯ ⊗ F(M_n) ⟶ F(N) (2.13b)
of a T-morphism to each time-orderable tuple (f_i : M_i → N)_{i=1,…,n}. The latter assignment is subject to compatibility with compositions of tuples, unitality and equivariance with respect to permutation actions. A tPFA is said to satisfy the time-slice axiom if, in analogy to (2.3c), it assigns to each Cauchy morphism a T-isomorphism F(M) ≅ F(N).
Comparing this definition with the one of AQFTs in (2.3), one realizes that tPFAs have less spacetime-wise algebraic structure (one assigns plain objects in T and not unital associative algebras) but more composition products since there are more time-orderable tuples than mutually causally disjoint ones. In the context of free QFTs, it was observed first in [27] that, by means of the time-slice axiom, one can construct suitable spacetime-wise multiplications in a tPFA. This observation was then generalized in [8] to a model-independent statement. In order to state these results, let us denote by tP_m[W⁻¹] the operad encoding time-orderable prefactorization algebras that satisfy the time-slice axiom and by
tPFA(Loc_m)^W := Alg_{tP_m[W⁻¹]}(T) (2.14)
its category of algebras. It is shown in [8, Remark 5.2] that there exists an operad morphism Φ : tP_m[W⁻¹] → O_{(Loc_m,⊥)}[W⁻¹] to the AQFT operad, which defines a pullback functor
Φ* : AQFT(Loc_m, ⊥)^W ⟶ tPFA(Loc_m)^W , (2.15)
i.e. every AQFT has an underlying tPFA. Restricting to the full subcategories of additive objects [8, Definitions 2.11 and 2.16], i.e. tPFAs/AQFTs whose value on any M ∈ Loc_m agrees with the colimit of the values on the category of all relatively compact U → M in Loc_m, one obtains the following main result.
Theorem 2.7. There exists an isomorphism of categories
Φ_! : tPFA(Loc_m)^{W, add} ≅ AQFT(Loc_m, ⊥)^{W, add} : Φ* . (2.16)
The functor Φ ! admits an explicit description that is spelled out in [8, Section 3].
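Schematically, the idea behind Φ_! is as follows (our informal paraphrase of the construction in [8, Section 3], suppressing details): given a, a′ ∈ F(N), choose Cauchy morphisms ι_± : M_± → N such that the pair (ι_+, ι_-) is time-ordered; the time-slice axiom provides inverses F(ι_±)^{-1}, and
a · a′ := F(ι_+, ι_-)( F(ι_+)^{-1}(a) ⊗ F(ι_-)^{-1}(a′) )
defines a spacetime-wise multiplication on F(N), which one then has to check to be unital, associative and independent of the choices made.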
Homotopy theory
Motivation from BRST/BV: The construction of gauge-theoretic examples of AQFTs is usually carried out in the context of the BRST/BV formalism, see e.g. [25,26]. This leads to AQFTs that take values in the symmetric monoidal category T = Ch K of cochain complexes of vector spaces over a field K of characteristic 0 (typically K = C), which in short we call dg-AQFTs. Working with cochain complexes provides a natural habitat for the ghost fields and antifields, both of which have non-zero cohomological degree, as well as the BRST and BV differentials. Looking closer into these constructions, one observes that (the isomorphism type of) their output depends on the auxiliary choices that one makes along the way, such as gauge fixings and auxiliary fields. This unpleasant and seemingly unphysical feature is overcome by realizing that isomorphisms are not the right concept to compare two cochain complexes, but rather one should use the weaker concept of quasi-isomorphisms. So, loosely speaking, the best one can hope for is that the BRST/BV formalism constructs from a given physical input (say, a classical action functional) a dg-AQFT A that is determined uniquely up to quasi-isomorphisms.
In particular, making different choices for, e.g., gauge fixings and/or auxiliary fields should lead to quasi-isomorphic outputs A ′ ∼ −→ A.
Ensuring that the output of a construction does not depend (up to quasi-isomorphisms) on auxiliary intermediate choices is hard to do by hand, hence one needs mathematical technology that takes care of these issues. Model category theory [31,30], which will be briefly discussed in the next paragraph, provides powerful tools for this purpose. (∞-category theory provides an alternative and more modern approach. We prefer to work with model categories since they are more explicit and hence better suited for concrete computations.)
The basic idea behind model categories: A model category is a bicomplete category M together with the choice of three classes of morphisms, called weak equivalences, fibrations and cofibrations, that satisfy various axioms, see e.g. [31, Section 1.1]. The weak equivalences should be thought of as a generalization of the concept of isomorphisms in ordinary category theory. In particular, two weakly equivalent objects in a model category are considered to be "the same". The role of the fibrations and cofibrations is less direct, but still important, as they are used to prove existence of, and determine models for, derived functors.
To motivate the latter concept, let us start with the important (but trivial) observation that a functor F : M → N between two model categories in general does not preserve weak equivalences, which is manifestly inconsistent with the basic idea that weakly equivalent objects should be regarded as being "the same". For certain types of functors, this issue can be resolved by deforming them to derived functors. An important case where this is possible is when the functor is part of a Quillen adjunction, which is an adjunction
F : M ⇄ N : G (3.1)
between model categories M and N, such that the left adjoint F preserves cofibrations and the right adjoint G preserves fibrations. (See also [30, Proposition 8.5.3] for a list of equivalent characterizations that is useful in practice.) Choosing a cofibrant resolution (Q : M → M, q : Q ⇒ id_M) for M and a fibrant resolution (R : N → N, r : id_N ⇒ R) for N, which exist by the model category axioms, one defines the derived functors
LF := F ∘ Q : M ⟶ N , RG := G ∘ R : N ⟶ M . (3.2)
The crucial feature of derived functors is that they preserve weak equivalences, i.e. they are compatible with the main philosophy that weakly equivalent objects are "the same".
dg-AQFT model categories and Quillen adjunctions: Motivated by the BRST/BV formalism, let us consider dg-AQFTs, i.e. AQFTs that take values in the bicomplete closed symmetric monoidal category T = Ch_K of cochain complexes of vector spaces over a field K of characteristic 0. The latter is a symmetric monoidal model category, with weak equivalences given by the quasi-isomorphisms, fibrations given by degree-wise surjective cochain maps and cofibrations determined implicitly by a lifting property, see e.g. [31, Sections 2.3 and 4.2] for the details. Using fundamental insights by Hinich [29], we obtain the following important structural result for dg-AQFTs, which was first reported in [11].
Theorem 3.1. (1) For every orthogonal category (C, ⊥), the category dgAQFT(C, ⊥) := Alg_{O_{(C,⊥)}}(Ch_K) of cochain complex valued AQFTs is a model category for the following projective model structure: A morphism ζ : A → B is a weak equivalence (resp. fibration) if each component ζ_M : A(M) → B(M), for M ∈ C, is a quasi-isomorphism (resp. degree-wise surjective cochain map). The cofibrations are determined by the left lifting property with respect to all acyclic fibrations, i.e. morphisms that are simultaneously weak equivalences and fibrations.
(2) For any orthogonal functor F : (C, ⊥_C) → (D, ⊥_D), the adjunction
F_! : dgAQFT(C, ⊥_C) ⇄ dgAQFT(D, ⊥_D) : F* (3.4)
from Proposition 2.1 is a Quillen adjunction.
The model categories from item (1) provide a natural context for receiving AQFT model constructions via the BRST/BV formalism [25,26] and, very importantly, for ensuring that the outputs do not depend on intermediate choices such as gauge fixings and/or auxiliary fields. The latter aspects have been studied in detail in the context of linear quantum gauge theories, see in particular [17,3,7]. The Quillen adjunctions from item (2) lead to interesting applications, such as derived variants of the local-to-global extensions discussed in Example 2.2. Toy-examples for the latter, which illustrate the rich interplay between gauge-theoretic structures and derived functors, are described in [11, Appendix A].
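The best-known instance of a left derived functor (standard homological algebra, recalled here only for orientation): for a ring R and an R-module M, deriving the functor (−) ⊗_R M by means of a cofibrant replacement Q(N) →∼ N (e.g. a projective resolution when N is a module) yields the derived tensor product N ⊗^L_R M := Q(N) ⊗_R M, whose homologies recover the Tor groups,
H_n( N ⊗^L_R M ) ≅ Tor_n^R(N, M) ,
independently (up to quasi-isomorphism) of the chosen resolution.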
Homotopy-coherent AQFTs: The model category dgAQFT(C, ⊥) from Theorem 3.1 describes dg-AQFTs in which all algebraic structures compose strictly, i.e. without non-trivial homotopy coherence data. One may therefore ask whether there exists a potentially more general and flexible concept of homotopy-coherent AQFTs which relates to dg-AQFTs such as, e.g., A_∞-algebras relate to associative dg-algebras and L_∞-algebras relate to dg-Lie algebras. Operad theory provides the necessary tools to investigate and answer this question, see e.g. [15,29] for more details. In the present context, a homotopy-coherent AQFT is then, by definition, a homotopy algebra over the dg-operad O_{(C,⊥)} ⊗ K ∈ dgOp_K that is obtained by turning all sets of operations into cochain complexes via (−) ⊗ K : Set → Ch_K, S ↦ S ⊗ K := ⊕_{s∈S} K. Again by definition, a homotopy algebra over O_{(C,⊥)} ⊗ K is an ordinary algebra over any choice of cofibrant (or Σ-cofibrant, to obtain smaller models) resolution O_{(C,⊥)}^∞ →∼ O_{(C,⊥)} ⊗ K of the given dg-operad. This means that homotopy-coherent AQFTs are described by the category
dgAQFT_∞(C, ⊥) := Alg_{O_{(C,⊥)}^∞}(Ch_K) , (3.5)
which we again endow with the projective model structure.
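To give a feel for what such homotopy coherence looks like in the simplest case (the standard A_∞ example, not the AQFT operad itself; signs depend on conventions): an algebra over a resolution of the associative dg-operad amounts to an A_∞-algebra, i.e. operations m_n : A^⊗n → A of degree 2−n with m_1² = 0 and
m_1 m_2 = m_2 (m_1 ⊗ id + id ⊗ m_1) ,
m_2 (m_2 ⊗ id) − m_2 (id ⊗ m_2) = m_1 m_3 + m_3 (m_1 ⊗ id ⊗ id + id ⊗ m_1 ⊗ id + id ⊗ id ⊗ m_1) ,
so that the binary product m_2 is associative only up to the homotopy m_3.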
It turns out that homotopy-coherent AQFTs do not carry additional content when working, as we always do, over a field K of characteristic 0. This is a consequence of general rectification theorems, see e.g. [15, Theorem 4.1] and [29, Theorem 2.4.5].
Theorem 3.2. Let (C, ⊥) be any orthogonal category and q : O_{(C,⊥)}^∞ →∼ O_{(C,⊥)} ⊗ K any (Σ-)cofibrant resolution of the AQFT dg-operad. Then the induced Quillen adjunction
q_! : dgAQFT_∞(C, ⊥) ⇄ dgAQFT(C, ⊥) : q* (3.6)
is a Quillen equivalence. In particular, every homotopy-coherent AQFT A_∞ ∈ dgAQFT_∞(C, ⊥) admits a strictification to a dg-AQFT that is given by the derived unit A_∞ →∼ A^st := Rq* Lq_!(A_∞).
Open Problem 3.3. Recalling from (2.5) the presentation of the AQFT operad as a Boardman-Vogt tensor product O_{(C,⊥)} = P_{(C,⊥)} ⊗_BV uAs, there is a pathway to introduce a potentially richer concept of homotopy-coherent AQFTs. Regarding both factors, i.e. P_{(C,⊥)} and uAs, as ∞-operads in the sense of [33], we can form their ∞-operadic tensor product P_{(C,⊥)} ⊗_∞ uAs. This ∞-operad is a priori different from the operads obtained by cofibrant resolutions. Indeed, from Lurie-Dunn additivity [33, Theorem 5.1.2.2], one finds that uAs ⊗_∞ uAs ≃ E_2 is the homotopically non-trivial E_2-operad, while (uAs ⊗_BV uAs)_∞ ≃ E_∞ is the homotopically trivial E_∞-operad. In the AQFT context, where Einstein causality is implemented by an Eckmann-Hilton-type argument (2.4), we believe that the ∞-operad P_{(C,⊥)} ⊗_∞ uAs describes an E_2-variant of the causality axiom. A detailed study of this ∞-operad is an interesting open problem. ⊲
Strictification of the time-slice axiom: In the previous paragraph we have silently avoided talking about the time-slice axiom. Recall that this is implemented in the 1-categorical setting by a localization of operads, see (2.5). In the model categorical context there exist two a priori different options for the time-slice axiom.
Definition 3.4. A dg-AQFT A ∈ dgAQFT(C, ⊥) is said to satisfy the strict time-slice axiom for W ⊆ Mor C if it sends each W-morphism f : M → N to an isomorphism A(f) : A(M) ≅ A(N). It is said to satisfy the homotopy time-slice axiom if it sends each W-morphism f : M → N to a quasi-isomorphism A(f) : A(M) →∼ A(N).
Clearly, the strict time-slice axiom is a special case of the homotopy variant. It is important to emphasize that the typical examples of dg-AQFTs arising from the BRST/BV formalism [25,26,3,7] satisfy the homotopy time-slice axiom, but not the strict one. Conceptually, this is not at all a problem, since quasi-isomorphic and not only isomorphic objects should be regarded as being "the same". However, from a more practical point of view, there are complications. For instance, certain constructions in AQFT, such as determining the relative Cauchy evolution (RCE) and the associated stress-energy tensor [19], become much more involved for theories that satisfy only the homotopy time-slice axiom, see e.g. [16]. This motivates and justifies the search for strictification theorems for the homotopy time-slice axiom. In order to set the stage, we first have to introduce model categories for both variants of the time-slice axiom. In the strict case, we use Proposition 2.4 and endow
dgAQFT(C, ⊥)^W := Alg_{O_{(C[W⁻¹],⊥_W)}}(Ch_K) (3.7)
with the projective model structure. In the homotopy case, we follow [21, Section 3] and consider
dgAQFT(C, ⊥)^{hoW} := Alg_{O_{(C,⊥)}[W⁻¹]_∞}(Ch_K) ≃ L_Ŵ dgAQFT(C, ⊥) . (3.8)
The first step involves the homotopical localization O_{(C,⊥)}[W⁻¹]_∞ ∈ dgOp_K of the AQFT dg-operad at W, whose category of algebras is endowed with the projective model structure. The second step uses the Quillen equivalence from [21, Theorem 3.13] to identify the left-hand side with the left Bousfield localization of the projective model category from Theorem 3.1 at a suitable class of morphisms Ŵ that is determined from W. Using the localization operad morphism O_L : O_{(C,⊥)} → O_{(C[W⁻¹],⊥_W)}, we obtain by [4, Proposition 2.24] a Quillen adjunction
L_! : dgAQFT(C, ⊥)^{hoW} ⇄ dgAQFT(C, ⊥)^W : L* (3.9)
that allows us to compare the strict and homotopy time-slice axioms. The following result is proven in [4, Theorem 3.6].
Theorem 3.5. Suppose that L : (C, ⊥) → (C[W⁻¹], ⊥_W) is a reflective orthogonal localization, which means that the orthogonal localization functor L admits a fully faithful right adjoint orthogonal functor i : (C[W⁻¹], ⊥_W) → (C, ⊥). Then (3.9) is a Quillen equivalence. In particular, in this case every dg-AQFT A ∈ dgAQFT(C, ⊥)^{hoW} satisfying the homotopy time-slice axiom admits a strictification to a dg-AQFT satisfying the strict time-slice axiom that is given by the underived unit A →∼ A^st := L* i*(A).
Example 3.6. The following types of AQFTs are covered by this theorem, see [4, Example 3.4] for more details: (i) 1-dimensional dg-AQFTs on Loc_1, (ii) 2-dimensional conformal dg-AQFTs on CLoc_2, and (iii) Haag-Kastler-type dg-AQFTs on the slice category Loc_m/M. In particular, (i) and (ii) imply that the classification results from Examples 2.5 and 2.6 generalize to the case of dg-AQFTs satisfying the homotopy time-slice axiom. ▽
Open Problem 3.7. It is currently unclear to us if there exist strictification theorems, in the form of Theorem 3.5 or other forms, for the higher-dimensional case of (Loc_m, ⊥), for m ≥ 2, with W the Cauchy morphisms. ⊲
Remark 3.8. The existence of (at least some) strictification theorems for the homotopy time-slice axiom for dg-AQFTs seems to be a phenomenon that is linked to the 1-dimensional nature of time-evolution in Lorentzian geometry. Analogous strictification results are clearly not available for topological QFTs, when formulated as locally constant factorization algebras. Indeed, locally constant factorization algebras on R^m are equivalent to algebras over the homotopically non-trivial E_m-operads [33, Theorem 5.4.5.9], hence they can not be strictified for m ≥ 2. △
Construction of examples:
The reader might now ask how one can construct explicit examples of dg-AQFTs on Loc m . In this paragraph we shall sketch the construction of linear (higher) gauge-theoretic models, which form the simplest class of dg-AQFTs. The technical details can be found in [7].
This construction starts from similar input as the one of free factorization algebras by Costello and Gwilliam [22], however it deviates in later steps. As basic input, we take the following definition.
Definition 3.9. A free BV theory on M ∈ Loc_m is a tuple (F_M, Q_M, (−,−)_M) consisting of a complex of linear differential operators (F_M, Q_M) and a compatible (−1)-shifted fiber metric (−,−)_M. A free BV theory on Loc_m is a natural family (F_M, Q_M, (−,−)_M)_{M∈Loc_m} of free BV theories on all M ∈ Loc_m.
Example 3.10. The prime example to keep in mind is linear Yang-Mills theory. In this case the complex of linear differential operators reads as
(F_M, Q_M) = ( Λ⁰M --d--> Λ¹M --δd--> Λ¹M --δ--> Λ⁰M ) , concentrated in degrees (−1, 0, 1, 2) , (3.10)
where Λ^p M denotes the p-th exterior power of the cotangent bundle (i.e. p-forms), d is the de Rham differential and δ the codifferential. As typical for the BV formalism, the underlying Z-graded vector bundle encodes the ghost fields c (in degree −1), the fields A (in degree 0), the antifields A‡ (in degree 1) and the antifields for ghosts c‡ (in degree 2). The differential Q_M encodes the action of gauge transformations via d as well as the dynamics given by the linear Yang-Mills operator δd. The (−1)-shifted fiber metric (−,−)_M : F_M ⊗ F_M → M × R[−1] is given by the pairings (A‡, A)_M = *⁻¹(A‡ ∧ *A) and (c‡, c)_M = −*⁻¹(c‡ ∧ *c) between fields/ghosts and their antifields, where * denotes the Hodge operator. This captures the (−1)-shifted symplectic structure and its dual shifted Poisson structure (the antibracket) from the BV formalism. More examples, including higher gauge-theoretic ones, can be found in [7, Examples 3.6, 3.7 and 3.8]. ▽
Associated to any free BV theory on Loc_m is a time-orderable dg-prefactorization algebra
(F_M, Q_M, (−,−)_M)_{M∈Loc_m} ⟿ BV quantization ⟿ F ∈ dgtPFA(Loc_m) , (3.11)
which can be constructed via BV quantization, see [22] and also [7] for the Lorentzian context. However, this setup is insufficient to construct a dg-AQFT, which requires the additional input of retarded G + and advanced G − Green's homotopies [6], generalizing the concept of retarded and advanced Green's operators. Informally, G ± is a (pseudo-)natural family
G_± = ( G_±^K ∈ [ Γ^∞_K(F_M), Γ^∞_{J^±_M(K)}(F_M) ]^{−1} : K ⊆ M compact ) (3.12)
of homotopies with support properties precisely as retarded/advanced Green's operators that trivialize the inclusion cochain maps incl = ∂G_±^K : Γ^∞_K(F_M) → Γ^∞_{J^±_M(K)}(F_M) for all compacta K ⊆ M.
The precise definition (including pseudo-naturality) is more involved and can be found in [6, Definition 3.5]. In this paper many pleasant properties of retarded/advanced Green's homotopies (generalizing the usual properties of retarded/advanced Green's operators [2,1]) have been proven. Most notably, we have
Theorem 3.11. (1) Uniqueness: The Kan complex of retarded/advanced Green's homotopies on (F_M, Q_M) is either empty or contractible, see [6, Proposition 3.9]. (2) Existence: Suppose that (F_M, Q_M) admits a Green's witness, i.e. a degree −1 differential operator W_M such that P_M := Q_M W_M + W_M Q_M is a Green hyperbolic differential operator. Then (F_M, Q_M) admits both a retarded and an advanced Green's homotopy, see [6, Theorem 4.8].
Example 3.12. The linear Yang-Mills complex from Example 3.10 admits a natural Green's witness given by
W_M = ( Λ⁰M <--δ-- Λ¹M <--id-- Λ¹M <--d-- Λ⁰M ) in degrees (−1, 0, 1, 2) . (3.13)
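As a consistency check (our computation, using the definitions above): for the witness (3.13) one finds P_M := Q_M W_M + W_M Q_M = □ = δd + dδ in each degree, which is Green hyperbolic; and, in general, for a Green's witness W_M the candidate G_± := W_M E_± indeed trivializes the inclusions, since
∂G_± = Q_M W_M E_± + W_M E_± Q_M = (Q_M W_M + W_M Q_M) E_± = P_M E_± = incl ,
where E_± are the retarded/advanced Green's operators of P_M and we used that E_± commutes with Q_M (which follows from Q_M P_M = P_M Q_M).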
The associated retarded/advanced Green's homotopies read as
G_± = W_M E_± = ( Ω⁰(M) <--δE_±-- Ω¹(M) <--E_±-- Ω¹(M) <--dE_±-- Ω⁰(M) ) , mapping Ω^p_K(M) into sections supported in J^±_M(K) , (3.14)
where E_± are the retarded/advanced Green's operators for the d'Alembertian □ = δd + dδ. ▽
The theory of Green's witnesses and Green's homotopies allows us to associate to any free BV theory on Loc_m that is equipped with a natural Green's witness a dg-AQFT
(F_M, Q_M, (−,−)_M, W_M)_{M∈Loc_m} ⟿ CCR quantization ⟿ A ∈ dgAQFT(Loc_m, ⊥)^{hoW} (3.15)
that satisfies the homotopy time-slice axiom. The latter can be constructed via canonical-commutation-relations (CCR) quantization, see [7] for the details. This construction is compatible in the following sense with the dg-tPFA construction (3.11): The functor (2.15) generalizes to the dg-setting, yielding a right Quillen functor
Φ* : dgAQFT(Loc_m, ⊥)^{hoW} ⟶ dgtPFA(Loc_m)^{hoW} . (3.16)
Applying this functor to the dg-AQFT (3.15) yields a dg-tPFA that is isomorphic to (3.11), i.e. Φ*(A) ≅ F for any free BV theory on Loc_m with a natural Green's witness.
Open Problem 3.13. We expect, but currently do not know how to prove, that (3.16) induces a Quillen equivalence when restricted to suitably defined additive objects. A formal model categorical proof of this claim, which would generalize the Comparison Theorem 2.7 from 1-categorical targets T to the richer dg-context, is an interesting and important problem for future works. ⊲ Remark 3.14. For the construction of interacting models via perturbative AQFT [25,26] we refer the reader to the relevant chapters of this volume or to the monograph [36]. △
Higher categories
Why cochain complexes aren't enough: The unital associative (dg-)algebra that is assigned by a (dg-)AQFT A to a spacetime M should be interpreted as a quantization of the function algebra on the moduli "space" of fields on M , i.e.
A(M) = O( moduli "space" of fields on M ) ∈ Alg_uAs(Ch_K) . (4.1)
It therefore depends strongly on the type of "spaces" one considers whether or not such an assignment is sensible. For instance, in the context of (derived) algebraic geometry, there is just a small class of spaces, called (derived) affine schemes, that are captured by their function algebras. Other spaces, such as (derived) stacks, are in general not. For example, considering the classifying stack BG := [ * /G] associated with a reductive affine group scheme G, one finds that its function algebra
O(BG) ≃ N^•(G, K) ≃ K = O(∗) (4.2)
is weakly equivalent to that of a point, i.e. all information about the group G gets lost. (Here N • denotes normalized group cochains.) This insufficiency of function algebras is a non-perturbative gauge-theoretic feature. Indeed, considering only the formal neighborhood of a point x : * → X in a (derived) stack X, i.e. working perturbatively around a given field, one obtains a formal moduli problem that admits an algebraic description in terms of L ∞ -algebras or dually in terms of their Chevalley-Eilenberg dg-algebras, see [32,34].
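For orientation (a standard construction, in our notation): for a Lie algebra g, the Chevalley-Eilenberg dg-algebra is
CE(g) := ( Λ^• g*, d_CE ) , (d_CE ω)(x, y) = −ω([x, y]) for ω ∈ g* ,
and it describes the formal neighborhood of the base point of BG; in other words, perturbation theory around the trivial gauge field only sees the Lie algebra g, not the global structure of the group G.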
This issue is well-known to (derived) algebraic geometers, who have proposed an interesting solution, see e.g. [37] for an informal overview and [20] for the technical details. Instead of assigning to a (derived) stack X its function algebra O(X) ∈ Alg_E∞(Ch_K), which as we have seen above is insufficient, one should better consider its symmetric monoidal dg-category of quasi-coherent modules QCoh(X) ∈ Alg_E∞(dgCat_K). Informally speaking, the latter describes the (dg-)vector bundles over X, hence it is a priori richer than the function algebra, which one can think of as the sections of the trivial line bundle. As evidence that this approach is better behaved, let us consider again the classifying stack BG associated with a reductive affine group scheme. One can show (see e.g. [10, Proposition 2.17] for a spelled out proof) that the associated symmetric monoidal dg-category of quasi-coherent modules
QCoh(BG) ≃ Ch_K^{O(G)} =: dgRep(G) (4.3)
is given by the symmetric monoidal dg-category of O(G)-dg-comodules, or in other words G-representations on cochain complexes. The group G can be reconstructed from this symmetric monoidal dg-category via a Tannakian reconstruction argument, hence no information is lost.
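For instance (a standard fact, not spelled out in the source), Tannaka duality recovers the group as the tensor automorphisms of the forgetful functor,
G ≅ Aut^⊗( forget : Rep(G) → Vec_K ) ,
which makes precise the sense in which Rep(G), and a fortiori dgRep(G), remembers strictly more than the function algebra O(BG) ≃ K.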
Transferring these ideas to the context of AQFT motivates us to replace (4.1) with the assignment of quantized dg-categories (in the sense of [37,20]) to spacetimes M , which we informally interpret as
A(M) = QCoh( moduli "space" of fields on M ) ∈ dgCat_K . (4.4)
In the special case where the moduli "spaces" of fields are only 1-stacks, without derived structure, one can simplify this framework by considering (locally presentable) K-linear categories instead of dg-categories. In the running example given by the classifying stack BG, this simplification yields a restriction from dg-representations dgRep(G) to the ordinary symmetric monoidal category
Rep(G) := Vec_K^{O(G)}
of O(G)-comodules. In the paragraphs below, we shall start with this simplified setting because it can be formalized in the simpler context of 2-category theory instead of ∞-category theory.
Foundations for 2-AQFTs: Let us denote by Pr_K the closed symmetric monoidal 2-category of locally presentable K-linear categories, co-continuous K-linear functors and natural transformations. See [9] for a concise review. Pr_K serves as the target for the 2-categorical generalization of AQFT that implements the ideas explained in the previous paragraph. For the technical details see [9].
Definition 4.1. The 2-category of 2-AQFTs on an orthogonal category (C, ⊥) is defined as the 2-category of weak algebras over the operad P_{(C,⊥)} (see (2.5) for its relation to the AQFT operad) with values in Pr_K, i.e.
2AQFT(C, ⊥) := Alg^{weak}_{P_{(C,⊥)}}(Pr_K) . (4.5)
Remark 4.2. Spelling this out in more detail, a 2-AQFT A ∈ 2AQFT(C, ⊥) is an assignment
M ⟼ A(M) ∈ Pr_K (4.6a)
of a locally presentable K-linear category to each object M ∈ C, together with the assignment
A(M_1) ⊠ ⋯ ⊠ A(M_n) ⟶ A(N) (4.6b)
of a co-continuous K-linear functor to each tuple (f_i : M_i → N)_{i=1,…,n} of mutually orthogonal C-morphisms, i.e. f_i ⊥ f_j for all i ≠ j, and of coherence data witnessing the compatibility with compositions, units and permutation actions. Comparing this with ordinary AQFTs (2.3), we note that 2-AQFTs are simply given by replacing the category of unital associative algebras Alg_uAs(Vec_K) by the 2-category Pr_K of locally presentable K-linear categories. This is precisely the generalization that was motivated and explained in the previous paragraph. △
Remark 4.3. Higher algebraic structures that are similar to the 2-AQFTs from Definition 4.1 also appeared in the study of category valued factorization homology by Ben-Zvi, Brochier and Jordan [13,14]. △
The most pressing question is of course whether or not the concept of 2-AQFTs is indeed a generalization of the ordinary concept of AQFTs from Section 2. The following result, which was proven in [9, Theorem 4.3], shows that this is the case.
Theorem 4.4. For each orthogonal category (C, ⊥), there exists a bicategorical adjunction
ι : AQFT(C, ⊥) ⇄ 2AQFT(C, ⊥) : π (4.7)
that exhibits ordinary AQFTs as a coreflective full 2-subcategory of the 2-category of 2-AQFTs.
In simple terms, this result states that ordinary AQFTs can be considered equivalently as special examples of 2-AQFTs by applying the inclusion functor ι. The 2-AQFT ι(A) associated with an ordinary A ∈ AQFT(C, ⊥) takes a particularly simple form: It assigns to M ∈ C the category ι(A)(M) = A(M)Mod ∈ Pr_K of modules over the algebra A(M) ∈ Alg_uAs(Vec_K) and to a tuple of mutually orthogonal C-morphisms the induced module functor associated with the algebra map ⊗_{i=1}^n A(M_i) → A(N). It is natural to ask whether there exist 2-AQFTs that do not arise from ordinary AQFTs via the inclusion functor ι. We call such 2-AQFTs non-truncated, while the ones arising from ι are called truncated. The mathematical problem can be formulated as follows: Find objects A ∈ 2AQFT(C, ⊥) such that the counit ε_A : ιπ(A) → A of the adjunction (4.7) is not an equivalence. Simple examples of non-truncated 2-AQFTs can be obtained by an orbifold construction: Suppose that A ∈ AQFT(C, ⊥) is an ordinary AQFT that is endowed with the action of a finite group G. The associated orbifold 2-AQFT A^G ∈ 2AQFT(C, ⊥) is defined by assigning to M ∈ C the category
A^G(M) := A(M)Mod^{O(G)} ∈ Pr_K (4.8)
of G-equivariant A(M)-modules and to a tuple of mutually orthogonal C-morphisms the associated induced module functor, see [9, Section 5] for the details. The question whether or not the 2-AQFT A^G is truncated can be reduced to an algebraic problem, see [9, Theorem 5.11].
Theorem 4.5. A^G ∈ 2AQFT(C, ⊥) is truncated if and only if the algebra extension A(M)^{O(G)} ⊆ A(M) of G-invariants is O(G)-Hopf-Galois, for all M ∈ C.
Example 4.6. The simplest example for a non-truncated 2-AQFT is given by the orbifold construction K^G ∈ 2AQFT(C, ⊥) associated with the trivial AQFT K ∈ AQFT(C, ⊥) and the trivial action of a non-trivial finite group G. ▽
Towards ∞-AQFTs: The previous paragraph should be seen as a warm-up for the richer case where the target category is the symmetric monoidal ∞-category dgCat_K of dg-categories. This will cover a whole range of new examples arising from quantizations of derived stacks.
Definition 4.7. The ∞-category of ∞-AQFTs on an orthogonal category (C, ⊥) is defined as the ∞-category of algebras over the operad P (C,⊥) (in the sense of [33]) with values in dgCat K , i.e.
∞AQFT(C, ⊥) := Alg^∞_{P_{(C,⊥)}}(dgCat_K) . (4.9)
Unfortunately, this framework is presently not explored in much detail. Let us however note that there is a construction that assigns to any dg-AQFT A ∈ dgAQFT(C, ⊥) an ∞-AQFT. The latter is given by assigning to objects M ∈ C the dg-categories A(M)dgMod ∈ dgCat_K of A(M)-dg-modules and to tuples of mutually orthogonal C-morphisms the induced dg-module functor.
Open Problem 4.8. Generalize Theorem 4.4 by establishing an ∞-adjunction
ι : dgAQFT(C, ⊥) ⇄ ∞AQFT(C, ⊥) : π (4.10)
that relates the dg-AQFTs from Section 3 to ∞-AQFTs. It would be particularly interesting to prove that this ∞-adjunction exhibits the ∞-category associated with the model category dgAQFT(C, ⊥) as a coreflective full ∞-subcategory of ∞AQFT(C, ⊥). ⊲
Despite the currently undeveloped foundations of ∞-AQFTs, there is promising evidence that interesting examples of such objects can be obtained from quantization constructions in derived algebraic geometry [20,37]. A first attempt towards constructing examples was taken in [10, Section 4]. In more detail, consider the canonical phase space of Yang-Mills theory with structure group any reductive affine group scheme G on a lattice approximation of the Cauchy surface Σ. This phase space is the derived cotangent stack
T*[ Con_G(Σ) / G(Σ) ] (4.11)
over the quotient stack [ Con_G(Σ) / G(Σ) ] of G-connections modulo the gauge group on Σ, together with the canonical unshifted symplectic structure. A quantization of the dg-category of quasi-coherent modules over this derived stack was described in [10, Proposition 3.13], which can be interpreted in terms of D-modules over the quotient stack [ Con_G(Σ) / G(Σ) ]. It was then shown in [10, Proposition 4.10] that the assignment
A(Σ) := QCoh( T*[ Con_G(Σ) / G(Σ) ] ) ∈ dgCat_K (4.12)
of these quantized dg-categories to lattices (or more generally to directed graphs) can be promoted to a (lattice variant of) an ∞-AQFT in the sense of Definition 4.7.
Open Problem 4.9. It would be interesting to construct examples of ∞-AQFTs on spacetime lattices instead of spatial ones. This would allow one to investigate the time-slice axiom in the context of ∞-AQFT, which tentatively should be of the form that each Cauchy morphism f : M → N is assigned a quasi-equivalence A(f) : A(M) →∼ A(N) of dg-categories. ⊲
[1] C. Bär, "Green-hyperbolic operators on globally hyperbolic spacetimes," Commun. Math. Phys. 333, no. 3, 1585 (2015) [arXiv:1310.0738 [math-ph]].
[2] C. Bär, N. Ginoux and F. Pfäffle, Wave equations on Lorentzian manifolds and quantization, Eur. Math. Soc., Zürich (2007) [arXiv:0806.1036 [math.DG]].
[3] M. Benini, S. Bruinsma and A. Schenkel, "Linear Yang-Mills theory as a homotopy AQFT," Commun. Math. Phys. 378, no. 1, 185 (2020) [arXiv:1906.00999 [math-ph]].
[4] M. Benini, V. Carmona and A. Schenkel, "Strictification theorems for the homotopy time-slice axiom," Lett. Math. Phys. 113, no. 1, 20 (2023) [arXiv:2208.04344 [math-ph]].
[5] M. Benini, L. Giorgetti and A. Schenkel, "A skeletal model for 2d conformal AQFTs," Commun. Math. Phys. 395, no. 1, 269-298 (2022) [arXiv:2111.01837 [math-ph]].
[6] M. Benini, G. Musante and A. Schenkel, "Green hyperbolic complexes on Lorentzian manifolds," arXiv:2207.04069 [math-ph].
[7] M. Benini, G. Musante and A. Schenkel, "Quantization of Lorentzian free BV theories: factorization algebra vs algebraic quantum field theory," arXiv:2212.02546 [math-ph].
[8] M. Benini, M. Perin and A. Schenkel, "Model-independent comparison between factorization algebras and algebraic quantum field theory on Lorentzian manifolds," Commun. Math. Phys. 377, 971-997 (2020) [arXiv:1903.03396 [math-ph]].
[9] M. Benini, M. Perin, A. Schenkel and L. Woike, "Categorification of algebraic quantum field theories," Lett. Math. Phys. 111, no. 2, 35 (2021) [arXiv:2003.13713 [math-ph]].
[10] M. Benini, J. P. Pridham and A. Schenkel, "Quantization of derived cotangent stacks and gauge theory on directed graphs," arXiv:2201.10225 [math-ph].
[11] M. Benini, A. Schenkel and L. Woike, "Homotopy theory of algebraic quantum field theories," Lett. Math. Phys. 109, no. 7, 1487 (2019) [arXiv:1805.08795 [math-ph]].
[12] M. Benini, A. Schenkel and L. Woike, "Operads for algebraic quantum field theory," Commun. Contemp. Math. 23, no. 2, 2050007 (2021) [arXiv:1709.08657 [math-ph]].
[13] D. Ben-Zvi, A. Brochier and D. Jordan, "Integrating quantum groups over surfaces," J. Topol. 11, no. 4, 874-917 (2018) [arXiv:1501.04652 [math.QA]].
[14] D. Ben-Zvi, A. Brochier and D. Jordan, "Quantum character varieties and braided module categories," Sel. Math. New Ser. 24, 4711-4748 (2018) [arXiv:1606.04769 [math.QA]].
[15] C. Berger and I. Moerdijk, "Resolution of coloured operads and rectification of homotopy algebras," in: A. Davydov, M. Batanin, M. Johnson, S. Lack and A. Neeman (eds.), Categories in algebra, geometry and mathematical physics, Contemp. Math. 431, 31-58, Amer. Math. Soc., Providence, RI (2007).
[16] S. Bruinsma, C. J. Fewster and A. Schenkel, "Relative Cauchy evolution for linear homotopy AQFTs," Commun. Math. Phys. 392, no. 2, 621-657 (2022) [arXiv:2108.10592 [math-ph]].
[17] S. Bruinsma and A. Schenkel, "Algebraic field theory operads and linear quantization," Lett. Math. Phys. 109, no. 11, 2531-2570 (2019) [arXiv:1809.05319 [math-ph]].
[18] R. Brunetti, C. Dappiaggi, K. Fredenhagen and J. Yngvason, Advances in algebraic quantum field theory, Springer Verlag, Heidelberg (2015).
[19] R. Brunetti, K. Fredenhagen and R. Verch, "The generally covariant locality principle: A new paradigm for local quantum field theory," Commun. Math. Phys. 237, 31-68 (2003) [arXiv:math-ph/0112041].
[20] D. Calaque, T. Pantev, B. Toën, M. Vaquié and G. Vezzosi, "Shifted Poisson structures and deformation quantization," J. Topol. 10, no. 2, 483-584 (2017) [arXiv:1506.03699 [math.AG]].
[21] V. Carmona, "New model structures for algebraic quantum field theory," Lett. Math. Phys. 113, 33 (2023) [arXiv:2107.14176 [math-ph]].
[22] K. Costello and O. Gwilliam, Factorization algebras in quantum field theory: Volume 1, New Mathematical Monographs 31, Cambridge University Press, Cambridge (2017).
[23] K. Costello and O. Gwilliam, Factorization algebras in quantum field theory: Volume 2, New Mathematical Monographs 41, Cambridge University Press, Cambridge (2021).
[24] K. Fredenhagen, "Global observables in local quantum physics," in: H. Araki, K. R. Ito, A. Kishimoto and I. Ojima (eds.), Quantum and non-commutative analysis: Past, present and future perspectives, Kluwer Academic Publishers (1993).
[25] K. Fredenhagen and K. Rejzner, "Batalin-Vilkovisky formalism in the functional approach to classical field theory," Commun. Math. Phys. 314, 93-127 (2012) [arXiv:1101.5112 [math-ph]].
[26] K. Fredenhagen and K. Rejzner, "Batalin-Vilkovisky formalism in perturbative algebraic quantum field theory," Commun. Math. Phys. 317, 697-725 (2013) [arXiv:1110.5232 [math-ph]].
[27] O. Gwilliam and K. Rejzner, "Relating nets and factorization algebras of observables: free field theories," Commun. Math. Phys. 373, 107-174 (2020) [arXiv:1711.06674 [math-ph]].
[28] R. Haag and D. Kastler, "An algebraic approach to quantum field theory," J. Math. Phys. 5, 848 (1964).
[29] V. Hinich, "Rectification of algebras and modules," Doc. Math. 20, 879-926 (2015) [arXiv:1311.4130 [math.QA]].
[30] P. S. Hirschhorn, Model categories and their localizations, Math. Surveys Monogr. 99, Amer. Math. Soc., Providence, RI (2003).
[31] M. Hovey, Model categories, Math. Surveys Monogr. 63, Amer. Math. Soc., Providence, RI (1999).
[32] J. Lurie, "Derived algebraic geometry X: Formal moduli problems," https://www.math.ias.edu/~lurie/papers/DAG-X.pdf.
[33] J. Lurie, Higher algebra, https://www.math.ias.edu/~lurie/papers/HA.pdf.
[34] J. P. Pridham, "Unifying derived deformation theories," Adv. Math. 224, no. 3, 772-826 (2010) [arXiv:0705.0344 [math.AG]].
[35] K. H. Rehren, "Chiral observables and modular invariants," Commun. Math. Phys. 208, 689-712 (2000) [arXiv:hep-th/9903262].
[36] K. Rejzner, Perturbative algebraic quantum field theory: An introduction for mathematicians, Mathematical Physics Studies, Springer Cham (2016).
[37] B. Toën, "Derived algebraic geometry and deformation quantization," Proceedings of the International Congress of Mathematicians, Seoul (2014) [arXiv:1403.6995 [math.AG]].
[38] D. Yau, Colored operads, Graduate Studies in Mathematics 170, Amer. Math. Soc., Providence, RI (2016).
| [] |
[
"Applications of Lattice Gauge Equivariant Neural Networks",
"Applications of Lattice Gauge Equivariant Neural Networks"
] | [
"Matteo Favoni \nInstitute for Theoretical Physics\nTU Wien\nWiedner Hauptstr. 8-101040ViennaAustria\n",
"Andreas Ipp \nInstitute for Theoretical Physics\nTU Wien\nWiedner Hauptstr. 8-101040ViennaAustria\n",
"David I Müller \nInstitute for Theoretical Physics\nTU Wien\nWiedner Hauptstr. 8-101040ViennaAustria\n"
] | [
"Institute for Theoretical Physics\nTU Wien\nWiedner Hauptstr. 8-101040ViennaAustria",
"Institute for Theoretical Physics\nTU Wien\nWiedner Hauptstr. 8-101040ViennaAustria",
"Institute for Theoretical Physics\nTU Wien\nWiedner Hauptstr. 8-101040ViennaAustria"
] | [] | The introduction of relevant physical information into neural network architectures has become a widely used and successful strategy for improving their performance. In lattice gauge theories, such information can be identified with gauge symmetries, which are incorporated into the network layers of our recently proposed Lattice Gauge Equivariant Convolutional Neural Networks (L-CNNs). L-CNNs can generalize better to differently sized lattices than traditional neural networks and are by construction equivariant under lattice gauge transformations. In these proceedings, we present our progress on possible applications of L-CNNs to Wilson flow or continuous normalizing flow. Our methods are based on neural ordinary differential equations which allow us to modify link configurations in a gauge equivariant manner. For simplicity, we focus on simple toy models to test these ideas in practice. * | 10.1051/epjconf/202227409001 | [
"https://export.arxiv.org/pdf/2212.00832v1.pdf"
] | 254,220,916 | 2212.00832 | 25315cbbadda2addea8630bc9608931eca9354be |
Applications of Lattice Gauge Equivariant Neural Networks
Matteo Favoni
Institute for Theoretical Physics
TU Wien
Wiedner Hauptstr. 8-101040ViennaAustria
Andreas Ipp
Institute for Theoretical Physics
TU Wien
Wiedner Hauptstr. 8-101040ViennaAustria
David I Müller
Institute for Theoretical Physics
TU Wien
Wiedner Hauptstr. 8-101040ViennaAustria
Applications of Lattice Gauge Equivariant Neural Networks
2 Speaker and corresponding author
The introduction of relevant physical information into neural network architectures has become a widely used and successful strategy for improving their performance. In lattice gauge theories, such information can be identified with gauge symmetries, which are incorporated into the network layers of our recently proposed Lattice Gauge Equivariant Convolutional Neural Networks (L-CNNs). L-CNNs can generalize better to differently sized lattices than traditional neural networks and are by construction equivariant under lattice gauge transformations. In these proceedings, we present our progress on possible applications of L-CNNs to Wilson flow or continuous normalizing flow. Our methods are based on neural ordinary differential equations which allow us to modify link configurations in a gauge equivariant manner. For simplicity, we focus on simple toy models to test these ideas in practice. *
Introduction
In the past decade, neural networks (NNs) have been established as an essential tool with numerous applications in e.g. computer science and the natural sciences. The crucial role played by symmetries in a wide range of scientific problems has motivated the idea that including such symmetries in the NN architecture could be beneficial in enhancing their performance. For example, convolutional neural networks (CNNs) are based on the inclusion of translational symmetry as an inherent property of their architecture. In computer vision problems such as image classification, this proved to be a powerful idea, since the position of a feature to be detected is irrelevant; only its presence matters. This approach has been generalized to include other symmetries in group equivariant convolutional neural networks (G-CNNs) [1], which take into account not just translational, but e.g. also rotational and reflection symmetry. Recently, this idea has been further extended to local symmetries [2]. The more general framework dealing with symmetries in neural networks is called geometric deep learning [3].
In theoretical physics, and more specifically in lattice field theories with global symmetries, CNNs or G-CNNs have been successfully applied to solving regression problems and detecting phase transitions [4][5][6][7][8][9][10]. In the context of Abelian and non-Abelian gauge theories, which exhibit local symmetries, there has been progress in the direction of incorporating gauge symmetry in the network architecture [11][12][13][14][15]. For example, gauge equivariant normalizing flows [11,12,15,16] can be used in place of Monte Carlo simulations to sample uncorrelated gauge configurations while retaining gauge symmetry. Similarly, a lattice gauge equivariant convolutional neural network (L-CNN) was proposed in our paper [17], in which the elementary layers of the architecture individually preserve gauge symmetry. L-CNNs have been used successfully for regression tasks and in principle can also be employed for the generation of gauge configurations. The continuous flow approach proposed in [18,19] provides a continuous generalization of normalizing flows applied to lattice field theory. In contrast to normalizing flows, this continuous formulation allows for a straightforward inclusion of exact global symmetries. At its core, continuous flows are an application of neural ordinary differential equations (NODEs) [20], which are ordinary differential equations (ODEs) parametrized by NNs.
In these proceedings, we first review the basics of lattice gauge theory and the L-CNN architecture, and we show how NODEs can be modified to study Wilson flow [21] and exemplify it with an SU(2) toy model.
Lattice gauge equivariant neural networks
Lattice gauge theory is a discretized version of SU($N_c$) Yang-Mills theory, in which spacetime is approximated by a periodic hypercubic lattice in D + 1 dimensions with imaginary time and with lattice spacing a. In the lattice discretization, the continuous gauge fields $A_\mu$ are replaced by the link variables $U_{x,\mu}$ via the following definition:

$U_{x,\mu} = \mathcal{P} \exp\left( i g \int_{x}^{x+\hat{\mu}} \mathrm{d}x'^{\nu} A_{\nu}(x') \right), \qquad (1)$
where P denotes path ordering, g is the coupling constant and the integral is performed over the straight line connecting the site x to the site x +μ. The gauge fields are elements of the su(N c ) algebra, while the links can be interpreted as the parallel transporters along the lattice edges and live in the SU(N c ) group. In practice, we employ the fundamental representation of U x,µ , where links are represented as complex N c × N c matrices. It is possible to multiply adjacent links, the repetition of which leads to arbitrary Wilson lines. If the start and end point of a Wilson line coincide, closed loops are formed and are called Wilson loops. The simplest Wilson loop on a hypercubic lattice is the plaquette given by
$U_{x,\mu\nu} = U_{x,\mu}\, U_{x+\hat{\mu},\nu}\, U^{\dagger}_{x+\hat{\nu},\mu}\, U^{\dagger}_{x,\nu}. \qquad (2)$
The Wilson action [22], formulated in terms of plaquettes,
$S_W[U] = \frac{2}{g^2} \sum_{x\in\Lambda} \sum_{\mu<\nu} \mathrm{Re}\,\mathrm{Tr}\left[ \mathbb{1} - U_{x,\mu\nu} \right], \qquad (3)$
is equivalent to the Yang-Mills action in the continuum limit a → 0. A general lattice gauge transformation applied to links
$U_{x,\mu} \rightarrow \Omega_x\, U_{x,\mu}\, \Omega^{\dagger}_{x+\hat{\mu}} \qquad (4)$
induces a local transformation of the plaquettes
$U_{x,\mu\nu} \rightarrow \Omega_x\, U_{x,\mu\nu}\, \Omega^{\dagger}_{x}. \qquad (5)$
These transformations leave the Wilson action unchanged, meaning the theory is invariant under SU($N_c$) lattice gauge transformations. In order to build up an L-CNN [17], we can consider its individual layers, each of which is designed to respect gauge equivariance. First, the input consists of the set of gauge links U of a particular lattice configuration and locally transforming objects W, which in practice we choose to be the plaquettes, but which can also be the Polyakov loops (closed Wilson lines wrapping around the periodic boundary of the lattice). The L-Conv layer is a gauge equivariant convolution, which acts as a parallel transporter of locally transforming objects W, while L-Bilin performs a multiplication of such objects (more specifically, it is a bilinear layer). We proved that the repeated application of these two operations can grow arbitrarily sized Wilson (or Polyakov) loops. Moreover, it is possible to introduce non-linearity via L-Act layers, which behave like activation functions in traditional CNNs. The Trace layer yields a gauge invariant output that can be passed to a traditional CNN. A possible realization of such a network is depicted in Fig. 1. By virtue of their ability to generate any loop and of their non-linearity, L-CNNs can be seen as universal approximators of gauge-equivariant functions on the lattice.
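To make the objects in Eqs. (2)-(5) concrete, the following self-contained NumPy sketch builds random SU(2) links on a small two-dimensional periodic lattice, computes plaquettes and the Wilson action, and checks numerically that the action is invariant under a random gauge transformation. It illustrates the definitions above, not the L-CNN layers themselves; the lattice size, dimensionality and helper names (random_su2, shift, plaquette, wilson_action) are our own choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pauli matrices, used to build SU(2) elements from normalized quaternions.
SIGMA = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

def random_su2(*shape):
    # U = u0*1 + i*u_a*sigma_a with (u0, u) uniform on the 3-sphere.
    u = rng.normal(size=shape + (4,))
    u /= np.linalg.norm(u, axis=-1, keepdims=True)
    return (u[..., 0, None, None] * np.eye(2)
            + 1j * np.einsum('...a,aij->...ij', u[..., 1:], SIGMA))

L, D = 4, 2                  # lattice extent and number of dimensions
U = random_su2(L, L, D)      # U[x0, x1, mu] is the link at site x in direction mu

def shift(field, mu):
    # Field evaluated at x + mu-hat (periodic boundary conditions).
    return np.roll(field, -1, axis=mu)

def dagger(M):
    return M.conj().swapaxes(-1, -2)

def plaquette(U, mu, nu):
    # Eq. (2): U_{x,mu nu} = U_{x,mu} U_{x+mu,nu} U^dag_{x+nu,mu} U^dag_{x,nu}.
    return (U[..., mu, :, :] @ shift(U, mu)[..., nu, :, :]
            @ dagger(shift(U, nu)[..., mu, :, :]) @ dagger(U[..., nu, :, :]))

def wilson_action(U, g=1.0):
    # Eq. (3): S_W = (2/g^2) sum_x sum_{mu<nu} Re Tr[1 - U_{x,mu nu}].
    S = 0.0
    for mu in range(D):
        for nu in range(mu + 1, D):
            tr = np.trace(plaquette(U, mu, nu), axis1=-2, axis2=-1).real
            S += np.sum(2.0 - tr)
    return 2.0 / g**2 * S

# Eq. (4): U_{x,mu} -> Omega_x U_{x,mu} Omega^dag_{x+mu}.
Omega = random_su2(L, L)
U_gauged = np.stack([Omega @ U[..., mu, :, :] @ dagger(shift(Omega, mu))
                     for mu in range(D)], axis=-3)

print(wilson_action(U), wilson_action(U_gauged))  # agree to rounding error
```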
Among the relevant results found in [17], it is worth mentioning that the L-CNNs performed very well on the regression of Wilson loops up to a size of 4×4 and simple observables such as the topological charge density, beating traditional CNNs on the same task.
Adaptation of NODEs to lattice gauge theory
NODEs are ODEs parametrized by neural networks [20]. As in the original paper, we will focus on first order ODEs:
$\frac{\mathrm{d}z}{\mathrm{d}t} = f(z(t), \theta, t). \qquad (6)$
The unknown function z(t) is a time-dependent D-dimensional vector and f(z(t), θ, t) is a D-dimensional function parametrized by a priori unknown weights θ. In particular one can choose f(z(t), θ, t) to be represented by a NN. NODEs can be understood as generalizations of residual networks [23] with continuous depth, where the time coordinate t is used in place of the discrete depth of the network. Starting with an input state z 0 = z(t 0 ) at t = t 0 , the NODE can be formally solved by
$z(t_1) = z_0 + \int_{t_0}^{t_1} \mathrm{d}t'\, f(z(t'), \theta, t'), \qquad (7)$
which provides predicted states z(t 1 ) at some final time t = t 1 . In this manner, NODEs map arbitrary input states to output states similar to generic NNs. The mapping depends on the NN architecture and the weights θ. NODEs can thus be used to solve regression problems:
given a dataset characterized by the initial conditions $z^i_0 = z^i(t_0)$ (where $i \in \{1, \dots, N_{\mathrm{samples}}\}$), which are used as input, and the desired output vectors $\tilde{z}^i_1$, which represent the labels, the weights θ can be optimized such that the final states approximate the labels as accurately as possible. In practice, this is done with the aid of an ODE integrator, such as Euler or Runge-Kutta. We can require that the discrepancy between the labels and the predicted final states is minimized by introducing a loss function such as the mean squared error (MSE), $L(\theta) = \sum_i (\tilde{z}^i_1 - z^i(t_1))^2 / N_{\mathrm{samples}}$, and run the training procedure in order to optimize the weights θ.
While the approach described above only uses the final state labels in the optimization problem, it is also possible to include the discrepancies of the whole state evolution (i.e. more points t j along the trajectory z(t j )) in the loss function for successful training. If we only use the final states at t 1 , then it is crucial that the dataset provides sufficient information to reconstruct the underlying dynamics.
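As a minimal illustration of this regression setup, the PyTorch sketch below trains a NODE with an explicit Euler integrator and an MSE loss over the final states, using standard backpropagation through the unrolled solver. The target dynamics dz/dt = -z (chosen so the labels are known in closed form), the network size and all hyperparameters are arbitrary choices made for the example; they are not taken from the original works.

```python
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy ground-truth flow dz/dt = -z, so z(t1) = z0 * exp(-(t1 - t0)).
t0, t1, n_steps = 0.0, 1.0, 20
z0 = torch.randn(512, 2)
z1_label = z0 * math.exp(-(t1 - t0))

# f(z, theta, t): a small MLP, with t appended as an extra input feature.
f = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 2))

def odeint_euler(z, t0, t1, n_steps):
    # Discretization of Eq. (7) with explicit Euler steps.
    dt = (t1 - t0) / n_steps
    t = t0
    for _ in range(n_steps):
        t_col = torch.full((z.shape[0], 1), t)
        z = z + dt * f(torch.cat([z, t_col], dim=1))
        t += dt
    return z

opt = torch.optim.Adam(f.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = ((odeint_euler(z0, t0, t1, n_steps) - z1_label) ** 2).mean()  # MSE
    loss.backward()   # backpropagation through the unrolled solver
    opt.step()
print(f"final MSE: {loss.item():.2e}")
```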
We can adapt the previous scheme to study continuous flow applied to lattice gauge configurations,

$\frac{\mathrm{d}U_{x,\mu}(\tau)}{\mathrm{d}\tau} = i H_{\mu}[U(\tau), \theta, \tau]\, U_{x,\mu}(\tau), \qquad (8)$
where $U_{x,\mu} \in SU(N_c)$ are gauge links, τ is flow time and $H_{\mu}[U(\tau), \theta, \tau]$ is a NN parametrized by the weights θ with a traceless and Hermitian output. This last requirement guarantees that the gauge links do not leave the group during the evolution. In order to retain gauge equivariance, $H_{\mu}$ can be modeled with an L-CNN. Our dataset consists of the initial conditions $U^i_{x,\mu,0} = U^i_{x,\mu}(\tau_0)$ and the desired output configurations $\tilde{U}^i_{x,\mu,1}$, which define input and labels respectively. A standard ODE integrator would in general break the group structure, so we make use of the iterative application of the exponential map
$U^i_{x,\mu}(\tau_{j+1}) = \exp\left( i H_{\mu}[U^i(\tau_j), \theta, \tau_j]\, \Delta\tau \right) U^i_{x,\mu}(\tau_j) \qquad (9)$
for time evolution. Since H µ is traceless and Hermitian, i.e. it can be understood as a su(N c ) algebra element, the links remain in the group manifold. The final configuration is then used in a loss function, such as
$L(\theta) = \frac{1}{N_{\mathrm{samples}}} \sum_{x,\mu} \sum_i \left\| \tilde{U}^i_{x,\mu,1} - U^i_{x,\mu}(\tau_1) \right\|^2, \qquad (10)$

where $\|\cdot\|$ denotes the Frobenius norm.
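A minimal PyTorch sketch of the update in Eq. (9) and the loss in Eq. (10) is given below for the gauge group SU(2). Here the algebra-valued field H is a random stand-in for the network output, and the batch size and step size Δτ are arbitrary; the point is only that the exponential map keeps the links exactly in the group.

```python
import torch

torch.manual_seed(0)

# Pauli matrices: a basis of traceless Hermitian 2x2 matrices.
SIGMA = torch.tensor([[[0, 1], [1, 0]],
                      [[0, -1j], [1j, 0]],
                      [[1, 0], [0, -1]]], dtype=torch.cfloat)

def alg(a):
    # Map real coefficients a to the traceless Hermitian matrix a . sigma.
    return torch.einsum('...k,kij->...ij', a.to(torch.cfloat), SIGMA)

U = torch.linalg.matrix_exp(1j * alg(torch.randn(8, 3)))  # a batch of SU(2) links
H = alg(torch.randn(8, 3))   # stand-in for the network output H_mu
dtau = 0.05

# One integration step, Eq. (9).
U_next = torch.linalg.matrix_exp(1j * dtau * H) @ U

# The update stays in SU(2): unitary with unit determinant, up to rounding.
print((U_next.conj().transpose(-1, -2) @ U_next - torch.eye(2)).abs().max().item())
print((torch.linalg.det(U_next) - 1).abs().max().item())

# Frobenius-norm loss against target links, Eq. (10).
U_tgt = torch.linalg.matrix_exp(1j * alg(torch.randn(8, 3)))
loss = ((U_tgt - U_next).abs() ** 2).sum(dim=(-2, -1)).mean()
print(loss.item())
```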
SU(2) Wilson flow toy model
We test the adaptation of the NODE approach to Wilson flow [21] using a toy model consisting of a single SU(2) link characterized by the action $S[U] = \mathrm{Re}\,\mathrm{Tr}(U^2)$. Starting with randomly distributed initial matrices $U^i_0 \in SU(2)$, we generate a dataset of flowed matrices $U^i_1$ by applying gradient descent on $S[U]$ using group derivatives, akin to Wilson flow in lattice gauge theory. The action exhibits two minima, ±1, toward which Wilson flow lets the links evolve depending on the initial conditions. If Tr U > 0, the link is flowed toward the north pole (+1); otherwise the dynamics is directed toward the south pole (−1). For links with Tr U = 0, the dynamics is stuck. Our goal is to use NODEs to reconstruct these dynamics via the flow Eq. (8).
In order to visualize the dataset, we use the following parametrization of SU(2):

$U = u_0 \mathbb{1} + i \sigma_a u_a, \qquad u_0 = \tfrac{1}{2} \mathrm{Tr}(U), \qquad u_i = \tfrac{1}{2i} \mathrm{Tr}(U \sigma_i), \qquad (11)$

where $\sigma_i$ are the Pauli matrices. We then normalize $u_0$, $u_1$ and $u_2$ by introducing $\tilde{u}_j = u_j / \sqrt{u_0^2 + u_1^2 + u_2^2}$ for $j \in \{0, 1, 2\}$, so that each point $(\tilde{u}_0, \tilde{u}_1, \tilde{u}_2)$ lies on a three-dimensional sphere, while the remaining parameter $u_3$ determines the color of a point, as shown in Fig. 2. As anticipated, the link variables, which are initially homogeneously distributed on the sphere, flow toward one of the two minima, which in Fig. 2 correspond to the north and south pole of the sphere.
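The projection used for Fig. 2 can be written in a few lines of NumPy; the sketch below implements Eq. (11) and the normalization of (u0, u1, u2), using a single rotation about sigma_3 as an example input. The helper name su2_components is ours.

```python
import numpy as np

SIGMA = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

def su2_components(U):
    # Eq. (11): u0 = Tr(U)/2, u_i = Tr(U sigma_i)/(2i); all real for U in SU(2).
    u0 = np.trace(U).real / 2
    u = np.array([(np.trace(U @ s) / 2j).real for s in SIGMA])
    return u0, u[0], u[1], u[2]

theta = 0.7
U = np.cos(theta) * np.eye(2) + 1j * np.sin(theta) * SIGMA[2]  # exp(i theta sigma_3)
u0, u1, u2, u3 = su2_components(U)

# Normalize (u0, u1, u2) onto the sphere; u3 would set the color of the point.
# (The norm vanishes only in the measure-zero case u0 = u1 = u2 = 0.)
norm = np.sqrt(u0**2 + u1**2 + u2**2)
print(np.array([u0, u1, u2]) / norm, "color:", u3)
```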
Training NODEs in principle requires backpropagation through the ODE solver. The problem with standard backpropagation is that it requires storing the whole evolution of the system, which leads to increased memory consumption. For more complicated systems, this can easily saturate memory. A solution to this problem lies in the adjoint sensitivity method [20], which avoids having to store the entire history of the Wilson flow by solving the evolution backwards. Our implementation of this method is still a work in progress, so for this simple system we rely on standard backpropagation.
For our model, the matrix H in Eq. (8) is constructed with the following steps: the complex entries of U are split into real and imaginary parts. They are then fed into a multi-layer perceptron with real weights. The output is recombined into a complex matrix C, which is generally neither Hermitian nor traceless. Therefore, we take its anti-hermitian traceless part,

$[C]_{\mathrm{ah}} = \frac{C - C^{\dagger}}{2i} - \mathbb{1}\, \frac{\mathrm{Tr}\left(C - C^{\dagger}\right)}{2i N_c},$

which projects the output onto the su(2) algebra. The application of the exponential map then yields a matrix in SU(2). This guarantees that the evolution of U takes place without leaving the group.
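This projection can be checked directly; the following sketch (our own helper names, arbitrary test input) verifies that [C]_ah is Hermitian and traceless, so that the subsequent exponential map produces a unitary matrix with unit determinant.

```python
import torch

def ah_traceless(C, Nc=2):
    # [C]_ah = (C - C^dag)/(2i) - 1/Nc * Tr[(C - C^dag)/(2i)] * identity.
    A = (C - C.conj().transpose(-1, -2)) / 2j
    tr = A.diagonal(dim1=-2, dim2=-1).sum(-1)
    return A - tr[..., None, None] * torch.eye(Nc, dtype=C.dtype) / Nc

C = torch.randn(4, 2, 2, dtype=torch.cfloat)  # e.g. the recombined MLP output
H = ah_traceless(C)

print((H - H.conj().transpose(-1, -2)).abs().max().item())      # Hermitian: ~0
print(H.diagonal(dim1=-2, dim2=-1).sum(-1).abs().max().item())  # traceless: ~0
print(torch.linalg.det(torch.linalg.matrix_exp(1j * H)))        # det = 1, i.e. SU(2)
```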
We choose the Frobenius norm averaged over the lattice as our loss function and train on a dataset of 50000 samples using a batch size of 100 and a learning rate of $10^{-3}$ for 100 epochs. The multi-layer perceptron we employed has four hidden layers with 16, 64, 32 and 16 nodes respectively. We use tanh(x) as an activation function after every layer except the last.
After training, we test on 4000 samples with the same final Wilson flow time τ = 1. The results are shown in Fig. 3. The left panel shows the ground-truth trajectories (blue) and the predicted trajectories (red) during the NODE flow. The right panel shows the MSE as a function of flow time. Since the loss of $5 \cdot 10^{-6}$ is very small, the two evolutions are visually indistinguishable. Since the loss function in Fig. 3 (b) seems to increase quadratically as a function of flow time, we investigate the loss at larger times τ > 1 outside the training interval. In Fig. 4 we test 4000 samples and extrapolate to flow times up to τ = 10. The deterioration of the performance is clear, since the loss jumps up to values that are three orders of magnitude larger than the highest loss found during testing in the interval τ ∈ [0, 1]. Investigating our data more closely, we found two types of mispredictions which can contribute to a large loss. For one specific sample in our test dataset, the predicted trajectory moves in the opposite direction to the actual one. There are also some trajectories at large times τ that tend to overshoot the ground truth values. In both cases, we found that these trajectories originate from points that lie within a thin neighborhood of the equator (Tr U ≈ 0) and are in general very difficult for the network to evolve correctly. Despite these flaws, the results are encouraging, considering that we employed a simple multi-layer perceptron which is not adapted to the symmetries of the problem. Therefore, a network structure incorporating additional symmetries ($U \rightarrow \Omega U \Omega^{\dagger}$ with $\Omega \in SU(2)$) could further improve the performance in this toy model.
Conclusions and outlook
In these proceedings, we reviewed the structure of L-CNNs and their successful application to regression tasks. These architectures are very flexible and their layers can be composed to modify gauge link configurations. We discussed how this can be achieved in the context of NODEs and tested it on an SU(2) Wilson flow toy model with a single link. Based on these experiments, we intend to extend the toy model to actual lattice configurations and apply L-CNNs to Wilson flow.
Figure 1: A possible L-CNN architecture. Figure from [17].

Figure 2: Visualization of 1000 samples from the dataset. Left: distribution of the initial conditions $U^i_0$. Right: distribution of the labels $\tilde{U}^i_1$, found by evolving the initial conditions according to the action $S[U] = \mathrm{Re}\,\mathrm{Tr}(U^2)$ up to $\tau_1 = 1$.

Figure 3: Test results. (a) Evolution of 30 samples projected on the three-dimensional sphere. The ground truth and the prediction lie on top of each other. (b) The corresponding Frobenius loss as a function of flow time.

Figure 4: Results of our extrapolation up to τ = 10 based on training up to τ = 1. (a) Extrapolated evolution of 30 samples. The ground truth and the prediction are very close. (b) Loss function as a function of flow time. At large times the loss increases by up to three orders of magnitude compared to the original interval τ ∈ [0, 1].
This work has been supported by the Austrian Science Fund FWF No. P32446, No. 34764 and Doctoral program No. W1252-N27. The Titan V GPU used for this research was donated by the NVIDIA Corporation.
[1] T. S. Cohen, M. Welling, "Group Equivariant Convolutional Networks," in Proceedings of The 33rd International Conference on Machine Learning (JMLR, 2016), Vol. 48, pp. 2990-2999, arXiv:1602.07576.
[2] T. S. Cohen, M. Weiler, B. Kicanaoglu, M. Welling, "Gauge Equivariant Convolutional Networks and the Icosahedral CNN," in Proceedings of the 36th International Conference on Machine Learning (JMLR, 2019), Vol. 97, pp. 1321-1330, arXiv:1902.04615.
[3] J. E. Gerken, J. Aronsson, O. Carlsson, H. Linander, F. Ohlsson, C. Petersson, D. Persson (2021), arXiv:2105.13926.
[4] K. Zhou, G. Endrődi, L. G. Pang, H. Stöcker, Phys. Rev. D 100, 011501 (2019), arXiv:1810.12879.
[5] D. L. Boyda, M. N. Chernodub, N. V. Gerasimeniuk, V. A. Goy, S. D. Liubimov, A. V. Molochkov, Phys. Rev. D 103, 014509 (2021), arXiv:2009.10971.
[6] S. Blücher, L. Kades, J. M. Pawlowski, N. Strodthoff, J. M. Urban, Phys. Rev. D 101, 094507 (2020), arXiv:2003.01504.
[7] D. Bachtis, G. Aarts, B. Lucini, Phys. Rev. E 102, 053306 (2020), arXiv:2007.00355.
[8] S. Bulusu, M. Favoni, A. Ipp, D. I. Müller, D. Schuh, Phys. Rev. D 104, 074504 (2021), arXiv:2103.14686.
[9] D. Bachtis, G. Aarts, B. Lucini, Phys. Rev. D 103, 074510 (2021), arXiv:2102.09449.
[10] D. Bachtis, G. Aarts, F. Di Renzo, B. Lucini, Phys. Rev. Lett. 128, 081603 (2022), arXiv:2107.00466.
[11] G. Kanwar, M. S. Albergo, D. Boyda, K. Cranmer, D. C. Hackett, S. Racanière, D. J. Rezende, P. E. Shanahan, Phys. Rev. Lett. 125, 121601 (2020), arXiv:2003.06413.
[12] D. Boyda, G. Kanwar, S. Racanière, D. J. Rezende, M. S. Albergo, K. Cranmer, D. C. Hackett, P. E. Shanahan, Phys. Rev. D 103, 074504 (2021), arXiv:2008.05456.
[13] A. Tomiya, Y. Nagai (2021), arXiv:2103.11965.
[14] D. Luo, G. Carleo, B. K. Clark, J. Stokes, Phys. Rev. Lett. 127, 276402 (2021), arXiv:2012.05232.
[15] R. Abbott et al., Phys. Rev. D 106, 074506 (2022), arXiv:2207.08945.
[16] M. S. Albergo, D. Boyda, D. C. Hackett, G. Kanwar, K. Cranmer, S. Racanière, D. J. Rezende, P. E. Shanahan (2021), arXiv:2101.08176.
[17] M. Favoni, A. Ipp, D. I. Müller, D. Schuh, Phys. Rev. Lett. 128, 032003 (2022), arXiv:2012.12901.
[18] P. de Haan, C. Rainone, M. C. N. Cheng, R. Bondesan (2021), arXiv:2110.02673.
[19] M. Gerdes, P. de Haan, C. Rainone, R. Bondesan, M. C. N. Cheng (2022), arXiv:2207.00283.
[20] R. T. Q. Chen, Y. Rubanova, J. Bettencourt, D. K. Duvenaud, "Neural Ordinary Differential Equations," in Advances in Neural Information Processing Systems, edited by S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, R. Garnett (Curran Associates, Inc., 2018), Vol. 31, arXiv:1806.07366.
[21] M. Lüscher, JHEP 08, 071 (2010), [Erratum: JHEP 03, 092 (2014)], arXiv:1006.4518.
[22] K. G. Wilson, Phys. Rev. D 10, 2445 (1974).
[23] K. He, X. Zhang, S. Ren, J. Sun, "Deep Residual Learning for Image Recognition," in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016), pp. 770-778.
| [] |
[
"All-2D Material Inkjet-Printed Capacitors: Towards Fully-Printed Integrated Circuits",
"All-2D Material Inkjet-Printed Capacitors: Towards Fully-Printed Integrated Circuits"
] | [
"Robyn Worsley \nSchool of Chemistry\nUniversity of Manchester\nM13 9PLManchesterUK\n",
"Lorenzo Pimpolari \nDipartimento di Ingegneria dell'Informazione\nUniversità di Pisa\n56122PisaItaly\n",
"Daryl Mcmanus \nSchool of Chemistry\nUniversity of Manchester\nM13 9PLManchesterUK\n",
"Ning Ge \nHP Labs\n1501 Page Mill Road94304Palo AltoCaliforniaUSA\n",
"Robert Ionescu \nHP Labs\n1501 Page Mill Road94304Palo AltoCaliforniaUSA\n",
"Jarrid A Wittkopf \nHP Labs\n1501 Page Mill Road94304Palo AltoCaliforniaUSA\n",
"Adriana Alieva \nSchool of Chemistry\nUniversity of Manchester\nM13 9PLManchesterUK\n",
"Giovanni Basso \nDipartimento di Ingegneria dell'Informazione\nUniversità di Pisa\n56122PisaItaly\n",
"Massimo Macucci \nDipartimento di Ingegneria dell'Informazione\nUniversità di Pisa\n56122PisaItaly\n",
"Giuseppe Iannaccone \nDipartimento di Ingegneria dell'Informazione\nUniversità di Pisa\n56122PisaItaly\n",
"Kostya S Novoselov \nSchool of Physics and Astronomy\nUniversity of Manchester\nM13 9PLManchesterUK\n",
"Helen Holder \nHP Labs\n1501 Page Mill Road94304Palo AltoCaliforniaUSA\n",
"Gianluca Fiori \nDipartimento di Ingegneria dell'Informazione\nUniversità di Pisa\n56122PisaItaly\n",
"Cinzia Casiraghi *email:[email protected] \nSchool of Chemistry\nUniversity of Manchester\nM13 9PLManchesterUK\n"
] | [
"School of Chemistry\nUniversity of Manchester\nM13 9PLManchesterUK",
"Dipartimento di Ingegneria dell'Informazione\nUniversità di Pisa\n56122PisaItaly",
"School of Chemistry\nUniversity of Manchester\nM13 9PLManchesterUK",
"HP Labs\n1501 Page Mill Road94304Palo AltoCaliforniaUSA",
"HP Labs\n1501 Page Mill Road94304Palo AltoCaliforniaUSA",
"HP Labs\n1501 Page Mill Road94304Palo AltoCaliforniaUSA",
"School of Chemistry\nUniversity of Manchester\nM13 9PLManchesterUK",
"Dipartimento di Ingegneria dell'Informazione\nUniversità di Pisa\n56122PisaItaly",
"Dipartimento di Ingegneria dell'Informazione\nUniversità di Pisa\n56122PisaItaly",
"Dipartimento di Ingegneria dell'Informazione\nUniversità di Pisa\n56122PisaItaly",
"School of Physics and Astronomy\nUniversity of Manchester\nM13 9PLManchesterUK",
"HP Labs\n1501 Page Mill Road94304Palo AltoCaliforniaUSA",
"Dipartimento di Ingegneria dell'Informazione\nUniversità di Pisa\n56122PisaItaly",
"School of Chemistry\nUniversity of Manchester\nM13 9PLManchesterUK"
] | [] | A well-defined insulating layer is of primary importance in the fabrication of passive (e.g. capacitors) and active (e.g. transistors) components in integrated circuits. One of the most widely known 2-Dimensional (2D) dielectric materials is hexagonal boron nitride (hBN).Solution-based techniques are cost-effective and allow simple methods to be used for device fabrication. In particular, inkjet printing is a low-cost, non-contact approach, which also allows for device design flexibility, produces no material wastage and offers compatibility with almost any surface of interest, including flexible substrates.In this work we use water-based and biocompatible graphene and hBN inks to fabricate all-2D material and inkjet-printed capacitors. We demonstrate an areal capacitance of 2.0 ± 0.3 nF cm -2 for a dielectric thickness of ~3 µm and negligible leakage currents, averaged across more than 100 devices. This gives rise to a derived dielectric constant of 6.1 ± 1.7. The inkjet printed hBN dielectric has a breakdown field of 1.9 ± 0.3 MV cm -1 . Fully printed capacitors with sub-µm hBN layer thicknesses have also been demonstrated. The capacitors are then exploited in two fully printed demonstrators: a resistor-capacitor (RC) low-pass filter and a graphene-based field effect transistor. | 10.1021/acsnano.8b06464 | [
"https://arxiv.org/pdf/1812.05712v1.pdf"
] | 53,730,782 | 1812.05712 | 4f982b503bf680c8a42dd41c0d2ad939e31781ac |
All-2D Material Inkjet-Printed Capacitors: Towards Fully-Printed Integrated Circuits
Robyn Worsley
School of Chemistry
University of Manchester
M13 9PLManchesterUK
Lorenzo Pimpolari
Dipartimento di Ingegneria dell'Informazione
Università di Pisa
56122PisaItaly
Daryl Mcmanus
School of Chemistry
University of Manchester
M13 9PLManchesterUK
Ning Ge
HP Labs
1501 Page Mill Road94304Palo AltoCaliforniaUSA
Robert Ionescu
HP Labs
1501 Page Mill Road94304Palo AltoCaliforniaUSA
Jarrid A Wittkopf
HP Labs
1501 Page Mill Road94304Palo AltoCaliforniaUSA
Adriana Alieva
School of Chemistry
University of Manchester
M13 9PLManchesterUK
Giovanni Basso
Dipartimento di Ingegneria dell'Informazione
Università di Pisa
56122PisaItaly
Massimo Macucci
Dipartimento di Ingegneria dell'Informazione
Università di Pisa
56122PisaItaly
Giuseppe Iannaccone
Dipartimento di Ingegneria dell'Informazione
Università di Pisa
56122PisaItaly
Kostya S Novoselov
School of Physics and Astronomy
University of Manchester
M13 9PLManchesterUK
Helen Holder
HP Labs
1501 Page Mill Road94304Palo AltoCaliforniaUSA
Gianluca Fiori
Dipartimento di Ingegneria dell'Informazione
Università di Pisa
56122PisaItaly
Cinzia Casiraghi *email:[email protected]
School of Chemistry
University of Manchester
M13 9PLManchesterUK
All-2D Material Inkjet-Printed Capacitors: Towards Fully-Printed Integrated Circuits
12D-materialsprinted electronicsinkjetcapacitorsintegrated circuits
A well-defined insulating layer is of primary importance in the fabrication of passive (e.g. capacitors) and active (e.g. transistors) components in integrated circuits. One of the most widely known 2-Dimensional (2D) dielectric materials is hexagonal boron nitride (hBN).Solution-based techniques are cost-effective and allow simple methods to be used for device fabrication. In particular, inkjet printing is a low-cost, non-contact approach, which also allows for device design flexibility, produces no material wastage and offers compatibility with almost any surface of interest, including flexible substrates.In this work we use water-based and biocompatible graphene and hBN inks to fabricate all-2D material and inkjet-printed capacitors. We demonstrate an areal capacitance of 2.0 ± 0.3 nF cm -2 for a dielectric thickness of ~3 µm and negligible leakage currents, averaged across more than 100 devices. This gives rise to a derived dielectric constant of 6.1 ± 1.7. The inkjet printed hBN dielectric has a breakdown field of 1.9 ± 0.3 MV cm -1 . Fully printed capacitors with sub-µm hBN layer thicknesses have also been demonstrated. The capacitors are then exploited in two fully printed demonstrators: a resistor-capacitor (RC) low-pass filter and a graphene-based field effect transistor.
An integrated circuit (IC) is an assembly of different electronic components, fabricated as a single unit, in which non-linear devices, such as transistors and diodes, and linear devices, such as capacitors and resistors, and their interconnections are all built up on the same substrate, typically silicon.
Two-dimensional (2D) materials 1 are very attractive as building blocks for next-generation nanoelectronic devices as they offer straightforward integration with existing fabrication processes and they can be transferred on to any substrate, including plastic, allowing research in flexible electronics, a field which started with organic materials, [2][3][4][5][6][7][8] to be extended. In addition, the ability to combine different 2D materials in heterostructures 9 allows the possible integration of a wide range of all-2D material devices in ICs. At present, a large family of 2D crystals has been isolated and new 2D materials are constantly being investigated. As such, a wide variety of unique and interesting electronic properties are available for exploitation.
Incorporating these materials into many-layered heterostructures allows an almost infinite combination of structures exploiting their unique properties, thereby enabling a large variety of electronic devices to be produced, connected and directly integrated. In this way, only one family of materials will be required for complete IC fabrication.
In order to produce 2D material based ICs, successful fabrication of a wide variety of working devices must be demonstrated using a simple technique. Inkjet printing provides an attractive method to tackle this problem: it allows maximum design flexibility, and it does not require expensive masks to be made or time-consuming resist process steps, as in traditional electronic fabrication. This allows for quick and easy integration of the various components into a fully-printed IC. Furthermore, inkjet printing is an additive and non-contact technique, which produces no material wastage and is compatible with almost any surface of interest, including flexible substrates. [10][11][12] Finally, in the case of 2D materials, inkjet printing can also be used to make fully printed heterostructures: all inkjet printed transistors 13,14, photodetectors 15 and memories 15 have been demonstrated. However, these devices are usually integrated with traditional components. Therefore, more work needs to be done to demonstrate an all-2D material fully-printed IC. In this framework, it is of fundamental importance to be able to fully inkjet print all components. In particular, passive components, such as capacitors, occur in large numbers within electrical systems, forming the basis of many sensors and circuits, and therefore must demonstrate high reliability and good performance. 16 Establishing a reliable technology for printing of the dielectric layer is also a stepping stone for obtaining high-performing printed Field Effect Transistors (FETs). The simplest way to fabricate an all-2D material capacitor is to use hexagonal boron nitride (hBN) as a dielectric placed between two graphene (Gr) electrodes, by forming the heterostructure Gr B /hBN/Gr T, where Gr B and Gr T are the bottom and top electrodes, respectively. Previous attempts to fabricate capacitor devices using hBN as the dielectric have involved non-printable techniques, such as chemical vapour deposition (CVD), often using evaporated gold as electrodes, as reported in Refs. 17 and 18. As a result of high material quality and thicknesses of a few tens of nm, these devices are capable of achieving large capacitance; however, the methods are expensive and time-consuming. In addition, problems have been reported regarding residual polymer after film transfer 19 and leakage through the dielectric. 18 Solution-based techniques have also been investigated for the fabrication of hBN-based capacitors: from hBN membranes made by vacuum filtration 20 to screen-printed hBN. 21 Both methods offer a low-cost approach for producing dielectric films, yet the processes offer poor design flexibility, high waste amounts, and generate thick films, thereby lowering the capacitance. Capacitor devices with inkjet-printed graphene electrodes and a spray-coated hBN dielectric were demonstrated in Ref. 22. Whilst spray-coating produces thinner films compared with vacuum filtered membranes and screen printing, materials wastage remains high and device design is limited. Layer-by-layer (LBL) assembly of hBN films using complementary polymers allows for precise control of the film thickness and leads to ultrathin dielectrics, which demonstrate high capacitance. 23 However, the annealing temperatures required (600 °C) are too high for many flexible substrates, which often consist of polymeric films that would melt at such a temperature. 23 Notably, Ref. 13 showed an attempt to produce fully inkjet-printed hBN capacitor devices using Ag for the electrodes, obtaining a capacitance of 8.7 nF cm -2 for an hBN thickness of 1.2 µm. 13

In this work, we use water-based and biocompatible Gr and hBN inks to fabricate all-2D material and inkjet-printed capacitors. All inks have been produced using the same process and with the same ink solvent formulation already reported in our previous study, 15 where we also demonstrated the ink's biocompatibility. In Ref. 15 the inks were used to fully print heterostructure-based devices, in particular photodetectors and read-only memories. In this work, we aim at demonstrating a different fully printed heterostructure-based device, exploiting the properties of the hBN ink, i.e. a capacitor. Note that Ref. 15 did not show any characterization of the hBN inks and no device containing hBN. Here we provide a full characterisation of the dielectric properties of hBN: we measured an average breakdown field of 1.93 ± 0.3 MV cm -1 for printed hBN films of ~3 µm thickness. This value is comparable to the breakdown voltage of many dielectric polymers. More than 100 devices have been fabricated and tested. The fabrication yield depends on the overlap area and thickness of the hBN, but on average we found a yield of 62%. The Gr B /hBN/Gr T capacitor devices demonstrate a capacitance of the order of nF cm -2 for a thickness of ~3 µm and negligible leakage currents. Fully printed capacitors with sub-µm hBN layer thicknesses have also been demonstrated. We used the capacitors to demonstrate a fully printed low-pass filter, made of 2D materials only, and a graphene-based field effect transistor.
As one of the fundamental circuitry components, capacitors can be implemented within electronic systems across a number of fields, including wearable or epidermal electronic applications. The biocompatibility of the constituent 2D inks 15 is therefore an attractive feature for devices which could be used inside the body for in situ monitoring 24,25 or for wearable electronics, such as tattoo-based sensors. [26][27][28]
RESULTS AND DISCUSSION
Water-based and biocompatible graphene and boron nitride inks were prepared using liquid-phase exfoliation according to the protocol developed in our previous work. 15 The solvent formulation was specifically engineered to have the correct rheological properties for inkjet printing and to minimise remixing at the interface, allowing fabrication of heterostructures using only inkjet printing. The vertically-stacked Gr B /hBN/Gr T capacitors were fabricated by first printing the graphene ink (60 printed passes, at a concentration of ~2 mg ml -1 ) on to a pre-cleaned glass substrate (Fisher Scientific Clear Glass Slides, 1.0-1.2 mm thickness) to produce a ~0.15 mm by ~3 mm graphene rectangle, which acts as the bottom electrode (Gr B ). A glass substrate was chosen to avoid any parasitic capacitive effects that might be seen with SiO 2 . An hBN rectangle, with a width of ~0.5 mm and length in the range of 1-2 mm, was printed across the Gr B using 80 printing passes of hBN ink at a concentration of ~2 mg ml -1 . Finally, the top graphene electrode (Gr T ), with length ~3 mm, was printed in the centre of the hBN rectangle, perpendicular to the Gr B . In this work, the width of the Gr T (Δx T ) was varied incrementally between ~0.15 mm and ~1.28 mm, in order to investigate how the capacitance is changing with increasing overlapping area (A) between the graphene electrodes. The thicknesses of the deposited graphene and hBN films were measured using a profilometer (Supporting Information, Section 'Film Thickness Characterisation'), showing that for 60 printed passes of graphene, the electrode thickness is ~300 nm and for 80 printed passes of hBN, the dielectric film thickness is ~3 µm, with a roughness of ~700 nm.
Figure 1a shows a schematic diagram of a Gr B /hBN/Gr T capacitor device. Figure 1b shows a magnified image of the active overlap area between the two graphene electrodes.
One can observe the uniformity in droplet deposition and the good separation between the Gr B, hBN and Gr T regions of the heterostructure, i.e. there is no material intermixing at the interfaces. Re-mixing at the interface is prevented through inclusion of a shear-thinning biocompatible binder within the ink solvent formulation. 15 The shear-thinning nature of the binder ensures that, once deposited onto the substrate, the viscosity of the ink substantially increases, such that mixing between different 2D material inks is minimised within the heterostructure device. 15 An optical picture of 14 devices printed on glass is shown in Figure 1c. Figure 1d, 1e and 1f show optical images of representative devices with increasing A, ranging from 0.0361 to 0.192 mm 2.
|Z| = √1 + 2 2 2 = √( + + 2 2 2 ) 2 + ( 2 ) 2 1 + ( ) 2(1)
|Z| can also be expressed considering the equivalent circuit as in the inset of Figure 2a, where ω is the angular frequency, R ESR is the series resistance, C is the capacitance and R LEAK is a figure of merit of the leakage through the hBN dielectric. Figure 2a shows |Z| for a capacitor with C = 6 pF (measured at 1 kHz) up to 1 MHz (red circles) fitted via the R ESR , R LEAK and C parameters. As can be seen, R LEAK has a value >1 GΩ (negligible losses) and R ESR has a value of 90 kΩ, with ideal capacitor behaviour up to 10 5 Hz. An R ESR value of 90 kΩ corresponds well to the measured resistance values normally achieved for Gr electrodes deposited via inkjet printing using our water-based inks. 15 This value can be easily adjusted by changing the thickness and the width of the graphene electrodes. Figure 2b and 2c show the measurements performed with an HP4284A LCR meter i.e. C p and R p as a function of frequency, in the frequency range between 1 kHz and 1 MHz, for devices with increasing A (i.e. obtained by increasing Δx T from 500 µm to 800 µm, with constant dielectric thickness of ~3 µm (Δx B = 170 µm)). One can observe the decrease in both capacitance (C p ) and parallel resistance (R p ) with increasing frequency, in agreement with
Refs. 18,21,22 . Indeed, as expected, the parameters of the equivalent parallel circuit representation (R p and C p ), depend on the frequency. 30 Figure 2b also shows that the C p for a given frequency increases with increasing overlap area, as expected in a capacitor realized with standard fabrication techniques. Similar results were reported in Ref. 22 : the larger the area, the larger the capacitance. Figure 2d shows the C p plotted vs. frequency for two and three capacitor devices for a fixed A of 0.036 mm 2 , connected in parallel. The obtained values at a given frequency scale proportionally to the number of devices connected. The ability to connect capacitors in parallel is crucial for achieving large enough capacitance values in ICs. Figure S9 shows the capacitance values obtained for a set of devices with constant overlap area (A = 0.075 mm 2 ) and variable thickness. As in traditional capacitors, the capacitance decreases for increasing dielectric thickness. In order to obtain a large dataset of capacitance values, the active capacitor overlap area was varied by printing a different width of top graphene electrode for each set of devices, keeping all other parameters constant. Figure 3 shows the average capacitance value (measured at 1 kHz) plotted as a function of A/t, for each set of devices printed, on log-log scale. The data fitting (black line, Figure 3) highlights a linear increase in capacitance with increasing A/t, as expected using the parallel plate capacitor model. 30,31 Following this model, the capacitance is given by: 31
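The behaviour in Fig. 2a can be reproduced numerically from the equivalent circuit; the short NumPy sketch below evaluates the 'leaky capacitor with ESR' impedance using the fitted values quoted above (C = 6 pF, R_ESR = 90 kΩ, R_LEAK = 1 GΩ). This is a plausibility check of the circuit model, not a fit to the measured data.

```python
import numpy as np

def Z(f, C, R_esr, R_leak):
    # Series ESR plus a parallel R_LEAK || C branch, as in the inset of Fig. 2a;
    # the modulus of this expression is the right-hand side of Eq. (1).
    w = 2 * np.pi * f
    return R_esr + R_leak / (1 + 1j * w * R_leak * C)

f = np.logspace(3, 6, 7)  # 1 kHz to 1 MHz
for fi, zi in zip(f, Z(f, C=6e-12, R_esr=90e3, R_leak=1e9)):
    print(f"{fi:9.0f} Hz   |Z| = {abs(zi):.3e} Ohm")
# Below ~1e5 Hz, |Z| ~ 1/(w C) (ideal capacitor); at higher frequency R_ESR dominates.
```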
C = ε 0 ε r A t(2)
where ε 0 is the electric constant of vacuum and ε r is the relative dielectric constant. Thus, the y-axes offset of the C vs. A/t data on the log-log scale ( Figure 3) gives ε 0 ε r . From the data in Figure 3, we calculated a dielectric constant of 6.1, which is in agreement with that reported for CVD hBN and inkjet printed Ag/hBN/Ag capacitors. 13,17 However, a strong discrepancy is observed with the points from Ref. 22 others, including our work, reporting a dielectric constant between 5 and 12 13,17 and finally, one group reporting a dielectric constant of >200. 18 The data are summarized in Table S1.
The reason why our dielectric constant is higher than that of bulk hBN could be attributed to Table S2.
The dielectric strength is comparable to that of hBN measured previously in our group through fabrication of a vacuum filtered membrane (2.5 MV cm -1 , in vacuum) 38 and in agreement with the range reported in literature for hBN (1.5-2.5 MV cm -1 ). 13,32 We can also observe that the dielectric strength of the inkjet printed hBN films is comparable to that of well-known dielectric materials (Table S3). 39 In particular, dielectric polymers have been widely used in electronic devices as a result of their amenability to solution-processing and low cost patterning techniques. A wide variety of chemical structures are available and careful control of the polymerisation reaction conditions enables the material characteristics and dielectric properties to be tuned. 40 As the dielectric strength and dielectric constant of our hBN films are comparable with that of dielectric polymers, hBN shows strong potential for use in organic electronics. We remark that hBN requires relatively high thickness compared to polymers; on the other side, many dielectric polymers (in particular, high-k polymers) are not suitable for inkjet printing due to their high viscosity and solubility in a limited number of solvents, which often are not suitable for inkjet printing (e.g. anisole, n-butyl acetate).
In order to study the yield of Gr B /hBN/Gr T capacitor devices fabricated through inkjet printing, multiple devices (between 2 and 14) were printed for each selected value of overlap area. Devices showing a linear I-V characteristic, i.e. behaving as a short circuit, were classified as non-functional. Short circuits are a result of occasional inconsistencies in the deposited film, which occur when the inkjet printer nozzle becomes blocked. Additionally, in some instances, the leakage through the hBN dielectric was considered to be too high for the device to be truly classified as a functional capacitor. To be considered functional, we selected only devices satisfying the following condition: R LEAK > 10/(ωC). This condition ensures that the capacitive part of the impedance is prevalent over the resistive part. The corresponding data regarding the number of devices printed and the number of those devices that were functional across the various A/t ratios can be found in the Supporting Information (Table S4). The yield depends on the overlap area, but overall an average yield of 62% has been found.

Finally, we exploit the use of the printed capacitors within devices. We have fully printed on a glass substrate a simple RC low-pass filter, composed of a graphene resistor, R = 17 MΩ, and a capacitor, C = 7.6 pF, as shown in the inset of Figure 4a. In particular, the resistor has been fabricated by means of graphene lines (5 printed passes), while the capacitor is composed of graphene top and bottom electrodes (60 printed passes), embedding 70 printed passes of hBN dielectric. Figure 4a shows that the measured frequency response of the filter is in very good agreement with theoretical results, with a cut-off frequency of approximately 1.2 kHz (i.e., the theoretical cut-off frequency is f = (2πRC)⁻¹ = 1.232 kHz).
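The quoted cut-off follows directly from the printed component values; the snippet below evaluates f_c = 1/(2πRC) and the first-order low-pass magnitude response, assuming the ideal single-pole model used for the theoretical curve in Figure 4a.

```python
import numpy as np

R, C = 17e6, 7.6e-12   # printed graphene resistor and Gr/hBN/Gr capacitor
f_c = 1 / (2 * np.pi * R * C)
print(f"cut-off frequency: {f_c:.0f} Hz")   # ~1232 Hz, i.e. ~1.2 kHz

# Ideal first-order low-pass: |H(f)| = 1 / sqrt(1 + (f/f_c)^2).
for f in np.logspace(1, 5, 5):
    H = 1 / np.sqrt(1 + (f / f_c) ** 2)
    print(f"{f:8.0f} Hz   |H| = {H:.3f} ({20 * np.log10(H):6.1f} dB)")
```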
In order to demonstrate that the developed technology can be applied also to Field Effect Transistors and on flexible substrates, we have fabricated a Graphene-based Field Effect Transistor (GFET) printed on paper. In particular, paper is selected because it is one of the cheapest flexible substrates, which is also recyclable and foldable, and allows achievement of the best electrical performance with our inks. 15 The inset of Figure 4b shows the GFET layout, where source, drain and gate electrodes are printed with Ag ink, while the channel and the dielectric have been printed by means of graphene and hBN inks, respectively. The sketch of the longitudinal cross section is shown in the inset of Figure 4c. The length and the width of the channel are 70 µm and 500 µm, respectively. 25 printed passes of hBN were deposited and the substrate used was paper (see Materials and Methods). The transfer and the output characteristics are shown in Figure 4b and Figure 4c. As can be seen, the gate is able to modulate the current in the channel, while keeping the gate leakage always lower than 1 nA, in the whole range of applied gate voltages.

CONCLUSIONS

Herein, we have shown all-2D material inkjet printed Gr B /hBN/Gr T capacitors as passive components for ICs. The achievable capacitance can be easily tuned through adjustment of the device active area and by connecting multiple devices in parallel or series. For an hBN dielectric thickness of ~3 µm, an average areal capacitance of 2.0 ± 0.3 nF cm -2 was measured. The inkjet printed hBN has a dielectric constant of 6.1 ± 1.7 and breakdown strength of 1.9 ± 0.3 MV cm -1. The capacitor component can be easily connected to others, e.g. other printed capacitors and graphene resistors, to form simple all-2D material circuits. More complex circuits can be made by increasing the library of printed components, such as diodes and high-mobility transistors.

MATERIALS AND METHODS
Ink Preparation:
Bulk graphite (purchased from Graphexel or Sigma-Aldrich, 99.5% grade) and bulk boron nitride (purchased from Sigma-Aldrich, >1 µm, 98% grade) powders were used to prepare the inks. The bulk powders were dispersed in de-ionised water (resistivity 18.2 MΩ cm) at a concentration of 3 mg mL -1 and 1-pyrenesulphonic acid sodium salt (PS1), purchased from Sigma-Aldrich, purity ≥ 97%, was added at a concentration of 1 mg mL -1. The graphite and boron nitride dispersions were sonicated for 72 h and 120 h respectively using a 300 W Hilsonic HS 1900/Hilsonic FMG 600 bath sonicator at 20 °C.
The resultant dispersions were centrifuged at 3500 rpm (g factor = 903) for 20 minutes at 20 °C using a Sigma 1-14K refrigerated centrifuge in order to separate out and discard the residual bulk, non-exfoliated flakes. The remaining supernatant, now containing the correct flake size and monolayer percentage, was centrifuged twice to remove excess PS1 from the dispersion. After washing, the precipitate was re-dispersed in the printing solvent, made as described in Ref. 15 .
The concentration of the resultant inks was assessed using a Varian Cary 5000 UV-Vis spectrometer and the Beer-Lambert law, with extinction coefficients of 2460 L g -1 m -1 (at 660 nm) and 1000 L g -1 m -1 (at 550 nm) for graphene 41 and hBN 42, respectively. The inks used for printing were diluted to a concentration of 2 mg mL -1.
Printing: A Dimatix DMP-2800 inkjet printer (purchased from Fujifilm Dimatix) was used to print the Gr B /hBN/Gr T capacitor devices onto glass microscope slides (Fisher Scientific Clear Glass Slides, 1.0-1.2 mm thickness) using a 16-nozzle cartridge with 23 µm nozzle diameter and typical droplet volume of 10 pL. The printer platen was heated to 60 °C and a drop spacing of 40 µm was utilised, as this gives the smallest sheet resistance for the printed film. 15 For both the top and bottom graphene electrodes, 60 printing passes were deposited and for the hBN dielectric layer 80 printing passes were deposited. A step-by-step annealing procedure was employed, in which the devices were annealed under vacuum for 2 hours at a temperature of 150 °C following the deposition of each layer in the Gr B /hBN/Gr T heterostructure stack.

The RC low-pass filter, also printed onto a glass slide substrate, is a combination of a Gr B /hBN/Gr T capacitor device with a printed graphene resistor. For the capacitor, the Gr B, hBN and Gr T films were deposited using 60, 70 and 60 printed passes of Gr, hBN and Gr ink, respectively. For the resistor, a serpentine layout was produced using 5 printed passes of Gr ink.

A Gr-based field effect transistor was also printed onto a paper substrate (PEL P60, from Printed Electronics Limited) using silver ink (1 printed pass, Sigma-Aldrich) for the source, drain and gate electrodes. Graphene ink was used to deposit the channel (25 printed passes) and hBN ink (25 printed passes) to fabricate the dielectric film. The channel has a length of 70 µm, a width of 500 µm and a resistance of 10 kΩ.

Characterisation: Line scans were taken to measure the thickness of the printed features using a Bruker Dektak XT Stylus Profiler (stylus radius of 12.5 µm, stylus force of 3 mg, scan speed of 100 µm s -1, scan resolution of 0.33 µm). Gwyddion SPM analysis software was used to generate RMS roughness and effective surface area values from the 2D profilometry map data. In order to further study the surface topology of the printed films, AFM images were taken using a Bruker MultiMode 8 in PeakForce QNM mode with ScanAsyst-Air probes. Cross-sectional SEM images were taken using a Zeiss Sigma HV instrument.

Capacitance Measurements: I-V characteristics for the Gr B /hBN/Gr T capacitor devices were acquired using a Keithley 4200 Semiconductor Characterization System parameter analyser. Capacitance data were collected in vacuum in the frequency range 1 kHz to 1 MHz using an HP 4284A Precision LCR meter with short-circuit and open-circuit correction (bias voltage, V b = 0 V and measurement voltage, V m = 1 V). Fitting was performed using an RC parallel circuit model. 43 The capacitor electrode has been connected to the measurement system through the tip of the probe station, touching a silver pad previously defined.
Figure 1. a) Schematic diagram of the Gr B /hBN/Gr T inkjet printed capacitor device, defining the width of the bottom Gr electrode (Δx B ), the width of the top Gr electrode (Δx T ) and the electrode overlap area (A). b) Magnified image of the overlap region between the two graphene electrodes, clearly showing the separation between the layers of the heterostructure. c) Photograph of 14 capacitor devices printed onto glass. d, e, f) Optical images of representative small, medium and large area capacitor devices.

Figure 2. a) Impedance modulus values for a representative Gr B /hBN/Gr T capacitor device plotted as a function of frequency. The fitting values shown correspond to a 'leaky capacitor with equivalent series resistance (ESR)' model, inset. b) Representative measured capacitance values plotted as a function of frequency for inkjet printed Gr B /hBN/Gr T capacitors with bottom graphene electrode width Δx B = 170 µm, hBN thickness t = 3.1 µm, and the width of the top graphene electrode, Δx T , varied between 500 µm and 800 µm. c) Representative resistance values plotted as a function of frequency for inkjet printed Gr B /hBN/Gr T capacitors where Δx T is varied between 500 µm and 800 µm. d) Measured capacitance values plotted as a function of frequency for inkjet printed Gr B /hBN/Gr T capacitors connected in parallel.
Figure 3. Capacitance plotted as a function of the area to thickness (A/t) ratio for the all-inkjet-printed Gr B /hBN/Gr T capacitors in this work (open circles: single capacitances; black circles: capacitances in parallel), Au/CVD-hBN/Au devices with values taken at 2 kHz (blue circles) 17, Gr/hBN/Gr devices in which the hBN has been spray-coated (red circles) 22 and all-inkjet-printed Ag/BN/Ag devices (black circle) 13. The black line is a linear fit to our experimental data. The red line is a linear fit to the experimental data from Ref. 22.
Figure 4. Demonstrator devices. a) Modulus of the frequency response of a first-order low-pass RC filter. Both experimental (red) and theoretical (green) results are shown. A photograph of the printed circuit is shown in the inset. b, c) Transfer and output characteristics of the fabricated GFET. The layout and the longitudinal cross-section of the GFET are shown in the insets.
Dielectric constant measurements: The hBN ink was assessed for use as a dielectric material through measurement of the dielectric constant. To measure the dielectric constant, a membrane was produced through vacuum filtration using the printable hBN ink. A total of 20 mg of hBN material was deposited, giving a membrane thickness of 170 µm. Gold contacts were evaporated onto either side of the membrane and a capacitance measurement was made, allowing the dielectric constant to be calculated using Equation 2 (Main Text). The breakdown voltage and breakdown field were measured for 7 capacitor devices (electrode overlap area of 0.036 mm2, capacitance ~2 pF) using the Keithley 4200 Semiconductor Characterization System parameter analyser.

ASSOCIATED CONTENT
Supporting Information Available: Hexagonal boron nitride ink characterisation; film thickness and topography characterisation; hBN dielectric constant, breakdown voltage and dielectric strength measurements; statistics and effect of hBN thickness. This material is available free of charge via the Internet at http://pubs.acs.org.

AUTHOR INFORMATION
Corresponding Author E-mail: [email protected]
Conflict of Interest: The authors declare no competing financial interest.

ACKNOWLEDGMENTS
This work is partially supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreements No 648417 and No 770047, and by the project HETERO2D and the Graphene Core 2 under grant No 785219. C. Casiraghi and K. Novoselov acknowledge the Grand Challenge EPSRC grant EP/N010345/1. D. McManus acknowledges funding from the EPSRC in the framework of the CDT Graphene NOWNANO. R. Worsley acknowledges M. Turner for useful discussions.
Novoselov, K. S.; Jiang, D.; Schedin, F.; Booth, T. J.; Khotkevich, V. V.; Morozov, S. V.; Geim, A. K. Two-Dimensional Atomic Crystals. Proc. Natl. Acad. Sci. U. S. A. 2005, 102, 10451-10453.
Shirakawa, H.; Louis, E. J.; MacDiarmid, A. G.; Chiang, C. K.; Heeger, A. J. Synthesis of Electrically Conducting Organic Polymers: Halogen Derivatives of Polyacetylene, (CH)x. J. Chem. Soc. Chem. Commun. 1977, 0, 578-580.
Salaneck, W. R.; Lundström, I.; Huang, W. S.; MacDiarmid, A. G. A Two-Dimensional-Surface "State Diagram" for Polyaniline. Synth. Met. 1986, 13, 291-297.
Tang, C. W.; VanSlyke, S. A. Organic Electroluminescent Diodes. Appl. Phys. Lett. 1987, 51, 913-915.
Garnier, F.; Horowitz, G.; Peng, X.; Fichou, D. An All-Organic "Soft" Thin Film Transistor with Very High Carrier Mobility. Adv. Mater. 1990, 2, 639-643.
Burroughes, J. H.; Bradley, D. D. C.; Brown, A. R.; Marks, R. N.; Mackay, K.; Friend, R. H.; Burns, P. L.; Holmes, A. B. Light-Emitting Diodes Based on Conjugated Polymers. Nature 1990, 347, 539-541.
Tsumura, A.; Koezuka, H.; Ando, T. Macromolecular Electronic Device: Field-Effect Transistor with a Polythiophene Film. Appl. Phys. Lett. 1986, 49, 1210-1212.
Stutzmann, N.; Friend, R. H.; Sirringhaus, H. Self-Aligned, Vertical-Channel, Polymer Field-Effect Transistors. Science 2003, 299, 1881-1884.
Geim, A. K.; Grigorieva, I. V. Van der Waals Heterostructures. Nature 2013, 499, 419-425.
Hebner, T. R.; Wu, C. C.; Marcy, D.; Lu, M. H.; Sturm, J. C. Ink-Jet Printing of Doped Polymers for Organic Light Emitting Devices. Appl. Phys. Lett. 1998, 72, 519-521.
Sirringhaus, H.; Kawase, T.; Friend, R. H.; Shimoda, T.; Inbasekaran, M.; Wu, W.; Woo, E. P. High-Resolution Inkjet Printing of All-Polymer Transistor Circuits. Science 2000, 290, 2123-2126.
Hutchings, I. M.; Martin, G. Inkjet Technology for Digital Fabrication.
Carey, T.; Cacovich, S.; Divitini, G.; Ren, J.; Mansouri, A.; Kim, J. M.; Wang, C.; Ducati, C.; Sordan, R.; Torrisi, F. Fully Inkjet-Printed Two-Dimensional Material Field-Effect Heterojunctions for Wearable and Textile Electronics. Nat. Commun. 2017, 8, 1-11.
Kelly, A. G.; Hallam, T.; Backes, C.; Harvey, A.; Esmaeily, A. S.; Godwin, I.; Coelho, J.; Nicolosi, V.; Lauth, J.; Kulkarni, A.; Kinge, S.; Siebbeles, L. D. A.; Duesberg, G. S.; Coleman, J. N. All-Printed Thin-Film Transistors from Networks of Liquid-Exfoliated Nanosheets. Science 2017, 356, 69-73.
McManus, D.; Vranic, S.; Withers, F.; Sanchez-Romaguera, V.; Macucci, M.; Yang, H.; Sorrentino, R.; Parvez, K.; Son, S.-K.; Iannaccone, G.; Kostarelos, K.; Fiori, G.; Casiraghi, C. Water-Based and Biocompatible 2D Crystal Inks for All-Inkjet-Printed Heterostructures. Nat. Nanotechnol. 2017, 12, 343-350.
Kang, B. J.; Lee, C. K.; Oh, J. H. All-Inkjet-Printed Electrical Components and Circuit Fabrication on a Plastic Substrate. Microelectron. Eng. 2012, 97, 251-254.
Shi, G.; Hanlumyuang, Y.; Liu, Z.; Gong, Y.; Gao, W.; Li, B.; Kono, J.; Lou, J.; Vajtai, R.; Sharma, P.; Ajayan, P. M. Boron Nitride-Graphene Nanocapacitor and the Origins of Anomalous Size-Dependent Increase of Capacitance. Nano Lett. 2014, 14, 1739-1744.
Guo, N.; Wei, J.; Jia, Y.; Sun, H.; Wang, Y.; Zhao, K.; Shi, X. Fabrication of Large Area Hexagonal Boron Nitride Thin Films for Bendable Capacitors. Nano Res. 2013, 6, 602-610.
Stehle, Y. Y.; Voylov, D.; Vlassiouk, I. V.; Lassiter, M. G.; Park, J.; Sharma, J. K.; Sokolov, A. P.; Polizos, G. Effect of Polymer Residues on the Electrical Properties of Large-Area Graphene-Hexagonal Boron Nitride Planar Heterostructures. Nanotechnology 2017, 28, 1-7.
Debbarma, R.; Behura, S.; Nguyen, P.; Sreeprasad, T. S.; Berry, V. Electrical Transport and Network Percolation in Graphene and Boron Nitride Mixed-Platelet Structures. ACS Appl. Mater. Interfaces 2016, 8, 8721-8727.
Joseph, A. M.; Nagendra, B.; Gowd, E. B.; Surendran, K. P. Screen-Printable Electronic Ink of Ultrathin Boron Nitride Nanosheets. ACS Omega 2016, 1, 1220-1228.
Kelly, A. G.; Finn, D.; Harvey, A.; Hallam, T.; Coleman, J. N. All-Printed Capacitors from Graphene-BN-Graphene Nanosheet Heterostructures. Appl. Phys. Lett. 2016, 109, 1-5.
Zhu, J.; Kang, J.; Kang, J.; Jariwala, D.; Wood, J. D.; Seo, J. W. T.; Chen, K. S.; Marks, T. J.; Hersam, M. C. Solution-Processed Dielectrics Based on Thickness-Sorted Two-Dimensional Hexagonal Boron Nitride Nanosheets. Nano Lett. 2015, 15, 7029-7036.
Kalantar-Zadeh, K.; Berean, K. J.; Ha, N.; Chrimes, A. F.; Xu, K.; Grando, D.; Ou, J. Z.; Pillai, N.; Campbell, J. L.; Brkljača, R.; Taylor, K. M.; Burgell, R. E.; Yao, C. K.; Ward, S. A.; McSweeney, C. S.; Muir, J. G.; Gibson, P. R. A Human Pilot Trial of Ingestible Electronic Capsules Capable of Sensing Different Gases in the Gut. Nat. Electron. 2018, 1, 79-87.
Bonacchini, G. E.; Bossio, C.; Greco, F.; Mattoli, V.; Kim, Y. H.; Lanzani, G.; Caironi, M. Tattoo-Paper Transfer as a Versatile Platform for All-Printed Organic Edible Electronics. Adv. Mater. 2018, 30, 1-8.
Bareket, L.; Inzelberg, L.; Rand, D.; David-Pur, M.; Rabinovich, D.; Brandes, B.; Hanein, Y. Temporary-Tattoo for Long-Term High Fidelity Biopotential Recordings. Sci. Rep. 2016, 6, 1-8.
Kabiri Ameri, S.; Ho, R.; Jang, H.; Tao, L.; Wang, Y.; Wang, L.; Schnyer, D. M.; Akinwande, D.; Lu, N. Graphene Electronic Tattoo Sensors. ACS Nano 2017, 11, 7634-7641.
Ferrari, L. M.; Sudha, S.; Tarantino, S.; Esposti, R.; Bolzoni, F.; Cavallari, P.; Cipriani, C.; Mattoli, V.; Greco, F. Ultraconformable Temporary Tattoo Electrodes for Electrophysiology. Adv. Sci. 2018, 5, 1-11.
Lasia, A. Electrochemical Impedance Spectroscopy and Its Applications; Springer, 2014.
Kaiser, C. J. The Capacitor Handbook, 1st ed.; Springer Science, 1993.
Goikolea, E.; Mysyk, R. Chapter Four - Nanotechnology in Electrochemical Capacitors.
Kim, K. K.; Hsu, A.; Jia, X.; Kim, S. M.; Shi, Y.; Dresselhaus, M.; Palacios, T.; Kong, J. Synthesis and Characterization of Hexagonal Boron Nitride Film as a Dielectric Layer for Graphene Devices. ACS Nano 2012, 6, 8583-8590.
Dean, C. R.; Young, A. F.; Meric, I.; Lee, C.; Wang, L.; Sorgenfrei, S.; Watanabe, K.; Taniguchi, T.; Kim, P.; Shepard, K. L.; Hone, J. Boron Nitride Substrates for High-Quality Graphene Electronics. Nat. Nanotechnol. 2010, 5, 722-726.
Xue, J.; Sanchez-Yamagishi, J.; Bulmash, D.; Jacquod, P.; Deshpande, A.; Watanabe, K.; Taniguchi, T.; Jarillo-Herrero, P.; LeRoy, B. J. Scanning Tunnelling Microscopy and Spectroscopy of Ultra-Flat Graphene on Hexagonal Boron Nitride. Nat. Mater. 2011, 10, 282-285.
Wang, L.; Pu, Y.; Soh, A. K.; Shi, Y.; Liu, S. Layers Dependent Dielectric Properties of Two Dimensional Hexagonal Boron Nitride Nanosheets. AIP Adv. 2016, 6, 1-6.
Sugino, T.; Hori, T.; Kimura, C. Dielectric Constant of Boron Nitride Films Synthesized by Plasma-Assisted Chemical Vapor Deposition. Jpn. J. Appl. Phys. 2000, 39, 1101-1104.
Nguyen, S. V.; Nguyen, T.; Treichel, H.; Spindler, O. Plasma-Assisted Chemical Vapor Deposition and Characterization of Boron Nitride Films. J. Electrochem. Soc. 1994, 141, 1633-1638.
Withers, F.; Yang, H.; Britnell, L.; Rooney, A. P.; Lewis, E.; Felten, A.; Woods, C. R.; Sanchez Romaguera, V.; Georgiou, T.; Eckmann, A.; Kim, Y. J.; Yeates, S. G.; Haigh, S. J.; Geim, A. K.; Novoselov, K. S.; Casiraghi, C. Heterostructures Produced from Nanosheet-Based Inks. Nano Lett. 2014, 14, 3987-3992.
Yota, J.; Shen, H.; Ramanathan, R. Characterization of Atomic Layer Deposition HfO2, Al2O3, and Plasma-Enhanced Chemical Vapor Deposition Si3N4 as Metal-Insulator-Metal Capacitor Dielectric for GaAs HBT Technology. J. Vac. Sci. Technol. A 2013, 31, 1-9.
Facchetti, A.; Yoon, M.-H.; Marks, T. J. Gate Dielectrics for Organic Field-Effect Transistors: New Opportunities for Organic Electronics. Adv. Mater. 2005, 17, 1705-1725.
High-Yield Production of Graphene by Liquid-Phase Exfoliation of Graphite. Nat. Nanotechnol. 2008, 3, 563-568.
Two-Dimensional Nanosheets Produced by Liquid Exfoliation of Layered Materials. Science 2011, 331, 568-571.
Robbins, A. H.; Miller, W. C. Capacitors and Capacitance. In Circuit Analysis: Theory and Practice; Cengage Learning, 2013; p 345.
| [] |
[
"Employing Roget's Thesaurus in Automatic Pun Recognition and Interpretation",
"Employing Roget's Thesaurus in Automatic Pun Recognition and Interpretation"
] | [
"Elena Mikhalkova [email protected] \nInstitute of Philology and Journalism\nInstitute of Mathematics and Computer Science\nTyumen State University Tyumen\n625003Russia\n",
"Yuri Karyakin [email protected] \nTyumen State University Tyumen\n625003Russia\n"
] | [
"Institute of Philology and Journalism\nInstitute of Mathematics and Computer Science\nTyumen State University Tyumen\n625003Russia",
"Tyumen State University Tyumen\n625003Russia"
] | [] | The article describes a model of automatic interpretation of English puns, based on Roget's Thesaurus. In a pun, the algorithm discovers two groups of words, which belong to two main semantic fields. The fields become a semantic vector, based on which an SVM classifier learns to recognize puns. A rule-based model is then applied for recognition of intentionally ambiguous (target) words and their definitions. | 10.18653/v1/s17-2072 | [
"https://arxiv.org/pdf/1707.05479v1.pdf"
] | 773,248 | 1707.05479 | c879ced031bd9a6715556214db263bef15270381 |
Employing Roget's Thesaurus in Automatic Pun Recognition and Interpretation
Jul 2017. SemEval-2017 Task 7
Elena Mikhalkova [email protected]
Institute of Philology and Journalism
Institute of Mathematics and Computer Science
Tyumen State University Tyumen
625003Russia
Yuri Karyakin [email protected]
Tyumen State University Tyumen
625003Russia
Employing Roget's Thesaurus in Automatic Pun Recognition and Interpretation
Jul 2017. SemEval-2017 Task 7
The article describes a model of automatic interpretation of English puns, based on Roget's Thesaurus. In a pun, the algorithm discovers two groups of words, which belong to two main semantic fields. The fields become a semantic vector, based on which an SVM classifier learns to recognize puns. A rule-based model is then applied for recognition of intentionally ambiguous (target) words and their definitions.
Introduction
The following terminology is basic in our research of puns. A pun is (a) a short humorous genre where a word or phrase is intentionally used in two meanings; (b) a means of expression, the essence of which is to use a word or phrase so that, in the given context, the word or phrase can be understood in two meanings simultaneously. A target word is a word that appears in two meanings. A homographic pun is a pun that "exploits distinct meanings of the same written word" (Miller and Gurevych, 2015) (these can be meanings of a polysemantic word, or homonyms, including homonymic word forms). A heterographic pun is a pun in which the target word resembles another word or phrase in spelling; we will call the latter the second target word. Consider the following example (the Banker joke):
"I used to be a banker, but I lost interest."
The Banker joke is a homographic pun; interest is the target word. Unlike it, the Church joke below is a heterographic pun; propane is the target word, profane is the second target word:
"When the church bought gas for their annual barbecue, proceeds went from the sacred to the propane."
Our model of automatic pun analysis is based on the following premise: in a pun, there are two groups of words, and their meanings, that indicate the two meanings in which the target word is used. These groups can overlap, i.e. contain the same polysemantic words, used in different meanings.
In the Banker joke, the words and collocations banker, lost interest point at the professional status of the narrator and his/her career failure. At the same time, used to, lost interest tell a story of losing emotional attachment to the profession: the narrator became disinterested. The algorithm of pun recognition which we suggest discovers these two groups of words based on common semes 1 (Subtask 1), finds the words which belong to both groups and chooses the target word (Subtask 2), and, based on the common semes, picks the most suitable meaning which the target word exploits (Subtask 3). In case of heterographic puns, in Subtask 2, the algorithm looks for the word or phrase which appears in one group and not in the other.
Subtask 1: Mining Semantic Fields
We will call a semantic field a group of words and collocations which share a common seme. In taxonomies like WordNet (Kilgarriff and Fellbaum, 2000) and Roget's Thesaurus (Roget, 2004) (further referred to as Thesaurus), semes appear as hierarchies of word meanings. Top levels attract words with more general meanings (hypernyms). For example, Thesaurus has six top-level Classes that divide into Divisions, which divide into Sections, and so on, down to the fifth, lowest level. WordNet's structure is not so transparent. Applying such dictionaries to get semantic fields (the mentioned common groups of words) in a pun is, therefore, the task of finding the two most general hypernyms in WordNet, or two relevant Classes among the six Classes in Thesaurus. We chose Thesaurus, as its structure is only five levels deep, Class labels are not lemmas themselves but arbitrary names (we used numbers instead), and it allows parsing on a certain level and inserting corrections (adding lemmas, merging subsections, etc. 2). After some experimentation, instead of Classes, we chose to search for relevant Sections, which are 34 subdivisions of the six Classes. 3
After normalization (including change to lowercase; part-of-speech tagging, tokenization, and lemmatization with NLTK tools (Bird et al., 2009); collocation extraction; 4 stop-word removal 5), the algorithm collects Section numbers for every word and collocation and removes duplicates (in Thesaurus, homonyms proper can belong to different subdivisions in the same or different Sections). Table 1 shows what Sections the words of the Banker joke belong to.
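As an illustration of the normalization step, the sketch below strings together the NLTK tools mentioned above; it is a simplification (the actual system's stop-word handling and collocation extraction differ, see footnotes 4 and 5):

```python
# A minimal sketch of the normalization pipeline, using NLTK
# (Bird et al., 2009): lowercasing, tokenization, POS tagging and
# lemmatization, followed by stop-word removal. Collocation extraction
# and the Thesaurus lookup itself are omitted here.
import nltk
from nltk.corpus import stopwords, wordnet
from nltk.stem import WordNetLemmatizer

def penn_to_wn(tag):
    # Map Penn Treebank tags to WordNet POS labels for the lemmatizer.
    if tag.startswith("J"):
        return wordnet.ADJ
    if tag.startswith("V"):
        return wordnet.VERB
    if tag.startswith("R"):
        return wordnet.ADV
    return wordnet.NOUN

def normalise(text):
    lemmatizer = WordNetLemmatizer()
    stop = set(stopwords.words("english"))
    tokens = nltk.word_tokenize(text.lower())
    tagged = nltk.pos_tag(tokens)
    lemmas = [lemmatizer.lemmatize(w, penn_to_wn(t)) for w, t in tagged]
    return [w for w in lemmas if w.isalpha() and w not in stop]

print(normalise("I used to be a banker, but I lost interest."))
# -> ['use', 'banker', 'lose', 'interest']
#    (modulo tagger and stop-list decisions)
```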
Then the semantic vector of a pun is calculated. Every pun p is a vector in a 34-dimensional space:
$$p_i = p_i(s_{1i}, s_{2i}, \dots, s_{34i})$$
The value of every element $s_{ki}$ equals the number of words in a pun which belong to Section $S_k$. The algorithm passes from Section to Section, each time checking every word $w_{ji}$ in the list of extracted words (of length $l_i$). If a word belongs to a Section, the value of $s_{ki}$ increases by 1:

$$s_{ki} = \sum_{j=1}^{l_i} \{1 \mid w_{ji} \in S_k\}, \quad k = 1, 2, \dots, 34, \quad i = 1, 2, 3, \dots$$
For example, the semantic vector of the Banker joke looks as follows: see Table 2.
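A minimal sketch of the vector construction, using a hypothetical fragment of the word-to-Section index in place of the full parsed Thesaurus:

```python
# Each pun is mapped to a 34-dimensional vector whose k-th entry counts
# the lemmas belonging to Roget Section k (Sections numbered 0..33).
# `sections` is a toy stand-in for a parsed copy of Roget's Thesaurus;
# compare Table 1 for the Banker joke.
from collections import Counter

N_SECTIONS = 34

sections = {
    "use":      {24, 30},
    "be":       {0, 19},
    "banker":   {31, 30},
    "lose":     {21, 26, 30, 19},
    "interest": {30, 25, 24, 7, 31, 16, 1},
}

def semantic_vector(lemmas, index):
    counts = Counter()
    for w in lemmas:
        for k in index.get(w, ()):
            counts[k] += 1
    return [counts[k] for k in range(N_SECTIONS)]

p = semantic_vector(["use", "be", "banker", "lose", "interest"], sections)
print(p)  # entry 30 (Possessive Relations) is 4, cf. Table 2
```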
To test the algorithm, we first collected 2484 puns from different Internet resources and, second, built a corpus of 2484 random sentences of length 5 to 25 words from different NLTK corpora (Bird et al., 2009), plus several hundred aphorisms and proverbs from different Internet sites. We shuffled and split the sentences into two equal groups, the first two forming a training set and the other two a test set. The classification was conducted using different Scikit-learn (Pedregosa et al., 2011) algorithms. We also singled out 191 homographic puns and 198 heterographic puns, and tested them against the same number of random sentences. In all the tests, 6 the Scikit-learn algorithm of SVM with the Radial Basis Function (RBF) kernel produced the highest average F-measure results ($\bar{f} = (f_{puns} + f_{random})/2$). In addition, its results are smoother, comparing the difference between precision and recall (which leads to the highest F-measure scores) within the two classes (puns and random sentences) and between the classes (average scores). Table 3 illustrates results of different algorithms in class "Puns" (not average results between puns and not puns). The results were higher for the split selection, reaching 0.79 (homographic) and 0.78 (heterographic) scores of F-measure. The common selection got the maximum of 0.7 for average F-measure in several tests. The higher results of the split selection may be due to a larger training set.
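The classification step can be sketched as follows with scikit-learn; the arrays stand in for the semantic vectors of the pun and random-sentence corpora described above:

```python
# A minimal sketch of the classification step with scikit-learn
# (Pedregosa et al., 2011): an SVM with RBF kernel trained on
# 34-dimensional semantic vectors. The random arrays below are
# placeholders for the real corpora, for illustration only.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import precision_recall_fscore_support

rng = np.random.default_rng(0)
X_train = rng.integers(0, 5, size=(200, 34))   # placeholder vectors
y_train = rng.integers(0, 2, size=200)         # 1 = pun, 0 = random
X_test = rng.integers(0, 5, size=(100, 34))
y_test = rng.integers(0, 2, size=100)

clf = SVC(kernel="rbf").fit(X_train, y_train)
p, r, f, _ = precision_recall_fscore_support(
    y_test, clf.predict(X_test), average="binary")
print(f"precision={p:.2f} recall={r:.2f} F={f:.2f}")
```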
Subtask 2: Hitting the Target Word
We suggest that, in a homographic pun, the target word is a word which immediately belongs to two semantic fields; in a heterographic pun, the target word belongs to at least one discovered semantic field and does not belong to the other. However, in reality, words in a sentence tend to belong to too many fields, and they create noise in the search. To reduce the influence of noisy fields, we included such non-semantic features in the model as the tendency of the target word to occur at the end of a sentence and the part-of-speech distribution given in (Miller and Gurevych, 2015). A-group ($W_A$) and B-group ($W_B$) are groups of words in a pun which belong to the two semantic fields sharing the target word. Thus, for some $s_{ki}$, $k$ becomes $A$ or $B$. 7 A-group attracts the maximum number of words in a pun:

$$s_{Ai} = \max_k s_{ki}, \quad k = 1, 2, \dots, 34$$

In the Banker joke, $s_{Ai} = 4$, $A = 30$ (Possessive Relations); the words that belong to this group are use, lose, banker, interest. B-group is the second largest group in a pun:

$$s_{Bi} = \max_k (s_{ki}/s_{Ai}), \quad k = 1, 2, \dots, 34$$

In the Banker joke, $s_{Bi} = 2$. There are three groups of words which have two words in them: $B_1 = 19$, Results Of Reasoning: be, lose; $B_2 = 24$, Volition In General: use, interest; $B_3 = 31$, Affections In General: banker, interest. Ideally, there should be a group of about three words and collocations describing a person's inner state (used to be, lose, interest), and two words (lose, interest) in $W_A$ are a target phrase. However, due to the shortage of data about collocations in dictionaries, $W_B$ is split into several smaller groups. Consequently, to find the target word, we have to appeal to other word features. In testing the system on homographic puns, we relied on the polysemantic character of words. If in a joke there is more than one value of $B$, the $W_B$ candidates merge into one, with duplicates removed, and every word in $W_B$ becomes a target word candidate: $c \in W_B$. In the Banker joke, $W_B$ is the list be, lose, use, interest, banker; $B = \{19, 24, 31\}$. Based on the definition of the target word in a homographic pun, words from $W_B$ that are also found in $W_A$ should have a privilege. Therefore, the first value $v_\alpha$ each word gets is the output of the Boolean function:

$$v_\alpha(c) = \begin{cases} 2 & \text{if } (c \in W_A) \wedge (c \in W_B) \\ 1 & \text{if } (c \notin W_A) \wedge (c \in W_B) \end{cases}$$
The second value $v_\beta$ is the absolute frequency of a word in the union of $B_1$, $B_2$, etc., including duplicates: $v_\beta(c) = f_c(W_{B_1} \cup W_{B_2} \cup W_{B_3})$.
The third value v γ is a word position in the sentence: the closer the word is to the end, the bigger this value is. If the word occurs several times, the algorithm counts the average of the sums of position numbers.
The fourth value is the part-of-speech probability $v_\delta$. Depending on the part of speech the word belongs to, it gets the following rate:

$$v_\delta(c) = \begin{cases} 0.502 & \text{if } c \text{ is a noun} \\ 0.338 & \text{if } c \text{ is a verb} \\ 0.131 & \text{if } c \text{ is an adjective} \\ 0.016 & \text{if } c \text{ is an adverb} \\ 0.013 & \text{otherwise} \end{cases}$$
The final step is to count rates, using multiplicative convolution, and choose the word with the maximum rate:
$$z_1(W_B) = c \,\big|\, \max_c (v_\alpha \times v_\beta \times v_\gamma \times v_\delta)$$
Values of the Banker joke are illustrated in Table 4.
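A minimal sketch of this scoring scheme; the frequency and position values below are hypothetical stand-ins for the Banker joke:

```python
# Every candidate in W_B is scored by the product
# v_alpha * v_beta * v_gamma * v_delta, and the maximum wins.
POS_RATE = {"NOUN": 0.502, "VERB": 0.338, "ADJ": 0.131, "ADV": 0.016}

def score(in_A, freq_B, position, pos_tag):
    v_alpha = 2 if in_A else 1            # candidate also in W_A?
    v_beta = freq_B                       # frequency in B1 U B2 U ...
    v_gamma = position                    # closer to the end -> larger
    v_delta = POS_RATE.get(pos_tag, 0.013)
    return v_alpha * v_beta * v_gamma * v_delta

# (in_A, frequency in B-groups, sentence position, POS):
# illustrative values only, not the paper's Table 4.
candidates = {
    "be":       (False, 1, 4, "VERB"),
    "lose":     (True,  1, 8, "VERB"),
    "use":      (True,  1, 2, "VERB"),
    "interest": (True,  2, 9, "NOUN"),
    "banker":   (True,  1, 5, "NOUN"),
}
best = max(candidates, key=lambda c: score(*candidates[c]))
print(best)  # -> 'interest'
```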
In the solution for heterographic puns, we built a different model of the B-group. Unlike homographic puns, here the target word is missing in $W_B$ (the reader has to guess the word or phrase homonymous to the target word). Accordingly, we rely on the completeness of the union of $W_A$ and $W_B$: among the candidates for $W_B$ (the second largest groups), such groups are relevant that form the longest list with $W_A$ (duplicates removed). In Ex. 2 (the Church joke), $W_A$ = go, gas, annual, barbecue, propane, and two groups form the largest union with it: $W_B$ = buy, proceeds + sacred, church. Every word in $W_A$ and $W_B$ can be the target word. The privilege passes to words used only in one of the groups. Ergo, the first value is:
$$v_\alpha(c) = \begin{cases} 2 & \text{if } (c \in W_A) \oplus (c \in W_B) \\ 1 & \text{otherwise} \end{cases}$$
Frequencies are of no value here; values of position in the sentence, and part-of-speech distribution remain the same. The function output is:
$$z_1(W_B) = c \,\big|\, \max_c (v_\alpha \times v_\gamma \times v_\delta)$$
Values of the Church joke are illustrated in Table 5.
Subtask 3: Mapping Roget's Thesaurus to Wordnet
In the last phase, we implemented an algorithm which maps Roget's Sections to synsets in Wordnet. In homographic puns, definitions of a word in Wordnet are analyzed similarly to words in a pun when searching for the semantic fields the words belong to. For example, words from the definitions of the synsets of interest belong to the following Roget's Sections: Synset(interest.n.01) = "a sense of concern with and curiosity about someone or something": (21, 19, 31, 24, 1, 30, 6, 16, 3, 31, 19, 12, 2, 0); Synset(sake.n.01) = "a reason for wanting something done": (15, 24, 18, 7, 19, 11, 2, 31, 24, 30, 12, 2, 0, 26, 24), etc. When the A-Section is discovered (for example, in the Banker joke, A = 30, Possessive Relations), the synset with the maximum number of words in its definition which belong to the A-Section becomes the A-synset. The B-synset is found likewise for the B-group, with the exception that it should not coincide with the A-synset. In heterographic puns, the B-group is also a marker of the second target word. Every word in the index of Roget's Thesaurus is compared to the known target word using the Damerau-Levenshtein distance. The list is sorted in increasing order, and the algorithm begins to check what Roget's Sections every word belongs to, until it finds the word that belongs to a Section (or the Section, if there is only one) in the B-group. This word becomes the second target word. Nevertheless, as we did not have much trial data apart from the four examples released before the competition, the first trials of the program on a large collection returned many errors, so we changed the algorithm for the B-group as follows.
Homographic puns, first run. B-synset is calculated on the basis of sense frequencies (the output is the most frequent sense). If it coincides with A-synset, the program returns the second most frequent synset.
Homographic puns, second run. B-synset is calculated on the basis of the Lesk distance, using the built-in NLTK Lesk function (Bird et al., 2009). If it coincides with A-synset, the program returns another synset on the basis of sense frequencies, as in the first run.
Heterographic puns, first run. The second target word is calculated based on Thesaurus and the Damerau-Levenshtein distance; words missing in Thesaurus are analyzed as their WordNet hypernyms. In both runs for heterographic puns, synsets are calculated using the Lesk distance.
Heterographic puns, second run. The second target word is calculated on the basis of the Brown corpus (NLTK (Bird et al., 2009)): if the word stands in the same context in Brown as it does in the pun, it becomes the target word. The size of the context window is (0; +3) for verbs, (0; +2) for adjectives, and (-2; +2) for nouns, adverbs and other parts of speech within the sentence where a word is used. Table 6 illustrates the competition results of our system.
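The two ingredients of this phase can be sketched as follows: NLTK's built-in lesk() for the synset choice and an optimal-string-alignment variant of the Damerau-Levenshtein distance for ranking second-target-word candidates (the candidate list is a toy stand-in for the Thesaurus index):

```python
# A minimal sketch of the Subtask 3 machinery. In the real system the
# target word itself would be excluded from the candidate ranking.
from nltk.wsd import lesk

context = ("when the church bought gas proceeds went "
           "from the sacred to the propane").split()
print(lesk(context, "propane"))  # Lesk-chosen synset for the target word

def dl_distance(a, b):
    # Optimal string alignment variant of Damerau-Levenshtein.
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i
    for j in range(len(b) + 1):
        d[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + cost)
            if i > 1 and j > 1 and a[i-1] == b[j-2] and a[i-2] == b[j-1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + cost)
    return d[len(a)][len(b)]

index = ["profane", "propene", "prophet", "propane"]  # toy index
print(sorted(index, key=lambda w: dl_distance("propane", w)))
```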
Conclusion
The system that we introduced is based on one general supposition about the semantic structure of puns and combines two types of algorithms: supervised learning and rule-based. Not surprisingly, the supervised learning algorithm showed better results in solving an NLP task than the rule-based one. Also, in this implementation, we tried to combine two very different dictionaries (Roget's Thesaurus and Wordnet). And, although the reliability of Thesaurus in reproducing a universal semantic map can be doubted, it seems to be a quite effective source of data when used in Subtask 1. The attempts to map it to Wordnet seem rather weak so far, concerning the test results, which also raises a question: if different dictionaries treat the meaning of words differently, can there be an objective and/or universal semantic map to apply as the foundation for any WSD task?
Word      Section No., Section name in Thesaurus
I         -
use       24, Volition In General; 30, Possessive Relations
to        -
be        0, Existence; 19, Results Of Reasoning
a         -
banker    31, Affections In General; 30, Possessive Relations
but       -
lose      21, Nature Of Ideas Communicated; 26, Results Of Voluntary Action; 30, Possessive Relations; 19, Results Of Reasoning
interest  30, Possessive Relations; 25, Antagonism; 24, Volition In General; 7, Causation; 31, Affections In General; 16, Precursory Conditions And Operations; 1, Relation

Table 1: Semantic fields in the Banker joke.
Table 2: Semantic vector of the Banker joke.
p_Banker = (1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 2, 0, 1, 0, 0, 2, 1, 1, 0, 0, 0, 4, 2, 0, 0)
Method                                          Precision  Recall  F-measure
Common selection:
SVM with linear kernel                          0.67       0.68    0.67
SVM with polynomial kernel                      0.65       0.79    0.72
SVM with Radial Basis Function (RBF) kernel     0.70       0.70    0.70
SVM with linear kernel, normalized data         0.62       0.74    0.67
Homographic puns:
SVM with RBF kernel                             0.79       0.80    0.79
Multinomial Naive Bayes                         0.71       0.80    0.76
Logistic Regression, standardized data          0.77       0.71    0.74
Heterographic puns:
SVM with RBF kernel                             0.77       0.79    0.78
Logistic Regression                             0.74       0.75    0.74

Table 3: Tests for pun recognition.
Table 4: Values of the Banker joke.

Table 5: Values of the Church joke.

Table 6: Competition results.
1 Bits of meaning. Semes are parts of meaning present both in the word and in its hypernym. Moving up a taxonomy like Thesaurus or WordNet, hypernyms become more general, and the seme connecting them to the word becomes more general, too.
2 For example, we edited Thesaurus, adding words which were absent in it. If a word in a pun was missing in Thesaurus, the system checked for its hypernyms in Wordnet and added the word to those Sections in Thesaurus which contained the hypernyms.
3 Sections are not always immediate subdivisions of a Class. Some Sections are grouped in Divisions.
4 To extract collocations and search for them in Thesaurus, we applied our own procedure, based on a part-of-speech analysis.
5 After lemmatization, all words are analyzed in collocations, but only nouns, adjectives, and verbs compose a list of separate words.
6 The tests were run before the competition. Results of the competition for our system are given in Table 6.
7 $s_{ki}$ is always an integer; $W_A$ and $W_B$ are always lists of words; $A$ is always an integer, $B$ is a list of one or more integers.
Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python: Analyzing Text with the Natural Language Toolkit. O'Reilly Media, Inc.
Adam Kilgarriff and Christiane Fellbaum. 2000. WordNet: An Electronic Lexical Database.
Tristan Miller and Iryna Gurevych. 2015. Automatic disambiguation of English puns. In ACL (1), pages 719-729.
Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research 12(Oct):2825-2830.
Peter Mark Roget. 2004. Roget's Thesaurus of English Words and Phrases. Project Gutenberg.
| [] |
[
"A comprehensive thermodynamic model for temperature change in i-caloric effects",
"A comprehensive thermodynamic model for temperature change in i-caloric effects"
] | [
"A M G Carvalho \nDepartamento de Engenharia Mecânica\nUniversidade Estadual de Maringá\n87020-900MaringáPRBrazil\n\nDepartamento de Engenharia Química\nUniversidade Federal de São Paulo\n09913-030DiademaSPBrazil\n\nInstituto de Física Armando Dias Tavares\nUniversidade do Estado do Rio de Janeiro\nUERJ\nRua São Francisco Xavier524, 20550-013Rio de JaneiroRJBrazil\n",
"W Imamura \nDepartamento de Engenharia Mecânica\nUniversidade Estadual de Maringá\n87020-900MaringáPRBrazil\n\nDepartamento de Química\nUniversidade Estadual de Maringá\n87020-900MaringáPRBrazil\n\nCentro de Tecnologia\nUniversidade Federal de Alagoas\n57072-970MaceióALBrazil\n"
] | [
"Departamento de Engenharia Mecânica\nUniversidade Estadual de Maringá\n87020-900MaringáPRBrazil",
"Departamento de Engenharia Química\nUniversidade Federal de São Paulo\n09913-030DiademaSPBrazil",
"Instituto de Física Armando Dias Tavares\nUniversidade do Estado do Rio de Janeiro\nUERJ\nRua São Francisco Xavier524, 20550-013Rio de JaneiroRJBrazil",
"Departamento de Engenharia Mecânica\nUniversidade Estadual de Maringá\n87020-900MaringáPRBrazil",
"Departamento de Química\nUniversidade Estadual de Maringá\n87020-900MaringáPRBrazil",
"Centro de Tecnologia\nUniversidade Federal de Alagoas\n57072-970MaceióALBrazil"
] | [] | Solid-state cooling based on i-caloric effects may be an alternative to conventional vapor-compression refrigeration systems. The adiabatic temperature change (ΔT_S) is one of the parameters that characterize the i-caloric effects; therefore it is important to obtain the correct ΔT_S values and, whenever possible, to correlate this parameter with thermodynamic and microscopic quantities. In this work, we propose a comprehensive thermodynamic model that allows us to determine the adiabatic temperature change from non-adiabatic measurements of temperature change induced by a field change. Our model fits efficiently temperature versus time and temperature change versus the inverse of the field change rate data for three different materials presenting different i-caloric effects. The results indicate the present model is a very useful and robust tool to obtain the correct ΔT_S values and to correlate ΔT_S with other thermodynamic quantities. Solid-state cooling based on i-caloric effects may be the next generation of refrigeration technologies and has received much attention in the last decades. An i-caloric effect refers to the change in temperature or entropy, in response to an adiabatic or isothermal process, respectively, via application of external stimuli of field changes upon a given material. The change in temperature and entropy should be, at least, partially reversible, e.g., if temperature increases and entropy decreases when applying a field, temperature should decrease and entropy should increase when removing the field. Depending on the type of external field (such as mechanical, electric, and magnetic fields), the i-caloric effect is called mechanocaloric, electrocaloric and magnetocaloric. Mechanocaloric effect includes the particular cases of elastocaloric effect (driven by | 10.1140/epjp/s13360-023-04052-8 | [
"https://export.arxiv.org/pdf/2303.15292v1.pdf"
] | 257,766,429 | 2303.15292 | f648a9f426d0402019c580d47c65833173a9392b |
A comprehensive thermodynamic model for temperature change in i-caloric effects
A M G Carvalho
Departamento de Engenharia Mecânica
Universidade Estadual de Maringá
87020-900MaringáPRBrazil
Departamento de Engenharia Química
Universidade Federal de São Paulo
09913-030DiademaSPBrazil
Instituto de Física Armando Dias Tavares
Universidade do Estado do Rio de Janeiro
UERJ
Rua São Francisco Xavier524, 20550-013Rio de JaneiroRJBrazil
W Imamura
Departamento de Engenharia Mecânica
Universidade Estadual de Maringá
87020-900MaringáPRBrazil
Departamento de Química
Universidade Estadual de Maringá
87020-900MaringáPRBrazil
Centro de Tecnologia
Universidade Federal de Alagoas
57072-970MaceióALBrazil
A comprehensive thermodynamic model for temperature change in i-caloric effects
Solid-state cooling based on i-caloric effects may be an alternative to conventional vapor-compression refrigeration systems. The adiabatic temperature change (ΔT_S) is one of the parameters that characterize the i-caloric effects; therefore it is important to obtain the correct ΔT_S values and, whenever possible, to correlate this parameter with thermodynamic and microscopic quantities. In this work, we propose a comprehensive thermodynamic model that allows us to determine the adiabatic temperature change from non-adiabatic measurements of temperature change induced by a field change. Our model fits efficiently temperature versus time and temperature change versus the inverse of the field change rate data for three different materials presenting different i-caloric effects. The results indicate the present model is a very useful and robust tool to obtain the correct ΔT_S values and to correlate ΔT_S with other thermodynamic quantities.

Solid-state cooling based on i-caloric effects may be the next generation of refrigeration technologies and has received much attention in the last decades. An i-caloric effect refers to the change in temperature or entropy, in response to an adiabatic or isothermal process, respectively, via application of external stimuli of field changes upon a given material. The change in temperature and entropy should be, at least, partially reversible, e.g., if temperature increases and entropy decreases when applying a field, temperature should decrease and entropy should increase when removing the field. Depending on the type of external field (such as mechanical, electric, and magnetic fields), the i-caloric effect is called mechanocaloric, electrocaloric and magnetocaloric. Mechanocaloric effect includes the particular cases of elastocaloric effect (driven by
uniaxial stress), barocaloric effect (driven by hydrostatic pressure) and twistocaloric effect (driven by pure torsion), besides more general cases.

The search for materials that present i-caloric effects large enough to be used in cooling technology has become a challenge. Large i-caloric effects can be found in intermetallics, 1,2,3,4,5,6,7 ceramics, 8 plastic crystals, 9 alkanes, 10,11 spin-crossover systems, 12,13,14 composites 15 etc. In general, large i-caloric effects appear in materials around first- or second-order transitions, but polymers 16,17,18,19,20,21,22 can also exhibit large effects with or without transitions.
Adiabatic temperature change (ΔT_S) and isothermal entropy change (ΔS_T) are the main parameters that characterize the i-caloric effects. If one of these parameters is large, we say the i-caloric effect is large. For refrigeration, it is desirable that both parameters are large in certain conditions. Understanding the behavior of ΔT_S and ΔS_T as a function of temperature, applied field and other parameters is important to correlate the i-caloric effect with microscopic and thermodynamic quantities.
In this work, we focus our attention on the temperature change due to a field change. Based on thermodynamic models applied to the magnetocaloric effect 23 and the elastocaloric effect, 24 we propose a comprehensive model to understand the temperature change behavior observed in i-caloric materials and to obtain further information:

$$\rho(i)\,c(i)\,\dot{T}(t) = -h(i)\,[T(t) - T_1] + \dot{w}(t) + \rho(i)\,\Delta s_{tr}\,\dot{x}(t)\,T(t) , \qquad (1)$$

where i is the intensive variable that changes in time and provokes the corresponding i-caloric effect; t is the time; Ṫ ≡ dT/dt and T is the temperature; T_1 is the initial temperature; ρ and c are the density and the specific heat of the material, respectively, which may depend on the variation of the intensive variable i; h is the volumetric heat transfer coefficient, h(i) = h_0 A(i)/V(i), where h_0 is the heat transfer coefficient, A is the heat transfer surface area and V is the material volume; ẇ ≡ dw/dt and w is the work (per volume unit) done by the intensive variable i on the material, not considering the latent heat due to a first-order transition; Δs_tr is the specific entropy variation at the transition presenting latent heat; ẋ ≡ dx/dt and x is the mass fraction of one phase at a first-order transition.
As an initial approach, we consider processes where the external field variation, Δi, does not change significantly the volumetric heat transfer coefficient (h), the specific heat (c) and the material density (ρ), i.e., we keep h, c and ρ fixed. Besides, Δs_tr does not depend on time. Then, integrating Eq. (1), we have
$$\int_{t_1}^{t_2} dT = -\frac{h}{\rho c}\int_{t_1}^{t_2} [T(t) - T_1]\,dt + \frac{1}{\rho c}\int_{t_1}^{t_2} \dot{w}\,dt + \frac{\Delta s_{tr}}{c}\int_{t_1}^{t_2} T(t)\,\dot{x}(t)\,dt . \qquad (2)$$
The term $\frac{\Delta s_{tr}}{c}\int_{t_1}^{t_2} T(t)\,\dot{x}(t)\,dt$ represents the temperature change ΔT_tr associated to a first-order transition. We also have $\int_{t_1}^{t_2} \dot{w}\,dt = w_{12}$ and $\int_{t_1}^{t_2} [T(t) - T_1]\,dt = \overline{(T - T_1)}\,\Delta t$, with Δt = t_2 - t_1, where $\overline{(T - T_1)}$ is the average temperature difference between the material and the reservoir during the external field variation. Considering that the field changes at a constant rate r, Δt ~ 1/r and $\overline{(T - T_1)} \cong \Delta T/2$. Since r ≡ di/dt, we have Δt = Δi/r, where Δi is the field change between times t_1 and t_2. Therefore, $\overline{(T - T_1)}\,\Delta t = \frac{\Delta T}{2}\,\frac{\Delta i}{r}$. We can thus rewrite Eq. (2) as

$$\Delta T = -\frac{h}{2\rho c}\,\Delta T\,\frac{\Delta i}{r} + \frac{1}{\rho c}\,w_{12} + \Delta T_{tr} . \qquad (3)$$
Imposing on Eq. (3) the adiabatic condition, h = 0, we have

$$\Delta T_S = \frac{1}{\rho c}\,w_{12} + \Delta T_{tr} . \qquad (4)$$

Subtracting Eq. (4) from Eq. (3), we get

$$\Delta T - \Delta T_S = -\frac{h}{2\rho c}\,\Delta T\,\frac{\Delta i}{r} . \qquad (5)$$

It is easy to see that

$$\Delta T = \frac{\Delta T_S}{1 + \dfrac{h\,\Delta i}{2\rho c\,r}} . \qquad (6)$$
Comparing Eq. (6) with equation 6 from Ref. 24, we notice they are equivalent, with Φ ≡ hΔi/2ρc. Here, it is important to point out three main differences between the present approach and the one reported in Ref. 24: (a) the term ΔT_tr in Eq. (2) represents the temperature change associated to a first-order transition, not the adiabatic temperature change ΔT_S; (b) it is not possible to achieve Eq. (6) from Eq. (1) using the approach from Ref. 24; (c) the present approach is simpler and more direct.
We can use Eq. (6) to analyze datasets of ΔT as a function of r or 1/r. Since we have a ΔT_S value for each rate r, we consider $\Delta T_S = \overline{\Delta T_S}$ (a single average value) and we find the best $\overline{\Delta T_S}$ to fit each ΔT vs. 1/r dataset in the present paper.
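A minimal sketch of such a fit with scipy, treating the average adiabatic temperature change as the only free parameter; h, ρc and Δi are assumed known, and the data points are synthetic placeholders rather than the measured values analyzed below:

```python
# Fit Eq. (6), DeltaT = DeltaT_S / (1 + h*Delta_i/(2*rho*c*r)),
# to a DeltaT vs. 1/r dataset. All numbers are placeholders.
import numpy as np
from scipy.optimize import curve_fit

h = 1.0e5        # W m^-3 K^-1, volumetric heat transfer coefficient
rho_c = 2.3e6    # J m^-3 K^-1, rho * c
d_i = 3.0        # field change Delta_i (units of the intensive variable)

def eq6(inv_r, dT_S):
    return dT_S / (1.0 + h * d_i * inv_r / (2.0 * rho_c))

inv_r = np.array([1.0, 10.0, 100.0])  # 1/r, placeholder rates
dT = eq6(inv_r, 6.9) + 0.05 * np.random.default_rng(1).standard_normal(3)

popt, _ = curve_fit(eq6, inv_r, dT, p0=[5.0])
print(f"Delta T_S ~ {popt[0]:.2f} K")
```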
To analyze the behavior of the material's temperature as a function of time, we must solve Eq. (1) accordingly. Essentially, we solve Eq. (1) for two sequential processes: a field change process (from field i_1 and temperature T_1 to field i_2 and temperature T_2), hereinafter called process 1→2; and an isofield process (from T_2 to T_1, at constant field i_2), hereinafter called process 2→1.
For the process 1→2, the solution of Eq. (1) is

$$T(t) = \frac{1}{\varphi(t)}\left\{ T_1\,\varphi(t_1) + \frac{1}{\rho c}\int_{t_1}^{t} \varphi(t')\,[h T_1 + \dot{w}(t')]\,dt' \right\} , \qquad (7)$$

where

$$\varphi(t) = \exp\left[ \frac{h}{\rho c}\,(t - t_1) - \frac{\Delta s_{tr}}{c}\int_{t_1}^{t} \dot{x}(t')\,dt' \right] .$$
For the process 2→1, ẇ = 0, and Eq. (1) becomes

$$\dot{T}(t) = -\frac{h}{\rho c}\,[T(t) - T_1] + \frac{\Delta s_{tr}}{c}\,\dot{x}(t)\,T(t) . \qquad (8)$$

Eq. (8) is solved for t ≥ t_2, resulting in

$$T(t) = \frac{1}{\varphi(t)}\left\{ T_2\,\varphi(t_2) + \frac{h T_1}{\rho c}\int_{t_2}^{t} \varphi(t')\,dt' \right\} . \qquad (9)$$
To obtain the adiabatic temperature change (ΔT_S), we impose on Eq. (1) the adiabatic condition, h = 0. Thus, Eq. (1) becomes

$$\dot{T}(t) = \frac{1}{\rho c}\,\dot{w}(t) + \frac{\Delta s_{tr}}{c}\,\dot{x}(t)\,T(t) . \qquad (10)$$

Eq. (10) is solved for t_1 ≤ t ≤ t_2, resulting in

$$T(t) = \frac{1}{\psi(t)}\left\{ T_1\,\psi(t_1) + \frac{1}{\rho c}\int_{t_1}^{t} \psi(t')\,\dot{w}(t')\,dt' \right\} , \qquad (11)$$

where

$$\psi(t) = \exp\left[ -\frac{\Delta s_{tr}}{c}\int_{t_1}^{t} \dot{x}(t')\,dt' \right] .$$
It is not difficult to see that

$$\Delta T_S = T(t_2) - T_1 = \frac{1}{\psi(t_2)}\left\{ T_1\,\psi(t_1) + \frac{1}{\rho c}\int_{t_1}^{t_2} \psi(t)\,\dot{w}(t)\,dt \right\} - T_1 . \qquad (12)$$
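For illustration, Eq. (1) can also be integrated numerically. The sketch below uses parameters loosely inspired by the Ni-Mn-Ti example discussed later; the forms of ẇ(t) and ẋ(t) and all numerical values are illustrative assumptions, not the Table S1 parametrizations:

```python
# Integrate Eq. (1) for the field-change process 1->2 followed by the
# isofield relaxation 2->1, using scipy's solve_ivp.
import numpy as np
from scipy.integrate import solve_ivp

h, rho, c, ds_tr, T1 = 8.2e5, 6.9e3, 480.0, 46.0, 308.0  # placeholders
t1, t2 = 0.0, 0.44                     # field ramp interval, s

def w_dot(t):   # work rate per unit volume; zero in this illustration
    return 0.0

def x_dot(t):   # transformed-fraction rate: full transition over [t1, t2]
    return 1.0 / (t2 - t1) if t1 <= t <= t2 else 0.0

def rhs(t, T):  # Eq. (1) divided through by rho*c
    return [(-h * (T[0] - T1) + w_dot(t)) / (rho * c)
            + ds_tr * x_dot(t) * T[0] / c]

sol = solve_ivp(rhs, (t1, 60.0), [T1], max_step=0.01)
rise = np.interp(t2, sol.t, sol.y[0]) - T1
print(f"T(t2) - T1 ~ {rise:.1f} K")  # roughly a few tens of kelvin here
```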
To test our approaches, we apply the present model to different materials and different i-caloric effects. We use Eqs. (7), (9) and (12) to fit temperature vs. time data in order to obtain ΔT_S. We also use Eq. (6) to fit the temperature change (ΔT) vs. rate⁻¹ data in order to obtain ΔT_S. If our approaches are satisfactory, both ΔT_S values should be close.
Experimental data for the magnetocaloric effect in metallic gadolinium 23 and the fits from the present model are shown in Fig. 1. The functions and parameters used in the model are listed in Table S1 (in the Supplementary Information). Temperature (T) vs. time (t) data and ΔT vs. r⁻¹ data were obtained at ~292 K, which is very close to the Curie temperature of Gd.
Firstly, we fit the T vs. t data [Fig. 1(a)] for t ≥ t_2 using Eq. (9), which requires the function ẋ(t) and the parameters Δs_tr, h, c and ρ. As the magnetic transition of Gd does not present phase coexistence or latent heat, ẋ(t) = 0 and Δs_tr = 0. In this case, we need three independent parameters to perform the fitting procedure. As we know the c and ρ values for Gd, 25,26 we only have to find h. After that, we fit the T vs. t data for t_1 ≤ t ≤ t_2, using Eq. (7), which requires the function ẇ(t) and the parameters h, c and ρ. Since we have h, c and ρ, we only have to find an appropriate ẇ(t), which is described in Table S1.
For t_1 ≤ t ≤ t_2, with μ_0ΔH = 3 T and r = 0.01 T s⁻¹, the experimental ΔT is 5.6 K. In order to obtain the adiabatic temperature change (ΔT_S), we use Eq. (12), which requires the functions and parameters previously obtained. Then, we get ΔT_S = 6.7 K (20% higher than the measured ΔT), indicating that the experimental process 1→2 with a rate of 0.01 T s⁻¹ is far from the adiabatic condition. Magnetic-field-induced ΔT for Gd is shown in Fig. 1(b) as a function of the inverse of the rate of magnetic field change. The three experimental points 23 are fitted using Eq. (6). Since we already have the h, c and ρ values, we have to find the ΔT_S and Δi values for the best fit. Here we get ΔT_S = 6.9 K. Comparing with the ΔT_S obtained from T vs. t data, we note a difference of only 3%, which shows our approaches are valid in this case.

Fig. 2 shows experimental data for the compressive elastocaloric effect in (Ni50Mn31.5Ti18.5)99.8B0.2 (in bulk) 27 and the fits from the present model. The functions and parameters used in the model in this case are listed in Table S1 (in the Supplementary Information). T vs. t data and ΔT vs. r⁻¹ data were obtained at 308 K, which is above the martensite-austenite transition temperature. 27

As performed for Gd, we firstly fit the T vs. t data for the Ni-Mn-Ti alloy [Fig. 2(a)] in the interval t ≥ t_2, using Eq. (9), which requires the function ẋ(t) and the parameters Δs_tr, h, c and ρ. Since we know the c and ρ values for this material, 27,28 we have to determine ẋ(t), Δs_tr and h for the best fit. As we have three independent parameters to find, we optimized this fit in conjunction with the fit of the interval t_1 ≤ t ≤ t_2, which also requires the function ẇ(t). According to Ref. 27, Δs_tr = 76 J kg⁻¹ K⁻¹ for (Ni50Mn31.5Ti18.5)99.8B0.2. Using this value, we were not able to fit both intervals satisfactorily. For the fit shown in Fig. 2(a), Δs_tr = 46 J kg⁻¹ K⁻¹; besides, ẇ(t) = 0 [and w(t) = 0], which is consistent with the hypothesis that, in this case, the temperature change is entirely due to the structural phase transition. Considering that the Δs_tr previously reported 27 is correct, the divergence between the reported value and the theoretical one suggests two scenarios (that may coexist): (1) the specific heat of the austenite phase is significantly different from the specific heat of the strain-induced martensite phase around the structural transition, affecting the theoretical Δs_tr value [since the ratio Δs_tr/c appears in both Eqs. (7) and (9)]; (2) the strain-induced transition is not complete, so Δs_tr from the ΔT experiment is lower than Δs_tr from differential scanning calorimetry (DSC).

For t_1 ≤ t ≤ t_2 [with Δσ = 700 MPa, a strain (ε) of 0.07 and a rate (r) of 0.16 s⁻¹], the experimental ΔT is 26.9 K. In order to obtain the adiabatic temperature change (ΔT_S), we use Eq. (12), which requires the functions and parameters previously obtained in the fitting procedure. Then, we get ΔT_S = 29.9 K (11% higher than the measured ΔT), indicating that the experimental process 1→2 with the rate of 0.16 s⁻¹ is not close to the adiabatic condition. Here we see that only a very fast change in the intensive variable i is not enough to establish a quasi-adiabatic process. In this case, the time for the process 1→2 is very small (less than 1 s), but the theoretical volumetric heat transfer coefficient is enormous (h = 8.2×10⁵ W m⁻³ K⁻¹).

Strain-induced ΔT for the Ni-Mn-Ti alloy is shown in Fig. 2(b) as a function of the inverse of the strain rate. Several experimental points 27 are fitted using Eq. (6). Since we already have the h, c and ρ values, we have to find the ΔT_S and Δi values for the best fit. Here we get ΔT_S = 30.2 K. Comparing with the ΔT_S obtained from T vs. t data, we note a difference of only 1%, which shows our approaches are also valid in this case.

Fig. 3 shows experimental data for the tractive elastocaloric effect in Ni50.4Ti49.6 (20-μm-thick films) 29 and the fits from the present model. The functions and parameters used in the model in this case are listed in Table S1 (in the Supplementary Information). T vs. t data and ΔT vs. r⁻¹ data were obtained at room temperature, near the martensite-austenite transition temperature. 29
As performed for the previous materials, we firstly fit the T vs. t data for the Ni-Ti film [Fig. 3(a)] in the interval t ≥ t_2, using Eq. (9), which requires the function ẋ(t) and the parameters Δs_tr, h, c and ρ. Since we know the c and ρ values for this material, 29 we have to determine ẋ(t), Δs_tr and h for the best fit. As in the case of Ni-Mn-Ti, we have three independent parameters to find, so we optimized this fit in conjunction with the fit of the interval t_1 ≤ t ≤ t_2, which also requires the function ẇ(t). For Ni-Ti, we get ẇ(t) ≠ 0 and Δs_tr = 23 J kg⁻¹ K⁻¹. The reported latent heat for the Ni-Ti film, obtained from DSC, 29 is 20 kJ kg⁻¹, resulting in Δs_tr ≅ 74 J kg⁻¹ K⁻¹. Interestingly, it is also mentioned that the latent heat estimated from the ΔT experiment is much lower (7.2 kJ kg⁻¹), indicating that the material only undergoes part of the phase transformation during load cycling, according to Ref. 29. The latent heat of 7.2 kJ kg⁻¹ results in Δs_tr = 24 J kg⁻¹ K⁻¹ (considering 300 K as the reference temperature), which is very close to the theoretical Δs_tr value (23 J kg⁻¹ K⁻¹) used in the fitting procedure.
For t_1 ≤ t ≤ t_2 (with Δσ = 500 MPa, ε = 0.053 and r = 1.0 s⁻¹), the experimental ΔT is 16.4 K. In order to obtain the adiabatic temperature change (ΔT_S), we use Eq. (12), which requires the functions and parameters previously obtained in the fitting procedure. Then, we get ΔT_S = 17.7 K (8% higher than the measured ΔT), indicating that the experimental process 1→2 for Ni-Ti, with the rate of 1.0 s⁻¹, is closer to the adiabatic condition than that of Ni-Mn-Ti (11% higher than the measured ΔT), with the rate of 0.16 s⁻¹. The strain rate for Ni-Ti is 6.3 times larger than the rate for Ni-Mn-Ti, and the theoretical volumetric heat transfer coefficient, h, for Ni-Ti is 2.8×10⁶ W m⁻³ K⁻¹, 3.4 times larger than the theoretical h for Ni-Mn-Ti. From the definition of the volumetric heat transfer coefficient (h = h_0 A/V, where A is the heat transfer surface area and V is the material volume), this result is consistent, since Ni-Mn-Ti is a bulk sample while Ni-Ti is a 20-μm-thick film (much higher A/V ratio).
Strain-induced ΔT for Ni-Ti is shown in Fig. 3(b) as a function of the inverse of the strain rate. Five experimental points 29 are fitted using Eq. (6). Since we already have the h, c and ρ values, we have to find the ΔT_S and Δi values for the best fit. Here we get ΔT_S = 17.1 K. Comparing with the ΔT_S obtained from T vs. t data, we note a difference of only 3%, which shows our approaches are also valid in this case.
For Ni-Ti, the ΔT_S obtained from T vs. t data has higher values for much lower strain rates. We found 19.2 K and 20.1 K for the strain rates of 0.05 s⁻¹ and 0.02 s⁻¹, respectively. During the ΔT experiments, the Ni-Ti films are stretched and the A/V ratio may increase significantly, i.e., h may increase significantly. Eqs. (7) and (9), used to fit the T vs. t data, were derived under the condition of constant h. Therefore, even with reasonable fits, it is expected that these equations may yield overestimated (or underestimated) ΔT_S values when h changes significantly during the process 1→2. Interestingly, if the process 1→2 is not far from the adiabatic condition (as is the case for Ni-Ti with a strain rate of 1.0 s⁻¹), the fits and the ΔT_S obtained are highly satisfactory. It is also interesting that the ΔT vs. r⁻¹ data are satisfactorily fitted as well, even though Eq. (6) was derived using the same assumption of constant h. In this case, the free parameter Δi in Eq. (6) seems to compensate for the variation of the volumetric heat transfer coefficient.
As a conclusion, the thermodynamic model proposed in this work allows us to determine the adiabatic temperature change (ΔT_S) from non-adiabatic measurements of ΔT through two different approaches: (a) from temperature change vs. rate⁻¹ data, using Eq. (6); (b) from temperature vs. time data, using Eqs. (7) and (9). Our model fits efficiently temperature vs. time and temperature change vs. rate⁻¹ data for three different materials presenting different i-caloric effects: the magnetocaloric effect in metallic gadolinium in bulk, the compressive elastocaloric effect in (Ni50Mn31.5Ti18.5)99.8B0.2 in bulk and the tractive elastocaloric effect in films of Ni50.4Ti49.6. In all examples presented and detailed in this paper, the ΔT_S values obtained from both approaches are very close, showing that both approaches and, consequently, our model are valid.
We noticed that the ΔT_S for the Ni-Ti alloy obtained using the second approach (from T vs. t data) has higher values for much lower strain rates (not shown). This is due to the fact that, during the ΔT experiments, the Ni-Ti films are stretched and h may increase significantly, since h = h₀A/V and the A/V ratio may increase significantly. Therefore, to use this approach, it is important to keep h nearly constant during the experiments, which may be an issue when stretching films (in general) and elastomers.
Analyzing Eq. (9), it is not difficult to see that, if there is no latent heat and no phase coexistence [ΔS_t = 0 and ẋ(t) = 0] and we know two of the parameters h, c and ρ, we may find the third one by fitting the curve for the process 2→1. For materials that do not present a first-order transition, this should be valid at all temperatures and applied fields. For materials that do present first-order transitions, this should be valid for temperature and applied-field intervals outside the region of phase coexistence.
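As a concrete illustration of this use of Eq. (9), the sketch below fits a relaxation curve with the single-exponential solution of the balance ρc dT/dt = −h(T − T₀), which is what a heat-exchange equation of this type reduces to when ΔS_t = 0 and ẋ(t) = 0. All numerical values are illustrative assumptions, not the paper's data.

```python
# Hedged sketch: extracting the volumetric heat transfer coefficient h
# from a relaxation curve (process 2->1) when there is no latent heat.
import numpy as np
from scipy.optimize import curve_fit

rho, c = 6450.0, 500.0   # illustrative density (kg m^-3) and c (J kg^-1 K^-1)
T0, T1 = 300.0, 317.0    # reservoir and initial sample temperatures (K)

def relaxation(t, h):
    # solution of rho*c*dT/dt = -h*(T - T0) with T(0) = T1
    return T0 + (T1 - T0) * np.exp(-h * t / (rho * c))

t = np.linspace(0.0, 10.0, 200)   # s
rng = np.random.default_rng(0)
T_data = relaxation(t, 2.8e6) + 0.05 * rng.standard_normal(t.size)

h_fit, _ = curve_fit(relaxation, t, T_data, p0=(1.0e6,))
print(f"fitted h = {h_fit[0]:.2e} W m^-3 K^-1")
```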
In summary, the virtues of the present model indicate that it is a very useful and robust tool for obtaining correct ΔT_S values and for correlating ΔT_S with other thermodynamic quantities. Furthermore, this model is possibly valid for any i-caloric effect. This work was supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico - CNPq (Proc. 163391/2020-3).
Comparing Eq. (6) with equation 6 from Ref. 24, we notice they are equivalent, with Φ ≡ hΔF/2ρc. Here, it is important to point out three main differences between the present approach and the one reported in Ref. 24: (a) ΔT in Eq. (2) represents the temperature change associated with a first-order transition, not the adiabatic temperature change ΔT_S; (b) it is not possible to obtain Eq. (6) from Eq. (1) using the approach from Ref. 24;
FIG. 1. Results for metallic gadolinium in bulk. (a) Temperature vs. time data, where the temperature increases due to a positive magnetic field change (from 0 up to 3 T) at a rate of 0.01 T s⁻¹ (Ref. 23); (b) temperature change vs. rate⁻¹, where the experimental data were obtained at 292 K (Ref. 23). Symbols represent experimental data and lines represent the fits from the model.
FIG. 2. Results for (Ni50Mn31.5Ti18.5)99.8B0.2 in bulk. (a) Temperature vs. time data, where the temperature increases due to a positive compressive stress change of 700 MPa, resulting in a strain (ε) of 0.07, at a rate of 0.16 s⁻¹;
$\overline{T - T_1}$ is the average temperature difference between the material and the reservoir during the external field variation. Considering that the field changes at a constant rate $r$, then $\Delta t \sim 1/r$ and

$$\int_{t_1}^{t_2} \left[ T(t) - T_1 \right] dt = \overline{T - T_1}\,\Delta t, \quad \text{with } \Delta t = t_2 - t_1.$$

Moreover, $\overline{T - T_1} \cong \Delta T/2$. Since $r \equiv dF/dt$, we have $\Delta t = \Delta F/r$, where $\Delta F$ is the field change between times $t_1$ and $t_2$. Therefore,

$$\overline{T - T_1}\,\Delta t = \frac{\Delta T}{2r}\,\Delta F.$$

We can thus rewrite Eq. (2) as
FIG. 3. Results for 20-μm-thick films of Ni50.4Ti49.6. (a) Temperature vs. time data, where the temperature increases from room temperature (T0) due to a positive tractive stress change of 500 MPa, resulting in a strain of 0.053, at a rate of 1.0 s⁻¹; (b) temperature change vs. rate⁻¹, where the experimental data were obtained at room temperature (Ref. 29). Symbols represent experimental data and lines represent the fits from the model.
V. K. Pecharsky and K. A. Gschneidner, Jr., Phys. Rev. Lett. 78(23), 4494 (1997).
H. Wada and Y. Tanabe, Appl. Phys. Lett. 79(20), 3302 (2001).
B. Li, J. Du, W. J. Ren, W. J. Hu, Q. Zhang, D. Li, and Z. D. Zhang, Appl. Phys. Lett. 92, 242504 (2008).
J. Cui, Y. Wu, J. Muehlbauer, Y. Hwang, R. Radermacher, S. Fackler, M. Wuttig, and I. Takeuchi, Appl. Phys. Lett. 101, 073904 (2012).
A. M. G. Carvalho, J. C. G. Tedesco, M. J. M. Pires, M. E. Soffner, A. O. Guimarães, A. M. Mansanares, and A. A. Coelho, Appl. Phys. Lett. 102, 192410 (2013).
T. Samanta, P. Lloveras, A. U. Saleheen, D. L. Lepkowski, E. Kramer, I. Dubenko, P. W. Adams, D. P. Young, M. Barrio, J. L. Tamarit, N. Ali, and S. Stadler, Appl. Phys. Lett. 112, 021907 (2018).
J. He, Z. Wei, W. Sun, X. Lu, S. Ma, and J. Liu, Intermetallics 139, 107348 (2021).
A. S. Mischenko, Q. Zhang, J. F. Scott, R. W. Whatmore, and N. D. Mathur, Science 311, 1270 (2006).
B. Li, Y. Kawakita, S. Ohira-Kawamura, T. Sugahara, H. Wang, J. Wang, Y. Chen, S. I. Kawaguchi, S. Kawaguchi, K. Ohara, K. Li, D. Yu, R. Mole, T. Hattori, T. Kikuchi, S. Yano, Z. Zhang, W. Ren, S. Lin, O. Sakata, K. Nakajima, and Z. Zhang, Nature 567, 506 (2019).
J. Lin, P. Tong, K. Zhang, K. Tao, W. Lu, X. Wang, X. Zhang, W. Song, and Y. Sun, Nat. Commun. 13, 596 (2022).
C. A. Miliante, A. M. Christmann, R. P. Soares, J. R. Bocca, C. S. Alves, A. M. G. Carvalho, and A. R. Muniz, J. Mater. Chem. A 10, 8344 (2022).
P. J. von Ranke, B. P. Alho, R. M. Ribas, E. P. Nobrega, A. Caldas, V. S. R. de Sousa, M. V. Colaço, L. F. Marques, D. L. Rocco, and P. O. Ribeiro, Phys. Rev. B 98, 224408 (2018).
P. J. von Ranke, B. P. Alho, P. H. S. da Silva, R. M. Ribas, E. P. Nobrega, V. S. R. de Sousa, A. M. G. Carvalho, and P. O. Ribeiro, Phys. Status Solidi B, 2100108 (2021).
M. Romanini, Y. Wang, K. Gürpinar, G. Ornelas, P. Lloveras, Y. Zhang, W. Zheng, M. Barrio, A. Aznar, A. Gràcia-Condal, B. Emre, O. Atakol, C. Popescu, H. Zhang, Y. Long, L. Balicas, J. L. Tamarit, A. Planes, M. Shatruk, and L. Mañosa, Adv. Mater. 33, 2008076 (2021).
W. Imamura, E. O. Usuda, E. S. N. Lopes, and A. M. G. Carvalho, J. Mater. Sci. 57, 311 (2022).
N. M. Bom, W. Imamura, E. O. Usuda, L. S. Paixão, and A. M. G. Carvalho, ACS Macro Lett. 7, 31 (2017).
A. M. G. Carvalho, W. Imamura, E. O. Usuda, and N. M. Bom, Eur. Polym. J. 99, 212 (2018).
E. O. Usuda, W. Imamura, N. M. Bom, L. S. Paixão, and A. M. G. Carvalho, ACS Appl. Polym. Mater. 1, 1991 (2019).
W. Imamura, E. O. Usuda, L. S. Paixão, N. M. Bom, A. M. Gomes, and A. M. G. Carvalho, CJPS 38, 999 (2020).
N. M. Bom, E. O. Usuda, M. S. Gigliotti, D. J. M. Aguiar, W. Imamura, L. S. Paixão, and A. M. G. Carvalho, CJPS 38, 769 (2020).
J. R. Bocca, S. L. Favaro, C. S. Alves, A. M. G. Carvalho, J. R. Barbosa Jr., A. Santos, F. C. Colman, W. A. S. Conceição, C. Caglioni, and E. Radovanovic, Polym. Test. 100, 107251 (2021).
N. Weerasekera, K. P. K. Ajjarapu, K. Sudan, G. Sumanasekera, K. Kate, and B. Bhatia, Front. Energy Res. 10, 887006 (2022).
A. M. G. Carvalho, C. Salazar Mejía, C. A. Ponte, L. E. L. Silva, J. Kaštil, J. Kamarád, and A. M. Gomes, Appl. Phys. A 122, 246 (2016).
S. Qian, L. Yuan, J. Yu, and G. Yan, Appl. Phys. Lett. 111, 223902 (2017).
M. Griffel, R. E. Skochdopole, and F. H. Spedding, Phys. Rev. 93(4), 657 (1954).
D. Cong, W. Xiong, A. Planes, Y. Ren, L. Mañosa, P. Cao, Z. Nie, X. Sun, Z. Yang, X. Hong, and Y. Wang, Phys. Rev. Lett. 122, 255703 (2019).
A. Aznar, A. Gràcia-Condal, A. Planes, P. Lloveras, M. Barrio, J.-L. Tamarit, W. Xiong, D. Cong, C. Popescu, and L. Mañosa, Phys. Rev. Mat. 3, 044406 (2019).
H. Ossmer, F. Lambrecht, M. Gültig, C. Chluba, E. Quandt, and M. Kohl, Acta Materialia 81, 9 (2014).
| [] |
[
"CADe TOOLS FOR EARLY DETECTION OF BREAST CANCER",
"CADe TOOLS FOR EARLY DETECTION OF BREAST CANCER"
] | [
"U Bottigli \nUniversità di Sassari and Sezione INFN di Cagliari\nItaly\n",
"P G Cerello \nINFN di Torino\nItaly\n",
"P Delogu \nUniversità and Sezione\nINFN di Pisa\nItaly\n",
"M E Fantacci \nUniversità and Sezione\nINFN di Pisa\nItaly\n",
"F Fauci \nPalermo and Sezione\nUniversità di\nINFN di Catania\nItaly\n",
"G Forni \nFederico II\" and Sezione\nUniversità \"\nINFN di Napoli\nItaly\n",
"B Golosio \nUniversità di Sassari and Sezione INFN di Cagliari\nItaly\n",
"A Lauria \nFederico II\" and Sezione\nUniversità \"\nINFN di Napoli\nItaly\n",
"E Lopez \nINFN di Torino\nItaly\n",
"R Magro \nPalermo and Sezione\nUniversità di\nINFN di Catania\nItaly\n",
"G L Masala \nUniversità di Sassari and Sezione INFN di Cagliari\nItaly\n",
"P Oliva \nUniversità di Sassari and Sezione INFN di Cagliari\nItaly\n",
"R Palmiero \nFederico II\" and Sezione\nUniversità \"\nINFN di Napoli\nItaly\n",
"G Raso \nPalermo and Sezione\nUniversità di\nINFN di Catania\nItaly\n",
"A Retico \nUniversità and Sezione\nINFN di Pisa\nItaly\n",
"S Stumbo \nUniversità di Sassari and Sezione INFN di Cagliari\nItaly\n",
"S Tangaro \nUniversità di Bari and Sezione INFN di Cagliari\nItaly\n"
] | [
"Università di Sassari and Sezione INFN di Cagliari\nItaly",
"INFN di Torino\nItaly",
"Università and Sezione\nINFN di Pisa\nItaly",
"Università and Sezione\nINFN di Pisa\nItaly",
"Palermo and Sezione\nUniversità di\nINFN di Catania\nItaly",
"Federico II\" and Sezione\nUniversità \"\nINFN di Napoli\nItaly",
"Università di Sassari and Sezione INFN di Cagliari\nItaly",
"Federico II\" and Sezione\nUniversità \"\nINFN di Napoli\nItaly",
"INFN di Torino\nItaly",
"Palermo and Sezione\nUniversità di\nINFN di Catania\nItaly",
"Università di Sassari and Sezione INFN di Cagliari\nItaly",
"Università di Sassari and Sezione INFN di Cagliari\nItaly",
"Federico II\" and Sezione\nUniversità \"\nINFN di Napoli\nItaly",
"Palermo and Sezione\nUniversità di\nINFN di Catania\nItaly",
"Università and Sezione\nINFN di Pisa\nItaly",
"Università di Sassari and Sezione INFN di Cagliari\nItaly",
"Università di Bari and Sezione INFN di Cagliari\nItaly"
] | [] | A breast neoplasia is often marked by the presence of microcalcifications and massive lesions in the mammogram: hence the need for tools able to recognize such lesions at an early stage. Our collaboration among Italian physicists and radiologists has built a large distributed database of digitized mammographic images and has developed a Computer Aided Detection (CADe) system for the automatic analysis of mammographic images, installed in some Italian hospitals connected via GRID. Regarding microcalcifications, in our CADe the digital mammogram is divided into wide windows which are processed by a convolution filter; a self-organizing map then analyzes each window and produces 8 principal components, which are used as input of a neural network (FFNN) able to classify the windows against a threshold. Regarding massive lesions, we select all relevant intensity maxima and define the ROI radius. From each ROI found we extract the parameters which are used as input to a FFNN to distinguish between pathological and non-pathological ROIs. We present here a test of our CADe system, used as a second reader, and a comparison with another (commercial) CADe system. | null | [
"https://export.arxiv.org/pdf/physics/0410082v1.pdf"
] | 1,155,376 | physics/0410082 | 934a7cbca6fe8784fc82bfad0b810c02c75c9eed |
CADe TOOLS FOR EARLY DETECTION OF BREAST CANCER
U Bottigli
Università di Sassari and Sezione INFN di Cagliari
Italy
P G Cerello
INFN di Torino
Italy
P Delogu
Università and Sezione
INFN di Pisa
Italy
M E Fantacci
Università and Sezione
INFN di Pisa
Italy
F Fauci
Palermo and Sezione
Università di
INFN di Catania
Italy
G Forni
Federico II" and Sezione
Università "
INFN di Napoli
Italy
B Golosio
Università di Sassari and Sezione INFN di Cagliari
Italy
A Lauria
Federico II" and Sezione
Università "
INFN di Napoli
Italy
E Lopez
INFN di Torino
Italy
R Magro
Palermo and Sezione
Università di
INFN di Catania
Italy
G L Masala
Università di Sassari and Sezione INFN di Cagliari
Italy
P Oliva
Università di Sassari and Sezione INFN di Cagliari
Italy
R Palmiero
Federico II" and Sezione
Università "
INFN di Napoli
Italy
G Raso
Palermo and Sezione
Università di
INFN di Catania
Italy
A Retico
Università and Sezione
INFN di Pisa
Italy
S Stumbo
Università di Sassari and Sezione INFN di Cagliari
Italy
S Tangaro
Università di Bari and Sezione INFN di Cagliari
Italy
CADe TOOLS FOR EARLY DETECTION OF BREAST CANCER
A breast neoplasia is often marked by the presence of microcalcifications and massive lesions in the mammogram: hence the need for tools able to recognize such lesions at an early stage. Our collaboration, among italian physicists and radiologists, has built a large distributed database of digitized mammographic images and has developed a Computer Aided Detection (CADe) system for the automatic analysis of mammographic images and installed it in some Italian hospitals by a GRID connection. Regarding microcalcifications, in our CADe digital mammogram is divided into wide windows which are processed by a convolution filter; after a self-organizing map analyzes each window and produces 8 principal components which are used as input of a neural network (FFNN) able to classify the windows matched to a threshold. Regarding massive lesions we select all important maximum intensity position and define the ROI radius. From each ROI found we extract the parameters which are used as input in a FFNN to distinguish between pathological and non-pathological ROI. We present here a test of our CADe system, used as a second reader and a comparison with another (commercial) CADe system.
INTRODUCTION
Early diagnosis of breast cancer in asymptomatic women strongly reduces breast cancer mortality [1]. Screening programs, which consist of a mammographic examination performed on women aged 49-69, are nowadays the best way to achieve this important aim. It has been estimated that screening radiologists fail to detect up to approximately 25% of breast cancers visible on retrospective review, and that this percentage increases if minimal signs are considered [2,3]. The sensitivity (percentage of pathological images correctly classified) and specificity (percentage of non-pathological images correctly classified) of this examination increase if the images are analysed independently by two radiologists [4]. Independent double reading is therefore now strongly recommended, as it reduces the rate of false-negative examinations by 5-15% [5,6]. Recent technological progress has allowed the development of a number of Computer Aided Detection (CADe) systems [7], which can provide automated detection of pathological structures and act as a second reader to assist radiologists in diagnosis.
DESCRIPTION OF A GPCALMA STATION
A breast neoplasia is often marked by the presence of microcalcifications and massive lesions in the mammogram, so we have developed a CADe system able to detect both of these markers. Traditional mammograms can be digitized by means of a CCD linear scanner with an 85 μm pitch and 4096 grey levels. The images (18x24 cm²) are stored in 10.5 Mbyte data files.
On these images our CADe provides automated detection of both microcalcifications and massive lesions (see fig. 1). Figure 1: an example of the output of the GPCALMA CADe. The green circle marks the ROI indicated by the radiologist as suspicious for the presence of a spiculated lesion with granular microcalcifications. The CADe has correctly indicated two ROIs suspicious for the presence of pathological masses (red circles, threshold = 0.9) and two ROIs suspicious for the presence of microcalcifications (red rectangles, threshold = 0.95). The histological examination confirmed this detection, indicating the presence of an infiltrating ductal carcinoma with granular microcalcifications.
The GPCALMA (Grid Platform for Computer Assisted Library for MAmmography) database is made of about 5500 images distributed over various Italian hospitals. The different nodes will be connected using GRID technologies, allowing each node to work on the whole database.
GPCALMA CADe SOFTWARE
Opacities and Spiculated Lesions
Masses are rather large objects with very different shapes and faint contrast, slowly increasing with time. In the GPCALMA database, the mean diameter of such lesions, as indicated by our radiologists, is 2.1 cm. We have developed algorithms for the recognition of opacities in general and specifically for spiculated lesions, which present a characteristic spiked shape.
The interesting areas are selected by constructing a structure of concentric rings centred on each local intensity maximum, which grows until the mean pixel value reaches a fixed threshold, thus identifying ROIs consisting of circles of radius R. As a further step, for the search for spiculated lesions, a spiral is unrolled around each maximum. For opacities, features are extracted by calculating the average intensity, variance and skewness (an index of the asymmetry of a distribution) of the pixel value distributions in circles of radius R/3, 2R/3 and R, respectively. In the case of spiculated lesions, the number of oscillations per turn is calculated and processed by means of a Fourier transform.
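To make the feature-extraction step concrete, here is a minimal Python sketch (not the GPCALMA code) computing the nine mass-detection features — average intensity, variance and skewness in circles of radius R/3, 2R/3 and R — around a candidate maximum. The function name, the toy image and all parameter values are assumptions made for illustration.

```python
# Hedged sketch of the opacity features: mean, variance and skewness
# of the pixel values inside circles of radius R/3, 2R/3 and R
# centred on a candidate intensity maximum.
import numpy as np
from scipy.stats import skew

def roi_features(image, center, R):
    yy, xx = np.indices(image.shape)
    dist = np.hypot(yy - center[0], xx - center[1])
    feats = []
    for radius in (R / 3, 2 * R / 3, R):
        pixels = image[dist <= radius]
        feats += [pixels.mean(), pixels.var(), skew(pixels)]
    return np.array(feats)  # 9 features, used as FFNN input

rng = np.random.default_rng(0)
toy = rng.random((256, 256))          # illustrative digitized image
print(roi_features(toy, center=(128, 128), R=40))
```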
The features so extracted are used as input to a feed-forward neural network which performs the final classification. This network has an output neuron whose value (a number between 0 and 1) represents the degree of suspiciousness of the corresponding ROI.
Microcalcifications clusters
A microcalcification is a rather small (0.1 to 1.0 mm in diameter) but very bright object. Some of them, either grouped in clusters or isolated, may indicate the presence of a cancer. In the GPCALMA database, the mean diameter of microcalcification clusters, as indicated by our radiologists, is 2.3 cm.
Microcalcification cluster analysis is performed using the following approach (a minimal sketch of the windowing steps is given after this list):
• the digital mammogram is divided into overlapping windows 60x60 pixels wide;
• windows are statistically selected by comparing the local and the global maxima;
• windows are shrunk from 60x60 to 7x7 and are classified (with or without microcalcification clusters) using a FFNN with 49 input, 6 hidden and 2 output neurons;
• windows are processed by a convolution filter to suppress large structures;
• a self-organizing map (a Sanger neural network) analyzes each window and produces 8 principal components;
• the principal components are used as input of a FFNN that classifies the windows by means of a threshold (the response of the output neuron of the neural network);
• windows are sorted by this threshold;
• at most three windows, whose thresholds exceed a given value, are memorized;
• selected windows are zoomed to 180x180 pixels, i.e. 15x15 mm 2 ;
• overlapping windows are clustered.
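The sketch below (not the GPCALMA code) illustrates the first two windowing steps: tiling a digitized mammogram into overlapping 60x60-pixel windows and shrinking a candidate window to 7x7 for the first FFNN. The stride and the toy image are assumptions made for illustration.

```python
# Hedged sketch of the windowing steps: overlapping 60x60 windows,
# each block-averaged down to 7x7 (49 values) for the first FFNN.
import numpy as np

def overlapping_windows(image, size=60, stride=30):
    for y in range(0, image.shape[0] - size + 1, stride):
        for x in range(0, image.shape[1] - size + 1, stride):
            yield (y, x), image[y:y + size, x:x + size]

def shrink(window, out=7):
    # crop 60x60 to 56x56, then block-average 8x8 tiles -> 7x7
    f = window.shape[0] // out
    w = window[:f * out, :f * out]
    return w.reshape(out, f, out, f).mean(axis=(1, 3))

rng = np.random.default_rng(1)
mammogram = rng.random((600, 480))        # illustrative image
for (y, x), win in overlapping_windows(mammogram):
    small = shrink(win)                   # 7x7 = 49 FFNN inputs
    break
print(small.shape)
```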
COMPARISON WITH A COMMERCIAL CAD SYSTEM
Since the GPCALMA CADe is designed to act as a second reader, we have tested its performance in terms of the increase in radiologists' detection performance, and we have compared it to the improvement obtained with another CADe system: CADx Second Look, a commercial (FDA-approved) CADe station.
A data set made of 190 images (70 pathological, 120 without lesions) was shown to three different radiologists (A, B, C) with different degrees of experience, A being the most expert and C a beginner. They made a diagnosis on the images in three different ways: without the support of any CADe system, supported by CADx, and supported by the GPCALMA CADe.
Results are presented in terms of sensitivity, i.e. the fraction of positive cases correctly detected out of the total positive cases, and specificity, i.e. the fraction of negative cases correctly detected out of the total negative cases (a minimal sketch of the computation follows). In Tables 1 and 2 the results of the comparison of GPCALMA and Second Look, used as second readers, are shown.
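The sketch below shows the two rates as defined above, together with a binomial standard error, which is consistent with the confidence intervals quoted in the tables; the counts themselves are illustrative, not the published ones.

```python
# Hedged sketch: sensitivity and specificity with binomial standard
# errors, e.g. sqrt(p*(1-p)/N) for N cases.
import math

def rate_and_err(correct, total):
    p = correct / total
    return p, math.sqrt(p * (1 - p) / total)

sens, sens_err = rate_and_err(correct=63, total=70)    # positive cases
spec, spec_err = rate_and_err(correct=105, total=120)  # negative cases
print(f"sensitivity = {100*sens:.1f}% ({100*sens_err:.1f}%)")
print(f"specificity = {100*spec:.1f}% ({100*spec_err:.1f}%)")
```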
TABLE 1: Sensitivity (and confidence interval): Alone (C.I.) | With CADx (C.I.) | With GPCALMA (C.I.).
TABLE 2: Specificity (and confidence interval)

      Alone (C.I.)    With CADx (C.I.)    With GPCALMA (C.I.)
A     87.5% (3.0%)    84.2% (3.3%)        87.5% (3.0%)
B     91.7% (2.6%)    85.9% (3.2%)        88.4% (2.9%)
C     74.2% (4.0%)    70.8% (4.2%)        70.9% (4.1%)
[1] Lancet 2000, 355, 1822-1823.
[2] N. Karssemeijer, "A stochastic method for automated detection of microcalcifications in digital mammograms", in Information Processing in Medical Imaging, Springer-Verlag, New York, 227-238, 1991.
[3] N. Karssemeijer, "Reading screening mammograms with the help of neural networks", Nederlands Tijdschrift voor Geneeskunde, 143/45, 2232-2236, 1999.
[4] S. A. Feig and M. Yaffe, Radiologic Clinics of North America, Vol. 33, No. 6, 1205, 1995.
[5] R. E. Bird, "Professional quality assurance for mammographic programs", Radiology 177, 587-592, 1990.
[6] E. L. Thurfjell, K. A. Lernevall and A. A. S. Taube, "Benefit of independent double reading in a population based mammography screening program", Radiology 191, 241-244, 1994.
[7] C. J. Vyborny, "Can computers help radiologists read mammograms?", Radiology 191, 315-317, 1994.
| [] |
[
"Noisy Bayesian optimization for variational quantum eigensolvers Noisy Bayesian optimization for variational quantum eigensolvers",
"Noisy Bayesian optimization for variational quantum eigensolvers Noisy Bayesian optimization for variational quantum eigensolvers"
] | [
"Giovanni Iannelli [email protected] ",
"Karl Jansen [email protected] ",
"Giovanni Iannelli ",
"\nDeutsches Elektronen-Synchrotron DESY\nDepartment of Physics\nInstitut für Physik\nUniversity of Cyprus\nPlatanenallee 6, Panepistimiou Street 115738, 2109Zeuthen, Aglantzia, NicosiaGermany, Cyprus\n",
"\nDipartimento di Fisica\nHumboldt-Universität zu Berlin\nNewtonstraße 1512489BerlinGermany\n",
"\nUniversità degli Studi di Roma \"Tor Vergata\"\nVia della Ricerca Scientifica 100133RomeItaly\n"
] | [
"Deutsches Elektronen-Synchrotron DESY\nDepartment of Physics\nInstitut für Physik\nUniversity of Cyprus\nPlatanenallee 6, Panepistimiou Street 115738, 2109Zeuthen, Aglantzia, NicosiaGermany, Cyprus",
"Dipartimento di Fisica\nHumboldt-Universität zu Berlin\nNewtonstraße 1512489BerlinGermany",
"Università degli Studi di Roma \"Tor Vergata\"\nVia della Ricerca Scientifica 100133RomeItaly"
] | [
"The 38th International Symposium on Lattice Field Theory"
] | The variational quantum eigensolver (VQE) is a hybrid quantum-classical algorithm used to find the ground state of a Hamiltonian using variational methods. In the context of this Lattice symposium, the procedure can be used to study lattice gauge theories (LGTs) in the Hamiltonian formulation. Bayesian optimization (BO) based on Gaussian process regression (GPR) is a powerful algorithm for finding the global minimum of a cost function, e.g. the energy, with a very low number of iterations using data affected by statistical noise. This work proposes an implementation of GPR and BO specifically tailored to perform VQE on quantum computers already available today. | 10.22323/1.396.0251 | [
"https://arxiv.org/pdf/2112.00426v1.pdf"
] | 244,773,347 | 2112.00426 | 01a4b9a0a299f1cdc47c6534abf3d432fabaf9aa |
Noisy Bayesian optimization for variational quantum eigensolvers Noisy Bayesian optimization for variational quantum eigensolvers
LATTICE2021 26th-30th July, 2021
Giovanni Iannelli [email protected]
Karl Jansen [email protected]
Giovanni Iannelli
Deutsches Elektronen-Synchrotron DESY
Department of Physics
Institut für Physik
University of Cyprus
Platanenallee 6, Panepistimiou Street 115738, 2109Zeuthen, Aglantzia, NicosiaGermany, Cyprus
Dipartimento di Fisica
Humboldt-Universität zu Berlin
Newtonstraße 1512489BerlinGermany
Università degli Studi di Roma "Tor Vergata"
Via della Ricerca Scientifica 100133RomeItaly
Noisy Bayesian optimization for variational quantum eigensolvers Noisy Bayesian optimization for variational quantum eigensolvers
The 38th International Symposium on Lattice Field Theory
LATTICE2021, 26th-30th July 2021, Zoom/Gather@Massachusetts Institute of Technology
The variational quantum eigensolver (VQE) is a hybrid quantum-classical algorithm used to find the ground state of a Hamiltonian using variational methods. In the context of this Lattice symposium, the procedure can be used to study lattice gauge theories (LGTs) in the Hamiltonian formulation. Bayesian optimization (BO) based on Gaussian process regression (GPR) is a powerful algorithm for finding the global minimum of a cost function, e.g. the energy, with a very low number of iterations using data affected by statistical noise. This work proposes an implementation of GPR and BO specifically tailored to perform VQE on quantum computers already available today.
Introduction
Quantum algorithms have the potential to be exponentially faster than classical alternatives in many noteworthy scientific applications. Examples are quantum machine learning [1], quantum chemistry [2], and many others [3]. Unfortunately, many of these applications are not yet implementable on current noisy intermediate-scale quantum (NISQ) computers [4] and have to wait until noise sources can be suppressed below a threshold that makes quantum computers usable in practice, or until fault-tolerant quantum computers can be built [5].
However, many interesting problems of LGTs can already be studied with NISQ devices [6]. In particular, if LGTs are studied in their Hamiltonian formulation, quantum algorithms do not generally suffer from the sign problem [7,8]. An important ready-to-use algorithm is the variational quantum eigensolver (VQE) [9], which is a hybrid quantum-classical algorithm for finding the ground (and excited) states of a given Hamiltonian H using the variational principle. The quantum part of VQE deals with measuring the expectation value of the Hamiltonian, i.e. the energy, in a given multi-qubit state, while the classical part consists of searching among a family of multi-qubit states generated by a parametrized quantum circuit to find the state that minimizes the energy.
The algorithm proposed in these proceedings is a classical optimizer that aims to find a good approximation of the ground state while reducing as much as possible the number of energy measurements. The approach chosen here is known as Bayesian global optimization. Its first application dates back to the 1960s [10], while its modern implementations are based on a more recent work [11]. The backbone of this method is Gaussian process regression (GPR), an interpolation method based on Bayesian inference of Gaussian processes. It allows us to create predictive models of black-box functions using a limited amount of (noisy) data. At each optimization iteration, this model is used to determine a set of parameters presumably close to the global minimum point. This step is performed following a procedure called acquisition-function optimization.
The algorithm proposed here to optimize the energy differs from the other alternatives commonly used in VQE as it uses not only the estimated values of the energy, but also the values of their statistical errors. The motivation is to lower the number of quantum measurements at each step: the procedure is well defined even for imprecise energy measurements, as long as their errors are approximately Gaussian due to the central limit theorem. Results of this algorithm are compared to other commonly chosen alternatives using simulators of noisy devices.
Quantum expectation estimation
Given a Hamiltonian H, it first needs to be written as a polynomial of sigma matrices:
$$H = \sum_{i,\alpha} h^{i}_{\alpha}\,\sigma^{i}_{\alpha} + \sum_{i,j,\alpha,\beta} h^{ij}_{\alpha\beta}\,\sigma^{i}_{\alpha}\otimes\sigma^{j}_{\beta} + \dots \qquad (1)$$
where the h's are real parameters, Latin indices identify the qubits on which the sigma matrices act, and Greek indices label the sigma-matrix components. The quantum expectation estimation (QEE) algorithm [9] computes the expectation value of the energy E ≡ ⟨ψ|H|ψ⟩ for any input multi-qubit state |ψ⟩, with a possible quantum advantage with respect to equivalent classical approaches. Furthermore, QEE is already implementable on NISQ devices, as the computation of E can be decomposed into many short quantum programs, thereby reducing the impact of quantum noise.
However, due to the probabilistic nature of quantum measurements, we only have access to a stochastic variable that estimates E (see for example [12] for the Qiskit implementation). In order to get a precise estimate, it is possible to perform multiple independent measurements, also called shots, and use their mean as an estimator of E. Let us consider the case of measurements in the absence of quantum noise. As the number of shots goes to infinity, the central limit theorem tells us that the mean converges to a Gaussian centered at E whose variance is estimated by the standard error of the mean. Therefore, given a number ν of independent shots, calling E₁, ..., E_ν the energy measurements of each single shot, it is possible to measure the following energy estimator on a quantum computer:

$$\hat{E} \equiv \frac{1}{\nu}\sum_{i=1}^{\nu} E_i \qquad (2)$$
$$\mathrm{Var}[\hat{E}] = \frac{1}{\nu(\nu-1)}\sum_{i=1}^{\nu}\left(E_i - \hat{E}\right)^2 \qquad (3)$$
It is important to emphasize that there is a difference between the statistical noise and the quantum noise. The statistical noise is the (approximately) Gaussian deviation of Ê caused by the probabilistic nature of quantum measurements, while the quantum noise is the deviation caused by the imperfections of real quantum devices. The impact of quantum noise on QEE is to add a bias to the estimator of Eq. (2). This bias can be significantly reduced using error mitigation techniques (a comparison can, e.g., be found in Section 5.1 of [13]).
In the rest of these proceedings, to simplify the notation, the number of shots will be omitted. Since a parametrized quantum circuit with parameters θ is employed on the quantum computer, we will denote the parametrized energy by E(θ) ≡ Ê(θ), with error ΔE(θ) ≡ √Var[Ê(θ)].
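The estimator of Eqs. (2)-(3) is straightforward to compute; the sketch below does so for simulated single-shot outcomes. The ±1-eigenvalue shot model is an illustrative assumption standing in for real measurement results.

```python
# Hedged sketch: mean and standard error of the mean for nu shots,
# i.e. the estimator of Eq. (2) and its variance of Eq. (3).
import numpy as np

rng = np.random.default_rng(42)
nu = 64                                   # number of shots
shots = rng.choice([-1.0, 1.0], size=nu)  # toy single-shot energies E_i

E_hat = shots.mean()                                    # Eq. (2)
var_E = ((shots - E_hat) ** 2).sum() / (nu * (nu - 1))  # Eq. (3)
print(f"E = {E_hat:.3f} +/- {np.sqrt(var_E):.3f}")
```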
Variational quantum eigensolver
The objective Hamiltonian H first needs to be written in the form of, e.g., Eq. (1). Then, a family of qubit states |ψ(θ)⟩ is introduced, with a d-dimensional parameter set θ. This can be achieved by applying a parametrized quantum circuit U(θ) to a fixed initial multi-qubit state, usually chosen to be |0···0⟩:
$$|\psi(\theta)\rangle \equiv U(\theta)\,|0\cdots 0\rangle \qquad (4)$$
This state parametrization allows us to define a parametrized energy:
$$E(\theta) \equiv \langle\psi(\theta)|\,H\,|\psi(\theta)\rangle \qquad (5)$$
which can be evaluated using the QEE algorithm for any value of θ. The VQE then consists of approximating the ground state |ψ_min⟩ by performing the following optimization:
$$\min_{\theta} E(\theta) \ \geq\ \langle\psi_{\min}|\,H\,|\psi_{\min}\rangle = E_{\min} \qquad (6)$$
It is important to note that most of the optimizers commonly used within VQE take as input only measurements of the estimator in Eq. (2), and not of its error in Eq. (3). In some cases they rely on performing energy measurements that are precise enough to be considered (almost) exact, which means choosing the number of shots large enough that the error in Eq. (3) can be neglected. This is done, for example, in the original VQE paper [9] using the Nelder-Mead [14] optimizer, as well as in a well-performing, recently published optimizer [15] specifically tailored for VQE. On the other hand, some algorithms leverage the statistical noise of the input measurements to escape local minima, as for example the SPSA [16] optimizer. The algorithm proposed in these proceedings differs from the considered alternatives in that it uses both the estimator of Eq. (2) and its error of Eq. (3).
Bayesian optimization
In the Bayesian optimization approach one first needs an initialization step with a sequence of energy measurements obtained with circuit parameters chosen (quasi-)randomly. After this initial step, the core of the algorithm consists of two building blocks. At each iteration, it first uses GPR to create a predictive model of the parametrized energy in Eq. (5) using the information from the previous energy measurements. Then, this predictive model is used to define an acquisition function that assigns a positive score to each set of circuit parameter values. The parameters that maximize the acquisition function, i.e. those with the highest score, will then be chosen for the next energy measurement.
Gaussian process regression
A Gaussian process (GP) maps a set of d-dimensional circuit parameters θ₁, ..., θₙ to stochastic variables y₁, ..., yₙ distributed as:

$$p(y) = \det(2\pi K)^{-\frac{1}{2}} \exp\left[-\frac{1}{2}\sum_{ij} (y_i - m_i)\,(K^{-1})_{ij}\,(y_j - m_j)\right]$$

where m_i ≡ m(θ_i), K_ij ≡ k(θ_i, θ_j), m(θ) is the GP mean function and k(θ, θ′) is the GP covariance function.
Given a set of energy measurements y₁, ..., yₙ obtained with the corresponding circuit parameter values θ₁, ..., θₙ and with their respective measurement errors Δy₁, ..., Δyₙ, Gaussian process regression (GPR) [17] is a procedure that finds a GP whose mean function μ(θ) interpolates the unknown energy function E(θ). For details about GPR see, e.g., the textbook [18]. For what concerns our presentation, we report the analytical result for the mean and covariance functions of the posterior obtained with GPR:
$$\mu(\theta\,|\,D) = m(\theta) + \sum_{ij} k(\theta,\theta_i)\,(K^{-1})_{ij}\,\big(y_j - m(\theta_j)\big)$$
$$k(\theta,\theta'\,|\,D) = k(\theta,\theta') - \sum_{ij} k(\theta,\theta_i)\,(K^{-1})_{ij}\,k(\theta_j,\theta') \qquad (7)$$
Also known as kriging in geostatistics.
where K_ij ≡ k(θ_i, θ_j) + δ_ij Δy_i². The posterior mean μ(θ|D) is our surrogate model of the unknown parametrized energy, while k(θ, θ′|D) is an estimate of its Gaussian covariance. We emphasize that the measurement errors evaluated with Eq. (3) enter the evaluation of K, and this formula is exact in the case of Gaussian errors, which is asymptotically true as the number of shots grows.
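The following self-contained numpy sketch implements the noisy GPR posterior of Eq. (7), using the RBF kernel introduced below as an example. A zero prior mean is assumed, and the hyperparameters and data are illustrative; the paper instead fixes the hyperparameters with MLE-II via Ax/BoTorch/GPyTorch.

```python
# Hedged numpy sketch of the noisy GPR posterior of Eq. (7) with an
# RBF kernel; K includes the measurement errors on its diagonal.
import numpy as np

def rbf(a, b, sigma=1.0, ell=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return sigma**2 * np.exp(-d2 / (2 * ell**2))

theta = np.array([0.5, 2.0, 3.5, 5.0])    # measured parameters
y = np.array([-0.2, -1.1, -0.4, 0.6])      # measured energies
dy = np.full_like(y, 0.1)                  # errors from Eq. (3)

K = rbf(theta, theta) + np.diag(dy**2)     # K_ij = k_ij + delta_ij dy_i^2
K_inv = np.linalg.inv(K)

grid = np.linspace(0.0, 2 * np.pi, 200)
ks = rbf(grid, theta)                      # k(theta, theta_i)
mu = ks @ K_inv @ y                        # posterior mean (zero prior mean)
cov = rbf(grid, grid) - ks @ K_inv @ ks.T  # posterior covariance
print(mu[:3], np.sqrt(np.diag(cov))[:3])
```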
The choice of the prior mean m(θ) and the prior covariance function k(θ, θ′) is subjective, as usually happens in the context of Bayesian inference. Their choice has an impact on the geometry of the posterior GP of Eq. (7). The possibility of using different m(θ) and k(θ, θ′) can then be an advantage, as it can be used to impose properties that are motivated by the physics of the considered problem. For example, in the noiseless case, the parametrized energy E(θ) of Eq. (5) is C∞ when commonly chosen quantum circuits are used [15], and selecting m(θ), k(θ, θ′) ∈ C∞ will impose this property on the posterior mean of Eq. (7). A common choice is to set m(θ) to a constant:
$$m(\theta) = m_0 \qquad (8)$$
and k(θ, θ′) to the RBF kernel:
$$k_{\mathrm{RBF}}(\theta,\theta') = \sigma^2 \exp\left(-\frac{(\theta - \theta')^2}{2\ell^2}\right) \qquad (9)$$
where m₀, σ, ℓ are hyperparameters that can be fixed with maximum likelihood estimation of type II (MLE-II) [18]. In most applications, the energy E(θ) of Eq. (5) is not only C∞, but also 2π-periodic in each component θ_i, i = 1, ..., d. This property can be imposed on a GP by using the periodic kernel [19] as the covariance function:
$$k_{\mathrm{per}}(\theta,\theta') = \sigma^2 \exp\left(-\frac{2}{\ell^2}\sum_{i=1}^{d}\sin^2\frac{\theta_i - \theta_i'}{2}\right) \qquad (10)$$
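A quick numerical check of the 2π-periodicity imposed by this kernel is shown below; the sum over the d parameter components is written explicitly, and the values are illustrative.

```python
# Hedged sketch of the periodic kernel of Eq. (10): it is invariant
# under theta -> theta + 2*pi in any component.
import numpy as np

def k_periodic(t1, t2, sigma=1.0, ell=1.0):
    s2 = np.sum(np.sin((np.asarray(t1) - np.asarray(t2)) / 2.0) ** 2)
    return sigma**2 * np.exp(-2.0 * s2 / ell**2)

a = np.array([0.3, 1.7, 4.0])
b = np.array([2.1, 0.2, 5.5])
print(k_periodic(a, b), k_periodic(a + 2 * np.pi, b))  # identical values
```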
Acquisition function
The circuit parameter values for each optimization step are chosen as the maximum point of an appropriately defined acquisition function. A common choice is the expected improvement (EI) acquisition function [20]. Calling y_min ≡ min(y₁, ..., yₙ) the minimum of the previously measured energies and Ê(θ) the energy prediction given by the surrogate model, the EI is defined as:

$$\mathrm{EI}(\theta) \equiv \mathbb{E}_{\hat{E}(\theta)}\left[\max\big(0,\ y_{\min} - \hat{E}(\theta)\big)\right] \qquad (11)$$
where the expectation value E_{Ê(θ)}[...] is evaluated over all possible values of Ê(θ) drawn from the surrogate model.
A great feature of the EI is that both EI(θ) and its gradient ∇EI(θ) are available in closed form if a GP is used as the surrogate model [11]. While the EI proves to be very effective for noiseless objective functions, it is not as effective in the presence of statistical noise [21]. Since this is the case for VQE, a better choice is an extension of the EI called noisy expected improvement (NEI) [22].
The NEI is an extension of the EI that is not available in closed form. However, it is possible to evaluate it efficiently with (quasi-)Monte Carlo methods. Let ℰ₁, ..., ℰ_b be noiseless (quasi-)random energy functions sampled from the posterior of Eq. (7), and let EI(θ|ℰ₁), ..., EI(θ|ℰ_b) be EIs defined over them. Then:
$$\mathrm{NEI}(\theta) \simeq \frac{1}{b}\sum_{i=1}^{b} \mathrm{EI}(\theta\,|\,\mathcal{E}_i) \qquad (12)$$
Each summand of Eq. (12) is available in closed form, so its evaluation can be done quickly. Furthermore, its gradient is also computable analytically, which is useful for optimizing the acquisition function.
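For a Gaussian surrogate prediction with mean μ and standard deviation s, the closed-form EI is (y_min − μ)Φ(z) + sφ(z) with z = (y_min − μ)/s, where Φ and φ are the standard normal CDF and PDF. The sketch below evaluates this closed form and averages it over posterior samples in the spirit of Eq. (12); sampling only the minimum values is a simplification, and all numbers are illustrative.

```python
# Hedged sketch: closed-form EI for a Gaussian prediction, averaged
# over b posterior samples as a simple stand-in for the NEI of Eq. (12).
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sd, y_min):
    z = (y_min - mu) / sd
    return (y_min - mu) * norm.cdf(z) + sd * norm.pdf(z)

rng = np.random.default_rng(0)
mu, sd = -1.2, 0.3          # surrogate prediction at a candidate theta
b = 20                      # number of posterior samples, as in the text
y_min_samples = -1.0 + 0.1 * rng.standard_normal(b)
nei = np.mean([expected_improvement(mu, sd, ym) for ym in y_min_samples])
print(f"NEI estimate = {nei:.4f}")
```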
Outline of the algorithm
Here we describe our proposed algorithm step by step and specify details of its implementation. The Hamiltonian and the parametrized circuit of Eq. (4) are problem dependent. Quantum computing libraries have procedures for evaluating the parametrized energy of Eq. (5) for any value of the d-dimensional circuit parameters θ, using the estimator in Eq. (2) and its error of Eq. (3). Each such measurement is obtained by performing ν shots. We used the Qiskit [23] quantum computing library for our tests.
Once the routine for measuring E(θ) is defined, the Bayesian optimization procedure is entirely implemented on a classical computer. We built our test on top of the libraries Ax [24], BoTorch [25] and GPyTorch [26]. The Bayesian optimization is then performed as follows:
1. Generate N quasi-random d-dimensional points θ₁, ..., θ_N ∈ [0, 2π]^d with a Sobol sequence [27]. In our tests, we used N = 3.

2. Given θ₁, ..., θₙ, measure their corresponding energies y₁, ..., yₙ and their errors Δy₁, ..., Δyₙ as described in Eq. (2) and Eq. (3).

3. Use MLE-II to infer the prior hyperparameters of Eq. (8) and σ, ℓ of Eq. (9) or Eq. (10), depending on whether the RBF or the periodic kernel was chosen. The default settings of Ax, BoTorch and GPyTorch were used for this inference.

4. Compute the GP posterior mean and covariance of Eq. (7).

5. The current estimate of the global minimum point θ_min is chosen among θ₁, ..., θₙ as the θ_i with the minimum expected energy according to the GP model found at point 4:

$$\theta_{\min} \equiv \underset{\theta_i}{\operatorname{argmin}}\ \mu(\theta_i\,|\,D) \qquad (13)$$

and the corresponding estimation of the minimum energy E_min is:

$$E_{\min} \equiv \mu(\theta_{\min}\,|\,D) \qquad (14)$$
6. Sample b noiseless energy functions from the posterior GP found at point 4. For each of the b samples, perform a separate noiseless GPR using the same hyperparameters found at point 3, and compute the EIs needed for the NEI approximation of Eq. (12). In our tests, we used b = 20.

7. Perform a global optimization of the approximated NEI to find its maximum point θ_NEI. We performed this optimization with the default procedure of BoTorch, which is a multistart L-BFGS-B [28], where 20 restart points are selected as those with the maximum acquisition-function value out of 1000 points drawn from the Sobol sequence in [0, 2π]^d. The SciPy [29] implementation of L-BFGS-B is used for this step.
8. Add the NEI maximum found at point 7 to the parameter set, θ_{n+1} ← θ_NEI, and iterate from point 2 with n ← n + 1 until a break condition is reached, e.g. when the minimum of Eq. (13) is stable for a certain number of iterations. Alternatively, a fixed number of iterations might be chosen beforehand in order to keep the number of quantum measurements under control.
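The initialization of step 1 is readily reproduced with SciPy's quasi-Monte Carlo module, as in the sketch below; the dimension d = 6 matches the two-qubit test of the next section, while the seed is an arbitrary illustrative choice.

```python
# Hedged sketch of step 1: quasi-random circuit parameters in
# [0, 2*pi]^d drawn from a Sobol sequence.
import numpy as np
from scipy.stats import qmc

d, n_init = 6, 3
sampler = qmc.Sobol(d=d, scramble=True, seed=7)
theta_init = 2 * np.pi * sampler.random(n_init)   # shape (3, 6)
print(theta_init)
```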
Testing with IBMQ
We tested the algorithm described in section 5 on a simple two-qubit Hamiltonian of the transverse-field Ising model with the coupling set to one:
$$H = -\,\sigma_z^{1}\otimes\sigma_z^{2} - \sigma_x^{1} - \sigma_x^{2} \qquad (15)$$
The parametrization of Eq. (4) was achieved using the following quantum circuit:
|0⟩ —R(θ₁)—•—R(θ₃)—R(θ₅)—
|0⟩ —R(θ₂)—⊕—R(θ₄)—R(θ₆)—

(single-qubit rotations R(θ_k) and one CNOT; the specific rotation axes follow from the construction of Ref. [30])
This circuit was constructed using the procedure described in [30]. It does not have redundant parameters and can cover the whole Hilbert space, excluding states that are equivalent up to a global phase. For the quantum measurements, we used the Qiskit simulator with the noise model of the IBMQ Santiago quantum device. Assuming that only a fixed total number of shots is at our disposal, we tried different algorithms with different numbers of shots per measurement in order to find the setup that uses this assumed budget most efficiently. We first analyze the results obtained with two algorithms that have been proposed for this specific task: the SPSA [16], in the implementation [12] available in Qiskit, and the NFT [15]. Then we compare their performance with two implementations of BO using the RBF and the periodic kernels of Eqs. (9) and (10), respectively. Each algorithm is tested using 20, 40 and 80 measurements, obtained with 64, 32 and 16 shots respectively. Therefore, the total number of shots is 1280 in all cases.
The optimization results are then compared to the exact values of the ground state and the ground-state energy. In particular, it is possible to evaluate the state fidelity of a parametrized state with respect to the exact ground state. The fidelity is equal to the squared modulus of the scalar product of the two states, and it quantifies their proximity: its maximum value of 1 is reached when the two states are identical.
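The exact reference values are easy to obtain classically for this small system. The sketch below diagonalizes the Hamiltonian using the Pauli assignment of Eq. (15) as reconstructed above (the extraction-damaged original leaves the Pauli labels ambiguous, so this assignment is an assumption) and evaluates the fidelity of an illustrative trial state.

```python
# Hedged sketch: exact ground state of H = -kron(Z,Z)-kron(X,I)-kron(I,X)
# and the fidelity |<psi|phi>|^2 used for the comparisons in the text.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

H = -np.kron(Z, Z) - np.kron(X, I2) - np.kron(I2, X)
evals, evecs = np.linalg.eigh(H)
E_exact, psi_exact = evals[0], evecs[:, 0]

def fidelity(psi, phi):
    return abs(np.vdot(psi, phi)) ** 2

trial = np.ones(4, dtype=complex) / 2.0   # illustrative trial state
print(f"E_min = {E_exact:.4f}, fidelity = {fidelity(psi_exact, trial):.3f}")
```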
After each iteration of the tested algorithms, both the energy and the fidelity were recorded. The results shown are the average of 20 independent runs starting from different random initial conditions. Their error corresponds to the standard error computed from these 20 repetitions. (The NFT implementation is available at https://github.com/ken-nakanishi/nftopt.)
SPSA: The results for SPSA are shown in Fig. 1. The performance improves slightly when decreasing the number of shots per measurement, but none of the three setups gets close to the exact solution. (The Qiskit implementation of SPSA was used with its default settings.)

NFT: The performances of the NFT algorithm are shown in Fig. 2. In this case, the optimization quickly gets close to the solution in all three setups. The speed of convergence increases when the number of shots per measurement is reduced but, on the other hand, the stability and the precision of the solution decrease. The fidelity is slightly unstable only with 16 shots per measurement, while the energy measurements do not have good precision with 16 and 32 shots. The NFT algorithm has overall a very quick convergence rate and a low requirement of CPU resources, due to its use of an analytical formula for the target energy. However, its solutions do not have good precision with a very small number of shots per measurement, as this algorithm has no built-in method to infer the real value of the energy but has to rely on the mean value of Eq. (2). (These results were obtained setting the variable reset_interval=4 in the function made available by the authors.)
Bayesian optimizer: Results obtained with the RBF kernel are reported in Fig. 3, while Fig. 4 shows those obtained with the periodic kernel. The two implementations behave in a very similar way. In both cases the performance improves when reducing the number of shots per measurement, and the solution gets closer to the exact values without a loss of stability.

Comparison of the algorithms: In Fig. 5 we report the results of each of the considered algorithms in its best setup, which is 32 shots per measurement for NFT and 16 for the others. SPSA is outperformed by NFT and the BOs, which have similar performances in this setup. However, the BOs give a better estimate of the ground-state energy, as its value is inferred with GPR. This removes the trade-off present in NFT between precision and speed, at the cost of increased CPU time. RBF initially converges faster than the periodic kernel, but in the end the solution with the highest fidelity was found with the periodic kernel.
Conclusions and outlooks
Our conclusion is that BO using GPR and NEI is a good choice for VQE in the case of noisy measurements obtained with a small number of shots, as it can use both the energy mean value of Eq. (2) and its error of Eq. (3). It outperforms SPSA in the case considered here and has a convergence rate similar to that obtained with NFT. Moreover, BO provides a more precise estimate of the ground-state energy, although at the cost of increased CPU time, which is presently clearly not a bottleneck compared to the available QPU time. BO with the RBF kernel started converging faster than with the periodic kernel, but the most accurate solution was found with the periodic kernel.
The main weakness of BO is its extension to a high number of circuit parameters, as its standard implementation would require too much CPU time and memory. With some modifications, however, BO has been successfully used with a high number of parameters in other contexts (see for example [31-33]). At the moment we are exploring different possible ways to extend the BO of VQEs to cases in which the number of circuit parameters grows up to O(100), which would give the BO approach a promising perspective for use in VQE also on the next generation of quantum computers.
Acknowledgements
This project has received funding from the Marie Skłodowska-Curie European Joint Doctorate program STIMULATE of the European Commission under grant agreement No. 765048. G.I.'s position was funded under this program.
Figure 1: SPSA results. shots/meas indicates the number of shots used for each energy measurement. Black lines correspond to the exact solution.

Figure 2: NFT results; the same notation as in Fig. 1 is used.
Figure 3: BO results with the RBF kernel; the same notation as in Fig. 1 is used.

Figure 4: BO results with the periodic kernel; the same notation as in Fig. 1 is used.

Figure 5: Comparison of the best results from each of the previous plots.
[1] J. Biamonte, P. Wittek, N. Pancotti, P. Rebentrost, N. Wiebe and S. Lloyd, Quantum machine learning, Nature 549 (2017) 195.
[2] B.P. Lanyon, J.D. Whitfield, G.G. Gillett, M.E. Goggin, M.P. Almeida, I. Kassal et al., Towards quantum chemistry on a quantum computer, Nature Chemistry 2 (2010) 106.
[3] A. Montanaro, Quantum algorithms: an overview, npj Quantum Information 2 (2016) 1.
[4] J. Preskill, Quantum computing in the NISQ era and beyond, Quantum 2 (2018) 79.
[5] P.W. Shor, Fault-tolerant quantum computation, in Proceedings of 37th Conference on Foundations of Computer Science, pp. 56-65, IEEE, 1996.
[6] M.C. Banuls, R. Blatt, J. Catani, A. Celi, J.I. Cirac, M. Dalmonte et al., Simulating lattice gauge theories within quantum technologies, The European Physical Journal D 74 (2020) 1.
[7] A. Kan, L. Funcke, S. Kühn, L. Dellantonio, J. Zhang, J.F. Haase et al., Investigating a 3+1d topological term in the Hamiltonian formulation of lattice gauge theories for quantum and classical simulations, arXiv preprint arXiv:2105.06019 (2021).
[8] J. Zhang, R. Ferguson, S. Kühn, J.F. Haase, C. Wilson, K. Jansen et al., Simulating gauge theories with variational quantum eigensolvers in superconducting microwave cavities, arXiv preprint arXiv:2108.08248 (2021).
[9] A. Peruzzo, J. McClean, P. Shadbolt, M.-H. Yung, X.-Q. Zhou, P.J. Love et al., A variational eigenvalue solver on a photonic quantum processor, Nature Communications 5 (2014) 4213.
[10] H.J. Kushner, A new method of locating the maximum point of an arbitrary multipeak curve in the presence of noise.
[11] D.R. Jones, M. Schonlau and W.J. Welch, Efficient global optimization of expensive black-box functions, Journal of Global Optimization 13 (1998) 455.
[12] A. Kandala, A. Mezzacapo, K. Temme, M. Takita, M. Brink, J.M. Chow et al., Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets, Nature 549 (2017) 242.
[13] L. Funcke, T. Hartung, K. Jansen, S. Kühn, P. Stornati and X. Wang, Measurement error mitigation in quantum computers through classical bit-flip correction, arXiv preprint arXiv:2007.03663 (2020).
[14] J.A. Nelder and R. Mead, A simplex method for function minimization, The Computer Journal 7 (1965) 308.
[15] K.M. Nakanishi, K. Fujii and S. Todo, Sequential minimal optimization for quantum-classical hybrid algorithms, Physical Review Research 2 (2020) 043158.
[16] J.C. Spall et al., Multivariate stochastic approximation using a simultaneous perturbation gradient approximation, IEEE Transactions on Automatic Control 37 (1992) 332.
[17] G. Matheron, Traité de géostatistique appliquée, vol. 1, Editions Technip (1962).
[18] C.K. Williams and C.E. Rasmussen, Gaussian processes for machine learning, vol. 2, MIT Press, Cambridge, MA (2006).
[19] D.J. MacKay et al., Introduction to Gaussian processes, NATO ASI Series F Computer and Systems Sciences 168 (1998) 133.
[20] J. Močkus, On Bayesian methods for seeking the extremum, in Optimization Techniques IFIP Technical Conference, pp. 400-404, Springer, 1975.
[21] E. Vazquez, J. Villemonteix, M. Sidorkiewicz and E. Walter, Global optimization based on noisy evaluations: an empirical study of two statistical approaches, in Journal of Physics: Conference Series, vol. 135, p. 012100, IOP Publishing, 2008.
[22] B. Letham, B. Karrer, G. Ottoni and E. Bakshy, Constrained Bayesian optimization with noisy experiments, Bayesian Analysis 14 (2019) 495.
[23] G. Aleksandrowicz, T. Alexander, P. Barkoutsos, L. Bello, Y. Ben-Haim, D. Bucher et al., Qiskit: An open-source framework for quantum computing (2019).
[24] E. Bakshy, L. Dworkin, B. Karrer, K. Kashin, B. Letham, A. Murthy et al., AE: A domain-agnostic platform for adaptive experimentation.
[25] M. Balandat, B. Karrer, D. Jiang, S. Daulton, B. Letham, A.G. Wilson et al., BoTorch: A framework for efficient Monte-Carlo Bayesian optimization, Advances in Neural Information Processing Systems (NeurIPS) (2020).
[26] J. Gardner, G. Pleiss, K.Q. Weinberger, D. Bindel and A.G. Wilson, GPyTorch: Blackbox matrix-matrix Gaussian process inference with GPU acceleration, in Advances in Neural Information Processing Systems, pp. 7576-7586, 2018.
[27] I.M. Sobol', On the distribution of points in a cube and the approximate evaluation of integrals, Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki 7 (1967) 784.
[28] R.H. Byrd, P. Lu, J. Nocedal and C. Zhu, A limited memory algorithm for bound constrained optimization, SIAM Journal on Scientific Computing 16 (1995) 1190.
[29] P. Virtanen, R. Gommers, T.E. Oliphant, M. Haberland, T. Reddy, D. Cournapeau et al., SciPy 1.0: fundamental algorithms for scientific computing in Python, Nature Methods 17 (2020) 261.
[30] L. Funcke, T. Hartung, K. Jansen, S. Kühn and P. Stornati, Dimensional expressivity analysis of parametric quantum circuits, Quantum 5 (2021) 422.
[31] Z. Wang, F. Hutter, M. Zoghi, D. Matheson and N. de Freitas, Bayesian optimization in a billion dimensions via random embeddings, Journal of Artificial Intelligence Research 55 (2016) 361.
[32] C. Li, S. Gupta, S. Rana, V. Nguyen, S. Venkatesh and A. Shilton, High dimensional Bayesian optimization using dropout, in Proceedings of the 26th International Joint Conference on Artificial Intelligence, pp. 2096-2102, 2017.
[33] J. Kirschner, M. Mutny, N. Hiller, R. Ischebeck and A. Krause, Adaptive and safe Bayesian optimization in high dimensions via one-dimensional subspaces, in International Conference on Machine Learning, pp. 3429-3438, PMLR, 2019.
| [
"https://github.com/ken-nakanishi/nftopt."
] |
[
"Enhancement of the superconducting transition temperature of FeSe by intercalation of a molecular spacer layer",
"Enhancement of the superconducting transition temperature of FeSe by intercalation of a molecular spacer layer"
] | [
"Matthew Burrard-Lucas \nDepartment of Chemistry\nInorganic Chemistry Laboratory\nUniversity of Oxford\nSouth Parks RoadOX1 3QROxfordUK\n",
"David G Free \nDepartment of Chemistry\nInorganic Chemistry Laboratory\nUniversity of Oxford\nSouth Parks RoadOX1 3QROxfordUK\n",
"Stefan J Sedlmaier \nDepartment of Chemistry\nInorganic Chemistry Laboratory\nUniversity of Oxford\nSouth Parks RoadOX1 3QROxfordUK\n",
"Jack D Wright \nDepartment of Physics\nClarendon Laboratory\nUniversity of Oxford\nParks RoadOX1 3PUOxfordUK\n",
"Simon J Cassidy \nDepartment of Chemistry\nInorganic Chemistry Laboratory\nUniversity of Oxford\nSouth Parks RoadOX1 3QROxfordUK\n",
"Yoshiaki Hara \nDepartment of Chemistry\nInorganic Chemistry Laboratory\nUniversity of Oxford\nSouth Parks RoadOX1 3QROxfordUK\n",
"Alex J Corkett \nDepartment of Chemistry\nInorganic Chemistry Laboratory\nUniversity of Oxford\nSouth Parks RoadOX1 3QROxfordUK\n",
"Tom Lancaster \nDepartment of Physics\nDurham University\nSouth RoadDH1 3LEDurhamUK\n",
"Peter J Baker \nISIS facility\nOX11 0QXRutherford Appleton Laboratory, ChiltonOxonUK\n",
"Stephen J Blundell \nDepartment of Physics\nClarendon Laboratory\nUniversity of Oxford\nParks RoadOX1 3PUOxfordUK\n",
"Simon J Clarke \nDepartment of Chemistry\nInorganic Chemistry Laboratory\nUniversity of Oxford\nSouth Parks RoadOX1 3QROxfordUK\n"
] | [
"Department of Chemistry\nInorganic Chemistry Laboratory\nUniversity of Oxford\nSouth Parks RoadOX1 3QROxfordUK",
"Department of Chemistry\nInorganic Chemistry Laboratory\nUniversity of Oxford\nSouth Parks RoadOX1 3QROxfordUK",
"Department of Chemistry\nInorganic Chemistry Laboratory\nUniversity of Oxford\nSouth Parks RoadOX1 3QROxfordUK",
"Department of Physics\nClarendon Laboratory\nUniversity of Oxford\nParks RoadOX1 3PUOxfordUK",
"Department of Chemistry\nInorganic Chemistry Laboratory\nUniversity of Oxford\nSouth Parks RoadOX1 3QROxfordUK",
"Department of Chemistry\nInorganic Chemistry Laboratory\nUniversity of Oxford\nSouth Parks RoadOX1 3QROxfordUK",
"Department of Chemistry\nInorganic Chemistry Laboratory\nUniversity of Oxford\nSouth Parks RoadOX1 3QROxfordUK",
"Department of Physics\nDurham University\nSouth RoadDH1 3LEDurhamUK",
"ISIS facility\nOX11 0QXRutherford Appleton Laboratory, ChiltonOxonUK",
"Department of Physics\nClarendon Laboratory\nUniversity of Oxford\nParks RoadOX1 3PUOxfordUK",
"Department of Chemistry\nInorganic Chemistry Laboratory\nUniversity of Oxford\nSouth Parks RoadOX1 3QROxfordUK"
] | [] | The recent discovery of high temperature superconductivity in a layered iron arsenide 1 has led to an intensive search to optimize the superconducting properties of iron-based superconductors by changing the chemical composition of the spacer layer that is inserted between adjacent anionic iron arsenide layers 2-7 . Until now, superconductivity has only been found in compounds with a cationic spacer layer consisting of metal ions: Li + , Na + , K + , Ba 2+ or a PbO-type or perovskite-type oxide layer. Electronic doping is usually necessary to control the fine balance between antiferromagnetism and superconductivity. Superconductivity has also been reported 8 in FeSe, which contains neutral layers similar in structure to those found in the iron arsenides but without the spacer layer. Here we demonstrate the synthesis of Li x (NH 2 ) y (NH 3 ) 1-y Fe 2 Se 2 (x ~0.6 ; y ~ 0.2), with lithium ions, lithium amide and ammonia acting as the spacer layer, which exhibits superconductivity at 43(1) K, higher than in any FeSe-derived compound reported so far and four times higher at ambient pressure than the transition temperature, T c , of the parent Fe 1.01 Se. We have determined the crystal structure using neutron powder diffraction and used magnetometry and muon-spin rotation data to determine the superconducting properties. This new synthetic route opens up the possibility of further exploitation of related molecular intercalations in this and other systems in order to greatly optimize the superconducting properties in this family. | 10.1038/nmat3464 | [
"https://arxiv.org/pdf/1203.5046v2.pdf"
] | 1,040,176 | 1203.5046 | 464ec32b946d32546e61671bbb5313d6091db956 |
Enhancement of the superconducting transition temperature of FeSe by intercalation of a molecular spacer layer
Matthew Burrard-Lucas
Department of Chemistry
Inorganic Chemistry Laboratory
University of Oxford
South Parks RoadOX1 3QROxfordUK
David G Free
Department of Chemistry
Inorganic Chemistry Laboratory
University of Oxford
South Parks RoadOX1 3QROxfordUK
Stefan J Sedlmaier
Department of Chemistry
Inorganic Chemistry Laboratory
University of Oxford
South Parks RoadOX1 3QROxfordUK
Jack D Wright
Department of Physics
Clarendon Laboratory
University of Oxford
Parks RoadOX1 3PUOxfordUK
Simon J Cassidy
Department of Chemistry
Inorganic Chemistry Laboratory
University of Oxford
South Parks RoadOX1 3QROxfordUK
Yoshiaki Hara
Department of Chemistry
Inorganic Chemistry Laboratory
University of Oxford
South Parks RoadOX1 3QROxfordUK
Alex J Corkett
Department of Chemistry
Inorganic Chemistry Laboratory
University of Oxford
South Parks RoadOX1 3QROxfordUK
Tom Lancaster
Department of Physics
Durham University
South RoadDH1 3LEDurhamUK
Peter J Baker
ISIS facility
OX11 0QXRutherford Appleton Laboratory, ChiltonOxonUK
Stephen J Blundell
Department of Physics
Clarendon Laboratory
University of Oxford
Parks RoadOX1 3PUOxfordUK
Simon J Clarke
Department of Chemistry
Inorganic Chemistry Laboratory
University of Oxford
South Parks RoadOX1 3QROxfordUK
Enhancement of the superconducting transition temperature of FeSe by intercalation of a molecular spacer layer
The recent discovery of high temperature superconductivity in a layered iron arsenide 1 has led to an intensive search to optimize the superconducting properties of iron-based superconductors by changing the chemical composition of the spacer layer that is inserted between adjacent anionic iron arsenide layers 2-7 . Until now, superconductivity has only been found in compounds with a cationic spacer layer consisting of metal ions: Li + , Na + , K + , Ba 2+ or a PbO-type or perovskite-type oxide layer. Electronic doping is usually necessary to control the fine balance between antiferromagnetism and superconductivity. Superconductivity has also been reported 8 in FeSe, which contains neutral layers similar in structure to those found in the iron arsenides but without the spacer layer. Here we demonstrate the synthesis of Li x (NH 2 ) y (NH 3 ) 1-y Fe 2 Se 2 (x ~0.6 ; y ~ 0.2), with lithium ions, lithium amide and ammonia acting as the spacer layer, which exhibits superconductivity at 43(1) K, higher than in any FeSe-derived compound reported so far and four times higher at ambient pressure than the transition temperature, T c , of the parent Fe 1.01 Se. We have determined the crystal structure using neutron powder diffraction and used magnetometry and muon-spin rotation data to determine the superconducting properties. This new synthetic route opens up the possibility of further exploitation of related molecular intercalations in this and other systems in order to greatly optimize the superconducting properties in this family.
The tetragonal phase of Fe 1+δ Se (0.01 ≤ δ ≤ 0.04) with the anti-PbO-type crystal structure displays superconductivity (T c ~ 8.5 K) when δ = 0.01, and the superconductivity is destroyed by additional interstitial Fe. 9 The compound bears close structural resemblance to LiFeAs 10 (both compounds contain FeQ 4 (Q = Se, As) tetrahedra that are highly compressed in the basal plane compared with other iron-based superconductors), and both compounds differ from the canonical iron-based superconducting system in that they superconduct when as close to stoichiometric as possible (i.e. when they formally contain Fe 2+ ) and do not require chemical substitution to drive them away from the itinerant antiferromagnetic state into the superconducting regime (compare, for example, LnFeAsO, 1 BaFe 2 As 2 6 and NaFeAs 11 ). Under an applied hydrostatic pressure the T c of Fe 1.01 Se increases to 37 K at 7 GPa. 12,13

Fe 1+δ Se seems less inviting than the arsenide systems in terms of chemical flexibility. For example, as part of this work we attempted to insert Li using excess n-BuLi in hexane at room temperature, but this results in the reduction of iron to the elemental state with formation of Li 2 Se and Fe as the only crystalline products. Relatives of Fe 1+δ Se in which alkali metals separate the layers are the subject of much recent research and remain controversial. The lack of much redox flexibility in these iron selenide systems synthesised at high temperatures is underlined by the fact that the compositions are close to the "245" stoichiometry of K 0.8 Fe 1.6 Se 2 (with a highly defective variant of the ThCr 2 Si 2 structure), where Fe is again formally divalent. These systems are characterised by an Fe/vacancy-ordered 14 antiferromagnetic state which appears not to support superconductivity and differs from the antiferromagnetic state of the iron arsenide parent systems in that the magnetic structure 15 is related to the iron/vacancy ordering scheme and the ordered moment localised on Fe is around 3.4 μ B 15 as opposed to < 1 μ B in the arsenides. The crystal structures of these phases close to K 0.8 Fe 1.6 Se 2 are complex and there is reported evidence for coexistence of regions with different Fe/vacancy ratios and different ordering schemes in the same sample. 16 Phases of this type with variously reported compositions show bulk superconductivity, although this is often difficult to reproduce, 14 and it has recently been suggested from NMR studies that the superconducting samples in the Rb-Fe-Se phase field consist of intergrowths of antiferromagnetic and non-superconducting Rb 0.8 Fe 1.6 Se 2 and regions of superconducting material which are electron-doped and close in composition to Rb 0.3(1) Fe 2 Se 2 . 17 Whether such a superconducting phase can be prepared pure is the subject of ongoing research.
A question arises as to how to make compounds in which stoichiometric iron selenide layers, which evidently support superconductivity in tetragonal Fe 1+δ Se itself, may be separated by intervening layers, as is possible in the case of the formally mono-anionic FeAs layers in other iron-based superconductors. Nature provides a clue in the compound tochilinite, in which close-to-stoichiometric FeS layers (like those in the mackinawite polymorph of FeS) are separated by brucite-type Mg(OH) 2 layers to form (FeS)(Mg(OH) 2 ) y (y ~ 0.8 - 1). 18 In this letter we demonstrate that the reaction of Fe 1+δ Se with a solution of lithium in liquid ammonia (see experimental methods section) is a way to simultaneously intercalate lithium cations, amide anions and ammonia molecules between the FeSe layers to produce Li x (NH 2 ) y (NH 3 ) 1-y Fe 2 Se 2 (x ~ 0.6; y ~ 0.2). The result is a dramatic enhancement of T c , which reaches 43(1) K. These properties are in line with a recent report 19 of a series of nominal composition A x Fe 2 Se 2 (A = ammonia-soluble electropositive metal), including a lithium intercalate which we suspect also contains amide and ammonia. Here we report on the crystal structure and superconducting properties of products obtained by the intercalation reactions using Li/NH 3 and Li/ND 3 solutions. Refinement of a model for the deuterated compound against neutron powder diffraction data shows that amide ions and ammonia molecules occupy the 8-coordinate sites in the ThCr 2 Si 2 structure type and are coordinated by Li ions. There is evidence for weak N-H•••Se hydrogen bonds, as found for ammonia intercalates of TiS 2 . 20

The lithium/ammonia solutions were rapidly decolourised by Fe 1+δ Se at -78 °C. This is consistent either with the classic method for decomposing the metastable solution of solvated electrons using a "rusty nail" as a catalyst for the formation of lithium amide and hydrogen, or it indicates donation of the solvated electrons from the alkali metal ammonia solution to empty bands in the solid, with Li ions co-inserted to balance the charge in a reductive intercalation reaction. The product was a black powder with a much finer grain size than the parent Fe 1+δ Se material. The products were extremely air sensitive. X-ray powder diffraction (XRPD) showed no evidence for the starting material or other products above the 5% level, and the diffraction peaks were indexed on a body-centred tetragonal unit cell with lattice parameters a = 3.8249(2) Å and c = 16.5266(9) Å at room temperature. The basal lattice parameter a (= √2 × Fe-Fe distance) is 1.4% larger than the value of 3.7734(1) Å reported for Fe 1.01 Se. 9 The c lattice parameter of 16.5266(9) Å is almost 20% larger than the value found in K 0.8 Fe 1.6 Se 2 14 with the ThCr 2 Si 2 structure type. This suggests that more than just a metal cation has been inserted between the layers. Indeed, a preliminary structural model with Li located on the 8-coordinate site surrounded by selenide ions had unreasonably long Li-Se distances. (This structure is known in, for example, LiCu 2 P 2 , 21 but this compound accommodates lithium in a relatively small 8-coordinate site resulting from the presence of short P-P bonds.) Rietveld refinement 22 against laboratory XRPD data of the scattering from the 8-coordinate site suggested a chemically unreasonable formula NFe 2 Se 2 , suggesting that light Li and/or H atoms were also present. A control reaction in which Fe 1+δ Se was stirred in ammonia solution with no lithium present resulted in no change in the XRPD pattern.
Neutron powder diffraction (NPD) patterns were collected from samples synthesised using 0.5 moles of Li per mole of FeSe with either NH 3 or ND 3 as solvent. The XRPD patterns of these products were similar, but their NPD patterns were dramatically different (the intensities varied greatly between the two compounds because of the greatly different neutron scattering lengths for H (-3.74 fm) and D (+6.67 fm), and the H-containing material produced a characteristic incoherent background), proving that the samples contain H(D). A structural model was obtained from the deuterated sample by starting from the model suggested by the X-ray refinements, with N in the site 8-coordinate by Se, and computing Fourier difference maps to reveal the remaining nuclear scattering density. Refinements against data from the GEM diffractometer at room temperature and the HRPD diffractometer at 8 K on the same sample of deuterated material produced similar structural models at the two temperatures. The initial assumption of a formula (LiND 2 )Fe 2 Se 2 resulted in an apparently satisfactory fit to the low temperature HRPD data (which emphasises the short d-spacing data) using a model in which the D atoms were located on crystallographic positions (16m site: (x, x, z)) which refined freely to be about 1 Å from the N atom (2a site: (0, 0, 0)) and with the N-D bonds directed towards the selenide anions. In this initial model the D site had a site occupancy fixed at one quarter, resulting in one ND 2 moiety per square prismatic site. Li ions were located at sites (0, ½, 0) (4c site) between the ND 2 moieties, but with very large displacement ellipsoids suggestive of disorder in the intercalant. This model proved unsatisfactory when the ambient temperature GEM data were investigated. In particular, the 002 reflection, which accidentally has zero intensity in the deuterated material, could only be modelled accurately when further D atoms were included in the 8i sites (x, 0, 0) in the same plane as the N atoms, which freely refined to be about 1 Å from the N atom.
Further Li was located at the 2b site (0, 0, ½). Although Li makes a minor contribution to the scattering in the presence of N, D, Fe and Se, the Li site occupancy was consistently less than the formula (LiND 2 )Fe 2 Se 2 would suggest. Unconstrained refinement of the site occupancies of the two D sites and the Li sites produced a refined composition of Li 0.6(1) ND 2.8(1) Fe 2 Se 2 , which may be reformulated Li 0.6(1) (ND 2 ) 0.2(1) (ND 3 ) 0.8(1) Fe 2 Se 2 with intercalation of lithium amide and ammonia. About 20% of the Li included in the synthesis appears as a separate LiND 2 phase present in the products. Re-examination of the HRPD data collected at 8 K resulted in a significant improvement in the fit when ND 3 was accommodated in place of some LiND 2 , and the refined composition at 8 K using HRPD data was similar to that obtained from the refinement against GEM data at 298 K, with some redistribution of Li. The final model is shown in Figure 1 and the supporting information. In the model, N-D bonds of about 1 Å from the [ND 2 ] - and ND 3 species are directed towards the selenide ions with D•••Se distances of 2.75 Å, consistent with weak hydrogen bonding interactions comparable to those found in the lithium/ammonia intercalates of TiS 2 . 20 The uncertainty in our refinements is partly associated with the large displacement ellipsoids for the intercalated species, typical for similar systems, 20,23 but the refinements show that most of the N is present in ND 3 molecules and the Li : amide ratio exceeds unity, implying donation of electrons (0.2(1) per FeSe unit) to the FeSe layers, consistent with the proposed Rb 0.3(1) Fe 2 Se 2 superconducting phase suggested by NMR measurements. 17 The crystal structure obtained from the refinement against NPD data is shown in Figure 1 and the Rietveld fits are shown in Figure 2.

We have used muon-spin rotation (μSR) to measure the increase in B rms , the rms width of the magnetic field distribution, due to the development of the superconducting vortex lattice below T c . These results suggest a superconducting volume fraction in the range 40-50% for the sample which shows the smallest volume fraction in the magnetic susceptibility measurements (Figure 3).

In summary, we have identified the products of the reaction between tetragonal Fe 1+δ Se and lithium/ammonia solutions as intercalates in which lithium ions, amide ions and the molecule NH 3 occupy sites between the FeSe layers, which become well separated compared with Fe 1+δ Se itself, and in which superconductivity occurs with a T c of 43(1) K. The refined composition and the increase in Fe-Se bond lengths suggest electronic doping of the FeSe layers. The dramatic change in the electronic properties may result partially from the increase in the separation of the layers. The reducing reaction conditions would be expected to completely remove the few % of interstitial Fe ions that are found to have a very detrimental effect on the superconducting properties of Fe 1.01 Se, 9 and this change may also contribute to the observation of the enhanced superconducting properties. Measurement of the superconducting state using μSR spectroscopy shows features similar to those in Fe 1+δ Se, and the superfluid stiffness is found to follow the scaling behaviour of Uemura that is valid for many classes of unconventional superconductor.
Our results strongly imply that incorporation of salts and molecular groups into spacer layers in iron arsenides and selenides in intercalation reactions is a powerful new synthetic strategy that holds great promise for discovering new superconducting compounds.
Experimental Methods.
Synthesis. All manipulations were carried out in a Glovebox Technology Ltd argon-filled dry box with O 2 and H 2 O contents below 1 ppm, or on a Schlenk line. Tetragonal FeSe was synthesised by grinding together high purity iron powder (ALFA 99.995%) and selenium shot (ALFA 99.99%) and heating them in a sealed silica tube to 700 °C at 2 °C min -1 , holding the temperature for 24 hours, cooling at 4 °C min -1 to 400 °C and holding this temperature for 24 hours before quenching the sample in water.
To synthesise the intercalates, FeSe and an amount of Li metal (Aldrich 99%) corresponding to the stoichiometry Li 0.5 FeSe were placed in a Schlenk tube with a magnetic stirrer. This vessel and a cylinder of ammonia (BOC 99.98%) were attached to a Schlenk line equipped with a mercury manometer. The Schlenk tube and the pipework extending to the regulator (previously flushed with ammonia) attached to the ammonia cylinder were evacuated. The Schlenk tube was placed in a bath of dry ice/isopropanol (-78 °C) and, when cooled, the Schlenk line was isolated from the vacuum pump. The valves on the ammonia cylinder and regulator were then opened, allowing ammonia to condense onto the reactants. The ammonia pressure in the line was monitored using the mercury manometer and did not exceed 1 atm during the condensation process. When working on the 2 g scale, approximately 50 cm 3 of NH 3 were condensed, and the ammonia cylinder and regulator were then closed. The solution was observed to turn blue, characteristic of solvated electrons, but after about 30 minutes of stirring at -78 °C the blue colour was not evident. The Schlenk tube was allowed to warm with the slush bath and the ammonia and any evolved gases were allowed to evaporate out through the mercury manometer. A similar procedure was adopted when ND 3 was used, except that the more valuable ND 3 was recondensed into its storage vessel as it evaporated from the reaction vessel.
Structural characterisation. Laboratory X-ray powder diffraction (XRPD) measurements were made using a Philips PW1730 (Cu K α1 /K α2 radiation) or a Panalytical X'Pert PRO instrument (Cu K α1 radiation). Neutron powder diffraction measurements on both the hydrogen- and deuterium-containing compounds were made using the time-of-flight diffractometer HRPD (ambient temperature and 8 K) and the GEM diffractometer (ambient temperature) at the ISIS facility, Rutherford Appleton Laboratory, UK. On HRPD the samples were cooled in a closed cycle refrigerator. The samples were contained in 6 mm diameter thin-walled vanadium cans which were sealed with indium gaskets.
Magnetometry.
Magnetic susceptibility measurements were conducted using a Quantum Design MPMS-XL SQUID magnetometer. The powder samples, of about 10 mg in mass, were immobilized in gelatin capsules. Measurements were made in DC fields of 20-50 Oe in the temperature range 2-300 K after cooling in zero applied field (zero-field-cooled: ZFC) and in the measuring field (field-cooled: FC).
Muon-spin rotation spectroscopy.
The μSR experiments were carried out at the ISIS Pulsed Muon Facility, UK. In a μSR experiment, spin-polarized muons are implanted in the bulk of a material and the time-dependence of their polarization is monitored by recording the angular distribution of the subsequent positron decay. A sample of 0.5-1.0 g was packed into a square Ag-foil packet (2 cm square, ~25 μm foil thickness), crimped tightly closed, and then mounted on a silver backing plate inside a 4He cryostat. For the transverse field measurements, the sample was mounted on a haematite backing plate so that any muons missing the sample did not contribute to the precessing amplitude. In these experiments, muons stopped at random positions in the vortex lattice, where they precessed about the total local magnetic field B at the muon sites with frequency γ μ B/2π, where γ μ /2π = 135.5 MHz T -1 . The observed property of the experiment is the time evolution of the muon spin polarization P x (t), which is related to the distribution p(B) of fields in the sample by
$$P_x(t) = \int_0^{\infty} p(B)\,\cos(\gamma_\mu B t + \phi)\, dB,$$
where the phase φ results from the detector geometry (a numerical sketch of this relation follows the supplementary figure captions below).

Figure S1. Rietveld refinements against HRPD data and GEM data for Li 0.6(1) (ND 2 ) 0.2(1) (ND 3 ) 0.8(1) Fe 2 Se 2 at 8 K and 298 K respectively. (a) HRPD data (blue) are (top to bottom) from the 168°, 90° and 35° data banks. The D-containing materials are shown on the right three panels and the data are compared with the H-containing materials (left three panels), showing the intensity differences due to the H/D contrast. (b) GEM data (black) run from, left to right and top to bottom, the 9°, 18°, 35°, 64°, 91° and 154° data banks respectively. (2) The large thermal displacements for N, D and Li, typical in these systems, were constrained to be isotropic and equal. FeSe (tetragonal phase), FeSe (hexagonal phase), Fe and LiNH/D 2 impurities were also modelled. The average refined weight percentages of these phases for the two data sets were FeSe (hexagonal) 10.3%, LiND 2 1.8%, Fe 1.5% and FeSe (tetragonal) 1.2%.

Figure S2. Decomposition products of a sample of Li 0.6(1) (NH 2 ) 0.2(1) (NH 3 ) 0.8(1) Fe 2 Se 2 after heating at 400 °C under vacuum. The main product is tetragonal FeSe, and crystalline Li 2 Se and Fe are formed as significant further products in the molar ratio 8 : 2 : 1. The weight fractions of these phases imply a composition "Li 4 Fe 9 Se 10 ", broadly consistent with the composition expected from the synthesis. Some small reflections remain unindexed. The decrease in the amount of hexagonal FeSe is consistent with annealing at 400 °C.

Figure S3. Thermogravimetric decomposition of the μSR sample under flowing argon. The mass changes are consistent with the decomposition process suggested by the equation. The small increase in mass at the start of the measurement arises from buoyancy effects in the apparatus.

Figure S4. Raw muon data for Li 0.6(1) (NH 2 ) 0.2(1) (NH 3 ) 0.8(1) Fe 2 Se 2 in a transverse field of 10 mT at temperatures of 63 K and 5 K, showing the increase in broadening that develops in the superconducting state. The plot shows the counts in a single detector bank, corrected for the muon lifetime decay. The data analysis was carried out using the data from all 64 detectors of the MuSR spectrometer, grouped into 8 detector banks yielding signals with different phases, and these were individually fitted. From this analysis the average field and the rms width of the magnetic field distribution can be extracted.
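As a numerical illustration of the polarization relation above (our own sketch, not code from the paper; the field values below are made up), one can synthesize P_x(t) for an assumed Gaussian field distribution p(B), whose precession envelope decays as exp(-(γ_μ B_rms t)²/2):

import numpy as np

gamma_mu = 2.0 * np.pi * 135.5e6      # muon gyromagnetic ratio, rad s^-1 T^-1
B0, Brms, phi = 10e-3, 0.5e-3, 0.0    # mean field, rms width (T), phase -- hypothetical

# assumed Gaussian field distribution p(B), normalised so that int p(B) dB = 1
B = np.linspace(B0 - 5 * Brms, B0 + 5 * Brms, 2001)
dB = B[1] - B[0]
pB = np.exp(-0.5 * ((B - B0) / Brms) ** 2)
pB /= pB.sum() * dB

# P_x(t) = int p(B) cos(gamma_mu * B * t + phi) dB, evaluated by quadrature
t = np.linspace(0.0, 10e-6, 500)      # seconds
Px = (pB[None, :] * np.cos(gamma_mu * B[None, :] * t[:, None] + phi)).sum(axis=1) * dB

envelope = np.exp(-0.5 * (gamma_mu * Brms * t) ** 2)
print(float(Px[0]), float(envelope[-1]))

In spirit, the broadening visible in Figure S4 is quantified by fitting exactly this kind of damped precession signal, with B_rms as the fit parameter.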
Figure 1. The crystal structure obtained from the refinement against neutron powder diffraction data on Li 0.6(1) (ND 2 ) 0.2(1) (ND 3 ) 0.8(1) Fe 2 Se 2 at 298 K (GEM data). In the model each square prism of Se atoms contains either an [ND 2 ] - anion or an ND 3 molecule, and these are both modelled as disordered over four orientations. The sizes of the spheres representing the Li atoms are in proportion to their site occupancies.
Figure 2. Rietveld refinement against GEM data for Li 0.6(1) (ND 2 ) 0.2(1) (ND 3 ) 0.8(1) Fe 2 Se 2 at 298 K. The data are from the 2θ = 64.6° detector bank. Allowed peak positions are marked by vertical lines: from top, the main phase and the minority phases FeSe (hexagonal, present in the starting material) (10% by mass), FeSe (tetragonal) (1.2%), LiNH 2 (1.8%), and Fe (1.5%).

Compared with tetragonal Fe 1+δ Se the structure is expanded in the basal direction by 1.4%. At ambient temperature the extremely compressed FeSe 4 tetrahedra found in Fe 1+δ Se are retained, with Se-Fe-Se bond angles (ambient temperature) of 102.89(6)° (× 2) and 112.86(3)° (× 4) compared with values of 103.9° (× 2) and 112.3° (× 4) found for Fe 1+δ Se. 9 The Fe-Se bond distances are 2.3958 Å in Fe 1+δ Se and these are significantly larger in the intercalates, reaching 2.439(1) Å in Li 0.6(1) (ND 2 ) 0.2(1) (ND 3 ) 0.8(1) Fe 2 Se 2 at ambient temperature, consistent with electron doping of the system. At 8 K the Se-Fe-Se bond angles of 104.40(8)° (× 2) and 112.06(4)° (× 4) indicate a very slight decrease in the basal-direction compression of the FeSe 4 tetrahedra, and the Fe-Se bonds contract to 2.408(1) Å. The high resolution data obtained on HRPD show that, unlike in Fe 1+δ Se, 24 there is no structural distortion evident down to 8 K.

SQUID magnetometry measurements (Figure 3) made under zero-field-cooled (ZFC) and field-cooled (FC) conditions showed that the products of the reactions with lithium ammonia solutions exhibit bulk superconductivity (superconducting volume fractions of about 50%) with T c = 43(1) K. There is no discernible isotope effect and T c is similar for all the intercalates of FeSe within the experimental uncertainty. There are some variations between samples in the volume fraction, based on the size of the diamagnetic signal, and in the size of the normal state susceptibility. Measurements of magnetisation as a function of field showed the presence of a ferromagnetic impurity corresponding to about 1-2% by mass of elemental Fe, which explains the large normal state susceptibility. The Fe may originate from the Fe ions present in the interlamellar space in the Fe 1+δ Se parent material 9 and reductively extruded by the reducing solution.
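For orientation, the quoted shielding fractions follow from the ZFC volume susceptibility in the usual way; this is a generic estimate, neglecting demagnetizing corrections, and the example number is ours rather than the paper's:

$$f \approx -4\pi \chi_v = -4\pi\,\frac{M}{H\,V},$$

so, in cgs units, a measured χ_v of about -0.04 emu cm⁻³ Oe⁻¹ corresponds to f ≈ 0.5, i.e. a superconducting volume fraction of roughly 50%.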
Figure 3. Magnetic susceptibility measurements (zero-field-cooled (ZFC) and field-cooled (FC)) on the three samples of Li 0.6(1) (NH/D 2 ) 0.2(1) (NH/D 3 ) 0.8(1) Fe 2 Se 2 measured using μSR and neutron powder diffraction (NPD). Red symbols: μSR sample; blue symbols: H-containing NPD sample; green symbols: D-containing NPD sample.
Figure 4(a) shows B rms as a function of temperature measured using a transverse field of 10 mT, and the fitted curve is the result of three contributions (dashed lines) summed in quadrature: a temperature-independent normal-state contribution and two temperature-dependent contributions which account for the effect of superconductivity (similar to the two-gap behaviour observed in Fe 1+δ Se in Ref. 25). Using the proportionality between B rms and 1/λ ab ², where λ ab = 3^(1/4) λ is the in-plane penetration depth, we can extract an estimate of λ ab . This value places Li 0.6(1) (NH 2 ) 0.2(1) (NH 3 ) 0.8(1) Fe 2 Se 2 on the main scaling line in a Uemura plot 26 of T c against superfluid stiffness ρ s = c²/λ ab ² (Figure 4(b)), in common with underdoped cuprates and many other iron-based superconductors.
Figure 4: (a) Field width B rms as a function of temperature. The fitted line comprises the three dashed contributions summed in quadrature. (b) A Uemura plot of T c against superfluid stiffness ρ s = c²/λ ab ², showing Li 0.6(1) (NH 2 ) 0.2(1) (NH 3 ) 0.8(1) Fe 2 Se 2 on the main scaling line. Data for Fe 1+δ Se are shown for different pressures (Ref. 25) and the plot is adapted from Ref. 10.

These superconductors are thermally unstable: gentle heating of the samples under vacuum at below 100 °C was sufficient to decompose the intercalates. The powder diffraction pattern of the decomposition products of Li 0.6(1) (NH 2 ) 0.2(1) (NH 3 ) 0.8(1) Fe 2 Se 2 at 400 °C revealed FeSe and Li 2 Se as the dominant crystalline products (see supplementary information). Thermogravimetric analysis revealed two mass loss features below 250 °C (see supplementary information). These are consistent with the facile loss of 0.75(2) moles of NH 3 per mole of Li 0.6(1) (NH 2 ) 0.2(1) (NH 3 ) 0.8(1) Fe 2 Se 2 below 100 °C, suggesting that the intercalated ammonia is lost readily and that subsequent decomposition of the intercalated lithium amide occurs above this temperature to form gaseous species and Li 2 Se. The facile loss of intercalated ammonia occurs in a similar temperature range in the intercalates of TiS 2 . 20
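The quoted ammonia loss is consistent with simple formula-mass arithmetic (our estimate, using x ≈ 0.6, y ≈ 0.2 and standard atomic masses):

$$M\big(\mathrm{Li_{0.6}(NH_2)_{0.2}(NH_3)_{0.8}Fe_2Se_2}\big) \approx 0.6(6.94) + 14.01 + 2.8(1.01) + 2(55.85) + 2(78.97) \approx 291\ \mathrm{g\ mol^{-1}},$$

so the loss of 0.75 mol of NH 3 (about 12.8 g) per formula unit corresponds to a mass loss of roughly 12.8/291 ≈ 4.4%, a magnitude that can be compared directly against the low-temperature step in Figure S3.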
Figure S5. The average magnetic field experienced by implanted muons in Li 0.6(1) (NH 2 ) 0.2(1) (NH 3 ) 0.8(1) Fe 2 Se 2 as a function of temperature, normalized by the applied field, showing the diamagnetic shift observed in the superconducting state. These data are extracted from the same fits of the raw data that were used to construct Figure 4(a).
Figure S6. The data plotted in Figure 4(a) [main paper] can also be fitted using the expressions used in Ref. 25, developed for modelling two-gap superconductors [see also Bussmann-Holder, A., Micnas, R. and Bishop, A. R., Enhancements of the superconducting transition temperature within the two-band model, Eur. Phys. J. B 37, 345-348 (2004)]. Such an approach allows an estimate of the size of the gaps (shown in the figure) within the framework of this model. This fitting procedure leads to the same value of the penetration depth given in the main paper.
Table S2. Refined structural parameters for Li 0.6(1) (ND 2 ) 0.2(1) (ND 3 ) 0.8(1) Fe 2 Se 2 .

Parameter                    298 K (GEM)         8 K (HRPD)
Space group                  I4/mmm (no. 139)    I4/mmm (no. 139)
Z                            2                   2
a (Å)                        3.8149(2)           3.8059(1)
c (Å)                        16.480(1)           16.1795(6)
V (Å^3)                      239.84(3)           234.35(2)
c/a                          4.3200(3)           4.2512(2)
N-H/D(1) (Å)                 1.007(3)            0.998(3)
N-H/D(2) (Å)                 0.984(2)            0.993(3)
H/D(1)-N-H/D(1) (°) [2]      103.4(4)            95.7(3)
H/D(1)-N-H/D(2) (°) [2]      116.0(1)            118.3(9)
Fe-Se (Å)                    2.439(1)            2.408(1)
Se-Fe-Se (°) [2]             102.89(6)           104.40(8)
Se-Fe-Se (°) [4]             112.86(3)           112.06(4)
H/D(1)•••Se (Å)              2.752(3)            2.726(3)
Li(1)-N (Å) = a/2            1.9075(1)           1.9029(1)
Li(2)-N (Å) = a/√2           2.6975(1)           2.6912(1)
wR_p                         5.773               6.155
24. Margadonna, S., Takabayashi, Y., McDonald, M. T., Kasperkiewicz, K., Mizuguchi, Y., Takano, Y., Fitch, A. N., Suarde, E. & Prassides, K. Crystal structure of the new FeSe 1-x superconductor. Chem. Commun., 5607-5609 (2008).
25. Khasanov, R., Bendele, M., Amato, A., Conder, K., Keller, H., Klauss, H.-H., Luetkens, H. & Pomjakushina, E. Evolution of two-gap behavior of the superconductor FeSe 1-x. Phys. Rev. Lett. 104, 087004 (2010).
26. Uemura, Y. J. et al. Universal correlations between T c and n s /m* (carrier density over effective mass) in high-T c cuprate superconductors. Phys. Rev. Lett. 62, 2317-2320 (1989).
Acknowledgements. We are grateful to the ISIS facility, including the GEM Xpress service, for access to neutron and muon instruments, and we acknowledge financial support from the UK EPSRC (Grant EP/I017844/1) and STFC (Grant EP/G067481/1).

Contributions of co-authors
1. Kamihara, Y., Watanabe, T., Hirano, M. & Hosono, H. Iron-based layered superconductor La[O 1-x F x ]FeAs (x = 0.05-0.12) with T c = 26 K. J. Am. Chem. Soc. 130, 3296-3297 (2008).
2. Ren, Z. A. et al. Superconductivity at 55 K in iron-based F-doped layered quaternary compound Sm[O 1-x F x ]FeAs. Chin. Phys. Lett. 25, 2215-2216 (2008).
3. Guo, J. et al. Superconductivity in the iron selenide K x Fe 2 Se 2 (0 < x < 1.0). Phys. Rev. B 82, 180520 (2010).
4. Chen, X. H. et al. Superconductivity at 43 K in SmFeAsO 1-x F x. Nature 453, 761-762 (2008).
5. Zhao, J. et al. Structural and magnetic phase diagram of CeFeAsO 1-x F x and its relation to high-temperature superconductivity. Nature Mater. 7, 953-959 (2008).
6. Rotter, M., Tegel, M. & Johrendt, D. Superconductivity at 38 K in the iron arsenide Ba 1-x K x Fe 2 As 2. Phys. Rev. Lett. 101, 107006 (2008).
7. Pitcher, M. J., Parker, D. R., Adamson, P., Herkelrath, S. J. C., Boothroyd, A. T., Ibberson, R. M., Brunelli, M. & Clarke, S. J. Structure and superconductivity of LiFeAs. Chem. Commun. 45, 5918-5920 (2008).
8. Hsu, F. C. et al. Superconductivity in the PbO-type structure alpha-FeSe. Proc. Natl Acad. Sci. 105, 14262-14264 (2008).
9. McQueen, T. M., Huang, Q., Ksenofontov, V., Felser, C., Xu, Q., Zandbergen, H., Hor, Y. S., Allred, J., Williams, A. J., Qu, Q., Checkelsky, J., Ong, N. P. & Cava, R. J. Phys. Rev. B 79, 014522 (2009).
10. Pitcher, M. J., Lancaster, T., Wright, J. D., Franke, I., Steele, A. J., Baker, P. J., Pratt, F. L., Trevelyan-Thomas, W., Parker, D. R., Blundell, S. J. & Clarke, S. J. Compositional control of the superconducting properties of LiFeAs. J. Am. Chem. Soc. 132, 10467-10476 (2010).
11. Parker, D. R., Smith, M. J. P., Lancaster, T., Steele, A. J., Franke, I., Baker, P. J., Pratt, F. L., Pitcher, M. J., Blundell, S. J. & Clarke, S. J. Control of the competition between a magnetic phase and a superconducting phase in cobalt-doped and nickel-doped NaFeAs using electron count. Phys. Rev. Lett. 104, 057007 (2010).
12. Margadonna, S., Takabayashi, Y., Ohishi, Y., Mizuguchi, Y., Takano, Y., Kagayama, T., Nakagawa, T., Takata, M. & Prassides, K. Phys. Rev. B 80, 064506 (2009).
13. Medvedev, S., McQueen, T. M., Troyan, I. A., Palasyuk, T., Eremets, M. I., Cava, R. J., Naghavi, S., Casper, F., Ksenofontov, V., Wortmann, G. & Felser, C. Nat. Mater. 8, 630 (2009).
14. Bacsa, J., Ganin, A. Y., Takabayashi, Y., Christensen, K. E., Prassides, K., Rosseinsky, M. J. & Claridge, J. B. Cation vacancy order in the K 0.8+x Fe 1.6-y Se 2 system: five-fold cell expansion accommodates 20% tetrahedral vacancies. Chem. Sci. 2, 1054 (2011).
15. Bao, W., Huang, Q. Z., Chen, G. F., Green, M. A., Wang, D. M., He, J. B. & Qiu, Y. M. A novel large moment antiferromagnetic order in the K 0.8 Fe 1.6 Se 2 superconductor. Chin. Phys. Lett. 28, 086104 (2011).
16. Kazakov, S. M., Abakumov, A. M., Gonzalez, S., Perez-Mato, J. M., Ovchinikov, A. V., Roslova, M. V., Boltalin, A. I., Morozov, I. V., Antipov, E. V. & Van Tendeloo, G. Uniform patterns of Fe-vacancy ordering in the K x (Fe,Co) 2-y Se 2 superconductors. Chem. Mater. 23, 4311-4316 (2011).
17. Texier, Y., Deisenhofer, J., Tsurkan, V., Loidl, A., Inosov, D. S., Friemel, G. & Bobroff, J. NMR in the 245 iron-selenides Rb 0.74 Fe 1.6 Se 2 : phase separation between an antiferromagnet and a superconducting Rb 0.3 Fe 2 Se 2. arXiv:1203.1834.
18. Peng, Y., Xi, G., Zhong, C., Wang, L., Lu, J., Sun, X., Zhu, L., Han, Q., Chen, L., Shi, L., Sun, M., Li, Q., Yu, M. & Yin, M. An experimental study on the preparation of tochilinite-originated intercalation compounds comprised of Fe 1-x S host layers and various kinds of guest layers. Geochim. Cosmochim. Acta 73, 4862-4878 (2009).
19. Ying, T. P., Chen, X. L., Wang, G., Jin, S. F., Zhou, T. T., Lai, X. F. & Zhang, H. Observation of superconductivity at 30 K-46 K in A x Fe 2 Se 2 (A = Li, Na, Ba, Sr, Ca, Yb, and Eu). arXiv:1202.4340.
20. Young, V. G. Jr., McKelvy, M. J., Glaunsinger, W. S. & von Dreele, R. B. A structural investigation of deuterated ammonium titanium sulfide ((ND 4 ) + ) 0.22 (ND 3 ) 0.34 TiS 2 by time-of-flight neutron powder diffraction. Solid State Ionics 26, 47-54 (1988).
21. Han, F., Zhu, X., Mu, G., Zeng, B., Cheng, P., Shen, B. & Wen, H.-H. Absence of superconductivity in LiCu 2 P 2. J. Am. Chem. Soc. 133, 1751-1753 (2011).
22. Coelho, A. A. TOPAS Academic: General Profile and Structure Analysis Software for Powder Diffraction Data, 5; Bruker AXS: Karlsruhe, Germany (2010).
23. Jorgensen, J. D., Avdeev, M., Hinks, D. G., Burley, J. C. & Short, S. Crystal structure of the sodium cobaltate deuterate superconductor Na x CoO 2 ·4xD 2 O (x ≈ 1/3). Phys. Rev. B 68, 214517 (2003).
| [] |
[
"Weak solutions to an initial-boundary value problem for a continuum equation of motion of grain boundaries",
"Weak solutions to an initial-boundary value problem for a continuum equation of motion of grain boundaries"
] | [
"Peicheng Zhu \nDepartment of Mathematics\nShanghai University\n200444ShanghaiP.R. China\n",
"Yu Lei \nDepartment of Mathematics\nShanghai University\n200444ShanghaiP.R. China\n",
"Yang Xiang \nDepartment of Mathematics\nHong Kong University of Science and Technology\nClear water bayKowloon, Hong Kong\n"
] | [
"Department of Mathematics\nShanghai University\n200444ShanghaiP.R. China",
"Department of Mathematics\nShanghai University\n200444ShanghaiP.R. China",
"Department of Mathematics\nHong Kong University of Science and Technology\nClear water bayKowloon, Hong Kong"
] | [] | We investigate an initial-(periodic-)boundary value problem for a continuum equation, which is a model for motion of grain boundaries based on the underlying microscopic mechanisms of line defects (disconnections) and integrated the effects of a diverse range of thermodynamic driving forces. We first prove the global-in-time existence and uniqueness of weak solution to this initial-boundary value problem in the case with positive equilibrium disconnection density parameter B, and then investigate the asymptotic behavior of the solutions as B goes to zero. The main difficulties in the proof of main theorems are due to the degeneracy of B = 0, a non-local term with singularity, and a non-smooth coefficient of the highest derivative associated with the gradient of the unknown. The key ingredients in the proof are the energy method, an estimate for a singular integral of the Hilbert type, and a compactness lemma. | 10.3934/dcdsb.2022139 | [
"https://arxiv.org/pdf/2204.13325v1.pdf"
] | 248,427,135 | 2204.13325 | 6c78d4e2047aba452ed77d37a94191703abf7df0 |
Weak solutions to an initial-boundary value problem for a continuum equation of motion of grain boundaries
Peicheng Zhu
Department of Mathematics
Shanghai University
200444ShanghaiP.R. China
Yu Lei
Department of Mathematics
Shanghai University
200444ShanghaiP.R. China
Yang Xiang
Department of Mathematics
Hong Kong University of Science and Technology
Clear water bayKowloon, Hong Kong
Weak solutions to an initial-boundary value problem for a continuum equation of motion of grain boundaries
Motion of grain boundaries; Initial-boundary value problem; Global existence; Weak solutions; Disconnections
We investigate an initial-(periodic-)boundary value problem for a continuum equation, which is a model for motion of grain boundaries based on the underlying microscopic mechanisms of line defects (disconnections) and integrated the effects of a diverse range of thermodynamic driving forces. We first prove the global-in-time existence and uniqueness of weak solution to this initial-boundary value problem in the case with positive equilibrium disconnection density parameter B, and then investigate the asymptotic behavior of the solutions as B goes to zero. The main difficulties in the proof of main theorems are due to the degeneracy of B = 0, a non-local term with singularity, and a non-smooth coefficient of the highest derivative associated with the gradient of the unknown. The key ingredients in the proof are the energy method, an estimate for a singular integral of the Hilbert type, and a compactness lemma.
Introduction
A polycrystalline material can be regarded as a network of grain boundaries (GBs) on the mesoscale. This GB network has a great impact on a wide range of materials properties, such as strength, toughness, electrical conductivity, and its evolution is important for engineering materials [27]. Grain boundaries are the interfaces between differently oriented crystalline grains, which are a kind of two-dimensional defects in materials. Grain boundary migration controls many microstructural evolution processes in materials. Since GBs are interfaces between crystals, the microscopic mechanisms by which they move are intrinsically different from other classes of interfaces, such as solid-liquid interfaces and biological cell membranes.
Recent experiments and atomistic simulations have shown that the microscopic mechanism of GB migration is associated with the motion of topological line defects, i.e., disconnections [7,17,14,21,15,22,23,28]. This dependence on microscopic structures enables broad-range and deep understandings of GB migration, e.g., the stress-driven motion and the shear coupling effect [19,10], which cannot be described by the classical motion by mean curvature models (driven by capillary forces) [27].
A new continuum equation for motion of grain boundaries based on the underlying disconnection mechanisms was developed by Zhang et al. [33] in 2017. This continuum model integrates the effects of a diverse range of thermodynamic driving forces including the stress-driven motion and is able to describe the shear coupling effect during the GB motion. Generalizations of this continuum model with multiple disconnection modes and GB triple junctions have been further developed [29,30,34].
In the present article, we will study the existence of weak solutions to the initialboundary value problem of the continuum equation for GB motion developed in Ref. [33], which reads
$$h_t = -M_d\big[(\sigma_i + \tau)b + \Psi H - \gamma H h_{xx}\big]\,(|h_x| + B) \qquad (1.1)$$
for (t, x) ∈ (0, ∞) × Ω, where Ω = (a, d). The boundary and initial conditions are
$$h|_{x=a} = h|_{x=d}, \quad h_x|_{x=a} = h_x|_{x=d}, \quad (t, x) \in (0, T_e) \times \partial\Omega, \qquad (1.2)$$
$$h(0, x) = h_0(x), \quad x \in \Omega, \qquad (1.3)$$
where
$$\sigma_i(t, x) = \mathrm{P.V.} \int_{-\infty}^{\infty} \frac{K\beta h_x(t, x_1)}{x - x_1}\, dx_1, \qquad (1.4)$$
and
$$\beta = \frac{b}{H}, \quad K = \frac{\mu}{2\pi(1 - \nu)}, \quad B = \frac{2H}{a}\, e^{-F_d/(k_B T)}.$$
The unknown function h in Equation (1.1) is the height of the grain boundary from a reference line, and σ_i(x, t) is the stress due to the elastic interaction between disconnections based on their dislocation nature [13,32]. The parameters b and H, respectively, are the Burgers vector and step height of a disconnection, µ and ν, respectively, are the shear modulus and Poisson ratio, γ is the GB energy, τ is the applied stress, Ψ is the energy jump across the GB, and M_d is the mobility constant. The parameter B is associated with the equilibrium density of the disconnection pairs, where F_d is the disconnection formation energy, a is the lattice constant, k_B is the Boltzmann constant, T is the temperature, and (1/a) e^{-F_d/(k_B T)} is the equilibrium disconnection density. We will study the existence of weak solutions for both cases of B > 0 and B = 0. Note that the regime B → 0 means small equilibrium disconnection density or large slope of the grain boundary profile, and when B = 0, equation (1.1) is degenerate at those points where h_x = 0. Numerical results in Ref. [33] showed that sharp corners may develop in the GB profile in the case B = 0.
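For a feel for the size of B, the defining formula can be evaluated directly; the H/a ratio and formation energy F_d below are hypothetical values, chosen only to illustrate how strongly B is suppressed when F_d ≫ k_B T (the regime B → 0 discussed above):

import math

kB = 8.617333e-5  # Boltzmann constant, eV/K

def B_param(H_over_a, F_d, T):
    """Equilibrium disconnection-density parameter B = 2(H/a) exp(-F_d/(kB*T));
    F_d in eV, T in K."""
    return 2.0 * H_over_a * math.exp(-F_d / (kB * T))

for T in (300.0, 600.0, 1200.0):
    print(T, B_param(H_over_a=1.0, F_d=0.3, T=T))  # hypothetical H/a and F_d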
The difficulties in the proofs of the existence and uniqueness theorems come from the non-local term with a singularity, together with a non-smooth coefficient (involving |h_x|) of the highest derivative h_xx, and from the degeneracy of the equation in the case B = 0. To estimate the singular integral term, we employ a theorem in the book by Stein [26]. Regularization is performed so that the coefficient of the h_xx term is smooth and uniformly bounded from below, and then compactness lemmas are employed to obtain the results for the original equations. Note that dependence on non-smooth gradient terms in the coefficient of the highest derivative also appeared in the phase field models proposed by Alber and Zhu in [1,3] to describe the evolution of an interface driven by configurational forces, and properties of solutions have been obtained [2,4,5,35,36,37]. Models with non-smooth gradient terms have also been investigated by Acharya et al. [6] and Hilderbrand et al. [12].
Interpretation of the formula for σ i
This subsection is intended to give an explanation of the formula for σ_i. The material we consider normally occupies a bounded domain and we assume spatially periodic boundary conditions, which means that the unknown h is defined over Ω; however, in formula (1.4) the integration domain is R = (−∞, +∞), which implies that h should be defined over R. Letting L = d − a be the smallest positive period and choosing x ∈ Ω, we then arrive at
$$\sigma_i(t, x) = \mathrm{P.V.} \int_{-\infty}^{\infty} \frac{K\beta h_x(t, x_1)}{x - x_1}\, dx_1 = \sum_{k \in \mathbb{Z}} \mathrm{P.V.} \int_{a+kL}^{a+(k+1)L} \frac{K\beta h_x(t, x_1)}{x - x_1}\, dx_1 = \sum_{k \in \mathbb{Z}} \mathrm{P.V.} \int_a^d \frac{K\beta h_x(t, y)}{x - y + kL}\, dy =: \sigma_{i1} + \sigma_{i2}, \qquad (1.5)$$
where
$$\sigma_{i1} := \mathrm{P.V.} \sum_{k \in \mathbb{Z} \setminus \{0\}} \int_a^d \frac{K\beta h_x(t, y)}{x - y + kL}\, dy, \qquad (1.6)$$
$$\sigma_{i2} := \mathrm{P.V.} \int_a^d \frac{K\beta h_x(t, y)}{x - y}\, dy. \qquad (1.7)$$
Here Z denotes the set of integers. Observing that, for k ≠ 0, the function 1/(x − y + kL) is of one sign and monotonically increasing in y for any fixed x and L, by applying the second mean value theorem for integrals we conclude that there exists a number η ∈ [a, d] such that
$$\int_a^d \frac{K\beta h_x(t, y)}{x - y + kL}\, dy = \frac{1}{x - d + kL} \int_\eta^d K\beta h_x(t, y)\, dy,$$
while for the case k = 0, the integral ∫_a^d Kβh_x(t, y)/(x − y) dy is a singular integral for general h; from this one finds that for many kinds of h, the series in σ_i may diverge.
Therefore one must understand the principal value in the formula for σ_i both for the series σ_{i1} and for the singular integral σ_{i2}. More precisely, the term
$$\sigma_{i1} = \sum_{k=1}^{\infty} \int_a^d \left(\frac{K\beta h_x(t, y)}{x - y + kL} + \frac{K\beta h_x(t, y)}{x - y - kL}\right) dy, \qquad (1.8)$$
where the paired kernels satisfy
$$\frac{1}{x - y + kL} + \frac{1}{x - y - kL} = \frac{2(x - y)}{(x - y)^2 - (kL)^2} = O\big((kL)^{-2}\big) \quad \text{for } k \ge 2 \text{ and } x, y \in \Omega,$$
which implies that
$$|\sigma_{i1}| \le C\,\|h_x\|_{L^2(\Omega)} \sum_{k=1}^{\infty} \frac{1}{(kL)^2} \le C\,\|h_x\|_{L^2(\Omega)}, \qquad (1.9)$$
and
$$\sigma_{i2} = \lim_{\varepsilon \to 0} \int_{\{|x - y| > \varepsilon\} \cap \Omega} \frac{K\beta h_x(t, y)}{x - y}\, dy,$$
whose bound can be evaluated by employing Theorem A.1, provided that h_x ∈ L²(Ω).

Remark. We would like to point out another way to interpret the singular integral σ_i, with the following modification:
$$\sigma_i(t, x) = \mathrm{P.V.} \int_{-\infty}^{\infty} \frac{K\beta h_x(t, x_1)}{D(x - x_1)}\, dx_1 =: \sigma_{i1} + \sigma_{i2}, \qquad (1.10)$$
where D(x − x_1) is defined by
$$D(x - x_1) = \begin{cases} \operatorname{sgn}(x_1 - x)\,|x - x_1|^{1+\zeta}, & x_1 \notin [a, d], \\ x - x_1, & x_1 \in [a, d], \end{cases} \qquad (1.11)$$
with an arbitrarily given positive constant ζ. We can then conclude that the series σ_{i1} converges (the modified kernel decays like |x − x_1|^{−(1+ζ)}, so the sum over the periodic images is dominated by Σ_{k≥1} (kL)^{−(1+ζ)} < ∞), and the singular integral σ_{i2} may be treated as in the previous method. We will not use this method in the proofs in this paper.
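Under this periodic interpretation, the kernel acts on individual Fourier modes in closed form: for h = e^{iqx} the Hilbert-type kernel contributes −iπ sgn(q), so that σ_i = Kβπ|q| ĥ(q) mode by mode (a standard computation, sketched here by us rather than taken from the paper). A minimal numerical illustration with the assumed normalization K = β = 1:

import numpy as np

def sigma_i(h, L, K=1.0, beta=1.0):
    """Nonlocal stress of an L-periodic profile h (uniform grid) evaluated
    spectrally via the Fourier multiplier K*beta*pi*|q|."""
    q = 2.0 * np.fft.fftfreq(h.size, d=L / h.size) * np.pi * 2.0 / 2.0  # angular wavenumbers 2*pi*k/L
    q = 2.0 * np.pi * np.fft.fftfreq(h.size, d=L / h.size)
    return np.real(np.fft.ifft(K * beta * np.pi * np.abs(q) * np.fft.fft(h)))

# check against the exact answer pi*q0*sin(q0*x) for a single sine mode
L = 1.0
x = np.linspace(0.0, L, 256, endpoint=False)
q0 = 2.0 * np.pi / L
h = np.sin(q0 * x)
exact = np.pi * q0 * np.sin(q0 * x)
print(float(np.max(np.abs(sigma_i(h, L) - exact))))  # ~ 1e-13

The check at the end compares the spectral evaluation against the closed-form answer for one sine mode.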
Main results
We first perform nondimensionalization of the equation. Using M_d µ as the time unit, µ as the unit of σ_i, τ and Ψ, L_0 as the unit of the length scale of the continuum equation, and µL_0 as the unit of γ, we have the dimensionless form of the equation. Further introducing the parameters
α 1 = γH, α 2 = b, α 3 = τ b + ΨH,
where all the quantities are in dimensionless form, equation (1.1) can be written as
$$h_t - \frac{\alpha_1}{2}\big(|h_x|h_x + 2Bh_x\big)_x + (\alpha_2 \sigma_i + \alpha_3)(|h_x| + B) = 0. \qquad (1.12)$$
Here we have used the formula (|y|y) ′ = 2|y|. From now on, we will use this nondimensionalized equation with the dimensionless parameters described above.
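As a consistency check (ours; the text does not spell it out), expanding the divergence term shows that (1.12) is equivalent to (1.1):
$$\frac{\alpha_1}{2}\big(|h_x|h_x + 2Bh_x\big)_x = \frac{\alpha_1}{2}\big(2|h_x| + 2B\big)h_{xx} = \alpha_1\,(|h_x| + B)\,h_{xx},$$
so (1.12) can be rewritten as h_t = [α_1 h_{xx} − (α_2 σ_i + α_3)] (|h_x| + B), which is exactly the dimensionless form of (1.1) with α_1 = γH, α_2 = b and α_3 = τb + ΨH.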
To define weak solutions to the initial-boundary value problem (1.1) -(1.3), we denote by Ω = (a, d) a bounded open interval with constants a < d, by T e > 0 an arbitrary constant, and by Q Te the domain (0, T e ) × Ω. Define
$$(v_1, v_2)_Z = \int_Z v_1(y)\,v_2(y)\, dy \quad \text{for } Z = \Omega \text{ or } Z = Q_{T_e}.$$
Moreover, if v is a function defined on Q_{T_e}, we use v(t) to represent the mapping x ↦ v(t, x) and sometimes write v = v(t) for convenience.
Statement of the main results. Our main results are concerned with the existence and uniqueness of weak solutions to an initial-boundary value problem.
Definition 1.1 Let h_0 ∈ L¹(Ω). A function h with
$$h \in L^2(0, T_e; H^1_{\mathrm{per}}(\Omega)) \qquad (1.13)$$
is called a weak solution to problem (1.1)-(1.3) if, for all φ ∈ C_0^∞((−∞, T_e) × Ω), there holds
$$(h, \varphi_t)_{Q_{T_e}} - \frac{\alpha_1}{2}\big(|h_x|h_x + 2Bh_x, \varphi_x\big)_{Q_{T_e}} - \big((\alpha_2 \sigma_i + \alpha_3)(|h_x| + B), \varphi\big)_{Q_{T_e}} + (h_0, \varphi(0))_\Omega = 0. \qquad (1.14)$$
We then have

Theorem 1.1 Assume that γH is sufficiently greater than b, and h_0 ∈ H¹_per(Ω). Then there exists a unique weak solution h to problem (1.1)-(1.3) with B > 0, which, in addition to (1.13), satisfies
$$h \in L^\infty(0, T_e; H^1_{\mathrm{per}}(\Omega)), \quad h_x \in L^2(0, T_e; H^1_{\mathrm{per}}(\Omega)) \cap L^3(Q_{T_e}), \qquad (1.15)$$
$$h_t \in L^{4/3}(Q_{T_e}), \quad |h_x|h_x \in L^{4/3}(0, T_e; W^{1,4/3}_{\mathrm{per}}(\Omega)). \qquad (1.16)$$

We are also interested in the limit as B → 0.

Definition 1.2 Let h_0 ∈ L¹(Ω). A function h with
$$h \in L^2(0, T_e; H^1_{\mathrm{per}}(\Omega)) \qquad (1.17)$$
is called a weak solution to problem (1.1)-(1.3) with B = 0 if, for all φ ∈ C_0^∞((−∞, T_e) × Ω), there holds
$$(h, \varphi_t)_{Q_{T_e}} - \frac{\alpha_1}{2}\big(|h_x|h_x, \varphi_x\big)_{Q_{T_e}} - \big((\alpha_2 \sigma_i + \alpha_3)|h_x|, \varphi\big)_{Q_{T_e}} + (h_0, \varphi(0))_\Omega = 0. \qquad (1.18)$$

We then have

Theorem 1.2 Assume that γH is sufficiently greater than b, and h_0 ∈ H¹_per(Ω). Then there exists a weak solution h to problem (1.1)-(1.3) with B = 0, which, in addition to (1.17), satisfies
$$h \in L^\infty(0, T_e; H^1_{\mathrm{per}}(\Omega)), \quad h_x \in L^3(Q_{T_e}), \qquad (1.19)$$
$$h_t \in L^{4/3}(Q_{T_e}), \quad |h_x|h_x \in L^{4/3}(0, T_e; W^{1,4/3}_{\mathrm{per}}(\Omega)), \qquad (1.20)$$
$$(|h_x|h_x)_t \in L^1(0, T_e; H^{-2}_{\mathrm{per}}(\Omega)). \qquad (1.21)$$
Remarks. 1. In the original units, the assumption that γH is sufficiently greater than b means that γ ≫ (b/H)µL_0, where L_0 is the length scale of the continuum equation. 2. Concerning the regularity of the solution h, we obtain a more regular weak solution in the case B > 0 than in the case B = 0. This result agrees with the numerical results of Ref. [33], in which sharp corners developed in h in the case B = 0.
Notations. C and C(·) denote universal constants which may vary from line to line; C(·) depends on its argument(s). Greek letters ε, ζ denote positive numbers which are normally assumed to be small. κ is taken in (0, 1] and will be sent to zero. T_e (or t_e) denotes a positive constant related to time, the lifespan of a solution.
Let p, q be real numbers such that p, q ≥ 1. Let ℕ be the set of natural numbers, ℕ_+ = ℕ ∪ {0}, and let ℝ^d be the d-dimensional Euclidean space.
Ω denotes an open, bounded, simply connected domain in ℝ^d, for a natural number d, with smooth boundary ∂Ω. It represents the material points of a solid body. Q_t = (0, t) × Ω, and its parabolic boundary PQ_t is defined by
$$PQ_t := (\partial\Omega \times [0,t)) \cup (\Omega \times \{0\}).$$
L^p(Ω) are the Lebesgue spaces of p-integrable real functions over Ω, endowed with the norm
$$\|f\|_{L^p(\Omega)} = \left(\int_\Omega |f(x)|^p dx\right)^{1/p} \ \text{if } p<\infty; \qquad \|f\|_{L^\infty(\Omega)} = \operatorname*{ess\,sup}_{x\in\Omega}|f(x)|.$$
Throughout this article, the norm of L²(Ω) is denoted by ‖·‖, and the norm of L²(Q_{T_e}) is denoted by ‖·‖_{Q_{T_e}}.
Let Ω be an n-dimensional cuboid. Let α ∈ ℕ_0^d be a multi-index and |α| its length, where ℕ_0 = ℕ ∪ {0}; D^α f denotes the weak derivative of order |α|. Define the space
$$W^{m,p}_{per}(\Omega) = \left\{f\in L^p(\Omega)\ \middle|\ D^\alpha f \in L^p(\Omega)\ \text{for all}\ |\alpha|\le m,\ \text{and}\ \gamma_j f\big|_{\text{one face}} = (-1)^j\,\gamma_j f\big|_{\text{corresponding face}},\ j=0,1,\dots,m-1\right\}$$
endowed with the norm
$$\|f\|_{W^{m,p}_{per}} = \Big(\sum_{|\alpha|\le m}\|D^\alpha f\|^p_{L^p(\Omega)}\Big)^{1/p},$$
where γ_j are the trace operators. W^{m,p}_0(Ω) is the closure of C_0^∞(Ω) in the norm ‖·‖_{W^{m,p}(Ω)}. For p = 2, H^m_{per}(Ω) := W^{m,2}_{per}(Ω), H^{−m}_{per}(Ω) denotes the dual space of H^m_{per}(Ω), and H^m_0(Ω) := W^{m,2}_0(Ω). Let q, p ∈ ℝ with q, p ≥ 1. Then
$$L^q(0,T;L^p(\Omega)) := \left\{f\ \text{Lebesgue measurable}\ \middle|\ \|f\|_{L^q(0,t;L^p(\Omega))} := \Big(\int_0^t\Big(\int_\Omega |f|^p dx\Big)^{q/p}d\tau\Big)^{1/q} < \infty\right\},$$
and
$$L^q(0,t;W^{m,p}_{per}(\Omega)) := \left\{f \in L^q(0,t;L^p(\Omega))\ \middle|\ \int_0^t \|f(\cdot,\tau)\|^q_{W^{m,p}_{per}}d\tau < \infty\right\}.$$
See, e.g., [18]. We also need some function spaces. For non-negative integers m, n and a real number α ∈ (0, 1), we denote by C^{m+α}(Ω̄) the space of m-times differentiable functions on Ω̄ whose m-th derivatives are Hölder continuous with exponent α. The space C^{α,α/2}(Q̄_{T_e}) consists of all functions on Q̄_{T_e} which are Hölder continuous in the parabolic distance d((t,x),(s,y)) := (|t−s| + |x−y|²)^{1/2}. C^{m,n}(Q̄_{T_e}) and C^{m+α,n+α/2}(Q̄_{T_e}), respectively, are the spaces of functions whose x-derivatives up to order m and t-derivatives up to order n belong to C(Q̄_{T_e}) or to C^{α,α/2}(Q̄_{T_e}), respectively.
Organization of the rest of this article. The main results of this article are Theorem 1.1 and Theorem 1.2, and the remaining sections are devoted to their proofs.

In Section 2, we construct an approximate initial-boundary value problem and prove, by employing the Leray-Schauder fixed-point theorem, the existence of classical solutions to this problem. In Section 3 we derive a priori estimates which are uniform in a small parameter, using some results about singular integrals. In Section 4, by making use of the Aubin-Lions lemma and strong convergence properties, we prove the existence and uniqueness of weak solutions to the original initial-boundary value problem when B is a positive constant. Section 5 is devoted to the study of the asymptotic behavior as B goes to zero; to this end we establish a priori estimates that are independent of B, and consequently prove Theorem 1.2.
Existence of solutions to the modified problem
To prove Theorem 1.1, we construct the following approximate initial-boundary value problem.
$$h_t - \alpha_1\left(\int_0^{h_x}|p|_\kappa\,dp + Bh_x\right)_x + (\alpha_2\sigma_i^\kappa + \alpha_3)(|h_x|_\kappa + B) = 0 \quad\text{in } Q_{T_e}, \qquad (2.1)$$
$$h|_{x=a} = h|_{x=d},\qquad h_x|_{x=a} = h_x|_{x=d} \quad\text{on } \partial\Omega\times[0,T_e], \qquad (2.2)$$
$$h(0,x) = h_0^\kappa(x) \quad\text{in } \Omega. \qquad (2.3)$$
Here κ > 0 is a constant, and we use the notation
$$|p|_\kappa := \sqrt{|p|^2 + \kappa^2} \qquad (2.4)$$
in place of the function |p| to smooth the coefficient of the principal term in (2.1) and to guarantee that the equation is uniformly parabolic from below.
And σ_i^κ = σ_{i1}^κ + σ_{i2}^κ, where σ_{i1}^κ is obtained by replacing h in σ_{i1} by a function h^κ ∈ L²(0,T_e;H²_{per}(Ω)), and σ_{i2}^κ is defined, for this h^κ, by
$$\sigma_{i2}^\kappa = K\beta\int_{\{|y|>\kappa\}\cap\Omega}\frac{h_x^\kappa(t,x-y)}{y}\,dy. \qquad (2.5)$$
The initial data h_0^κ(x) are chosen such that h_0^κ ∈ C^∞(Ω̄) and
$$\|h_0^\kappa - h_0\|_{H^1_{per}(\Omega)} \to 0.$$
We now state the existence of classical solutions to problem (2.1)–(2.3) as follows.
Theorem 2.1 Suppose that γH is sufficiently greater than b and the initial data h_0^κ satisfy the compatibility conditions
$$h_0^\kappa(a) = h_0^\kappa(d),\quad h_{0x}^\kappa(a) = h_{0x}^\kappa(d),\quad h_{0xx}^\kappa(a) = h_{0xx}^\kappa(d),$$
and
$$h_t^\kappa(0,a) + \alpha_2\sigma_i^\kappa(0,a)(|h_{0x}^\kappa(a)|+B) = h_t^\kappa(0,d) + \alpha_2\sigma_i^\kappa(0,d)(|h_{0x}^\kappa(d)|+B).$$
Then there exists a classical solution h^κ to problem (2.1)–(2.3) such that, for some β ∈ (0,1),
$$h_{xt}^\kappa \in L^2(Q_{t_e}),\qquad \|h^\kappa\|_{C^{\beta/2,1+\beta}(\bar Q_{t_e})} \le C_\kappa.$$
We now present the strategy of the proof of Theorem 2.1. Since there is a non-local term σ_i^κ in Eq. (2.1), we shall employ the Leray-Schauder fixed-point theorem. To this end, we first modify the equation as
$$h_t - \alpha_1 h_{xx}(|h_x|_\kappa + B) + (\alpha_2\lambda\hat\sigma_i^\kappa + \alpha_3)(|h_x|_\kappa + B) = 0 \quad\text{in } Q_{T_e}, \qquad (2.6)$$
$$h|_{x=a} = h|_{x=d},\qquad h_x|_{x=a} = h_x|_{x=d} \quad\text{on }\partial\Omega\times[0,T_e], \qquad (2.7)$$
$$h(0,x) = \lambda h_0^\kappa(x) \quad\text{in }\Omega, \qquad (2.8)$$
where λ ∈ [0,1], σ̂_i^κ = σ̂_{i1}^κ + σ̂_{i2}^κ, and σ̂_{i2}^κ is defined by
$$\hat\sigma_{i2}^\kappa = K\beta\int_{\{|y|>\kappa\}\cap\Omega}\frac{\hat h_x(t,x-y)}{y}\,dy. \qquad (2.9)$$
We take 0 < α < 1 and define, for any ĥ ∈ B := C^{α/2,1+α}(Q̄_{t_e}), a mapping P_λ : [0,1] × B → B; ĥ ↦ h, where h is the solution to problem (2.6)–(2.8); the existence of solutions to this problem follows, e.g., from Theorem 4.1, p. 558 of Ladyzhenskaya et al. [18], with slight modifications.
Next we derive a priori estimates which may depend on the parameter κ. In the rest of this section we assume that the conditions of Theorem 2.1 are met and that there exists a unique solution to (2.6)–(2.8), which means that σ̂_i^κ in (2.6) is replaced by σ_i^κ.
Lemma 2.1 For any t ∈ [0,T_e] there holds
$$\|h^\kappa(t)\|^2_{H^1(\Omega)} + \int_0^t \|h^\kappa_{xx}(\tau)\|^2 d\tau \le C_\kappa. \qquad (2.10)$$
Here C_κ is a constant which may depend on κ.
This estimate is easier to obtain than the κ-independent estimates of Section 3, so we omit most details of its derivation. We also need the following estimates.
Lemma 2.2 For any t ∈ [0,T_e] there holds
$$\|h^\kappa_{xx}(t)\|^2 + \|h^\kappa_t(t)\|^2 + \int_0^t \|h^\kappa_{xt}(\tau)\|^2 d\tau \le C_\kappa. \qquad (2.11)$$
This estimate is not necessary for the proof of the existence of weak solutions, so we only give the main idea of its derivation. For the reader's convenience, we first collect some tools. We recall the Gronwall lemma.

Lemma 2.3 (Gronwall Lemma) Let y, A, B be functions such that A(t), B(t) are integrable over [0, t_e] and y(t) ≥ 0 is absolutely continuous. Then
$$y'(t) \le A(t)y(t) + B(t) \quad\text{for a.e. } t$$
implies
$$y(t) \le y(0)\,e^{\int_0^t A(\tau)d\tau} + \int_0^t B(s)\,e^{\int_s^t A(\tau)d\tau}\,ds.$$

We also recall the Aubin-Lions lemma.

Lemma 2.4 (Aubin-Lions) Let B_0 and B_2 be reflexive Banach spaces and let B_1 be a Banach space such that B_0 is compactly embedded in B_1 and B_1 is embedded in B_2. For 1 ≤ p_0, p_1 ≤ +∞, define
$$W = \left\{ f \,\middle|\, f \in L^{p_0}(0,T_e;B_0),\ \frac{df}{dt}\in L^{p_1}(0,T_e;B_2)\right\}.$$
(i) If p_0 < +∞, then the embedding of W into L^{p_0}(0,T_e;B_1) is compact.
(ii) If p_0 = +∞ and p_1 > 1, then the embedding of W into C([0,T_e];B_1) is compact.

In the case that 1 ≤ p_0 < ∞ and p_1 = 1, the lemma is also called the generalized Aubin-Lions lemma, which plays a crucial role in the investigation of the limit B → 0 in Section 5. For the proof of Lemma 2.4 we refer to, e.g., [20,24,25]. We also use the following lemma on Hölder continuity.

Lemma 2.5 Let f(t,x) be a function defined on Q̄_{t_e} such that (i) f is uniformly (with respect to x) Hölder continuous in t with exponent 0 < α ≤ 1, that is, |f(t,x) − f(s,x)| ≤ C|t−s|^α; and (ii) f_x is uniformly (with respect to t) Hölder continuous in x with exponent 0 < β ≤ 1, that is, |f_x(t,x) − f_x(t,y)| ≤ C'|y−x|^β. Then f_x is uniformly Hölder continuous in t with exponent γ = αβ/(1+β), such that
$$|f_x(t,x) - f_x(s,x)| \le C''|t-s|^\gamma, \quad \forall x\in\bar\Omega,\ 0\le s\le t\le t_e,$$
where C'' is a constant which may depend on C, C' and α, β.
We now turn back to the proof of Lemma 2.2.
Proof of Lemma 2.2. Differentiating Eq. (2.6) (with σ̂_i^κ replaced by σ_i^κ) formally with respect to t, multiplying by h_t^κ, using integration by parts, and invoking the boundary condition (2.7), we obtain
$$\frac12\frac{d}{dt}\|h_t^\kappa(t)\|^2 + \alpha_1\int_\Omega (|h_x^\kappa|_\kappa + B)\,|h_{xt}^\kappa|^2\,dx = -\int_\Omega \left[(\alpha_2\sigma_i^\kappa)_t(|h_x^\kappa|_\kappa + B)h_t^\kappa + (\alpha_2\sigma_i^\kappa + \alpha_3)\,(|p|_\kappa)'\big|_{p=h_x^\kappa}\, h_{xt}^\kappa h_t^\kappa\right] dx =: I_1 + I_2. \qquad (2.12)$$
Next we estimate I_1 and I_2. Noting the periodic boundary conditions, and using the Hölder and Young inequalities and the Sobolev embedding theorem, we get
$$|I_1| \le \frac{K\beta}{\kappa}\left(\int_\Omega\Big(\int_\Omega \alpha_2 |h^\kappa_{xt}(t,x-y)|\,dy\Big)^2 dx\right)^{1/2} \big\||h^\kappa_x|_\kappa + B\big\|_{L^\infty(\Omega)}\|h^\kappa_t\| \le C_\kappa \|h^\kappa_{xt}\|\left(\|h^\kappa_{xx}\| + 1\right)\|h^\kappa_t\| \le \varepsilon\|h^\kappa_{xt}\|^2 + C\left(\|h^\kappa_{xx}\|^2 + 1\right)\|h^\kappa_t\|^2. \qquad (2.13)$$
To evaluate I_2, we invoke the Nirenberg inequality in the form
$$\|f\|_{L^4} \le C\|f_x\|^{1/4}\|f\|^{3/4} + C'\|f\|, \qquad (2.14)$$
where f will be replaced by h_x^κ and h_t^κ. It is easy to see that |(|p|_κ)'| ≤ 1; hence, applying the Young inequality and recalling the estimates (2.10), we arrive at
$$\begin{aligned} |I_2| &\le C(\|\sigma_i^\kappa\|_{L^4}+1)\,\|h_{xt}^\kappa\|\,\|h_t^\kappa\|_{L^4} \le C(\|h_x^\kappa\|_{L^4}+1)\,\|h_{xt}^\kappa\|\,\|h_t^\kappa\|_{L^4}\\ &\le C\big(\|h_{xx}^\kappa\|^{1/4}\|h_x^\kappa\|^{3/4}+\|h_x^\kappa\|+1\big)\,\|h_{xt}^\kappa\|\,\big(\|h_{tx}^\kappa\|^{1/4}\|h_t^\kappa\|^{3/4}+\|h_t^\kappa\|\big)\\ &\le C\big(\|h_{xx}^\kappa\|^{1/4}+1\big)\,\|h_{xt}^\kappa\|\,\big(\|h_{tx}^\kappa\|^{1/4}\|h_t^\kappa\|^{3/4}+\|h_t^\kappa\|\big)\\ &\le C\|h_{tx}^\kappa\|^{5/4}\big(\|h_{xx}^\kappa\|^{1/4}+1\big)\|h_t^\kappa\|^{3/4} + C\|h_{tx}^\kappa\|\big(\|h_{xx}^\kappa\|^{1/4}+1\big)\|h_t^\kappa\|\\ &\le \varepsilon\|h_{tx}^\kappa\|^2 + C\big(\|h_{xx}^\kappa\|^{4/3}+1\big)\|h_t^\kappa\|^2 + \varepsilon\|h_{tx}^\kappa\|^2 + C\big(\|h_{xx}^\kappa\|^2+1\big)\|h_t^\kappa\|^2. \end{aligned} \qquad (2.15)$$
Combining (2.12), (2.13) and (2.15) yields
$$\frac12\frac{d}{dt}\|h_t^\kappa(t)\|^2 + \int_\Omega\left(\alpha_1|h_x^\kappa|_\kappa + (\alpha_1 B - 3\varepsilon)\right)|h_{xt}^\kappa|^2\,dx \le C\left(\|h_{xx}^\kappa\|^2 + \|h_{xx}^\kappa\|^{4/3} + 1\right)\|h_t^\kappa\|^2. \qquad (2.16)$$
To apply Lemma 2.3, we define y(t) = ‖h_t^κ(t)‖², A(t) = C(‖h_{xx}^κ‖² + ‖h_{xx}^κ‖^{4/3} + 1), which is integrable over [0, t_e] by (2.10), and B(t) = 0. Then we infer from (2.16), in which we choose ε = α_1B/6, that (2.11) holds, except for ‖h_{xx}^κ‖² ≤ C (which, however, can be obtained from equation (2.6) with the help of the other estimates in (2.11)). Thus the proof of Lemma 2.2 is complete.

In what follows we derive more regularity from (2.10) and (2.11). To this end, we recall Lemma 2.4 and let f = h_x^κ,
$$p_0 = \infty,\quad p_1 = 2,\quad B_0 = H^1_{per}(\Omega) \subset\subset B := C^\alpha_{per}(\bar\Omega),\quad B_1 = L^2(\Omega);$$
it then follows from (2.10) and (2.11) that
$$h_x^\kappa \in L^\infty(0,t_e;B_0), \qquad \frac{\partial h_x^\kappa}{\partial t} = h_{tx}^\kappa \in L^2(0,t_e;B_1),$$
and we arrive at
$$h_x^\kappa \in C([0,t_e];B) = C([0,t_e];C^\alpha_{per}(\bar\Omega)). \qquad (2.17)$$
Invoking the Sobolev embedding theorem, we also have
$$|h^\kappa(t,x) - h^\kappa(s,x)| = \left|\int_s^t h_t^\kappa(\tau,x)\,d\tau\right| \le \int_s^t \|h_t^\kappa(\tau)\|_{L^\infty(\Omega)}d\tau \le \int_s^t \|h_t^\kappa(\tau)\|_{H^1_{per}(\Omega)}d\tau \le \left(\int_s^t \|h_t^\kappa(\tau)\|^2_{H^1_{per}(\Omega)}d\tau\right)^{1/2}\left(\int_s^t d\tau\right)^{1/2} \le C|t-s|^{1/2}. \qquad (2.18)$$
Completion of the proof of Theorem 2.1. Using (2.18) and (2.17), we may apply Lemma 2.5 to conclude that there exists a constant 0 < α < 1 such that ‖h_x^κ‖_{C^{α/2,α}} ≤ C_κ. By the Schauder-type a priori estimates for parabolic equations, we thus obtain ‖h^κ‖_{C^{1+α/2,2+α}(Q̄_{t_e})} ≤ C_κ.
Invoking that C^{1+α/2,2+α}(Q̄_{t_e}) ⊂⊂ C^{α/2,1+α}(Q̄_{t_e}), we see that the conditions of the Leray-Schauder fixed-point theorem are satisfied. By definition it is easy to see that P_0 h ≡ 0. Thus we are in a position to apply the Leray-Schauder fixed-point theorem (see, e.g., [11]) and assert that P_1 has a fixed point, i.e., P_1 h ≡ h, which implies that a classical solution exists globally. Hence the proof of Theorem 2.1 is complete.
A priori estimates
In this section we derive a priori estimates for solutions to the modified problem (2.1)–(2.3) which are uniform with respect to κ. Since we shall take the limit of the approximate solutions as κ → 0, in what follows we may assume that
$$0 < \kappa \le 1, \quad \gamma H \text{ is sufficiently greater than } b. \qquad (3.1)$$
In this section, the letter C stands for various positive constants independent of κ, but may depend on B.
Lemma 3.1 For any t ∈ [0,T_e] there hold
$$\|h^\kappa(t)\|^2 + \int_0^t\int_\Omega\left(\int_0^{h_x^\kappa}|y|_\kappa\,dy + Bh_x^\kappa\right)h_x^\kappa\,dx\,d\tau \le C, \qquad (3.2)$$
$$\int_0^t \|h_x^\kappa(\tau)\|^3_{L^3(\Omega)}\,d\tau \le C. \qquad (3.3)$$
Proof. Multiplying Eq. (2.1) by h^κ, making use of integration by parts, and invoking the boundary condition (2.2), we arrive at
$$\begin{aligned} \frac12\frac{d}{dt}\|h^\kappa(t)\|^2 + \alpha_1\int_\Omega\left(\int_0^{h_x^\kappa}|y|_\kappa\,dy + Bh_x^\kappa\right)h_x^\kappa\,dx &= -\int_\Omega (\alpha_2\sigma_i^\kappa+\alpha_3)(|h_x^\kappa|_\kappa+B)\,h^\kappa\,dx\\ &= -\int_\Omega \frac{\partial}{\partial x}\int_a^x (\alpha_2\sigma_i^\kappa+\alpha_3)(|h_x^\kappa|_\kappa+B)\,dy\ h^\kappa\,dx\\ &= \int_\Omega\int_a^x (\alpha_2\sigma_i^\kappa+\alpha_3)(|h_x^\kappa|_\kappa+B)\,dy\ h_x^\kappa\,dx =: I. \end{aligned} \qquad (3.4)$$
For a term on the left-hand side of (3.4) we estimate
$$\int_\Omega\left(\int_0^{h_x^\kappa}|y|_\kappa\,dy\right)h_x^\kappa\,dx \ge \int_\Omega\left(\int_0^{h_x^\kappa}|y|\,dy\right)h_x^\kappa\,dx = \frac12\int_\Omega |h_x^\kappa|^3\,dx. \qquad (3.5)$$
Note that the above inequality is obvious for h_x^κ ≥ 0; otherwise one may replace h_x^κ by −|h_x^κ| and obtain the same inequality. Applying the Young and Hölder inequalities and Theorem A.1, we obtain
$$|I| \le \int_\Omega |(\alpha_2\sigma_i^\kappa+\alpha_3)(|h_x^\kappa|_\kappa+B)|\,dx \int_\Omega |h_x^\kappa|\,dx \le \|\alpha_2\sigma_i^\kappa+\alpha_3\|_{L^3(\Omega)}\,\||h_x^\kappa|_\kappa+B\|_{L^3(\Omega)}\,\|1\|_{L^3(\Omega)}\,\|h_x^\kappa\|_{L^3(\Omega)}\,\|1\|_{L^{3/2}(\Omega)} \le |\Omega|\left(\alpha_2\|\sigma_i^\kappa\|_{L^3(\Omega)}+\alpha_3|\Omega|\right)\left(\|h_x^\kappa\|_{L^3(\Omega)}+C\right)\|h_x^\kappa\|_{L^3(\Omega)}. \qquad (3.6)$$
Here |Ω| denotes the measure of Ω. We now estimate the term σ_i^κ. From estimate (1.9) it follows that
$$\|\sigma_{i1}\|_{L^3(\Omega)} \le C\|h_x\|_{L^2(\Omega)} \le C\|h_x\|_{L^3(\Omega)}, \qquad (3.7)$$
and from Theorem A.1 we infer that
$$\|\sigma_{i2}^\kappa\|_{L^3(\Omega)} \le C_3\|h_x^\kappa\|_{L^3(\Omega)}. \qquad (3.8)$$
Thus we arrive at
$$\|\sigma_i^\kappa\|_{L^3(\Omega)} \le \|\sigma_{i1}^\kappa\|_{L^3(\Omega)} + \|\sigma_{i2}^\kappa\|_{L^3(\Omega)} \le C\|h_x^\kappa\|_{L^3(\Omega)}, \qquad (3.9)$$
and hence
$$|I| \le |\Omega|\left(\alpha_2 C\|h_x^\kappa\|_{L^3(\Omega)} + \alpha_3|\Omega|\right)\left(\|h_x^\kappa\|_{L^3(\Omega)}+C\right)\|h_x^\kappa\|_{L^3(\Omega)} \le C|\Omega|\alpha_2\|h_x^\kappa\|^3_{L^3(\Omega)} + \varepsilon\|h_x^\kappa\|^3_{L^3(\Omega)} + C_\varepsilon. \qquad (3.10)$$
Combining inequalities (3.4) and (3.10) yields
$$\frac12\frac{d}{dt}\|h^\kappa(t)\|^2 + \frac{\alpha_1}{2}\int_\Omega h_x^\kappa\int_0^{h_x^\kappa}|y|_\kappa\,dy\,dx + \frac{\alpha_1}{4}\int_\Omega|h_x^\kappa|^3\,dx + B\alpha_1\int_\Omega|h_x^\kappa|^2\,dx \le \tilde\alpha_2\|h_x^\kappa\|^3_{L^3(\Omega)} + \varepsilon\|h_x^\kappa\|^3_{L^3(\Omega)} + C_\varepsilon, \qquad (3.11)$$
which implies
$$\frac12\frac{d}{dt}\|h^\kappa(t)\|^2 + \frac{\alpha_1}{2}\int_\Omega h_x^\kappa\int_0^{h_x^\kappa}|y|_\kappa\,dy\,dx + \left(\frac{\alpha_1}{4}-\tilde\alpha_2-\varepsilon\right)\int_\Omega|h_x^\kappa|^3\,dx + B\alpha_1\int_\Omega|h_x^\kappa|^2\,dx \le C_\varepsilon. \qquad (3.12)$$
Here α̃_2 := meas(Ω)C_3α_2. Therefore, choosing α_1 sufficiently greater than α̃_2 and ε suitably small, and integrating (3.12) with respect to t, we arrive at
$$\|h^\kappa(t)\|^2 + \int_0^t\int_\Omega\left(h_x^\kappa\int_0^{h_x^\kappa}|y|_\kappa\,dy + |h_x^\kappa|^3 + |h_x^\kappa|^2\right)dx\,d\tau \le C + \|h_0^\kappa\|^2 \le C. \qquad (3.13)$$
Thus the proof of this lemma is complete.
Lemma 3.2 For any t ∈ [0,T_e] there holds
$$\|h_x^\kappa(t)\|^2 + \int_0^t\int_\Omega(|h_x^\kappa|_\kappa+B)\,|h_{xx}^\kappa|^2\,dx\,d\tau \le C. \qquad (3.14)$$
Proof. Multiplying Eq. (2.1) by −h_{xx}^κ, employing integration by parts with respect to x, and invoking the boundary condition (2.2), we obtain formally for almost all t that
$$\frac12\frac{d}{dt}\|h_x^\kappa(t)\|^2 + \alpha_1\int_\Omega(|h_x^\kappa|_\kappa+B)|h_{xx}^\kappa|^2\,dx = \int_\Omega(\alpha_2\sigma_i^\kappa+\alpha_3)(|h_x^\kappa|_\kappa+B)h_{xx}^\kappa\,dx = \int_\Omega\alpha_2\sigma_i^\kappa(|h_x^\kappa|_\kappa+B)h_{xx}^\kappa\,dx + \int_\Omega\alpha_3(|h_x^\kappa|_\kappa+B)h_{xx}^\kappa\,dx =: I_1 + I_2. \qquad (3.15)$$
The formal computation in (3.15) may be justified by the technique of finite differences; this is quite standard, so we omit the details. We now treat I_1 and I_2, beginning with the easier term I_2. Applying the Young inequality with ε, we have
$$\begin{aligned} |I_2| &= \left|\int_\Omega\alpha_3(|h_x^\kappa|_\kappa+B)^{1/2}(|h_x^\kappa|_\kappa+B)^{1/2}h_{xx}^\kappa\,dx\right| \le C_\varepsilon\int_\Omega(|h_x^\kappa|_\kappa+B)\,dx + \frac{\varepsilon}{2}\int_\Omega(|h_x^\kappa|_\kappa+B)|h_{xx}^\kappa|^2\,dx\\ &\le C_\varepsilon\int_\Omega(|h_x^\kappa|+\kappa+B)\,dx + \frac{\varepsilon}{2}\int_\Omega(|h_x^\kappa|_\kappa+B)|h_{xx}^\kappa|^2\,dx \le C_\varepsilon\int_\Omega(|h_x^\kappa|^2+C')\,dx + \frac{\varepsilon}{2}\int_\Omega(|h_x^\kappa|_\kappa+B)|h_{xx}^\kappa|^2\,dx. \end{aligned} \qquad (3.16)$$
Here we used the simple inequality |p|_κ ≤ |p| + κ. To handle I_1, we recall Theorem A.1 and (1.9), and arrive at
$$\begin{aligned} |I_1| &= \alpha_2\left|\int_\Omega\sigma_i^\kappa(|h_x^\kappa|_\kappa+B)^{1/2}(|h_x^\kappa|_\kappa+B)^{1/2}h_{xx}^\kappa\,dx\right|\\ &\le \alpha_2\left(\int_\Omega|\sigma_i^\kappa|^3\,dx\right)^{1/3}\left(\int_\Omega\big((|h_x^\kappa|_\kappa+B)^{1/2}\big)^6\,dx\right)^{1/6}\left(\int_\Omega\big((|h_x^\kappa|_\kappa+B)^{1/2}h_{xx}^\kappa\big)^2\,dx\right)^{1/2}\\ &\le \alpha_2\,C\,\|h_x^\kappa\|^{1+1/2}_{L^3(\Omega)}\left(\int_\Omega(|h_x^\kappa|_\kappa+B)|h_{xx}^\kappa|^2\,dx\right)^{1/2} \le C_\varepsilon\|h_x^\kappa\|^3_{L^3(\Omega)} + \frac{\varepsilon}{2}\int_\Omega(|h_x^\kappa|_\kappa+B)|h_{xx}^\kappa|^2\,dx. \end{aligned} \qquad (3.17)$$
Combining (3.15) with (3.16) and (3.17), integrating with respect to t, and making use of the Young inequality, we then arrive at
$$\frac12\|h_x^\kappa(t)\|^2 + \alpha_1\int_0^t\int_\Omega(|h_x^\kappa|_\kappa+B)|h_{xx}^\kappa|^2\,dx\,d\tau \le C_\varepsilon\int_0^t\|h_x^\kappa\|^3_{L^3(\Omega)}\,d\tau + C + \varepsilon\int_0^t\int_\Omega(|h_x^\kappa|_\kappa+B)|h_{xx}^\kappa|^2\,dx\,d\tau. \qquad (3.18)$$
Now, choosing ε small enough so that α_1 − ε > 0 and recalling the estimates of Lemma 3.1, we get
$$\frac12\|h_x^\kappa(t)\|^2 + (\alpha_1-\varepsilon)\int_0^t\int_\Omega(|h_x^\kappa|_\kappa+B)|h_{xx}^\kappa|^2\,dx\,d\tau \le C_\varepsilon\int_0^t\|h_x^\kappa\|^3_{L^3(\Omega)}\,d\tau + C \le C. \qquad (3.19)$$
Therefore, the proof of Lemma 3.2 is complete.
Corollary 3.1 For any t ∈ [0,T_e] there hold
$$\int_0^t\int_\Omega\big(|h_x^\kappa|_\kappa|h_{xx}^\kappa|\big)^{4/3}\,dx\,d\tau \le C, \qquad (3.20)$$
$$\int_0^t\int_\Omega\big|h_x^\kappa h_{xx}^\kappa\big|^{4/3}\,dx\,d\tau \le C, \qquad (3.21)$$
$$\int_0^t\big\|(h_x^\kappa)^2\big\|^{4/3}_{W^{1,4/3}(\Omega)}\,d\tau \le C, \qquad (3.22)$$
$$\int_0^t\|h_x^\kappa\|^{8/3}_{L^\infty(\Omega)}\,d\tau \le C. \qquad (3.23)$$
Proof. By Hölder's inequality, we have for some 1 ≤ p < 2, q = 2/p and 1/q + 1/q' = 1 that
$$\begin{aligned} \int_0^t\int_\Omega\big(|h_x^\kappa|_\kappa|h_{xx}^\kappa|\big)^p\,dx\,d\tau &= \int_0^t\int_\Omega|h_x^\kappa|_\kappa^{p/2}\,|h_x^\kappa|_\kappa^{p/2}|h_{xx}^\kappa|^p\,dx\,d\tau\\ &\le \left(\int_0^t\int_\Omega|h_x^\kappa|_\kappa^{pq'/2}\,dx\,d\tau\right)^{1/q'}\left(\int_0^t\int_\Omega|h_x^\kappa|_\kappa^{pq/2}|h_{xx}^\kappa|^{pq}\,dx\,d\tau\right)^{1/q}\\ &\le \left(\int_0^t\int_\Omega|h_x^\kappa|_\kappa^{p/(2-p)}\,dx\,d\tau\right)^{(2-p)/2}\left(\int_0^t\int_\Omega|h_x^\kappa|_\kappa|h_{xx}^\kappa|^2\,dx\,d\tau\right)^{p/2}. \end{aligned} \qquad (3.24)$$
Inequality (3.14) implies, for p/(2−p) ≤ 2, i.e., p ≤ 4/3, that the right-hand side of (3.24) is bounded; hence (3.20) holds.
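The exponent bookkeeping behind (3.24) is easy to verify mechanically; the following small check (ours, purely illustrative) confirms that q = 2/p gives pq/2 = 1 and pq'/2 = p/(2−p), with the borderline value attained exactly at p = 4/3.

```python
from fractions import Fraction

# Exponent bookkeeping for (3.24): with q = 2/p and 1/q + 1/q' = 1,
# the inner powers are p*q/2 = 1 and p*q'/2 = p/(2-p); the borderline
# p/(2-p) = 2 is reached exactly at p = 4/3.
p = Fraction(4, 3)
q = 2 / p
qp = q / (q - 1)
assert p * q / 2 == 1 and p * qp / 2 == p / (2 - p) == 2
```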
Invoking the basic fact that |p|_κ ≥ |p|, (3.21) follows from (3.20). To prove (3.22) and (3.23), we apply the Poincaré inequality in the form
$$\|f - \bar f\|_{L^p(\Omega)} \le C\|f_x\|_{L^p(\Omega)},$$
where f̄ = ∫_Ω f(x)dx/|Ω|. Choosing p = 4/3, from (3.21) we deduce that
$$\big\|(h_x^\kappa)^2 - \overline{(h_x^\kappa)^2}\big\|_{W^{1,4/3}(\Omega)} \le C\big\|\big((h_x^\kappa)^2\big)_x\big\|_{L^{4/3}(\Omega)} = 2C\,\|h_x^\kappa h_{xx}^\kappa\|_{L^{4/3}(\Omega)}, \qquad (3.25)$$
hence
$$\int_0^t\big\|(h_x^\kappa)^2 - \overline{(h_x^\kappa)^2}\big\|^{4/3}_{W^{1,4/3}(\Omega)}\,d\tau \le C\int_0^t\|h_x^\kappa h_{xx}^\kappa\|^{4/3}_{L^{4/3}(\Omega)}\,d\tau \le C, \qquad (3.26)$$
which implies
$$\int_0^t\big\|(h_x^\kappa)^2\big\|^{4/3}_{W^{1,4/3}(\Omega)}\,d\tau \le \int_0^t\big\|(h_x^\kappa)^2 - \overline{(h_x^\kappa)^2}\big\|^{4/3}_{W^{1,4/3}(\Omega)}\,d\tau + \int_0^t\big\|\overline{(h_x^\kappa)^2}\big\|^{4/3}_{W^{1,4/3}(\Omega)}\,d\tau \le C + C\sup_{0\le\tau\le t}\|h_x^\kappa(\tau)\|^{8/3} \le C, \qquad (3.27)$$
which is (3.22). Invoking the Sobolev embedding theorem we have W^{1,4/3}(Ω) ⊂ L^∞(Ω), and (3.23) follows. Hence the proof of the corollary is complete.
Lemma 3.3 There holds
$$\|h_t^\kappa\|_{L^{4/3}(Q_{T_e})} \le C. \qquad (3.28)$$

Proof. Recalling the regularity of h_t^κ, we use integration by parts and obtain
$$(h^\kappa,\varphi_t)_{Q_{T_e}} = \int_0^{T_e}\frac{d}{dt}(h^\kappa,\varphi)_\Omega\,dt - (h_t^\kappa,\varphi)_{Q_{T_e}} = (h^\kappa,\varphi)_\Omega\Big|_0^{T_e} - (h_t^\kappa,\varphi)_{Q_{T_e}} = -(h^\kappa(0),\varphi(0))_\Omega - (h_t^\kappa,\varphi)_{Q_{T_e}}, \qquad (3.29)$$
and thus one has
$$(h_t^\kappa,\varphi)_{Q_{T_e}} = \left(\big(\alpha_1 h_{xx}^\kappa - (\alpha_2\sigma_i^\kappa+\alpha_3)\big)(|h_x^\kappa|_\kappa+B),\ \varphi\right)_{Q_{T_e}}. \qquad (3.30)$$
Making use of Theorem A.1 with p = 4/3, the estimates (3.20) and (3.23), and the Hölder inequality, we have
$$\begin{aligned} |(h_t^\kappa,\varphi)_{Q_{T_e}}| &\le C\left(\big\||h_x^\kappa|_\kappa h_{xx}^\kappa\big\|_{L^{4/3}(Q_{T_e})} + \|h_{xx}^\kappa\|_{L^{4/3}(Q_{T_e})}\right)\|\varphi\|_{L^4(Q_{T_e})}\\ &\quad + C\left(\|\sigma_i^\kappa\|_{L^{8/3}(0,T_e;L^{4/3}(\Omega))}\,\|h_x^\kappa\|_{L^{8/3}(0,T_e;L^\infty(\Omega))} + 1\right)\|\varphi\|_{L^4(Q_{T_e})}\\ &\le C\left(1 + \|h_x^\kappa\|_{L^{8/3}(0,T_e;L^\infty(\Omega))}\right)\|\varphi\|_{L^4(Q_{T_e})} \le C\|\varphi\|_{L^4(Q_{T_e})} \end{aligned} \qquad (3.31)$$
for all φ ∈ L⁴(Q_{T_e}). Thus (3.31) implies that h_t^κ ∈ L^{4/3}(Q_{T_e}) and (3.28) holds. The proof of the lemma is complete.
Existence of solutions to the original IBVP
In this section we shall use the a priori estimates established in Section 3 to investigate the convergence of h^κ as κ → 0, and show that there exists a subsequence which converges to a weak solution of the initial-boundary value problem (1.1)–(1.3), thereby proving Theorem 1.1.
Lemma 4.1 There exists a subsequence of h_x^κ (still denoted by h_x^κ) such that, as κ → 0,
$$h_x^\kappa \to h_x \quad\text{strongly in } L^2(Q_{T_e}), \qquad (4.1)$$
$$|h_x^\kappa|_\kappa \to |h_x| \quad\text{strongly in } L^2(Q_{T_e}), \qquad (4.2)$$
$$|h_x^\kappa|_\kappa h_x^\kappa \to |h_x|h_x \quad\text{strongly in } L^1(Q_{T_e}). \qquad (4.3)$$
Proof. Let p_0 = 2, p_1 = 4/3 and
$$B_0 = H^1_{per}(\Omega), \qquad B_1 = L^2(\Omega), \qquad B_2 = W^{-1,4/3}_{per}(\Omega).$$
These spaces satisfy the assumptions of Lemma 2.4. Since estimate (3.14) implies that h_{xx}^κ ∈ L²(0,T_e;L²(Ω)), we have h_x^κ ∈ L²(0,T_e;H^1_{per}(Ω)). It then follows from Lemma 2.4 that
$$h_x^\kappa \to h_x \quad\text{strongly in } L^2(Q_{T_e})$$
as κ → 0. This proves (4.1).
It is easy to see that
$$|\sqrt{x} - \sqrt{y}| \le \sqrt{|x-y|}$$
for all x, y ∈ ℝ_+. From this we deduce that, as κ → 0,
$$\begin{aligned} \big\||h_x^\kappa|_\kappa - |h_x|\big\|^2_{L^2(Q_{T_e})} &\le \int_{Q_{T_e}}\big|(h_x^\kappa)^2 + \kappa^2 - (h_x)^2\big|\,dx\,d\tau \le \int_{Q_{T_e}}\big(\big|(h_x^\kappa)^2 - (h_x)^2\big| + \kappa^2\big)\,dx\,d\tau\\ &\le \|h_x^\kappa + h_x\|_{L^2(Q_{T_e})}\|h_x^\kappa - h_x\|_{L^2(Q_{T_e})} + \kappa^2|Q_{T_e}| \le C\left(\|h_x^\kappa - h_x\|_{L^2(Q_{T_e})} + \kappa^2\right) \to 0. \end{aligned} \qquad (4.6)$$
From this we infer that |h κ x | κ converges to |h x | strongly in L 2 (Q Te ) as κ → 0. This proves (4.2). Combining (4.1) and (4.2), we get (4.3) immediately.
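The elementary inequality used in (4.6) can be sanity-checked numerically; the snippet below (an illustration of ours, not part of the proof) samples random nonnegative pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.uniform(0, 10, 10**6), rng.uniform(0, 10, 10**6)
# elementary inequality used in (4.6): |sqrt(x) - sqrt(y)| <= sqrt(|x - y|)
assert np.all(np.abs(np.sqrt(x) - np.sqrt(y)) <= np.sqrt(np.abs(x - y)) + 1e-12)
```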
Proof of Theorem 1.1. We have ‖h^κ‖_{L^∞(0,T_e;H^1_{per}(Ω))} ≤ C and ‖h^κ‖_{L²(0,T_e;H²_{per}(Ω))} ≤ C by (3.14). This implies h ∈ L^∞(0,T_e;H^1_{per}(Ω)) ∩ L²(0,T_e;H²_{per}(Ω)), since we can select a subsequence of h^κ which converges weakly to h in this space. Thus h satisfies (1.13) and (1.15). The first assertion in (1.16), h_t ∈ L^{4/3}(Q_{T_e}), is implied by (3.28). To verify the second assertion in (1.16), we use estimates (3.21) and (3.23) in Corollary 3.1, together with the strong convergence result in Lemma 4.1. Consequently (1.16) holds.
It therefore suffices to show that problem (1.1)–(1.3) is fulfilled in the weak sense, which means we need to prove that relation (1.14) holds. To this end, we employ the following equality:
$$(h^\kappa,\varphi_t)_{Q_{T_e}} - \alpha_1\left(\int_0^{h_x^\kappa}|p|_\kappa\,dp + Bh_x^\kappa,\ \varphi_x\right)_{Q_{T_e}} - \left((\alpha_2\sigma_i^\kappa+\alpha_3)(|h_x^\kappa|_\kappa+B),\ \varphi\right)_{Q_{T_e}} + (h_0,\varphi(0))_\Omega = 0. \qquad (4.7)$$
From this we see that equation (1.14) follows if we show that, as κ → 0,
$$(h^\kappa,\varphi_t)_{Q_{T_e}} \to (h,\varphi_t)_{Q_{T_e}}, \qquad (4.8)$$
$$\left(\int_0^{h_x^\kappa}|y|_\kappa\,dy,\ \varphi_x\right)_{Q_{T_e}} \to \left(\frac12|h_x|h_x,\ \varphi_x\right)_{Q_{T_e}}, \qquad (4.9)$$
$$(|h_x^\kappa|_\kappa,\varphi)_{Q_{T_e}} \to (|h_x|,\varphi)_{Q_{T_e}}, \qquad (4.10)$$
$$(h_x^\kappa,\varphi_x)_{Q_{T_e}} \to (h_x,\varphi_x)_{Q_{T_e}}, \qquad (4.11)$$
$$(\sigma_i^\kappa|h_x^\kappa|_\kappa,\varphi)_{Q_{T_e}} \to (\sigma_i|h_x|,\varphi)_{Q_{T_e}}. \qquad (4.12)$$
Now, the conclusions (4.8) and (4.11) follow easily from (3.14), and the relation (4.10) follows from (4.2). It remains to prove (4.9) and (4.12). To prove (4.9) we write
$$\int_0^{h_x^\kappa}|y|_\kappa\,dy - \frac12|h_x|h_x = \left(\int_0^{h_x^\kappa}|y|_\kappa\,dy - \frac12|h_x^\kappa|_\kappa h_x^\kappa\right) + \frac12\left(|h_x^\kappa|_\kappa h_x^\kappa - |h_x|h_x\right) =: I_1 + I_2. \qquad (4.13)$$
The conclusion (4.3) implies
$$\|I_2\|_{L^1(Q_{T_e})} \to 0 \qquad (4.14)$$
as κ → 0. Next we handle I_1 as follows:
$$|I_1| = \left|\int_0^{h_x^\kappa}|y|_\kappa\,dy - \frac12|h_x^\kappa|_\kappa h_x^\kappa\right| \le \left|\int_0^{h_x^\kappa}|y|_\kappa\,dy - \int_0^{h_x^\kappa}|y|\,dy\right| \le \int_0^{|h_x^\kappa|}\left(|y|_\kappa - |y|\right)dy \le \int_0^{|h_x^\kappa|}\kappa\,dy = \kappa|h_x^\kappa|, \qquad (4.15)$$
whence (3.14) implies
$$\|I_1\|_{L^1(Q_{T_e})} \le C\|I_1\|_{L^\infty(0,T_e;L^2(\Omega))} \le C\kappa\,\|h_x^\kappa\|_{L^\infty(0,T_e;L^2(\Omega))} \le C\kappa \to 0 \qquad (4.16)$$
as κ → 0. From this relation, (4.13) and (4.14) we obtain
$$\left\|\int_0^{h_x^\kappa}|y|_\kappa\,dy - \frac12|h_x|h_x\right\|_{L^1(Q_{T_e})} \to 0, \qquad (4.17)$$
which implies (4.9). Finally we prove (4.12). Applying the compactness lemma and Theorem A.1 with p = 2, we get that
$$\sigma_i^\kappa \to \sigma_i \quad\text{strongly in } L^2(Q_{T_e}), \qquad (4.18)$$
where
$$\sigma_i(t,x) = \mathrm{P.V.}\int_{-\infty}^{\infty}\frac{K\beta h_x(t,x_1)}{x-x_1}\,dx_1.$$
Then, recalling (4.2), one concludes that
$$\sigma_i^\kappa|h_x^\kappa|_\kappa \to \sigma_i|h_x| \quad\text{strongly in } L^1(Q_{T_e}). \qquad (4.19)$$
It remains to prove the uniqueness. To this end, we recall the regularity of h_t^κ and definition (1.14), and use integration by parts to get
$$-(h_t,\varphi)_{Q_{T_e}} - \alpha_1\left(\frac12 h_x|h_x| + Bh_x,\ \varphi_x\right)_{Q_{T_e}} - \left((\alpha_2\sigma_i+\alpha_3)(|h_x|+B),\ \varphi\right)_{Q_{T_e}} = -(h(0),\varphi(0))_\Omega + (h_0,\varphi(0))_\Omega = 0. \qquad (4.20)$$
Suppose that there exist two solutions h_1, h_2, and let h = h_1 − h_2. Then from (4.20) we infer that
$$(h_t,\varphi)_{Q_{T_e}} + \frac{\alpha_1}{2}\left(h_{1x}|h_{1x}| - h_{2x}|h_{2x}|,\ \varphi_x\right)_{Q_{T_e}} + \alpha_1(Bh_x,\varphi_x)_{Q_{T_e}} + \left((\alpha_2\sigma_i^1+\alpha_3)(|h_{1x}|+B) - (\alpha_2\sigma_i^2+\alpha_3)(|h_{2x}|+B),\ \varphi\right)_{Q_{T_e}} = 0. \qquad (4.21)$$
Here σ_i^j with j = 1, 2 stand for the formulas of σ_i in which h is replaced by h_j, respectively. Since C_0^∞(Q_t) is dense in L²(Q_t), we can choose φ = h; using the monotonicity property
$$(x|x| - y|y|)(x - y) \ge 0,$$
we infer from (4.21) that
$$\frac12\|h(t)\|^2 + \alpha_1 B\|h_x\|^2_{Q_{T_e}} + \left((\alpha_2\sigma_i^1+\alpha_3)(|h_{1x}|+B) - (\alpha_2\sigma_i^2+\alpha_3)(|h_{2x}|+B),\ h\right)_{Q_{T_e}} \le \frac12\|h(0)\|^2 = 0. \qquad (4.22)$$
We write
$$I := \left((\alpha_2\sigma_i^1+\alpha_3)(|h_{1x}|+B) - (\alpha_2\sigma_i^2+\alpha_3)(|h_{2x}|+B),\ h\right)_{Q_{T_e}} = \left(\int_a^x\left[(\alpha_2\sigma_i^1+\alpha_3)(|h_{1x}|-|h_{2x}|) + \alpha_2(\sigma_i^1-\sigma_i^2)(|h_{2x}|+B)\right]dy,\ h_x\right)_{Q_{T_e}}, \qquad (4.23)$$
whence, applying Theorem A.1 and the Hölder inequality again, we obtain
$$\begin{aligned} |I| &\le C\int_0^t\int_a^d\left[(|\sigma_i^1|+1)|h_x| + |\sigma_i^1-\sigma_i^2|(|h_{2x}|+B)\right]dy\ \|h_x\|\,d\tau\\ &\le C\int_0^t\left[\left(\|\sigma_i^1\|+1\right)\|h_x\| + \|\sigma_i^1-\sigma_i^2\|\left(\|h_{2x}\|+B\right)\right]\|h_x\|\,d\tau\\ &\le C\int_0^t\left(\|h_{1x}\|+\|h_{2x}\|+1\right)\|h_x\|^2\,d\tau \le C'\int_0^t\|h_x\|^2\,d\tau. \end{aligned} \qquad (4.24)$$
Now, choosing α_1 sufficiently large, we infer from (4.24) and (4.22) that
$$\frac12\|h(t)\|^2 + (\alpha_1 B - C')\|h_x\|^2_{Q_{T_e}} \le 0, \qquad (4.25)$$
hence ‖h(t)‖ = 0, which implies that h = 0 for almost all (t, x) ∈ Q_{T_e}. The uniqueness follows, and thus the proof of Theorem 1.1 is complete.
The limit of h B as B vanishes
This section is devoted to the investigation of the limit of h_B as B → 0, and to the proof of Theorem 1.2. We will denote the solution h to problem (1.1)–(1.3) by h_B. We thus need a priori estimates which are independent of B, and B may be taken to satisfy
0 < B ≤ 1.
Those estimates in Lemmas 3.1 and 3.2, and Corollary 3.1 are of this type. In this section a universal constant C is independent of B.
To prove Theorem 1.2, we shall obtain more estimates as follows.
Lemma 5.1 For any t ∈ [0,T_e] and any φ ∈ L^∞(0,T_e;H²_{per}(Ω)) there hold
$$\left|\left((|h_x^\kappa|h_x^\kappa)_t,\ \phi\right)_{Q_t}\right| \le C\|\phi\|_{L^\infty(0,T_e;H^2_{per}(\Omega))}, \qquad (5.1)$$
$$\left\|(|h_x^\kappa|h_x^\kappa)_t\right\|_{L^1(0,t;H^{-2}_{per}(\Omega))} \le C. \qquad (5.2)$$

Proof. For a rigorous procedure, we derive estimate (5.1) from Eq. (2.6) (with σ̂_i^κ replaced by σ_i^κ); the solution then depends on both κ and B, and we write h = h_B^κ(t,x). As in Section 4, one can pass h_B^κ to its limit as κ → 0 to obtain the solutions h_B, and hence the corresponding estimates for h_B. To this end we take an arbitrary φ ∈ L^∞(0,t;H²_{per}(Ω)), multiply h_t^κ by (|h_x^κ|_δ φ)_x, and integrate with respect to x and t, arriving at
$$I := \int_0^t\int_\Omega h_t^\kappa\left(|h_x^\kappa|_\delta\,\phi\right)_x dx\,d\tau = \int_0^t\int_\Omega h_t^\kappa\,(|h_x^\kappa|_\delta)_x\,\phi\,dx\,d\tau + \int_0^t\int_\Omega h_t^\kappa\,|h_x^\kappa|_\delta\,\phi_x\,dx\,d\tau =: I_1 + I_2. \qquad (5.4)$$
Here |h_x^κ|_δ = √(|h_x^κ|² + δ²) with a small positive parameter δ ≤ 1, and, for notational simplicity, we still write h = h^κ in place of h = h_B^κ. We now treat I_1 and I_2. For I_1 we invoke Eq. (2.6) (with σ̂_i^κ replaced by σ_i^κ) to substitute for h_t^κ, and split I_1 =: I_{11} + I_{12}. Using the estimates in Lemmas 3.1 and 3.2 and Corollary 3.1, one gets easily
$$|I_{11}| \le C\|\phi\|_{L^\infty(Q_t)} \le C\|\phi\|_{L^\infty(0,t;H^2_{per}(\Omega))}, \qquad (5.5)$$
and
$$\begin{aligned} |I_{12}| &\le C\|\phi\|_{L^\infty(Q_t)}\int_0^t\left(\|\sigma_i^\kappa\|_{L^3(\Omega)}+1\right)\left\||h_x^\kappa|_\kappa^{1/2}+B\right\|_{L^6(\Omega)}\left\|\left(|h_x^\kappa|_\kappa^{1/2}+B\right)h_{xx}^\kappa\right\|d\tau\\ &\le C\|\phi\|_{L^\infty(0,t;H^2_{per}(\Omega))}\int_0^t\left(\|h_x^\kappa\|_{L^3(\Omega)}^{3/2}+1\right)\left\|\left(|h_x^\kappa|_\kappa^{1/2}+B\right)h_{xx}^\kappa\right\|d\tau\\ &\le C\|\phi\|_{L^\infty(0,t;H^2_{per}(\Omega))}\int_0^t\left[\left(\|h_x^\kappa\|_{L^3(\Omega)}^{3/2}+1\right)^2+\left\|\left(|h_x^\kappa|_\kappa^{1/2}+B\right)h_{xx}^\kappa\right\|^2\right]d\tau\\ &\le C\|\phi\|_{L^\infty(0,t;H^2_{per}(\Omega))}. \end{aligned} \qquad (5.6)$$
Next, I_2 is evaluated as follows:
$$\begin{aligned} |I_2| &\le C\int_0^t\int_\Omega |h_t^\kappa|\,|h_x^\kappa|_\delta\,|\phi_x|\,dx\,d\tau \le C\int_0^t\int_\Omega\left[(|h_x^\kappa|_\kappa+B)|h_{xx}^\kappa| + (|\sigma_i^\kappa|+1)(|h_x^\kappa|_\kappa+B)\right]|h_x^\kappa|_\delta\,|\phi_x|\,dx\,d\tau\\ &\le C\|\phi_x\|_{L^\infty(Q_t)}\int_0^t\int_\Omega\left[(|h_x^\kappa|_\kappa+B)|h_{xx}^\kappa| + (|\sigma_i^\kappa|+1)(|h_x^\kappa|_\kappa+B)\right](|h_x^\kappa|+1)\,dx\,d\tau =: I_{21} + I_{22}. \end{aligned} \qquad (5.7)$$
Using the estimates in Lemmas 3.1 and 3.2, we obtain
$$\begin{aligned} |I_{21}| &\le C\|\phi_x\|_{L^\infty(Q_t)}\int_0^t\int_\Omega(|h_x^\kappa|_\kappa+B)^{1/2}|h_{xx}^\kappa|(|h_x^\kappa|+1)^{3/2}\,dx\,d\tau\\ &\le C\|\phi_x\|_{L^\infty(Q_t)}\int_0^t\int_\Omega\left[(|h_x^\kappa|_\kappa+B)|h_{xx}^\kappa|^2 + (|h_x^\kappa|+1)^3\right]dx\,d\tau \le C\|\phi\|_{L^\infty(0,t;H^2_{per}(\Omega))}, \end{aligned} \qquad (5.8)$$
and from the estimates in Corollary 3.1 it follows that
$$\begin{aligned} |I_{22}| &\le C\|\phi_x\|_{L^\infty(Q_t)}\int_0^t\int_\Omega(|\sigma_i^\kappa|+1)(|h_x^\kappa|^2+1)\,dx\,d\tau\\ &\le C\|\phi\|_{L^\infty(0,t;H^2_{per}(\Omega))}\int_0^t\int_\Omega\left[(|\sigma_i^\kappa|+1)^3 + (|h_x^\kappa|^2+1)^{3/2}\right]dx\,d\tau \le C\|\phi\|_{L^\infty(0,t;H^2_{per}(\Omega))}. \end{aligned} \qquad (5.9)$$
Therefore, combining (5.4)–(5.9), we arrive at
$$|I| \le C\|\phi\|_{L^\infty(0,t;H^2_{per}(\Omega))}. \qquad (5.10)$$
On the other hand, integrating by parts in x and letting δ → 0 in the definition (5.4) of I, we find
$$I = -\frac12\int_0^t\int_\Omega (|h_x^\kappa|h_x^\kappa)_t\,\phi\,dx\,d\tau = -\frac12\left((|h_x^\kappa|h_x^\kappa)_t,\ \phi\right)_{Q_t}. \qquad (5.11)$$
It then follows from (5.10) and (5.11) that
$$\left|\left((|h_x^\kappa|h_x^\kappa)_t,\ \phi\right)_{Q_t}\right| = 2|I| \le C\|\phi\|_{L^\infty(0,t;H^2_{per}(\Omega))}. \qquad (5.12)$$
Since L¹(0,t;H^{−2}_{per}(Ω)) is isometrically embedded into the dual space of L^∞(0,t;H²_{per}(Ω)), we complete the proof of Lemma 5.1.
We now study the asymptotic behavior of the solution h_B as B goes to zero. For this purpose we also need the following lemma.
Lemma 5.2 Let (0,T_e) × Ω be an open set in ℝ_+ × ℝ^n. Suppose that the functions g_n, g belong to L^q((0,T_e)×Ω) for some given 1 < q < ∞, and satisfy
$$\|g_n\|_{L^q((0,T_e)\times\Omega)} \le C, \qquad g_n \to g \ \text{a.e. in } (0,T_e)\times\Omega.$$
Then g_n converges to g weakly in L^q((0,T_e)×Ω).
Proof of Theorem 1.2. Applying the generalized case (p_1 = 1) of the Aubin-Lions lemma, i.e., Lemma 2.4, to the family |(h_B)_x|(h_B)_x, we assert from Lemma 5.1 and Corollary 3.1 that
$$|(h_B)_x|(h_B)_x \in L^{4/3}(0,T_e;W^{1,4/3}_{per}(\Omega)), \qquad \big(|(h_B)_x|(h_B)_x\big)_t \in L^1(0,T_e;H^{-2}_{per}(\Omega)).$$
Choosing p_0 = 4/3 and B_0 = W^{1,4/3}_{per}(Ω), B = L²(Ω), B_1 = H^{−2}_{per}(Ω), we thus have B_0 ⊂⊂ B,
$$|(h_B)_x|(h_B)_x \in L^{p_0}(0,T_e;B_0), \qquad \big(|(h_B)_x|(h_B)_x\big)_t \in L^1(0,T_e;B_1),$$
and conclude that the family |(h_B)_x|(h_B)_x is compact in L^{4/3}(0,T_e;B). Hence we can select a subsequence h_{B_n} of h_B such that
$$|(h_{B_n})_x|(h_{B_n})_x \to \chi \quad\text{a.e. in } Q_{T_e}, \qquad B_n \to 0, \ \text{as } n \to \infty.$$
Since the function F: y ↦ |y|y is invertible, we obtain (h_{B_n})_x → F^{−1}(χ) a.e. as n → ∞. By uniqueness of the weak limit, we assert that F^{−1}(χ) = h_x.
Recalling that h_B satisfies
$$(h_B,\varphi_t)_{Q_{T_e}} - \frac{\alpha_1}{2}\left(|(h_B)_x|(h_B)_x + 2B(h_B)_x,\ \varphi_x\right)_{Q_{T_e}} = \left((\alpha_2\sigma_i+\alpha_3)(|(h_B)_x|+B),\ \varphi\right)_{Q_{T_e}} - (h_0,\varphi(0))_\Omega, \qquad (5.13)$$
we need only study the limits of the most difficult terms, i.e., the nonlinear terms like (|(h_B)_x|(h_B)_x, φ)_{Q_t}. Employing Lemma 5.2, we can easily pass the nonlinear terms to their limits. Thus h is a solution, in the sense of Definition 1.2, to problem (1.1)–(1.3) with B = 0, and the proof of Theorem 1.2 is complete.
A Singular integrals
For the reader's convenience, we include the following theorem on the boundedness of singular integrals.

Theorem A.1 (p. 48, Ref. [26]) Let n ∈ ℕ and x ∈ ℝ^n. Suppose that the kernel K(x) satisfies the conditions
$$|K(x)| \le C|x|^{-n} \quad\text{for all } |x|>0, \qquad (A.1)$$
$$\int_{\{|x|\ge 2|y|\}}|K(x-y)-K(x)|\,dx \le C \quad\text{for all } |y|>0, \qquad (A.2)$$
and
$$\int_{\{R_1<|x|<R_2\}}K(x)\,dx = 0, \quad 0<R_1<R_2<\infty, \qquad (A.3)$$
where C is a positive constant. Let 1 < p < ∞; for f ∈ L^p(ℝ^n) define the truncated operator
$$T_\varepsilon(f)(x) = \int_{\{|y|\ge\varepsilon\}} f(x-y)\,K(y)\,dy. \qquad (A.4)$$
Then there holds
T ε (f ) p ≤ C p f p (A.5)
where C_p is a constant that is independent of f and ε. Moreover, for each f ∈ L^p(ℝ^n), lim_{ε→0} T_ε(f) = T(f) exists in the L^p norm, and the operator T so defined also satisfies inequality (A.5).
The cancellation property alluded to is contained in condition (A.3). This hypothesis, together with (A.1) and (A.2), allows one to prove the L² boundedness and, from this, the L^p convergence of the truncated integrals (A.4).
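To connect Theorem A.1 with the computations in this paper: the one-dimensional kernel K(y) = 1/y underlying σ_i satisfies (A.1)–(A.3), and the truncated operator (A.4) can be approximated as in the following sketch (our own illustration; the midpoint grid and the `np.convolve` discretization are assumptions of the sketch). One can check empirically that the ratio ‖T_ε f‖_p / ‖f‖_p stays bounded as ε → 0, in line with (A.5).

```python
import numpy as np

def truncated_hilbert(f, dx, eps):
    """Truncated singular integral (A.4) for the 1-D kernel K(y) = 1/y,
    which satisfies (A.1)-(A.3); f sampled on a uniform grid of spacing dx."""
    N = f.size
    y = np.arange(-(N - 1), N) * dx            # centered lag grid
    K = np.where(np.abs(y) >= eps, 1.0 / np.where(y == 0.0, np.inf, y), 0.0)
    # discrete version of T_eps(f)(x) = int f(x - y) K(y) dy
    return np.convolve(f, K, mode="valid") * dx
```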
Acknowledgements. The first author of this article is supported in part by the Science and Technology Commission of Shanghai Municipality (Grant No. 20JC1413600); the corresponding author is supported in part by the Hong Kong Research Grants Council General Research Fund 16302818 and Collaborative Research Fund C1005-19G.
References

[1] H.-D. Alber and P. Zhu. Solutions to a Model with Nonuniformly Parabolic Terms for Phase Evolution Driven by Configurational Forces. SIAM J. Appl. Math. (2006) 66(2), 680-699.
[2] H.-D. Alber and P. Zhu. Evolution of Phase Boundaries by Configurational Forces. Arch. Rati. Mech. Anal. (2007) 185(2), 235-286.
[3] H.-D. Alber and P. Zhu. Solutions to a Model for Interface Motion by Interface Diffusion. Proc. Roy. Soc. Edin. (2008) A138(5), 923-955.
[4] H.-D. Alber and P. Zhu. Interface Motion by Interface Diffusion Driven by Bulk Energy: Justification of a Diffusive Interface Model. Conti. Mech. Thermodyn. (2011) 23(2), 139-176.
[5] H.-D. Alber and P. Zhu. Solutions to a Model with Neumann Boundary Conditions for Phase Transitions Driven by Configurational Forces. Nonlinear Anal. RWA (2011) 12(3), 1797-1809.
[6] A. Acharya, K. Matthies and J. Zimmer. Traveling wave solutions for a quasilinear model of field dislocation mechanics. J. Mech. Phys. Solids (2010) 58, 2043-2053.
[7] M.F. Ashby. Boundary defects, and atomistic aspects of boundary sliding and diffusional creep. Surf. Sci. (1972) 31, 498-542.
[8] J.W. Cahn and S.M. Allen. A Microscopic Theory for Antiphase Boundary Motion and Its Application to Antiphase Domain Coarsening. Acta Metall. (1979) 27(6), 1085-1095.
[9] J.W. Cahn and J.E. Hilliard. Free Energy of a Nonuniform System. I. Interfacial Free Energy. J. Chem. Phys. (1958) 28(2), 258-267.
[10] J.W. Cahn and J.E. Taylor. A unified approach to motion of grain boundaries, relative tangential translation along grain boundaries, and grain rotation. Acta Mater. (2004) 52, 4887-4898.
[11] D. Gilbarg and N.S. Trudinger. Elliptic Partial Differential Equations of Second Order. Grundlehren der mathematischen Wissenschaften Vol. 224, Springer-Verlag, Berlin Heidelberg, 2001.
[12] F. Hildebrand and C. Miehe. A regularized sharp-interface model for phase transformation accounting for prescribed sharp-interface kinetics. Proc. Appl. Math. Mech. (2010) 10, 673-676.
[13] J.P. Hirth and J. Lothe. Theory of Dislocations. John Wiley, New York, 2nd edition, 1982.
[14] J.P. Hirth and R.C. Pond. Steps, dislocations and disconnections as interface defects relating to structure and phase transformations. Acta Mater. (1996) 44, 4749-4763.
[15] J.P. Hirth, R.C. Pond, and J. Lothe. Spacing defects and disconnections in grain boundaries. Acta Mater. (2007) 55, 5428-5437.
[16] K.G.F. Janssens, D. Olmsted, E.A. Holm, S.M. Foiles, S.J. Plimpton, and P.M. Derlet. Computing the mobility of grain boundaries. Nat. Mater. (2006) 5, 124-127.
[17] A.H. King and D.A. Smith. The effects on grain-boundary processes of the steps in the boundary plane associated with the cores of grain-boundary dislocations. Acta Cryst. (1980) A36, 335-343.
[18] O.A. Ladyzhenskaya, V.A. Solonnikov and N.N. Ural'tseva. Linear and Quasi-Linear Equations of Parabolic Type. American Mathematical Society, 1968.
[19] C.H. Li, E.H. Edwards, J. Washburn, and E.R. Parker. Stress-induced movement of crystal boundaries. Acta Metall. (1953) 1, 223-229.
[20] J. Lions. Quelques Méthodes de Résolution des Problèmes aux Limites Non Linéaires. Dunod Gauthier-Villars, Paris, 1969.
[21] K.L. Merkle, L.J. Thompson, and F. Phillipp. In-Situ HREM Studies of Grain Boundary Migration. Interface Sci. (2004) 12, 277-292.
[22] A. Rajabzadeh, F. Mompiou, M. Legros and N. Combe. Elementary Mechanisms of Shear-Coupled Grain Boundary Migration. Phys. Rev. Lett. (2013) 110, 265507.
[23] A. Rajabzadeh, M. Legros, N. Combe, F. Mompiou, and D.A. Molodov. Evidence of grain boundary dislocation step motion associated to shear-coupled grain boundary migration. Phil. Mag. (2013) 93, 1299-1316.
[24] T. Roubíček. A Generalization of the Lions-Temam Compact Imbedding Theorem. Časopis pro pěstování matematiky (1990) 115, 338-342.
[25] J. Simon. Compact Sets in the Space L^p(0,T;B). Ann. Mat. Pura Appl. (1987) 146, 65-96.
[26] E.M. Stein. Singular Integrals and Differentiability Properties of Functions. Princeton University Press, Princeton, NJ, 1970.
[27] A.P. Sutton and R.W. Balluffi. Interfaces in Crystalline Materials. Oxford University Press, New York, 1995.
[28] S. Thomas, K. Chen, J. Han, P. Purohit, and D.J. Srolovitz. Reconciling grain growth and shear-coupled grain boundary migration. Nature Commun. (2017) 8, 1764.
[29] C.Z. Wei, S. Thomas, J. Han, D.J. Srolovitz, and Y. Xiang. A continuum multi-disconnection-mode model for grain boundary migration. J. Mech. Phys. Solids (2019) 133, 103731.
[30] C.Z. Wei, L.C. Zhang, J. Han, D.J. Srolovitz, and Y. Xiang. Grain boundary triple junction dynamics: A continuum disconnection model. SIAM J. Appl. Math. (2020) 80, 1101-1122.
[31] M. Winning, G. Gottstein, and L. Shvindlerman. Stress induced grain boundary motion. Acta Mater. (2001) 49, 211-219.
[32] Y. Xiang. Modeling dislocations at different scales. Commun. Comput. Phys. (2006) 1, 383-424.
[33] L.C. Zhang, J. Han, Y. Xiang and D.J. Srolovitz. Equation of Motion for a Grain Boundary. Phys. Rev. Lett. (2017) 119(24), 246101.
[34] L.C. Zhang, J. Han, D.J. Srolovitz, and Y. Xiang. Equation of motion for grain boundaries in polycrystals. npj Comput. Mater. (2021) 7, 64.
[35] P. Zhu. Solvability via Viscosity Solutions for a Model of Phase Transitions Driven by Configurational Forces. J. Diff. Eqs. (2011) 251(10), 2833-2852.
[36] P. Zhu. Solid-Solid Phase Transitions Driven by Configurational Forces: A phase-field model and its validity. LAMBERT Academic Publishing (LAP), July 2011.
[37] P. Zhu. Regularity of Solutions to a Model for Solid Phase Transitions Driven by Configurational Forces. J. Math. Anal. Appl. (2012) 389(2), 1159-1172.
| [] |
[
"Crossover of the weighted mean fragment mass scaling in 2D brittle fragmentation",
"Crossover of the weighted mean fragment mass scaling in 2D brittle fragmentation",
"Crossover of the weighted mean fragment mass scaling in 2D brittle fragmentation",
"Crossover of the weighted mean fragment mass scaling in 2D brittle fragmentation"
] | [
"Hiroaki Katsuragi \nDepartment of Applied Science for Electronics and Materials\nInterdisciplinary Graduate School of Engineering Sciences\nKyushu University\n6-1 Kasugakoen816-8580KasugaFukuokaJapan\n",
"Daisuke Sugino \nDepartment of Applied Science for Electronics and Materials\nInterdisciplinary Graduate School of Engineering Sciences\nKyushu University\n6-1 Kasugakoen816-8580KasugaFukuokaJapan\n",
"Haruo Honjo \nDepartment of Applied Science for Electronics and Materials\nInterdisciplinary Graduate School of Engineering Sciences\nKyushu University\n6-1 Kasugakoen816-8580KasugaFukuokaJapan\n",
"Hiroaki Katsuragi \nDepartment of Applied Science for Electronics and Materials\nInterdisciplinary Graduate School of Engineering Sciences\nKyushu University\n6-1 Kasugakoen816-8580KasugaFukuokaJapan\n",
"Daisuke Sugino \nDepartment of Applied Science for Electronics and Materials\nInterdisciplinary Graduate School of Engineering Sciences\nKyushu University\n6-1 Kasugakoen816-8580KasugaFukuokaJapan\n",
"Haruo Honjo \nDepartment of Applied Science for Electronics and Materials\nInterdisciplinary Graduate School of Engineering Sciences\nKyushu University\n6-1 Kasugakoen816-8580KasugaFukuokaJapan\n"
] | [
"Department of Applied Science for Electronics and Materials\nInterdisciplinary Graduate School of Engineering Sciences\nKyushu University\n6-1 Kasugakoen816-8580KasugaFukuokaJapan",
"Department of Applied Science for Electronics and Materials\nInterdisciplinary Graduate School of Engineering Sciences\nKyushu University\n6-1 Kasugakoen816-8580KasugaFukuokaJapan",
"Department of Applied Science for Electronics and Materials\nInterdisciplinary Graduate School of Engineering Sciences\nKyushu University\n6-1 Kasugakoen816-8580KasugaFukuokaJapan",
"Department of Applied Science for Electronics and Materials\nInterdisciplinary Graduate School of Engineering Sciences\nKyushu University\n6-1 Kasugakoen816-8580KasugaFukuokaJapan",
"Department of Applied Science for Electronics and Materials\nInterdisciplinary Graduate School of Engineering Sciences\nKyushu University\n6-1 Kasugakoen816-8580KasugaFukuokaJapan",
"Department of Applied Science for Electronics and Materials\nInterdisciplinary Graduate School of Engineering Sciences\nKyushu University\n6-1 Kasugakoen816-8580KasugaFukuokaJapan"
] | [] | We performed vertical and horizontal sandwich 2D brittle fragmentation experiments. The weighted mean fragment mass was scaled using the multiplicity µ. The scaling exponent crossed over at log µc ≃ −1.4. In the small µ(≪ µc) regime, the binomial multiplicative (BM) model was suitable and the fragment mass distribution obeyed log-normal form. However, in the large µ(≫ µc) regime, in which a clear power-law cumulative fragment mass distribution was observed, it was impossible to describe the scaling exponent using the BM model. We also found that the scaling exponent of the cumulative fragment mass distribution depended on the manner of impact (loading conditions): it was 0.5 in the vertical sandwich experiment, and approximately 1.0 in the horizontal sandwich experiment. | 10.1103/physreve.70.065103 | [
"https://export.arxiv.org/pdf/cond-mat/0409770v1.pdf"
] | 28,707,205 | cond-mat/0409770 | bd837d3210af8530a7471ea33fbb3f9413e14ebb |
Crossover of the weighted mean fragment mass scaling in 2D brittle fragmentation
30 Sep 2004
Hiroaki Katsuragi
Department of Applied Science for Electronics and Materials
Interdisciplinary Graduate School of Engineering Sciences
Kyushu University
6-1 Kasugakoen816-8580KasugaFukuokaJapan
Daisuke Sugino
Department of Applied Science for Electronics and Materials
Interdisciplinary Graduate School of Engineering Sciences
Kyushu University
6-1 Kasugakoen816-8580KasugaFukuokaJapan
Haruo Honjo
Department of Applied Science for Electronics and Materials
Interdisciplinary Graduate School of Engineering Sciences
Kyushu University
6-1 Kasugakoen816-8580KasugaFukuokaJapan
We performed vertical and horizontal sandwich 2D brittle fragmentation experiments. The weighted mean fragment mass was scaled using the multiplicity µ. The scaling exponent crossed over at log µc ≃ −1.4. In the small µ(≪ µc) regime, the binomial multiplicative (BM) model was suitable and the fragment mass distribution obeyed log-normal form. However, in the large µ(≫ µc) regime, in which a clear power-law cumulative fragment mass distribution was observed, it was impossible to describe the scaling exponent using the BM model. We also found that the scaling exponent of the cumulative fragment mass distribution depended on the manner of impact (loading conditions): it was 0.5 in the vertical sandwich experiment, and approximately 1.0 in the horizontal sandwich experiment.
The origin of the power-law distribution in brittle fragmentation is one of the best-examined problems in statistical physics [1,2]. It has been examined in many recent experiments and simulations [3,4,5,6,7,8,9,10,11,12,13,14,15,16]. In particular, the universality of fragmentation transition and low-impact energy fragmentation have been discussed [9,13]. Due to the success of scaling theory with critical phenomena, it is natural to consider the universality of critical behavior for various phenomena. Kun and Herrmann discussed the possibility of percolation universality using a point impacted granular solid model [9]. They also investigated the universality of shell fragmentation [10]. Åström et al. proposed another universality law for LJ liquid and elastic beam models [13]. Dimensional analyses of the exponent of the power-law distribution have also been derived [6,8,14,15].
Previously, we conducted 2D brittle fragmentation experiments in which we applied a flat impact to one side of the specimen [17]. This consisted of a vertical sandwich procedure using glass tubes. We showed that the critical scaling differed from that of percolation transition, and proposed a binomial multiplicative (or biased cascade) model for critical fragmentation. The binomial multiplicative (BM) model is very similar to the turbulent multifractal p model [18]. This implies the similarity between brittle fragmentation and turbulence by means of multifractality. However, the BM model included a fitting parameter that was fixed at a = 2/3, although the origin of this value was not clear. When a more realistic case was considered, the model predictions did not fit the experimental results [19]. The model results also did not follow the power-law fragment mass distribution; rather, they obeyed a log-normal distribution due to the central limit theorem. * Electronic address: [email protected] Low-impact energy fragmentation measured in experiments that involved dropping a 1D glass rod yielded log-normal distributions in the relatively low-impact energy regime [3]. The log-normal form has also been observed in the 3D numerical results of viscoelastic crystal fragmentation [8]. The first discussion of the lognormal distribution for a fragmentation process is found in Kolmogoloff [20]. It is not clear how the fragment mass distribution approaches the power-law form from the log-normal distribution. Do the fragments obey any other distributions before they reach the power-law form?
The relation between the universal scaling law, the lognormal model, and multifractality is one of the most frequently discussed topics, even in the turbulent energy cascade problem [21]. Since the brittle fragmentation phenomenon is very simple, it is very useful to investigate the origin of and path to the power-law form.
In order to study this problem, we performed lowimpact energy fragmentation experiments. In addition to the glass tube results we reported previously [17,19], we also analyze the results for glass plate samples, which correspond to a horizontal sandwich procedure.
The experimental apparatus was very simple. Samples were sandwiched between a stainless steel plate and a stainless steel stage. Then, a heavy brass weight was dropped along guide poles. This experimental system was described in Ref. [17]. After fragmentation, all the fragments were collected and their masses were measured using an electronic balance. We broke 25 new glass plates. Fifteen were 30 mm × 30 mm × 0.1 mm in size, and ten were 60 mm × 60 mm × 0.1 mm in size. We set the measurement limit for the minimum mass at 0.001 g, but only analyzed the data for fragments down to m_min = 0.01 g. This m_min value is the same as that used in the glass tube experiments. The glass tubes and plates correspond to vertical and horizontal sandwich procedures, respectively.
Let us introduce a critical divergence of the weighted mean fragment mass,
$$\frac{M_2}{M_1 m_{\min}} \sim \mu^{-\sigma}, \qquad (1)$$
where M_k and µ are written as
$$M_k = \sum_m m^k n(m), \qquad \mu = m_{\min}\frac{M_0}{M_1}, \qquad (2)$$
where m and n(m) denote the fragment mass and the number of fragments of mass m, respectively. Note that the summation in Eq. (2) includes the largest fragment mass. The left-hand side of Eq. (1) also includes the factor m_min^{-1}, which was not considered in the previous definition of σ (Eq. (4) in Ref. [17]). This factor is a normalization term for the weighted mean fragment mass and gives a dimensionless value. It does not affect the value of the scaling exponent. The multiplicity parameter µ was first introduced by Campi as a pseudo-control-parameter to analyze nuclear fragmentation [22]. It indicates the dimensionless normalized fragment number.
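As a concrete illustration of Eqs. (1) and (2), the following minimal sketch (ours, not the experimental analysis code) computes the normalized weighted mean fragment mass and the multiplicity from a list of fragment masses, discarding fragments below m_min as in the text.

```python
import numpy as np

def campi_observables(masses, m_min=0.01):
    """Return (M2/(M1*m_min), mu) from fragment masses in grams;
    M_k = sum of m^k over fragments with m >= m_min, mu = m_min*M0/M1."""
    m = np.asarray(masses, dtype=float)
    m = m[m >= m_min]
    M0, M1, M2 = len(m), m.sum(), (m**2).sum()
    return M2 / (M1 * m_min), m_min * M0 / M1
```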
The entire plot of log[M_2/(M_1 m_min)] vs. log µ is shown in Fig. 1. The figure shows that the scaling crosses over around log µ_c ≃ −1.4. There are also two divergent points in Fig. 1, which are likely due to experimental failure, such as an oblique impact. However, we did not remove these points, since we do not have clear criteria to distinguish between success and failure. In the regime µ ≪ µ_c, the scaling exponent σ can be described using the previously obtained value σ = 0.84 ≤ 1 [17]. The higher-order weighted mean fragment mass exhibited a multi-scaling nature, and its exponent agreed with the one predicted by the BM model. Therefore, we expect the fragment mass distribution to obey the log-normal form in this regime. Figure 2 shows an integrated log-normal form of the cumulative fragment mass distribution N(m) = ∫_m^∞ n(m')dm' for a typical low-impact energy fragmentation (150-mm long glass tube data with log µ = −2.55). The integrated log-normal function can be written as
$$N(m) = A\int_m^{m_\infty}\frac{\exp\left[-\{\log(m'/\bar m)\}^2/2\sigma_{\ln}^2\right]}{m'\sqrt{2\pi\sigma_{\ln}^2}}\,dm', \qquad (3)$$
where A, m̄, and σ_ln are parameters, which were taken as 0.24, 10.0, and 2.0 for the solid line in Fig. 2, respectively. We used m_∞ = 20 as the cutoff scale. Since good agreement was obtained, the fragment mass distribution in the regime µ ≪ µ_c follows a log-normal distribution.
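For reference, Eq. (3) can be evaluated by straightforward numerical quadrature; the sketch below (our own, assuming the natural logarithm in the exponent) uses the parameter values quoted for the solid line in Fig. 2.

```python
import numpy as np

def N_lognormal(m, A=0.24, mbar=10.0, s=2.0, m_inf=20.0, n=4000):
    """Integrated log-normal form, Eq. (3): expected number of fragments
    heavier than m, integrated numerically up to the cutoff m_inf."""
    mp = np.geomspace(m, m_inf, n)             # log-spaced nodes in [m, m_inf]
    dens = np.exp(-np.log(mp / mbar) ** 2 / (2 * s**2)) / (mp * np.sqrt(2 * np.pi * s**2))
    return A * 0.5 * np.sum((dens[1:] + dens[:-1]) * np.diff(mp))   # trapezoid rule
```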
While most of the data exhibited a log-normal form, there were in general only a small number of fragments in the low-impact energy regime (e.g., the raw curves in Fig. 3(a)), so that it was difficult to establish the form of the distribution directly. Therefore, we measured the weighted mean fragment mass using the moments M_k of the distribution to obtain sufficient evidence. However, we encountered problems when calculating the multiscaling exponent σ_k (defined by M_{k+1}/(M_k m_min) ∼ µ^{-σ_k}) for the glass plate data, due to the large fluctuations in M_{k+1}/(M_k m_min). We did not obtain reliable estimates of σ_k for the glass plates, particularly in the large-k region. Therefore, we focused only on the M_2/(M_1 m_min) scaling here. The scaling in the large-k regime is obviously determined mainly by the largest fragment. This means that we used mean-mass statistics instead of largest-mass statistics in this paper.
For the glass tubes, some fragmentation results showed a power-law distribution in the relatively large µ regime [17]. In such a regime, µ was close to µ_c, i.e., the crossover might already have occurred (Fig. 4(b) in Ref. [17]). Due to the dimensional restrictions of the experimental apparatus, we could only examine the small µ regime for glass tubes. Therefore, the clear crossover found in the glass plate fragmentation data has not been observed previously.
Another important characteristic is the power-law form of the cumulative fragment mass distribution for a fully fragmented state. It had different exponents for the tube and plate experiments. Figure 3 shows the cumulative fragment mass distributions in the range m ≥ 0.01 g for the glass plate samples. Figures 3(a) and 3(b) give the low- and high-impact energy regime distributions, respectively. Each curve represents a different imparted energy (dropping height of the weight). The cumulative distributions of well-fragmented events (Fig. 3(b)) have a power-law portion N(m) ∼ m^{-(τ-1)} with an exponent τ − 1 of about 1. Some distributions in Fig. 3(b) contain large fragments, so that the scaling regions are restricted to almost one order of magnitude; however, most portions of the distributions follow τ − 1 = 1. The glass tube experiments had τ − 1 = 0.5 [17]. This difference between the tubes and plates indicates that the exponent τ depends on the fracturing method. The value τ − 1 = 1 does not concur with the value predicted by Hayakawa and Åström et al., τ − 1 = (d−1)/d = 1/2 (for d = 2) [8,13]. They considered the propagating and branching dynamics of the crack (or the failure wave). Therefore, in the horizontal sandwich fragmentation of glass plates, mechanisms other than crack dynamics might determine the value of τ. Moreover, the boundary conditions differ between our horizontal sandwich experiments and the simulations of Åström et al. By contrast, Behera et al. obtained a value of τ − 1 ≃ 1 in the highly fragmented state for a lateral impact disk fragmentation simulation [11]. This value agrees with our experiments, despite the difference in the loading conditions. Kadono discussed the energy balance and obtained the inequality 1/2 < τ − 1 < 1 [6]. This inequality range is also close to our result.
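A simple way to extract τ − 1 from data like Fig. 3(b) is a least-squares fit of log N(m) against log m; the sketch below (ours; a real analysis would restrict the fit to the scaling window) builds the cumulative distribution from ranked masses.

```python
import numpy as np

def fit_tau_minus_1(masses, m_min=0.01):
    """Least-squares slope of log N(m) vs log m, where N(m) is the number of
    fragments with mass >= m; returns the estimate of tau - 1."""
    m = np.sort(np.asarray(masses, dtype=float))
    m = m[m >= m_min]
    N = np.arange(len(m), 0, -1)               # rank = cumulative count
    slope, _ = np.polyfit(np.log(m), np.log(N), 1)
    return -slope
```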
On the other hand, our distributions in the low-impact energy regime showed the remains of large fragments and were rather flat (Fig. 3(a)). This behavior resembles that of the integrated log-normal form, as described in Fig. 2. Although all the curves in Figs. 3(a) and 3(b) correspond to different imparted energy states, we tried summing them up. The resulting summed curves are shown in the insets; these indicate the integrated log-normal and power-law distributions more clearly. The solid curve in Fig. 3(a) is the same as that in Fig. 2, except for the cutoff scale m_∞ = 3.5.
The weighted mean fragment mass scaling of the glass plate samples only are shown in Fig. 3(c). Here, the triangles correspond to the low-impact energy regime distributions ( Fig. 3(a)), and the circles correspond to the high-impact energy regime distributions (Fig. 3(b)). As expected, we confirmed a distinct separation between the two regimes using the weighted mean fragment mass and the multiplicity. The value of σ became large (σ ≃ 1.7) in the larger µ(≫ µ c ) regime (Fig. 1). Although the BM model can be applied to small σ(≤ 1) values, it is inappropriate for large σ(> 1). Furthermore, other models, such as the distributed and remaining cascade model, also break down for large σ values [19]. The point µ c indicates the distribution crossover from the log-normal to the power-law. We cannot explain what happens in the large µ regime at present. Perhaps the smallest limit of the splitting mass might appear above µ c , similar to an idea proposed by Matsushita and Sumida [23]. We assumed that the crossover point was universal, but there appears to be a slight difference between the points shown in Figs. 1 and 3(c). More details and direct observations of fragmentation are necessary to understand the crossover precisely. Theoretical studies are also required.
In particular, an analysis in the vicinity of µ c would be interesting to see how the transition occurs. The crossover in Fig. 1 is reasonable from the viewpoint of the limit point.
The limit point (log µ, log[M 2 /(M 1 m min )])=(0, 0) corresponds to the completely fragmented state. In such a state, all fragments are the smallest unit size fragments. While it is extremely difficult to achieve such a state (i.e., fragment mass distributions exhibit power-laws in general), it can exist as an ideal limit case. If the BM scaling stretches until log µ = 0 in Fig. 1, log[M 2 /(M 1 m min )] never reaches the value 0. This is a nonphysical state. Therefore, it is natural that the crossover point corresponds to a certain value µ c .
Aström et al. recently proposed a generic fragment mass distribution form that was composed of a powerlaw portion and an exponential portion [14]. The former originates from the dynamics of crack branching and merging, and the latter results from the Poisson process. Their proposed form also applies to the low-impact energy regime. The inset in Fig. 2 depicts a semi-log plot of the same N (m) distribution that was explained using a log-normal distribution. It shows a straight (i.e., exponential) tail, which suggests that theÅström model may also be suitable. However, from the viewpoint of the multiscaling nature of critical fragmentation, the BM model and log-normal distribution are more plausible. Diehl et al. obtained a similar coincidence between 2D explosive fragmentation simulations and the BM model [16]. They also discussed the log-normal distribution form.
Very recently, Wittel et al. reported the results of shell fragmentation experiments and simulations [10]. They concluded that the impact fragmentation of shells showed a continuous transition, while the explosive one showed an abrupt transition. In our experiments, fragmentation seemed to occur suddenly. We could not obtain samples that only had visible macro-cracks, but did not split. A small amount of imparted energy cannot make brittle solids cleave. This might imply a "latent-heat-like behavior". That is, the beginning of fragmentation requires a finite "latent energy" to generate macro-cracks. The splitting occurs abruptly and it proceeds according to the BM model statistics. We can observe critical scaling in the range µ > 0. However, we cannot discuss the scaling in the range µ < 0, since it corresponds to the unfragmented state. The fragmentation transition of open 2D objects involved in flat impacts is not yet understood very well in terms of the phase transition, and this is still an open question. Conversely, the transition from the log-normal to the power-law is characterized by the crossover of the weighted mean fragment mass scaling, as demonstrated above.
Wittel et al. also revealed that the scaling exponent τ is dependent on the loading conditions in numerical simulations [10]. While this was not consistent with their experimental results, it concurs with our findings qualitatively if we consider the vertical and horizontal sandwich procedures to correspond to impact and explosive fragmentation processes, respectively. Quantitatively, their values of τ differed from ours slightly. This might result from the difference between an open 2D sample and a closed shell sample.
In summary, we examined 2D brittle fragmentation using experiments with glass tubes and glass plates. The exponent τ had different values depending on the loading conditions, which consisted of either a horizontal or vertical sandwich impact to the 2D surface. Contrarily, the normalized weighted mean fragment mass scaling was universal and had a crossover point at which the fragment mass distribution changed from a log-normal to a power-law type. The results were consistent with other recent experiments and numerical simulations, but included new experimental findings about the relatively large µ weighted mean fragment mass scaling.
FIG. 1 :
1The log of the dimensionless weighted mean fragment mass [M2/(M1mmin)] as a function of the log of the pseudocontrol-parameter µ. The scaling crossed over from a lognormal to a power-law distribution regime around log µc ≃ −1.4. While σ satisfied σ ≤ 1 in the small µ(≪ µc) regime, it exceeded 1 in the µ ≫ µc regime. This implies that the BM model is unsuitable for the large µ regime.
FIG. 2 :
2Log-normal form of the fragment mass distribution in the low-impact energy regime (log µ = −2.55 data). The sample was 150-mm long glass tube. The inset shows a semilog plot of the same distribution. The dashed line indicates an exponential-like tail.
FIG. 3 :
3The cumulative fragment mass distribution of the glass plate samples in (a) the low-impact energy regime and (b) the high-energy-impact energy regime. (c) Scaling plot of log[M2/(M1mmin)] vs. log µ for all glass plate samples. The triangles correspond to the low-impact-energy cases (a), and the circles correspond to the high-impact-energy cases (b). The insets of (a) and (b) show the all summed curves.
We thank Dr. J. A.Åström and Prof. H. Nakanishi for their helpful comments. This research was partially supported by the Ministry of Education, Culture, Sports, Science and Technology, through a Grant-in-Aid for Young Scientists No. 16740206.
. D L Turcotte, J. Geophys. Res. 911921D. L. Turcotte, J. Geophys. Res. 91, 1921 (1986).
. T Ishii, M Matsushita, J. Phys. Soc. Jpn. 613474T. Ishii and M. Matsushita, J. Phys. Soc. Jpn. 61, 3474 (1992).
. L Oddershede, P Dimon, J Bohr, Phys. Rev. Lett. 713107L. Oddershede, P. Dimon, and J. Bohr, Phys. Rev. Lett. 71, 3107 (1993).
. A Meibom, I Balslev, Phys. Rev. Lett. 762492A. Meibom and I. Balslev, Phys. Rev. Lett. 76, 2492 (1996).
. T Kadono, Phys. Rev. Lett. 781444T. Kadono, Phys. Rev. Lett. 78, 1444 (1997).
. T Kadono, M Arakawa, Phys. Rev. E. 6535107T. Kadono and M. Arakawa, Phys. Rev. E 65, 035107(R) (2002).
. Y Hayakawa, Phys. Rev. B. 5314828Y. Hayakawa, Phys. Rev. B 53, 14828 (1996).
. F Kun, H J Herrmann, Phys. Rev. E. 592623F. Kun and H. J. Herrmann, Phys. Rev. E 59, 2623 (1999).
. F Wittel, F Kun, H J Herrmann, B H Kröplin, Phys. Rev. Lett. 93F. Wittel, F. Kun, H. J. Herrmann, and B. H. Kröplin, Phys. Rev. Lett. 93, (2004).
. B Behera, F Kun, S Mcnamara, H J Herrmann, cond-mat/0404057B. Behera, F. Kun, S. McNamara, and H. J. Herrmann, cond-mat/0404057.
. J Åström, J Timonen, Phys. Rev. Lett. 783677J.Åström and J. Timonen, Phys. Rev. Lett. 78, 3677 (1997).
. J A Åström, B L Holian, J Timonen, Phys. Rev. Lett. 843061J. A.Åström, B. L. Holian, and J. Timonen, Phys. Rev. Lett. 84, 3061 (2000).
. J A Åström, F Ouchterlony, R P Linna, J Timonen, Phys. Rev. Lett. 92245506J. A.Åström, F. Ouchterlony, R. P. Linna, and J. Tim- onen, Phys. Rev. Lett. 92, 245506 (2004).
. J A Åström, R P Linna, J Timonen, preprintJ. A.Åström, R. P. Linna, and J. Timonen, preprint.
. A Diehl, H A Carmona, L E Araripe, J S Andrade, Jr , G A Farias, Phys. Rev. E. 624742A. Diehl, H. A. Carmona, L. E. Araripe, J. S. Andrade, Jr., and G. A. Farias, Phys. Rev. E, 62, 4742 (2000);
. J A Åström, R P Linna, J Timonen, 6548101J. A.Åström, R. P. Linna, and J. Timonen, ibid., 65, 048101 (2002);
. A Diehl, J S Andrade, Jr , G A Farias, 6548102A. Diehl, J. S. Andrade, Jr., and G. A. Farias, ibid., 65, 048102 (2002)
. H Katsuragi, D Sugino, H Honjo, Phys. Rev. E. 6846105H. Katsuragi, D. Sugino, and H. Honjo, Phys. Rev. E 68, 046105 (2003).
. C Meneveau, K R Sreenivasan, Phys. Rev. Lett. 591424C. Meneveau and K. R. Sreenivasan, Phys. Rev. Lett. 59, 1424 (1987).
. H Katsuragi, D Sugino, H Honjo, cond-mat/0310479World Scientific233Singaporein Thinking in PatternsH. Katsuragi, D. Sugino, and H. Honjo, in Thinking in Patterns, pp. 233, (World Scientific, Singapore, 2004): cond-mat/0310479.
. A N Kolmogoloff, Doklady Acad, Nauk. SSSR. 3199A. N. Kolmogoloff, Doklady Acad. Nauk. SSSR 31, 99 (1941).
. U Frisch, Turbulence , Cambridge University PressCambridgeU. Frisch, Turbulence, (Cambridge University Press, Cambridge, 1995).
. X Campi, Phys. Lett. B. 208351X. Campi, Phys. Lett. B 208, 351 (1988);
. X Campi, H Krivine, In Ref, 312X. Campi and H. Krivine, in Ref. [1], pp. 312.
. M Matsushita, . K Sumida, Bull. Facul. Sci. & Eng. Chuo Univ. 3169M. Matsushita and . K. Sumida, Bull. Facul. Sci. & Eng. Chuo Univ. 31, 69 (1988).
| [] |
[
"Time-dependent moments from partial differential equations and the time-dependent set of atoms",
"Time-dependent moments from partial differential equations and the time-dependent set of atoms"
] | [
"Raúl E Curto \nDepartment of Mathematics\nUniversity of Iowa\n52246Iowa City, IowaU.S.A\n",
"Philipp J Di Dio \nDepartment of Mathematics and Statistics\nUniversity of Konstanz\nUniversitätsstraße 10D-78464KonstanzGermany\n",
"Milan Korda \nLaboratoire d'analyse et d'architecture des systèmes (LAAS-CNRS)\n7 Avenue du Colonel Roche31031ToulouseFrance\n\nFaculty of Electrical Engineering\nCzech Technical University\nTechnická 2CZ-16626Prague, PragueCzech Republic\n",
"Victor Magron \nLaboratoire d'analyse et d'architecture des systèmes (LAAS-CNRS)\n7 Avenue du Colonel Roche31031ToulouseFrance\n\nInstitut de Mathématiques de Toulouse\n118 Route de Narbonne31062ToulouseFrance\n"
] | [
"Department of Mathematics\nUniversity of Iowa\n52246Iowa City, IowaU.S.A",
"Department of Mathematics and Statistics\nUniversity of Konstanz\nUniversitätsstraße 10D-78464KonstanzGermany",
"Laboratoire d'analyse et d'architecture des systèmes (LAAS-CNRS)\n7 Avenue du Colonel Roche31031ToulouseFrance",
"Faculty of Electrical Engineering\nCzech Technical University\nTechnická 2CZ-16626Prague, PragueCzech Republic",
"Laboratoire d'analyse et d'architecture des systèmes (LAAS-CNRS)\n7 Avenue du Colonel Roche31031ToulouseFrance",
"Institut de Mathématiques de Toulouse\n118 Route de Narbonne31062ToulouseFrance"
] | [] | We study the time-dependent moments and associated polynomials arising from the partial differential equation ∂ t f = ν∆f + g · ∇f + h · f , and consider in detail the dual equation. For the heat equation we find that several non-negative polynomials which are not sums of squares become sums of squares under the heat equation in finite time. We show that every non-negative polynomial in R[x, y, z] ≤4 becomes a sum of squares in finite time under the heat equation. We solve the problem of moving atoms under the equation ∂ t f = g · ∇f + h · f with f 0 = µ 0 being a finitely atomic measure. The time evolution µ t = k i=1 c i (t) · δ xi(t) of the atom positions x i (t) are described by the transport term g · ∇ and the time-dependent coefficients c i (t) have an explicit solution depending on x i (t), h, and div g. | null | [
"https://export.arxiv.org/pdf/2211.04416v2.pdf"
] | 253,397,907 | 2211.04416 | c38ba7124a3d5146240504ded1a7a052c2840408 |
Time-dependent moments from partial differential equations and the time-dependent set of atoms
13 Mar 2023
Raúl E Curto
Department of Mathematics
University of Iowa
52246Iowa City, IowaU.S.A
Philipp J Di Dio
Department of Mathematics and Statistics
University of Konstanz
Universitätsstraße 10D-78464KonstanzGermany
Milan Korda
Laboratoire d'analyse et d'architecture des systèmes (LAAS-CNRS)
7 Avenue du Colonel Roche31031ToulouseFrance
Faculty of Electrical Engineering
Czech Technical University
Technická 2CZ-16626Prague, PragueCzech Republic
Victor Magron
Laboratoire d'analyse et d'architecture des systèmes (LAAS-CNRS)
7 Avenue du Colonel Roche31031ToulouseFrance
Institut de Mathématiques de Toulouse
118 Route de Narbonne31062ToulouseFrance
Time-dependent moments from partial differential equations and the time-dependent set of atoms
13 Mar 2023arXiv:2211.04416v2 [math.FA]momenttime evolutionpartial differential equationssum of squaresheat equation 2020 MSC: Primary: 47A5744A60; Secondary: 30E0565D32
We study the time-dependent moments and associated polynomials arising from the partial differential equation ∂ t f = ν∆f + g · ∇f + h · f , and consider in detail the dual equation. For the heat equation we find that several non-negative polynomials which are not sums of squares become sums of squares under the heat equation in finite time. We show that every non-negative polynomial in R[x, y, z] ≤4 becomes a sum of squares in finite time under the heat equation. We solve the problem of moving atoms under the equation ∂ t f = g · ∇f + h · f with f 0 = µ 0 being a finitely atomic measure. The time evolution µ t = k i=1 c i (t) · δ xi(t) of the atom positions x i (t) are described by the transport term g · ∇ and the time-dependent coefficients c i (t) have an explicit solution depending on x i (t), h, and div g.
Introduction
Static Moments
Let µ be a Borel measure on R n for some n ∈ N and α = (α 1 , . . . , α n ) ∈ N n 0 . The α-moment s α of µ is
s α = R n x α dµ(x)(1)
with x α := x α1 1 · · · x αn n . The classical moment problem is: Given finitely or infinitely many real numbers s α in a sequence s = (s α ) α∈A⊆N n 0 , does there exist a Borel measure µ such that (1) holds for all α ∈ A? If the answer to this question is affirmative, then µ is called a representing measure of s and s is called a moment sequence. If A is finite, then s is a truncated moment sequence, and if A = N n 0 , then s is called full moment sequence. Additionally, if supp µ ⊆ K for some K ⊆ R n , then s is called a K-moment sequence. A classical tool to investigate moment sequences is the Riesz functional L = L s : R[x 1 , . . . , x n ] → R defined by L s (x α ) := s α and linearly extended to R[x 1 , . . . , x n ] if s is full or linearly extended to {x α | α ∈ A} if s is truncated. L : R[x 1 , . . . , x n ] → R is called a moment functional if it is represented by some Borel measure µ. We always assume measures to be non-negative unless specifically denoted as signed measures.
Moments are a classical field of research and in modern times they are still of interest, e.g. because of the following application in optimization. Let p ∈ R[x 1 , . . . , x n ] be a polynomial. Then p(x) dµ(x) = min
s K-moment sequence, s0=1 L s (p),(2)
since for the first equality we have that for any x 0 ∈ R n the Dirac measure δ x0 acts as a point evaluation in (1) and the second formulation holds by linearity of the integral using the s α definition in (1). See e.g. [15] for more. A classical result for truncated moment sequences is the Richter [24] (or Richter-Rogosinski-Rosenbloom [24][25][26]) Theorem; see [3] for a detailed discussion about the historical development.
Theorem 1.1 (Richter Theorem 1957 [24,Satz 4]). Let d ∈ N, V be a ddimensional real vector space of measurable real functions f : X → R on a measurable space X , and L : V → R be a moment functional, i.e., there exists a measure µ on X such that
L(f ) = X f (x) dµ(x)
for all f ∈ V. Then there are c 1 , . . . , c k > 0 and x 1 , . . . , x k ∈ X with k ≤ d such that
L(f ) = k i=1 c i · f (x i )
for all f ∈ V, i.e., we always find a k-atomic representing measure ν = k i=1 c i · δ xi of L with k ≤ d.
The points x i in the k-atomic representing measure ν are called atoms and the minimal number k of atoms for fixed L is called Carathéodory number C(L) resp. C(s) for truncated moment sequences. A k-atomic representing measure is also called a (Gauss) quadrature rule [27]. Two questions arise naturally in theory and applications:
(a) How many atoms C(L) are required to represent L? Very recent studies about the Carathéodory number C(L) are [28][29][30] and the set of atoms (or core variety) is studied in [31][32][33].
All the studies and references given so far have one thing in common: They study static moments, i.e., s is fixed in these studies and properties are only derived from and for s. Hence, the study of the moment cone S A (= set of all moment sequences s) is only pointwise and collecting or even connecting moment sequences with the same properties in the moment cone is difficult and does not arise naturally.
Time-dependent Moments
Let n, m ∈ N. We denote by C ∞ b (R n , R m ) the set of smooth bounded functions
C ∞ b (R n , R m ) := {f ∈ C ∞ (R n , R m ) | ∂ α f ∞ < ∞ for all α ∈ N n 0 }
and by S(R n , R m ) the Schwartz functions S(R n , R m ) := f ∈ C ∞ (R n , R m ) x α ∂ β f ∞ < ∞ for all α, β ∈ N n 0 .
By C d ([0, ∞), C ∞ b (R n , R m )) we denote all functions f : R n × [0, ∞) → R m such that (i) f ( · , t), ∂ t f ( · , t), . . . , ∂ d t f ( · , t) ∈ C ∞ b (R n , R m ) for all t ≥ 0 and (ii) ∂ α f (x, · ) ∈ C d ([0, ∞), R m ) for all x ∈ R n and α ∈ N n 0 . Let d ∈ N 0 , ν = (ν 1 , . . . , ν n ) T ∈ [0, ∞) n , ν · ∆ = ν 1 · ∂ 2 1 + · · · + ν n · ∂ 2 n be the anisotropic Laplace operator, g = (g 1 , . . . , g n ) ∈ C d ([0, ∞), C ∞ b (R n , R n )) be a smooth bounded vector field, h = (h i,j ) m i,j=1 ∈ C d ([0, ∞), C ∞ b (R n , R m×m )) be a smooth bounded matrix function, k = (k 1 , . . . , k m ) T ∈ S(R n , R m ) be a Schwartz function vector-valued function, and a ∈ R n be a vector.
Then by [34,Thm. 2.10] the initial value problem
∂ t f (x, t) = ν∆f (x, t) + [ax + g(x, t)] · ∇f (x, t) + h(x, t) · f (x, t) + k(x, t) f (x, 0) = f 0 (x) ∈ S(R n , R m )(3)
with (ax + g) · ∇ := (a 1 x 1 + g 1 ) · ∂ 1 + · · · + (a n x n + g n ) · ∂ n has a unique solution
f ∈ C d+1 ([0, ∞), S(R n , R m )). Additionally, for m = 1 and f 0 ≥ 0 we have that f ( · , t) ≥ 0 for all t ≥ 0. For f 0 ∈ S(R n ) with f 0 ≥ 0 we can calculate the moments s α (0) := R n x α · f 0 (x) dx
for all α = (α 1 , . . . , α n ) ∈ N n 0 . Since the time-dependent solution f of (3) is unique, Schwartz function valued, and non-negative for m = 1 and f 0 ≥ 0, we find that the solution f induces unique time-dependent moments
s α (t) := R n x α · f (x, t) dx(4)
for all t ≥ 0. The time-dependent moments can be defined for nonlinear partial differential equations as well. Let us look at a (non-linear) example. In [34,Lem. 3.6] for Burgers' equation
∂ t f (x, t) = −f (x, t) · ∂ x f (x, t) we calculated the time-dependent moments s k,p (t) := R x k · f (x, t) p dx
of f (x, t) p for all k, p ∈ N 0 and found the explicit expression
s k,p (t) = k i=0 s k−i,p+i (0) i! · t i · i−1 j=0 (p + j) · (k − j) 1 + (p + j) 2 ∈ R[t](5)
which depends only on the initial values s k−i,p+i (0) of f 0 . Hence, despite the fact that for any non-constant f 0 ∈ S(R) the classical solution of Burgers' equation breaks down in finite time, the moments are a priori known for all times t ∈ R.
In [34,Exm. 3.7] we then calculate for the one-tooth-function
f 0 (x) := 1 + x for x ∈ [−1, 0], 1 − x for x ∈ [0, 1] 0 else ≥ 0
the time-dependent moments s 0,1 (t), s 1,1 (t), and s 2,1 (t) to find
R (x − t) 2 · f (x, t) dx = L s(t) ((x − t) 2 ) = 1 6 − 2 15 t 2 t→±∞ −−−−→ −∞.(6)
Applying a molifier S ε to f 0 gives S ε f 0 ∈ S(R) for all ε > 0 and (6) changes continuously with ε > 0. Then (6) holds at least for the time of the existence of the classical solution. But the classical solution remains non-negative which contradicts (6), i.e., Burgers' equation breaks down in finite time. In other words, Burgers' equation as a (non-linear) transport equation, starting with a non-negative f 0 means that the classical solution is also non-negative as long as it exist. While from (5) we see that a priori the moments might exist for all times, the derivation of (5) requires that f (x, t) is a classical solution, i.e., smooth and even Schwartz. The break down of the classical solution of Burgers' equation in finite time is therefore not observed through the moments becoming infinite (i.e., they stop to exist), but through the non-negativity property of the representing measure which is encoded in the moments. For some t > 0 we find a non-negative polynomial p ∈ R[x] such that L s(t) (p) < 0, i.e., (6) shows that non-negativity is not preserved for all times and therefore the classical solution u does not exist for all times. The Burgers' equation example has two nice features:
(a) the time-dependent moments are polynomial in time: s k,p (t) ∈ R[t], and (b) the example to contradict non-negativity of the moments only needs moment up to degree 2, i.e., 0th, 1st, and 2nd moments.
Studying (3) with general ν = (ν 1 , . . . , ν n ) ∈ [0, ∞) n , C ∞ b -functions g i and h i,j , and Schwartz functions k i it is evident that neither (a) nor (b) need to be satisfied anymore. I.e. we can in general not hope for the time-dependent moments s α (t) to be polynomial in time, depending only on the initial values s α (0), and to observe certain properties we no longer can rely on finitely many moments s α (t). We have to explore the computational and theoretical limits of time-dependent moments s α (t). That is, among other things, one purpose of this study. We say that a (moment) sequence s evolves with respect to or along the partial differential equation (3) if a representing measure of s evolves with respect to (3). Hence, in principle the time-evolution s(t) depends on the choice of representing measure µ 0 of s(0) if s(0) is indeterminate, i.e., s(0) has more than one representing measure.
The time-dependent moments s(t) from the heat equation ∂ t f = ∆f were studied extensively in [35]. In [36][37][38][39] moments also have been applied to (nonlinear) PDEs. In the present work we proceed the study [35] and go beyond the heat equation. The paper is structured as follows.
In the next section (Section 2) we describe for (3) with k = 0 the dual action on the polynomials. We describe the dual action by the dual operator of (3) and show that polynomials in general are no longer polynomials for t > 0 but at least remain in an algebra Pol(R n ) (see (7) for the definition). For ν∆ + g∇ + h with g(x, t) = g(t) and h(x, t) = h(t) we can solve the dual action analytically (see Corollary 2.6). We find that g and h have no effect on non-negativity and therefore study the (dual) action of the Laplace operator on polynomials in Section 3 more closely. We give a simple way to solve the polynomial heat equation in Theorem 3.3. We collect several results of this action and also present several specific results on non-negative polynomials. We show several examples where non-negative polynomials become sums of squares under the heat equation and also give counter examples, when a non-negative polynomial which is not a sum of squares does not become a sum of squares. In Theorem 3.20 we show for non-negative polynomials f ∈ R[x, y, z] ≤4 that under the heat equation they become sum of squares in finite time. I.e., every nonnegative polynomial in R[x, y, z] ≤4 is generated by taking a sum of squares in R[x, y, z] ≤4 and evolving it along the heat equation with negative times. All our examples of non-negative polynomials in R[x, y] ≤2d with d ∈ N 0 show that they also become sum of squares in finite time. This observation leads us to Open Problem 5.1. In Section 4 we investigate the application of (3) with ν = 0 and k = 0 to atomic measures. We find that since g(x, t) · ∇ and h(x, t) in general do not commute and therefore a solution can not be constructed from solutions of the individual operators, for atomic measures we can at first solve the timeevolution with respect to the transport operator g(x, t) · ∇ and then apply the scaling operator h(x, t). We find that the number of atoms is unchanged. We describe the time-evolution of µ t = k i=1 c i (t) · δ xi(t) in Lemma 4.1 and Theorem 4.2. For the full moment problem we show that the time-dependent moment sequence (functional) remains a boundary of the moment cone for all times. For the truncated moment problem we show that we can enter the interior of the moment cone and that the time-evolution in general depends on the representing measure. In Section 5 we summarize the results, give final discussions, and state the open problem.
The dual of a partial differential equation acting on polynomials
By [34, Thm. 2.10] we know that (3) has a unique solution f ∈ C d+1 ([0, ∞), S(R n , R m )) and therefore the time-dependent moments s α (t) in (4) exist for all times t ∈ [0, ∞). In this section we show that like in the heat equation [35] the action of the operator A = ν · ∆ + g(x, t)∇ + h(x, t) has a dual action A * acting on the polynomials, i.e., the time-dependency of the solution f (x, t) is moved to a time-dependency of p(x, t):
p 0 (x) · f (x, t) dx = p(x, t) · f 0 (x) dx
for all t ∈ [0, ∞). We will see in Theorem 2.2 that while p(x, 0) ∈ R[x 1 , . . . , x n ], we have in general p(x, t) ∈ R[x 1 , . . . , x n ] for any t = 0. We have to introduce the following space of at most polynomially increasing functions Pol(R n ) on R n :
Pol(R n ) := {f ∈ C ∞ (R n ) | for all α ∈ N n 0 there exists p α ∈ R[x] such that |∂ α f (x)| ≤ p α (x) for all x ∈ R n }.(7)
We collect some simple properties of Pol(R n ).
Lemma 2.1. Let n ∈ N. Then the following hold: . For the first we have sin
i) Pol(R n ) is an algebra. ii) S(R n ) Pol(R n ). iii) R[x 1 , . . . , x n ] R[x 1 , . . . , x n ] + C ∞ b (R n ) Pol(R n ) S(R n ) ′ . Proof.x 1 ∈ [R[x 1 , . . . , x n ] + C ∞ b (R n )] \ R[x 1 , . . . , x n ], for the second we have x 2 1 · sin x 1 ∈ Pol(R n ) \ [R[x 1 , . . . , x n ] + C ∞ b (R n )]
, and the last is clear.
Let p ∈ Pol(R n ) and f be the unique solution of
∂ t f (x, t) = ν∆f (x, t) + g(x, t) · ∇f (x, t) + h(x, t) · f (x, t) f (x, 0) = f 0 (x) with f 0 ∈ S(R n ), g = (g 1 , . . . , g n ) T ∈ C([0, ∞), C ∞ b (R n , R)), and h ∈ C([0, ∞), C ∞ b (R n , R)). Then ∂ t p(x) · f (x, t) dx = p(x) · [ν∆ + g(x, t) · ∇ + h(x, t)]f (x, t) dx
is by partial integration (since p ∈ Pol(R n ) and f ∈ S(R n ))
= f (x, t) · [ν∆ + g(x, t) · ∇ − div g(x, t) + h(x, t)]p(x) dx
The following result shows, that the time evolution of f can be shifted to p. The proof uses techniques of the semi-group approximations of the solution f .
Theorem 2.2. Let d ∈ N 0 , ν ≥ 0, g = (g 1 , . . . , g n ) T ∈ C d ([0, ∞), C ∞ b (R n ) n ), and h ∈ C d ([0, ∞), C ∞ b (R n )). Let f ∈ C d+1 ([0, ∞), S(R n )) be the unique solu- tion of ∂ f (x, t) = ν∆f (x, t) + g(x, t) · ∇f (x, t) + h(x, t) · f (x, t) f (x, 0) = f 0 (x)(8)
with f 0 ∈ S(R n ). Let T > 0. Then the unique solution p T of
∂ t p T (x, t) = ν∆p T (x, t) − g(x, T − t) · ∇p T (x, t) + (h(x, T − t) − div g(x, T − t)) · p T (x, t) p(x, 0) = p 0 (x) ∈ Pol(R n ) (9) fulfills p T ∈ C d+1 ([0, T ], Pol(R n ))
, and we have
p 0 (x) · f (x, T ) dx = p T (x, T ) · f 0 (x) dx.(10)
Proof. Note, the solution f of (8) is unique since it is unique on any open bounded set U ⊂ R n , see e.g. [40,Ch. 7], and we have f ∈ C d+1 ([0, ∞), S(R n )), see e.g. [34]. Let N ∈ N and
Z N = {t 0 = 0 < t 1 < · · · < t N = T } be a decomposition of [0, T ].
Then for the operator
A(x, t) = ν∆ + g(x, t) · ∇ + h(x, t)
we have the dual operator
A * (x, t) = ν∆ − g(x, t) · ∇ − div g(x, t) + h(x, t).
Note, of course, we actually also have to give the domain of the operators A and A * . We are working for A on S(R n ) and for A * it is therefore sufficient to work on Pol(R n ). We only have to ensure that p T ∈ Pol(R n ).
The unique solution f can be approximated in S(R n ) by the semigroup approach (Trotter [41])
f (x, T ) = lim N →∞ N i=1 exp ti ti−1 A(x, s) ds f 0 (x).(11)
For any operator B we have for the dual B * the relation f, Bg = B * f, g and for exponentials exp(B) we have f, exp(B)g = exp(B * )f, g . When we apply these to (11) we have to pay attention at the order of the operators, since in general they do not commute and are additionally time-dependent. We use the order
N i=1 B i = B N B N −1 · · · B 1 in this formulas. Hence, we get R n p 0 (x) · f (x, T ) dx = lim N →∞ R n p 0 (x) · N i=1 exp ti ti−1 A(x, s) ds f 0 (x) dx = lim N →∞ R n 1 i=N exp ti ti−1 A * (x, s) ds p 0 (x) · f 0 (x) dx = R n p T (x, T ) · f 0 (x) dx.
The last equality holds in the same way as (11). We set
p T (x, T ) := lim N →∞ 1 i=N exp ti ti−1 A * (x, s) ds p 0 (x)
and see that p T solves (10) since the last equality holds for all f 0 ∈ S(R n ), i.e., especially for all test functions f 0 ∈ C ∞ 0 (R n ) ⊂ S(R n ). This indeed shows that p T ∈ Pol(R n ) and the time-dependency of A(x, t) combined with the fact that A(x, t) and A(x, t ′ ) for t = t ′ in general do not commute shows that we have to take the reverse order in the operator product, i.e., p T (x, t) solves by
substituting t → T − t in A * the equation ∂ t p T (x, t) = A * (x, T − t)p T (x, t).
Remark 2.3. Theorem 2.2 holds with the same proof also for the anisotropic Laplace operator ν · ∆ = ν 1 ∂ 2 1 + · · · + ν n ∂ 2 n with ν = (ν 1 , . . . , ν n ) ∈ [0, ∞) n . Additionally, let M ∈ R n×n be a symmetric positive-definite matrix. Then by a change of coordinates the operator (∂ 1 , . . . , ∂ n )M (∂ 1 , . . . , ∂ n ) can be diagonalized to the anisotropic Laplace operator ν · ∆.
• Remark 2.4. In Theorem 2.2 we have seen that the action of (3) on f 0 is shifted to p 0 . But since the dual action on p 0 is independent on f 0 , it also provides the action (3) on measures µ 0 instead of functions f 0 . This provides a way to study the set of atoms in Section 4. • Remark 2.5. In the proof of Theorem 2.2 note that A = ν∆+g∇+h is in general an unbounded operator, i.e., it is not defined on all L 2 (R n ).
Since ∂ t f = Af with f 0 ∈ S(R n ) has a unique Schwartz function solution it is therefore sufficient to take the domain of A as D(A) = S(R n ). This very restrictive domain enables us in Theorem 2.2 to have Pol(R n ) ⊆ D(A * ). Here, the restriction that g and h are
C ∞ b -functions is essential. It is not sufficient to have A such that Af ∈ S(R n ) for any f ∈ S(R n ). E.g. take ν = 0, g = 0 and h = x 4 with f 0 (x) = e −x 2 . Then ∂ t f (x, t) = x 4 · f (x, t) has as an ordinary differential equation in t with fixed x the unique solution f (x, t) = e −x 2 +t·x 4 , i.e., f ( · , t) / ∈ S(R n ) for any t > 0. Additionally note, that in [42, Thm. 2.
2] a condition on the generator A is given such that the semi-group e At maps the Schwartz class S to the Schwartz class S.
• We have seen in the previous proof that the time-reversal for the dual equation acting on p 0 appears because ∆, g(x, t)∇, and h(x, t) do not commute (pairwise) in general. If g and h are time-independent, then of course the timereversal disappears naturally since no time-dependency exists. Another way the time-reversal disappears is, when ∆, g(x, t)∇, and h(x, t) commute (pairwise), e.g. when g and h do not depend on x. We then have the following explicit solution for p t (x, t). We denote by Θ t the heat kernel, i.e.,
Θ t (x) = 1 (4πt) n/2 · exp − x 2 4t
for all t > 0.
Corollary 2.6. Let n ∈ N 0 , ν ≥ 0, g = (g 1 , . . . , g n ), and g 1 , . . . , g n , h ∈ C d ([0, ∞), R)
. Then for the dual action (9) and (10) in Theorem 2.2 we have
p t (x, t) = e H(t) · [Θ νt * p 0 ](x + G(t))
with Θ νt the heat kernel, G(t) := t 0 g(t) ds, and H(t) := t 0 h(s) ds. In [35] the first and the second author studied the time-dependent moments from the heat equation in more detail. For the dual action it was especially found that
p 0 ∈ R[x 1 , . . . , x n ] ≤d ⇒ [Θ νt * p 0 ](x) ∈ R[x 1 , . . . , x n ] ≤d
for all t ≥ 0 and additionally of course because of the convolution with the non-negative heat kernel
p 0 ≥ 0 ⇒ [Θ νt * p 0 ](x) ≥ 0 for all t ≥ 0. Hence, with p 0 ∈ R[x 1 , . . . , x n ] in Corollary 2.6 we have p t (x, t) ∈ R[x 1 , . . . , x n ]
for all t ∈ [0, ∞) (and even for all t ∈ R, see [35]). In the general case of Theorem 2.2 we only can ensure
p t (x, t) ∈ Pol(R n ), but not p t (x, t) ∈ R[x 1 , . . . , x n ].
To see that, let e.g. ν = 0 and g = 0, i.e., we have the explicit solution
p t (x, t) = exp t 0 h(x, s) ds · p 0 (x) and with h(x) = sin(x) we have that p t (x, t) ∈ R[x 1 , . . . , x n ] for all t = 0.
Therefore, in case of Corollary 2.6 we have that g and h do not alter the properties of p 0 (significantly), but the group action induced by the Laplace operator might give changes. We will see that in the next section.
Non-negative polynomials and the heat equation
Time-dependent moments induced by the heat equation were already studied in [35]. There also the dual action on the polynomials was observed. We continue this investigation. We repeat the essential definitions and results for the convenience of the reader.
Definition 3.1. Let d ∈ N 0 . We define p 2d , p 2d+1 ∈ R[x, t] by p 2d (x, t) := d j=0 (2d)! (2d − 2j)! · j! · t j · x 2d−2j and p 2d+1 (x, t) := d j=0 (2d + 1)! (2d + 1 − 2j)! · j! · t j · x 2d+1−2j . Example 3.2. We have p 0 (x, t) = 1 p 1 (x, t) = x p 2 (x, t) = 2t + x 2 p 3 (x, t) = 6tx + x 3 p 4 (x, t) = 12t 2 + 12tx 2 + x 4 p 5 (x, t) = 60t 2 x + 20tx 3 + x 5 p 6 (x, t) = 120t 3 + 180t 2 x 2 + 30tx 4 + x 6 . . . • Straightforward calculations show that p k , k ∈ N 0 , solve the initial value heat equation ∂ t p k (x, t) = ∂ 2 x p k (x, t) p k (x, 0) = x k .(12)
Hence, by linearity of the heat equation we have the following extension of Definition 3.1 and the observation (12).
Theorem 3.3. Let d ∈ N 0 , n ∈ N, and f 0 (x) = α∈N n 0 c α · x α ∈ R[x 1 , . . . , x n ]. Then p f0 (x, t) := α∈N n 0 c α · p α1 (x 1 , t) · · · p αn (x n , t) ∈ R[x 1 , . . . , x n , t](13)
solves the initial value heat equation
∂ t f (x, t) = ∆f (x, t) f (x, 0) = f 0 (x).(14)
Proof. By linearity of the Laplace operator ∆ it is sufficient to look at
f 0 (x) = x α for α ∈ N n 0 . By (12) we already have ∂ t p αi (x i ) = ∂ 2 i p αi (x) and hence ∂ t p f0 (x, t) = ∂ t [p α1 (x 1 , t) · · · p αn (x n , t)] = [∂ t p α1 (x 1 , t)] · p α2 (x 2 , t) · · · p αn (x n , t) + · · · + p α1 (x 1 , t) · · · p αn−1 (x n−1 , t) · [∂ t p αn (x n , t)] = [∂ 2 1 p α1 (x 1 , t)] · p α2 (x 2 , t) · · · p αn (x n , t) + · · · + p α1 (x 1 , t) · · · p αn−1 (x n−1 , t) · [∂ 2 n p αn (x n , t)] = ∆p f0 (x, t).
Note, in Definition 3.1 we defined p for the monomials x d , d ∈ N 0 , and in (13) we define p by linearity for any f 0 ∈ R[x]. Hence, if f 0 (x) = x d then both definitions coincide: p f0 = p d . The same shall hold for the multivariate case to keep the notation simple.
The unique solution of (14) can be written as the convolution with the heat kernel Θ t , i.e.,
p f0 (x, t) = (Θ t * f 0 )(x)(15)
for all t > 0, or, since ∆ ⌊ 1 2 deg f0⌋+1 f 0 = 0, we can also write the solution as
p f0 ( · , t) = e t∆ f 0 = ∞ k=0 t k k! ∆ k f 0 = ⌊ 1 2 deg f0⌋ k=0 t k k! ∆ k f 0 .(16)
For more on one-parameter semigroups see [43]. Note that the polynomial heat equation (14) has a unique polynomial solution (16) for all t ∈ R, contrary to the L 2 -heat equation; i.e., p 0 ∈ L 2 , which can in general only be solved for t ≥ 0.
In connection with (16) we also want to mention the works [44,45].
Example 3.4 (Motzkin polynomial [46]). Let
f Motz (x, y) = 1 − 3x 2 y 2 + x 4 y 2 + x 2 y 4 ∈ R[x, y]
be the Motzkin polynomial. Then by Definition 3.1 (resp. Example 3.2) we have the substitutions
x 2 → 2t + x 2 , x 4 → 12t 2 + 12tx 2 + x 4 , y 2 → 2t + y 2 , y 4 → 12t 2 + 12ty 2 + y 4
and get
p Motz (x, y, t) = 1 − 3(2t + x 2 )(2t + y 2 ) + (12t 2 + 12tx 2 + x 4 )(2t + y 2 ) + (2t + x 2 )(12t 2 + 12ty 2 + y 4 ) = 1 − 12t 2 + 48t 3 + 6t(−1 + 6t)(x 2 + y 2 ) + (−3 + 24t)x 2 y 2 + 2t(x 4 + y 4 ) + x 4 y 2 + x 2 y 4 for all t ∈ R. •
Since the heat kernel is a Schwartz function the convolution with polynomials is well-defined and uniqueness of the solution of the heat equation shows that
(Θ t * f 0 )(x) = p f0 (x, t)(17)
holds for all f 0 ∈ R[x 1 , . . . , x n ]. That the heat kernel preserves polynomials holds for all convolution kernels which are integrable with respect to polynomials. To make the paper self-contained, let us briefly state and prove this known fact.
Theorem 3.5. Let d ∈ N 0 and ρ be a kernel such that R n y α · ρ(y) dy is finite for all α ∈ N n 0 with |α| ≤ d, then
· * ρ : R[x 1 , . . . , x n ] ≤d → R[x 1 , . . . , x n ] ≤d .
Proof. Let p ∈ R[x 1 , . . . , x n ] ≤d . Then from
(p * ρ)(x) = R n p(x − y) · ρ(y) dy
and expanding p(x − y) in the right side gives the assertion including the degree bound deg(p * ρ) ≤ d.
Since the heat kernel is non-negative, the convolution of the heat kernel with a non-negative polynomial gives again a non-negative polynomial. Denote by Pos(n, d) the set of all non-negative polynomials on R n with degree at most d ∈ N 0 :
Pos(n, d) := {p ∈ R[x 1 , . . . , x n ] ≤d | p(x) ≥ 0 for all x ∈ R n }.
Non-negativity and Theorem 3.5 gives the following. Corollary 3.6. Let n ∈ N, d ∈ N 0 , and f 0 ∈ Pos(n, d). Then
p f0 ( · , t) ∈ Pos(n, d)
for all t ≥ 0. Especially, if f 0 = 0 then p f0 ( · , t) > 0 on R n for all t > 0.
Corollary 3.7. Let n ∈ N and f 0 ∈ R[x 1 , . . . , x n ]. Assume there exist t > 0 and a point ξ ∈ R n such that p f0 (ξ, t) < 0. Then f 0 ∈ Pos(n, d).
Hence, recalling the Motzkin polynomial in Example 3.4 we find that p Motz ( · , t) ∈ Pos(2, 6) for all t ≥ 0. In the usual case (e.g. f 0 ∈ L 2 (R n )) the convolution with the heat kernel has a smoothing effect on f 0 , i.e., the regularity increases and Θ t * f 0 is a C ∞ -function. But in the polynomial case we already started with a C ∞ -function and this kind of regularity does not change. We have even seen that for f 0 ∈ R[x 1 , . . . , x n ] the function p f0 remains a polynomial for all t ∈ R. But from the Motzkin polynomial in Example 3.4 we observe something additional. The Motzkin polynomial was of course the first non-negative polynomial found that is not a sum of squares. However, the heat kernel has another "smoothing effect" for this polynomial, as seen in the following continuation. We denote by SOS(n, d) the set of all sums of squares in n ∈ N variables of degree less or equal to d ∈ N 0 : i.e., p Motz ( · , 1) is by Corollary 3.6 not just non-negative, but in fact a sum of squares. This relation can easily be obtained e.g. by the use of Macaulay2 [47] and the SumsOfSquares package [48]. In fact, additional calculations indicate that Theorem 3.9. Let ρ ≥ 0 be a kernel such that R n y α · ρ(y) dy is finite for all α ∈ N n 0 with |α| ≤ d, then · * ρ : SOS(n, d) → SOS(n, d).
SOS(n, d) := {p ∈ R[x 1 , . . . , x n ] ≤d | p is a sum of squares}.p Motz ( · , t) ∈ Pos(2, 6) \ SOS(2, 6) for t ∈ [0, T Motz ), and SOS(2, 6) for t ∈ [T Motz , ∞),
Proof. Let p ∈ SOS(n, d), i.e., there exists a symmetric Q ∈ R N ×N with
N = n+d d such that p(x) = (x α ) T α · Q · (x α ) α where (x α ) α is the vector of all monomials x α with |α| ≤ d. We then have (p * ρ)(x) = R n p(x − y) · ρ(y) dy = R n ((x − y) α ) T α · Q · ((x − y) α ) α · ρ(y) dy
and by Richter's Theorem [24] we can replace ρ(y) dy by a finitely atomic
representing measure µ = k i=1 c i · δ yi with c i > 0 and get = k i=1 c i · ((x − y i ) α ) T α · Q · ((x − y i ) α ) α ∈ SOS(n, d).
Besides sums of squares, other non-negative polynomials are linear combinations of even powers of linear forms, i.e., they have a Waring decomposition
p(x) = k i=1
(a i · x) d with even d ∈ N, a i ∈ R n+1 , and a i · x := a i,0 + a i,1 x 1 + · · · + a i,n x n [49]. We denote by War(n, d) the (Waring) cone of all these polynomials, i.e., we have the proper inclusions War(n, d) SOS(n, d) Pos(n, d).
In Theorem 3.5 we have seen that convolution preserves being a polynomial, in Theorem 3.9 we have seen that convolution preserves being a sum of squares, and the next result shows that convolution also preserves being in the Waring cone.
Theorem 3.10. Let ρ ≥ 0 be a kernel such that R n y α · ρ(y) dy is finite for all α ∈ N n 0 , then · * ρ : War(n, d) → War(n, d).
Proof. Let p ∈ War(n, d), i.e., p(
x) = k i=1 (a i · x) d . Then (p * ρ)(x) = R n p(x − y) · ρ(y) dy = R n k i=1 (a i · (x − y)) d · ρ(y) dy
and by Richter's Theorem [24] we can replace ρ(y) dy by a finitely atomic representing measure µ = l j=1 c j · δ yj with c j > 0 and get
= l j=1 k i=1 c j · (a i · (x − y j )) d ∈ War(n, d).
The Motzkin polynomial was the first polynomial found to be a non-negative polynomial which is not a sum of squares. But others have been identified [50]. We want to investigate some of these in chronological order.
Example 3.11 (Robinson polynomial [51]). Let
f Rob (x, y) = 1 − x 2 − y 2 − x 4 + 3x 2 y 2 − y 4 + x 6 − x 4 y 2 − x 2 y 4 + y 6
be the Robinson polynomial, i.e., f Rob ∈ Pos(2, 6) \ SOS(2, 6). Then by a direct calculation using Macaulay2 with the SumsOfSquares package similar to the Motzkin polynomial we find p Rob ( · , 1) ∈ SOS(2, 6) and by Theorem 3.9 we have Proof. We have
p CL (x, y, z, t) = 1 − 4xyz + (2t + x 2 )(2t + y 2 ) + (2t + x 2 )(2t + z 2 ) + (2t + y 2 )(2t + z 2 ) = 1 + 12t 2 − 4xyz + 4t(x 2 + y 2 + z 2 ) + x 2 y 2 + x 2 z 2 + y 2 z 2 = f CL (x, y, z) + 12t 2 + 4t(x 2 + y 2 + z 2 ) = v(x, y, z) ⊤ G t v(x, y, z),
with the monomial vector v(x, y, z) := (1, x, y, z, xy, xz, yz, x 2 , y 2 , z 2 ) ⊤ and the
Gram matrix G t = 1 x y z xy xz yz x 2 y 2 z 2 1 1 + 12t 2 δx δy δz x 4t − 2δx ax y 4t − 2δy ay z 4t − 2δz az xy az 1 − 2εz xz ay 1 − 2εy yz ax 1 − 2εx x 2 δx 0 εz εy y 2 δy εz 0 εx z 2 δz εy εx 0
with a x + a y + a z = −2 and t ∈ R. The Gram matrix is of course not unique, but for −4xyz there are only a x , a y , and a z . And since f CL does not contain x 4 , y 4 , or z 4 we also have zeros at the positions (x 2 , x 2 ), (y 2 , y 2 ), and (z 2 , z 2 ) (the diagonal of the ε-block of the columns and rows x 2 , y 2 , and z 2 ). At t = 1/9 we have p CL (x, y, z, 1/9) = 31 27
+ xy − 2 3 z 2 + xz − 2 3 y 2 + yz − 2 3 x 2 ∈ SOS(3, 4).
It remains to show that p CL for t < 1/9 it is not a sum of squares, i.e., no Gram matrix representation fulfills G t 0.
The coefficient of x 2 can be written in the (1, x 2 ) or (x, x) entries, i.e., we have the free parameter δ x . Similarly with y 2 and z 2 . Hence, in every Gram matrix representation of p CL , the submatrices
4t − 2δ x a x a x 1 − 2ε x , 4t − 2δ y a y a y 1 − 2ε y , 4t − 2δ z a z a z 1 − ε z(18)
appear. We show 0 for at least one of them if t < 1/9. So assume t < 1/9 and assume to the contrary that G t 0. Then ε x = ε y = ε z = 0 and δ x = δ y = δ z = 0. Since a x + a y + a z = −2 we have that at least one of a x , a y , or a z is ≤ − 2 3 . Without loss of generality let a x ≤ − 2 3 . Then
det 4t a x a x 1 = 4t − a 2 x ≤ 4t − 4 9 < 0
and hence G t 0 for any Gram matrix representation of p CL with t < 1/9.
Macaulay2 calculations with the SumsOfSquares package suggest
1 9 − 7 · 10 −9 < min{t ≥ 0 | p CL ( · , t) ∈ SOS(3, 4)} < 1 9 − 6 · 10 −9 .
That is close to the exact value of 1/9 found in Theorem 3.12. is the Schmüdgen polynomial and we find p Schm ( · , 1) ∈ SOS(2, 6). In fact, Macaulay2 calculations with the SumsOfSquares package and Theorem 3.9 shows that p Schm ( · , t) ∈ SOS(2, 6) for all t ≥ 2 · 10 −4 . •
p 8 (x, t) = 1680t 4 + 3360t 3 x 2 + 840t 2 x 4 + 56tx 6 + x 8 ,
and p 10 (x, t) = 30240t 5 + 75600t 4 x 2 + 25200t 3 x 4 + 2520t 2 x 6 + 90tx 8 + x 10 we calculate p Har and find p Har ( · , 1) ∈ SOS(2, 10). In fact, Macaulay2 calculations and Theorem 3.9 show that p Har ( · , t) ∈ SOS(2, 10) for all t ≥ 8 · 10 −4 . 2d),
f (x) = |α|≤2d a α · x α ∈ Pos(n, 2d) such that f 2d (x) := |α|=2d a α · x α ∈ SOS(n,then p f ( · , t) ∈ Pos(n, 2d) \ SOS(n, 2d) for all t ≥ 0.
Proof. Assume there is a t ≥ 0 such that
p f (x, t) = k i=1 |α|≤d c i,α (t) · x α 2 = |α|≤2d a α (t) · x α ∈ SOS(n, 2d).
Since by Definition 3.1 we have a α (t) = a α (0) for all α ∈ N n 0 with |α| = 2d the sum of squares decomposition of p f ( · , t) gives
f 2d (x) = k i=1 |α|=d c i,α (t) · x α 2 ∈ SOS(n, 2d)
which contradicts the assumption f 2d ∈ SOS(n, 2d).
The following example shows that the condition f 2d ∈ SOS(n, 2d) in Lemma 3.16 is necessary, but not sufficient. Just take the polynomial f (w, x, y, z) = z 6 − 3x 2 y 2 z 2 + x 4 y 2 + x 2 y 4 + w 8 ∈ Pos(3, 8) \ SOS (3,8). Then lim
t→∞ p f (x, t) · t −d = c > 0.
Proof. Let k ∈ N. It is easy to see that ∆ k g is constant on R n for all g ∈ R[x 1 , . . . , x n ] ≤2k and even equal to zero for all g ∈ R[x 1 , . . . , x n ] ≤2k−1 .
Since f ∈ Pos(R n ) with deg f = 2d we have that the homogeneous part f 2d of f of degree 2d is non-zero and non-negative on R n . Let S be the unit sphere
in R n . Since f 2d ∈ Pos(R n ) \ {0} we have S f 2d (x) dx > 0 and by [56, Cor. 1] we have ∆ m f 2d = 0 for all m = 1, . . . , d. Hence, ∂ d t p f (x, t) = ∆ d p f (x, t) = ∆ d f 2d (x) = c > 0 which proves the statement.
Note that [56,Cor. 1] can also be replaced by [4,Thm. 19.16]. There it is shown that ∆ d : R[x 1 , . . . , x n ] ≤2d → R is a strictly positive moment functional. Lemma 3.16 and Example 3.17 have only been proven here for n ≥ 3, but for n = 2 no counterexample has been found. In fact, every time evolution of the polynomials f ∈ Pos(R 2 ) \ SOS(R 2 ) in the Examples 3.8, 3.11, 3.13, and 3.14 enters SOS(R 2 ) and by Theorem 3.9 never leaves SOS(R 2 ) again. It is open if for n = 2 every time evolution of a non-negative polynomial becomes a sum of squares; see Open Problem 5.1. For Pos (3,4) this question is answered affirmative by the following theorem.
Theorem 3.20. Let f ∈ Pos (3,4). Then there exists a τ f ∈ [0, ∞) such that
p f ( · , t) ∈ SOS(3, 4) for all t ≥ τ f .
Proof. Let f 4 be the leading term (homogeneous part of highest degree 4) of f . Since f 4 is a homogeneous polynomial of degree 4 in three variables it is a sum of squares. By linearity let f 4 (x, y, z) = (ax 2 + bxy + cxz + dy 2 + eyz + gz 2 ) 2 ∈ SOS(3, 4) \ {0} be one square. Then by (16) we have p f4 (x, y, z, t) = f 4 (x, y, z) + t · ∆f 4 (x, y, z) + t 2 2 · ∆ 2 f 4 (x, y, z) = (ax 2 + bxy + cxz + dy 2 + eyz + gz 2 ) 2 + 2t · (2ax + by + cz) 2 + (bx + 2dy + ez) 2 + (cx + ey + 2gz) 2 =:A2(x,y,z)=A2∈SOS(3,2) + + 2 · (a + d + g) · (ax 2 + bxy + cxz + dy 2 + eyz + gz 2 ) + (12a 2 + 4b 2 + 4c 2 + 8ad + 12d 2 + 4e 2 + 8ag + 8dg + 12g 2 =:A0>0 since f4∈Pos(3,4)\{0}
) · t 2 .
For lin {x 2 , xy, xz, y 2 , yz, z 2 } we take a basis ε 1 , . . . , ε 6 with ε 1 := ax 2 + bxy + cxz + dy 2 + eyz + gz 2 . In the basis 1, x, y, z, ε 1 , . . . , ε 6 we can write p f4 ( · , t) in a Gram matrix in the form
1 x y z ε1 ε2 . . . ε6 1 A0 · t 2 2t · (a + d + g) x y 2A2 · t z ε1 2t · (a + d + g) 1 ε2
. . .
ε6
. (19) This Gram matrix has block structure. For the sub-block of the columns/rows x, y, z we have that A 2 ∈ SOS(3, 2) and hence it is positive semi-definite. In fact, we have
A 2 (x, y, z) = x y z ⊤ · 4a 2 + b 2 + c 2 2ab + 2bd + ce 2ac + be + 2cg 2ab + 2bd + ce b 2 + 4d 2 + e 2 bc + 2de + 2eg 2ac + be + 2cg bc + 2de + 2eg c 2 + e 2 + 4g 2 · x y z .
(20) The remaining block from the columns/rows 1 and ε 1 is
A 0,2 := A 0 · t 2 2t · (a + d + g) 2t · (a + d + g) 1
and we have
det A 0,2 = t 2 · [A 0 − 4 · (a + d + g) 2 ] = 4t 2 · (2a 2 + b 2 + c 2 + 2d 2 + e 2 + 2g 2 )
which shows that A 0,2 is positive semi-definite and hence our choice (19) of the Gram matrix is positive semi-definite and hence a sum of squares representation.
In general, f 4 is not a single square, but a sum of (k ≤ 6) squares:
f 4 (x, y, z) = k j=1
α j · (a j x 2 + b j xy + c j xz + d j y 2 + e j yz + g j z 2 ) 2 , α j > 0.
By an orthonormal transformation of the Gram matrix of f 4 we can assume without loss of generality that ε 1 = a 1 x 2 + · · · + g 1 z 2 , . . . , ε k = a k x 2 + · · · + g k z 2 are orthonormal in R 6 and completed by ε k+1 , . . . , ε 6 to an orthonormal basis of R 6 . In this basis one Gram matrix of p f ( · , t) has the form
1 x y z ε1 . . . ε k ε k+1 . . . ε6 ε k * * * * 0 α k ε k+1 0 . . . . . . ε6 0 .(21)
That the entries of (1, ε 1 ) to (1, ε 6 ) resp. (ε 1 , 1) to (ε 6 , 1) are zero is because all contributions can be written into the submatrix of the columns/rows x, y, z.
That the entries in (u, v) and (v, u) with u ∈ {x, y, z} and v ∈ {ε k+1 , . . . , ε 6 } are zero follows from the fact that f is non-negative and the ε 1 , . . . , ε 6 are orthonormal. Assume an entry (u, v) is non-zero, i.e., it is a homogeneous polynomial of degree 3. Since the ε 1 , . . . , ε 6 are orthonormal and the x 2 , xy, . . . , z 2 are linearly independent, there exists a point (x * , y * , z * ) ∈ R 3 such that v(x * , y * , z * ) = 1 and ε j (x * , y * , z * ) = 0 for all ε j = v. But then f (λx * , λy * , λz * ) → −∞ for λ → −∞. That is a contradiction to f ∈ Pos (3,4). For f 4 with k squares we can, by an orthonormal coordinate change (x, y, z) → (x,ỹ,z), always have one square of the form (a i x 2 + d i y 2 + g i z 2 ) 2 with a i = 0, and without loss of generality we can assume that i = 1. If d 1 = g 1 = 0 we can use another coordinate change such that for i = 2 (if present) we have (a 2 x 2 + b 2 xy + c 2 xz + d 2 y 2 + g 2 z 2 ) 2 , i.e., the y and z coordinates are separated. We now distinguish among four cases:
(i) f 4 depends on x, y, and z: After the previous coordinate change we see that the Gram matrix (20) of A 2 has full rank and hence for t ≫ 0 we have that the specific choice (19) of the Gram matrix of p f ( · , t) is positive semi-definite and hence p f ( · , t * ) ∈ SOS(3, 4) for some t * ≥ 0.
(ii) f 4 depends on only two variables: After the previous coordinate changes without loss of generality f 4 is independent of z. But since f ≥ 0, then also f 3 (homogeneous part of degree 3 of f ) is independent on z. To see this assume, to the contrary that f 3 depends on z 2 . Since f 3 contains only degree 3 monomials we have that either xz 2 or yz 2 is in f 3 . But in both cases we can chose (x, y) ∈ R 2 such that the coefficient of z 2 is negative and hence letting z → ±∞ gives f → −∞: This is a contradiction to f ≥ 0, since f 4 has no z-dependency for compensation. So f 3 contains no z 2 . But the same holds for z. We find (x, y) ∈ R 2 such that the coefficient of z is non-zero and either letting z → +∞ or z → −∞ gives again f → −∞, a contradiction to f ≥ 0. The linear contributions in a x,x, and a y,y remain (since f 4 depends on x and y) and hence we again can find, as in (i), a t * ≥ 0 such that the Gram matrix representation (19) of p f ( · , t * ) is positive definite and hence p f ( · , t * ) ∈ SOS (3,4).
(iii) f 4 only depends on one variable: Without loss of generality f 4 depends only on x. Then with the same argument as in (ii) since f ≥ 0 we have that the specific Gram matrix representation (19) of p f ( · , t * ) is positive semi-definite for some t * ≥ 0 and hence p f ( · , t * ) ∈ SOS (3,4).
(iv) f 4 = 0: Then also f 3 = 0 and hence f ∈ Pos(3, 2) = SOS (3,2). Proof. p f ( · , t) is continuous in f and t and since Pos(3, 4) is a finite-dimensional cone it has a compact basis. Hence,
T (f ) := min{t | p f ( · , t) ∈ SOS(3, 4)} < ∞
is continuous in f ∈ Pos(3, 4) and therefore τ 3,4 := max f ∈Pos (3,4) T (f ) < ∞.
Corollary 3.22. There are k ∈ N, c 1 , . . . , c k ≥ 0, and y 1 , . . . , y k ∈ R such that c 1 p(x + y 1 ) + · · · + c k p(x + y k ) ∈ SOS (3,4) for all p ∈ Pos (3,4).
Proof. The operator e τ3,4∆ is a positivity preserver with constant coefficients. Hence, by [57,Thm. 3.1] there exists a non-negative Borel measure µ with finite moments on R 2 such that e τ3,4∆ (p)(x) = R 2 p(y + x) dµ(y). Since we have the degree bound deg p ≤ 4 this integral is a truncated moment functional. By Richter's Theorem [24] we can replace µ by ν = k i=1 c i δ yi which proves the statement.
Corollary 3.23. Let f ∈ Pos (3,4). Then there exists a t = t(f ) ∈ [0, τ 3,4 ] and a g ∈ SOS (3,4)
such that f = p g ( · , −t).
The previous result means that we can generate any f ∈ Pos(3, 4) from a sum of squares by going backwards in the heat equation (14), i.e., taking a negative time in (16). Proof. Let f ∈ Pos(3, 4)\{0}. Then by Corollary 3.21 we have g = p f ( · , τ 3,4 ) = e τ3,4∆ f ∈ SOS(3, 4) \ {0} and hence L(f ) = L(e −τ3,4∆ e τ3,4∆ f ) =L(g) > 0, i.e., L is in the interior of the truncated moment cone and hence a moment functional.
Note that the reverse implication in the previous results is in general not true. In Section 2 we calculated the dual action of (3) on p 0 ∈ Pol(R n ). It is easy to see that with ν > 0 and p 0 ≥ 0, then p t > 0 for all t > 0. Therefore for a moment functional L we always have L(p t ) > 0 and no restriction of the support of the representing measure is possible. Therefore, we can only study the time-dependent set of atoms for ν = 0.
Let us remind the reader, that for any time-dependent vector field g and starting point x 0 ∈ R n the system of ordinary differential equations
d dt G(x, t) = g(x, t) G(x 0 , 0) = x 0(22)
has by the Picard-Lindelöff Theorem a unique solution G : R n × R → R n . That means a particle located at x 0 ∈ R n for time t 0 = 0 has the trajectory G(x 0 , t) when it experiences the force field g, i.e., at time t ∈ R is has the position G(x 0 , t) ∈ R n . The initial value problem (3) acts only on functions f 0 . But by duality in Section 2 also the action of (3) on a measure µ 0 is well-defined. The following solves the problem for a Dirac measure.
Lemma 4.1. Let g ∈ C(R, C ∞ b (R n )
) n be a time-dependent vector field and G : R n × [0, ∞) → R n be the unique solution of (22). Additionally, let h ∈ C(R, C ∞ b (R n )), x 0 ∈ R n , and c 0 > 0. Then the measure-valued differential equation
∂ t µ t (x) = −g(x, t)∇µ t (x) + h(x, t) · µ t (x) µ 0 (x) = c 0 · δ x0 (x)(23)
with the initial value µ 0 = c 0 δ x0 has the unique solution
µ t (x) = c 0 · exp t 0 (h + div g)(G(x 0 , s), s) ds · δ G(x0,t) (x)
for all t ∈ R.
Proof. The operator −g∇+h has the dual g∇+h+div g and we use Theorem 2.2 with g replaced by −g. Let T > 0 and let B ⊆ R n be a closed ball around x 0 such that G([−T, T ], x 0 ) ⊂ B with G from (22). Take any p 0 ∈ C ∞ (B, R), then ∂ t p = −g∇p + (h + div g)p has a unique classical solution for all t ∈ [−T, T ]. This solution can be written as
p(x, t) = lim N →∞ 0 i=N exp ti ti−1 B(x, s) ds exp ti ti−1 A(x, s) ds p 0 (x) for any decomposition Z N = {t 0 = 0 < t 1 < · · · < t N = t} with ∆Z N → 0 as N → ∞, with A = g∇ and B = h + div g. With the approximations exp t1 t0 A(x, s) ds p 0 (x) ≈ p 0 x + t1 t0
g(x, s) ds , and since exp( B ds) acts by multiplication, we immediately get from p 0 , µ t = p( · , t), µ 0 the needed statement.
Note that since div g = 0 in general, the transport equation does not preserve the L 2 -norm of the solution. This results in additional scaling with div g besides the contribution provided by h.
The solution of (23) for dµ 0 (x) = f 0 (x) dx with f 0 ∈ S(R n ) can in general not be written down explicitly. But for the case of µ 0 = c 0 · δ x0 Lemma 4.1 shows that (23) is simply solved since the transport term g(x, t) · ∇ acts only on the point x 0 = x(0) to get x(t) and the multiplication h(x, t) only acts on the coefficient c 0 = c(0) to get c(t). This simplifies the description of the solution immensely and provides in the following result a way to trace the Carathéodory number C(s).
Theorem 4.2. Let n ∈ N, s(0) = (s α (0)) α∈N n ∈ S n,∞ be a n-dimensional moment sequence with finite rank Hankel matrix H(s(0)), g ∈ C(R, C(R n , R n )), and h ∈ C(R, C(R, R)). Then there is a unique time evolution s : R → S n,∞ of s(0) with respect to
∂ t f (x, t) = −g(x, t)∇f (x, t) + h(x, t) · f (x, t).
Additionally, for all t ∈ R we have rank H(s(t)) = rank H(s(0)) and therefore the Carathéodory number is constant, i.e., C(s(t)) = C(s(0)).
Proof. Since K := rank H(s(0)) is finite, there exists a unique K-atomic representing measure µ 0 = K i=0 c(0)·δ xi(0) of s(0). By linearity of the time-evolution and Lemma 4.1 we have the unique time-evolution of µ t = K i=1 c i (t) · δ xi(t) and therefore the unique time-evolution of s(t).
It remains to show that C(s(t)) = rank H(s(t)) = rank H(s(0)) = C(s(0)). While the first and the third equality is clear from the rank, see e.g. [4,Prop. 17.21], we have to show the second equality. For that it is sufficient to show that the path of x k (t) never splits or two paths x j (t) and x k (t) (j = k) intersect for any t ∈ R. But this follows from Lemma 4.1 and the uniqueness of the solution x j (t) = G(x j (0), t) of (22) by Peano's Theorem, i.e., when two integral curves x j (t) and x k (t) of (22) coincide for some t ′ ∈ R, i.e., x j (t ′ ) = x k (t ′ ), then x j (t) = x k (t) for all t ∈ R, especially for t = 0 which contradicts the minimal choice of K = rank H(s(0)).
In the univariate case the previous result also holds for the truncated moment problem.
(x) = (x − x 1 (0)) 2 · · · (x − x l (0)) 2 fulfills L s(0) (p 0 ) = 0. Let s(t) evolve with respect to ∂ t f = −g∂ x f + h·f . By Lemma 4.1 we have µ t = l≤d k=1 c k (t) · δ x k (t)
and therefore L s(t) (p t ) = 0 for the non-negative polynomial p t (x) = (x − x 1 (t)) 2 · · · (x − x l (t)) 2 . Hence, the boundary moment sequence remains a boundary sequence for all times.
• Remark 4.4. In the previous example we have seen that for univariate truncated moment moment sequences on the boundary the time evolution with respect to (3) with ν = 0 remains a boundary truncated moment sequence for all times. And additionally, the time evolution is unique, depending only on the initial moments. For multivariate moment sequences this no longer holds. Firstly, multivariate (truncated) moment sequences on the boundary can be indeterminate [3] and the time evolution then depends on the choice of the representing measure µ 0 . And secondly, even if the truncated boundary moment sequence is determinate, it can immediately enter the interior of the moment cone. To see this, take an example from optimal design [49] where the isolated zeros of a non-negative polynomial is maximal. For example take 10 projective zeros of the Robinson polynomial [51] after rotation of the projective space such that no zero lies at infinity (all 10 zeros are then in the affine space R 2 ). Since the zeros are isolated we find a vector field g : R 2 → R 2 such that these points are in general position for t = 0. Then by the Alexander-Hirschowitz theorem [58] there is no non-negative polynomial of degree 6 vanishing on all these 10 points, i.e., the truncated moment sequence from the measure µ t with t = 0 belongs to an interior moment sequence.
Extension to C ∞
We have so far treated g ∈ C ∞ b (R n , R n ). We want to see what happens when we extend this class. In [35] we calculated the explicit time-evolution of the moments for ∂ t f (x, t) = x · ∂ x f (x, t) and we got s k (t) = s k (0) · e −(k+1)·t .
Also for k = 0, i.e., for ∂_t f = ∂_x f, the moments can be calculated from the initial values s(0) alone by solving ∂_t s_0(t) = 0 and ∂_t s_k(t) = −k·s_{k−1}(t) for k ∈ N by induction.
When we have ∂_t f = x^k ∂_x f with k ≥ 2, it is in general not possible to calculate the time-dependent moments. For moment sequences with finite rank we can at least ensure the existence of the time evolution s(t) for small times, and we observe a finite break down in time.

Example 4.5. Let k = 2 and x_0 = 1. Then to solve ∂_t µ_t = −x²·∂_x µ_t with µ_0 = δ_1 we have by Lemma 4.1 to solve ∂_t x(t) = x(t)^k; separation of variables gives (k − 1)·t = 1 − x(t)^{1−k}, and hence x(t) = (1 − t)^{−1} for k = 2. Since lim_{t↗1} x(t) = ∞, we have that the time-dependent moments s_l(t) for ∂_t f = −x²·∂_x f exist only for t < 1 and s_l(1) = ∞ for all l ∈ N. •
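The finite break down in Example 4.5 is easy to reproduce numerically. The sketch below integrates only the atom ODE ∂_t x = x², x(0) = 1, with forward Euler (an illustrative choice) and stops once x(t) exceeds a large cap, which happens as t approaches 1.

```python
import numpy as np

def euler_blowup(x0=1.0, dt=1e-5, t_max=1.0, cap=1e12):
    """Integrate dx/dt = x^2 until t_max or until x exceeds the cap."""
    x, t = x0, 0.0
    while t < t_max and np.isfinite(x) and abs(x) < cap:
        x += dt * x**2
        t += dt
    return t, x

t_stop, x_stop = euler_blowup()
# The exact solution is x(t) = 1/(1 - t): t_stop is close to 1 and x_stop is huge,
# i.e., the moments of delta_{x(t)} cease to exist at t = 1.
print(t_stop, x_stop)
```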
The previous example demonstrates why we have to restrict to g ∈ C ∞ b .
Summary and open questions
We investigated time-dependent moments from partial differential equations. At first we gave the dual action on the polynomials (Theorem 2.2). While for general coefficients g and h we leave R[x_1, …, x_n] in general, for coefficients independent of x we remain in R[x_1, …, x_n]. We investigated the action of the heat equation on non-negative polynomials more deeply. In Examples 3.8, 3.11, and 3.13 to 3.15 we have collected examples of non-negative polynomials on R² which are not sums of squares. But under the action of the heat equation, after finite time they become sums of squares. Their highest degree part is a non-negative homogeneous polynomial in two variables and hence a sum of squares [59]. Whether this holds for all bivariate non-negative polynomials is still open.
Open Problem 5.1. Let f ∈ Pos(R²) \ SOS(R²). Is it true that there is always a T = T(f) > 0 such that p_f(x, t) ∈ SOS(R²) for all t ≥ T? By Theorem 3.9, Open Problem 5.1 reduces to the question whether p_f(x, T) ∈ SOS(R²) holds for at least one T > 0.
On Rⁿ with n ≥ 3 this in general no longer holds and the heat equation fails to produce sums of squares from non-negative polynomials in three or more variables, see Lemma 3.16 and Example 3.17. For the Choi-Lam polynomial we calculated in Theorem 3.12 the exact time when it becomes a sum of squares under the heat equation. From the counterexamples in Lemma 3.16 and Example 3.17 we see that it is necessary that the highest degree part is a sum of squares. The idea in Theorem 3.12 for the Choi-Lam polynomial is then used in Theorem 3.20 to show that any f ∈ Pos(3, 4) becomes a sum of squares in finite time.
The third possibility in Hilbert's Theorem [59] is deg f = 2. But then p_f(x, t) = f(x) + c·t with c > 0 [56], i.e., homogenization already shows f ∈ SOS(n, 2). In Lemma 3.19 we show that under the heat equation any non-negative polynomial in R[x_1, …, x_n] gets close to the constant polynomial, lim_{t→∞} p_f(x, t)·t^{−d} = c > 0, i.e., the positive constant polynomials are attractors of the polynomial heat equation.

For the equation ∂_t f = g·∇f + h·f we calculate the time-dependent set of atoms, i.e., we solve the problem of how the atoms of a finitely atomic representing measure µ_t = Σ_{i=1}^{k} c_i(t)·δ_{x_i(t)} evolve in time. We find that the atom positions x_i(t) are governed by the transport term g·∇ but are unaffected by the scaling with h. The coefficients (masses) c_i(t) can then be analytically solved from the x_i(t), see Lemma 4.1 and Theorem 4.2.
We also extended the treatment beyond smooth and bounded coefficients and found that atoms under the transport term −x 2 · ∂ x move to infinity in finite time, i.e., a finite break down of the moments (and the measure solution) appears even for linear partial differential equations, see Example 4.5.
We thank the organizers of ICCOPT in Bethlehem, PA, in July 2022, and the organizers of IWOTA in Krakow in September 2022 where [35] and the present work have been presented. We also thank Tim Netzer for discussions on this work at IWOTA 2022.
(b) Where are the atoms x i located in a representation of L?
(i) and (ii) follow immediately from the definition of Pol(Rⁿ) in (7). For (iii) all inclusions ⊆ are clear; it remains to show that the inclusions are proper.
Example (continued). The choice of the intervals [0, T_Motz) and [T_Motz, ∞) is clear since SOS(2, 6) is closed and p_{f_0}(·, t) is continuous in t, i.e., p_Motz(·, T_Motz) ∈ SOS(2, 6). •

p_Motz : [0, ∞) ∋ t ↦ p_Motz(·, t) ∈ Pos(2, 6) is a continuous path through the cone of non-negative polynomials. The following result shows that once p_{f_0} enters SOS(n, d), e.g. at time t_0 ≥ 0, then it stays in SOS(n, d) for all t ≥ t_0.
p_{f_Rob}(·, t) ∈ Pos(2, 6) \ SOS(2, 6) for t ∈ [0, T_Rob), and SOS(2, 6) for t ∈ [T_Rob, ∞).

Let f_CL(x, y, z) := 1 − 4xyz + x²y² + x²z² + y²z² ∈ Pos(3, 4) \ SOS(3, 4) be the Choi-Lam polynomial [52]. Then p_CL(·, t) ∈ Pos(3, 4) \ SOS(3, 4) for t ∈ [0, 1/9), and SOS(3, 4) for t ∈ [1/9, ∞).
Example 3.13 (Schmüdgen polynomial [53]). The polynomial f_Schm(x, y) = (y² − x²)·x(x + 2)·[x(x − 2) + 2(y² − 4)] + 200·[(x³ − 4x)² + (y³ − 4y)²] ∈ Pos(2, 6) \ SOS(2, 6)
Example 3.14 (Berg-Christensen-Jensen polynomial [54]). The Berg-Christensen-Jensen polynomial f_BCJ(x, y) = 1 − x²y² + x⁴y² + x²y⁴ ∈ Pos(2, 6) \ SOS(2, 6) is connected to the Motzkin polynomial f_Motz (Example 3.4) by f_BCJ(x, y) = f_Motz(x, y) + 2x²y², and hence from Theorem 3.9 we see that p_BCJ(·, t) ∈ SOS(2, 6) for all t ≥ 1/6. •

After all these historic examples let us have a look at a more modern example from the vast literature of non-negative polynomials which are not sums of squares.
Example 3.15 (Harris polynomial [55, R_{2,0} in Lem. 5.1 and 6.8]). Let
f_Har(x, y) = 16x¹⁰ − 36x⁸y² + 20x⁶y⁴ + 20x⁴y⁶ − 36x²y⁸ + 16y¹⁰ − 36x⁸ + 57x⁶y² − 38x⁴y⁴ + 57x²y⁶ − 36y⁸ + 20x⁶ − 38x⁴y² − 38x²y⁴ + 20y⁶ + 20x⁴ + 57x²y² + 20y⁴ − 36x² − 36y² + 16
be the Harris polynomial, i.e., f_Har = R_{2,0} ∈ Pos(2, 10) \ SOS(2, 10). With Example 3.2,
•
All examples so far become a sum of squares for large t. However, this is not in general true.
Lemma 3.16. Let n, d ∈ N and
Example 3.17. Let f(x, y, z) = z⁶ − 3x²y²z² + x⁴y² + x²y⁴ ∈ Pos(3, 6) \ SOS(3, 6) be the homogeneous Motzkin polynomial. Then p_f(·, t) ∈ Pos(3, 6) \ SOS(3, 6) for all t ≥ 0. •

Remark 3.18. The result in Lemma 3.16 also holds for f_{2d} ∈ War(n, 2d), i.e., p_f(·, t) ∈ Pos(n, 2d) \ War(n, 2d). •

While we have seen in Lemma 3.16 and Example 3.17 that there are non-negative polynomials which do not become sums of squares under the heat equation, the following result shows that any non-negative polynomial becomes asymptotically close to SOS(Rⁿ) under the heat equation, that is, the constant polynomial becomes an attractor of the polynomial heat equation.
Lemma 3.19. Let n ∈ N and f ∈ Pos(Rⁿ) with deg f = 2d for some d ∈ N.
Theorem 3.20 implies the existence of a time when all f ∈ Pos(3, 4) become sums of squares.
Corollary 3.21. There exists a τ_{3,4} ∈ [0, ∞) such that p_f(·, t) ∈ SOS(3, 4) for all t ≥ τ_{3,4} and all f ∈ Pos(3, 4), i.e., e^{τ_{3,4}Δ} Pos(3, 4) ⊆ SOS(3, 4).
Corollary 3.24. Let τ_{3,4} be (minimal) as in Corollary 3.21 and let L : R[x, y, z]_{≤4} → R be a linear functional. If L̃ := L ∘ e^{−τ_{3,4}Δ} is strictly square-positive, i.e., L̃(g) > 0 for all g ∈ SOS(3, 4) \ {0}, then L is a moment functional.
Example 3.25. Let μ̃ = χ_{B_r(0)}·λ with r = √(6τ_{3,4}) and λ the Lebesgue measure on R³. Then L : R[x, y, z] → R with representing measure µ = e^{τ_{3,4}Δ} χ_{B_r(0)}·λ is a moment functional with L(f) > 0 for f = x² + y² + z². But f̃ = e^{−τ_{3,4}Δ} f = −6τ_{3,4} + f and hence L̃(f) < 0 since f̃ ≤ 0 on B_r(0). •

4. Time-dependent set of atoms

4.1. Coefficients in C∞_b

Let L : R[x_1, …, x_n]_{≤d} → R with d ∈ N_0 ∪ {∞} be a (truncated) moment functional. When a p_0 ∈ R[x_1, …, x_n]_{≤d} with p_0 ≥ 0 exists with L(p_0) = 0, then supp µ ⊆ Z(p_0) for any representing measure µ of L.
Example 3.8 (Motzkin polynomial, Example 3.4 continued). We have
p_Motz(x, y, 1) = 37·(1 − (11/148)·x² − (11/148)·y²)² + (71/2)·(x − (4/71)·xy²)² + (71/2)·(y − (4/71)·x²y)² + (57/2)·x²y² + (1063/592)·(x² + (27/1063)·y²)² + (3815/2126)·y⁴ + (63/71)·x⁴y² + (63/71)·x²y⁴ ∈ SOS(2, 6)
Acknowledgments

The second author and this project are financed by the Deutsche Forschungsgemeinschaft DFG with the grant DI-2780/2-1 and his research fellowship at the Zukunftskolleg of the University of Konstanz, funded as part of the Excellence Strategy of the German Federal and State Government. This work has also been supported by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Actions, grant agreement 813211 (POEMA), by the AI Interdisciplinary Institute ANITI funding, through the French "Investing for the Future PIA3" program under the Grant agreement n° ANR-19-PI3A-0004, as well as by the National Research Foundation, Prime Minister's Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) programme.
References

[1] N. I. Akhiezer, The classical moment problem and some related questions in analysis, Oliver & Boyd, Edinburgh, 1965.
[2] G. Blekherman, P. A. Parrilo, R. R. Thomas (Eds.), Semidefinite optimization and convex algebraic geometry, Vol. 13 of MOS-SIAM Series on Optimization, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA; Mathematical Optimization Society, Philadelphia, PA, 2013.
[3] P. J. di Dio, K. Schmüdgen, The multidimensional truncated moment problem: The moment cone, J. Math. Anal. Appl. 511 (2022) 126066.
[4] K. Schmüdgen, The Moment Problem, Springer, New York, 2017.
[5] L. A. Fialkow, The truncated K-moment problem: a survey, Theta Ser. Adv. Math. 18 (2016) 25-51.
[6] E. K. Haviland, On the momentum problem for distribution functions in more than one dimension, Amer. J. Math. 57 (1935) 562-572.
[7] E. K. Haviland, On the momentum problem for distribution functions in more than one dimension II, Amer. J. Math. 58 (1936) 164-168.
[8] M. Infusino, T. Kuna, J. L. Lebowitz, E. R. Speer, The truncated moment problem on N_0, J. Math. Anal. Appl. 452 (2017) 443-468.
[9] J. H. B. Kemperman, The General Moment Problem, a Geometric Approach, Ann. Math. Stat. 39 (1968) 93-122.
[10] J. H. B. Kemperman, Moment problems with convexity conditions I, in: J. S. Rustagi (Ed.), Optimizing Methods in Statistics, Acad. Press, 1971, pp. 115-178.
[11] J. H. B. Kemperman, Geometry of the Moment Problem, Proc. Sym. Appl. Math. 37 (1987) 16-53.
[12] M. G. Kreĭn, A. A. Nudel'man, The Markov Moment Problem and Extremal Problems, American Mathematical Society, Providence, Rhode Island, 1977.
[13] H. J. Landau (Ed.), Moments in Mathematics, Vol. 37 of Proceedings of Symposia in Applied Mathematics, American Mathematical Society, Providence, RI, 1980.
[14] H. J. Landau, The Classical Moment Problem: Hilbertian Proofs, J. Funct. Anal. 38 (1980) 255-272.
[15] J.-B. Lasserre, An Introduction to Polynomial and Semi-Algebraic Optimization, Cambridge University Press, Cambridge, 2015.
[16] M. Laurent, Revisiting two theorems of Curto and Fialkow on moment matrices, Proc. Amer. Math. Soc. 133 (2005) 2965-2975.
[17] M. Laurent, Sums of squares, moment matrices and optimization over polynomials, in: Emerging Applications of Algebraic Geometry, Vol. 149 of IMA Vol. Math. Appl., Springer, New York, 2009, pp. 157-270.
[18] M. Laurent, B. Mourrain, A generalized flat extension theorem for moment matrices, Arch. Math. (Basel) 93 (1) (2009) 87-98.
[19] J. A. Shohat, J. D. Tamarkin, The Problem of Moments, Amer. Math. Soc., Providence, R.I., 1943.
[20] T. J. Stieltjes, Recherches sur les fractions continues, Ann. Fac. Sci. Toulouse 8 (4) (1894) J1-J122.
[21] J. Stochel, Solving the truncated moment problem solves the moment problem, Glasgow J. Math. 43 (2001) 335-341.
[22] M. Infusino, T. Kuna, The full moment problem on subsets of probabilities and point configurations, J. Math. Anal. Appl. 483 (1) (2020) 123551, 29 pp.
[23] M. Infusino, S. Kuhlmann, T. Kuna, P. Michalski, Projective limit techniques for the infinite dimensional moment problem, Integr. Equ. Oper. Theory 94 (2) (2022) 2, 44 pp.
[24] H. Richter, Parameterfreie Abschätzung und Realisierung von Erwartungswerten, Bl. Deutsch. Ges. Versicherungsmath. 3 (1957) 147-161.
[25] W. W. Rogosinski, Moments of non-negative mass, Proc. R. Soc. Lond. A 245 (1958) 1-27.
[26] P. C. Rosenbloom, Quelques classes de problèmes extrémaux. II, Bull. Soc. Math. France 80 (1952) 183-215.
[27] C. F. Gauß, Methodus nova integralium valores per approximationem inveniendi, Comm. Soc. Sci. Göttingen Math. 3 (1815) 29-76.
[28] C. Riener, M. Schweighofer, Optimization approaches to quadrature: New characterizations of Gaussian quadrature on the line and quadrature with few nodes on plane algebraic curves, on the plane and in higher dimensions, J. Compl. 45 (2018) 22-54.
[29] P. J. di Dio, K. Schmüdgen, The multidimensional truncated Moment Problem: Carathéodory Numbers, J. Math. Anal. Appl. 461 (2018) 1606-1638.
[30] P. J. di Dio, M. Kummer, The multidimensional truncated Moment Problem: Carathéodory Numbers from Hilbert Functions, Math. Ann. 380 (2021) 267-291.
[31] L. A. Fialkow, The core variety of a multi-sequence in the truncated moment problem, J. Math. Anal. Appl. 456 (2017) 946-969.
[32] P. J. di Dio, K. Schmüdgen, The multidimensional truncated Moment Problem: Atoms, Determinacy, and Core Variety, J. Funct. Anal. 274 (2018) 3124-3148.
[33] G. Blekherman, L. Fialkow, The core variety and representing measures in the truncated moment problem, J. Op. Theory 84 (2020) 185-209.
[34] P. J. di Dio, Schwartz function valued solutions of the Euler and the Navier-Stokes equations, arXiv:1912.11075 (2019).
[35] R. Curto, P. J. di Dio, Time-dependent moments from the heat equation and a transport equation, Int. Math. Res. Notices (2022), https://doi.org/10.1093/imrn/rnac244.
[36] V. Magron, C. Prieur, Optimal control of linear PDEs using occupation measures and SDP relaxations, IMA J. Math. Control Inf. 37 (1) (2020) 159-174.
[37] S. Marx, T. Weisser, D. Henrion, J. Lasserre, A moment approach for entropy solutions to nonlinear hyperbolic PDEs, Math. Control Relat. F. 10 (2020) 113-140.
[38] M. Korda, D. Henrion, J.-B. Lasserre, Moments and convex optimization for analysis and control of nonlinear partial differential equations, Handbook of Numerical Analysis 23 (2022) 339-366.
[39] M. Korda, R. Rios-Zertuche, The gap between a variational problem and its occupation measure relaxation, arXiv:2205.14132 (2022).
[40] L. C. Evans, Partial Differential Equations, 2nd Edition, American Mathematical Society, Providence, Rhode Island, 2010.
[41] H. F. Trotter, On the product of semi-groups of operators, Proc. Amer. Math. Soc. 10 (1959) 545-551.
[42] R. Denk, F. Hummel, Dispersive mixed-order systems in L^p-Sobolev spaces and application to the thermoelastic plate equation, Adv. Diff. Eq. 24 (7-8) (2019) 377-406.
[43] K.-J. Engel, R. Nagel, One-Parameter Semigroups for Linear Evolution Equations, Vol. 194 of Graduate Texts in Mathematics, Springer, New York, 2000.
[44] A. Guterman, B. Shapiro, On linear operators preserving the set of positive polynomials, J. Fixed Point Theory Appl. 3 (2008) 411-429.
[45] T. Netzer, Representation and Approximation of Positivity Preservers, J. Geom. Anal. 20 (2010) 751-770.
[46] T. S. Motzkin, The arithmetic-geometric inequality, in: O. Shisha (Ed.), Inequalities, Proc. of Sympos. at Wright-Patterson AFB, August 19-27, 1965, Academic Press, New York, 1967, pp. 205-224.
[47] D. R. Grayson, M. E. Stillman, Macaulay2, a software system for research in algebraic geometry. Available at http://www.math.uiuc.edu/Macaulay2/.
[48] D. Cifuentes, T. Kahle, P. Parrilo, Sums of squares in Macaulay2, J. Softw. Algebra Geom. 10 (2020) 17-24.
[49] B. Reznick, Sums of even powers of real linear forms, Vol. 96 of Mem. Amer. Math. Soc., American Mathematical Society, 1992, MEMO/0463.
[50] M. Marshall, Positive Polynomials and Sums of Squares, no. 146 in Mathematical Surveys and Monographs, American Mathematical Society, Rhode Island, 2008.
[51] R. M. Robinson, Some definite polynomials which are not sums of squares of real polynomials, Notices Amer. Math. Soc. 16 (1969) 554.
[52] M. D. Choi, T.-Y. Lam, Extremal positive semi-definite forms, Math. Ann. 231 (1977) 1-18.
[53] K. Schmüdgen, A positive polynomial which is not a sum of squares. A positive, but not strongly positive functional, Math. Nachr. 88 (1979) 385-390.
[54] C. Berg, J. P. R. Christensen, C. U. Jensen, A remark on the multidimensional moment problem, Math. Ann. 243 (1979) 163-169.
[55] W. R. Harris, Real Even Symmetric Ternary Forms, J. Alg. 222 (1999) 204-245.
[56] A. V. Iltyakov, Laplace Operator and Polynomial Invariants, J. Alg. 207 (1998) 256-271.
[57] J. Borcea, Classification of linear operators preserving elliptic, positive and non-negative polynomials, J. reine angew. Math. 650 (2011) 67-82.
[58] J. Alexander, A. Hirschowitz, Polynomial interpolation in several variables, J. Alg. Geom. 4 (1995) 201-222.
[59] D. Hilbert, Über die Darstellung definiter Formen als Summe von Formenquadraten, Math. Ann. 32 (1888) 342-350.
| [] |
[
"SUBMITTED TO IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE ON DECEMBER 3 2022 1 Nonlinear Intensity, Scale and Rotation Invariant Matching for Multimodal Images",
"SUBMITTED TO IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE ON DECEMBER 3 2022 1 Nonlinear Intensity, Scale and Rotation Invariant Matching for Multimodal Images"
] | [
"Zhongli Fan ",
"Li Zhang ",
"Yuxuan Liu "
] | [] | [] | We present an effective method for the matching of multimodal images. Accurate image matching is the basis of various applications, such as image registration and structure from motion. Conventional matching methods fail when handling noisy multimodal image pairs with severe scale change, rotation, and nonlinear intensity distortion (NID). Toward this need, we introduce an image pyramid strategy to tackle scale change. We put forward an accurate primary orientation estimation approach to reduce the effect of image rotation at any angle. We utilize multi-scale and multi-orientation image filtering results and a feature-to-template matching scheme to ensure effective and accurate matching under large NID. Integrating these improvements significantly increases noise, scale, rotation, and NID invariant capability. Our experimental results confirm the excellent ability to achieve high-quality matches across various multimodal images. The proposed method outperforms the mainstream multimodal image matching methods in qualitative and quantitative evaluations. Our implementation is available at https://github.com/Zhongli-Fan/NISR. | 10.48550/arxiv.2302.14239 | [
"https://export.arxiv.org/pdf/2302.14239v1.pdf"
] | 257,232,593 | 2302.14239 | 06186f2498b4165e6e87014907f1e3ba35115182 |
Nonlinear Intensity, Scale and Rotation Invariant Matching for Multimodal Images
Zhongli Fan
Li Zhang
Yuxuan Liu
Nonlinear Intensity, Scale and Rotation Invariant Matching for Multimodal Images
We present an effective method for the matching of multimodal images. Accurate image matching is the basis of various applications, such as image registration and structure from motion. Conventional matching methods fail when handling noisy multimodal image pairs with severe scale change, rotation, and nonlinear intensity distortion (NID). Toward this need, we introduce an image pyramid strategy to tackle scale change. We put forward an accurate primary orientation estimation approach to reduce the effect of image rotation at any angle. We utilize multi-scale and multi-orientation image filtering results and a feature-to-template matching scheme to ensure effective and accurate matching under large NID. Integrating these improvements significantly increases noise, scale, rotation, and NID invariant capability. Our experimental results confirm the excellent ability to achieve high-quality matches across various multimodal images. The proposed method outperforms the mainstream multimodal image matching methods in qualitative and quantitative evaluations. Our implementation is available at https://github.com/Zhongli-Fan/NISR.
INTRODUCTION
Image is one of the most widely applied data forms in many areas [1], [2], [3], [4]. Various modalities of images have been developed nowadays, as displayed in Fig. 1. Each modality of images encodes one aspect of information and has its limitations. The fusion of different modalities of images is conducive to the comprehensive utilization of their advantages. As a prerequisite, image matching is crucial to many image fusion applications, such as image registration [5], structure from motion [6], visual simultaneous localization and mapping [7], and so on. Even though the task of image matching has been researched for decades [8], [9], [10], multimodal image matching is still challenging work due to the severe nonlinear intensity distortion (NID) and geometric distortion.
Recent works show that the local image structure recovered from multi-scale and multi-orientation filtering results has good robustness against NID and can be applied to match images with translation [11], [12], [13], [14]. However, very few works consider scale change and image rotation. The scale variance can be modeled with the image pyramid strategy, but the traditional rotation handling strategies become ineffective. Most methods use image gradient or intensity information to estimate a primary orientation for each feature to achieve rotation invariance [15], [16], [17]. However, both image gradient and intensity are unstable under different image modalities due to NID.
Using traditional methods to estimate the primary orientation therefore causes wrong estimates and leads to matching failures. Moreover, these methods only use the orientation index information and ignore the complete multi-orientation filtering amplitudes, which carry richer information.
In this paper, we put forward a nonlinear intensity, scale, and rotation invariant method (NISR) for multimodal image matching, which treats these complex factors explicitly. We show that the local structures in the orientation index map are similar under different rotation angles even though the index values change (Fig. 4). Based on this principle, we develop an accurate and robust primary orientation estimation method that analyzes the distribution of the index values, calculates a weighted centroid point, and takes the direction from the feature point to the calculated centroid point as the primary orientation. We also construct a template feature with the complete multi-orientation filtering results to further improve the matching performance between the reference image and the resampled sensed image, which is corrected based on a similarity transformation calculated from the matches obtained in feature matching. In summary, our main contributions are: 1) A robust primary orientation estimation method for multimodal images that uses the local similarity of the orientation index map built from multi-orientation and multi-scale filtering results.
2) A feature-to-template matching framework that first uses the index information to increase robustness against image modality variance and then uses the complete multiorientation filtering results to improve matching reliability.
3) An effective matching method for multimodal images that takes scale change, image rotation, and severe NID into consideration.
We create an image pyramid and detect distinctive and repetitive feature points on phase congruency (PC) maps generated from each layer image (Sec. 3). We construct a feature descriptor involving primary orientation estimation and match the feature descriptors in Sec. 4 and 5. We also present a template feature that can rematch the unmatched feature points (Sec. 6). As seen in Fig. 1, our method can obtain more matches with high accuracy across various multimodal images. In Sec. 7, we verify the robustness against image rotation ( Fig. 9 and 10) and scale change ( Fig. 11 and 12) and present extensive qualitative (Fig. 13~15) and quantitative results (Table 4) on 164 multimodal image pairs, demonstrating superior results against previous work. Additionally, we apply NISR to image registration ( Fig. 16) in Sec. 8.
RELATED WORK
Multimodal image matching has been an active research topic over the last few years [18]. While deep learning techniques have been introduced to this area, this work focuses on designing new handcrafted features that can be effectively applied in real applications. Previous studies have investigated a range of multimodal image matching methods, including feature-based and template-based techniques [9].
Feature-based methods: Chen et al. [15] proposed a partial intensity invariant feature descriptor using the average squared gradient and integrated it into the framework of the scale invariant feature transform (SIFT) algorithm [1]. Xiang et al. [16] used multi-scale Sobel and ROEWA filters to calculate image gradients for optical and synthetic aperture radar (SAR) images and conducted image matching with the SIFT algorithm. To match multimodal remote sensing images, Yao et al. [17] constructed a co-occurrence scale space to extract image edge features and combined Butterworth and Sobel filters to calculate descriptors. For the same purpose, Li et al. [19] adopted a local normalization filter to resist significant NIDs and improved the ORB detector [20] and HOG descriptor [21] for feature matching. These methods use image gradients to estimate the primary orientation to handle image rotation, but the gradient information is unstable for multimodal images, which significantly decreases the performance. Based on multi-scale and multi-oriented log-Gabor filters, Aguilera et al. [11] proposed a multi-layer orientation index map and developed a log-Gabor histogram descriptor to match visible, depth, and long-wave infrared images, obtaining good matching performance and demonstrating good robustness to severe NID. However, the method can only tackle image translation and does not resist image rotation and scale change. Following this thought, Li et al. [14] proposed a radiation-variation insensitive feature transform (RIFT) algorithm. RIFT utilizes the PC model for feature detection and constructs a maximum index map for feature description. By setting an end-to-end annular structure for multiple index maps, RIFT achieves rotation invariance under small image rotation angles. Besides, its rotation performance is unstable on various image modalities, and its efficiency is extremely low. To address this problem, we propose to use the local similarity of the orientation index map to estimate the primary orientation, achieving invariance to any rotation angle across various image modalities. Besides, we introduce the scale space to achieve scale invariance.
Template-based methods: Maes et al. [22] proposed using mutual information (MI) to match multimodal medical image pairs. Liu et al. [23] embedded local self-similarity with MI for the matching of optical and SAR images. Due to the high computation complexity of MI, only local maximization is applied to match the images in most situations, and no guarantee with respect to the global energy can be given. Researchers recently found that image structures are well retained across different modalities. Ye et al. [12] proposed a histogram of oriented PC method based on the classical HOG algorithm. Xiong et al. [24] proposed the rank-based local self-similarity algorithm for matching optical and SAR images by introducing a rank correlation coefficient. Unlike these methods that build sparse features, Ye et al. [13] proposed the channel features of oriented gradients by changing the sampling pattern of HOG from a sparse grid to pixel-by-pixel. Fan et al.
[25] further proposed the angle-weighted oriented gradients algorithm by distributing the gradient magnitude to the two most relevant gradient directions. Similarly, Zhu et al. [26] used an odd-symmetric Gabor filter to construct dense structure features for matching the optical aerial and LiDAR intensity images. Even though the template-based methods have relatively high matching accuracy, they are susceptible to image geometric distortions, including scale change and image rotation. Considering this, we first use matches obtained in the proposed feature matching process to coarsely eliminate the geometric difference between images and then construct dense template features on the corrected images to improve the matching performance. Particularly, our template feature encodes the complete multi-scale and multi-orientation image information, having good robustness against NID involved in multimodal images.
[Fig. 1: matching results of OS-SIFT, RIFT, CoFSM, LNIFT, and NISR on example multimodal image pairs (a)-(f).]
FEATURE EXTRACTION
Distinctive and repeatable feature point detection is crucial to the success of feature matching. In this study, we first build an image pyramid to tackle scale change. Then, we compute a salient edge map using multi-orientation phase congruency [27] for each layer image in the pyramid and use the FAST detector [28] to locate feature points. The edge map reduces the effect of image intensity and illumination variations and enhances the image structure information, increasing the robustness against NID. Besides, the method can work well in low-texture and noisy areas.
Scale-space construction
We construct a scale-space by conducting a series of image subsamplings and Gaussian smoothings to achieve scale invariance. Precisely, the scale-space pyramid consists of octave layers c_i and intra-octave layers d_i, i = {0, 1, …, n − 1}. c_0 corresponds to the original image, and d_0 is obtained by subsampling c_0 with a scale factor of 1.5, making the size of d_0 two-thirds of c_0. The other octave and intra-octave layers can be extracted by progressively half-sampling the layers c_0 and d_0, respectively. Typically, n can be set as 4. The construction of the scale-space pyramid can be represented as follows:
c_i(x, y) = Σ_m Σ_n G(m, n) · c_{i−1}(2x + m, 2y + n)   (1)
G(x, y) = (1 / (2πσ²)) · e^{−(x² + y²) / (2σ²)}   (2)
where G(x, y) is a Gaussian kernel with standard deviation σ.
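A minimal sketch of the pyramid construction in Eqs. (1)-(2) follows. The 5×5 kernel size and the nearest-neighbour resampling used for the 1.5× intra-octave layer d_0 are implementation assumptions not fixed by the text.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))   # Eq. (2), then normalized
    return k / k.sum()

def smooth_and_halve(img, sigma=1.0):
    """Gaussian smoothing followed by 2x subsampling, cf. Eq. (1)."""
    k = gaussian_kernel(5, sigma)
    pad = np.pad(img, 2, mode='reflect')
    sm = sum(k[m, n] * pad[m:m + img.shape[0], n:n + img.shape[1]]
             for m in range(5) for n in range(5))
    return sm[::2, ::2]

def build_pyramid(img, n=4):
    """Octave layers c_i and intra-octave layers d_i, i = 0, ..., n - 1."""
    c = [img.astype(float)]
    rows = np.round(np.arange(0, img.shape[0], 1.5)).astype(int)
    cols = np.round(np.arange(0, img.shape[1], 1.5)).astype(int)
    d = [img[rows[:, None], cols].astype(float)]    # d_0: two-thirds the size of c_0
    for _ in range(1, n):
        c.append(smooth_and_halve(c[-1]))
        d.append(smooth_and_halve(d[-1]))
    return c, d
```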
Feature detection
For each layer image in the pyramid, we calculate a weighted moment map derived from its corresponding PC maps and detect feature points on it. For an image I(x, y), we first convolve it with a multi-scale and multi-orientation log-Gabor filter [29]. This process can be described as follows:
R_so(x, y) = F^{-1}(I(u, v) · L_so(ρ, θ))   (3)
I(u, v) = F(I(x, y))   (4)
L_so(ρ, θ) = e^{−(log(ρ/ρ_s))² / (2·(log(σ_ρ/ρ_s))²)} · e^{−(θ − θ_o)² / (2σ_θ²)}   (5)
where R_so(x, y) denotes the filtering result; L_so(ρ, θ) denotes a 2D log-Gabor filter; s and o are the scale and orientation index, respectively; (ρ, θ) represents the polar coordinates; ρ_s and θ_o represent the filter center frequency at the scale of s and the orientation angle of o; σ_ρ and σ_θ denote the radial and tangential bandwidths. F(·) and F^{-1}(·) represent the Fourier transform and the inverse Fourier transform, respectively.
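A sketch of the log-Gabor transfer function of Eq. (5), built directly on a centered frequency grid, is given below. The default bandwidth values are illustrative, and sigma_rho is expressed as the ratio σ_ρ/ρ_s, as in Eq. (5).

```python
import numpy as np

def log_gabor(rows, cols, rho_s=0.1, theta_o=0.0, sigma_rho=0.55, sigma_theta=np.pi / 12):
    """2D log-Gabor transfer function L_so on a centered (fftshifted) grid."""
    u = np.fft.fftshift(np.fft.fftfreq(cols))
    v = np.fft.fftshift(np.fft.fftfreq(rows))
    uu, vv = np.meshgrid(u, v)
    rho = np.sqrt(uu**2 + vv**2)
    rho[rows // 2, cols // 2] = 1.0                 # avoid log(0) at the DC term
    radial = np.exp(-np.log(rho / rho_s)**2 / (2 * np.log(sigma_rho)**2))
    radial[rows // 2, cols // 2] = 0.0              # zero response at DC
    theta = np.arctan2(vv, uu)
    dt = np.arctan2(np.sin(theta - theta_o), np.cos(theta - theta_o))  # wrapped angle
    angular = np.exp(-dt**2 / (2 * sigma_theta**2))
    return radial * angular

def filter_image(img, L):
    """Complex response R_so = F^{-1}(F(I) . L_so), cf. Eqs. (3)-(4);
    its real and imaginary parts give E_so and O_so."""
    F = np.fft.fftshift(np.fft.fft2(img))
    return np.fft.ifft2(np.fft.ifftshift(F * L))
```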
Then, a PC map can be obtained with (6)-(9), based on the amplitude component A_so(x, y) and phase component φ_so(x, y) calculated using the real part E_so(x, y) and imaginary part O_so(x, y) of R_so(x, y):
A_so(x, y) = √(E_so(x, y)² + O_so(x, y)²)   (6)
φ_so(x, y) = tan^{-1}(O_so(x, y) / E_so(x, y))   (7)
PC(x, y) = Σ_s Σ_o W_o(x, y) ⌊A_so(x, y) ΔΦ_so(x, y) − T⌋ / (Σ_s Σ_o A_so(x, y) + ε)   (8)
ΔΦ_so(x, y) = cos(φ_so(x, y) − φ̄(x, y)) − |sin(φ_so(x, y) − φ̄(x, y))|   (9)
where W_o(x, y) is a weighting factor for the frequency domain spread; ⌊·⌋ is a truncation function whose value is the argument itself when it is positive, and 0 otherwise; ΔΦ_so(x, y) is a phase deviation function; T is a noise threshold that can be determined from the response of the filter to the image; ε is a small value to avoid division by 0; φ̄(x, y) denotes the mean phase component. Based on the obtained PC map, a maximum moment map and a minimum moment map can be calculated using (10) and (11).
M = (1/2) · (a + c + √((a − c)² + b²))   (10)
m = (1/2) · (a + c − √((a − c)² + b²))   (11)
where
a = Σ_θ (PC(θ) cos(θ))²   (12)
b = 2 · Σ_θ (PC(θ) cos(θ)) (PC(θ) sin(θ))   (13)
c = Σ_θ (PC(θ) sin(θ))²   (14)
where PC(θ) is the PC map under the orientation angle θ. According to the moment analysis theory [30], the minimum moment map m encodes corner feature information, the maximum moment map M encodes edge information, and the corner map is a strict subset of the edge map. Considering that the edge structures are more stable across multimodal images, we suppress the corner response and strengthen the edge response by integrating M and m to weaken the influence of intensity variations, which can be expressed as follows:
M_w = w · M + (1 − w) · m   (15)
where w denotes the weight coefficient and ranges from 0.5 to 1.
Finally, the FAST detector [28] is used for feature extraction on the weighted moment map. Fig. 2 shows the feature extraction results on a pair of optical-depth images. We can see that the moment map preserves the image structure well regardless of the different image modalities, and a large number of repeated feature points is evenly distributed on the two images.
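The moment analysis of Eqs. (10)-(15) reduces to a few array operations once the oriented PC maps are available. In this sketch the oriented maps PC(θ) are assumed given, and the weighted map follows the reconstruction M_w = w·M + (1 − w)·m above.

```python
import numpy as np

def weighted_moment_map(pc_maps, thetas, w=0.8):
    """Eqs. (10)-(15): pc_maps is a list of PC(theta) arrays for the angles thetas."""
    a = sum((pc * np.cos(t))**2 for pc, t in zip(pc_maps, thetas))                    # Eq. (12)
    b = 2.0 * sum((pc**2) * np.cos(t) * np.sin(t) for pc, t in zip(pc_maps, thetas))  # Eq. (13)
    c = sum((pc * np.sin(t))**2 for pc, t in zip(pc_maps, thetas))                    # Eq. (14)
    root = np.sqrt((a - c)**2 + b**2)
    M = 0.5 * (a + c + root)   # maximum moment: edge strength, Eq. (10)
    m = 0.5 * (a + c - root)   # minimum moment: corner strength, Eq. (11)
    return w * M + (1.0 - w) * m                                                      # Eq. (15)
```

Feature points can then be located on the returned map with any corner detector, e.g. FAST [28].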
FEATURE DESCRIPTION
In this section, we introduce a novel feature descriptor, show how to construct it and demonstrate why it is rotation invariant. As displayed in Fig. 3, we first extract an orientation index map from the multi-orientation filtering results. Then, we evaluate a primary orientation for each point by analyzing the distribution of index values in a local circular area. At last, a high-dimensional feature vector is created by arranging the histogram values of multiple sub-regions in a local square patch.
Orientation index map extraction
Previous studies [11], [14] have proven that the index information of the largest amplitude components in all orientation filtering results is more robust and effective than the amplitude values in describing the feature points for multimodal images. However, they did not give the profound reason behind this principle. Based on detailed experiments and analysis, we found that the reason could be that the largest amplitude components encode image salient structures, and the index information provides a unified measure while the value itself is unstable for multimodal images with large NID. Thus, we first extract an index map from the multi-scale and multi-orientation convolved images. To increase the stability of filtering results in one orientation and improve the efficiency, we add the amplitude components obtained in all scales for each orientation with (16).
A_o(x, y) = Σ_{s=1}^{S} A_so(x, y)   (16)
The index map can be found as follows:
Idx(x, y) = index(max_o(A_o(x, y)))   (17)
where index(·) and max_o(·) are used to grab the layer index and find the largest amplitude in all orientations, respectively. Fig. 3 shows the construction process of the index map.
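A sketch of Eqs. (16)-(17) together with the odd/even split used by the double index map strategy described next. The stacked amplitude layout (S, O, H, W) and the 1-based index values are assumptions for illustration.

```python
import numpy as np

def orientation_index_map(A):
    """A: amplitudes A_so of shape (S, O, H, W).
    Returns A_o (Eq. 16) and the 1-based orientation index map (Eq. 17)."""
    A_o = A.sum(axis=0)                      # accumulate over scales, Eq. (16)
    idx = np.argmax(A_o, axis=0) + 1         # layer of the largest amplitude, Eq. (17)
    return A_o, idx

def double_index_maps(A):
    """Odd and even index maps built from the odd-/even-numbered orientation layers."""
    A_o = A.sum(axis=0)
    odd = np.argmax(A_o[0::2], axis=0) + 1   # orientation layers 1, 3, 5, ...
    even = np.argmax(A_o[1::2], axis=0) + 1  # orientation layers 2, 4, 6, ...
    return odd, even
```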
A proper value of o is crucial to the success of image matching. A large value of o will reduce the stability of the following primary orientation estimation and increase the computation cost. A small value of o means a large interval between two adjacent orientations of the filters, which will decrease the similarity of orientation index maps under small image rotations. Generally, the index value for a pixel has the highest credibility when the rotation angle is an integral multiple of 180°/o. Simultaneously, when the rotation angle between two images is an odd multiple of 180°/(2o), the index value is most easily biased to a wrong value, i.e., to the index i when the correct one is i + 1, or to i + 1 when the correct one is i. Toward this, we introduce a double index map strategy. Instead of applying one index map extracted from all layers in A_o(x, y), we extract an odd index map using the odd-numbered layers in A_o(x, y), and an even index map using the even-numbered layers in A_o(x, y). The most ambiguous angle for one index map is the most distinguishable one on the other, ensuring high performance and decreasing the computation cost. Table 1 gives the performance comparison with/without this strategy on the 164 multimodal image pairs described in Sec. 7. We can see that the evaluation metrics presented in Sec. 7 are all significantly improved. Namely, the SR is enhanced by about 14%, the NM is improved by about 280, and the RMSE is improved by about 0.3 pixels with the double index map when o = 12.
Primary orientation assignment
The appearance of the index maps for different multimodal images without image rotation would be similar (Fig. 4). However, this principle is broken when the images are rotated at different angles, considering that the orientation angles of the log-Gabor filters are fixed. We manually rotate a pair of optical-infrared images by different angles, extract an index map at each angle, and demonstrate their visualization results in Fig. 4.
By observing the index maps at different rotation angles, we can see that the local structures are consistent even though the index values change (marked by the red squares in Fig. 4). This is because the general image internal structures of local regions remain unchanged at different rotation angles, so the local general structures on the index map are kept correspondingly. In contrast, the index values become unpredictable, considering that the directions of the multi-orientation filters are fixed when the image rotates at different angles.
Based on this characteristic, we evaluate the primary orientation for a feature point to recover the similarity of index maps of images under different rotation angles. In detail, a circular area centered at the feature point is first determined on the index map. Then, we count the index value distribution of all the pixels located in the area, find the pixel set whose index value takes the largest percentage, and estimate the coordinates of the centroid point of these pixels. The direction of the line connecting the feature point and the centroid point is taken as the primary orientation.
Let P be the point set in the local circular area of p, and let p_i be a point in P. Taking the coordinates of p_1 as the initial coordinates of the centroid point, we gradually update the coordinates by involving the unprocessed pixels in P with (18):
G_{i+1} = G_i + (1 / (1 + i)) · (p_{i+1} − G_i)   (18)
where G_{i+1} is the updated centroid point after taking the unprocessed point p_{i+1} into account, and G_i is the centroid point obtained after processing i points in P. After all the pixels are processed, the coordinates of the centroid point are obtained, and the primary orientation can be further calculated. Considering that the image filters also have an orientation, we unify the index of filters in the same direction on different images. As shown in Fig. 5, we modify the index values of the pixels whose index values take the largest proportion in the region to o and adjust the index values of the remaining pixels correspondingly, which can be represented as follows:
Idx'(q) = Idx(q) + (o − k), Idx(q) ≤ k   (19)
Idx'(q) = Idx(q) − k, Idx(q) > k   (20)
where k is the index value that takes the largest proportion, an integer with k ∈ [1, o], so that (19)-(20) cyclically shift the index values to map k to o.
In addition, if the number of pixels for the second primary orientation is larger than 80% of that of the primary orientation, we construct a feature descriptor for the point with the second primary orientation. This strategy can improve the robustness of feature matching. Taking the method that only considers the primary orientation as the baseline, the results on 164 multimodal image pairs (Table 1) demonstrate the effectiveness of the proposed strategy.
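A sketch of the primary orientation estimate described above. It takes the plain mean of the dominant-index pixels, which coincides with the incremental centroid update of Eq. (18), and assumes an integer-valued index map; border handling beyond clipping and the second-orientation rule are omitted.

```python
import numpy as np

def primary_orientation(idx_map, px, py, radius=24):
    """Direction from the feature point (px, py) to the centroid of the pixels
    carrying the dominant index value inside a circular neighbourhood."""
    h, w = idx_map.shape
    ys, xs = np.mgrid[max(0, py - radius):min(h, py + radius + 1),
                      max(0, px - radius):min(w, px + radius + 1)]
    inside = (xs - px)**2 + (ys - py)**2 <= radius**2
    vals = idx_map[ys[inside], xs[inside]].astype(np.int64)
    dominant = np.bincount(vals).argmax()   # index value with the largest share
    sel = vals == dominant
    gx = xs[inside][sel].mean()             # centroid of the dominant-index set
    gy = ys[inside][sel].mean()
    return np.arctan2(gy - py, gx - px)     # primary orientation in radians
```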
Feature descriptor construction
The scale, orientation, and location have been assigned to each feature point by the previous operations. Given that a square window on the index map at the given scale has been opened, we divide the square window image into n × n sub-regions and use a distribution histogram technique for feature vector description. As shown in Fig. 3, we calculate the accumulated counts of every value of Idx in each sub-region and concatenate the statistical results of all sub-regions to generate the feature descriptor. Moreover, the n × n × o dimensional feature vector is normalized to reduce the effect of changes in illumination.
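A sketch of the descriptor assembly, assuming the square patch has already been cut from the index map at the assigned scale and rotated to the primary orientation; n and o follow the notation above.

```python
import numpy as np

def describe(idx_patch, n=6, o=12):
    """Concatenated index-value histograms of the n x n sub-regions, L2-normalized."""
    step = idx_patch.shape[0] // n
    vec = []
    for i in range(n):
        for j in range(n):
            sub = idx_patch[i * step:(i + 1) * step, j * step:(j + 1) * step]
            hist = np.bincount(sub.ravel().astype(np.int64), minlength=o + 1)[1:o + 1]
            vec.append(hist)
    v = np.concatenate(vec).astype(float)   # n * n * o dimensional vector
    return v / (np.linalg.norm(v) + 1e-12)
```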
FEATURE MATCHING
We accomplish feature matching based on the nearest neighbor principle and complete outlier removal using the FSC algorithm [31]. Fig. 6 shows the initial matching results on a pair of optical-infrared images and some intermediate results of this stage. More qualitative and quantitative experimental results will be presented in Sec. 7.
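A minimal nearest-neighbour matcher for the descriptors above. The ratio test is an extra pruning step added here for illustration; the paper itself relies on the nearest-neighbour principle followed by FSC-based outlier removal [31].

```python
import numpy as np

def match_nn(desc1, desc2, ratio=0.9):
    """Return index pairs (i, j) where desc2[j] is the nearest neighbour of desc1[i]
    and clearly better than the second-nearest one (requires len(desc2) >= 2)."""
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    nn = np.argmin(d, axis=1)
    two_best = np.sort(d, axis=1)[:, :2]
    keep = two_best[:, 0] < ratio * two_best[:, 1]
    return np.stack([np.nonzero(keep)[0], nn[keep]], axis=1)
```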
TEMPLATE MATCHING
Feature matching can successfully match the distinctive features across image modalities, while less distinctive features fail to be matched due to local structure change and severe NID. In this section, we rematch the unmatched high-quality feature points using template matching. Note that we utilize the full-orientation filtering results rather than the index information to construct a high-dimensional template feature to improve the matching performance. First, we calculate a similarity transformation matrix M based on the matches obtained in feature matching and resample the sensed image with M to coarsely eliminate the effect of scale change and image rotation. Then, we use the accumulated amplitude component A_o(x, y) to construct the template features. For the reference image, we can directly use the results generated in the previous stage; for the resampled sensed image, we perform a log-Gabor filtering on it and calculate A_o(x, y) with (16). Since the image rotation between the two images has been almost eliminated, the value of o does not need to be large to calculate the primary orientation, and a smaller value of o is sufficient for effective matching, as demonstrated in Sec. 7.2. Empirically, o can be set as half of that in the feature matching stage. Finally, the o-dimensional template feature T(x, y, z) is constructed by stacking the values of A_o(x, y) in the order of orientation. Moreover, we normalize T(x, y, z) to increase the robustness against illumination change.
To match the template features efficiently, we use phase correlation instead of traditional similarity measures, such as normalized cross-correlation (NCC). Assuming that T_1(x, y, z) and T_2(x, y, z) are the template features of the reference image and the resampled sensed image, the geometric relationship between them can be described as
T_1(x, y, z) = T_2(x − x_0, y − y_0, z)   (21)
where z indexes the orientation dimension, i.e., the two templates differ only by a 2D translation (x_0, y_0) along the spatial axes. Applying the 3D Fourier transform to both sides gives
F_1(u, v, w) = F_2(u, v, w) · e^{−j(u·x_0 + v·y_0)}   (22)
where F_i(u, v, w) denotes the 3D Fourier transform of T_i(x, y, z). Then, we can obtain the cross-power spectrum as follows:
F_1(u, v, w) · F_2(u, v, w)* = e^{−j(u·x_0 + v·y_0)}   (23)
where * denotes the complex conjugate.
After that, we perform a 3D inverse Fourier transform on the cross-power spectrum, and a correlation function δ(x − x_0, y − y_0) is obtained   (24).
The optimal matching position can be obtained by searching for the largest response value of δ(x − x_0, y − y_0) in the local image window. Generally, the largest response of the correlation function will appear at (x_0, y_0). After obtaining the matches, the FSC algorithm is adopted for outlier removal. Moreover, the matching points on the resampled sensed image are reprojected back to the coordinate system of the original sensed image. Fig. 7 shows the enhanced matching results on a pair of optical-infrared images. We can see that many more correct matches are obtained on the basis of feature matching, proving the high accuracy of feature matching and the effectiveness of template matching.
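A sketch of the phase-correlation step of Eqs. (21)-(24) on 3D template features. Normalizing the cross-power spectrum to unit magnitude and the sign convention of the recovered shift are implementation assumptions.

```python
import numpy as np

def phase_correlation(t1, t2):
    """t1, t2: template features of shape (O, H, W) differing by a 2D translation."""
    cps = np.fft.fftn(t1) * np.conj(np.fft.fftn(t2))
    cps /= np.abs(cps) + 1e-12                    # keep only the phase difference
    corr = np.real(np.fft.ifftn(cps))
    _, dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap the peak location to signed shifts along the two spatial axes
    if dy > t1.shape[1] // 2: dy -= t1.shape[1]
    if dx > t1.shape[2] // 2: dx -= t1.shape[2]
    return dy, dx
```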
EXPERIMENTAL RESULTS
We first give the general parameter settings for our proposed method, then verify the strong invariance against scale change and rotation, and finally demonstrate the superiority of our method by comparing the qualitative and quantitative results with several state-of-the-art algorithms: OS-SIFT [16], RIFT [14], CoFSM [17], and LNIFT [19]. The parameters recommended by the authors of each method are applied. Specifically, we modify the feature detection threshold of OS-SIFT to 0.001 to obtain more feature points.
Three sets of multimodal images from the fields of computer vision, medicine, and remote sensing are employed, with each set containing six types of challenging multimodal image pairs. Therefore, broad modalities of images are applied, including visible images, infrared images, thermal images, optical images, SAR images, map images, LiDAR depth images, LiDAR intensity images, retina images captured by different angiography techniques, Magnetic-Resonance-Imaging images (MRI), Positron-Emission-computed-Tomography images (PET), Proton-Density-Weighted-Image images (PD), T1-Weighted-Image images (T1), T2-Weighted-Image images (T2), Single-Photon-Emission-Computed-Tomography images (SPECT), Computed-Tomography images (CT), and so on. These images contain significant nonlinear radiometric differences and geometric differences, such as scale, rotation, and translation. A detailed description of the datasets is given in Table 2, and some example image pairs for each set are presented in Fig. 8.
For quantitative evaluation, 10~15 high-precision checkpoints are manually selected for each image pair. For a pair of checkpoints (x_1, y_1) and (x_2, y_2), a point (x_1', y_1') can be estimated with a similarity matrix calculated based on the obtained matches. Then, the metrics of the number of putative matches (NM), root mean square error (RMSE), and success rate (SR) can be derived as follows:
RMSE = √((1/N) · Σ_{i=1}^{N} [(x_2^i − x_1'^i)² + (y_2^i − y_1'^i)²])   (25)
SR = N_s / N_t   (26)
where N is the number of checkpoints of an image pair, and N_s and N_t are the number of successfully matched image pairs and the total number of image pairs, respectively.
RMSE encodes the matching accuracy: the lower the RMSE, the higher the accuracy. In addition, if the RMSE is larger than 5 pixels, the result is deemed a matching failure. SR reflects the robustness and generality of the method for a certain type of multimodal image pair: the higher the SR, the better the robustness and generality of the method.
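A direct transcription of Eqs. (25)-(26), assuming the estimated similarity transform is given as a 3×3 homogeneous matrix and the checkpoints as (N, 2) arrays.

```python
import numpy as np

def rmse_from_checkpoints(T, pts1, pts2):
    """Eq. (25): project the checkpoints (x1, y1) with T, compare against (x2, y2)."""
    ones = np.ones((len(pts1), 1))
    proj = (np.hstack([pts1, ones]) @ T.T)[:, :2]
    return float(np.sqrt(np.mean(np.sum((pts2 - proj)**2, axis=1))))

def success_rate(rmses, thresh=5.0):
    """Eq. (26): an image pair counts as matched if its RMSE is below 5 pixels."""
    return float(np.mean(np.asarray(rmses) < thresh))
```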
Parameter Study
We test the performance of NISR with respect to the number of layers of the image pyramid, s, the orientation number of the log-Gabor filter, o, the number of sub-regions inside a local image window, n, and the size of the image window for feature description, w. Notably, the sizes of the two image windows are set to w. We modify the tested parameter during the experiments while keeping the other parameters unchanged. Besides, the maximum number of features of NISR is fixed at 1500. Table 3 provides the detailed experimental settings and the detailed results.
We vary s from 2 to 6, and the matching performance reaches the best when s = 4. The number of matches will decrease and the RMSE will increase when s is smaller or larger than 4. Meanwhile, the performance improves with the increase of n and w. In particular, when n >= 2 and w >= 72, the number of matches is larger than 829, and the RMSE is smaller than 1 pixel. Besides, we can see that SR is very sensitive to n and robust against the other parameters. A larger n helps to increase the matching results. However, a larger n will increase the dimension of the feature vector, decreasing the efficiency. Based on the above analysis, we set s = 4, o = 12, w = 72, and n = 6 to give good results in most situations, and these settings are employed in the following experiments. Moreover, the maximum number of features of NISR is set to 5000.
Performance with respect to scale and rotation change
In this section, we verify the robustness of NISR against scale change and image rotation. In terms of image rotation, we manually change the rotation angle between the matching images from 0° to 360° for three image pairs: a pair of visible-thermal images, a pair of retina images with different angiography, and a pair of optical-map images. The quantitative experimental results measured by NM for NISR are presented in Fig. 9. We can see that NISR always keeps a relatively high and stable performance even though there is a small range of periodic fluctuation. The periodicity includes a larger period of 90° and a smaller period of 15°. The large period is caused by the image simulation, which is independent of our method. Specifically, the information of the simulated images is exact at every rotation of 90°, which produces a fluctuation period of 90°. The smaller periodicity is caused by the number of orientations of the log-Gabor filter. In the experiments, we set o as 12, and the angle difference between two adjacent filters is 15°. When the rotation angle is k × 7.5°, k = 1, 2, 3, ⋯, 48, the index value of a pixel has a similar probability to its corresponding two adjacent index values, bringing the largest confusion and leading to the lowest similarity of the two orientation index maps. Meanwhile, the two orientation index maps will be the most similar at angles of k × 15°, k = 1, 2, 3, ⋯, 24. Theoretically, the larger the value of o, the less the method is affected by image rotation. Several typical visualization results at different rotations are displayed in Fig. 10.
In terms of scale invariance, we manually change the scale ratio between three image pairs, which are a pair of visible-infrared images, a pair of Retina images with different angiography, and a pair of optical-SAR images from 1:1 to 6:1, with their original size all 1000 ×1200 pixels. The quantitative experimental results are shown in Fig. 11.
We can see that NM is proportional to image size and gradually decreases with the increase of scale ratio, demonstrating good scale invariance. In particular, when the scale ratio reaches 6:1, the width and height of the sensed image are less than 200 pixels, and there are still more than 180 matches, which meets the needs of most applications. The visualization results corresponding to Fig. 11 are displayed in Fig. 12.
Comparative qualitative evaluation
In this section, we select six representative image pairs of different categories for experiments. The visual comparison results of OS-SIFT, RIFT, CoFSM, LNIFT, and NISR are shown in Fig.13~Fig. 15. For NISR, we show not only its final results but also the results of its initial matching stage, which are noted as NISRini.
[Fig. 10: visualization results at rotation angles 0°, 45°, 90°, 135°, 180°, 225°, 270°, and 315° for (a) visible-thermal, (b) retina-retina (different angiography), and (c) optical-map image pairs.]
We can see that OS-SIFT matched only 5 image pairs. RIFT exhibits good resistance to NID and successfully matched 13 of the 18 image pairs; however, it failed on the image pairs with scale differences and rotation, such as the pair of visible-infrared images. CoFSM is sensitive to the image modalities: it matched most of the image pairs in dataset 1, half of the image pairs in dataset 2, and only the optical-infrared image pair in dataset 3. Even though LNIFT matched all 18 image pairs, the obtained matches are incorrect for at least half of them. In terms of our method, NISR performed significantly better than all the other methods, and it is the only one that correctly matched all 18 image pairs. The feature matching stage of NISR (NISRini) is robust to severe NID and geometric differences, and the template matching stage further improves the number of matches and the matching accuracy. Therefore, NISR can be effectively applied to various multimodal image matching applications.
Comparative quantitative evaluation
The detailed quantitative results on all datasets are summarized in Table 4. We can see that NISR, with or without template matching, achieved the best performance on all three metrics. The NM obtained by NISR is several or even dozens of times that of the best of the other methods, namely CoFSM on datasets 1 and 3 and RIFT on dataset 2. For RMSE, NISR without template matching maintains a high accuracy, only slightly worse than RIFT; after template matching, the accuracy improves to around 0.98 pixels on average, roughly twice as accurate as RIFT. In terms of SR, NISR successfully matches all but two of the 164 image pairs, while the highest SR among the other methods is only 75.6%, proving its excellent robustness to various image modalities.
IMAGE REGISTRATION
We apply NISR to image registration and fusion tasks in this section. Taking the matches obtained from NISR, we calculate a transformation matrix, map each pixel of the sensed image to the coordinates of the reference image, and create a corrected image with bilinear interpolation; a minimal sketch of this step is given below. The performance is tested on six challenging multimodal image pairs: the visible-LiDAR depth, visible-thermal, MRI-PET, PD-T1, optical-map, and optical-SAR image pairs. The visualization results are presented in Fig. 16. We can see that the images are precisely registered and fused without edge breaks, ghosting, or blurring, further proving the high accuracy and good distribution of the matches obtained from NISR.
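The sketch below shows one way to implement this registration step with standard OpenCV calls; the paper does not specify its transformation model, so the projective model (findHomography) and the 50/50 blending used here are our assumptions.

import cv2
import numpy as np

def register_and_fuse(ref_img, sensed_img, ref_pts, sensed_pts):
    """Warp the sensed image into the reference frame using matched points.

    ref_pts, sensed_pts: (N, 2) float32 arrays of matched coordinates
    (N >= 4 is required to estimate a homography).
    """
    # Estimate a projective transform from the matches; RANSAC rejects outliers.
    H, _ = cv2.findHomography(sensed_pts, ref_pts, cv2.RANSAC, 3.0)

    # Resample the sensed image onto the reference grid (bilinear interpolation).
    h, w = ref_img.shape[:2]
    corrected = cv2.warpPerspective(sensed_img, H, (w, h), flags=cv2.INTER_LINEAR)

    # Simple 50/50 blend for visual inspection of the alignment; assumes both
    # images have the same dtype and number of channels.
    fused = cv2.addWeighted(ref_img, 0.5, corrected, 0.5, 0)
    return corrected, fused

An affine model (e.g., cv2.estimateAffinePartial2D) would be the natural choice when the geometric difference is known to be a similarity transform, as in the simulated rotation and scale experiments above.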
CONCLUSION
In this paper, we present a robust multimodal image matching method that can be effectively applied to various modalities of images. The whole process is based on multi-scale and multi-orientation log-Gabor filtering results, and the filter makes the method naturally resistant to noise. Firstly, we detect distinctive and repeatable feature points on phase congruency maps in an image pyramid. Secondly, we estimate the primary orientation by exploiting the index information from different image orientations to make the method invariant to image rotation. Thirdly, we construct a template feature, fully exploiting the available filtering results and increasing the probability of successful matching. We demonstrate the benefits of our method on a large number of image datasets with various image modalities. In the future, we intend to apply convolutional neural networks to generate more accurate and robust orientation index maps, which will benefit accurate primary orientation estimation and feature descriptor construction. Besides, considering the high computation cost of log-Gabor filters, we will try other lightweight filters or approaches to extract the multi-scale and multi-orientation image information and improve the processing efficiency.
Fig. 1. Comparison of matching results on six typical multimodal image pairs with different modalities. (a) day-night; (b) visible-infrared; (c) MRI-PET; (d) retina-retina (different angiography); (e) optical-LiDAR depth; (f) optical-SAR. It can be seen that NISR significantly improves the matching performance compared with the state-of-the-art methods OS-SIFT, RIFT, CoFSM, and LNIFT, and can be applied to various image pairs with scale change, image rotation, and severe nonlinear intensity distortion.
Fig. 2. The feature extraction results on a pair of optical-depth images. (a) and (d) are the original optical and depth images; (b) and (e) are the edge maps calculated by Equation 16; (c) and (f) show the detected feature points.
Fig. 3. Feature description.
Fig. 5. Orientation index value modification. (a) and (c) are the local index maps of the optical image (Fig. 4c) and the infrared image (Fig. 4d), where the two images are rotated by 60° and 120°, respectively. (b) and (d) are the maps after modification. We can see that the index maps become similar after modification.
Fig. 4. Appearance of orientation index maps under different rotation angles. (a) optical (top) and infrared (bottom); (b)-(d) correspond to the maps with enlarged areas marked in red squares at rotation angles of 0°, 60°, and 120°.
Fig. 6. Results of different stages of our feature matching method. (a) a pair of optical-infrared images with rotation; (b) extracted feature points; (c) and (d) the two index maps with the estimated primary orientation of each feature point, indicated with red arrows; (e) established correspondences.
Fig. 8. Typical multimodal image pairs from dataset 1 (computer vision), dataset 2 (medicine), and dataset 3 (remote sensing). (a) day-night; (b) visible-near infrared; (c) visible-infrared; (d) visible-LiDAR intensity; (e) visible-LiDAR depth; (f) visible-thermal; (g) MRI-PET; (h) PD-T1; (i) PD-T2; (j) retina-retina (different angiography); (k) SPECT-CT; (l) T1-T2; (m) optical-optical (different season); (n) optical-infrared; (o) optical-LiDAR depth; (p) optical-map; (q) optical-SAR; (r) night-day.
Fig. 7. Results of different stages of our template matching method. (a) a pair of optical-infrared images with rotation; (b) the original optical image and a simulated infrared image generated using a similarity matrix estimated from the matches obtained in feature matching; (c) constructed template features; (d) established correspondences on the image pair of (b); (e) established correspondences on the image pair of (a).
Fig. 10. Some typical visualization results of Figure 9. The angle below an image represents the rotation angle between the image pair.
Fig. 11. The changing curve of the number of matches with different ratios of scale change.
Fig. 9. The changing curve of the number of matches with different angles of image rotation for a visible-thermal image pair, a retina-retina image pair, and an optical-map image pair.
Fig. 12. Some typical visualization results of Figure 11. The ratio below an image represents the scale difference.
Fig. 13. The qualitative comparison results of OS-SIFT, RIFT, CoFSM, LNIFT, NISRini, and NISR on the six typical image pairs of dataset 1. (a) day-night; (b) visible-near infrared; (c) visible-infrared; (d) visible-LiDAR intensity; (e) visible-LiDAR depth; (f) visible-thermal.
Fig. 14. The qualitative comparison results of OS-SIFT, RIFT, CoFSM, LNIFT, NISRini, and NISR on the six typical image pairs of dataset 2. (a) MRI-PET; (b) PD-T1; (c) PD-T2; (d) retina-retina (different angiography); (e) SPECT-CT; (f) T1-T2.
Fig. 15. The qualitative comparison results of OS-SIFT, RIFT, CoFSM, LNIFT, NISRini, and NISR on the six typical image pairs of dataset 3. (a) optical-optical (different season); (b) optical-infrared; (c) optical-LiDAR depth; (d) optical-map; (e) optical-SAR; (f) night-day.
Fig. 16. The image registration and fusion results of NISR on six multimodal image pairs. (a) visible-LiDAR depth; (b) MRI-PET; (c) optical-map; (d) visible-thermal; (e) PD-T1; (f) optical-SAR. For each subfigure (a)-(f), the two images in the first row are the original images, and the images on the left and right of the second row are the registration and fusion results, respectively.
TABLE 1
RESULTS OF FEATURE MATCHING WITH/WITHOUT THE PROPOSED STRATEGIES. ORI. DENOTES THE ORIGINAL METHOD; STR.1 DENOTES THE STRATEGY CONSIDERING SECONDARY PRIMARY ORIENTATION; STR.2 DENOTES THE STRATEGY CONSIDERING DOUBLE INDEX MAPS.

Method           | NM ↑ | RMSE (pixels) ↓ | SR (%) ↑
Ori.             | 605  | 2.48            | 82.3
Ori.+Str.1       | 723  | 2.39            | 85.4
Ori.+Str.2       | 888  | 2.19            | 96.3
Ori.+Str.1+Str.2 | 1012 | 2.15            | 98.8
TABLE 3
THE PERFORMANCE OF NISR UNDER DIFFERENT PARAMETER SETTINGS.

Parameter setting | NM ↑ | RMSE (pixels) ↓ | SR (%) ↑
2, 12, 72, 6      | 735  | 1.21            | 83.5
3, 12, 72, 6      | 806  | 1.05            | 92.1
4, 12, 72, 6      | 829  | 0.98            | 98.8
5, 12, 72, 6      | 809  | 1.12            | 92.1
6, 12, 72, 6      | 763  | 1.38            | 90.2
4, 6, 72, 6       | 816  | 1.13            | 87.8
4, 8, 72, 6       | 823  | 1.11            | 91.5
4, 10, 72, 6      | 824  | 1.06            | 94.5
4, 14, 72, 6      | 831  | 1.01            | 98.8
4, 12, 36, 6      | 752  | 1.05            | 87.8
4, 12, 48, 6      | 782  | 1.03            | 91.5
4, 12, 60, 6      | 806  | 0.98            | 93.9
4, 12, 84, 6      | 832  | 0.94            | 98.8
4, 12, 72, 2      | 804  | 1.31            | 86.6
4, 12, 72, 3      | 810  | 1.03            | 90.2
4, 12, 72, 4      | 820  | 1.01            | 95.1
4, 12, 72, 5      | 820  | 0.98            | 97.6
TABLE 2
EXPERIMENTAL DATASETS

Dataset 1 in Computer Vision, 38 pairs
Image pair: day-night | visible-near infrared | visible-infrared | visible-LiDAR intensity | visible-LiDAR depth | visible-thermal
Number:     4         | 7                     | 11               | 6                       | 4                   | 6
Resolution: 263 × 198 ~ 1119 × 1465 pixels

Dataset 2 in Medicine, 64 pairs
Image pair: MRI-PET | PD-T1 | PD-T2 | retina-retina | SPECT-CT | T1-T2
Number:     7       | 10    | 10    | 23            | 4        | 10
Resolution: 181 × 217 ~ 1280 × 960 pixels

Dataset 3 in Remote Sensing, 62 pairs
Image pair: optical-optical | optical-infrared | optical-LiDAR depth | optical-map | optical-SAR | night-day
Number:     14              | 7                | 7                   | 10          | 14          | 10
Resolution: 350 × 426 ~ 1001 × 1001 pixels
TABLE 4
THE COMPARATIVE RESULTS ACROSS ALL DATASETS. VIS, NIR, IR, LI, LD, THERM, RETI, AND OPTI DENOTE VISIBLE, NEAR INFRARED, INFRARED, LIDAR INTENSITY, LIDAR DEPTH, THERMAL, RETINA, AND OPTICAL, RESPECTIVELY.

NM ↑
Image pair | OS-SIFT | RIFT | CoFSM | LNIFT | NISRini | NISR
Dataset 1
day-night  | 177 | 86  | 576  | 218 | 462  | 1864
VIS-NIR    | 120 | 182 | 252  | 246 | 936  | 2111
VIS-IR     | 17  | 81  | 99   | 170 | 351  | 1363
VIS-LI     | 153 | 336 | 1448 | 235 | 2101 | 3427
VIS-LD     | 87  | 448 | 435  | 158 | 1414 | 2752
VIS-Therm  | 60  | 45  | 260  | 0   | 1773 | 2414
Dataset 2
MRI-PET    | 0   | 0   | 0    | 0   | 70   | 640
PD-T1      | 29  | 80  | 164  | 175 | 165  | 281
PD-T2      | 26  | 84  | 182  | 225 | 177  | 286
Reti-Reti  | 70  | 701 | 385  | 253 | 3067 | 3223
SPECT-CT   | 0   | 0   | 0    | 46  | 62   | 565
T1-T2      | 36  | 120 | 184  | 338 | 237  | 411
Dataset 3
Opti-Opti  | 109 | 251 | 406  | 73  | 704  | 1875
Opti-IR    | 163 | 330 | 587  | 95  | 1321 | 2331
Opti-LD    | 62  | 224 | 360  | 133 | 725  | 2475
Opti-map   | 0   | 143 | 339  | 66  | 610  | 2237
Opti-SAR   | 80  | 235 | 396  | 141 | 820  | 2424
night-day  | 104 | 186 | 468  | 62  | 587  | 1926
All
Average    | 70  | 281 | 391  | 173 | 1012 | 1883

RMSE (pixels) ↓
Image pair | OS-SIFT | RIFT | CoFSM | LNIFT | NISRini | NISR
Dataset 1
day-night  | 1.86 | 3.11 | 1.99 | 2    | 2.18 | 1.02
VIS-NIR    | 2.3  | 2.67 | 2.78 | 3.92 | 2.61 | 1.08
VIS-IR     | 3.24 | 2.86 | 2.5  | 2.95 | 2.65 | 1.16
VIS-LI     | 1.65 | 1.43 | 1.52 | 1.99 | 1.76 | 0.81
VIS-LD     | 2.69 | 1.96 | 1.33 | 2.96 | 1.62 | 0.81
VIS-Therm  | 3.19 | 3.16 | 2.01 | 5    | 2.45 | 0.90
Dataset 2
MRI-PET    | 5    | 5    | 5    | 5    | 4.42 | 2.06
PD-T1      | 1.77 | 2.16 | 2.84 | 2.82 | 1.56 | 0.69
PD-T2      | 2.46 | 1.87 | 2.36 | 2.83 | 1.37 | 0.69
Reti-Reti  | 2.52 | 1.64 | 1.47 | 3.2  | 1.34 | 0.89
SPECT-CT   | 5    | 5    | 5    | 4.4  | 4.39 | 1.63
T1-T2      | 1.59 | 1.15 | 1.49 | 2.17 | 1.02 | 0.64
Dataset 3
Opti-Opti  | 2    | 1.64 | 2.56 | 2.48 | 2.35 | 0.96
Opti-IR    | 1.49 | 1.5  | 2.62 | 3.17 | 1.41 | 0.61
Opti-LD    | 1.5  | 1.13 | 3.14 | 3.63 | 2.19 | 0.81
Opti-map   | 5    | 2.23 | 1.88 | 3.56 | 2.28 | 0.91
Opti-SAR   | 1.76 | 1.9  | 3.05 | 3.33 | 2.66 | 1.21
night-day  | 2.47 | 3.36 | 3.07 | 3.03 | 2.81 | 1.16
All
Average    | 2.15 | 1.96 | 2.2  | 2.93 | 2.15 | 0.98

SR (%) ↑
Image pair | OS-SIFT | RIFT | CoFSM | LNIFT | NISRini | NISR
Dataset 1
day-night  | 75   | 50   | 100  | 75   | 100  | 100
VIS-NIR    | 100  | 71.4 | 85.7 | 28.5 | 100  | 100
VIS-IR     | 9.1  | 63.6 | 9.1  | 54.5 | 100  | 100
VIS-LI     | 33.3 | 83.3 | 83.3 | 50   | 100  | 100
VIS-LD     | 50   | 75   | 50   | 75   | 100  | 100
VIS-Therm  | 33.3 | 16.6 | 33.3 | 0    | 100  | 100
Dataset 2
MRI-PET    | 0    | 0    | 0    | 0    | 100  | 100
PD-T1      | 80   | 90   | 90   | 70   | 100  | 100
PD-T2      | 90   | 90   | 100  | 80   | 100  | 100
Reti-Reti  | 43.5 | 100  | 69.5 | 26.1 | 100  | 100
SPECT-CT   | 0    | 0    | 0    | 25   | 100  | 100
T1-T2      | 90   | 90   | 100  | 50   | 100  | 100
Dataset 3
Opti-Opti  | 7.1  | 71.4 | 64.2 | 35.7 | 100  | 100
Opti-IR    | 28.6 | 100  | 100  | 42.8 | 85.7 | 85.7
Opti-LD    | 14.3 | 85.7 | 28.5 | 14.2 | 100  | 100
Opti-map   | 0    | 90   | 30   | 50   | 100  | 100
Opti-SAR   | 7.1  | 78.5 | 28.5 | 50   | 100  | 100
night-day  | 10   | 70   | 40   | 40   | 90   | 90
All
Average    | 35.9 | 75.6 | 57.3 | 42.1 | 98.8 | 98.8
f1(x, y, θ) = f2(x − x0, y − y0, θ)    (21)

where x0 and y0 are the offsets in the horizontal and vertical directions, respectively. By performing a 3D Fourier transform on f1(x, y, θ) and f2(x, y, θ), we can obtain F1(u, v, w) and F2(u, v, w). According to the Fourier shift theorem, the correlation between F1(u, v, w) and F2(u, v, w) can be represented as follows:

ℱ⁻¹(F1(u, v, w) · F2(u, v, w)*) = δ(x − x0, y − y0)
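As a hedged illustration of this phase-correlation step (our own numpy sketch, not the paper's code; the variable names and the treatment of the orientation axis are assumptions), the translation can be recovered from the peak of the inverse transform of the normalized cross-power spectrum:

import numpy as np

def phase_correlation_offset(f1, f2):
    """Estimate the translation between two 3-D feature stacks.

    f1, f2: arrays of shape (H, W, O), e.g., filter responses over O
    orientations, with f2 assumed to be a translated copy of f1.
    """
    F1 = np.fft.fftn(f1)
    F2 = np.fft.fftn(f2)

    # Normalized cross-power spectrum; its inverse transform peaks at the
    # translation offset (Fourier shift theorem).
    cross = F1 * np.conj(F2)
    cross /= np.abs(cross) + 1e-12        # guard against division by zero
    corr = np.real(np.fft.ifftn(cross))

    # Sum over the orientation axis and locate the correlation peak; the
    # indices give the offset modulo the array size (circular wraparound).
    corr2d = corr.sum(axis=2)
    y0, x0 = np.unravel_index(np.argmax(corr2d), corr2d.shape)
    return x0, y0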
ACKNOWLEDGMENTThe authors wish to thank Dr. Haibin Ai for his helpful suggestions. We also thank anonymous reviewers for their valuable feedback.
REFERENCES
[1] D. Lowe, "Distinctive Image Features from Scale Invariant Key Points," Int. J. Comput. Vis., vol. 60, no. 2, pp. 91-110, 2004.
[2] Y. Ke and R. Sukthankar, "PCA-SIFT: A More Distinctive Representation for Local Image Descriptors," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2004.
[3] J. Chen et al., "WLD: A robust local image descriptor," IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 9, pp. 1705-1720, Sep. 2010.
[4] A. Sedaghat, M. Mokhtarzade, and H. Ebadi, "Uniform robust scale-invariant feature matching for optical remote sensing images," IEEE Trans. Geosci. Remote Sens., vol. 49, no. 11, pp. 4516-4527, Nov. 2011.
[5] J. Ma, H. Zhou, J. Zhao, Y. Gao, J. Jiang, and J. Tian, "Robust feature matching for remote sensing image registration via locally linear transforming," IEEE Trans. Geosci. Remote Sens., vol. 53, no. 12, pp. 6469-6481, 2015.
[6] B. Xu, L. Zhang, Y. Liu, H. Ai, B. Wang, Y. Sun, and Z. Fan, "Robust hierarchical structure from motion for large-scale unstructured image sets," ISPRS J. Photogramm. Remote Sens., vol. 181, pp. 367-384, 2021.
[7] J. Engel, V. Koltun, and D. Cremers, "Direct sparse odometry," IEEE Trans. Pattern Anal. Mach. Intell., 2017.
[8] J. Ma, X. Jiang, A. Fan, J. Jiang, and J. Yan, "Image matching from handcrafted to deep features: A survey," Int. J. Comput. Vis., vol. 129, no. 1, pp. 23-79, 2021.
[9] X. Jiang, J. Ma, G. Xiao, Z. Shao, and X. Guo, "A review of multimodal image matching: Methods and applications," Inf. Fusion, vol. 73, pp. 22-71, 2021.
[10] B. Zitová and J. Flusser, "Image registration methods: A survey," Image Vis. Comput., vol. 21, no. 11, pp. 977-1000, Oct. 2003.
[11] C. A. Aguilera, A. D. Sappa, and R. Toledo, "LGHD: A feature descriptor for matching across non-linear intensity variations," in Proc. IEEE Int. Conf. Image Process., Sep. 2015, pp. 178-181.
[12] Y. Ye, J. Shan, L. Bruzzone, and L. Shen, "Robust registration of multimodal remote sensing images based on structural similarity," IEEE Trans. Geosci. Remote Sens., vol. 55, no. 5, pp. 2941-2958, Mar. 2017.
[13] Y. Ye, L. Bruzzone, J. Shan, F. Bovolo, and Q. Zhu, "Fast and robust matching for multimodal remote sensing image registration," IEEE Trans. Geosci. Remote Sens., vol. 57, no. 11, pp. 9059-9070, Nov. 2019.
[14] J. Li, Q. Hu, and M. Ai, "RIFT: Multi-modal image matching based on radiation-variation insensitive feature transform," IEEE Trans. Image Process., vol. 29, pp. 3296-3310, 2020.
[15] J. Chen, J. Tian, N. Lee, J. Zheng, R. T. Smith, and A. F. Laine, "A partial intensity invariant feature descriptor for multimodal retinal image registration," IEEE Trans. Biomed. Eng., vol. 57, no. 7, pp. 1707-1718, Jul. 2010.
[16] Y. Xiang, F. Wang, and H. You, "OS-SIFT: A robust SIFT-like algorithm for high-resolution optical-to-SAR image registration in suburban areas," IEEE Trans. Geosci. Remote Sens., vol. 56, no. 6, pp. 3078-3090, Jun. 2018.
[17] Y. Yao, Y. Zhang, Y. Wan, X. Liu, X. Yan, and J. Li, "Multimodal remote sensing image matching considering co-occurrence filter," IEEE Trans. Image Process., vol. 31, pp. 2584-2597, 2022.
[18] Z. Fan, Y. Liu, Y. Liu, L. Zhang, J. Zhang, Y. Sun, and H. Ai, "3MRS: An Effective Coarse-to-Fine Matching Method for Multimodal Remote Sensing Imagery," Remote Sens., vol. 14, no. 3, p. 478, 2022.
[19] J. Li, W. Xu, P. Shi, Y. Zhang, and Q. Hu, "LNIFT: Locally normalized image for rotation invariant multimodal feature matching," IEEE Trans. Geosci. Remote Sens., vol. 60, pp. 1-14, 2022.
[20] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: An efficient alternative to SIFT or SURF," in Proc. IEEE Int. Conf. Comput. Vis., Barcelona, Spain, 2011, pp. 2564-2571.
[21] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., vol. 1, Jun. 2005, pp. 886-893.
[22] F. Maes, A. Collignon, D. Vandermeulen, G. Marchal, and P. Suetens, "Multimodality image registration by maximization of mutual information," IEEE Trans. Med. Imag., vol. 16, pp. 187-198, Apr. 1997.
[23] X. Liu, S. Chen, L. Zhuo, J. Li, and K. Huang, "Multi-sensor image registration by combining local self-similarity matching and mutual information," Front. Earth Sci., vol. 12, no. 4, pp. 779-790, 2018.
[24] X. Xiong, Q. Xu, G. Jin, H. Zhang, and X. Gao, "Rank-based local self-similarity descriptor for optical-to-SAR image matching," IEEE Geosci. Remote Sens. Lett., vol. 17, no. 10, pp. 1742-1746, 2020.
[25] Z. Fan, L. Zhang, Y. Liu, Q. Wang, and S. Zlatanova, "Exploiting high geopositioning accuracy of SAR data to obtain accurate geometric orientation of optical satellite images," Remote Sens., vol. 13, no. 17, p. 3535, 2021.
[26] B. Zhu, Y. Ye, L. Zhou, Z. Li, and G. Yin, "Robust registration of aerial images and LiDAR data using spatial constraints and Gabor structural features," ISPRS J. Photogramm. Remote Sens., vol. 181, pp. 129-147, 2021.
[27] P. Kovesi, "Phase congruency detects corners and edges," in Proc. Austral. Pattern Recognit. Soc. Conf. DICTA, 2003, pp. 309-318.
[28] E. Rosten, R. Porter, and T. Drummond, "Faster and better: A machine learning approach to corner detection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 1, pp. 105-119, Jan. 2010.
[29] D. J. Field, "Relations between the statistics of natural images and the response properties of cortical cells," J. Opt. Soc. Amer. A, vol. 4, no. 12, pp. 2379-2394, 1987.
[30] B. Horn, Robot Vision. Cambridge, MA, USA: MIT Press, 1986.
[31] Y. Wu, W. Ma, M. Gong, L. Su, and L. Jiao, "A novel point-matching algorithm based on fast sample consensus for image registration," IEEE Geosci. Remote Sens. Lett., vol. 12, no. 1, pp. 43-47, Jan. 2015.
| [
"https://github.com/Zhongli-Fan/NISR."
] |
[
"Published as a conference paper at ICLR 2020 SCALABLE NEURAL METHODS FOR REASONING WITH A SYMBOLIC KNOWLEDGE BASE",
"Published as a conference paper at ICLR 2020 SCALABLE NEURAL METHODS FOR REASONING WITH A SYMBOLIC KNOWLEDGE BASE"
] | [
"William W Cohen [email protected] \nGoogleInc\n",
"Haitian Sun [email protected] \nGoogleInc\n",
"& R Alex Hofer [email protected] \nGoogleInc\n",
"Matthew Siegler [email protected] \nGoogleInc\n"
] | [
"GoogleInc",
"GoogleInc",
"GoogleInc",
"GoogleInc"
] | [] | We describe a novel way of representing a symbolic knowledge base (KB) called a sparse-matrix reified KB. This representation enables neural KB inference modules that are fully differentiable, faithful to the original semantics of the KB, expressive enough to model multi-hop inferences, and scalable enough to use with realistically large KBs. The sparse-matrix reified KB can be distributed across multiple GPUs, can scale to tens of millions of entities and facts, and is orders of magnitude faster than naive sparse-matrix implementations. The reified KB enables very simple end-to-end architectures to obtain competitive performance on several benchmarks representing two families of tasks: KB completion, and learning semantic parsers from denotations. | null | [
"https://arxiv.org/pdf/2002.06115v1.pdf"
] | 211,126,517 | 2002.06115 | 7f0dbd30dc839fd95ea953a9229c879396ca11c0 |
SCALABLE NEURAL METHODS FOR REASONING WITH A SYMBOLIC KNOWLEDGE BASE
William W Cohen [email protected]
GoogleInc
Haitian Sun [email protected]
GoogleInc
& R Alex Hofer [email protected]
GoogleInc
Matthew Siegler [email protected]
GoogleInc
ABSTRACT
We describe a novel way of representing a symbolic knowledge base (KB) called a sparse-matrix reified KB. This representation enables neural KB inference modules that are fully differentiable, faithful to the original semantics of the KB, expressive enough to model multi-hop inferences, and scalable enough to use with realistically large KBs. The sparse-matrix reified KB can be distributed across multiple GPUs, can scale to tens of millions of entities and facts, and is orders of magnitude faster than naive sparse-matrix implementations. The reified KB enables very simple end-to-end architectures to obtain competitive performance on several benchmarks representing two families of tasks: KB completion, and learning semantic parsers from denotations.
INTRODUCTION
There has been much prior work on using neural networks to generalize the contents of a KB (Xiong et al., 2017; Bordes et al., 2013; Dettmers et al., 2018), typically by constructing low-dimensional embeddings of the entities and relations in the KB, which are then used to score potential triples as plausible or implausible elements of the KB. We consider here the related but different problem of incorporating a symbolic KB into a neural system, so as to inject knowledge from an existing KB directly into a neural model. More precisely, we consider the problem of designing neural KB inference modules that are (1) fully differentiable, so that any loss based on their outputs can be backpropagated to their inputs; (2) accurate, in that they are faithful to the original semantics of the KB; (3) expressive, so they can perform non-trivial inferences; and (4) scalable, so that realistically large KBs can be incorporated into a neural model.
To motivate the goal of incorporating a symbolic KB into a neural network, consider the task of learning neural semantic parsers from denotations. Many questions, e.g., what's the most recent movie that Quentin Tarantino directed? or which nearby restaurants have vegetarian entrees and take reservations?, are best answered by knowledge-based question-answering (KBQA) methods, where an answer is found by accessing a KB. Within KBQA, a common approach is neural semantic parsing, i.e., using neural methods to translate a natural-language question into a structured query against the KB (Zhong et al., 2017; Finegan-Dollak et al., 2018; Shaw et al., 2019), which is subsequently executed with a symbolic KB query engine. While this approach can be effective, it requires training data pairing natural-language questions with structured queries, which is difficult to obtain. Hence researchers have also considered learning semantic parsers from denotations (Berant et al., 2013; Yih et al., 2015), where training data consists of pairs (q, A), where q is a natural-language question and A is the desired answer. Typically A is a set of KB entities; e.g., if q is the first sample question above, A would be the singleton set containing Once Upon a Time in Hollywood.
Learning semantic parsers from denotations is difficult because the end-to-end process to be learned includes a non-differentiable operation, i.e., reasoning with the symbolic KB that contains the answers. To circumvent this difficulty, prior systems have used three different approaches. Some have used heuristic search to infer structured queries from denotations (Pasupat & Liang, 2016; Dasigi et al., 2019): this works in some cases, but often an answer could be associated with many possible structured queries, introducing noise. Others have supplemented gradient approaches with reinforcement learning (e.g., (Misra et al., 2018)). Some systems have also "neuralized" KB reasoning, but to date only over small KBs: this approach is natural when answers are naturally constrained to depend on a small set of facts (e.g., a single table (Zhong et al., 2017; Gupta & Lewis, 2018)), but more generally requires coupling a learner with some (non-differentiable) mechanism to retrieve an appropriate small question-dependent subset of the KB as in (Sun et al., 2018; 2019).

Table 1: Summary of notation used in the paper. (This excludes notation used in defining models for the KB completion and QA tasks of Section 3.)

x: an entity | X: weighted set of entities | x: vector encoding X | N_E: # entities in KB
r: a relation | R: weighted set of relations | r: vector encoding R | N_R: # relations in KB
M_r: matrix for r | M_R: weighted sum of M_r's, see Eq 1 | follow(x, r): see Eq 2 | N_T: # triples in KB
M_subj, M_obj, M_rel: the reified KB, encoded as matrices mapping triple id to subject, object, and relation ids
In this paper, we introduce a novel scheme for incorporating reasoning on a large question-independent KB into a neural network, by representing a symbolic KB with an encoding called a sparse-matrix reified KB. A sparse-matrix reified KB is very compact, can be distributed across multiple GPUs if necessary, and is well-suited to modern GPU architecture. For KBs with many relations, a reified KB can be up to four orders of magnitude faster than alternative implementations (even alternatives based on sparse-matrix representations), and in our experiments we demonstrate scalability to a KB with over 13 million entities and nearly 44 million facts. This new architectural component leads to radically simpler architectures for neural semantic parsing from denotations-architectures based on a single end-to-end differentiable process, rather than cascades of retrieval and neural processes.
We show that very simple instantiations of these architectures are still highly competitive with the state of the art for several benchmark tasks. To our knowledge these models are the first fully end-to-end neural parsers from denotations that have been applied to these benchmark tasks. We also demonstrate that these architectures scale to long chains of reasoning on synthetic tasks, and demonstrate similarly simple architectures for a second task, KB completion.
NEURAL REASONING WITH A SYMBOLIC KB
BACKGROUND
KBs, entities, and relations. A KB consists of entities and relations. We use x to denote an entity and r to denote a relation. Each entity has an integer index between 1 and N_E, where N_E is the number of entities in the KB, and we write x_i for the entity that has index i. A relation is a set of entity pairs, and represents a relationship between entities: for instance, if x_i represents "Quentin Tarantino" and x_j represents "Pulp Fiction", then (x_i, x_j) would be a member of the relation director_of. A relation r can thus be represented as a subset of {1, …, N_E} × {1, …, N_E}. Finally, a KB consists of a set of relations and a set of entities.
Weighted sets as "k-hot" vectors. Our differentiable operations are based on weighted sets, where each element x of a weighted set X is associated with a non-negative real number. It is convenient to define this weight to be zero for all x ∉ X, while for x ∈ X, a weight less than 1 is a confidence that the set contains x, and weights more than 1 make X a multiset. If all elements of X have weight 1, we say X is a hard set. A weighted set X can be encoded as an entity-set vector x ∈ R^{N_E}, where the i-th component of x is the weight of x_i in X. If X is a hard entity set, then this will be a "k-hot" vector, for k = |X|. The set of indices of x with non-zero values is called the support of x.
Sets of relations, and relations as matrices. Often we would like to reason about sets of relations, so we also assume every relation r in a KB is associated with an entity and hence an integer index. We write r_k for the relation with index k, and we assume that relation entities are listed first in the index of entities, so the index k for r_k is between 1 and N_R, where N_R is the number of relations in the KB. We use R for a set of relations, e.g., R = {writer_of, director_of} might be such a set, and use r for a vector encoding of a set. A relation r can be encoded as a relation matrix M_r ∈ R^{N_E × N_E}, where the value of M_r[i, j] is (in general) the weight of the assertion r(x_i, x_j) in the KB. In the experiments of this paper, all KB relations are hard sets, so M_r[i, j] ∈ {0, 1}.
Sparse vs. dense matrices for relations. Scalably representing a large KB requires careful consideration of the implementation. One important issue is that for all but the smallest KBs, a relation matrix must be implemented using a sparse matrix data structure, as explicitly storing all N 2 E values is impractical. For instance, consider a KB containing 10,000 movie entities and 100,000 person entities. A relationship like writer_of would have only a few tens of thousands of facts (since most movies have only one or two writers), but a dense matrix would have 1 billion values.
We thus model relations as sparse matrices. Let N_r be the number of entity pairs in the relation r: common sparse matrix data structures require space O(N_r). One common sparse matrix data structure is the sparse coordinate pair (COO) encoding: with a COO encoding, each KB fact requires storing only two integers and one float.
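For concreteness, a minimal scipy sketch of a COO-encoded relation matrix is shown below (the toy entity indices are assumptions made purely for illustration):

import numpy as np
from scipy.sparse import coo_matrix

N_E = 5  # toy KB with 5 entities

# Two writer_of facts as (subject, object) index pairs with weight 1.0,
# e.g., entity 0 = "Quentin Tarantino", entities 3 and 4 = two movies.
rows = np.array([0, 0])
cols = np.array([3, 4])
vals = np.ones(2, dtype=np.float32)

# COO storage keeps two integers and one float per fact: O(N_r) space.
M_writer_of = coo_matrix((vals, (rows, cols)), shape=(N_E, N_E))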
Our implementations are based on Tensorflow (Abadi et al., 2016), which offers limited support for sparse matrices. In particular, driven by the limitations of GPU architecture, Tensorflow only supports matrix multiplication between a sparse COO matrix and a dense matrix, but not between two sparse matrices, or between sparse higher-rank tensors and dense tensors.
Entity types. It is often possible to easily group entities into disjoint sets by some notion of "type": for example, in a movie domain, all entities might be either of the type "movie", "person", or "movie studio". It is straightforward to extend the formalism above to typed sets of entities, and doing this can lead to some useful optimizations. We use these optimizations below where appropriate: in particular, relation-set vectors r are of dimension N R , not N E , in the sections below. The full formal extension to typed entities and relations is given in Appendix A.
REASONING IN A KB
The relation-set following operation. Note that relations can also be viewed as labeled edges in a knowledge graph, the vertices of which are entities. Adopting this view, we define the r-neighbors of an entity x_i to be the set of entities x_j that are connected to x_i by an edge labeled r, i.e., r-neighbors(x_i) ≡ {x_j : (x_i, x_j) ∈ r}. Extending this to relation sets, we define

R-neighbors(X) ≡ {x_j : ∃r ∈ R, x_i ∈ X so that (x_i, x_j) ∈ r}

Computing the R-neighbors of an entity is a single-step reasoning operation: e.g., the answer to the question q = "what movies were produced or directed by Quentin Tarantino" is precisely the set R-neighbors(X) for R = {producer_of, director_of} and X = {Quentin_Tarantino}. "Multi-hop" reasoning operations require nested R-neighborhoods, e.g., if R′ = {actor_of} then R′-neighbors(R-neighbors(X)) is the set of actors in movies produced or directed by Quentin Tarantino.
We would like to approximate the R-neighbors computation with differentiable operations that can be performed on the vectors encoding the sets X and R. Let x encode a weighted set of entities X, and let r encode a weighted set of relations. We first define M_R to be a weighted mixture of the relation matrices for all relations in R, i.e.,

M_R ≡ Σ_{k=1}^{N_R} r[k] · M_{r_k}    (1)
We then define the relation-set following operation for x and r as:

follow(x, r) ≡ x M_R = x (Σ_{k=1}^{N_R} r[k] · M_{r_k})    (2)
As we will show below, this differentiable numerical relation-set following operation can be used as a neural component to perform certain types of logical reasoning. In particular, Eq 2 corresponds closely to the logical R-neighborhood operation, as shown by the claim below.
Claim 1 The support of follow(x, r) is exactly the set of R-neighbors(X).
A proof and the implications of this are discussed in Appendix B.
SCALABLE RELATION-SET FOLLOWING WITH A REIFIED KB
Baseline implementations. Suppose the KB contains N_R relations, N_E entities, and N_T triples. Typically N_R < N_E < N_T ≪ N_E^2. As noted above, we implement each M_r as a sparse COO matrix, so collectively these matrices require space O(N_T). Each triple appears in only one relation, so M_R in Eq 1 is also size O(N_T). Since sparse-sparse matrix multiplication is not supported in Tensorflow, we implement x M_R using dense-sparse multiplication, so x must be a dense vector of size O(N_E), as is the output of relation-set following. Thus the space complexity of follow(x, r) is O(N_T + N_E + N_R), if implemented as suggested by Eq 2. We call this the naive mixing implementation, and its complexity is summarized in Table 2.

Table 2: Complexity of the three mathematically equivalent implementations of relation-set following, where b is the minibatch size. The last three columns count, per call, the sparse-dense matrix multiplications, the dense element-wise operations (sums or Hadamard products), and the sparse matrices that must be summed.

Implementation | Definition | Minibatch? | Space                    | Sparse-dense matmuls | Dense ops | Sparse sums
naive mixing   | Eq 2       | no         | O(N_T + N_E + N_R)       | 1                    | 0         | N_R
late mixing    | Eq 3       | yes        | O(N_T + bN_E + bN_R)     | N_R                  | N_R       | 0
reified KB     | Eq 4       | yes        | O(bN_T + bN_E)           | 3                    | 1         | 0
Because Tensorflow does not support general sparse tensor contractions, it is not always possible to extend sparse-matrix computations to minibatches. Thus we also consider a variant of naive mixing called late mixing, which mixes the output of many single-relation following steps, rather than mixing the KB itself:
follow(x, r) = Σ_{k=1}^{N_R} (r[k] · x M_{r_k})    (3)

Unlike naive mixing, late mixing can be extended easily to minibatches (see Appendix C). Let b be the batch size and X be a minibatch of b examples [x_1; …; x_b]: this approach then leads to N_R matrices X M_{r_k}, each of size O(bN_E). However, they need not all be stored at once, so the space complexity becomes O(bN_E + bN_R + N_T). An additional cost of late mixing is that we must now sum up N_R dense matrices.
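To make Eqs 2 and 3 concrete, here is a minimal stand-alone sketch of the two mixing strategies (the paper's actual implementation is in Tensorflow; this single-example version is our illustration). Here x is a 1-D numpy entity vector of length N_E, r a 1-D relation vector of length N_R, and Ms a list of N_R scipy sparse relation matrices:

def follow_naive(x, r, Ms):
    # Eq 2 ("naive mixing"): mix the sparse relation matrices first,
    # then perform a single sparse-dense product.
    M_R = sum(r[k] * Ms[k] for k in range(len(Ms)))
    return M_R.T.dot(x)          # equals x M_R for a 1-D vector x

def follow_late(x, r, Ms):
    # Eq 3 ("late mixing"): one sparse-dense product per relation,
    # with the N_R dense results mixed afterwards.
    return sum(r[k] * Ms[k].T.dot(x) for k in range(len(Ms)))

Both functions compute the same vector; they differ only in where the mixture over relations happens, which is exactly the space/time trade-off summarized in Table 2.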
A reified knowledge base. While semantic parses for natural questions often use small sets of relations (often singleton ones), in learning there is substantial uncertainty about what the members of these small sets should be. Furthermore, realistic wide-coverage KBs have many relations-typically hundreds or thousands. This leads to a situation where, at least during early phases of learning, it is necessary to evaluate the result of mixing very large sets of relations. When many relations are mixed, late mixing becomes quite expensive (as experiments below show).
An alternative is to represent each KB assertion r_k(x_i, x_j) as a tuple (i, j, k), where i, j, k are the indices of x_i, x_j, and r_k. There are N_T such triples, so for ℓ = 1, …, N_T, let (i_ℓ, j_ℓ, k_ℓ) denote the ℓ-th triple. We define these sparse matrices:

M_subj[ℓ, m] ≡ 1 if m = i_ℓ, else 0;   M_obj[ℓ, m] ≡ 1 if m = j_ℓ, else 0;   M_rel[ℓ, m] ≡ 1 if m = k_ℓ, else 0

Conceptually, M_subj maps the index ℓ of the ℓ-th triple to its subject entity; M_obj maps ℓ to the object entity; and M_rel maps ℓ to the relation. We can now implement relation-set following as below, where ⊙ is the Hadamard product:

follow(x, r) = (x M_subj^T ⊙ r M_rel^T) M_obj    (4)
Notice that x M_subj^T gives the weighted set of triples whose subject entity is in x, r M_rel^T gives the set of triples whose relation is in r, and the Hadamard product is the intersection of these. The final multiplication by M_obj finds the object entities of the triples in the intersection. These operations naturally extend to minibatches (see Appendix). The reified KB has size O(N_T), the sets of triples that are intersected have size O(bN_T), and the final result is size O(bN_E), giving a final size of O(bN_T + bN_E), with no dependence on N_R. Table 2 summarizes the complexity of these three mathematically equivalent but computationally different implementations. The analysis suggests that the reified KB is preferable if there are many relations, which is the case for most realistic KBs.

Distributing a large reified KB. The reified KB representation is quite compact, using only six integers and three floats for each KB triple. However, since GPU memory is often limited, it is important to be able to distribute a KB across multiple GPUs. Although to our knowledge prior implementations of distributed matrix operations (e.g., (Shazeer et al., 2018)) do not support sparse matrices, sparse-dense matrix multiplication can be distributed across multiple machines. We thus implemented a distributed sparse-matrix implementation of reified KBs. We distributed the matrices that define a reified KB "horizontally", so that different triple ids are stored on different GPUs. Details are provided in Appendix D.
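A hedged scipy sketch of Eq 4 follows (again a numpy/scipy stand-in for the paper's Tensorflow code; the function names are ours). It builds the three reified-KB matrices from integer triples and then applies Eq 4 directly:

import numpy as np
from scipy.sparse import coo_matrix

def build_reified_kb(triples, n_entities, n_relations):
    """Build M_subj, M_obj, M_rel from (subject, object, relation) triples."""
    n_t = len(triples)
    ell = np.arange(n_t)
    subj, obj, rel = map(np.array, zip(*triples))
    ones = np.ones(n_t, dtype=np.float32)
    M_subj = coo_matrix((ones, (ell, subj)), shape=(n_t, n_entities)).tocsr()
    M_obj = coo_matrix((ones, (ell, obj)), shape=(n_t, n_entities)).tocsr()
    M_rel = coo_matrix((ones, (ell, rel)), shape=(n_t, n_relations)).tocsr()
    return M_subj, M_obj, M_rel

def follow_reified(x, r, M_subj, M_obj, M_rel):
    """Eq 4: follow(x, r) = (x M_subj^T ⊙ r M_rel^T) M_obj."""
    triples_from_x = M_subj.dot(x)   # x M_subj^T: triples whose subject is in x
    triples_from_r = M_rel.dot(r)    # r M_rel^T: triples whose relation is in r
    selected = triples_from_x * triples_from_r   # Hadamard: intersect both sets
    return M_obj.T.dot(selected)     # map selected triples to object entities

Note there is no loop over relations anywhere, which is why the cost of Eq 4 is independent of N_R, matching the analysis in Table 2.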
EXPERIMENTS
SCALABILITY
Like prior work (De Raedt et al., 2007), we used a synthetic KB based on an n-by-n grid to study the scalability of inference. Every grid cell is an entity, related to its immediate neighbors via the relations north, south, east, and west. The KB for an n-by-n grid thus has O(n^2) entities and O(n^2) triples. We measured the time to compute the 2-hop inference follow(follow(x, r), r) for minibatches of b = 128 one-hot vectors, and report it as queries per second (qps) on a single GPU (e.g., qps = 1280 would mean a single minibatch requires 100 ms). We also compare to a key-value memory network (Miller et al., 2016), using an embedding size of 64 for entities and relations, where there is one memory entry for every triple in the KB. Further details are given in Appendix E.
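For reference, the grid KB itself is easy to materialize; the sketch below (our construction, with an arbitrary choice of row orientation) produces the triples consumed by the reified-KB sketch above, after which the benchmark's 2-hop query follow(follow(x, r), r) is just two calls to follow_reified with r one-hot on, say, north:

def grid_triples(n):
    """(subject, object, relation) triples for an n-by-n grid KB.

    Relations 0..3 stand for north, south, east, west; cell (i, j) has
    entity index i * n + j, with row 0 taken as the northernmost row.
    """
    triples = []
    for i in range(n):
        for j in range(n):
            cell = i * n + j
            if i > 0:
                triples.append((cell, (i - 1) * n + j, 0))  # north
            if i < n - 1:
                triples.append((cell, (i + 1) * n + j, 1))  # south
            if j < n - 1:
                triples.append((cell, i * n + j + 1, 2))    # east
            if j > 0:
                triples.append((cell, i * n + j - 1, 3))    # west
    return triples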
The results are shown in Figure 1 (left and middle), on a log-log scale because some differences are very large. With only four relations (the leftmost plot), late mixing is about 3x faster than the reified KB method, and about 250x faster than the naive approach. However, for more than around 20 relations, the reified KB is faster (middle plot). As shown in the rightmost plot, the reified KB is 50x faster than late mixing with 1000 relations, and nearly 12,000x faster than the naive approach.
With this embedding size, the speed of the key-value network is similar to the reified KB for only four relations, however it is about 7x slower for 50 relations and 10k entities. Additionally, the space needed to store a triple is much larger in a key-value network than the reified KB, so memory is exhausted when the KB exceeds 200,000 entities (with four relations), or when the KB exceeds 100 relations (with 10,000 entities.) The reified KB scales much better, and can handle 10x as many entities and 20x as many relations.
MODELS USING REIFIED KBS
As discussed below in Section 4, the reified KB is closely related to key-value memory networks, so it can be viewed as a more efficient implementation of existing neural modules, optimized for reasoning with symbolic KBs. However, being able to include an entire KB into a model can lead to a qualitative difference in model complexity, since it is not necessary to build machinery to retrieve from the KB. To illustrate this, below we present simple models for several tasks, each using the reified KB in different ways, as appropriate to the task. We consider two families of tasks: learning semantic parsers from denotations over a large KB, and learning to complete a KB.
KBQA for multi-hop questions. MetaQA (Zhang et al., 2018) consists of 1.2M questions, evenly distributed over one-hop, two-hop, and three-hop questions. (E.g., the question "who acted in a movie directed by Quentin Tarantino?" is a two-hop question.) The accompanying KB (Miller et al., 2016) contains 43k entities and 186k triples. Past work treated one-hop, two-hop and three-hop questions separately, and the questions are labeled with the entity ids for the "seed entities" that begin the reasoning chains (e.g., the question above would be tagged with the id of the entity for Quentin Tarantino).
Using a reified KB for reasoning means the neural model only needs to predict the relations used at each stage in the reasoning process. For each step of inference we thus compute a relation set r^t using a differentiable function of the question, and then chain the steps together with relation-set following. Letting x^0 be the set of entities associated with q, the model we use is:

for t = 1, 2, 3:   r^t = f_t(q);   x^t = follow(x^{t−1}, r^t)

where follow(x^{t−1}, r^t) is implemented with a reified KB as described in Eq 4.
To predict an answer on a T-hop subtask, we compute the softmax of the appropriate set x^T. We used the cross-entropy loss of this set against the desired answer, represented as a uniform distribution over entities in the target set. Each function f_t(q) is a different linear projection of a common encoding for q, specifically a mean-pooling of the tokens in q encoded with a pre-trained 128-dimensional word2vec model (Mikolov et al., 2013). The full KB was loaded into a single GPU in our experiments.
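Schematically (our numpy sketch, not the paper's Tensorflow model; the names are hypothetical, and normalizing the predicted relation set with a softmax is our simplification), the whole T-hop model is just a loop around the follow_reified sketch above:

import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def multi_hop_answer(q_enc, x0, W, kb, T=3):
    """r^t = softmax(W[t] q); x^t = follow(x^{t-1}, r^t); answer = softmax(x^T).

    q_enc: question encoding of size d; x0: seed-entity vector of size N_E;
    W: list of T projection matrices of shape (N_R, d);
    kb: the (M_subj, M_obj, M_rel) triple from build_reified_kb.
    """
    x = x0
    for t in range(T):
        r = softmax(W[t] @ q_enc)      # differentiable relation prediction
        x = follow_reified(x, r, *kb)  # relation-set following (Eq 4)
    return softmax(x)                  # distribution over answer entities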
It is interesting to contrast this simple model with the one proposed by Zhang et al. (2018). The "module for logic reasoning" they propose in Section 3.4 is fairly complex, with a description that requires a figure, three equations, and a page of text; furthermore, training this model requires constructing an example-dependent subgraph for each training instance. In our model, the "logic reasoning" (and all interaction with the KB) has been encapsulated completely in the follow(x, r) operation-which, as we will demonstrate below, can be re-used for many other problems. Encapsulating all KB reasoning with a single scalable differentiable neural module greatly simplifies modeling: in particular, the problem of learning a structured KB query has been reduced to learning a few differentiable functions of the question, one for each reasoning "hop". The learned functions are also interpretable: they are mixtures of relation identifiers which correspond to soft weighted sets of relations, which in turn softly specify which KB relation should be used in each stage of the reasoning process. Finally, optimization is simple, as the loss on predicted denotations can be back-propagated to the relation-prediction functions.
A similar modeling strategy is used in all the other models presented below.
KBQA on FreeBase. WebQuestionsSP (Yih et al., 2016) contains 4737 natural language questions, all of which are answerable using FreeBase (Bollacker et al., 2008), a large open-domain KB. Each question q is again labeled with the entities x that appear in it.
FreeBase contains two kinds of nodes: real-world entities, and compound value types (CVTs), which represent non-binary relationships or events (e.g., a movie release event, which includes a movie id, a date, and a place.) Real-world entity nodes can be related to each other or to a CVT node, but CVT nodes are never directly related to each other. In this dataset, all questions can be answered with 1-or 2-hop chains, and all 2-hop reasoning chains pass through a CVT entity; however, unlike MetaQA, the number of hops is not known. Our model thus derives from q three relation sets and then uniformly mixes both potential types of inferences:
r_{E→E} = f_{E→E}(q);   r_{E→CVT} = f_{E→CVT}(q);   r_{CVT→E} = f_{CVT→E}(q)
â = follow(follow(x, r_{E→CVT}), r_{CVT→E}) + follow(x, r_{E→E})
We again apply a softmax to â and use the cross-entropy loss, and f_{E→E}, f_{E→CVT}, and f_{CVT→E} are again linear projections of a word2vec encoding of q. We used a subset of Freebase with 43.7 million facts and 12.9 million entities, containing all facts in Freebase within 2 hops of entities mentioned in any question, excluding paths through some very common entities. We split the KB across three 12-Gb GPUs, and used a fourth GPU for the rest of the model.
This dataset is a good illustration of the scalability issues associated with prior approaches to including a KB in a model, such as key-value memory networks. A key-value network can be trained to implement something similar to relation-set following, if it stores all the KB triples in memory. If we assume 64-float embeddings for the 12.9M entities, the full KB of 43.7M facts would be 67Gb in size, which is impractical. Additionally, performing a softmax over the 43.7M keys would be prohibitively expensive, as shown by the experiments of Figure 1. This is the reason why, in standard practice with key-value memory networks for KBs, the memory is populated with a heuristically selected subset of the KB, rather than the full KB. We compare experimentally to this approach in Table 3.
Knowledge base completion. Following prior work, we treat KB completion as an inference task, analogous to KBQA: a query q is a relation name and a head entity x, and from this we predict a set of tail entities. We assume the answers are computed with the disjunction of multiple inference chains of varying length. Each inference chain has a maximum length of T and we build N distinct inference chains in total, using this model (where x_i^0 = x for every chain i):
for i = 1, …, N and t = 1, …, T:   r_i^t = f_i^t(q);   x_i^t = follow(x_i^{t−1}, r_i^t) + x_i^{t−1}
The final output is a softmax of the mix of all the x_i^T's: i.e., we let â = softmax(Σ_{i∈{1..N}} x_i^T). The update x_i^{t+1} = follow(x_i^t, r_i^t) + x_i^t gives the model access to the outputs of all chains of length less than t (for more intuition see Appendix E). The encoding of q is based on a lookup table, and each f_i^t is a learned linear transformation of q's embedding.
An encoder-decoder architecture for varying inferential structures. To explore performance on more complex reasoning tasks, we generated simple artificial natural-language sentences describing longer chains of relationships on a 10-by-10 grid. For this task we used an encoder-decoder model that emits chains of relation-set following operations. The question is encoded with the final hidden state of an LSTM, written here h^0. We then generate a reasoning chain of length up to T using a decoder LSTM. At iteration t, the decoder emits a scalar probability of "stopping", p^t, and a distribution over relations to follow, r^t, and then, as we did for the KBQA tasks, sets x^t = follow(x^{t−1}, r^t). Finally the decoder updates its hidden state to h^t using an LSTM cell that "reads" the "input" r^{t−1}. For each step t, the model thus contains the steps
p^t = f_p(h^{t−1});   r^t = f_r(h^{t−1});   x^t = follow(x^{t−1}, r^t);   h^t = LSTM(h^{t−1}, r^{t−1})
The final predicted location is a mixture of all the x^t's, weighted by the probability of stopping at iteration t, i.e.,

â = softmax(Σ_{t=1}^{T} x^t · p^t · Π_{t′<t} (1 − p^{t′}))

The function f_r is a softmax over a linear projection, and f_p is a logistic function. In the experiments, we trained on 360,000 sentences requiring between 1 and T hops, and tested on an additional 12,000 sentences.
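The stop-weighted mixture is simple to compute with a running "probability of not having stopped yet"; a small numpy sketch (ours, with hypothetical names) is:

import numpy as np

def mix_by_stop_probability(xs, ps):
    """â = softmax(sum_t x^t * p^t * prod_{t'<t} (1 - p^{t'})).

    xs: list of per-step entity vectors x^t; ps: list of stop probabilities p^t.
    """
    out = np.zeros_like(xs[0])
    not_stopped = 1.0
    for x_t, p_t in zip(xs, ps):
        out += x_t * p_t * not_stopped   # mass of stopping exactly at step t
        not_stopped *= (1.0 - p_t)
    e = np.exp(out - out.max())
    return e / e.sum()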
Experimental results. We next consider the performance of these models relative to strong baselines for each task. We emphasize our goal here is not to challenge the current state of the art on any particular benchmark, and clearly there are many ways the models of this paper could be improved. (For instance, our question encodings are based on word2vec, rather than contextual encodings (Devlin et al., 2018), and likewise relations are predicted with simple linear classifiers, rather than, say, attention queries over some semantically meaningful space, such as might be produced with language models or KB embedding approaches (Bordes et al., 2013)). Rather, our contribution is to present a generally useful scheme for including symbolic KB reasoning into a model, and we have thus focused on describing simple, easily understood models that do this for several tasks. However, it is important to confirm experimentally that the reified KB models "work"-e.g., that they are amenable to use of standard optimizers, etc.
Performance (using Hits@1) of our models on the KBQA tasks is shown in Table 3. For the non-synthetic tasks we also compare to a Key-Value Memory Network (KV-Mem) baseline (Miller et al., 2016). For the smaller MetaQA dataset, KV-Mem is initialized with all facts within 3 hops of the query entities, and for WebQuestionsSP it is initialized by a random-walk process seeded by the query entities (see (Sun et al., 2018; Zhang et al., 2018) for details). ReifKB consistently outperforms the baseline, dramatically so for longer reasoning chains. The synthetic grid task shows that there is very little degradation as chain length increases, with Hits@1 for 10 hops still 89.7%. It also illustrates the ability to predict entities in a KB, as well as relations.
We also compare these results to three much more complex architectures that perform end-to-end question answering in the same setting used here: VRN (Zhang et al., 2018), GRAFT-Net (Sun et al., 2018), and PullNet (Sun et al., 2019). All three systems build question-dependent subgraphs of the KB, and then use graph CNN-like methods (Kipf & Welling, 2016) to "reason" with these graphs.
Although not superior, the ReifKB model is competitive with these approaches, especially on the most difficult 3-hop setting.
A small extension to this model is to mask the seed entities out of the answers (see Appendix E). This model (denoted ReifKB + mask) has better performance than GRAFT-Net on 2-hop and 3-hop questions.
For KB completion, we evaluated the model on the NELL-995 dataset (Xiong et al., 2017) which is paired with a KB with 154k facts, 75k entities, and 200 relations. On the left of Table 4 we compare our model with three popular embedding approaches (results are from Das et al. (2017)). The reified KB model outperforms DistMult (Yang et al., 2014), is slightly worse than ConvE (Dettmers et al., 2018), and is comparable to ComplEx (Trouillon et al., 2017).
The competitive performance of the ReifKB model is perhaps surprising, since it has many fewer parameters than the baseline models: only one float and two integers per KB triple, plus a small number of parameters to define the f_i^t functions for each relation. The ability to use fewer parameters is directly related to the fact that our model directly uses inference on the existing symbolic KB, rather than having to learn embeddings that approximate this inference. Of course, since the KB is incomplete, some learning is still required, but learning is quite different: the system learns logical inference chains in the incomplete KB that approximate a target relation. In this setting for KBC, the ability to perform logical inference "out of the box" appears to be very advantageous.
Another relative disadvantage of KB embedding methods is that KB embeddings are generally transductive-they only make predictions for entities seen in training. As a non-transductive baseline, we also compared to the MINERVA model, which uses reinforcement learning (RL) methods to learn how to traverse a KB to find a desired answer. Although RL methods are less suitable as "neural modules", MINERVA is arguably a plausible competitor to end-to-end learning with a reified KB.
MINERVA slightly outperforms our simple KB completion model on the NELL-995 task. However, unlike our model, MINERVA is trained to find a single answer, rather than to infer a set of answers. To explore this difference, we compared to MINERVA on the grid task under two conditions: (1) the KB relations are the grid directions north, south, east and west, so the output of the target chain is always a single grid location, and (2) the KB relations also include a "vertical move" (north or south) and a "horizontal move" (east or west), so the result of the target chain can be a set of locations. As expected, MINERVA's performance drops dramatically in the second case, from 99.3% Hits@1 to 34.4%, while our model's performance is more robust. MetaQA answers can also be sets, so we also modified MetaQA so that MINERVA could be used (by making the non-entity part of the sentence the "relation" input and the seed entity the "start node" input) and noted a similarly poor performance for MINERVA. These results are shown on the right of Table 4.
In Table 5 we compare against PullNet (Sun et al., 2019), a system which uses a learned method to incrementally retrieve from the KB and is about 15 times slower than the reified KB system. GRAFT-Net is only slightly less accurate, but also only slightly faster: recall that GRAFT-Net uses a heuristically selected subset (of up to 500 triples) from the KB for each query, while our system uses the full KB. Here the full KB is about 400 times as large as the question-specific subset used by GRAFT-Net. A key-value memory baseline including the full KB is nearly three times as slow as our system, while also performing quite poorly.
RELATED WORK
The relation-set following operation using reified KBs is implemented in an open-source package called NQL, for neural query language. NQL implements a broader range of operations for manipulating KBs, which are described in a companion paper . This paper focuses on implementation and evaluation of the relation-set following operation with different KB representations, issues not covered in the companion paper.
TensorLog (Cohen et al., 2017) is a probabilistic logic which can also be compiled to Tensorflow, and hence is another differentiable approach to neuralizing a KB. TensorLog is also based on sparse matrices, but does not support relation sets, making it unnatural to express the models shown in this paper, and does not use the more efficient reified KB representation. The differentiable theorem prover (DTP) is another differentiable logic (Rocktäschel & Riedel, 2017), but DTP appears to be much less scalable: it has not been applied to KBs larger than a few thousand triples. The Neural ILP system (Yang et al., 2017) uses approaches related to late mixing together with an LSTM controller to perform KB completion and some simple QA tasks, but it is a monolithic architecture focused on rule-learning, while in contrast we propose a re-usable neural component, which can be used as a component in many different architectures, together with a scalable implementation of this component. It has also been reported that Neural ILP does not scale to the size of the NELL-995 task (Das et al., 2017).
The goals of this paper are related to KB embedding methods, but distinct. In KB embedding, models are generally fully differentiable, but it is not considered necessary (or even desirable) to accurately match the behavior of inference in the original KB. Being able to construct a learned approximation of a symbolic KB is undeniably useful in some contexts, but embedded KBs also have many disadvantages. In particular, they are much larger than a reified KB, with many more learned parameters-typically a long dense vector for every KB entity. Embedded models are typically evaluated by their ability to score a single triple accurately, and many models are not capable of executing multi-step KB inferences efficiently; further, models that do allow multi-step inference are known to produce cascaded errors on long reasoning chains (Guu et al., 2015; Hamilton et al., 2018). In contrast we focus on accurate models of reasoning in a symbolic KB, which requires consideration of novel scalability issues associated with sparse matrix representations.
Mathematically, our definition of relation-set following is much like the bilinear model for path following from Guu et al. (2015); however, we generalize this to path queries that include weighted sets of relations, allowing the relations in paths to be learned. Similar differences apply to the work of Hamilton et al. (2018), which extends the work of Guu et al. (2015) to include intersection operations. The vector representation used here for weighted sets in a reified KB makes intersection trivial to implement, as intersection corresponds to the Hadamard product. Conveniently, set union corresponds to vector sum, and the complement of X is 1 − x, which is perhaps why only a single additional neural operation is needed to support the KB reasoning tasks needed for the five benchmark tasks considered here.
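To make this correspondence concrete, the following is a minimal numpy sketch (ours, not from the NQL package) of the weighted-set algebra; the entity vectors and weights are hypothetical.

```python
# A minimal sketch (ours) of the weighted-set algebra described above:
# sets are non-negative vectors over entities, so intersection is the
# Hadamard product, union is the vector sum, and the complement of a
# {0,1}-valued set x is 1 - x.
import numpy as np

# Two hypothetical weighted sets over 5 entities.
x = np.array([1.0, 0.5, 0.0, 0.0, 1.0])
y = np.array([0.0, 1.0, 1.0, 0.0, 1.0])

intersection = x * y       # non-zero only where both sets have weight
union = x + y              # weights add where the sets overlap
complement_x = 1.0 - x     # exact complement for {0,1}-valued sets

print(intersection, union, complement_x, sep="\n")
```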
Neural architectures like memory networks (Weston et al., 2014), or other architectures that use attention over some data structure approximating assertions (Andreas et al., 2016; Gupta & Lewis, 2018), can be used to build soft versions of relation-set following; however, they also do not scale well to large KBs, so they are typically used either with a non-differentiable ad hoc retrieval mechanism, or else in cases where only a small amount of information is relevant to a question (Weston et al., 2015; Zhong et al., 2017). Similarly, graph CNNs (Kipf & Welling, 2016) can also be used for reasoning, and often do use sparse matrix multiplication, but again existing implementations have not been scaled to tens of millions of triples/edges or millions of entities/graph nodes. Additionally, while graph CNNs have been used for reasoning tasks, the formal connection between them and logical reasoning remains unclear, whereas there is a precise connection between relation-set following and inference.
Reinforcement learning (RL) methods have been used to learn mappings from natural-language questions to non-differentiable logical representations and have also been applied to KB completion tasks (Das et al., 2017;Xiong et al., 2017). Above we compared experimentally to MINERVA, one such method; however, the gradient-based approaches enabled by our methods are generally preferred as being easier to implement and tune on new problems, and easier to combine in a modular way with other architectural elements.
CONCLUSIONS
We introduced here a novel way of representing a symbolic knowledge base (KB) called a sparse-matrix reified KB. This representation enables neural modules that are fully differentiable, faithful to the original semantics of the KB, expressive enough to model multi-hop inferences, and scalable enough to use with realistically large KBs. In a reified KB, all KB relations are represented with three sparse matrices, which can be distributed across multiple GPUs, and symbolic reasoning on realistic KBs with many relations is much faster than with naive sparse-matrix implementations-more than four orders of magnitude faster in our synthetic-data experiments.
This new architectural component leads to radically simpler architectures for neural semantic parsing from denotations and KB completion-in particular, it makes it possible to learn neural KBQA models in a completely end-to-end way, mapping from text to KB entity sets, for KBs with tens of millions of triples and entities and hundreds of relations.
A ADDITIONAL BACKGROUND AND EXTENSIONS
KBs, entities, relations, and types. In the more general case, a KB consists of entities, relations, and types. Again we use x to denote an entity and r to denote a relation. We also assume each entity x has a type, written type(x), and let N_τ denote the number of entities of type τ. Each entity x of type τ has a unique index index_τ(x), which is an integer between 1 and N_τ. We write x_{τ,i} for the entity that has index i in type τ, or x_i if the type is clear from context.
Every relation r has a subject type τ_subj and an object type τ_obj, which constrain the types of x and x′ for any pair (x, x′) ∈ r. Hence r can be encoded as a subset of {1, . . . , N_{τ_subj}} × {1, . . . , N_{τ_obj}}.
Relations with the same subject and object types are called type-compatible.
Our differentiable operations are based on typed weighted sets, where again each element x of a weighted set X is associated with a non-negative real number, written ω|[x ∈ X]|, and we define ω|[x ∈ X]| ≡ 0 for all x ∉ X. A set X has a type type(X) = τ, and all members of X must be entities of type τ.
We also assume every relation r in a KB is associated with an entity x_r, and hence an index and a type. Sets of relations R are allowed only if all members are type-compatible. For example, R = {writer_of, director_of} might be a set of type-compatible relations.
A weighted set X of type τ can be encoded as an entity-set vector x ∈ R^{N_τ}, where the i-th component of x is the weight of the i-th entity of that type in the set X, i.e., x[index_τ(x)] = ω|[x ∈ X]|. We also use type(x) to denote the type τ of the set encoded by x.
A relation r with subject type τ_1 and object type τ_2 can be encoded as a relation matrix M_r ∈ R^{N_{τ_1} × N_{τ_2}}.

Background on sparse matrices. A COO encoding consists of an N_r × 2 matrix Ind_r containing pairs of entity indices, and a parallel vector w_r ∈ R^{N_r} containing the weights of the corresponding entity pairs. In this encoding, if (i, j) is row k of Ind_r, then M_r[i, j] = w_r[k], and if (i, j) does not appear in Ind_r, then M_r[i, j] is zero.
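As an illustration, here is a small sketch (ours) of the COO encoding using scipy.sparse; the indices and weights are hypothetical, not from any real KB.

```python
# A small sketch (ours) of the COO encoding described above.
import numpy as np
from scipy.sparse import coo_matrix

num_subj, num_obj = 4, 3

# Ind_r: an N_r x 2 matrix of (subject index, object index) pairs.
ind_r = np.array([[0, 1],
                  [2, 0],
                  [3, 2]])
# w_r: a parallel vector of weights (all 1.0 for a hard KB).
w_r = np.array([1.0, 1.0, 1.0])

m_r = coo_matrix((w_r, (ind_r[:, 0], ind_r[:, 1])), shape=(num_subj, num_obj))
print(m_r.toarray())  # M_r[i, j] = w_r[k] when (i, j) is row k of Ind_r
```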
Extension to soft KBs. In the paper, we assume the non-zero weights in a relation matrix M_r are all equal to 1.0. This can be relaxed: if assertions in a KB are associated with confidences, then this confidence can be stored in M_r. In this case, the reified KB must be extended to encode the weight for a triple: we find it convenient to redefine M_rel to hold that weight. In particular, if the weight for the ℓ-th triple r_k(x_i, x_j) is w_ℓ, then we let

M_rel[ℓ, m] ≡ w_ℓ if m = k, and M_rel[ℓ, m] ≡ 0 otherwise.

B PROOF OF CLAIM 1
Claim 1 The support of follow(x, r) is exactly the set of R-neighbors(X).
To better understand this claim, let z = follow(x, r). The claim states that z can approximate the R-neighborhood of any hard sets R, X by setting to zero the appropriate components of x and r. It is also clear that z[j] decreases when one decreases the weights in r of the relations that link x_j to entities in X, and likewise, z[j] decreases if one decreases the weights of the entities in X that are linked to x_j via relations in R, so there is a smooth, differentiable path to reach this approximation.
More formally, consider first a matrix M_r encoding a single binary relation r, and consider the vector x′ = x M_r. As weighted sets, X and r have non-negative entries, so clearly for all j,

x′[j] ≠ 0 iff ∃i : M_r[i, j] ≠ 0 ∧ x[i] ≠ 0 iff ∃x_i ∈ X so that (x_i, x_j) ∈ r

and so if r is a one-hot vector for the set {r}, then the support of follow(x, r) is exactly the set r-neighbors(X). Finally note that the mixture M_R has the property that M_R[i(e_1), i(e_2)] > 0 exactly when e_1 is related to e_2 by some relation r ∈ R.
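The following toy numpy check (ours) illustrates the claim: the support of x M_R is exactly the R-neighborhood of the support of x. The two relation matrices below are hypothetical.

```python
# A toy numerical check (ours) of Claim 1.
import numpy as np

m1 = np.zeros((4, 4)); m1[0, 1] = 1.0; m1[2, 3] = 1.0
m2 = np.zeros((4, 4)); m2[0, 2] = 1.0

x = np.array([1.0, 0.0, 0.0, 0.0])   # X = {x_0}
r = np.array([1.0, 0.5])             # R weights both relations

m_mix = r[0] * m1 + r[1] * m2        # the mixture M_R from Eq. 1
z = x @ m_mix
print(np.nonzero(z)[0])  # [1 2]: exactly the R-neighbors of {x_0}
```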
C MINIBATCHED COMPUTATIONS OF NAIVE AND LATE MIXING
The major problem with naive mixing is that, in the absence of general sparse tensor contractions, it is difficult to adapt to mini-batches-i.e., a setting in which x and r are replaced with matrices X and R with minibatch size b. An alternative strategy is late mixing, which mixes the output of many single-relation following steps, rather than mixing the KB itself:
follow(X, R) = Σ_{k=1}^{N_R} (R[:, k] · X M_k)
Here R[:, k], the k-th column of R, is "broadcast" to each element of the matrix X M_k. As noted in the body of the text, while there are N_R matrices X M_k, each of size O(bN_E), they need not all be stored at once, so the space complexity becomes O(bN_E + bN_R + N_T); however we must now sum up N_R dense matrices.
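A dense numpy sketch of this minibatched late-mixing loop is shown below (ours); a real implementation would keep each M_k sparse and materialize only one X M_k product at a time.

```python
# A dense sketch (ours) of minibatched late mixing.
import numpy as np

b, n_e, n_r = 2, 4, 3                        # batch, entities, relations
X = np.random.rand(b, n_e)                   # minibatch of entity sets
R = np.random.rand(b, n_r)                   # minibatch of relation sets
Ms = [np.random.rand(n_e, n_e) for _ in range(n_r)]  # one matrix per relation

out = np.zeros((b, n_e))
for k in range(n_r):
    # R[:, k] is broadcast over the rows of X @ Ms[k].
    out += R[:, k:k + 1] * (X @ Ms[k])
print(out.shape)  # (b, n_e)
```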
The implementation of relation-set following for the reified KB can be straightforwardly extended to a minibatch:
follow(X, R) = (X M_subj^T ⊙ R M_rel^T) M_obj,

where ⊙ denotes the elementwise (Hadamard) product.
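For concreteness, a dense toy version (ours) of this minibatched reified-KB computation is sketched below; a real reified KB stores M_subj, M_rel, and M_obj as sparse matrices of shapes (N_T, N_E), (N_T, N_R), and (N_T, N_E).

```python
# A dense toy sketch (ours) of minibatched relation-set following on a
# reified KB; the triples are hypothetical.
import numpy as np

n_t, n_e, n_r, b = 5, 4, 2, 3  # triples, entities, relations, batch size

triples = [(0, 0, 1), (1, 0, 2), (2, 1, 3), (3, 1, 0), (0, 1, 3)]
M_subj = np.zeros((n_t, n_e))
M_rel = np.zeros((n_t, n_r))
M_obj = np.zeros((n_t, n_e))
for ell, (s, rel, o) in enumerate(triples):
    M_subj[ell, s] = M_rel[ell, rel] = M_obj[ell, o] = 1.0

X = np.random.rand(b, n_e)   # minibatch of entity sets
R = np.random.rand(b, n_r)   # minibatch of relation sets

# follow(X, R) = (X M_subj^T (*) R M_rel^T) M_obj
out = ((X @ M_subj.T) * (R @ M_rel.T)) @ M_obj
print(out.shape)  # (b, n_e)
```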
D DISTRIBUTED MATRIX MULTIPLICATION

Matrix multiplication xM was distributed as follows: x can be split into a "horizontal stacking" of m submatrices, which we write as [x_1; . . . ; x_m], and M can be similarly partitioned into m² submatrices M_{i,j}. We then have the result that the j-th block of the product is Σ_{i=1}^m x_i M_{i,j}, i.e.,

xM = [x_1; x_2; . . . ; x_m] M = [(Σ_{i=1}^m x_i M_{i,1}); . . . ; (Σ_{i=1}^m x_i M_{i,m})]
This can be computed without storing either X or M on a single machine, and mathematically applies to both dense and sparse matrices. In our experiments we distributed the matrices that define a reified KB "horizontally", so that different triple ids are stored on different GPUs.
Specifically, we shard the "triple index" dimension N_T of the matrices M_subj, M_rel and M_obj in Eq. 4 to perform distributed relation-set following on the reified KB. Let M_{subj,i} be the i-th shard of the matrix M_subj, so that M_subj = [M_{subj,1}^T; . . . ; M_{subj,m}^T]^T ∈ R^{N_T × N_E}; M_obj and M_rel are represented in the same way. A distributed relation-set following is computed as a combination of relation-set following results on all shards of the KB:

follow(x, r) = (x M_subj^T ⊙ r M_rel^T) M_obj = ([x M_{subj,1}^T; . . . ; x M_{subj,m}^T] ⊙ [r M_{rel,1}^T; . . . ; r M_{rel,m}^T]) [M_{obj,1}; . . . ; M_{obj,m}]  (5)

= Σ_{i=1}^m (x M_{subj,i}^T ⊙ r M_{rel,i}^T) M_{obj,i}.  (6)
This method can be easily extended to a mini-batch of examples X.
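The following numpy sketch (ours) checks, on random dense matrices, that the sharded sum of Eq. 6 reproduces the unsharded computation of Eq. 5.

```python
# A sketch (ours) verifying that the sharded computation (Eq. 6) matches
# the unsharded computation (Eq. 5).
import numpy as np

rng = np.random.default_rng(0)
n_t, n_e, n_r, m = 6, 4, 2, 2          # triples, entities, relations, shards
M_subj = (rng.random((n_t, n_e)) < 0.4).astype(float)
M_rel = (rng.random((n_t, n_r)) < 0.5).astype(float)
M_obj = (rng.random((n_t, n_e)) < 0.4).astype(float)
x, r = rng.random(n_e), rng.random(n_r)

full = ((x @ M_subj.T) * (r @ M_rel.T)) @ M_obj      # Eq. 5

shard = n_t // m                                     # Eq. 6: sum over shards
sharded = sum(((x @ M_subj[i:i + shard].T) * (r @ M_rel[i:i + shard].T))
              @ M_obj[i:i + shard] for i in range(0, n_t, shard))
print(np.allclose(full, sharded))  # True
```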
E EXPERIMENTAL DETAILS
Reproducing experiments. To reproduce these experiments, first download and install the Google language package.⁵ Many of the experiments in this paper can be reproduced using scripts stored in subdirectories of the source directory language/nql/demos: for example, the scalability experiments of Figure 1 can be performed using scripts in language/nql/demos/gridworld_scaling/.
Grid experiments. In the grid experiments, the entity vector x is a randomly-chosen singleton set, and the relation vector r weights relations roughly uniformly-more specifically, each relation has weight 1 + ε, where ε is drawn uniformly at random between 0 and 0.001.⁶ We vary the number of relations by inventing m new relation names and assigning existing grid edges to each new relation. These experiments were conducted on a Titan Xp GPU with 12Gb of memory.
For key-value networks, the key is the concatenation of a relation and a subject entity, and the value is the object entity. We considered only the run-time for queries on an untrained randomly-initialized network (since run-time performance on a trained network would be the same); however, it should be noted that considerable time might be needed to train the key-value memory to approximate the KB. (In fact, it is not obvious under what conditions a KB can be approximated well by a key-value memory.)
We do not show results on the grid task for smaller minibatch sizes, but both reified and late mixing are about 40x slower with b = 1 than with b = 128.
WebQuestionsSP experiments. For efficiency, on this problem we exploit the type structure of the problem (see Appendix A). Our model uses two types of nodes: CVT nodes and entity nodes. The model also uses three types of relations: relations mapping entities to entities; relations mapping entities to CVT nodes; and relations mapping CVT nodes to entity nodes.
MetaQA experiments. An example of a 2-hop question in MetaQA could be "Who co-starred with Robert Downey Jr. in their movies?", and the answer would be a set of actor entities, e.g., "Chris Hemsworth", "Thomas Stanley", etc. Triples in the knowledge base are represented as (subject, relation, object) triples, e.g., ("Robert Downey Jr.", "act_in", "Avengers: Endgame"), ("Avengers: Endgame", "stars", "Thomas Stanley"), etc. The quoted strings here all indicate KB entities.
We also observed that in the MetaQA 2-hop and 3-hop questions, the questions often exclude the seed entities (e.g., "other movies with the same director as Pulp Fiction"). This can be modeled by masking out seed entities from the predictions after the second hop (ReifKB + mask in the table).
Timing on MetaQA and other natural problems. The raw data for the bubble plot of Table 5 appear in the left half of Table 5.

Discussion of the KB completion model. The KB completion model is, for i = 1, . . . , N and t = 1, . . . , T:

r_i^t = f_i^t(q); x_i^t = follow(x_i^{t−1}, r_i^t) + x_i^{t−1}

It may not be immediately obvious why we used

x_i^t = follow(x_i^{t−1}, r_i^t) + x_i^{t−1}

instead of the simpler

x_i^t = follow(x_i^{t−1}, r_i^t)
In the main text, we say that this "gives the model access to outputs of all chains of length less than t". This statement is probably easiest to understand by considering a concrete example. Let us simplify notation slightly by dropping the subscripts and writing follow(x^{t−1}, r^t) as f^t(x^{t−1}). Now expand the definition of x^t for a few small values of t, using the linearity of the definition of relation-set following where appropriate to simplify:
x^1 = f^1(x^0) + x^0

x^2 = f^2(x^1) + x^1 = f^2(f^1(x^0) + x^0) + (f^1(x^0) + x^0) = f^2(f^1(x^0)) + f^2(x^0) + f^1(x^0) + x^0

x^3 = f^3(x^2) + x^2 = f^3(f^2(f^1(x^0)) + f^2(x^0) + f^1(x^0) + x^0) + (f^2(f^1(x^0)) + f^2(x^0) + f^1(x^0) + x^0) = f^3(f^2(f^1(x^0))) + f^3(f^2(x^0)) + f^3(f^1(x^0)) + f^3(x^0) + f^2(f^1(x^0)) + f^2(x^0) + f^1(x^0) + x^0

A pattern is now clear: with this recursive definition, x^t expands to a mixture of many paths, each of which applies a different subset of f^1, . . . , f^t to the initial input x^0. Since the weights of the mixture can to a large extent be controlled by varying the norms of the relation vectors r^1, . . . , r^t, this "kernel-like trick" increases the expressive power of the model without introducing new parameters. The final mixture of the x^t's seems to provide a bias towards accepting the output of shorter paths, which appears to be useful in practice.
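A shape-level sketch (ours) of this recursive update is given below; follow is the dense toy version from above, and the relation vectors r^t stand in for the learned functions f^t(q).

```python
# A shape-level sketch (ours) of the recursive KB-completion update.
import numpy as np

def follow(x, r, M_subj, M_rel, M_obj):
    # follow(x, r) = (x M_subj^T * r M_rel^T) M_obj
    return ((x @ M_subj.T) * (r @ M_rel.T)) @ M_obj

def completion_chain(x0, relation_vectors, kb):
    # x^t = follow(x^{t-1}, r^t) + x^{t-1}: by the expansion above, the
    # result mixes the outputs of every chain of length <= T.
    x = x0
    for r_t in relation_vectors:
        x = follow(x, r_t, *kb) + x
    return x
```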
Figure 1: Left and middle: inference time in queries/sec on a synthetic KB as size and number of relations is varied. Queries/sec is given as zero when GPU memory of 12Gb is exceeded. Right: speedups of reified KBs over the baseline implementations.
Table 2: Complexity of implementations of relation-set following, where N_T is the number of KB triples, N_E the number of entities, N_R the number of relations, and b is the batch size.
Table 3: Hits@1 on the KBQA datasets. Results for KV-Mem and VRN on MetaQA are from (Zhang et al., 2018); results for GRAFT-Net, PullNet and KV-Mem on WebQSP are from (Sun et al., 2018; 2019).
Table 4: Left: Hits@1 and Hits@10 for KB completion on NELL-995. Starred KB completion methods are transductive, and do not generalize to entities not seen in training. Right: comparison to MINERVA on several tasks for Hits@1.

                 NELL-995    MetaQA-3hop    WebQuestionsSP
# Facts          154,213     196,453        43,724,175
# Entities       75,492      43,230         12,942,798
# Relations      200         9              616
Time (seconds)   44.3        72.6           1820

Table 5: Left, time to run 10k examples for KBs of different size. Right, time for 10k examples vs Hits@1 performance for ReifKB compared to three baselines on MetaQA-3hop questions.
At the time of this writing.
This is usually called second-order reasoning.
The larger benchmark datasets used in this paper have 200 and 616 relations respectively.
In the experiments we tune the hyperparameters T ∈ {1, . . . , 6} and N ∈ {1, 2, 3} on a dev set.
https://github.com/google-research/language.git
If the relation weights do not vary from trial to trial, some versions of Tensorflow will optimize computation by precomputing and caching the matrix M_R from Eq. 1, which speeds up the naive method considerably. Of course, this optimization is impossible when learning relation sets.
ACKNOWLEDGMENTS

The authors are grateful for comments and suggestions from Fernando Pereira, Bhuwan Dhingra, and many other colleagues on earlier versions of this work.
REFERENCES

Martín Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. TensorFlow: A system for large-scale machine learning. In 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265-283, 2016.

Jacob Andreas, Marcus Rohrbach, Trevor Darrell, and Dan Klein. Neural module networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 39-48, 2016.

Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1533-1544, 2013.

Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, pp. 1247-1250. ACM, 2008.

Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. Translating embeddings for modeling multi-relational data. In Advances in Neural Information Processing Systems, pp. 2787-2795, 2013.

William W. Cohen, Fan Yang, and Kathryn Rivard Mazaitis. TensorLog: Deep learning meets probabilistic DBs. arXiv preprint arXiv:1707.05390, 2017.

William W. Cohen, Matthew Siegler, and R. Alex Hofer. Neural query language: A knowledge base query language for Tensorflow. CoRR, abs/1905.06209, 2019. URL http://arxiv.org/abs/1905.06209.

Rajarshi Das, Shehzaad Dhuliawala, Manzil Zaheer, Luke Vilnis, Ishan Durugkar, Akshay Krishnamurthy, Alex Smola, and Andrew McCallum. Go for a walk and arrive at the answer: Reasoning over paths in knowledge bases using reinforcement learning. arXiv preprint arXiv:1711.05851, 2017.

Pradeep Dasigi, Matt Gardner, Shikhar Murty, Luke Zettlemoyer, and Eduard Hovy. Iterative search for weakly supervised semantic parsing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 2669-2680, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1273. URL https://www.aclweb.org/anthology/N19-1273.

Luc De Raedt, Angelika Kimmig, and Hannu Toivonen. ProbLog: A probabilistic Prolog and its application in link discovery. In IJCAI, volume 7, pp. 2462-2467. Hyderabad, 2007.

Tim Dettmers, Pasquale Minervini, Pontus Stenetorp, and Sebastian Riedel. Convolutional 2D knowledge graph embeddings. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.

Catherine Finegan-Dollak, Jonathan K. Kummerfeld, Li Zhang, Karthik Ramanathan, Sesh Sadasivam, Rui Zhang, and Dragomir Radev. Improving text-to-SQL evaluation methodology. arXiv preprint arXiv:1806.09029, 2018.

Nitish Gupta and Mike Lewis. Neural compositional denotational semantics for question answering. CoRR, abs/1808.09942, 2018. URL http://arxiv.org/abs/1808.09942.

Kelvin Guu, John Miller, and Percy Liang. Traversing knowledge graphs in vector space. arXiv preprint arXiv:1506.01094, 2015.

Will Hamilton, Payal Bajaj, Marinka Zitnik, Dan Jurafsky, and Jure Leskovec. Embedding logical queries on knowledge graphs. In Advances in Neural Information Processing Systems, pp. 2026-2037, 2018.

Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.

Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, and Ni Lao. Neural symbolic machines: Learning semantic parsers on Freebase with weak supervision. arXiv preprint arXiv:1611.00020, 2016.

Chen Liang, Mohammad Norouzi, Jonathan Berant, Quoc V. Le, and Ni Lao. Memory augmented policy optimization for program synthesis and semantic parsing. In Advances in Neural Information Processing Systems, pp. 9994-10006, 2018.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pp. 3111-3119, 2013.

Alexander H. Miller, Adam Fisch, Jesse Dodge, Amir-Hossein Karimi, Antoine Bordes, and Jason Weston. Key-value memory networks for directly reading documents. CoRR, abs/1606.03126, 2016. URL http://arxiv.org/abs/1606.03126.

Dipendra Misra, Ming-Wei Chang, Xiaodong He, and Wen-tau Yih. Policy shaping and generalized update equations for semantic parsing from denotations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2442-2452, 2018.

Panupong Pasupat and Percy Liang. Inferring logical forms from denotations. arXiv preprint arXiv:1606.06900, 2016.

Tim Rocktäschel and Sebastian Riedel. End-to-end differentiable proving. In Advances in Neural Information Processing Systems, pp. 3788-3800, 2017.

Peter Shaw, Philip Massey, Angelica Chen, Francesco Piccinno, and Yasemin Altun. Generating logical forms from graph representations of text and entities. arXiv preprint arXiv:1905.08407, 2019.

Noam Shazeer, Youlong Cheng, Niki Parmar, Dustin Tran, Ashish Vaswani, Penporn Koanantakool, Peter Hawkins, HyoukJoong Lee, Mingsheng Hong, Cliff Young, Ryan Sepassi, and Blake A. Hechtman. Mesh-TensorFlow: Deep learning for supercomputers. CoRR, abs/1811.02084, 2018. URL http://arxiv.org/abs/1811.02084.

Haitian Sun, Bhuwan Dhingra, Manzil Zaheer, Kathryn Mazaitis, Ruslan Salakhutdinov, and William W. Cohen. Open domain question answering using early fusion of knowledge bases and text. In EMNLP, 2018.

Haitian Sun, Tania Bedrax-Weiss, and William W. Cohen. PullNet: Open domain question answering with iterative retrieval on knowledge bases and text. arXiv preprint arXiv:1904.09537, 2019.

Théo Trouillon, Christopher R. Dance, Éric Gaussier, Johannes Welbl, Sebastian Riedel, and Guillaume Bouchard. Knowledge graph completion via complex tensor factorization. The Journal of Machine Learning Research, 18(1):4735-4772, 2017.

Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. arXiv preprint arXiv:1410.3916, 2014.

Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. Towards AI-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698, 2015.

Wenhan Xiong, Thien Hoang, and William Yang Wang. DeepPath: A reinforcement learning method for knowledge graph reasoning. arXiv preprint arXiv:1707.06690, 2017.

Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. Embedding entities and relations for learning and inference in knowledge bases. arXiv preprint arXiv:1412.6575, 2014.

Fan Yang, Zhilin Yang, and William W. Cohen. Differentiable learning of logical rules for knowledge base reasoning. In Advances in Neural Information Processing Systems, pp. 2319-2328, 2017.
Class-Imbalanced Semi-Supervised Learning
Minsung Hyun
Jisoo Jeong
Nojun Kwak
Semi-Supervised Learning (SSL) has achieved great success in overcoming the difficulties of labeling and making full use of unlabeled data. However, SSL has a limited assumption that the numbers of samples in different classes are balanced, and many SSL algorithms show lower performance for the datasets with the imbalanced class distribution. In this paper, we introduce a task of class-imbalanced semi-supervised learning (CISSL), which refers to semi-supervised learning with class-imbalanced data. In doing so, we consider class imbalance in both labeled and unlabeled sets. First, we analyze existing SSL methods in imbalanced environments and examine how the class imbalance affects SSL methods. Then we propose Suppressed Consistency Loss (SCL), a regularization method robust to class imbalance. Our method shows better performance than the conventional methods in the CISSL environment. In particular, the more severe the class imbalance and the smaller the size of the labeled data, the better our method performs.
Introduction
A large dataset with well-refined annotations is essential to the success of deep learning, and every time we encounter a new problem, we must annotate the whole dataset, which costs a lot of time and effort (Russakovsky et al., 2015; Bearman et al., 2016). To alleviate this annotation burden, many researchers have studied semi-supervised learning (SSL), which improves the performance of models by utilizing the information contained in unlabeled data (Chapelle et al., 2009; Verma et al., 2019; Berthelot et al., 2019).
However, SSL has a couple of main assumptions and shows excellent performance only in these limited settings. The first assumption is that unlabeled data is in-distribution, i.e., the class types of unlabeled data are the same as those of labeled data (Oliver et al., 2018). The second is the assumption of balanced class distribution, which assumes that each class has almost the same number of samples (Li et al., 2011; Stanescu & Caragea, 2014). In this paper, we performed a study dealing with the second assumption.

* Equal contribution. 1 Seoul National University, 2 SK hynix. {minsung.hyun|soo3553|nojunk}@snu.ac.kr. Correspondence to: Nojun Kwak <[email protected]>. Under review.
The class distribution of data, in reality, is not refined and is known to have long tails (Kendall et al., 1946). However, much research has developed models based on well-refined balanced data such as CIFAR (Krizhevsky et al., 2009), SVHN (Netzer et al., 2011), and ImageNet ILSVRC 2012 (Deng et al., 2009). Training a model with imbalanced datasets causes performance degradation. Class imbalanced learning (CIL) is a way to mitigate such class imbalance, and proposes various methods at the data level, the algorithm level, and their hybrids (Krawczyk, 2016; Johnson & Khoshgoftaar, 2019). However, to our best knowledge, studies on CIL have relied entirely on labeled datasets for training and have not considered the use of unlabeled data.
In this paper, we define a task, class-imbalanced semi-supervised learning (CISSL), and propose a suitable algorithm for it. By assuming class imbalance in both labeled and unlabeled data, CISSL relaxes the assumption of balanced class distribution in SSL. It can also be considered as a task of adding unlabeled data to CIL.
We analyzed the existing SSL methods in the CISSL setting through toy examples. First, we found that the class imbalance in CISSL disrupts the learning of the existing SSL methods based on the 'cluster assumption', which asserts that each class has its own cluster in the latent space (Chapelle et al., 2009). According to this assumption, the decision boundary traverses the low-density area of the latent space. With class imbalance, however, the decision boundary may be incorrectly formed and pass through the high-density area of the minor class, which results in degradation of the SSL methods.
In Fig.1b, 1f, we can see that each decision boundary is skewed toward the minority class in the Π model (Laine & Aila, 2016), a representative algorithm of consistency-regularization-based SSL, compared to that of supervised learning (Fig.1a, 1e).
Second, we found that Mean Teacher (MT) (Tarvainen & Valpola, 2017) is more robust than the Π model in CISSL settings. In Fig.1c, 1g, even though there is a class imbalance, MT maintains a relatively stable decision boundary. We show later that MT is more stable because it uses a conservative target for consistency regularization.
Based on these observations, we propose a regularization method using 'suppressed consistency loss' (SCL), for better performance in the CISSL settings. SCL prohibits the decision boundary in a minor class region from being smoothed too much in the wrong direction as shown in Fig.1d, 1h. In Section 4, we will discuss the role of SCL in more detail.
We also proposed standard experimental settings in the CISSL. We followed the SSL experiment settings, but to be more realistic, we considered class imbalance in both labeled and unlabeled data. In this setting, we compared existing SSL and CIL methods to ours and found that our method with SCL shows better performance than others. Furthermore, we applied SCL to the object detection problem and improved performance in the existing SSL algorithm for object detection.
Our main contributions can be summarized as follows:

• We defined a task of imbalanced semi-supervised learning, reflecting a more realistic situation, and suggested standard experimental settings.
• We analyzed how the existing SSL methods work in CISSL settings through mathematical and experimental results.
• We proposed Suppressed Consistency Loss that works robustly for problems with class imbalance, and experimentally show that our method improves performance.
Related Work
Semi-Supervised Learning
Semi-supervised learning is a learning method that tries to improve the performance of supervised learning, which is based only on labeled data (D_L), by additional usage of unlabeled data (D_U). SSL approaches include methods based on self-training and generative models (Lee, 2013; Zhai et al., 2019; Goodfellow et al., 2014; Radford et al., 2015; Dumoulin et al., 2016; Lecouat et al., 2018). In addition, consistency regularization has shown good performance in semi-supervised learning, pushing the decision boundary to low-density areas using unlabeled data (Bachman et al., 2014; Sajjadi et al., 2016; Laine & Aila, 2016; Verma et al., 2019). The objective function J is composed of a supervised loss, L_sup, for D_L and a consistency regularization loss, L_con, for D_U. As in typical semi-supervised learning methods (Laine & Aila, 2016; Oliver et al., 2018), a ramp-up scheduling function w(t) is used for stable training:
J = L_sup + w(t) · L_con  (1)

L_con(X) = d(f_θ(X + ε), f_{θ_tg}(X + ε′)),  (2)
where d is a distance metric such as the L_2 distance or KL-divergence, ε and ε′ are perturbations to the input data, and θ and θ_tg are the parameters of the model and the target model, respectively. For a C-class classification problem, f_θ(X) ∈ R_+^C is the output logit (class probability) for the input X. The Π model (Laine & Aila, 2016) and Mean Teacher (MT) (Tarvainen & Valpola, 2017) are the representative algorithms using consistency regularization. The Π model uses θ as θ_tg, and MT updates θ_tg with an EMA (exponential moving average) as follows:
θ_tg ← γ θ_tg + (1 − γ) θ.  (3)
From (3), MT can be considered as a temporal ensemble model in the parameter space.
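As a concrete illustration, a minimal PyTorch-style sketch (ours) of the EMA update in (3) is given below; the default decay γ = 0.99 is a typical choice, not a value prescribed by the text.

```python
# A minimal sketch (ours) of the EMA target update in Eq. (3).
import torch

@torch.no_grad()
def update_target(target_model, model, gamma=0.99):
    # theta_tg <- gamma * theta_tg + (1 - gamma) * theta
    for p_tg, p in zip(target_model.parameters(), model.parameters()):
        p_tg.mul_(gamma).add_(p, alpha=1.0 - gamma)
```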
Beyond these, there are methods that optimize the direction of the perturbation (Miyato et al., 2018), regularize through graphs of minibatch samples (Luo et al., 2018), and perturb inputs with mixup (Zhang et al., 2017; Verma et al., 2019). In addition, consistency-based semi-supervised learning for object detection (CSD) is an algorithm that applies SSL to object detection by devising classification and localization consistency losses (Jeong et al., 2019).
Class Imbalanced Learning
Class imbalanced learning is a way to alleviate the performance degradation due to class imbalance. Buda et al. (2018) defined the class imbalance factor ρ as the ratio between the numbers of samples of the most frequent and the least frequent classes. We call these classes the major class and the minor class, respectively.
So far, there have been various studies to solve class imbalance problems (Johnson & Khoshgoftaar, 2019). Data-level methods approach the problem by over-sampling minor classes or under-sampling major classes (Masko & Hensman, 2015; Lee et al., 2016; Pouyanfar et al., 2018; Buda et al., 2018). These methods take a long time in model training due to re-sampling. Algorithm-level methods re-weight the loss or propose a new loss without touching the sampling scheme (Wang et al., 2016; Lin et al., 2017; Wang et al., 2018; Khan et al., 2017; Zhang et al., 2016; Wang et al., 2017; Cui et al., 2019; Cao et al., 2019). Algorithm-level methods can be easily applied without affecting training time. There are also hybrids of both methods (Huang et al., 2016; Ando & Huang, 2017; Dong et al., 2019).
In this paper, we applied three algorithm-level methods to the CISSL environment and compared their performance to the cross-entropy loss (CE): (i) normalized inverse weights, which weight the loss inversely proportional to the class frequency (IN) (Cao et al., 2019); (ii) focal loss, which modulates the loss by putting smaller weights on samples that are easy for the model to classify (Lin et al., 2017); and (iii) class-balanced loss, which re-weights the loss in inverse proportion to the effective number of samples (CB) (Cui et al., 2019).
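For reference, a brief sketch (ours) of the per-class weights behind the IN and CB baselines is given below; num_samples (the labeled-set frequency of each class) and beta are hypothetical inputs, with the CB weights following the effective-number formula of Cui et al. (2019).

```python
# A sketch (ours) of the per-class loss weights used by the IN and CB
# baselines.
import numpy as np

def inverse_weights(num_samples):
    n = np.asarray(num_samples, dtype=float)
    w = 1.0 / n                                  # inverse class frequency
    return w / w.sum() * len(n)                  # normalized (IN)

def class_balanced_weights(num_samples, beta=0.999):
    # Cui et al. (2019): weight by the inverse of the effective number
    # of samples, (1 - beta^n) / (1 - beta).
    n = np.asarray(num_samples, dtype=float)
    w = (1.0 - beta) / (1.0 - np.power(beta, n))
    return w / w.sum() * len(n)

print(inverse_weights([5000, 500, 50]))
print(class_balanced_weights([5000, 500, 50]))
```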
Analysis of SSL under Class Imbalance
In this section, we look into the topography of the decision boundary to see how the SSL algorithms work in the class-imbalanced environment. First, we compare supervised learning and the SSL methods on toy examples, and then analyze the difference between the Π model and MT.
Toy examples
We trained each algorithm for 5,000 iterations on the two moons and four spins datasets with an imbalance factor of 5 for both the labeled and unlabeled data.¹ Fig.1 represents the probability of the class with the highest confidence at each location. The region with relatively low probability, closer to the dark red color, is the decision boundary in the figure.
In Fig.1a, 1e, the decision boundary of supervised learning is very steep, and there are very high-confidence areas far away from the decision boundary. With the SSL methods, unlabeled data smooth the decision boundary through consistency regularization (Chapelle et al., 2009). In particular, the decision boundary smoothing is larger in the minor class area. Also, we found that the learning patterns of the Π model and MT are different. Table.1 shows the validation error rates for the toy examples. We found that performance degradation is evident in the minor class. MT shows relatively better performance than the Π model, although it shows inferior performance to supervised learning on two moons. Our method, which applies SCL to MT, achieves the best performance on both the two moons and four spins datasets.
Π Model vs. Mean Teacher
We analyze the results of Section.3.1 in this part. When consistency regularization is applied to supervised learning in Fig.1a, 1e, the influence of the samples around the decision boundary is considerable compared to the samples far away from the boundary, because, from (2), the model output does not change even if a small perturbation is added to the model input in the region far from the decision boundary. As a result, consistency regularization smooths the decision boundary, as shown in Fig.1b, 1f.
According to the cluster assumption (Chapelle et al., 2009), the decision boundary lies in the low-density area and far from the high-density area.¹ However, in a problem with severe class imbalance, the decision boundary may penetrate a globally sparse but relatively high-density area of a minor class, as shown in the blue square in Fig.2. By consistency regularization, decision boundary smoothing occurs in this area, and many samples in the minor class are misclassified.

¹ The number of samples of class c is set to N_c = N_max × ρ^{−(R_c−1)/(C−1)}. The rank, R_c, of the major class is 1; ρ and C are the imbalance ratio and the number of classes.
Therefore, conventional consistency-regularization-based methods are generally expected to degrade the performance for the minor class. But we found that the severity of this phenomenon differs depending on the SSL algorithm. In Table.1, MT consistently performed better than the Π model, especially for the minor class.
First, we analyzed the behavior of MT in CISSL with the simple SGD optimizer. Consider the model parameter θ, the learning rate α, and the objective function J; then the update rule of the SGD optimizer is:
θ ← θ − α ∇J(θ).  (4)
For an EMA decay factor of MT, γ ∈ (0, 1], the current (θ) and the target (θ′) model parameters at the t-th iteration are

θ_t = θ_0 − α Σ_{k=0}^{t−1} ∇J(θ_k),  (5)

θ′_t = θ_0 − α Σ_{k=0}^{t−1} (1 − γ^{t−k−1}) ∇J(θ_k).  (6)
Comparing (5) and (6), we can see that θ′, the target for the consistency loss in MT, is updated more slowly than the model parameter θ because of the use of the EMA decay factor γ. On the other hand, in the Π model, because θ′ = θ, the target is updated faster than that of MT. As described in the supplementary, we can get the same result of a slow target update in MT for the SGD-with-momentum case that we used for our experiments. Now we will check why MT performs better than the Π model in the CISSL environment. Assume θ_Π and θ_MT initially have the same value θ. In this case, the consistency losses of the Π model and MT are
Π model: L_con^Π(θ) = d(f_θ(X + ε), f_{θ′=θ}(X + ε′))
MT: L_con^MT(θ) = d(f_θ(X + ε), f_{θ′}(X + ε′)).  (7)
If we use the L_2 distance for d for simplicity, their derivatives become
∇_θ L_con^Π = ∇_θ (1/2)[f_θ(X + ε) − f_θ(X + ε′)]² = [f_θ(X + ε) − f_θ(X + ε′)] ∇_θ f_θ(X + ε),  (8)

∇_θ L_con^MT = ∇_θ (1/2)[f_θ(X + ε) − f_{θ′}(X + ε′)]² = [f_θ(X + ε) − f_{θ′}(X + ε′)] ∇_θ f_θ(X + ε).  (9)
Note that the target parameters (θ′) in (7) are not included in the gradient calculation. Using the Taylor series expansion f_{θ′}(X + ε′) ≈ f_θ(X + ε′) + (θ′ − θ)^T ∇_θ f_θ(X + ε′) and subtracting (8) from (9), we obtain

∇_θ L_con^MT − ∇_θ L_con^Π = ∇_θ f_θ(X + ε) (θ − θ′)^T ∇_θ f_θ(X + ε′) ≈ ∇_θ f_θ(X) ∇_θ f_θ(X)^T (θ − θ′).  (10)
In the last line of (10), we assumed the gradients to be constant in a small area around X. When the sample X is far away from the decision boundary, ∇_θ f_θ(X) ≈ 0 and MT and the Π model behave the same, but in the area near the decision boundary, ||∇_θ f_θ(X)|| ≫ 0, and in the gradient descent step, compared to the Π model, the negative gradient of MT (−∇_θ J in (1)) prohibits θ from moving away from the target θ′. In the CISSL environment, while the Π model pushes the boundary towards the minor class, MT mitigates this by retaining the old target boundary, like ensemble models.
In summary, the performance difference between the Π model and MT in CISSL is due to the different targets of consistency regularization. The Π model uses the current model (θ) as a target. Therefore, the model smooths the decision boundary regardless of whether it passes through the high-density area of the minor class. Because the target is the same as the parameters, smoothing causes model degradation as the parameter update is repeated. MT, on the other hand, targets a more conservative model (θ′) than the current model. Note that since the target of MT is different from the current model, even if we reduce the learning rate of the Π model, it would still work differently from MT. The conservative target has an ensemble effect with consistency regularization, so smoothing does not cause severe performance degradation.
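Schematically, the two losses differ only in their targets, as in the following PyTorch sketch (ours); model and ema_model are assumed to share the same architecture.

```python
# A schematic sketch (ours) of the two consistency losses: the Pi model
# targets the current model itself (gradient stopped), while MT targets
# the slower EMA copy of the model.
import torch
import torch.nn.functional as F

def pi_consistency(model, x1, x2):
    # target = current parameters theta, detached as in Eq. (8)
    return F.mse_loss(model(x1), model(x2).detach())

def mt_consistency(model, ema_model, x1, x2):
    # target = conservative EMA parameters theta', as in Eq. (9)
    with torch.no_grad():
        target = ema_model(x2)
    return F.mse_loss(model(x1), target)
```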
Besides, we can explain the reason why MT performs better than the Π model in terms of batch sampling. In the minibatch, minor class samples are sampled at a relatively low frequency. For this reason, the Π model frequently updates the model without a minor sample during the consistency regularization, which distorts the decision boundary. On the other hand, since the target of MT is calculated by EMA, even if there is no minor class sample in the mini-batch, it includes more information about the minor class samples. Thus, we can say that MT learns with a more stable target than the Π model.
Suppressed Consistency Loss
In Section 3, we found that the main performance degradation of SSL models in CISSL is due to consistency regularization in minor classes. With the intuition that we should suppress the consistency regularization of minor classes in CISSL, we propose a new loss term, suppressed consistency loss (SCL), as follows:
L_SCL(X_i) = g(N_c) · L_con(X_i), where c = argmax(f_θ(X_i)).  (11)
Here, g(z) can be any function inversely proportional to z and we set it as
g(z) = β^{1 − z/N_max},  (12)
where β ∈ (0, 1]. N_c is the number of training samples of the class predicted by the model, and N_max is the number of samples of the most frequent class. SCL weights the consistency loss in exponentially inverse proportion to the number of samples in a class. In (11), g(N_c) is 1 for the most frequent class, where it works the same as the conventional consistency loss. For the least frequent class, the influence of the consistency loss is suppressed. In (12), the exponential decay is to accommodate very high imbalance factors in our model; however, when the imbalance factor is not so high, a simple linear decay can also be used.

Fig.2 illustrates the effect of consistency regularization by SCL. When training with SCL, the decision boundary is smoothed weakly for the minor class and strongly for the major class. If the performance of the model is inaccurate, especially for the minor class, the boundary may pass through the high-density area. Then SCL limits the smoothing of the decision boundary towards the minor class cluster. On the other hand, when the model mispredicts actual minor class samples as a major class in the high-density area of the minor class, the decision boundary is smoothed with higher weight. Consequently, SCL pushes the decision boundary to low-density areas of the minor class and prevents performance degradation, as shown in Fig.2.
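A minimal PyTorch sketch (ours) of Eqs. (11)-(12) is given below; consistency is assumed to be a per-sample (unreduced) consistency loss, and the value of β is a hyperparameter, not one prescribed by the text.

```python
# A minimal sketch (ours) of Suppressed Consistency Loss, Eqs. (11)-(12).
import torch

def suppressed_consistency_loss(logits, consistency, num_samples, beta=0.5):
    # num_samples holds N_c for every class; consistency has shape (batch,).
    n = torch.as_tensor(num_samples, dtype=torch.float, device=logits.device)
    g = beta ** (1.0 - n / n.max())      # g(N_c) = beta^(1 - N_c / N_max)
    pred = logits.argmax(dim=1)          # c = argmax f_theta(X_i)
    return (g[pred] * consistency).mean()
```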
Experiments
Datasets and implementation details
We conducted experiments using the CIFAR10 (Krizhevsky et al., 2009) and SVHN (Netzer et al., 2011) datasets in our proposed environments, following the common practice in SSL and CIL (Oliver et al., 2018; Johnson & Khoshgoftaar, 2019). We divided the training dataset into three parts: a labeled set, an unlabeled set, and a validation set. The labeled data is configured to have an imbalance for each class according to the CIL environment. We experimented with various numbers of labeled samples and imbalance factors. We considered three types of class imbalance in the unlabeled data: Same (ρ_u = ρ_l, where ρ_l and ρ_u are the imbalance factors for the labeled and unlabeled sets), Uniform (uniform distribution, ρ_u = 1), and Half (ρ_u = ρ_l/2). The size of the unlabeled dataset changes depending on the unlabeled data imbalance type because of the limitation of the dataset used. For fair experiments, we set the size of the unlabeled set based on the Same case, which uses the lowest number of unlabeled samples. Fig.3 shows the three imbalance types with imbalance factor 10. The validation data is made up as in (Oliver et al., 2018).
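For illustration, the per-class counts for the labeled set and the three unlabeled imbalance types can be generated as in the following sketch (ours), reusing the footnote formula N_c = N_max × ρ^{−(R_c−1)/(C−1)}; the N_max values are hypothetical, and whether the image experiments use exactly this formula is our assumption.

```python
# A sketch (ours) of per-class sample counts under the three unlabeled
# imbalance types (Same, Half, Uniform).
import numpy as np

def class_counts(n_max, rho, num_classes):
    ranks = np.arange(1, num_classes + 1)    # R_c = 1 for the major class
    return np.round(
        n_max * rho ** (-(ranks - 1) / (num_classes - 1))).astype(int)

labeled = class_counts(n_max=500, rho=100, num_classes=10)   # rho_l = 100
same = class_counts(n_max=4000, rho=100, num_classes=10)     # rho_u = rho_l
half = class_counts(n_max=4000, rho=50, num_classes=10)      # rho_u = rho_l / 2
uniform = np.full(10, 4000)                                  # rho_u = 1
print(labeled, same, half, uniform, sep="\n")
```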
In all experiments, we used a third-party implementation.³ All the scores of test error rates are from five independent runs with different random seeds. Experiments with different random seeds shuffle the frequency ranking of each class when the imbalance factor is constant, and thus cover a variety of cases.
Baselines to CISSL
We conducted experiments on how existing methods in the fields of SSL and CIL perform in our defined CISSL environment, and used them as the baselines for our research. We experimented with 4k and 1k labeled samples for CIFAR10 and SVHN, respectively, both with imbalance factor 100.
A. Comparison of Semi-supervised Learning Methods
The columns with imbalance factor 100 in Table.2a show the results of applying the SSL methods to the CISSL problem on CIFAR10. Except for MT, almost all SSL methods are inferior to supervised learning. Even if the unlabeled data imbalance is mitigated to the Uniform case, there is no improvement in the performance of the SSL methods except MT.
The columns with imbalance factor 100 in Table.2b show the same experiment for SVHN. Most SSL methods perform better when the unlabeled data imbalance is lower, i.e., in the Uniform case rather than the Same case. Notably, ICT showed a performance degradation of over 21%p compared to supervised learning, and SNTG even failed to train a model.

³ https://github.com/perrying/realistic-ssl-evaluation-pytorch
From these experimental results and the analysis in Section.3, we used MT as our baseline, which performed best in all experiments.
B. Comparison of Class Imbalanced Learning Methods
We carried out ablation experiments comparing the cross-entropy loss (CE) with three types of CIL methods: Inverse and Normalization (IN), Focal loss, and Class-Balanced (CB) loss. We applied these CIL methods only to the supervised loss, L_sup in (1), and did not apply them to the unlabeled data because we do not know the class labels of the unlabeled data. In this experiment, we ignored ICT because CIL methods cannot be applied to ICT, which uses a mixup supervised loss. Since we do not know the class imbalance beforehand, choosing a specific CIL algorithm does not guarantee a performance boost, so we used the most common cross-entropy as our baseline. In addition, SNTG failed to learn, as in Table.2b.
Unlabeled data Imbalance
A. Comparison of Imbalance Factor
We experimented with changing the imbalance factor while keeping the number of labeled samples fixed. We experimented on CIFAR-10 and SVHN with imbalance factors ρ_l ∈ {10, 20, 50, 100}. The results are shown in Tables 2a and 2b, respectively.
In Table 2a, the higher the imbalance factor, the lower the performance. Supervised learning with imbalance factor 100 achieves a 36.71% error rate, which is 13%p higher than supervised learning with imbalance factor 10. In the case of a small imbalance factor, SSL algorithms generally improve performance even though the unlabeled data has the same imbalance as the labeled data. As the imbalance factor increases, on the other hand, some SSL algorithms show lower performance than supervised learning. Mean Teacher is the only SSL algorithm that improves the performance with imbalance factor 100 in the Same case. This means that general SSL algorithms do not account for the imbalance of the unlabeled data. However, the proposed SCL has robustly improved the performance in various imbalance settings. Notably, it shows remarkable improvement in the Uniform case compared to the other SSL algorithms.
B. Comparison of The Number of Labeled Samples
We experimented with keeping the imbalance factor while changing the number of labeled samples. We set the number of labeled data to {1k, 2k, 4k} in CIFAR10, and {250, 500, 1k} in SVHN. The results of CIFAR10 and SVHN are shown in Table 4a, 4b, respectively.
In Table 4a, the smaller the size of the labeled set, the lower the performance. In particular, when the size of the labeled data is 1k, most of the algorithms are weaker than supervised learning, while our method improves performance. This result indicates that consistency regularization is not valid when the baseline classifier is not performing well. Table 4b also shows a similar tendency between the size of the labeled data and performance. As in Section 5.3.A, SNTG and ICT again show lower performance than supervised learning.
Object detection
We followed the CSD (Jeong et al., 2019) experiment settings and used the SSD300 model (Liu et al., 2016). We used the PASCAL VOC2007 trainval dataset as the labeled data and the PASCAL VOC2012 trainval dataset as the unlabeled data. We evaluated on the PASCAL VOC2007 test dataset. In this experiment, the imbalance factor of the labeled dataset is about 20. We applied our algorithm only to the classification consistency loss of CSD. The details are in the supplementary material.
In Table 5, supervised learning using VOC2007 shows 70.2 mAP. CSD with only the classification consistency loss is 1.5%p higher than the supervised baseline, and full CSD shows a 2.1%p enhancement. When SCL is applied to CSD, our method shows an additional improvement.
Discussion
The reason why the existing SSL methods did not perform well in the CISSL environment is that they do not consider data imbalance. This fact gives us some implications. First, for deep learning to become a practical application, we need to work on harsher benchmarks. We experimented on datasets which relax the equal class distribution assumption of SSL, and our method yielded meaningful results. Second, we should avoid developing domain-specific algorithms which work very well only under certain conditions. SNTG (Luo et al., 2018) and ICT (Verma et al., 2019) are very good algorithms for existing SSL settings. In our experiments, however, both algorithms were not robust against class imbalance. Finally, we need to focus not only on the performance improvement of a model but also on its causes. An in-depth analysis of the causes of the phenomena provides an intuition about the direction of future research. Concerning this, we discussed aspects of learning in the CISSL environment in Section 3.
Conclusion
In this paper, we proposed Class-Imbalanced Semi-Supervised Learning, which goes one step beyond the limitations of SSL. We theoretically analyzed how the existing SSL methods work in CISSL. Based on the intuition obtained here, we proposed the Suppressed Consistency Loss, which works robustly in CISSL. Our experiments show that our method works well in the CISSL environment compared to the existing SSL and CIL methods, and also demonstrate its feasibility for object detection. However, our research has focused on relatively small datasets. Applying CISSL to more massive datasets would be future work.
Supplementary Materials
A. Toy Examples Details
We generated the two moons and four spins datasets. We split the train set into labeled data and unlabeled data with imbalance factor 5. The class distribution of the unlabeled data follows the Same case. The size of the labeled data is 12 ({2, 10} samples per class) in two moons and 11 ({1, 2, 3, 5} samples per class) in four spins. The size of the unlabeled data is 3000 in two moons and 2658 in four spins. Both datasets have 6,000 validation samples. We trained each algorithm for 5,000 iterations. The model is a 3-layer network; the optimizer is SGD with momentum, the learning rate is 0.1 and is decayed by a factor of 0.2 at 4,000 iterations, and the momentum is 0.9.
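For reproducibility, a minimal sketch of the imbalanced two-moons split described above follows; the noise level and the use of scikit-learn's make_moons are our own illustrative assumptions, while the split sizes match the description above (12 labeled samples, 3000 unlabeled samples in the Same case with imbalance factor 5).

import numpy as np
from sklearn.datasets import make_moons

def imbalanced_two_moons(n_labeled=(10, 2), n_unlabeled=(2500, 500),
                         noise=0.1, seed=0):
    # Draw a labeled set of size 12 ({10, 2} per class) and an unlabeled
    # pool of size 3000 with the same 5:1 imbalance (Same case).
    rng = np.random.RandomState(seed)
    need = max(l + u for l, u in zip(n_labeled, n_unlabeled))
    X, y = make_moons(n_samples=2 * need, noise=noise, random_state=seed)
    lab_idx, unl_idx = [], []
    for c, (n_l, n_u) in enumerate(zip(n_labeled, n_unlabeled)):
        idx_c = rng.permutation(np.flatnonzero(y == c))
        lab_idx.extend(idx_c[:n_l])
        unl_idx.extend(idx_c[n_l:n_l + n_u])
    return X[lab_idx], y[lab_idx], X[unl_idx]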
In the experiment, we set the function g(z) of the suppressed consistency loss to g(z) = z/N_max for simplicity, where z = N_c is the number of training samples of the class predicted by the model and N_max is the number of samples of the most frequent class.
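A minimal PyTorch-style sketch of how this choice of g enters the unsupervised objective is shown below; the mean-squared consistency and the use of the teacher's prediction to select the class are our reading of the setup, and the coupling with the consistency ramp-up schedule is omitted.

import torch

def suppressed_consistency_loss(student_logits, teacher_logits, class_counts):
    # Per-sample consistency between student and teacher predictions,
    # scaled by g(N_c) = N_c / N_max evaluated at the predicted class c,
    # so that consistency is suppressed for rare (minority) classes.
    p_s = torch.softmax(student_logits, dim=1)
    p_t = torch.softmax(teacher_logits, dim=1).detach()
    pred = p_t.argmax(dim=1)
    g = class_counts[pred].float() / class_counts.max().float()
    per_sample = ((p_s - p_t) ** 2).sum(dim=1)
    return (g * per_sample).mean()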
B. Π model vs. Mean Teacher Details
B.1. SGD Case
Consider the model parameter θ, the learning rate α, and the objective function J. Then the update rule of the SGD optimizer is:
θ ← θ − α∇J(θ).    (13)
For an EMA decay factor γ of MT, the current (θ) and the target (θ′) model parameters at the t-th iteration are
θ_t = θ_{t−1} − α∇J(θ_{t−1}) = … = θ_0 − α Σ_{k=0}^{t−1} ∇J(θ_k),
θ′_t = γθ′_{t−1} + (1 − γ)θ_t = … = θ_0 − α Σ_{k=0}^{t−1} (1 − γ^{t−k−1}) ∇J(θ_k).    (14)
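In code, the EMA target update analyzed above is a single in-place operation per parameter; a minimal PyTorch-style sketch with γ = 0.95, the value from Table 7:

import torch

@torch.no_grad()
def update_ema_teacher(student, teacher, gamma=0.95):
    # theta'_t = gamma * theta'_{t-1} + (1 - gamma) * theta_t
    for p_teacher, p_student in zip(teacher.parameters(), student.parameters()):
        p_teacher.mul_(gamma).add_(p_student, alpha=1.0 - gamma)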
B.2. SGD with Momentum Case
Here, v is the momentum buffer of the SGD optimizer, δ ∈ (0, 1] is the decay factor of the momentum, and the other parameters are the same as in Section B.1.
v ← δv + α∇J(θ_{t−1}),    (15)
θ ← θ − v.    (16)
The current model parameter (θ) at the t-th iteration is
θ_t = θ_{t−1} − (δv_{t−1} + α∇J(θ_{t−1})) = … = θ_0 − α Σ_{k=0}^{t−1} (1 − δ^{t−k})/(1 − δ) ∇J(θ_k).    (17)
And then the target model parameter (θ′) at the t-th iteration is
θ′_t = γθ′_{t−1} + (1 − γ)θ_t
     = … = θ_0 − α(1 − γ) Σ_{j=1}^{t} γ^{t−j} { Σ_{k=0}^{j−1} (1 − δ^{j−k})/(1 − δ) ∇J(θ_k) }
     = θ_0 − α(1 − γ) Σ_{k=0}^{t−1} Σ_{j=0}^{t−k−1} γ^{t−k−j−1} (1 − δ^{j+1})/(1 − δ) ∇J(θ_k).    (18)
The difference of the coefficients of ∇J(θ_k) in (17) and (18) for each k is

(1 − δ^{t−k})/(1 − δ) − Σ_{j=0}^{t−k−1} (1 − γ) γ^{t−k−j−1} (1 − δ^{j+1})/(1 − δ) ≥ 0.    (19)

Fig. 4 plots the difference of the first term (from the current model θ) and the second term (from the target model θ′) in (19) when δ is 0.9 and γ is 0.95, the same values as in our experiments. We can see that the difference between the two terms is always greater than or equal to 0. Therefore, θ′ is a more conservative target than θ for SGD with momentum as well.
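The inequality (19) is also easy to verify numerically; the short script below evaluates both coefficient expressions for δ = 0.9 and γ = 0.95 and checks that their difference is non-negative for every k, mirroring Fig. 4. The training horizon t = 500 is an arbitrary illustrative choice.

def current_coeff(t, k, delta):
    # Coefficient of grad J(theta_k) in theta_t, from (17).
    return (1 - delta ** (t - k)) / (1 - delta)

def target_coeff(t, k, delta, gamma):
    # Coefficient of grad J(theta_k) in theta'_t, from (18).
    return sum((1 - gamma) * gamma ** (t - k - j - 1)
               * (1 - delta ** (j + 1)) / (1 - delta)
               for j in range(t - k))

delta, gamma, t = 0.9, 0.95, 500
diffs = [current_coeff(t, k, delta) - target_coeff(t, k, delta, gamma)
         for k in range(t)]
# (19): the target theta' is the more conservative one (up to float rounding).
assert all(d >= -1e-12 for d in diffs)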
C. Experiment Settings
C.1. Dataset Details
We followed standard settings for CIFAR10 and SVHN. For CIFAR10, there are 50,000 training images and 10,000 test images. We split the training set into a 45,000 train set and a 5,000 validation set for our experiments. The validation set contains the same number of samples per class. We applied global contrast normalization and ZCA normalization. For data augmentation, we used random horizontal flipping, random cropping after padding 2 pixels on each side of the image, and added Gaussian noise with standard deviation 0.15 to each pixel.

Figure 4. The difference of the first term (from the current model θ) and the second term (from the target model θ′) in (19) for each iteration when δ is 0.9 and γ is 0.95. The trend between iterations 0 and 499,000 is almost the same as in the early iterations shown in this figure.
For SVHN, there are 73,257 training images and 26,032 test images. We split the training set into a 65,931 train set and a 7,326 validation set for our experiments. The validation set contains the same number of samples per class. We applied global contrast normalization and ZCA normalization. For data augmentation, we used only random cropping after padding 2 pixels on each side of the image.
In our main experiments, we split the training set into the labeled set and the unlabeled set. The size of the unlabeled set changes depending on the unlabeled data imbalance type because of the limited size of the training dataset. For fair experiments, we set the size of the unlabeled set based on the Same case, which uses the lowest number of unlabeled samples. The sizes of the unlabeled data are described in Tables 6a and 6b.
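As a sketch, the per-class sample counts and the labeled/unlabeled split can be generated as below. The exponential decay profile n_c = n_max · ρ^(−c/(C−1)) is the commonly used long-tailed construction (cf. Cui et al., 2019) and should be treated as our assumption here, as the exact profile is not restated in this section.

import numpy as np

def long_tailed_counts(n_max, rho, num_classes=10):
    # Per-class counts decaying exponentially from n_max down to n_max/rho.
    return [int(n_max * rho ** (-c / (num_classes - 1)))
            for c in range(num_classes)]

def split_labeled_unlabeled(labels, labeled_counts, unlabeled_counts, seed=0):
    # Draw disjoint labeled/unlabeled index sets with prescribed per-class
    # counts from a fully labeled training set.
    rng = np.random.RandomState(seed)
    lab_idx, unl_idx = [], []
    for c, (n_l, n_u) in enumerate(zip(labeled_counts, unlabeled_counts)):
        idx_c = rng.permutation(np.flatnonzero(np.asarray(labels) == c))
        lab_idx.extend(idx_c[:n_l])
        unl_idx.extend(idx_c[n_l:n_l + n_u])
    return np.array(lab_idx), np.array(unl_idx)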
C.2. Implementation details
In all experiments, we use the Wide-Resnet-28-2 model (Zagoruyko & Komodakis, 2016). Following the settings from Verma et al. (2019), we set SGD with Nesterov momentum as our optimizer and adopted the cosine annealing technique (Loshchilov & Hutter, 2016). Detailed hyperparameters for the experiments are described in Table 7.
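A minimal PyTorch sketch of this optimization setup follows (initial learning rate 0.1, Nesterov momentum 0.9, weight decay 10^-4, and a cosine ramp-down over 600k iterations, cf. Table 7); the exact shape of the cosine schedule is our reading of Loshchilov & Hutter (2016) and should be treated as an assumption.

import math
import torch

def make_optimizer_and_scheduler(model, lr=0.1, rampdown_iters=600_000):
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9,
                                weight_decay=1e-4, nesterov=True)
    # Cosine ramp-down: lr(i) = lr * cos(0.5 * pi * i / rampdown_iters),
    # clamped at zero beyond the ramp-down horizon.
    scheduler = torch.optim.lr_scheduler.LambdaLR(
        optimizer,
        lambda i: max(0.0, math.cos(0.5 * math.pi * min(i, rampdown_iters)
                                    / rampdown_iters)))
    return optimizer, scheduler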
D. Detailed Experiment Results
We omitted the standard deviations from the experiments in the main paper for readability. Tables 8, 9, and 10 report the results with standard deviations. Since we used five different seeds in each experiment, the class frequency distribution varies from seed to seed, which results in a change in the baseline performance. As a result, the standard deviation in our experiments is larger than that obtained from random initialization of the weights alone.
E. Object Detection Experiment Settings
E.1. Dataset Details
We used the PASCAL VOC2007 trainval dataset as the labeled data and the PASCAL VOC2012 trainval dataset as the unlabeled data. Fig. 5 shows the class distributions of the PASCAL VOC data. The imbalance factor of the labeled data is 22, and the imbalance factor of the unlabeled data is 15. The ordering of the class frequencies is also different. This means that the object detection task is more difficult and closer to real settings.
E.2. Implementation details
We followed the CSD⁴ (Jeong et al., 2019) experiment settings and used the SSD300 model (Liu et al., 2016). All hyperparameters such as the coefficient, learning iterations, schedule function, and background elimination are the same. We set g(z) = z/N_max because it shows better performance.
Figure 1. Toy examples: We experimented on the two moons and four spins datasets in CISSL settings for four algorithms (supervised learning, Π model (Laine & Aila, 2016), Mean Teacher (Tarvainen & Valpola, 2017), and SCL (ours)). The color represents the probability of the class with the highest confidence.

Figure 2. Suppressed Consistency Loss (SCL). Due to the imbalance in the data, the decision boundary tends to skew into the areas of the minor class under consistency regularization. SCL scales the consistency loss with the number of samples of the predicted class, suppressing it for minority classes, and pushes the decision boundary towards low-density areas.

Figure 3. Types of unlabeled data imbalance. (a) Imbalance of the labeled data. (b) Uniform case: the number of samples is the same in all classes (ρ_u = 1). (c) Half case: the imbalance factor of the unlabeled data is half that of the labeled data (ρ_u = ρ_l/2). (d) Same case: same labeled and unlabeled imbalance factor (ρ_u = ρ_l).
Figure 5. Class distributions for the labeled dataset (VOC2007) and the unlabeled dataset (VOC2012).
Table 1. Mean and standard deviation of validation error rates (%) for all, major, and minor classes in the toy examples. We conducted 5 runs with different random seeds for the class imbalance distribution.

Dataset     Class type   Supervised      Π model         Mean Teacher    MT+SCL (ours)
Two moons   All          25.06 ± 12.43   41.57 ± 8.82    34.99 ± 9.98    24.39 ± 15.14
            Major         0.95 ± 1.24     0.00 ± 0.00     0.01 ± 0.03     0.06 ± 0.07
            Minor        49.17 ± 24.74   83.14 ± 17.64   69.96 ± 19.98   48.01 ± 31.04
Four spins  All          19.70 ± 6.70    17.79 ± 8.39    14.99 ± 8.46    10.91 ± 8.94
            Major         7.83 ± 5.43     4.75 ± 3.74     4.76 ± 3.30     6.28 ± 3.26
            Minor        49.39 ± 25.61   52.68 ± 31.17   43.29 ± 31.53   27.68 ± 36.48
We compare supervised learning with SSL's representative algorithms, the Π model (Laine & Aila, 2016) and Mean Teacher (Tarvainen & Valpola, 2017), via toy examples, and we analyze why MT performs better in CISSL through a mathematical approach.
In our experiments, we used the Wide-Resnet-28-2 model (Zagoruyko & Komodakis, 2016). It has enough capacity to show the performance improvement of SSL objectively (Oliver et al., 2018), and it is used in the new SSL methods (Berthelot et al., 2019; Verma et al., 2019). We adopt the optimizer and learning rate from (Verma et al., 2019), and the other hyper-parameters are set under a similar setting to (Oliver et al., 2018)². For each configuration we used five different random seeds.
Table 2. Test error rates (%) from experiments with 4k labeled samples and imbalance factor {10, 20, 50, 100} under 3 different unlabeled imbalance types in CIFAR10, and imbalance factor {10, 20, 50, 100} under 3 different unlabeled imbalance types in SVHN. VAT+EM refers to Virtual Adversarial Training with Entropy Minimization. To improve legibility, the standard deviations are listed in the supplementary material (Table 8). (Bold/Red/Blue: supervised, best, and second best results for each column.)
Table 3. Test error rates (%) from experiments with different re-weighting methods in CIFAR10 and SVHN. We compared inverse and normalization (IN), focal loss (FOCAL), and class-balanced loss (CB) to the conventional cross-entropy loss (CE).
Table 4. Test error rates (%) from experiments with imbalance factor 100 and the number of labeled data {1k, 2k, 4k} under 3 different unlabeled imbalance types in CIFAR10, and the number of labeled data {250, 500, 1k} under 3 different unlabeled imbalance types in SVHN. Details are the same as in Table 2.

(a) CIFAR10
Unlabeled imbalance type              Uniform (ρ_u = 1)      Half (ρ_u = ρ_l/2)     Same (ρ_u = ρ_l)
# labeled data                        1000  2000  4000       1000  2000  4000       1000  2000  4000
Supervised                            54.24 45.81 36.71      54.24 45.81 36.71      54.24 45.81 36.71
Π-Model (Laine & Aila, 2016)          56.82 48.55 39.36      55.99 47.74 38.84      55.42 46.83 38.05
MT (Tarvainen & Valpola, 2017)        51.74 38.94 29.06      51.61 42.47 35.37      52.58 44.11 35.91
VAT + EM (Miyato et al., 2018)        53.68 48.47 36.57      53.60 45.20 36.77      53.62 44.77 37.67
VAT + EM + SNTG (Luo et al., 2018)    54.53 48.23 36.34      55.59 45.37 38.48      55.55 45.99 38.48
Pseudo-Label (Lee, 2013)              58.19 50.01 39.59      57.05 49.42 39.72      56.68 48.45 38.69
ICT (Verma et al., 2019)              57.10 48.25 38.33      56.02 47.60 37.36      55.10 47.19 36.85
MT + SCL (ours)                       42.84 28.69 22.62      45.72 39.97 33.09      48.00 40.69 34.22

(b) SVHN
Unlabeled imbalance type              Uniform (ρ_u = 1)      Half (ρ_u = ρ_l/2)     Same (ρ_u = ρ_l)
# labeled data                        250   500   1000       250   500   1000       250   500   1000
Supervised                            61.31 47.98 35.89      61.31 47.98 35.89      61.31 47.98 35.89
Π-Model (Laine & Aila, 2016)          54.51 39.49 28.59      54.14 42.20 33.73      54.10 43.89 33.71
MT (Tarvainen & Valpola, 2017)        38.32 18.14  8.94      41.72 23.33 17.23      42.42 28.86 21.01
VAT + EM (Miyato et al., 2018)        64.67 44.04 29.15      58.01 41.15 30.44      55.03 42.44 32.39
VAT + EM + SNTG (Luo et al., 2018)    65.02 93.30 93.30      57.94 93.30 93.30      54.19 93.30 93.30
Pseudo-Label (Lee, 2013)              63.16 49.78 32.79      54.79 44.32 33.70      56.83 43.71 33.53
ICT (Verma et al., 2019)              86.54 77.64 67.02      84.22 72.21 58.99      85.15 71.19 56.97
MT + SCL (ours)                       26.25 15.31  8.56      33.44 22.26 18.63      35.32 27.13 20.39
Table 5. Detection results for the PASCAL VOC2007 test set. cls and loc are the consistency losses for classification and localization, respectively. We trained SSD300 on VOC07(L)+VOC12(U). Our result is from three independent trials.

Algorithm    Supervised   CSD (Jeong et al., 2019)    CSD + SCL (ours)
cls                       o             o             o              o
loc                                     o                            o
mAP (%)      70.2         71.7          72.3          72.07 ± 0.15   72.60 ± 0.10
Table 6. Number of unlabeled data in CIFAR10 and SVHN according to the imbalance factor and the number of labeled data.

(a) CIFAR10
                      # of labeled data
Imbalance factor      1k      2k      4k
100                   10166   9166    7166
50                    -       -       8596
20                    -       -       11322
10                    -       -       14389

(b) SVHN
                      # of labeled data
Imbalance factor      250     500     1k
100                   16109   15858   15360
50                    -       -       17455
20                    -       -       21449
10                    -       -       25943
Table 7. Hyperparameters for the shared environment and for each SSL algorithm and our method used in the experiments.

Shared
  Training iterations                           500k
  Consistency ramp-up iterations                200k
  Initial learning rate                         0.1
  Cosine learning rate ramp-down iterations     600k
  Weight decay                                  10^-4
  Momentum                                      0.9
Π Model
  Max consistency coefficient                   20
Mean Teacher
  Max consistency coefficient                   8
  Exponential moving average decay factor       0.95
VAT + EM
  Max consistency coefficient                   0.3
  VAT (CIFAR10)                                 6.0
  VAT (SVHN)                                    1.0
  VAT ξ                                         10^-6
VAT + EM + SNTG (as for VAT + EM)
  Entropy penalty multiplier                    0.06
Pseudo-Label
  Max consistency coefficient                   1.0
  Pseudo-label threshold                        0.95
ICT
  Max consistency coefficient                   100
  Exponential moving average decay factor       0.999
  ICT α                                         1.0
Suppressed Consistency Loss (ours)
  Suppression coefficient (β)                   0.5
Table 8. Test error rates (%) and standard deviations from experiments with 4k labeled samples and imbalance factor {10, 20, 50, 100} under 3 different unlabeled imbalance types in CIFAR10 and SVHN. VAT+EM refers to Virtual Adversarial Training with Entropy Minimization.
(a) CIFAR10, Uniform case (ρ_u = 1)
Method                               ρ_l = 10       ρ_l = 20       ρ_l = 50       ρ_l = 100
Supervised                           23.03 ± 1.65   27.49 ± 1.87   33.15 ± 2.83   36.71 ± 2.79
Π-Model (Laine & Aila, 2016)         21.1 ± 1.93    25.74 ± 3.82   33.91 ± 3.49   39.36 ± 4.47
MT (Tarvainen & Valpola, 2017)       16.45 ± 1.24   19.25 ± 1.99   23.45 ± 3.30   29.06 ± 5.13
VAT + EM (Miyato et al., 2018)       17.93 ± 2.12   20.18 ± 3.18   30.43 ± 6.18   36.57 ± 7.20
VAT + EM + SNTG (Luo et al., 2018)   18.15 ± 2.25   20.39 ± 2.46   29.77 ± 6.71   36.34 ± 6.54
Pseudo-Label (Lee, 2013)             19.33 ± 1.36   24.34 ± 4.06   34.18 ± 4.23   39.59 ± 5.70
ICT (Verma et al., 2019)             18.01 ± 1.28   20.52 ± 1.91   30.18 ± 2.63   38.33 ± 4.72
MT + SCL (ours)                      15.65 ± 0.69   16.99 ± 1.31   19.95 ± 2.36   22.62 ± 3.54

(a) CIFAR10, Half case (ρ_u = ρ_l/2)
Method                               ρ_l = 10       ρ_l = 20       ρ_l = 50       ρ_l = 100
Supervised                           23.03 ± 1.65   27.49 ± 1.87   33.15 ± 2.83   36.71 ± 2.79
Π-Model (Laine & Aila, 2016)         22.69 ± 1.99   27.72 ± 4.17   33.96 ± 3.19   38.84 ± 4.17
MT (Tarvainen & Valpola, 2017)       19.48 ± 1.96   23.30 ± 2.85   30.06 ± 3.92   35.37 ± 3.52
VAT + EM (Miyato et al., 2018)       20.17 ± 2.49   24.50 ± 2.88   32.54 ± 4.61   36.77 ± 3.75
VAT + EM + SNTG (Luo et al., 2018)   20.41 ± 2.47   24.64 ± 2.79   32.56 ± 4.05   38.48 ± 3.87
Pseudo-Label (Lee, 2013)             21.23 ± 2.52   26.78 ± 3.41   34.12 ± 4.51   39.72 ± 4.20
ICT (Verma et al., 2019)             19.53 ± 1.41   23.90 ± 2.07   31.09 ± 3.35   37.36 ± 2.02
MT + SCL (ours)                      17.36 ± 1.17   21.74 ± 2.15   28.20 ± 3.09   33.09 ± 3.63

(a) CIFAR10, Same case (ρ_u = ρ_l)
Method                               ρ_l = 10       ρ_l = 20       ρ_l = 50       ρ_l = 100
Supervised                           23.03 ± 1.65   27.49 ± 1.87   33.15 ± 2.83   36.71 ± 2.79
Π-Model (Laine & Aila, 2016)         23.49 ± 2.69   28.18 ± 3.31   34.22 ± 3.19   38.05 ± 3.19
MT (Tarvainen & Valpola, 2017)       20.50 ± 2.58   24.67 ± 2.60   31.77 ± 3.79   35.91 ± 3.70
VAT + EM (Miyato et al., 2018)       21.45 ± 1.88   25.83 ± 3.21   33.13 ± 3.67   37.67 ± 2.20
VAT + EM + SNTG (Luo et al., 2018)   21.87 ± 2.65   26.49 ± 3.07   33.36 ± 3.86   38.48 ± 2.96
Pseudo-Label (Lee, 2013)             22.73 ± 2.74   27.50 ± 3.39   34.91 ± 2.57   38.69 ± 4.28
ICT (Verma et al., 2019)             19.96 ± 1.05   25.63 ± 1.91   33.56 ± 3.14   36.85 ± 3.44
MT + SCL (ours)                      18.69 ± 2.09   22.98 ± 2.33   29.76 ± 2.40   34.22 ± 3.50

(b) SVHN, Uniform case (ρ_u = 1)
Method                               ρ_l = 10       ρ_l = 20       ρ_l = 50       ρ_l = 100
Supervised                           18.49 ± 1.90   21.92 ± 2.28   30.03 ± 3.83   35.89 ± 6.39
Π-Model (Laine & Aila, 2016)         11.74 ± 1.80   13.42 ± 2.14   21.63 ± 4.58   28.59 ± 7.90
MT (Tarvainen & Valpola, 2017)        6.52 ± 0.55    6.75 ± 0.49    7.60 ± 1.85    8.94 ± 2.12
VAT + EM (Miyato et al., 2018)        6.81 ± 0.30    7.70 ± 0.87   13.84 ± 6.17   29.15 ± 4.80
VAT + EM + SNTG (Luo et al., 2018)   93.30 ± 0.00   93.30 ± 0.00   14.88 ± 5.38   93.30 ± 0.00
Pseudo-Label (Lee, 2013)             10.15 ± 0.87    9.97 ± 1.45   16.00 ± 4.34   32.79 ± 7.62
ICT (Verma et al., 2019)             27.82 ± 5.12   37.75 ± 7.50   58.20 ± 9.38   67.02 ± 12.66
MT + SCL (ours)                       6.52 ± 0.53    7.11 ± 0.30    7.70 ± 0.73    8.56 ± 0.86

(b) SVHN, Half case (ρ_u = ρ_l/2)
Method                               ρ_l = 10       ρ_l = 20       ρ_l = 50       ρ_l = 100
Supervised                           18.49 ± 1.90   21.92 ± 2.28   30.03 ± 3.83   35.89 ± 6.39
Π-Model (Laine & Aila, 2016)         12.96 ± 1.26   16.70 ± 4.01   24.02 ± 3.97   33.73 ± 7.52
MT (Tarvainen & Valpola, 2017)        7.25 ± 0.38    8.85 ± 1.10   12.19 ± 1.68   17.23 ± 2.44
VAT + EM (Miyato et al., 2018)        8.99 ± 1.21   11.59 ± 1.85   18.95 ± 4.49   30.44 ± 6.95
VAT + EM + SNTG (Luo et al., 2018)   93.30 ± 0.00   93.30 ± 0.00   20.60 ± 5.73   93.30 ± 0.00
Pseudo-Label (Lee, 2013)             11.59 ± 1.96   13.97 ± 2.11   24.40 ± 4.46   33.70 ± 6.89
ICT (Verma et al., 2019)             22.38 ± 7.89   38.12 ± 6.57   48.88 ± 8.33   58.99 ± 7.35
MT + SCL (ours)                       7.54 ± 0.50    9.29 ± 1.48   11.46 ± 1.21   18.63 ± 3.97

(b) SVHN, Same case (ρ_u = ρ_l)
Method                               ρ_l = 10       ρ_l = 20       ρ_l = 50       ρ_l = 100
Supervised                           18.49 ± 1.90   21.92 ± 2.28   30.03 ± 3.83   35.89 ± 6.39
Π-Model (Laine & Aila, 2016)         13.46 ± 2.13   17.13 ± 2.61   26.53 ± 3.43   33.71 ± 8.17
MT (Tarvainen & Valpola, 2017)        8.62 ± 1.29    9.29 ± 1.41   15.16 ± 3.54   21.01 ± 4.14
VAT + EM (Miyato et al., 2018)       10.39 ± 0.96   13.62 ± 2      21.49 ± 5.27   32.39 ± 8.25
VAT + EM + SNTG (Luo et al., 2018)   93.30 ± 0.00   93.30 ± 0.00   23.52 ± 7.34   93.30 ± 0.00
Pseudo-Label (Lee, 2013)             12.34 ± 1.79   15.93 ± 2.43   25.66 ± 5.95   33.53 ± 8.08
ICT (Verma et al., 2019)             24.53 ± 12.62  37.25 ± 8.22   49.85 ± 7.74   56.97 ± 10.28
MT + SCL (ours)                       8.22 ± 0.89   10.04 ± 0.82   15.48 ± 2.29   20.39 ± 4.10

Table 9. Test error rates (%) and standard deviations from experiments with imbalance factor 100 and the number of labeled data {1k, 2k, 4k} in CIFAR10, and the number of labeled data {250, 500, 1k} in SVHN under 3 different unlabeled imbalance types.

Table 10. Test error rates (%) and standard deviations from experiments with different re-weighting methods in CIFAR10 and SVHN. We compared inverse and normalization (IN), focal loss (FOCAL), and class-balanced loss (CB) to the conventional cross-entropy loss (CE).

² https://github.com/brain-research/realistic-ssl-evaluation
⁴ https://github.com/soo89/CSD-SSD
References

Ando, S. and Huang, C. Y. Deep over-sampling framework for classifying imbalanced data. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 770-785. Springer, 2017.
Bachman, P., Alsharif, O., and Precup, D. Learning with pseudo-ensembles. In Advances in Neural Information Processing Systems, pp. 3365-3373, 2014.
Bearman, A., Russakovsky, O., Ferrari, V., and Fei-Fei, L. What's the point: Semantic segmentation with point supervision. In European Conference on Computer Vision, pp. 549-565. Springer, 2016.
Berthelot, D., Carlini, N., Goodfellow, I., Papernot, N., Oliver, A., and Raffel, C. Mixmatch: A holistic approach to semi-supervised learning. arXiv preprint arXiv:1905.02249, 2019.
Buda, M., Maki, A., and Mazurowski, M. A. A systematic study of the class imbalance problem in convolutional neural networks. Neural Networks, 106:249-259, 2018.
Cao, K., Wei, C., Gaidon, A., Arechiga, N., and Ma, T. Learning imbalanced datasets with label-distribution-aware margin loss. arXiv preprint arXiv:1906.07413, 2019.
Chapelle, O., Scholkopf, B., and Zien, A. Semi-supervised learning (Chapelle, O. et al., eds.; 2006) [book reviews]. IEEE Transactions on Neural Networks, 20(3):542-542, 2009.
Cui, Y., Jia, M., Lin, T.-Y., Song, Y., and Belongie, S. Class-balanced loss based on effective number of samples. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9268-9277, 2019.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. Imagenet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pp. 248-255. IEEE, 2009.
Dong, Q., Gong, S., and Zhu, X. Imbalanced deep learning by minority class incremental rectification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(6):1367-1381, 2019.
Dumoulin, V., Belghazi, I., Poole, B., Mastropietro, O., Lamb, A., Arjovsky, M., and Courville, A. Adversarially learned inference. arXiv preprint arXiv:1606.00704, 2016.
Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.
Huang, C., Li, Y., Change Loy, C., and Tang, X. Learning deep representation for imbalanced classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5375-5384, 2016.
Jeong, J., Lee, S., Kim, J., and Kwak, N. Consistency-based semi-supervised learning for object detection. In Advances in Neural Information Processing Systems, pp. 10758-10767, 2019.
Johnson, J. M. and Khoshgoftaar, T. M. Survey on deep learning with class imbalance. Journal of Big Data, 6(1):27, 2019.
Kendall, M. G. et al. The Advanced Theory of Statistics, 2nd ed., 1946.
Khan, S. H., Hayat, M., Bennamoun, M., Sohel, F. A., and Togneri, R. Cost-sensitive learning of deep feature representations from imbalanced data. IEEE Transactions on Neural Networks and Learning Systems, 29(8):3573-3587, 2017.
Krawczyk, B. Learning from imbalanced data: open challenges and future directions. Progress in Artificial Intelligence, 5(4):221-232, 2016.
Krizhevsky, A., Hinton, G., et al. Learning multiple layers of features from tiny images. Technical report, Citeseer, 2009.
Laine, S. and Aila, T. Temporal ensembling for semi-supervised learning. arXiv preprint arXiv:1610.02242, 2016.
Lecouat, B., Foo, C.-S., Zenati, H., and Chandrasekhar, V. Manifold regularization with gans for semi-supervised learning. arXiv preprint arXiv:1807.04307, 2018.
Lee, D.-H. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on Challenges in Representation Learning, ICML, volume 3, pp. 2, 2013.
Lee, H., Park, M., and Kim, J. Plankton classification on imbalanced large scale database via convolutional neural networks with transfer learning. In 2016 IEEE International Conference on Image Processing (ICIP), pp. 3713-3717. IEEE, 2016.
Li, S., Wang, Z., Zhou, G., and Lee, S. Y. M. Semi-supervised learning for imbalanced sentiment classification. In Twenty-Second International Joint Conference on Artificial Intelligence, 2011.
Lin, T.-Y., Goyal, P., Girshick, R., He, K., and Dollár, P. Focal loss for dense object detection. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2980-2988, 2017.
Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.-Y., and Berg, A. C. Ssd: Single shot multibox detector. In European Conference on Computer Vision, pp. 21-37. Springer, 2016.
Loshchilov, I. and Hutter, F. Sgdr: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983, 2016.
Luo, Y., Zhu, J., Li, M., Ren, Y., and Zhang, B. Smooth neighbors on teacher graphs for semi-supervised learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8896-8905, 2018.
Masko, D. and Hensman, P. The impact of imbalanced training data for convolutional neural networks, 2015.
Miyato, T., Maeda, S.-i., Koyama, M., and Ishii, S. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(8):1979-1993, 2018.
Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., and Ng, A. Y. Reading digits in natural images with unsupervised feature learning. 2011.
Oliver, A., Odena, A., Raffel, C. A., Cubuk, E. D., and Goodfellow, I. Realistic evaluation of deep semi-supervised learning algorithms. In Advances in Neural Information Processing Systems, pp. 3235-3246, 2018.
Pouyanfar, S., Tao, Y., Mohan, A., Tian, H., Kaseb, A. S., Gauen, K., Dailey, R., Aghajanzadeh, S., Lu, Y.-H., Chen, S.-C., et al. Dynamic sampling in convolutional neural networks for imbalanced data classification. In 2018 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), pp. 112-117. IEEE, 2018.
Radford, A., Metz, L., and Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434, 2015.
Russakovsky, O., Li, L.-J., and Fei-Fei, L. Best of both worlds: human-machine collaboration for object annotation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2121-2131, 2015.
Sajjadi, M., Javanmardi, M., and Tasdizen, T. Regularization with stochastic transformations and perturbations for deep semi-supervised learning. In Advances in Neural Information Processing Systems, pp. 1163-1171, 2016.
Tarvainen, A. and Valpola, H. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Advances in Neural Information Processing Systems, pp. 1195-1204, 2017.
Verma, V., Lamb, A., Kannala, J., Bengio, Y., and Lopez-Paz, D. Interpolation consistency training for semi-supervised learning. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI), pp. 3635-3641, 2019.
Zagoruyko, S. and Komodakis, N. Wide residual networks. In Proceedings of the British Machine Vision Conference (BMVC), 2016.
WEAK SOLUTIONS OF MULLINS-SEKERKA FLOW AS A HILBERT SPACE GRADIENT FLOW

SEBASTIAN HENSEL AND KERREK STINSON

arXiv:2206.08246, 16 Jun 2022

Abstract. We propose a novel weak solution theory for the Mullins-Sekerka equation primarily motivated from a gradient flow perspective. Previous existence results on weak solutions due to Luckhaus and Sturzenhecker (Calc. Var. PDE 3, 1995) or Röger (SIAM J. Math. Anal. 37, 2005) left open the inclusion of both a sharp energy dissipation principle and a weak formulation of the contact angle at the intersection of the interface and the domain boundary. To incorporate these, we introduce a functional framework encoding a weak solution concept for Mullins-Sekerka flow essentially relying only on i) a single sharp energy dissipation inequality in the spirit of De Giorgi, and ii) a weak formulation for an arbitrary fixed contact angle through a distributional representation of the first variation of the underlying capillary energy. Both ingredients are intrinsic to the interface of the evolving phase indicator and an explicit distributional PDE formulation with potentials can be derived from them. Existence of weak solutions is established via subsequential limit points of the naturally associated minimizing movements scheme. Smooth solutions are consistent with the classical Mullins-Sekerka flow, and even further, we expect our solution concept to be amenable, at least in principle, to the recently developed relative entropy approach for curvature driven interface evolution.

Keywords: Mullins-Sekerka flow, gradient flows, weak solutions, energy dissipation inequality, De Giorgi metric slope, contact angle, Young's law.
Mathematical Subject Classification: 35D30, 49J27, 49Q20, 53E10.
1. Introduction

1.1. Context and motivation. The purpose of this paper is to develop the gradient flow perspective for the Mullins-Sekerka equation at the level of a weak solution theory. The Mullins-Sekerka equation is a curvature driven evolution equation for a mass-conserved quantity, see (1a)-(1e) below. The groundbreaking results of the early 90s showed that when strong solutions exist, this equation is in fact the sharp interface limit of the Cahn-Hilliard equation, a fourth-order diffuse interface model for phase separation in materials [4] (see also, e.g., [9], [43]). However, as for mean curvature flows, one of the critical challenges in studying such sharp interface models is the existence of solutions after topological change. As a result, many different weak solution concepts have been developed, and in analogy with the development of weak solution theories for PDEs and the introduction of weak function spaces, a variety of weak notions of smooth surfaces have been applied for solution concepts. In the case that a surface arises as the common boundary of two sets (i.e., an interface), a powerful solution concept has been the BV solution, first developed for the Mullins-Sekerka flow in the seminal work of Luckhaus and Sturzenhecker [35].
For BV solutions in the sense of [35], the evolving phase is represented by a time-dependent family of characteristic functions which are of bounded variation. Furthermore, both the evolution equation for the phase and the Gibbs-Thomson law are satisfied in a distributional form. The corresponding existence result for such solutions crucially leverages the well-known fact that the Mullins-Sekerka flow can be formally obtained as an H −1 -type gradient flow of the perimeter functional (see, e.g., [22]). Indeed, BV solutions for the Mullins-Sekerka flow are constructed in [35] as subsequential limit points of the associated minimizing movements scheme. However, due to the discontinuity of the first variation of the perimeter functional with respect to weak- * convergence in BV , Luckhaus and Sturzenhecker [35] relied on the additional assumption of convergence of the perimeters in order to obtain a BV solution in their sense. Based on geometric measure theoretic results of Schätzle [47] and a pointwise interpretation of the Gibbs-Thomson law in terms of a generalized curvature intrinsic to the interface [44], Röger [45] was later able to remove the energy convergence assumption (see also [2]).
However, the existence results of Luckhaus and Sturzenhecker [35] and Röger [45] still leave two fundamental questions unanswered. First, both weak formulations of the Gibbs-Thomson law do not encompass a weak formulation for the boundary condition of the interface where it intersects the domain boundary. For instance, if the energy is proportional to the surface area of the interface, one expects a constant ninety degree contact angle condition at the intersection points, which quantitatively accounts for the fact that minimizing energy in the bulk, the surface will travel the shortest path to the boundary. Second, neither of the two works establishes a sharp energy dissipation principle, which, because of the formal gradient flow structure of the Mullins-Sekerka equation, is a natural ingredient for a weak solution concept as we will further discuss below. A second motivation to prove a sharp energy dissipation inequality stems from its crucial role in the recent progress concerning weak-strong uniqueness principles for curvature driven interface evolution problems (see, e.g., [19], [21] or [25]).
Turning to approximations of the Mullins-Sekerka flow via the Cahn-Hilliard equation, Chen [14] introduced an alternative weak solution concept, which does include an energy dissipation inequality. To prove existence, Chen developed powerful estimates (that have been used in numerous applications, e.g., [1], [2], [37]) to control the sign of the discrepancy measure, an object which captures the distance of a solution from an equipartition of energy. Critically, these estimates do not rely on the maximum principle and are applicable to the fourth-order Cahn-Hilliard equation. However, in contrast to Ilmanen's proof for the convergence of the Allen-Cahn equation to mean curvature flow [28], where the discrepancy vanishes in the limit, Chen is restricted to proving non-positivity in the limit. As a result, the proposed solution concept requires a varifold lifting of the energy for the dissipation inequality and a modified varifold for the Gibbs-Thomson relation.
In the interior of the domain, the modified Gibbs-Thomson relation no longer implies the pointwise interpretation of the evolving surface's curvature in terms of the trace of the chemical potential and, on the boundary, cannot account for the contact angle. Further, Chen's solution concept does not use the optimal dissipation inequality to capture the full dynamics of the gradient flow.
Looking to apply the framework of evolutionary Gamma-convergence developed by Sandier and Serfaty [46] to the convergence of the Cahn-Hilliard equation, Le [31] introduces a gradient flow solution concept for the Mullins-Sekerka equation, which principally relies on an optimal dissipation inequality. However, interpretation of the limiting interface as a solution in this sense requires that the surface is regular and does not intersect the domain boundary, i.e., there is no contact angle. As noted by Serfaty [48], though the result of Le [31] sheds light on the gradient flow structure of the Mullins-Sekerka flow in a smooth setting, it is of interest to develop a general framework for viewing solutions of the Mullins-Sekerka flow as curves of maximal slope even on the level of a weak solution theory. This is one of the primary contributions of the present work.
Though still in the spirit of the earlier works by Le [31], Luckhaus and Sturzenhecker [35], and Röger [45], the solution concept we introduce includes both a weak formulation for the constant contact angle and a sharp energy dissipation principle. The boundary condition for the interface is in fact not only implemented for a constant contact angle α = π/2 but even for general constant contact angles α ∈ (0, π). For the formulation of the energy dissipation inequality, we exploit a gradient flow perspective encoded in terms of a De Giorgi type inequality. Recall to this end that for smooth gradient flows, the gradient flow equation \(\dot u = -\nabla E[u]\) can equivalently be represented by the inequality
\[
E[u(T)] + \int_0^T \frac{1}{2}|\dot u(t)|^2 + \frac{1}{2}|\nabla E[u(t)]|^2 \,\mathrm{d}t \le E[u(0)]
\]
(for a discussion of gradient flows and their solution concepts in further detail see Subsection 1.3). Representation of gradient flow dynamics through the above dissipation inequality allows one to generalize to the weak setting and is often amenable to typical variational machinery such as weak compactness and lower semi-continuity. The main conceptual contribution of this work consists of the introduction of a functional framework for which a weak solution of the Mullins-Sekerka flow is essentially characterized through only i) a single sharp energy dissipation inequality, and ii) a weak formulation for the contact angle condition in the form of a suitable distributional representation of the first variation of the energy. We emphasize that both these ingredients are intrinsic to the trajectory of the evolving phase indicator. Beyond proving existence of solutions via a minimizing movements scheme (Theorem 1), we show that our solution concept extends Le's [31] to the weak setting (Subsection 2.4), a more classical distributional PDE formulation with potentials can be derived from it (Lemma 4), smooth solutions are consistent with the classical Mullins-Sekerka equation (Lemma 5), and that the underlying varifold for the energy is of bounded variation (Proposition 6).
A natural question arising from the present work is whether solutions of the Cahn-Hilliard equation converge subsequentially to weak solutions of the Mullins-Sekerka flow in our sense, which would improve the seminal result of Chen [14] that relies on a (much) weaker formulation of the Mullins-Sekerka flow. An investigation of this question will be the subject of a future work.
1.2. Mullins-Sekerka motion law: Strong PDE formulation. Let d ≥ 2 and let Ω ⊂ R^d be a bounded domain with orientable C² boundary ∂Ω. Consider also a finite time horizon T_* ∈ (0, ∞) and let A = (A(t))_{t∈[0,T_*)} be a time-dependent family of smoothly evolving open subsets A(t) ⊂ Ω with ∂A(t) = ∂*A(t) and H^{d−1}(∂A(t) \ ∂*A(t)) = 0, t ∈ [0, T_*), where ∂*A refers to the reduced boundary [6]. Denoting for every t ∈ [0, T_*) by V_{∂A(t)} and H_{∂A(t)} the associated normal velocity and mean curvature vector, respectively, the family A is said to evolve by Mullins-Sekerka flow if for each t ∈ (0, T_*) there exists a chemical potential u(·, t) so that
\[
\Delta u(\cdot,t) = 0 \quad \text{in } \Omega\setminus\partial A(t), \tag{1a}
\]
\[
V_{\partial A(t)} = -\big(n_{\partial A(t)}\cdot [[\nabla u(\cdot,t)]]\big)\, n_{\partial A(t)} \quad \text{on } \partial A(t)\cap\Omega, \tag{1b}
\]
\[
c_0 H_{\partial A(t)} = u(\cdot,t)\, n_{\partial A(t)} \quad \text{on } \partial A(t)\cap\Omega, \tag{1c}
\]
\[
(n_{\partial\Omega}\cdot\nabla) u(\cdot,t) = 0 \quad \text{on } \partial\Omega\setminus(\partial A(t)\cap\Omega). \tag{1d}
\]
Here, we denote by c_0 ∈ (0, ∞) a fixed surface tension constant, by n_{∂A(t)} the unit normal vector field along ∂A(t) pointing inside the phase A(t), and similarly n_{∂Ω} is the inner normal on the domain boundary ∂Ω. Furthermore, the jump [[·]] across the interface ∂A(t) ∩ Ω in normal direction is understood to be oriented such that the signs in the following integration by parts formula are correct:
\[
-\int_{\Omega\setminus\partial A(t)} \eta\,\nabla\cdot v \,\mathrm{d}x = \int_\Omega \nabla\eta\cdot v \,\mathrm{d}x + \int_{\partial A(t)\cap\Omega} \eta\, n_{\partial A(t)}\cdot[[v]] \,\mathrm{d}\mathcal{H}^{d-1} + \int_{\partial\Omega} \eta\, n_{\partial\Omega}\cdot v \,\mathrm{d}\mathcal{H}^{d-1} \tag{2}
\]
for all sufficiently regular functions v : Ω → R^d and η : Ω → R.
For sufficiently smooth evolutions, it is a straightforward exercise to verify that the Mullins-Sekerka flow conserves the mass of the evolving phase, as
\[
\frac{\mathrm{d}}{\mathrm{d}t}\int_{A(t)} 1 \,\mathrm{d}x = -\int_{\partial A(t)\cap\Omega} V_{\partial A(t)}\cdot n_{\partial A(t)} \,\mathrm{d}\mathcal{H}^{d-1} = 0. \tag{3}
\]
To compute the change of interfacial surface area, we first need to fix a boundary condition for the interface. In the present work, we consider the setting of a fixed contact angle α ∈ (0, π) in the sense that for all t ∈ [0, T_*) it is required that
\[
n_{\partial\Omega}\cdot n_{\partial A(t)} = \cos\alpha \quad \text{on } \partial\Omega\cap\overline{\partial A(t)\cap\Omega}. \tag{1e}
\]
Then, it is again straightforward to compute that
\[
\frac{\mathrm{d}}{\mathrm{d}t}\bigg(\int_{\partial A(t)\cap\Omega} c_0 \,\mathrm{d}\mathcal{H}^{d-1} + \int_{\partial A(t)\cap\partial\Omega} c_0\cos\alpha \,\mathrm{d}\mathcal{H}^{d-1}\bigg) = -\int_{\partial A(t)\cap\Omega} V_{\partial A(t)}\cdot c_0 H_{\partial A(t)} \,\mathrm{d}\mathcal{H}^{d-1} = -\int_\Omega |\nabla u(\cdot,t)|^2 \,\mathrm{d}x \le 0. \tag{4}
\]
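For the reader's convenience, let us sketch the second equality in (4) under the above smoothness assumptions: inserting the motion law (1b) and the Gibbs-Thomson law (1c) yields
\[
-\int_{\partial A(t)\cap\Omega} V_{\partial A(t)}\cdot c_0 H_{\partial A(t)} \,\mathrm{d}\mathcal{H}^{d-1}
= \int_{\partial A(t)\cap\Omega} u(\cdot,t)\, n_{\partial A(t)}\cdot[[\nabla u(\cdot,t)]] \,\mathrm{d}\mathcal{H}^{d-1}
= -\int_\Omega |\nabla u(\cdot,t)|^2 \,\mathrm{d}x,
\]
where the last step follows from the integration by parts formula (2) applied with η = u(·, t) and v = ∇u(·, t), together with harmonicity (1a) and the Neumann boundary condition (1d).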
In view of the latter inequality, one may wonder whether the Mullins-Sekerka flow can be equivalently represented as a gradient flow with respect to interfacial surface energy. That this is indeed possible is of course a classical observation (see [22] and references therein) and, at least for smooth evolutions, may be realized in terms of a suitable H −1 -type metric on a manifold of smooth surfaces.
1.3. Gradient flow perspective assuming smoothly evolving geometry. To take advantage of the insight provided by (4), we recall two methods for gradient flows. In parallel, our approach is inspired by De Giorgi's methods for curves of maximal slope in metric spaces and the approach for Gamma-convergence of evolutionary equations developed by Sandier and Serfaty in [46], which has been applied to the Cahn-Hilliard approximation of the Mullins-Sekerka flow by Le [31].
Looking to the school of thought inspired by De Giorgi (see [7] and references therein), in a generic metric space (X, d) equipped with an energy E : X → R ∪ {∞}, a curve t ↦ u(t) ∈ X is said to be a solution of the differential inclusion \(-\frac{\mathrm{d}}{\mathrm{d}t}u \in \partial E[u]\) if it is a curve of maximal slope, that is, it satisfies the optimal dissipation relation
\[
E[u(T)] + \frac{1}{2}\int_0^T \Big|\frac{\mathrm{d}}{\mathrm{d}t}u\Big|^2 + |\partial E[u]|^2 \,\mathrm{d}t \le E[u(0)] \tag{5}
\]
for almost all T ∈ (0, T_*), where \(|\frac{\mathrm{d}}{\mathrm{d}t}u|\) is interpreted in the metric sense and
\[
|\partial E[u]| := \limsup_{v\to u} \frac{(E[v]-E[u])_+}{d(v,u)}.
\]
One motivation for this solution concept is the Banach setting where, for sufficiently nice energies E, the optimal dissipation (5) is equivalent to solving the differential inclusion [7]. The energy behind the gradient flow structure of the Mullins-Sekerka flow is the perimeter functional, for which we have the classical result of Modica [40] (see also [42])
\[
E_\epsilon[u] := \int_\Omega \frac{1}{\epsilon}W(u) + \epsilon|\nabla u|^2 \,\mathrm{d}x \;\xrightarrow{\;\Gamma\;}\; c_0 \operatorname{Per}_\Omega(\chi) =: E[\chi],
\]
thereby making the perspective of Sandier and Serfaty [46] relevant.
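For instance, for the standard double-well potential W(u) = u²(1−u)², the surface tension constant produced by this Γ-limit is, by the usual optimal-profile computation,
\[
c_0 = 2\int_0^1 \sqrt{W(s)} \,\mathrm{d}s = 2\int_0^1 s(1-s) \,\mathrm{d}s = \frac{1}{3}.
\]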
Abstractly, given Γ-converging (see, e.g., [10], [16]) energies \(E_\epsilon \xrightarrow{\Gamma} E\), this approach gives conditions for when a curve t ↦ u(t) ∈ Y, which is the limit of t ↦ u_ε(t) ∈ X_ε solving \(-\frac{\mathrm{d}}{\mathrm{d}t}u_\epsilon \in \nabla_{X_\epsilon}E_\epsilon[u_\epsilon]\), is a solution of the gradient flow \(-\frac{\mathrm{d}}{\mathrm{d}t}u \in \nabla_Y E[u]\) associated with the limiting energy. Specifically, this requires the lower semi-continuity of the time derivative and of the variations given by
\[
\int_0^T \Big\|\frac{\mathrm{d}}{\mathrm{d}t}u\Big\|_Y^2 \,\mathrm{d}t \le \liminf_{\epsilon\downarrow 0}\int_0^T \Big\|\frac{\mathrm{d}}{\mathrm{d}t}u_\epsilon\Big\|_{X_\epsilon}^2 \,\mathrm{d}t, \qquad
\int_0^T \big\|\nabla_Y E[u]\big\|_Y^2 \,\mathrm{d}t \le \liminf_{\epsilon\downarrow 0}\int_0^T \big\|\nabla_{X_\epsilon} E_\epsilon[u_\epsilon]\big\|_{X_\epsilon}^2 \,\mathrm{d}t,
\]
which are precisely the relations needed to maintain an optimal dissipation inequality (5) in the limit. We note that this idea was precisely developed in finite dimensions with C¹-functionals, and extending this approach to geometric evolution equations seems to require re-interpretation in general. This process of formally applying the Sandier-Serfaty approach to the Cahn-Hilliard equation was carried out by Le in [31] (see also [32] and [36]).

As the Cahn-Hilliard equation and the Mullins-Sekerka flow are mass preserving, it is necessary to introduce the Sobolev space \(H^1_{(0)} := H^1(\Omega) \cap \{u : \fint_\Omega u \,\mathrm{d}x = 0\}\) with dual \(H^{-1}_{(0)}\). Then, for a set A ⊂ Ω with Γ := ∂A ∩ Ω a piecewise Lipschitz surface, Le recalls the space \(H^{1/2}_{(0)}(\Gamma)\), the trace space of \(H^1(\Omega\setminus\Gamma)\) with constants quotiented out, and introduces a norm with Hilbert structure given by
\[
\|f\|_{H^{1/2}_{(0)}(\Gamma)} = \sqrt{(f,f)_{H^{1/2}_{(0)}(\Gamma)}} := \|\nabla \bar f\|_{L^2(\Omega)}, \tag{6}
\]
where \(\bar f\) satisfies the Dirichlet problem
\[
-\Delta \bar f = 0 \quad \text{in } \Omega\setminus\Gamma, \qquad \bar f = f \quad \text{on } \Gamma. \tag{7}
\]
Additionally, \(H^{-1/2}_{(0)}(\Gamma)\) is the naturally associated dual space with a Hilbert space structure induced by the corresponding Riesz isomorphism.
With these concepts, Le shows that in the smooth setting the Mullins-Sekerka flow is the gradient flow of the perimeter functional on a formal Hilbert manifold with tangent space given by \(H^{-1/2}_{(0)}(\Gamma)\), which for a characteristic function u(t) with interface Γ_t := ∂{u(t)=1} can summarily be written as
\[
-\frac{\mathrm{d}}{\mathrm{d}t}u(t) \in \nabla_{H^{-1/2}_{(0)}(\Gamma_t)} E[u(t)]. \tag{8}
\]
Further, solutions u_ε of the Cahn-Hilliard equation
\[
\partial_t u_\epsilon = \Delta v_\epsilon, \qquad v_\epsilon = \delta E_\epsilon[u_\epsilon] = \frac{1}{\epsilon}f'(u_\epsilon) - \epsilon\Delta u_\epsilon \quad \text{in } \Omega,
\]
\[
(n_{\partial\Omega}\cdot\nabla)u_\epsilon = 0 \quad \text{and} \quad (n_{\partial\Omega}\cdot\nabla)v_\epsilon = 0 \quad \text{on } \partial\Omega,
\]
are shown to converge to a trajectory t ↦ u(t) ∈ BV(Ω; {0, 1}), such that if the evolving surface Γ_t is C³ in space-time, then u is a solution of the Mullins-Sekerka flow (8) in the sense that
\[
\int_0^T \Big\|\frac{\mathrm{d}}{\mathrm{d}t}u(t)\Big\|^2_{H^{-1/2}_{(0)}(\Gamma_t)} \mathrm{d}t \le \liminf_{\epsilon\downarrow 0} \int_0^T \Big\|\frac{\mathrm{d}}{\mathrm{d}t}u_\epsilon(t)\Big\|^2_{H^{-1}_{(0)}(\Omega)} \mathrm{d}t,
\]
\[
\int_0^T \big\|H_{\Gamma_t}\big\|^2_{H^{1/2}_{(0)}(\Gamma_t)} \mathrm{d}t = \int_0^T \big\|\nabla_{H^{-1/2}_{(0)}(\Gamma_t)} E[u(t)]\big\|^2_{H^{-1/2}_{(0)}(\Gamma_t)} \mathrm{d}t \le \liminf_{\epsilon\downarrow 0} \int_0^T \big\|\nabla_{H^{-1}_{(0)}(\Omega)} E_\epsilon[u_\epsilon(t)]\big\|^2_{H^{-1}_{(0)}(\Omega)} \mathrm{d}t,
\]
where H_Γ is the scalar mean curvature of a sufficiently regular surface Γ. As developed by Le, interpretation of the left-hand sides of the above inequalities is only possible for regular Γ. In the next section, we will introduce function spaces and a solution concept that allow us to extend these quantities to the weak setting.
2. Main results and relation to previous works
So as not to waylay the reader, we first introduce in Subsection 2.1 a variety of function spaces necessary for our weak solution concept and then state our main existence theorem. Further properties of the associated solution space, an interpretation of our solution concept from the viewpoint of classical PDE theory (i.e., in terms of associated chemical potentials), as well as further properties of the time-evolving oriented varifolds associated with solutions which are obtained as limit points of the natural minimizing movements scheme are presented in Subsection 2.2. In Subsection 2.3, we return to a discussion of the function spaces introduced in Subsection 2.1 to further illuminate the intuition behind their choice. We then proceed in Subsection 2.4 with a discussion relating our functional framework to the one introduced by Le [31] for the smooth setting. In Subsection 2.5, we finally take the opportunity to highlight the potential of our framework in terms of the recent developments concerning weak-strong uniqueness for curvature driven interface evolution.
2.1. Weak formulation: Gradient flow structure and existence result. At the level of a weak formulation, we will describe the evolving interface, arising as the boundary of a phase region, in terms of a time-evolving family of characteristic functions of bounded variation. This strongly motivates us to formulate the gradient flow structure over a manifold of {0, 1}-valued BV functions in Ω. To this end, let d ≥ 2 and let Ω ⊂ R^d be a bounded domain with orientable C² boundary ∂Ω. Fixing the mass to be m_0 ∈ (0, L^d(Ω)), we define the "manifold"
\[
\mathcal{M}_{m_0} := \Big\{\chi \in BV(\Omega;\{0,1\}) : \int_\Omega \chi \,\mathrm{d}x = m_0\Big\}. \tag{9}
\]
For the definition of the associated energy functional E on M_{m_0}, recall that we aim to include contact point dynamics with fixed contact angle in this work. Hence, in addition to an isotropic interfacial energy contribution in the bulk, we also incorporate a capillary contribution. Precisely, for a fixed set of three positive surface tension constants (c_0, γ_+, γ_−) we consider an interfacial energy E[χ], χ ∈ M_{m_0}, of the form
\[
\int_\Omega c_0 \,\mathrm{d}|\nabla\chi| + \int_{\partial\Omega} \gamma_+ \chi \,\mathrm{d}\mathcal{H}^{d-1} + \int_{\partial\Omega} \gamma_- (1-\chi) \,\mathrm{d}\mathcal{H}^{d-1},
\]
where by an abuse of notation we do not distinguish between χ and its trace along ∂Ω. Furthermore, the surface tension constants are assumed to satisfy Young's relation |γ_+ − γ_−| < c_0, so that there exists an angle α ∈ (0, π) such that
\[
(\cos\alpha)\, c_0 = \gamma_+ - \gamma_-. \tag{10}
\]
For convenience, we will employ the following convention: switching if needed the roles of the sets indicated by χ and 1−χ, we may assume that γ_− < γ_+ and hence α ∈ (0, π/2]. In particular, by subtracting a constant, we may work with the following equivalent formulation of the energy functional on M_{m_0}:
\[
E[\chi] := \int_\Omega c_0 \,\mathrm{d}|\nabla\chi| + \int_{\partial\Omega} (\cos\alpha)\, c_0\, \chi \,\mathrm{d}\mathcal{H}^{d-1}, \qquad \chi \in \mathcal{M}_{m_0}. \tag{11}
\]
As usual in the context of weak formulations for curvature driven interface evolution problems, it will actually be necessary to work with a suitable (oriented) varifold relaxation of E. We refer to Definition 1 below for details in this direction.
In order to encode a weak solution of the Mullins-Sekerka equation as a Hilbert space gradient flow with respect to the interfacial energy E, it still remains to introduce the associated Hilbert space structure. To this end, we first introduce a class of regular test functions, which give rise to infinitesimally volume preserving inner variations, denoted by
\[
\mathcal{S}_\chi := \Big\{ B \in C^1(\Omega;\mathbb{R}^d) : \int_\Omega \chi\, \nabla\cdot B \,\mathrm{d}x = 0,\ B\cdot n_{\partial\Omega} = 0 \text{ on } \partial\Omega \Big\}. \tag{12}
\]
As in Subsection 1.3, we recall the Sobolev space of functions with mass-average zero given by \(H^1_{(0)} := \{u \in H^1(\Omega) : \fint_\Omega u \,\mathrm{d}x = 0\}\) with norm \(\|u\|_{H^1_{(0)}} := \|\nabla u\|_{L^2(\Omega)}\) and dual \(H^{-1}_{(0)} := (H^1_{(0)})^*\).
Based on the test function space S_χ, we can introduce the space \(V_\chi \subset H^{-1}_{(0)}\) as the closure of the regular mass preserving normal velocities generated on the interface associated with χ ∈ M_{m_0}:
\[
V_\chi := \overline{\{ B\cdot\nabla\chi : B \in \mathcal{S}_\chi \}}^{\,H^{-1}_{(0)}} \subset H^{-1}_{(0)}, \tag{13}
\]
where B·∇χ acts on elements \(u \in H^1_{(0)}\) in the distributional sense, i.e., recalling that B·n_{∂Ω} = 0 along ∂Ω for B ∈ S_χ, we have
\[
\langle B\cdot\nabla\chi, u\rangle_{H^{-1}_{(0)},H^1_{(0)}} := -\int_\Omega \chi\,\nabla\cdot(uB) \,\mathrm{d}x. \tag{14}
\]
The space V_χ carries a Hilbert space structure directly induced by the natural Hilbert space structure of \(H^{-1}_{(0)}\). The latter in turn is induced by the inverse \(\Delta_N^{-1}\) of the weak Neumann Laplacian \(\Delta_N : H^1_{(0)} \to H^{-1}_{(0)}\) (which for the Hilbert space \(H^1_{(0)}\) is in fact nothing else but the associated Riesz isomorphism) in the form of
\[
(F, \tilde F)_{H^{-1}_{(0)}} := \int_\Omega \nabla\Delta_N^{-1}(F)\cdot\nabla\Delta_N^{-1}(\tilde F) \,\mathrm{d}x \quad \text{for all } F, \tilde F \in H^{-1}_{(0)}, \tag{15}
\]
so that we may in particular define
\[
\|F\|_{V_\chi}^2 := \|F\|_{H^{-1}_{(0)}}^2 = (F, F)_{H^{-1}_{(0)}}, \qquad F \in V_\chi. \tag{16}
\]
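Spelled out for convenience (with the sign convention \(\langle \Delta_N u, \varphi\rangle := -\int_\Omega \nabla u\cdot\nabla\varphi \,\mathrm{d}x\), which is our choice here), the potential \(u_F := \Delta_N^{-1}F \in H^1_{(0)}\) of \(F \in H^{-1}_{(0)}\) is the unique weak solution of the homogeneous Neumann problem
\[
\int_\Omega \nabla u_F \cdot \nabla\varphi \,\mathrm{d}x = -\langle F, \varphi\rangle_{H^{-1}_{(0)},H^1_{(0)}} \quad \text{for all } \varphi \in H^1_{(0)},
\]
so that \(\|F\|_{H^{-1}_{(0)}} = \|\nabla u_F\|_{L^2(\Omega)}\).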
We remark that the operator norm on \(H^{-1}_{(0)}\) is recovered from the inner product in (15). For the Mullins-Sekerka flow, the space V_χ is the natural space associated with the action of the first variation (i.e., the gradient) of the interfacial energy on S_χ, see (18h) in Definition 1 below.
In view of the Sandier-Serfaty perspective on Hilbert space gradient flows, cf. Subsection 1.3, it would be desirable to capture the time derivative of a trajectory t ↦ χ(·, t) ∈ M_{m_0} within the same bundle of Hilbert spaces. However, given the a priori lack of regularity of weak solutions, it will be necessary to introduce a second space of velocities T_χ (containing the space V_χ) which can be thought of as a maximal tangent space of the formal manifold; this is given by
\[
T_\chi := \overline{\big\{ \mu \in H^{-1}_{(0)} \cap \mathcal{M}(\Omega) : \operatorname{supp}\mu \subset \operatorname{supp}|\nabla\chi| \big\}}^{\,H^{-1}_{(0)}} \subset H^{-1}_{(0)}, \tag{17}
\]
where M(Ω) denotes the space of Radon measures on Ω. Both spaces V_χ and T_χ are spaces of velocities and, from the PDE perspective, associated with these will be spaces for the (chemical) potential. We will discuss this and quantify the separation between V_χ and T_χ in Subsection 2.3. However, despite the necessity to work with two spaces, we emphasize that our gradient flow solution concept still only requires the use of the above formal metric/manifold structure and the above energy functional.
Definition 1 (Varifold solutions of Mullins-Sekerka flow as curves of maximal slope). Let d ∈ {2, 3}, consider a finite time horizon T_* ∈ (0, ∞), and let Ω ⊂ R^d be a bounded domain with orientable C² boundary ∂Ω. For a locally compact and separable metric space X, we denote by M(X) the space of finite Radon measures on X. Fix χ_0 ∈ M_{m_0} and define the associated oriented varifold \(\mu_0 := \mu^\Omega_0 + \mu^{\partial\Omega}_0 \in \mathcal{M}(\Omega\times S^{d-1})\) by
\[
\mu^\Omega_0 := c_0 |\nabla\chi_0|\llcorner\Omega \otimes \big(\delta_{\frac{\nabla\chi_0}{|\nabla\chi_0|}(x)}\big)_{x\in\Omega} \quad \text{and} \quad \mu^{\partial\Omega}_0 := (\cos\alpha)\, c_0\, \chi_0\, \mathcal{H}^{d-1}\llcorner\partial\Omega \otimes \big(\delta_{n_{\partial\Omega}(x)}\big)_{x\in\partial\Omega}.
\]
A measurable map χ : Ω × (0, T_*) → {0, 1} together with \(\mu \in \mathcal{M}((0, T_*)\times\Omega\times S^{d-1})\) is called a varifold solution for Mullins-Sekerka flow (1a)-(1e) with time horizon T_* and initial data (χ_0, µ_0) if:

i) (Structure and compatibility) It holds that \(\chi \in L^\infty(0, T_*; \mathcal{M}_{m_0}) \cap C([0, T_*); H^{-1}_{(0)}(\Omega))\) with \(\mathrm{Tr}|_{t=0}\,\chi = \chi_0\) in \(H^{-1}_{(0)}\). Furthermore, \(\mu = \mathcal{L}^1\llcorner(0, T_*) \otimes (\mu_t)_{t\in(0,T_*)}\), and for each t ∈ (0, T_*) the oriented varifold \(\mu_t \in \mathcal{M}(\Omega\times S^{d-1})\) decomposes as \(\mu_t = \mu^\Omega_t + \mu^{\partial\Omega}_t\) for two separate oriented varifolds given in their disintegrated form by
\[
\mu^\Omega_t =: |\mu^\Omega_t|_{S^{d-1}} \otimes (\lambda_{x,t})_{x\in\Omega} \in \mathcal{M}(\Omega\times S^{d-1}) \tag{18a}
\]
and
\[
\mu^{\partial\Omega}_t =: |\mu^{\partial\Omega}_t|_{S^{d-1}} \otimes \big(\delta_{n_{\partial\Omega}(x)}\big)_{x\in\partial\Omega} \in \mathcal{M}(\partial\Omega\times S^{d-1}). \tag{18b}
\]
Finally, we require that these oriented varifolds contain the interface associated with the phase modelled by χ in the sense of
\[
c_0 |\nabla\chi(\cdot,t)|\llcorner\Omega \le |\mu^\Omega_t|_{S^{d-1}}\llcorner\Omega, \tag{18c}
\]
\[
(\cos\alpha)\, c_0\, \chi(\cdot,t)\, \mathcal{H}^{d-1}\llcorner\partial\Omega \le |\mu^{\partial\Omega}_t|_{S^{d-1}} + |\mu^\Omega_t|_{S^{d-1}}\llcorner\partial\Omega \tag{18d}
\]
for almost every t ∈ (0, T_*).

ii) (Generalized mean curvature) For almost every t ∈ (0, T_*) there exists a function H_χ(·, t) such that
\[
H_\chi(\cdot,t) \in L^s\big(\Omega; \mathrm{d}|\nabla\chi(\cdot,t)|\llcorner\Omega\big), \tag{18e}
\]
where s ∈ [2, 4] if d = 3 and s ∈ [2, ∞) if d = 2, and \(H_{\chi(\cdot,t)}\) is the generalized mean curvature vector of \(\operatorname{supp}|\nabla\chi(\cdot,t)|\llcorner\Omega\) in the sense of Röger [44, Definition 1.1]. Moreover, the first variation δµ_t of µ_t in the direction of a volume preserving inner variation \(B \in \mathcal{S}_{\chi(\cdot,t)}\) is given by
\[
\delta\mu_t(B) = -\int_\Omega c_0\, H_\chi(\cdot,t)\, \frac{\nabla\chi(\cdot,t)}{|\nabla\chi(\cdot,t)|}\cdot B \,\mathrm{d}|\nabla\chi(\cdot,t)|. \tag{18f}
\]
iii) (Mullins-Sekerka motion law as a sharp energy dissipation inequality) For almost every T ∈ (0, T * ), it holds that
E[µ T ]+ 1 2 T 0 (∂ t χ)(·, t) 2 T χ(·,t) + ∂E[µ t ] 2 V χ(·,t) dt ≤ E[µ 0 ],(18g)
where we define by a slight abuse of notation, but still in the spirit of the usual metric slopeà la De Giorgi (cf. (20) and (62) below),
1 2 ∂E[µ] 2 Vχ := sup B∈Sχ δµ B − 1 2 B · ∇χ 2 Vχ dt ,(18h)
and where the energy functional on the varifold level is given by the total mass measure associated with the oriented varifold µ t , i.e.,
E[µ t ] := |µ t | S d−1 (Ω) = |µ Ω t | S d−1 (Ω) + |µ ∂Ω t | S d−1 (∂Ω).(18i)
Finally, we call χ a BV solution for evolution by Mullins-Sekerka flow (1a)-(1e) with initial data (χ 0 , µ 0 ) if there exists µ = L 1 (0, T * )⊗{µ t } t∈(0,T * ) such that (χ, µ) is a varifold solution in the above sense and the varifold µ is given by the canonical lift of χ, i.e., for almost every t ∈ (0, T * ) it holds that
|µ t | S d−1 = c 0 |∇χ(·, t)| Ω + (cos α)c 0 χ(·, t) H d−1 ∂Ω, (19a) λ x,t = δ ∇χ(·,t) |∇χ(·,t)| (x) for (|∇χ(·, t)| Ω)-almost every x ∈ Ω.
(19b)
Before we state the main existence result of this work, let us provide two brief comments on the above definition. First, we note that in Lemma 4 we show that if (χ, µ) is a varifold solution to Mullins-Sekerka flow in the sense of Definition 1, then it is also a solution from a more typical PDE perspective. Second, to justify the notation of (18h), we refer the reader to Lemma 3 where it is shown that if in addition the relation (18f) is satisfied, it holds that
∂E[µ] Vχ = sup Ψ lim sup s→0 (E[µ • Ψ −1 s ] − E[µ]) + χ • Ψ −1 s − χ H −1 (0)
, where the supremum runs over all one-parameter families of diffeomorphisms s → Ψ s ∈ C 1 -Diffeo(Ω, Ω) which are differentiable in an open neighborhood of the origin and further satisfy Ψ 0 = Id, Ω χ•Ψ −1 s dx = m 0 and ∂ s Ψ s | s=0 = B ∈ S χ . Note that the relation ∂ s (χ•Ψ −1 s )| s=0 +(B·∇)χ = 0 enforced by the chain rule, (χ•Ψ −1 s )| s=0 = χ as well as ∂ s Ψ −1 s | s=0 = −∂ s Ψ s | s=0 = −B motivates us to consider V χ as the tangent space for the formal manifold at χ ∈ M m0 . Theorem 1 (Existence of varifold solutions of Mullins-Sekerka flow). Let d ∈ {2, 3}, T * ∈ (0, ∞), and Ω ⊂ R d be a bounded domain with orientable C 2 boundary ∂Ω. Let m 0 ∈ (0, L d (Ω)), χ 0 ∈ M m0 , c 0 ∈ (0, ∞), α ∈ (0, π 2 ], and let µ 0 ∈ M(Ω×S d−1 ) be the associated oriented varifold to this data, cf. Definition 1.
Then, there exists a varifold solution for Mullins-Sekerka flow (1a)-(1e) with initial data (χ 0 , µ 0 ) in the sense of Definition 1.
In fact, each limit point of the minimizing movements scheme associated with the Mullins-Sekerka flow (1a)-(1e), cf. Subsection 3.1, is a solution in the sense of Definition 1. In case of convergence of the time-integrated energies (cf. (63)), the corresponding limit point of the minimizing movements scheme is even a BV solution in the sense of Definition 1.
The proof of Theorem 1 is the content of Subsections 3.1-3.3.
Remark 2.
If instead of the conditions from items ii) and iii) from Definition 1 one asks for the existence of two potentials u ∈ L 2 (0, T * ; H 1 (0) ) and w ∈ L 2 (0, T * ; H 1 ), respectively, which satisfy the conditions (21), (22), (24) and (25) from Lemma 4 below, the results of Theorem 1 in fact hold without any restriction on the ambient dimenion d. We will prove this fact in the course of Subsection 3.4. . Suppose in addition that the tangential first variation of µ is given by a curvature H χ ∈ L 1 (Ω; |∇χ|) in the sense of equation (18f). Then, it holds that
∂E[µ] Vχ = sup Ψ lim sup s→0 (E[µ • Ψ −1 s ] − E[µ]) + χ • Ψ −1 s − χ H −1 (0) ,(20)
where the supremum runs over all one-parameter families of diffeomorphisms s → Ψ s ∈ C 1 -Diffeo(Ω, Ω) which are differentiable in a neighborhood of the origin and further satisfy
Ψ 0 = Id, Ω χ•Ψ −1 s dx = m 0 and ∂ s Ψ s | s=0 = B ∈ S χ . Without (18f)
, the right hand side of (20) provides at least an upper bound.
Next, we aim to interpret the information provided by the sharp energy inequality (18g) from a viewpoint which is more in the tradition of classical PDE theory. More precisely, we show that (18g) together with the representation (18f) already encodes the evolution equation for the evolving phase as well as the Gibbs-Thomson law-both in terms of a suitable distributional formulation featuring an associated potential. We emphasize, however, that without further regularity assumptions on the evolving geometry these two potentials may a priori not agree. This flexibility is in turn a key strength of the gradient flow perspective to allow for less regular evolutions (i.e., a weak solution theory).
Lemma 4 (Interpretation from a PDE perspective). Let (χ, µ) be a varifold solution for Mullins-Sekerka flow with initial data (χ 0 , µ 0 ) in the sense of Definition 1. For a given χ ∈ M m0 , define for each of the two velocity spaces V χ and T χ an associated space of potentials via G χ :
= ∆ −1 N (V χ ) ⊂ H 1 (0) and H χ := ∆ −1 N (T χ ) ⊂ H 1 (0) , respectively. i) There exists a potential u ∈ L 2 (0, T * ; H χ(·,t) ) ⊂ L 2 (0, T * ; H 1 (0) ) such that ∂ t χ = ∆u in Ω×(0, T * ), χ(·, 0) = χ 0 in Ω, and (n ∂Ω · ∇)u = 0 on ∂Ω×(0, T * ), in the precise sense of Ω χ(·, T )ζ(·, T ) dx − Ω χ 0 ζ(·, 0) dx = T 0 Ω χ∂ t ζ dxdt − T 0 Ω ∇u · ∇ζ dxdt(21)
for almost every T ∈ (0, T * ) and all ζ ∈ C 1 (Ω×[0, T * )). ii) There exists a potential w ∈ L 2 (0, T * ; H 1 (Ω)) such that for w 0 := w−− Ω w dx one has w 0 ∈ L 2 (0, T * ; G χ(·,t) ) ⊂ L 2 (0, T * ; H 1 (0) ), and further satisfies the following three properties: first, the Gibbs-Thomson law
Ω×S d−1 (Id−s ⊗ s) : ∇B(x) dµ t (x, s) = Ω χ(·, t)∇ · (w(·, t)B) dx(22)
holds true for almost every t ∈ (0, T * ) and all B ∈ C 1 (Ω; R d ) such that (B · n ∂Ω )| ∂Ω ≡ 0; second, it holds that
Ω 1 2 |∇w(·, t)| 2 dx = 1 2 w 0 (·, t) 2 G χ(·,t) = 1 2 ∂E[χ(·, t), µ t ] 2 V χ(·,t)(23)
for almost every t ∈ (0, T * ); and third, there is
C = C(Ω, d, c 0 , m 0 , χ 0 ) > 0 such that w(·, t) H 1 (Ω) ≤ C(1 + ∇w(·, t) L 2 (Ω) )(24)
for almost every t ∈ (0, T * ). iii) The energy dissipation inequality holds true in the sense that
E[µ T ] + T 0 Ω 1 2 |∇u| 2 + 1 2 |∇w| 2 dxdt ≤ E[µ 0 ](25)
for almost every T ∈ (0, T * ).
Note that in view of Proposition 6, item ii), and the trace estimate (35) from below, if (χ, µ) is a varifold solution that is a limit point of the minimizing movements scheme (see Section 3.1), we may in particular deduce that
c 0 H χ (·, t) = w(·, t)(26)
for almost every t ∈ (0, T * ) up to sets of (|∇χ(·, t)| Ω)-measure zero. Via similar arguments, for any varifold solution, (26) holds up to a constant, conceptually consistent with (6) and Subsections 2.3 and 2.4. Next, we show subsequential compactness of our solution concept and consistency with classical solutions. To formulate the latter, we make use of the notion of a time-dependent family A = (A (t)) t∈[0,T * ) of smoothly evolving subsets A (t) ⊂ Ω, t ∈ [0, T * ). More precisely, each set A (t) is open and consists of finitely many connected components (the number of which is constant in time). Furthermore, the reduced boundary of A (t) in R d differs from its topological boundary only by a finite number of contact sets on ∂Ω (the number of which is again constant in time) represented by ∂(∂ * A (t) ∩ Ω) = ∂(∂ * A (t) ∩ ∂Ω) ⊂ ∂Ω. The remaining parts of ∂A (t), i.e., ∂ * A (t) ∩ Ω and ∂ * A (t) ∩ ∂Ω, are smooth manifolds with boundary (which for both is given by the contact points manifold).
Lemma 5 (Properties of the space of varifold solutions). Let the assumptions and notation of Theorem 1 be in place.
i) (Consistency) Let (χ, µ) be a varifold solution for Mullins-Sekerka flow in the sense of Definition 1 which is smooth, i.e., χ(
x, t) = χ A (x, t) := χ A (t) (x) for a smoothly evolving family A = (A (t)) t∈[0,T * )
. Furthermore, assume that (18f) also holds with δµ t replaced on the left hand side by δE[χ(·, t)] (which for a BV solution does not represent an additional constraint). Then, A is a classical solution for Mullins-Sekerka flow in the sense of (1a)-(1e). If one assumes in addition that 1 c0 µ Ω t ∈ M(Ω×S d−1 ) is an integer rectifiable oriented varifold, it also holds that Vice versa, any classical solution A of Mullins-Sekerka flow (1a)-(1e) gives rise to a (smooth) BV solution χ = χ A in the sense of Definition 1. ii) (Subsequential compactness of the solution space) Let (χ k , µ k ) k∈N be a sequence of BV solutions with initial data (χ k,0 , µ k,0 ) and time horizon 0 < T * < ∞ in the sense of Definition 1. Assume that the associated energies t → E[(µ k ) t ] are absolutely continuous functions for all k ∈ N, that sup k∈N E[µ k,0 ] < ∞, and that the sequence (|∇χ k,0 | Ω) k∈N is tight. Then, one may find a subsequence {k n } n∈N , data (χ 0 , µ 0 ), and a varifold solution (χ, µ) with initial data (χ 0 , µ 0 ) and time horizon T * in the sense of Definition 1 such that
χ kn → χ in L 1 (Ω×(0, T * )) as well as µ kn * ⇀ µ in M((0, T * )×Ω×S d−1 ) as n → ∞.
We remark that the above compactness is formulated in terms of BV solutions so that the generalized mean curvature (Röger's interpretation [44]) is recovered in the limit, an argument which requires the use of geometric machinery developed by Schätzle for varifolds with first variation given in terms of a Sobolev function [47]. One can alternatively formulate compactness over the space GM M (χ k,0 ) of generalized minimizing movements, introduced by Ambrosio et al. [7]. The space GM M (χ k,0 ) is given by all limit points as h → 0 of the minimizing movements scheme introduced in Subsection 3.1. By Theorem 1, every element of GM M (χ k,0 ) is a varifold solution of Mullins-Sekerka flow with initial value χ k,0 . Though we do not prove this, a diagonalization argument shows that for a sequence of initial data as in Part ii) of Lemma 5, (χ k , µ k ) belonging to GM M (χ k,0 ) are precompact and up to a subsequence converge to (χ, µ) in GM M (χ 0 ). Note that this compactness result holds without the assumption of absolute continuity of the associated energies E[(µ k ) t ].
As indicated by the above remark, solutions arising from the minimizing movements scheme can satisfy additional properties. In the following proposition, we collect both structural and regularity properties for the time-evolving varifold associated with a solution which is a limit point of the minimizing movements scheme of Subsection 3.1.
Proposition 6 (Further structure of the evolving varifold for limit points of minimizing movements approximation). Let the assumptions and notation of Theorem 1 be in place. Let (χ, µ) be a varifold solution for Mullins-Sekerka flow (1a)-(1e) with initial data (χ 0 , µ 0 ) in the sense of Definition 1. Assume that (χ, µ) is obtained as a limit point of the minimizing movements scheme (cf. Subsection 3.1) naturally associated with Mullins-Sekerka flow (1a)-(1e).
Then, the varifold µ satisfies the following additional properties: i) (Stronger compatibility conditions) Consider some η ∈ C ∞ (Ω; R d ) such that n ∂Ω · η = 0 on ∂Ω, and consider some ξ ∈ C ∞ (Ω; R d ) with n ∂Ω · ξ = cos α on ∂Ω. Then, it holds that
Ω×S d−1 s · η(x) dµ Ω t (x, s) = Ω c 0 ∇χ(·, t) |∇χ(·, t)| · η(·) d|∇χ(·, t)|,(29)− Ω×S d−1 s · ξ(x) dµ Ω t (x, s) = − Ω c 0 ∇χ(·, t) |∇χ(·, t)| · ξ(·) d|∇χ(·, t)|,(30)+ |µ ∂Ω t | S d−1 (∂Ω) − ∂Ω (cos α)c 0 χ(·, t) dH d−1
for almost every t ∈ (0, T * ). ii) (Integrability of generalized mean curvature vector w.r.t. tangential variations, cf. Röger [45] and Schätzle [47]) For almost every t ∈ (0, T * ), the generalized mean curvature H χ (·, t) from item ii) of Definition 1 satisfies (18f) not only for B ∈ S χ(·,t) but also for all
B ∈ C 1 (Ω; R d ) with B · n ∂Ω = 0 along ∂Ω. Furthermore, for each s ∈ [2, 4] if d = 3 or else s ∈ [2, ∞), there exists C = C(Ω, d, s, c 0 , m 0 , χ 0 ) > 0 (independent of t) such that Ω |H χ (·, t)| s d|∇χ(·, t)| 1 s ≤ C 1+ max 1, ∂E[µ t ] d V χ(·,t) 1+ 1 s (31)
for almost every t ∈ (0, T * ). iii) (Global first variation estimate on Ω) For almost every t ∈ (0, T * ), the oriented varifold µ t is of bounded first variation on Ω such that
sup |δµ t (B)| : B ∈ C 1 (Ω), B L ∞ ≤ 1 ≤ C 1+ max 1, ∂E[µ t ] d V χ(·,t) 3/2 (32) for some C = C(Ω, d, c 0 , m 0 , χ 0 ) > 0 independent of t.
The proof of the last two items of the previous result is based upon the following two auxiliary results, which we believe are worth mentioning on their own. Proposition 7 (First variation estimate up to the boundary for tangential varia-
tions). Let d ∈ {2, 3}, let Ω ⊂ R d be a bounded domain with orientable C 2 bound- ary ∂Ω, let w ∈ H 1 (Ω), let χ ∈ BV (Ω; {0, 1}), and let µ = |µ| S d−1 ⊗ (λ x ) x∈Ω ∈ M(Ω×S d−1 )
be an oriented varifold such that c 0 |∇χ| Ω ≤ |µ| S d−1 Ω in the sense of measures for some constant c 0 > 0. Assume moreover that the Gibbs-Thomson law holds true in form of
Ω×S d−1 (Id − s ⊗ s) : ∇B dµ = Ω χ∇ · (wB) dx(33)
for all tangential variations B ∈ C 1 (Ω; R d ), (B · n ∂Ω )| ∂Ω ≡ 0. There exists r = r(∂Ω) ∈ (0, 1) such that for all x 0 ∈ Ω with dist(x 0 , ∂Ω) < r and all exponents s ∈ [2,4]
if d = 3 or otherwise s ∈ [2, ∞) there exists a constant C = C(r, s, d) > 0 such that Br (x0)∩Ω |w| s d|µ| S d−1 1 s ≤ C 1 + |µ| S d−1 (Ω) + w d H 1 (Ω) 1+ 1 s .(34)
In particular, the varifold µ is of bounded variation with respect to tangential variations (with generalized mean curvature vector H Ω trivially given by
ρ Ω w c0 ∇χ |∇χ| where ρ Ω := c0|∇χ| Ω |µ| S d−1 Ω ∈ [0, 1], cf. (33)
) and the potential satisfies
Ω |w| s d|µ| S d−1 1 s ≤ C 1 + |µ| S d−1 (Ω) + max 1, w d H 1 (Ω) 1+ 1 s .(35)
By a recent work of De Masi [17], one may post-process the previous result to the following statement.
H Ω = ρ Ω w c 0 ∇χ |∇χ| , ρ Ω := c 0 |∇χ| Ω |µ| S d−1 Ω ∈ [0, 1],(36)H ∂Ω ∈ L ∞ (∂Ω, d|µ| S d−1 ), H ∂Ω (x) ⊥ Tan x ∂Ω for |µ| S d−1 ∂Ω-a.e. x ∈ Ω, (37) σ µ ∈ M(∂Ω),(38)
such that the first variation δµ of µ is represented by
δµ(B) = − Ω (H Ω +H ∂Ω ) · B d|µ| S d−1 + ∂Ω B · n ∂Ω dσ µ(39)
for all B ∈ C 1 (Ω; R d ). Furthermore, there exists C = C(Ω) > 0 (depending only on the second fundamental form of the domain boundary ∂Ω) such that
sup |δµ(B)| : B ∈ C 1 (Ω), B L ∞ ≤ 1 ≤ C |µ| S d−1 (Ω)+ w L 1 (Ω,d|µ| S d−1 ) , (40) H ∂Ω L ∞ (∂Ω,d|µ| S d−1 ) ≤ C,(41)σ µ (∂Ω) ≤ C|µ| S d−1 (Ω) + w L 1 (Ω,d|µ| S d−1 )
. (42) 2.3. A closer look at the functional framework. In this subsection, we characterize the difference between the velocity spaces V χ and T χ , defined in (13) and (17) respectively, by expressing the quotient space T χ /V χ in terms of a distributional trace space and quasi-everywhere trace space (see (53)). As an application, this result will show that if |∇χ| is given by the surface measure of a Lipschitz graph, then the quotient space collapses to a point and V χ = T χ . Both spaces V χ and T χ are spaces of velocities, and associated with these will be spaces of potentials where one expects to find the chemical potential. For this, we recall that the inverse of the weak Neumann Laplacian ∆ −1 N :
H −1 (0) → H 1 (0) is defined by u F := ∆ −1 N (F ), F ∈ H −1 (0) , where u F ∈ H 1 (0) is the unique weak solution of the Neumann problem ∆u F = F in Ω, (n ∂Ω · ∇)u F = 0 on ∂Ω.(43)
Recall also that ∆ −1 N : H −1 (0) → H 1 (0) defines an isometric isomorphism (with respect to the Hilbert space structures on H 1 (0) and H −1 (0) defined in Subsection 2.1), and since ∆ N is nothing else but the Riesz isomorphism for the Hilbert space
H 1 (0) , the relation (u F , v) H 1 (0) = F, v H −1 (0) ,H 1 (0)(44)
holds for all v ∈ H 1 (0) . We then introduce a space of potentials associated with V χ given by
G χ := ∆ −1 N (V χ ) ⊂ H 1 (0) .(45)
Likewise we can introduce the space of potentials associated to the "maximal tangent space" T χ given by
H χ := ∆ −1 N (T χ ) ⊂ H 1 (0) .(46)
To understand the relation between the spaces V χ and T χ , we will develop annihilator relations for G χ and H χ in H 1 (0) . Throughout the remainder of this subsection, we identify H 1 (0) with H 1 (Ω)/R, the Sobolev space quotiented by constants, which allows us to consider any v ∈ H 1 (Ω) as an element of H 1 (0) . By [3, Corollary 9.1.7] of Adams and Hedberg,
T χ = {F ∈ H −1 (0) : F, v H −1 (0) ,H 1 (0) = 0 for all v ∈ C 1 c (Ω \ supp |∇χ|)}.
Using (44) and (46), this implies that the space
H 1 0,supp |∇χ| := {v ∈ C 1 c (Ω \ supp |∇χ|)} H 1 (0) ,(47)
satisfies the annihilator relation with (·, ·) H 1 (0)
H 1 0,supp |∇χ| = H ⊥ χ .(48)= H ⊥ χ ,(49)
where u ∈ H 1 (0) ∩ ker Tr supp |∇χ| if and only if Tr supp |∇χ| u ≡ c for some c ∈ R. Similarly, one may use the definition (14) and the relation (44) to show that
G ⊥ χ = u ∈ H 1 (0) : Ω χ∇u · B dx = − Ω χu∇ · B dx for all B ∈ S χ .(50)
For
B ∈ C 1 (Ω; R d ) such that Ω χ∇·B dx = 0, one can consider fixed ξ ∈ C 1 (Ω; R d )
with ξ · n ∂Ω = 0 on ∂Ω such that Ω χ∇ · ξ dx = 0 and use the corrected functioñ (50) to see that the above relation is equivalent to
B := B − Ω χ∇·B dx Ω χ∇·ξ dx ξ inG ⊥ χ = u ∈ H 1 (0) : Ω χ∇(u + c) · B dx = − Ω χ(u + c)∇ · B dx(51)
for some c ∈ R and all B ∈ C 1 (Ω; R d ) with B · n ∂Ω = 0 on ∂Ω .
Thus, G ⊥ χ is the space functions in H 1 (0) which have vanishing trace on supp |∇χ| in a distributional sense.
We now show that G χ ⊂ H χ , which is equivalent to V χ ⊂ T χ . First note that (48) implies
H χ = u ∈ H 1 (0) : ∆u = 0 in Ω \ supp |∇χ| .(52)
As a technical tool, we remark that for fixed v ∈ C 1 c (Ω \ supp |∇χ|), up to a representative, vχ ∈ C 1 c (Ω). To see this, let χ = χ A for A ⊂ Ω and note for any
x ∈ supp v we can find r > 0 such that |B(x, r)∩A| = |B(x, r)| or |B(x, r)∩(Ω\A)| = |B(x, r)|.
We construct a finite cover of supp v given by C := ∪ i B(x i , r i ), and define the set
C ′ := xi:|B(xi,ri)∩A|=|B(xi,ri)| B(x i , r i ).
We have that
vχ = v in C ′ , 0 otherwise,
which is smooth as the balls used in C ′ are disjoint from those balls such that |B(x, r) ∩ (Ω \ A)| = |B(x, r)|, completing the claim. Now, for u ∈ G χ given by u = u B·∇χ with B ∈ S χ , by (44) we compute
(u, v) H 1 (0) = − Ω ∇ · (vB)χ dx = − Ω ∇ · ((vχ)B) dx = 0.
It follows that G χ ⊂ H χ by (47) and (48). Using the quotient space isomorphism Y /X ≃ X ⊥ for a closed subspace X of Y [15, Theorem III.10.2] and the subset relation
G χ ⊂ H χ , we have H χ G χ ≃ G ⊥ χ H ⊥ χ .
Consequently, unifying the results of this subsection, the following characterization of the difference between the velocity spaces follows:
T χ V χ ≃ H χ G χ ≃ (53) u ∈ H 1 (0) : Ω χ∇u · B dx = − Ω χu∇ · B dx for all B ∈ S χ ker Tr supp |∇χ| .
In summary, the gap in the velocity spaces V χ and T χ is exclusively due to a loss in regularity of the interface and amounts to the gap between having the trace in a distributional sense (see (51)) versus a quasi-everywhere sense.
2.4.
On the relation to Le's functional framework. We now have sufficient machinery to discuss our solution concept in relation to the framework developed by Le in [31]. Within Le's work, the critical dissipation inequality for Γ t := supp |∇χ(·, t)|, a C 3 space-time interface, to be a solution of the Mullins-Sekerka flow is given by (7) with f = H Γ (the curvature) and H −1/2 (Γ) again defined by duality and normed by means of the Riesz representation theorem (see also Lemma 2.1 of Le [31]),
E[χ(·, T )] + T 0 1 2 ∂ t χ 2 H −1/2 (Γt) + 1 2 H Γt 2 H 1/2 (Γt) dt ≤ E[χ 0 ], where H Γ H 1/2 (Γ) = ∇f L 2 (Ω) forf satisfyingH −1/2 (Γ) ≃ ∆ N ({u ∈ H 1 (0) : ∆u = 0 in Ω \ Γ}).(54)
As this is simply the image under the weak Neumann Laplacian of functions u associated with the problem (7), we can rewrite this as
H −1/2 (Γ) = ∆ N (H 1/2 (Γ)).(55)
Considering our solution concept now, let (χ, µ) be a solution in the sense of Definition 1 such that Γ t := supp|∇χ(·, t)| is a Lipschitz surface for a.e. t. By (52) and (54), T χ(·,t) = H −1/2 (Γ t ). Then as the classical trace space is well-defined, the isomorphism (53) collapses to the identity showing that
T χ(·,t) = ∆ N (G χ(·,t) ),(56)
verifying the analogue of (55) and implying that G χ(·,t) = H 1/2 (Γ t ). Further, this discussion and (26) (letting c 0 = 1 for convenience) show that
∂ t χ T χ(·,t) = ∂ t χ H −1/2 (Γt) and w(·, t) G χ(·,t) = H Γt H 1/2 (Γt) .
Looking to (18g) and (23), we see that our solution concept naturally subsumes Le's, preserves structural relations on the function spaces, and works without any regularity assumptions placed on Γ.
Though beyond the scope of our paper, a natural question following from the discussion of this subsection and the prior is when does the relation (56) or the inclusion ∂ t χ ∈ V χ ⊂ T χ hold. By (53), both will follow if zero distributional trace is equivalent to having zero trace in the quasi-everywhere sense. Looking towards results on traces (see, e.g., [12], [38], and [39]), characterization of this condition will be a nontrivial result, and applying similar ideas to the Mullins-Sekerka flow may require a fine characterization of the singular set from Allard's regularity theory [5].
2.5. Motivation from the viewpoint of weak-strong uniqueness. Another major motivation for our weak solution concept, especially for the inclusion of a sharp energy dissipation principle, is drawn from the recent progress on uniqueness properties of weak solutions for various curvature driven interface evolution problems. More precisely, it was established that for incompressible Navier-Stokes two-phase flow with surface tension [19] and for multiphase mean curvature flow [21] (cf. also [25] or [26]), weak solutions with sharp energy dissipation rate are unique within a class of sufficiently regular strong solutions (as long as the latter exist, i.e., until they undergo a topology change). Such weak-strong uniqueness principles are optimal in the sense that weak solutions in geometric evolution may in general be non-unique after the first topology change. Extensions to constant contact angle problems as considered in the present work are possible as well, see [27] for Navier-Stokes two-phase flow with surface tension or [24] for mean curvature flow.
The weak-strong uniqueness results of the previous works rely on a Gronwall stability estimate for a novel notion of distance measure between a weak and a sufficiently regular strong solution. The main point is that this distance measure is in particular able to penalize the difference in the location of the two associated interfaces in a sufficiently strong sense. Let us briefly outline how to construct such a distance measure in the context of the present work (i.e., interface evolution in a bounded container with constant contact angle (1e)). To this end, it is convenient to assume next to (18c) and (18d) the two additional compatibility conditions (29) and (30) from Proposition 6. Under these additional assumptions, we claim that the following functional represents a natural candidate for the desired error functional:
E rel [χ, µ|A ](t) := |µ t | S d−1 (Ω) − ∂ * A(t)∩Ω c 0 ∇χ(·, t) |∇χ(·, t)| · ξ(·, t) d|∇χ(·, t)| − ∂Ω (cos α)c 0 χ(·, t) dH d−1 ,
where ξ(·, t) : Ω → {|x|≤1} denotes a suitable extension of the unit normal vector field n ∂A (t) of ∂A (t) ∩ Ω. Due to the compatibility conditions (18c) and (18d) as well as the length constraint |ξ| ≤ 1, it is immediate that E rel ≥ 0. The natural boundary condition for ξ(·, t) turns out to be (ξ(·, t) · n ∂Ω )| ∂Ω ≡ cos α. Indeed, this shows by means of an integration by parts that
E rel [χ, µ|A ](t) = |µ t | S d−1 (Ω) + Ω c 0 χ(·, t)∇ · ξ dx.
The merit of the previous representation of E rel is that it allows one to compute the time evolution of E rel relying in a first step only on the De Giorgi inequality (18g) and using ∇ · ξ as a test function in the evolution equation (21). Furthermore, the compatibility condition (30) yields that
Ω×S d−1 1 2 |s − ξ| 2 dµ Ω t ≤ Ω×S d−1 1 − s · ξ dµ Ω t = E rel [χ, µ|A ](t),
which in turn implies a tilt-excess type control provided by E rel at the level of the varifold interface. Further coercivity properties may be derived based on the compatibility conditions (18c) and (18d) in form of the associated Radon-Nikodỳm
derivatives ρ Ω t := c0|∇χ(·,t)| Ω |µ Ω t | S d−1 Ω ∈ [0, 1] and ρ ∂Ω t := (cos α)c0χ(·,t)H d−1 ∂Ω |µ ∂Ω t | S d−1 +|µ Ω t | S d−1 ∂Ω ∈ [0, 1]
, respectively. More precisely, one obtains the representation
E rel [χ, µ|A ](t) = Ω 1 − ρ Ω t d|µ Ω t | S d−1 + ∂Ω 1 − ρ ∂Ω t d |µ Ω t | S d−1 +|µ ∂Ω t | S d−1 + Ω c 0 1 − ∇χ(·, t) |∇χ(·, t)| · ξ(·, t) d|∇χ(·, t)|.
The last of these right hand side terms ensures tilt-excess type control at the level of the BV interface
c 0 Ω 1 2 ∇χ(·, t) |∇χ(·, t)| − ξ(·, t) 2 d|∇χ(·, t)| ≤ E rel [χ, µ|A ](t).
The other three simply penalize the well-known mass defects (i.e., mass moving out from the bulk to the domain boundary, or the creation of hidden boundaries within the bulk) originating from the lack of continuity of the perimeter functional under weak- * convergence in BV .
In summary, the requirements of Definition 1 (together with the two additional mild compatibility conditions (29) and (30)) allow one to define a functional which on one side penalizes, in various ways, the "interface error" between a varifold and a classical solution, and which on the other side has a structure supporting at least in principle the idea of proving a Gronwall-type stability estimate for it. One therefore may hope that varifold solutions for Mullins-Sekerka flow in the sense of Definition 1 satisfy a weak-strong uniqueness principle together with a weak-strong stability estimate based on the above error functional. In the simplest setting of α = π 2 , a BV solution χ, and assuming no boundary contact for the interface of the classical solution A , this is at the time of this writing work in progress [20].
For the present contribution, however, we content ourselves with the above existence result (i.e., Theorem 1) for varifold solutions to Mullins-Sekerka flow in the sense of Definition 1 together with establishing further properties of these.
Existence of varifold solutions to Mullins-Sekerka flow
3.1. Key players in minimizing movements. To construct weak solutions for the Mullins-Sekerka flow (1a)-(1e) in the precise sense of Definition 1, it comes at no surprise that we will employ the gradient flow perspective in the form of a minimizing movements scheme, which we pass to the limit. Given an initial condition χ 0 ∈ M m0 (see (9)), a fixed time step size h ∈ (0, 1), and E as in (11), we let χ h 0 := χ 0 and choose inductively for each n ∈ N χ h n ∈ arg min
χ∈Mm 0 E[ χ ] + 1 2h χ h n−1 − χ 2 H −1 (0) ,(57)χ h (t) := χ h n−1 for all t ∈ [(n−1)h, nh), n ∈ N,(58)
satisfies the energy dissipation estimate
E[χ h (T )] + T 0 1 2h 2 χ h (t+h) − χ h (t) 2 H −1 (0) dt ≤ E[χ 0 ] for all T ∈ Nh.(59)
Although the previous inequality is already enough for usual compactness arguments, it is obviously not sufficient, however, to establish the expected sharp energy dissipation inequality (cf. (4)) in the limit as h → 0. It goes back to ideas of De Giorgi how to capture the remaining half of the dissipation energy at the level of the minimizing movements scheme, versus, for example, recovering the dissipation from the regularity of a solution to the limit equation. The key ingredient for this is a finer interpolation than the piecewise constant one, which in the literature usually goes under the name of De Giorgi (or variational) interpolation and is defined as follows:
χ h ((n−1)h) := χ h ((n−1)h) = χ h n−1 , n ∈ N, χ h (t) ∈ arg min χ∈Mm 0 E[ χ ]+ 1 2(t−(n−1)h) χ h ((n−1)h) − χ 2 H −1 (0) , t ∈ ((n−1)h, nh).(60)
The merit of this second interpolation consists of the following improved (and now sharp) energy dissipation inequality
E[χ h (T )] + T 0 1 2h 2 χ h (t+h) − χ h (t) 2 H −1 (0) dt + T 0 1 2 ∂E[χ h (t)] 2 d dt ≤ E[χ 0 ],(61)
with T ∈ Nh [7]. The quantity |∂E[χ]| d is usually referred to as the metric slope of the energy E at a given point χ ∈ M m0 , and in our context may more precisely be defined by
∂E[χ] d := lim sup χ∈Mm 0 : χ− χ H −1 (0) →0 E[χ] − E[ χ] + χ − χ H −1 (0) .(62)
We remind the reader that (61) is a general result for abstract minimizing movement schemes requiring only to work on a metric space. However, as it turns out, we will be able to preserve a formal manifold structure even in the limit. This in turn is precisely the reason why the "De Giorgi metric slope" appearing in our energy dissipation inequality (18g) is computed only in terms of inner variations, see (20). With these main ingredients and properties of the minimizing movements scheme in place, our main task now consists of passing to the limit h → 0 and identifying the resulting (subsequential but unconditional) limit object as a varifold solution to Mullins-Sekerka flow (1a)-(1e) in our sense. Furthermore, to obtain a BV solution, we will additionally assume, following the tradition of Luckhaus and Sturzenhecker [35], that for a subsequential limit point χ obtained from (105) below, it holds that
T * 0 E[χ h (t)] dt → T * 0 E[χ(t)] dt.(63)
3.2. Three technical auxiliary results. For the Mullins-Sekerka equation, a mass preserving flow, it will be helpful to construct "smooth" mass-preserving flows corresponding to infinitesimally mass-preserving velocities, i.e., velocities in the test function class S χ (see (12)). Using these flows as competitors in (60) and considering the associated Euler-Lagrange equation, it becomes apparent that an approximate Gibbs-Thomson relation holds for infinitesimally mass-preserving velocities. To extend this relation to arbitrary variations (tangential at the boundary) we must control the Lagrange multiplier arising from the mass constraint. Though the first lemma and the essence of the subsequent lemma is contained in the work of Abels and Röger [2] or Chen [13], we include the proofs for both completeness and to show that the result is unperturbed if the energy exists at the varifold level.
Lemma 9. Let χ ∈ M 0 and B ∈ S χ . Then there exists η > 0 and a family of
C 1 diffeomorphisms Ψ s : Ω → Ω depending differentiably on s ∈ (−η, η) such that Ψ 0 (x) = x, ∂ s Ψ s (x)| s=0 = B(x), and Ω χ • Ψ −1 s = m 0 for all s ∈ (−η, η).
Proof. Fix ξ ∈ C ∞ (Ω) such that (ξ · n ∂Ω )| ∂Ω ≡ 0 and Ω χ∇ · ξ dx = 0. Naturally associated to B and ξ are flow-maps β s and γ r solving ∂ s β s (x) = B(β s (x)) and ∂ r γ r (x) = ξ(γ r (x)), each with initial condition given by the identity map, i.e., β 0 (x) = x. Define the function f , which is locally differentiable near the origin, by
f (s, r) := Ω χ • (β s • γ r ) −1 − m 0 = Ω χ (det(∇(β s • γ r )) − 1) dx.
As f (0, 0) = 0 and ∂ r f (0, 0) = Ω χ∇ · ξ dx = 0 by assumption, we may apply the implicit function theorem to find a differentiable function r = r(s) with r(0) = 0 such that f (s, r(s)) = 0 for s near 0. We can further compute that (see (74))
∂ s f (0, 0) = Ω χ ∇ · B + r ′ (0)∇ · ξ dx.
Rearranging, we find
r ′ (0) = − Ω χ∇ · B dx Ω χ∇ · ξ dx = 0,
and thus the flow given by β s • γ r(s) satisfies ∂ s (β s • γ r(s) )| s=0 = B + r ′ (0)ξ = B, thereby providing the desired family of diffeomorphisms.
Proof. For ξ ∈ C ∞ (Ω) such that (ξ · n ∂Ω )| ∂Ω ≡ 0 and Ω χ∇ · ξ dx = 0, we have that B := B − Ω χ∇·B dx Ω χ∇·ξ dx ξ belongs to S χ , and plugging this into (64) and rearranging, one finds
δµ(B) − Ω χ∇ · (wB) dx = λ Ω χ∇ · B dx, where λ = δµ(ξ) − Ω χ∇ · (wξ) dx Ω χ∇ · ξ dx .(66)
To conclude the lemma, it suffices to make a careful selection of ξ such that
|λ| ≤ C(1 + |∇χ|(Ω)) |µ|(Ω) + ∇w L 2 (Ω) .(67)
Let ρ ǫ be a standard mollifier for ǫ > 0, and let χ ǫ := χ * ρ ǫ with m ǫ := − Ω χ ǫ dx. We solve the Poisson problem
∆φ ǫ = χ ǫ − m ǫ in Ω, (n ∂Ω · ∇)φ ǫ = 0 on ∂Ω, − Ω φ ǫ dx = 0.
As χ ǫ − m ǫ C 1 ≤ C(Ω)/ǫ, we can apply Schauder estimates to find
φ ǫ C 2 (Ω) ≤ C(Ω)/ǫ.(68)
Noting the L 1 estimate
χ ǫ − χ L 1 (Ω) ≤ C(Ω)ǫ (1 + |∇χ|(Ω))
and m ǫ ≤ m 0 , we have
Ω χ∇ · φ ǫ dx = Ω χ(χ ǫ − m ǫ ) dx = (1 − m ǫ )m 0 |Ω| + Ω χ(χ ǫ − χ) dx ≥ (1 − m 0 )m 0 |Ω| − C(Ω)ǫ (1 + |∇χ|(Ω)) ≥ C(m 0 , Ω)(69)
where we have now fixed ǫ = (1−m0)m0|Ω| 4C(Ω)(1+|∇χ|(Ω)) . Choosing ξ = ∇φ ǫ , by (68), (69), the Poincaré inequality for w, and the first variation formula
δµ(ξ) = Ω×S d−1 (Id − s ⊗ s) : ∇ξ(x) dµ(x, s),
we conclude (67) from (66).
We finally state and prove a result which is helpful for the derivation of approximate Gibbs-Thomson laws from the optimality condition (60) of De Giorgi interpolants and is also needed in the proof of Lemma 3.
Lemma 11. Let χ ∈ M 0 and B ∈ S χ , and let (Ψ s ) s∈(−η,η) be an associated family of diffeomorphisms from Lemma 9. Then, for any φ ∈ H 1 (0) it holds
Ω φ χ • Ψ −1 s − χ s dx + B · ∇χ, φ H −1 (0) ,H 1 (0) ≤ φ H 1 (0) o s→0 (1).(70)
In particular, taking the supremum over φ ∈ H 1 (0) with φ H 1 (0) ≤ 1 in (70) implies
χ•Ψ −1 s −χ s → −B · ∇χ strongly in H −1 (0) as s → 0.
Proof. To simplify the notation, we denote χ s := χ • Ψ −1 s . Heuristically, one expects (70) by virtue of the formal relation ∂ s χ s | s=0 = −(B · ∇)χ mentioned after (20). A rigorous argument is given as follows.
Using the product rule, we first expand (14) as
Ω φ χ s − χ s dx + B · ∇χ, φ H −1 (0) ,H 1 (0) = Ω φ χ s − χ s dx − Ω χ (B · ∇)φ + φ∇ · B dx.(71)
Recalling χ s = χ•Ψ −1 s , using the change of variables formula for the map x → Ψ s (x), and adding zero entails
Ω φ χ s − χ s dx = Ω χ φ • Ψ s − φ s dx + Ω χ(φ • Ψ s ) | det ∇Ψ s | − 1 s dx.(72)
Inserting (72) into (71), we have that
Ω φ χ s − χ s dx + B · ∇χ, φ H −1 (0) ,H 1 (0) = I + II,(73)
where
I := Ω χ φ • Ψ s − φ s dx − Ω χ(B · ∇)φ dx, II := Ω χ(φ • Ψ s ) | det ∇Ψ s | − 1 s − Ω χφ∇ · B dx.
To estimate II, we first Taylor expand ∇Ψ s (x) = Id+ s∇B(x)+ F s (x) where, by virtue of the regularity of s → Ψ s and B, the remainder satisfies the upper bound sup x∈Ω |F s (x)| ≤ so s→0 (1). In particular, from the Leibniz formula we deduce
det ∇Ψ s (x) − 1 = s(∇ · B)(x) + f s (x),(74)
where the remainder satisfies the same qualitative upper bound as F s (x). Note that by restricting to sufficiently small s, we may ensure that det ∇Ψ s = | det ∇Ψ s |. Hence, using (74), then adding zero to reintroduce the determinant for a change of variables, and applying the continuity of translation (by a diffeomorphism) in L 2 (Ω), we have
II = φ L 2 (Ω) o s→0 (1) + Ω χ∇ · B(φ • Ψ s − φ) dx = φ L 2 (Ω) o s→0 (1) + Ω φ (χ∇ · B) • Ψ −1 s − χ∇ · B dx + Ω χ∇ · B (φ • Ψ s ) (1−| det ∇Ψ λ (x)|) dx ≤ φ L 2 (Ω) o s→0 (1).(75)
To estimate I, we first make use of the fundamental theorem of calculus along the trajectories determined by Ψ s , reintroduce the determinant by adding zero as in (75), and apply d dλ
Ψ λ (x) = B(x) + o λ→0 (1) to see Ω χ φ • Ψ s − φ s dx = − s 0 Ω χ∇φ Ψ λ (x) · d dλ Ψ λ (x) dxdλ = − s 0 Ω χ∇φ Ψ λ (x) · B(x)| det ∇Ψ λ (x)| dxdλ + ∇φ L 2 (Ω) o s→0 (1).
Hence, by undoing the change of variables in the first term on the right hand side of the previous identity, an argument analogous to the one for II guarantees
I ≤ ∇φ L 2 (Ω) o s→0 (1).(76)
Looking to (73), we use the two respective estimates for I and II given in (76) and (75). By an application of Poincaré's inequality, we arrive at (70).
3.3.
Proof of Theorem 1. We proceed in several steps.
Step 1: Approximate solution and approximate energy inequality. For time discretization parameter h ∈ (0, 1) and initial condition χ 0 ∈ BV (Ω; {0, 1}), we define the sequence {χ h n } n∈N0 as in (57), and recall the piecewise constant function χ h in (58) and the De Giorgi interpolantχ h in (60). We further define the linear interpolantχ h bŷ
χ h (t) = nh − t h χ h n−1 + t − (n−1)h h χ n h for all t ∈ [(n−1)h, nh), n ∈ N.(77)
To capture fine scale behavior of the energy in the limit, we will introduce measures µ h = L 1 (0, T * ) ⊗ (µ h t ) t∈(0,T * ) ∈ M((0, T * )×Ω×S d−1 ) so that for each t ∈ (0, T * ) the total mass of the mass measure |µ h t | S d−1 ∈ M(Ω) associated with the oriented varifold µ h t ∈ M(Ω×S d−1 ) is naturally associated to the energy of the De Giorgi interpolant at time t. More precisely, we define varifolds associated to the varifold lift ofχ h in the interior and on the boundary by
µ h,Ω := L 1 (0, T * ) ⊗ (µ h,Ω t ) t∈(0,T * ) , µ h,Ω t := c 0 |∇χ h (·, t)| Ω ⊗ (δ ∇χ h (·,t) |∇χ h (·,t)| (x) ) x∈Ω ,(78)respectively µ h,∂Ω := L 1 (0, T * ) ⊗ (µ h,∂Ω t ) t∈(0,T * ) , µ h,∂Ω t := (cos α)c 0χ h (·, t) ∂Ω ⊗ (δ n ∂Ω (x) ) x∈∂Ω ,(79)
where n ∂Ω denotes the inner normal on ∂Ω and where we again perform an abuse of notation and do not distinguish betweenχ h (·, t) and its trace along ∂Ω. We finally define the total approximate varifold by
µ h := µ h,Ω + µ h,∂Ω .(80)
The remainder of the first step is concerned with the proof of the following approximate version of the energy dissipation inequality (18g): for τ and T such that 0 < τ < T < T * , and h ∈ (0, T − τ ), we claim that
|µ h T | S d−1 (Ω) + τ 0 1 2 ∂ tχ h 2 H −1 (0) + 1 2 χ h (t) −χ h (⌊t/h⌋h) t − ⌊t/h⌋h 2 H 1 (0) dt ≤ E[χ 0 ].(81)
As a first step towards (81), we claim for all n ∈ N (cf. [7] and [11])
E[χ h (nh)] + h 2 χ n − χ n−1 h 2 H −1 (0) + 1 2 nh (n−1)h χ h (t) −χ h ((n−1)h) t − h(n − 1) 2 H −1 (0) dt ≤ E[χ h ((n−1)h)](82)
and −1)h)] for all n ∈ N and all t ∈ ((n−1)h, nh).
E[χ h (t)] ≤ E[χ h ((n
In particular, using the definition (77) and then telescoping over n in (82) provides for all n ∈ N the discretized dissipation inequality
E[χ h (nh)] + 1 2 nh 0 1 2 ∂ tχ h 2 H −1 (0) + 1 2 χ h (t) −χ h (⌊t/h⌋h) t − ⌊t/h⌋h 2 H 1 (0) dt ≤ E[χ 0 ].
(84) The bound (83) is a direct consequence of the minimality of the interpolant (60) at t. To prove (82), and thus also (84), we restrict our attention to the interval (0, h) and temporarily drop the superscript h. We define the function
f (t) := E[χ(t)] + 1 2t χ(t) − χ 0 2 H −1 (0) , t ∈ (0, h),(85)
and prove f is locally Lipschitz in (0, h) with
d dt f (t) = − 1 2t 2 χ(t) − χ 0 2 H −1 (0) for a.e. t ∈ (0, h).(86)
To deduce (86), we first show
(0, h] ∋ t → χ(t) − χ 0 H −1 (0) is non-decreasing,(87)(0, h] ∋ t → f (t) is non-increasing.(88)
Indeed, for 0 < s < t ≤ h we obtain from minimality of the interpolant (60) at s, then adding zero, and then from minimality of the interpolant (60) at t that
f (s) ≤ E[χ(t)] + 1 2s χ(t) − χ 0 2 H −1 (0) ≤ E[χ(s)] + 1 2t χ(s) − χ 0 2 H −1 (0) + 1 2s − 1 2t χ(t) − χ 0 2 H −1 (0)
.
Recalling definition (85), this is turn immediately implies (87). For a proof of (88), we observe for s, t ∈ (0, h] by minimality of the interpolant (60) at t that
f (t) − f (s) ≤ 1 2t χ(s) − χ 0 2 H −1 (0) − 1 2s χ(s) − χ 0 2 H −1 (0)
.
Rearranging one finds for
0 < s < t ≤ h f (t) − f (s) t − s ≤ − 1 2ts χ(s) − χ 0 2 H −1 (0) ≤ 0,(89)
proving (88). Likewise using minimality of the interpolant (60) at s, one also concludes for 0 < s < t < h the lower bound
f (t) − f (s) t − s ≥ − 1 2ts χ(t) − χ 0 2 H −1 (0) .(90)
As the discontinuity set of a monotone function is at most countable, we infer (86) from (89)
E[χ(h)] + 1 2h χ(h) − χ 0 2 H −1 (0) + t s 1 2τ 2 χ(τ ) − χ 0 2 H −1 (0) dτ ≤ E[χ 0 ].
Sending s ↓ 0 and t ↑ h, we recover (82) and thus also (84).
It remains to post-process (84) to (81). Note first that by definitions (78)
-(80) it holds |µ h t | S d−1 (Ω) = E[χ h (t)]
for all t ∈ (0, T * ). We claim that for all h ∈ (0, 1)
(0, T * ) ∋ t → |µ h t | S d−1 (Ω) is non-increasing.(91)
Indeed, for (n−1)h < s < t ≤ nh we simply get from the minimality of the De Giorgi interpolant (60) at time t
E[χ h (t)] + 1 2(t − (n−1)h) χ h (t) − χ h ((n−1)h) 2 H −1 (0) ≤ E[χ h (s)] + 1 2(t − (n−1)h) χ h (s) − χ h ((n−1)h) 2 H −1 (0)
so that (91) follows from (87) and (83). Restricting our attention to h ∈ (0, T − τ ), there is n 0 ∈ N such that τ < n 0 h < T, and by (91) and positivity of the integrand, we may bound the left-hand side of (81) by (84) with n = n 0 completing the proof of (81).
Step 2: Approximate Gibbs-Thomson law. Naturally associated to the De Giorgi interpolant (60) is the potential w h ∈ L 2 (0, T * ; H 1 (0) ) satisfying
∆w h (·, t) =χ h (t) −χ h (⌊t/h⌋) t − ⌊t/h⌋ in Ω, (n ∂Ω · ∇)w h (·, t) = 0 on ∂Ω,(92)
for t ∈ (0, T * ). Note that this equivalently expresses (81) in the form of
|µ h T | S d−1 (Ω) + τ 0 1 2 ∂ tχ h 2 H −1 (0) + 1 2 ∇w h 2 L 2 (Ω) dt ≤ E[χ 0 ].(93)
By the minimizing property (60) of the De Giorgi interpolant, Allard's first variation formula [5], and Lemma 11, it follows that the De Giorgi interpolant furthermore satisfies the approximate Gibbs-Thomson relation
Ω×S d−1 (Id − s ⊗ s) : ∇B(x) dµ h t (x, s) = Ωχ h (t)∇ · (w h (·, t)B) dx(94)
for all t ∈ (0, T * ) and all B ∈ Sχh (t) .
Applying the result of Lemma 10 to control the Lagrange multiplier arising from the mass constraint, and using the uniform bound on the energy from the dissipation relation (84) and the estimate (83), we find there exist functions λ h ∈ L 2 (0, T ) and a constant C = C(Ω, d, c 0 , m 0 , χ 0 ) > 0 such that for all t in (0, T * )
Ω×S d−1 (Id − s ⊗ s) : ∇B(x) dµ h t (x, s) = Ωχ h (t)∇ · (w h (·, t) + λ h (t))B dx (95)
for all B ∈ C 1 (Ω; R d ) with (B · n ∂Ω )| ∂Ω ≡ 0, and
w h (·, t) + λ h (t) H 1 (Ω) ≤ C(1 + ∇w h (·, t) L 2 (Ω) ).(96)
Step 3: Compactness, part I: Limit varifold. Based on the uniform bound on the energy from the dissipation relation (84) and the estimate (83), we have by weak- * compactness of finite Radon measures, up to selecting a subsequence h ↓ 0,
µ h,Ω * ⇀ µ Ω in M((0, T * )×Ω×S d−1 ),(97)µ h,∂Ω * ⇀ µ ∂Ω in M((0, T * )×∂Ω×S d−1 ).(98)
Define µ := µ Ω + µ ∂Ω .
To ensure the required structure of the limit measures µ Ω and µ ∂Ω , one may argue as follows. Thanks to the monotonicity (91), one may apply [25,Lemma 2] to obtain that the limit measure µ can be sliced in time as
µ = L 1 (0, T * ) ⊗ (µ t ) t∈(0,T * ) , µ t ∈ M(Ω×S d−1 ) for all t ∈ (0, T * ),(99)
and that the standard lower semi-continuity for measures can be applied to almost every slice in time -precisely given by
|µ t | S d−1 (Ω) ≤ lim inf h→0 |µ h t | S d−1 (Ω) < ∞ for a.e. t ∈ (0, T * ).(100)
Next, we show that µ ∂Ω satisfies
µ ∂Ω = L 1 (0, T * ) ⊗ (µ ∂Ω t ) t∈(0,T * ) , µ ∂Ω t ∈ M(∂Ω×S d−1 ) for all t ∈ (0, T * ),(101)
together with (18b). To see that (101) holds, recall that |µ h,∂Ω t | S d−1 is simply given by the measure g h (·, t)H d−1 Ω for the trace g h (·, t) := c 0 cos(α)χ h (·, t). Due to g h L ∞ (∂Ω×(0,T )) ≤ c 0 , up to a subsequence, g h weakly converges to some g in L 2 ((0, T * )×∂Ω; L 1 ⊗ H d−1 ). From this, (98) and (100), it follows that µ ∂Ω has the structure (101) and (18b) with |µ ∂Ω t | S d−1 = g(·, t)H d−1 ∂Ω. Finally, we have to argue that
µ Ω = L 1 (0, T * ) ⊗ (µ Ω t ) t∈(0,T * ) , µ Ω t ∈ M(Ω×S d−1 ) for all t ∈ (0, T * ),(102)
together with (18a). However, (102) and (18a) directly follow from (99), (101), (100), and the definition µ = µ Ω + µ ∂Ω .
Step 4: Compactness, part II: Limit potential. Compactness for w h and λ h follows immediately from bounds (93) and (96) as well as the Poincaré inequality, showing there is w ∈ L 2 (0, T ; H 1 (0) ) and λ ∈ L 2 (0, T ) such that, up to a subsequence, w h ⇀ w in L 2 (0, T ; H 1 (0) ) and λ h ⇀ λ in L 2 (0, T ). In particular, w h + λ h ⇀ w + λ in L 2 (0, T ; H 1 (Ω)).
Step 5: Compactness, part III: Limit phase indicator. To obtain compactness ofχ h andχ h , we will use the classical Aubin-Lions-Simon compactness theorem (see [8], [34], and [49]). By (81), the fundamental theorem of calculus, and Jensen's inequality, it follows that
T * −δ 0 χ h (t+δ) − χ h (t) 2 H −1 (0) dt → 0 uniformly in h as δ → 0(103)
(see, e.g., [50,Lemma 4.2.7]). Looking to the dissipation (81) and recalling the definition of w h in (92), one sees that
T * 0 χ h (t) − χ h (t) 2 H −1 (0) dt ≤ C 1 h 2 .(104)
We claim (103) is also satisfied forχ h for a given sequence h → 0. Fix ǫ > 0, and choose h 1 > 0 such that C 1 h 2 1 < ǫ. Choose δ 1 > 0 such that the left hand side of (103) is bounded by ǫ uniformly in h for 0 < δ < δ 1 . By continuity of translation, which follows from density of smooth functions, we can suppose
T * −δ 0 χ h (t + δ) −χ h (t) 2 H −1 (0)
dt < ǫ for h > h 1 and 0 < δ < δ 1 .
Then by the triangle inequality one can directly estimate that for all h (in the sequence) and 0 < δ < δ 1 , we have
T * −δ 0 χ h (t + δ) −χ h (t) 2 H −1 (0) dt < 3ǫ,
proving the claim. With (103), we may apply the Aubin-Lions-Simon compactness theorem toχ h andχ h in the embedded spaces BV (Ω) ֒→֒→ L p (Ω) ֒→ H −1 (0) for some 6/5 < p < 1 * to obtain χ ∈ L 2 (0, T * ; BV (Ω; {0, 1})) ∩ H 1 (0, T * ; H −1 (0) ) such that
χ h ,χ h ,χ h → χ in L 2 (0, T * ; L 2 (Ω))(105)
(where we have used the Lebesgue dominated convergence theorem to move up to L 2 convergence). To see that the target of each approximation is in fact correctly written as a single function χ, both χ h andχ h must converge to the same limit by (104). Further, by the fundamental theorem of calculus, we have
χ h (t) −χ h (t) H −1 (0) = χ h (ih) −χ h (t) H −1 (0) ≤ t ih ∂ tχ h dt ≤ h 1/2 ∂ tχ h L 2 (0,T ;H −1 (0) ) ,(106)
for some i ∈ N 0 , which shows that χ h andχ h also converge to the same limit, thereby justifying (105). Finally, note the dimension dependent embedding was introduced for technical convenience to ensure that L p (Ω) ֒→ H −1 (0) is well defined, but can be circumnavigated (see, e.g., [30]).
We finally note that the distributional formulation of the initial condition survives passing to the limit forχ h as
T * 0 ∂ t χ, ζ H −1 (0) ,H 1 (0) + Ω χ∂ t ζ dx dt = − Ω χ 0 ζ(x, 0) dx(107)
for all ζ ∈ C 1 c (Ω×[0, T * ))∩H 1 (0, T * ; H 1 (0) ). As the trace of a function in H 1 (0, T ; H −1 (0) ) exists in H −1 (0) (see [33]), (107) implies
Tr| t=0 χ = χ 0 in H −1 (0) .(108)
Step 6: Compatibility conditions for limit varifold. Returning to the definition of µ h,Ω , for any φ ∈ C 1 c (Ω×(0, T * ); R d ) with (φ · n ∂Ω )| ∂Ω×(0,T * ) ≡ 0,
T * 0 Ω×S d−1 φ(x) · s dµ h,Ω t (x, s)dt = c 0 T * 0 Ω φ · d∇χ h dt = −c 0 T * 0 Ωχ h ∇ · φ dxdt.
Using the convergence from (105) as well as the conclusions from Step 3 of this proof, we can pass to the limit on the left-and right-hand side of the above equation, undo the divergence theorem, and localize in time to find (using also a straightforward approximation argument) that for a.e. t ∈ (0, T ), it holds
Ω×S d−1 φ(x) · s dµ Ω t (x, s) = Ω φ · c 0 d∇χ(109)
for all φ ∈ C 1 c (Ω; R d ) with (φ · n ∂Ω )| ∂Ω ≡ 0 (the null set indeed does not depend on the choice of the test vector field φ as C 1 c (Ω; R d ) normed by f → f ∞ + ∇f ∞ is separable). Taking the supremum in the above equation over φ ∈ C c (U ) for any U ⊂ Ω one has c 0 |∇χ|(U ) ≤ |µ Ω t |(U ), which by outer regularity implies (18c). Let now φ ∈ C c (∂Ω×(0, T * )) and fix a C 1 (Ω) extension ξ of the vector field (cos α)n ∂Ω ∈ C 1 (∂Ω) (e.g., by multiplying the gradient of the signed distance function for Ω by a suitable cutoff localizing to a small enough tubular neighborhood of ∂Ω). Recalling the definition of µ h,∂Ω , we obtain
T * 0 Ω×S d−1 φ(x)ξ(x) · s dµ h,Ω t (x, s)dt + T * 0 ∂Ω φ d|µ h,Ω t | S d−1 dt = c 0 T * 0 Ω φξ · d∇χ h dt + c 0 T * 0 ∂Ωχ h φξ · n ∂Ω dH d−1 dt = −c 0 T * 0 Ωχ h ∇ · (φξ) dxdt,
which as before implies for a.e. t ∈ (0, T * )
Ω×S d−1 φ(x)ξ(x) · s dµ Ω t (x, s) + ∂Ω φ d|µ Ω t | S d−1 = Ω φξ · c 0 d∇χ h + ∂Ω φ(cos α)c 0χ h dH d−1 .
Sending ξ → (cos α)n ∂Ω χ ∂Ω and varying φ ∈ C 1 c (∂Ω) now implies (18d). Note that together with Step 3 of this proof, we thus established item i) of Definition 1 with respect to the data (χ, µ).
Step 7: Gibbs-Thomson law in the limit and generalized mean curvature. We can multiply the Gibbs-Thomson relation (95) by a smooth and compactly supported test function on (0, T * ), integrate in time, pass to the limit as h ↓ 0 using the compactness from Steps 3 to 5 of this proof, and then localize in time to conclude that for a.e. t ∈ (0, T * ), it holds
Ω×S d−1 (Id − s ⊗ s) : ∇B(x) dµ t (x, s) = Ω χ(·, t)∇ · (w(·, t)+λ(t))B dx,(110)
for all B ∈ C 1 (Ω; R d ) with (B · n ∂Ω )| ∂Ω ≡ 0 (the null set again does not depend on the choice of B due to the separability of the space C 1 (Ω; R d ) normed by f → f ∞ + ∇f ∞ ). Note that the left-hand side of (110) is precisely δµ t (B) by Allard's first variation formula [5]. Finally, by Proposition 7, the Gibbs-Thomson relation (110) can be expressed as in (18f) with the trace of w + λ replacing c 0 H χ . Directly following the work of Section 4 in Röger [45], which applies for compactly supported variations in Ω, we conclude that the trace of w+λ c0 is given by the generalized mean curvature H χ , intrinsic to the surface supp |∇χ| Ω, for almost every t in (0, T * ). Recalling the integrability guaranteed by Proposition 7, (18f) and the curvature integrability (18e) are satisfied. In particular, we proved item ii) of Definition 1 with respect to the data (χ, µ).
Step 8: Preliminary optimal energy dissipation relation. By the compactness from Steps 3 to 5 of this proof for the terms arising in (93), lower semi-continuity of norms, Fatou's inequality, inequality (100), and first taking h ↓ 0 and afterward τ ↑ T , we obtain for a.e. T ∈ (0, T * )
|µ T | S d−1 (Ω) + T 0 1 2 ∂ t χ 2 H −1 (0) + 1 2 ∇w 2 H 1 (0) dt ≤ E[χ 0 ].(111)
Due to the previous two steps, it remains to upgrade (111) to (18g) to prove that (χ, µ) is a a varifold solution for Mullins-Sekerka flow (1a)-(1e) in the sense of Definition 1.
Step 9: Metric slope. Let t ∈ (0, T * ) be such that (110) holds. For B ∈ S χ(·,t) , let w B ∈ H −1 (0) solve the Neumann problem ∆w B = B · ∇χ(·, t) in Ω,
(n ∂Ω · ∇)w B = 0 on ∂Ω.(112)
Note by definition of the H −1
(0) norm, ∇w B L 2 (Ω) = B · ∇χ(·, t) H −1 (0)
. From the Gibbs-Thomson relation (110), we have
δµ t (B) = Ω χ(·, t)∇ · (w(·, t)B) dx = Ω ∇w(·, t) · ∇w B dx.(113)
Computing the norm of the projection of w B onto G χ(·,t) (see (45)) and recalling the inequality
1 2 (a/b) 2 ≥ a − 1 2 b 2 , we have 1 2 ∇w(·, t) 2 L 2 (Ω) ≥ 1 2 sup B∈S χ(·,t) Ω ∇w(·, t) · ∇w B ∇w B L 2 (Ω) 2 ≥ sup B∈Sχ δµ t (B) − 1 2 B · ∇χ(·, t) 2 V χ(·,t)(114)
for a.e. t ∈ (0, T * ).
Step 10: Time derivative of phase indicator. In this step, we show ∂ t χ(·, t) ∈ T χ(·,t) (see (17)) for a.e. t ∈ (0, T * ). As for the metric slope term, cf. (114), we use potentials as a convenient tool.
From the dissipation (111), χ ∈ H 1 (0, T * ; H −1 (0) ) and there is u ∈ H 1 (0, T * ; H 1 (0) ) such that for almost every t ∈ (0, T * ) the equation ∂ t χ(t) = ∆ N u(t) holds. For any ζ ∈ C ∞ c (Ω × [0, T * )) ∩ L 2 (0, T ; H 1 (0) ) via a short mollification argument, one can compute the derivative in time of χ, ζ H −1
(0) ,H 1 (0) = (χ, ζ) L 2 (Ω) to find (recall (108)) Ω χ(·, T )ζ(·, T ) dx − Ω χ 0 ζ(·, 0) dx = T 0 Ω χ∂ t ζ dxdt − T 0 Ω ∇u · ∇ζ dxdt(115)
for almost every T < T * . To see that (115) holds for general ζ ∈ C ∞ c (Ω × [0, T * )) it suffices to check the equation for c(t) = − Ω ζ(·, t) dx. In this case, the left hand side of (115) becomes m 0 (c(T ) − c(0)) and the right hand side becomes m 0 T 0 ∂ t c(t) = m 0 (c(T ) − c(0)), verifying the assertion. Finally truncating a given test function ζ on the interval (T, T * ), we have that for almost every T < T * , equation (115) holds for all ζ ∈ C ∞ (Ω × [0, T )).
Note that for almost every t ∈ (0, T * )
lim τ ↓0 − t+τ t−τ Ω |∇u(x, t ′ ) − ∇u(x, t)| 2 dxdt ′ = 0.(116)
Furthermore, for a.e. t ∈ (0, T * ) there is a set A(t) ⊂ Ω associated to χ in the sense that χ(·, t) = χ A(t) . As a consequence of (18c), (18e), and (18f) (see equation (2.13) of [47]), for almost every t ∈ (0, T * ), there exists a measurable subset A(t) ⊂ Ω representing a modification of A(t) in the sense
L d A(t)∆ A(t) = 0, A(t) is open, ∂ A(t) ∩ Ω = ∂ * A(t) ∩ Ω ⊂ supp |µ Ω t | S d−1 Ω, supp |∇χ(·, t)| = ∂ A(t).(117)
We now claim that for almost every t ∈ (0, T * ) it holds
∆u(·, t) = 0 in Ω \ ∂ A(t)(118)
in a distributional sense. In other words by (46), (52), and (117), for almost every t ∈ (0, T * )
∂ t χ(·, t) ∈ T χ(·,t) .(119)
For a proof of (118), fix t ∈ (0, T * ) such that (115), (116), and (117) are satisfied.
Fix ζ ∈ C ∞ c (Ω \ A(t); [0, ∞)), consider a sequence s ↓ t so that one may also apply (115) for the choices T = s. Using ζ as a constant-in-time test function in (115) for T = s and T = t, respectively, it follows from the nonnegativity of ζ with the first item of (117) that
0 ≤ 1 s−t Ω χ A(s) ζ dx = −− s t Ω ∇u · ∇ζ dxdt ′ .
Hence, for s ↓ t we deduce from the previous display as well as (116) that
Ω ∇u · ∇ζ dx ≤ 0 for all ζ ∈ C ∞ c (Ω \ A(t); [0, ∞)).
Choosing instead a sequence s ↑ t so that one may apply (115) for the choices T = s, one obtains similarly
Ω ∇u · ∇ζ dx ≥ 0 for all ζ ∈ C ∞ c (Ω \ A(t); [0, ∞)).
In other words, (118) is satisfied throughout Ω \ A(t) for all nonnegative test func-
tions in H 1 (Ω \ A(t))
, hence also for all nonpositive test functions in H 1 (Ω \ A(t)),
and therefore for all smooth and compactly supported test functions in Ω \ A(t).
Adding and subtracting Ω ζ dx to the left hand side of (115), one may finally show along the lines of the previous argument that (118) is also satisfied throughout A(t).
Step 11: Conclusion. We may now conclude the proof that (χ, µ) is a varifold solution for Mullins-Sekerka flow in the sense of Definition 1, for which it remains to verify item iii). However, the desired energy dissipation inequalityà la De Giorgi (18g) now directly follows from (111), (114) and (119).
Step 12: BV solutions. Let χ be a subsequential limit point as obtained in (105). We now show that if the time-integrated energy convergence assumption (63) is satisfied then χ is a BV solution. The main difficulty in proving this is showing that there exists a subsequence h ↓ 0 such that the De Giorgi interpolants satisfȳ χ h (·, t) → χ(·, t) strictly in BV (Ω; {0, 1}) for a.e. t ∈ (0, T * ) as h ↓ 0.
(120)
Before proving (120), let us show how this concludes the result. First, since (120) in particular means |∇χ h (·, t)|(Ω) → |∇χ(·, t)|(Ω) for a.e. t ∈ (0, T * ) as h ↓ 0, it follows from Reshetnyak's continuity theorem, cf. [6,Theorem 2.39], that
µ Ω := L 1 (0, T * ) ⊗ c 0 d|∇χ(·, t)| Ω ⊗ (δ ∇χ(·,t)
|∇χ(·,t)| (x) ) x∈Ω t∈(0,T * ) is the weak limit of µ h,Ω , i.e., µ h,Ω → µ Ω weakly * in M((0, T * )×Ω×S d−1 ) as h → 0. Second, due to (120) it follows from BV trace theory, cf. [6, Theorem 3.88], that
χ h (·, t) → χ(·, t) strongly in L 1 (∂Ω) for a.e. t ∈ (0, T * ) as h ↓ 0.(121)
Hence, defining
µ ∂Ω := L 1 (0, T * ) ⊗ c 0 (cos α)χ(·, t) H d−1 ∂Ω ⊗ (δ n ∂Ω (x) ) x∈∂Ω t∈(0,T * ) ,
we have µ h,∂Ω → µ ∂Ω weakly * in M((0, T * )×∂Ω×S d−1 ) as h → 0. In summary, the relations (19a) and (19b) hold true as required. Furthermore, defining µ := µ Ω + µ ∂Ω , it follows from the arguments of the previous steps that (χ, µ) is a varifold solution in the sense of Definition 1. In other words, χ is a BV solution as claimed.
We now prove (120). To this end, we first show that (105) implies that there
exists a subsequence h ↓ 0 such that E[χ h (·, t)] → E[χ(·, t)] for a.e. t ∈ (0, T * ). Indeed, since E[χ h (·, t)] ≤ E[χ h (·, t)]
for all t ∈ (0, T * ) by the optimality constraint for a De Giorgi interpolant (60), we may estimate using the elementary relation |a| = a + 2a − that
T * 0 E[χ h (·, t)] − E[χ(·, t)] dt = T * 0 E[χ h (·, t)] − E[χ(·, t)] dt + T * 0 2 E[χ h (·, t)] − E[χ(·, t)] − dt ≤ T * 0 E[χ h (·, t)] − E[χ(·, t)] dt + T * 0 2 E[χ h (·, t)] − E[χ(·, t)] − dt.
The first right hand side term of the last inequality vanishes in the limit h ↓ 0 by assumption (63). For the second term on the right-hand side, we note that (105) entails that, up to a subsequence, we haveχ h (·, t) → χ(·, t) strongly in L 1 (Ω) for a.e. t ∈ (0, T * ) as h ↓ 0. Hence, the lower-semicontinuity result of Modica [41,Proposition 1.2] tells us that (E[χ h (·, t)] − E[χ(·, t)]) − → 0 pointwise a.e. in (0, T * ) as h ↓ 0, which in turn by Lebesgue's dominated convergence theorem guarantees that the second term on the right-hand side of the last inequality vanishes in the limit h ↓ 0. In summary, for a suitable subsequence h ↓ 0
E[χ h (·, t)] → E[χ(·, t)]
for a.e. t ∈ (0, T * ),
χ h (·, t) → χ(·, t) strongly in L 1 (Ω) for a.e. t ∈ (0, T * ).
Now, due to (123) and the definition of strict convergence in BV (Ω), (120) will follow if the total variations converge, i.e., |∇χ h (·, t)|(Ω) → |∇χ(·, t)|(Ω) for a.e. t ∈ (0, T * ) as h ↓ 0.
However, this is proven in a more general context (i.e., for the diffuse interface analogue of the sharp interface energy (11)) in [24,Lemma 5]. More precisely, thanks to (122) Proof of Lemma 3. The proof is naturally divided into two parts.
Step 1: Proof of "≤" in (20) without assuming (18f). To simplify the notation, we denote χ s := χ • Ψ −1 s resp. µ s := µ • Ψ −1 s and abbreviate the right-hand side of (20) as
A := sup ∂sχs|s=0=−B·∇χ χs→χ, B∈Sχ lim sup s↓0 (E[µ s ] − E[µ]) + χ s − χ H −1 (0) .(125)
Fixing a flow χ s such that χ s → χ as s → 0 with ∂ s χ s | s=0 = −B · ∇χ for some B ∈ S χ , we claim that the upper bound (20) follows from the assertions
lim_{s→0} (1/s)(E[µ_s] − E[µ]) = δµ(B), (126)
lim_{s→0} (1/s)‖χ_s − χ‖_{H^{−1}_{(0)}} = ‖B·∇χ‖_{H^{−1}_{(0)}}. (127)
Indeed, multiplying the definition in (125) by 1 = s/s, using the elementary inequality (1/2)(a/b)² ≥ a − (1/2)b², and recalling the notation (16), we find

(1/2)A² ≥ lim_{s→0} (E[µ_s] − E[µ])/s − (1/2)(lim_{s→0} ‖χ_s − χ‖_{H^{−1}_{(0)}}/s)² = δµ(B) − (1/2)‖B·∇χ‖²_{V_χ}. (128)
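The elementary inequality invoked here is just a rearranged square; for completeness, our one-line verification:

```latex
\frac12\Bigl(\frac{a}{b}\Bigr)^{2} - a + \frac12 b^{2}
  = \frac{a^{2} - 2ab^{2} + b^{4}}{2b^{2}}
  = \frac{(a-b^{2})^{2}}{2b^{2}} \;\ge\; 0 \qquad (b \neq 0),
```

applied with a the energy difference quotient and b the H^{−1}_{(0)}-distance quotient appearing in (128).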
Recalling Lemma 9, taking the supremum over B ∈ S χ thus yields "≤" in (20). It therefore remains to establish (126) and (127). However, the former is a classical and well-known result [5] whereas the latter is established in Lemma 11.
Step 2: Proof of "≥" in (20) assuming (18f). To show equality under the additional assumption of (18f), we may suppose that |∂E[µ]|_{V_χ} < ∞. First, we note that B·∇χ ↦ δµ(B) is a well defined operator on {B·∇χ : B ∈ S_χ} ⊂ V_χ (see definition (13)). To see this, let B ∈ S_χ be any function such that B·∇χ = 0 in V_χ ⊂ H^{−1}_{(0)}. Recall that ∫_Ω χ∇·B dx = 0 by the definition of S_χ in (12) to find that for all φ ∈ C¹(Ω), one has
∫_Ω φ (B · ∇χ/|∇χ|) d|∇χ| = ∫_Ω (φ − ⨍_Ω φ dx) (B · ∇χ/|∇χ|) d|∇χ| = 0.

The above equation implies that B · ∇χ/|∇χ| = 0 for |∇χ|-almost every x in Ω, which by the representation of the first variation in terms of the curvature in (18f) shows δµ(B) = 0. Linearity shows the operator is well-defined on {B · ∇χ : B ∈ S_χ}.
With this in hand, the bound |∂E[µ]|_{V_χ} < ∞ implies that the mapping B·∇χ ↦ δµ(B) can be extended to a bounded linear operator L : V_χ → R, which is identified with an element L ∈ V_χ by the Riesz isomorphism theorem. Consequently,
(1/2)|∂E[µ]|²_{V_χ} = sup_{B∈S_χ} {(L, B·∇χ)_{V_χ} − (1/2)‖B·∇χ‖²_{V_χ}} = (1/2)‖L‖²_{V_χ}. (129)
But recalling (126) and (127), one also has that
A = sup_{B ∈ S_χ, χ_s → χ, ∂_s χ_s|_{s=0} = −B·∇χ} [lim_{s→0} (1/s)(E[µ_s] − E[µ])]^+ / [lim_{s→0} (1/s)‖χ_s − χ‖_{H^{−1}_{(0)}}] = sup_{B∈S_χ} (δµ(B))^+ / ‖B·∇χ‖_{V_χ} = ‖L‖_{V_χ}.
The previous two displays complete the proof that A = |∂E[µ]| Vχ .
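To make the abstract norms used in this proof more tangible, here is a minimal numerical sketch (our own illustration; the choice Ω = (0, 1) and the Neumann cosine eigenbasis are assumptions of the toy setting): it evaluates ‖f‖_{H^{−1}_{(0)}} for a mean-zero f via the Riesz isomorphism induced by the weak Neumann Laplacian ∆_N, which is exactly the duality invoked above.

```python
import numpy as np

# On Omega = (0, 1), the weak Neumann Laplacian is diagonal in the cosine basis
# phi_k(x) = sqrt(2) cos(k pi x) with eigenvalue lambda_k = (k pi)^2, k >= 1.
# For mean-zero f, the Riesz isomorphism gives
#   ||f||_{H^{-1}_(0)}^2 = sum_k |<f, phi_k>|^2 / lambda_k.

x = np.linspace(0.0, 1.0, 4001)
dx = x[1] - x[0]
f = np.sign(x - 0.5)                 # a mean-zero, interface-like profile

def h_minus_one_norm(f, modes=400):
    norm_sq = 0.0
    for k in range(1, modes + 1):
        phi = np.sqrt(2.0) * np.cos(k * np.pi * x)
        fk = np.sum(f * phi) * dx    # Fourier-cosine coefficient <f, phi_k>
        norm_sq += fk**2 / (k * np.pi) ** 2
    return np.sqrt(norm_sq)

# exact value: solving -u'' = f with Neumann data gives int |u'|^2 dx = 1/12
print(h_minus_one_norm(f), np.sqrt(1.0 / 12.0))
```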
Proof of Lemma 4. We proceed in three steps.
Proof of item i): We first observe that by (18g)
∫_0^{T_*} (1/2)‖∂_t χ‖²_{H^{−1}_{(0)}(Ω)} dt = ∫_0^{T_*} (1/2)‖∂_t χ‖²_{T_{χ(·,t)}} dt ≤ E[µ_0] < ∞,
which in turn simply means that there exists u ∈ L²(0, T_*; H_{χ(·,t)}) ⊂ L²(0, T_*; H^1_{(0)}) such that

∂_t χ(·, t) = ∆_N u(·, t) for a.e. t ∈ (0, T_*). (130)
In other words, recalling (44),
∫_0^{T_*} ∫_Ω χ ∂_t ζ dx dt = ∫_0^{T_*} ∫_Ω ∇u · ∇ζ dx dt for all ζ ∈ C¹_cpt(Ω×(0, T_*)). (131)

By standard PDE arguments and the requirement χ ∈ C([0, T_*); H^{−1}_{(0)}(Ω)) such that Tr|_{t=0} χ = χ_0 in H^{−1}_{(0)}, one may post-process the previous display to (21).

Proof of item ii): Thanks to (18f), we may apply the argument from the proof of Lemma 3 to infer that the map B·∇χ(·, t) ↦ δµ_t(B), B ∈ S_{χ(·,t)}, is well-defined and extends to a unique bounded and linear functional L_t : V_{χ(·,t)} → R. Recalling the definition G_{χ(·,t)} = ∆_N^{−1}(V_{χ(·,t)}) ⊂ H^1_{(0)} and the fact that the weak Neumann Laplacian ∆_N : H^1_{(0)} → H^{−1}_{(0)} is nothing else but the Riesz isomorphism between H^1_{(0)} and its dual H^{−1}_{(0)}, it follows from (18h) and (18g) that there exists a potential w_0 ∈ L²(0, T_*; G_{χ(·,t)}) such that
δµ_t(B) = L_t(B·∇χ(·, t)) = −⟨B·∇χ(·, t), w_0(·, t)⟩_{H^{−1}_{(0)}, H^1_{(0)}}
for almost every t ∈ (0, T * ) and all B ∈ S χ(·,t) , as well as
(1/2)‖w_0(·, t)‖²_{G_{χ(·,t)}} = (1/2)|∂E[µ_t]|²_{V_{χ(·,t)}} (132)
for almost every t ∈ (0, T * ). In particular,
δµ_t(B) = ∫_Ω χ(·, t) ∇·(B w_0(·, t)) dx (133)
for almost every t ∈ (0, T * ) and all B ∈ S χ(·,t) . Due to Lemma 10, there exists a measurable Lagrange multiplier λ : (0, T * ) → R such that
δµ_t(B) = ∫_Ω χ(·, t) ∇·(B (w_0(·, t) + λ(t))) dx (134)
for almost every t ∈ (0, T * ) and all B ∈ C 1 (Ω; R d ) with (B · n ∂Ω )| ∂Ω ≡ 0, and that there exists a constant C = C(Ω, d, c 0 , m 0 ) > 0 such that
|λ(t)| ≤ C(1 + |∇χ(·, t)|(Ω))(|µ_t|_{S^{d−1}}(Ω) + ‖∇w_0(·, t)‖_{L²(Ω)}) (135)
for almost every t ∈ (0, T_*). Due to (132)-(135), (18c) and (18g), it follows that the potential w := w_0 + λ satisfies the desired properties (22)-(24).

Proof of item iii): The De Giorgi type energy dissipation inequality in the form of (25) now directly follows from (23) and (130).

Proof of Remark 2. A careful inspection of the proof of Theorem 1 shows that the conclusions (110) and (111) are indeed independent of an assumption on the value of the ambient dimension d ≥ 2. The same is true for the estimate (65) of Lemma 10. The claim then immediately follows from these observations and the first step of the proof of Lemma 4.

Proof of Lemma 5. We start with a proof of the two consistency claims and afterward give the proof of the compactness statement.
Step 1: Classical solutions are BV solutions. Let A be a classical solution for Mullins-Sekerka flow in the sense of (1a)-(1e). Define χ(x, t) := χ A (t) (x) for all (x, t) ∈ Ω × [0, T * ). As one may simply define the associated varifold by means of
|µ^Ω_t|_{S^{d−1}} := c_0 |∇χ(·, t)| ⌞ Ω = c_0 H^{d−1} ⌞ (∂*A(t) ∩ Ω),
|µ^{∂Ω}_t|_{S^{d−1}} := c_0 (cos α) χ(·, t) H^{d−1} ⌞ ∂Ω = c_0 (cos α) H^{d−1} ⌞ (∂*A(t) ∩ ∂Ω),

and (19b) with ∇χ(·,t)/|∇χ(·,t)| = n_{∂A(t)}, item i) of Definition 1 is trivially satisfied. Note that the varifold energy |µ_t|_{S^{d−1}}(Ω) simply equals the BV energy functional E[χ(·, t)] from (11).
Due to the smoothness of the geometry and the validity of the contact angle condition (1e) in the pointwise strong sense, an application of the classical first variation formula together with an integration by parts along each of the smooth manifolds ∂ * A (t) ∩ Ω and ∂ * A (t) ∩ ∂Ω ensures (recall the notation from Subsection 1.2)
δE[χ(·, t)](B) = c_0 ∫_{∂*A(t)∩Ω} ∇^tan·B dH^{d−1} + c_0 (cos α) ∫_{∂*A(t)∩∂Ω} ∇^tan·B dH^{d−1} = −c_0 ∫_{∂*A(t)∩Ω} H_{∂A(t)}·B dH^{d−1} (136)
for all tangential variations B ∈ C¹(Ω; R^d). Hence, the identity (18f) holds with H_{χ(·,t)} = H_{∂A(t)} · n_{∂A(t)}, and the asserted integrability of H_{χ(·,t)} follows from the boundary condition (1c) and a standard trace estimate for the potential ū(·, t). It remains to show (18g). The starting point for this is again the above first variation formula, now in the form of
(d/dt) E[χ(·, t)] = −c_0 ∫_{∂*A(t)∩Ω} H_{∂A(t)} · V_{∂A(t)} dH^{d−1}. (137)
Plugging in (1b) and (1c), integrating by parts in the form of (2), and exploiting afterward (1a) and (1d), we arrive at
(d/dt) E[χ(·, t)] = −∫_Ω |∇ū(·, t)|² dx. (138)
The desired inequality (18g) will follow once we prove
‖∂_t χ(·, t)‖_{H^{−1}_{(0)}} = ‖∇ū(·, t)‖_{L²(Ω)}, (139)
|∂E[µ_t]|_{V_{χ(·,t)}} = ‖∇ū(·, t)‖_{L²(Ω)}. (140)
Exploiting that the geometry underlying χ is smoothly evolving, i.e., (∂ t χ)(·, t) = −V ∂A (t) · (∇χ Ω)(·, t), the claim (139) is a consequence of the identity
∫_Ω ∇∆_N^{−1}(∂_t χ)(·, t) · ∇φ dx = −⟨(∂_t χ)(·, t), φ⟩_{H^{−1}_{(0)}, H^1_{(0)}} = −∫_{∂*A(t)∩Ω} n_{∂A(t)} · [[∇ū(·, t)]] φ dH^{d−1} = ∫_Ω ∇ū(·, t) · ∇φ dx
valid for all φ ∈ H 1 (0) , where in the process we again made use of (1b), (1a), (1d), and an integration by parts in the form of (2).
For a proof of (140), we first note that thanks to (136) and (1c) it holds
δE[χ(·, t)](B) = −∫_{∂*A(t)∩Ω} ū(·, t) n_{∂A(t)} · B dH^{d−1} = ∫_Ω ∇ū(·, t) · ∇∆_N^{−1}(B·∇χ(·, t)) dx
for all B ∈ S_{χ(·,t)}. Hence, in view of (18h) it suffices to prove that for each fixed t ∈ (0, T_*) there exists B̃(·, t) ∈ S_{χ(·,t)} such that ū(·, t) = ∆_N^{−1}(B̃(·, t) · ∇χ(·, t)). To construct such a B̃, first note that from (1a), (1d) and (2), we have ∆_N ū(·, t) = (n_{∂A(t)} · [[∇ū(·, t)]]) H^{d−1} ⌞ (∂*A(t) ∩ Ω) in the sense of distributions. Consequently, B̃(·, t) ∈ C¹(Ω; R^d) satisfying

n_{∂A(t)} · B̃(·, t) = n_{∂A(t)} · [[∇ū(·, t)]] on ∂*A(t) ∩ Ω,
n_∂Ω · B̃(·, t) = 0 on ∂Ω, (141)
for which one has (using (1a) and (3))
∫_Ω χ(·, t) ∇·B̃(·, t) dx = −∫_{∂A(t)∩Ω} n_{∂A(t)} · [[∇ū(·, t)]] dH^{d−1} = 0,
showing that B̃(·, t) ∈ S_{χ(·,t)}, will satisfy the claim.
To this end, one first constructs a C 1 vector field B defined on ∂A (t) ∩ Ω with the properties that n ∂A (t) · [[∇ū(·, t)]] = n ∂A (t) · B and B · n ∂Ω = 0. Such B can be constructed by using a partition of unity on the manifold with boundary ∂A (t) ∩ Ω. Away from the boundary, set B := (n ∂A (t) · [[∇ū(·, t)]])n ∂A (t) , and near the boundary one can define B as an appropriate lifting of
(n_{∂A(t)} · [[∇ū(·, t)]] / cos(π/2 − α)) τ_{∂A(t)∩∂Ω},
where τ ∂A (t)∩∂Ω is the unit length vector field tangent to ∂Ω normal to the contact points manifold ∂A (t) ∩ Ω ∩ ∂Ω and with n ∂A (t) · τ ∂A (t)∩∂Ω = cos(π/2 − α). With the vector field B in place, any tangentialB(·, t) ∈ C 1 (Ω; R d ) extending B satisfies (141), which in turn concludes the proof of the first step.
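To double-check that this lifting is consistent with (141), here is our verification, using the relation n_{∂A(t)} · τ_{∂A(t)∩∂Ω} = cos(π/2 − α) from above:

```latex
B \cdot n_{\partial A(t)}
  = \frac{n_{\partial A(t)}\cdot[[\nabla\bar u(\cdot,t)]]}{\cos(\pi/2-\alpha)}
    \,\bigl(\tau_{\partial A(t)\cap\partial\Omega}\cdot n_{\partial A(t)}\bigr)
  = n_{\partial A(t)}\cdot[[\nabla\bar u(\cdot,t)]],
\qquad
B \cdot n_{\partial\Omega} = 0,
```

the latter because τ_{∂A(t)∩∂Ω} is tangent to ∂Ω; so any tangential extension B̃ of B indeed satisfies (141) near the contact points manifold.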
Step 2: Smooth varifold solutions are classical solutions. Let (χ, µ) be a varifold solution with smooth geometry, i.e., satisfying Definition 1 and such that the indicator χ can be represented as χ(x, t) = χ A (t) (x), where A = (A (t)) t∈[0,T * ) is a time-dependent family of smoothly evolving subsets A (t) ⊂ Ω, t ∈ [0, T * ), as in the previous step. It is convenient for what follows to work with the two potentials u and w satisfying the conclusions of Lemma 4, i.e., (21)- (26).
Fix t ∈ (0, T_*) such that (26) holds. By regularity of supp |∇χ(·, t)| ⌞ Ω = ∂A(t) ∩ Ω and the fact that H_{χ(·,t)} is the generalized mean curvature vector of supp |∇χ(·, t)| ⌞ Ω in the sense of Röger [44, Definition 1.1], it follows from Röger's result [44, Proposition 3.1] that
H_{∂A(t)} = H_{χ(·,t)}  H^{d−1}-a.e. on ∂A(t) ∩ Ω. (142)
Hence, we deduce from (26) that
w(·, t) n_{∂A(t)} = c_0 H_{∂A(t)}  H^{d−1}-a.e. on ∂A(t) ∩ Ω. (143)
Recalling w ∈ G χ ⊂ H χ and (52) shows that w is harmonic in A and in the interior of Ω \ A . One may then apply standard elliptic regularity theory for the Dirichlet problem [18] to obtain a continuous representative for w and further conclude that (143) holds everywhere on ∂A (t) ∩ Ω. Next, we take care of the contact angle condition (1e). To this end, we denote by τ ∂A (t)∩Ω a vector field on the contact points manifold, ∂(∂A (t) ∩ Ω) ⊂ ∂Ω, that is tangent to the interface ∂A (t)∩Ω, normal to the contact points manifold, and which points away from ∂A (t)∩Ω. We further denote by τ ∂A (t)∩∂Ω a vector field along the contact points manifold which now is tangent to ∂Ω, again normal to the contact points manifold, and which this time points towards ∂A (t)∩∂Ω. Note that by these choices, at each point of the contact points manifold the vector fields τ ∂A (t)∩Ω , n ∂A (t) , τ ∂A (t)∩∂Ω and n ∂Ω lie in the normal space of the contact points manifold, and that the orientations were precisely chosen such that τ ∂A (t)∩Ω · τ ∂A (t)∩∂Ω = n ∂A (t) · n ∂Ω . With these constructions in place, we obtain from the classical first variation formula and an integration by parts along ∂A (t) ∩ Ω and ∂A (t) ∩ ∂Ω that
δE[χ(·, t)](B) = c_0 ∫_{∂A(t)∩Ω} ∇^tan·B dH^{d−1} + c_0 (cos α) ∫_{∂A(t)∩∂Ω} ∇^tan·B dH^{d−1}
= −c_0 ∫_{∂A(t)∩Ω} H_{∂A(t)}·B dH^{d−1} + c_0 ∫_{∂(∂A(t)∩Ω)} (τ_{∂A(t)∩Ω} − (cos α) τ_{∂A(t)∩∂Ω})·B dH^{d−2}
= −c_0 ∫_{∂A(t)∩Ω} H_{∂A(t)}·B dH^{d−1} (144)
+ c_0 ∫_{∂(∂A(t)∩Ω)} (τ_{∂A(t)∩Ω}·τ_{∂A(t)∩∂Ω} − cos α)(τ_{∂A(t)∩∂Ω}·B) dH^{d−2}
for all tangential variations B ∈ C 1 (Ω). Recall now that we assume that (18f) even holds with δµ t replaced on the left hand side by δE[χ(·, t)]. In particular, δµ t (B) = δE[χ(·, t)](B) for all B ∈ S χ(·,t) so that the argument from the proof of Lemma 4, item ii), shows
δE[χ(·, t)](B) = −∫_{∂*A(t)∩Ω} w(·, t) n_{∂A(t)} · B dH^{d−1}
for all tangential B ∈ C 1 (Ω; R d ). Because of (143), we thus infer from (144) that the contact angle condition (1e) indeed holds true. Note that for each t in (0, T * ), the potential u satisfies
∂ t χ(·, t) = −V ∂A (t) · (∇χ Ω)(·, t) = ∆ N u.
In particular, by the assumed regularity of supp |∇χ(·, t)| Ω = ∂A (t) ∩ Ω,
∆u(·, t) = 0 in Ω \ ∂A (t).(145)
Furthermore, we claim that
V_{∂A(t)} = −(n_{∂A(t)} · [[∇u(·, t)]]) n_{∂A(t)} on ∂A(t) ∩ Ω, (n_∂Ω · ∇)u(·, t) = 0 on ∂Ω \ (∂A(t) ∩ Ω). (146)
To prove (146), we suppress for notational convenience the time variable and show for any open set O ⊂ Ω which does not contain the contact points manifold ∂(∂A(t) ∩ Ω) ⊂ ∂Ω that

u ∈ H³(O ∩ A(t)) ∩ H³(O ∩ (Ω \ A(t))). (147)
With this, ∇u will have continuous representatives in A (t) and Ω \ A (t) excluding contact points, from which (146) will follow by applying the integration by parts formula (2). Note that typical estimates apply for the Neumann problem if O does not intersect ∂A (t), and consequently to conclude (147), it suffices to prove regularity in the case of a flattened and translated interface ∂A (t) with u truncated, that is, for u satisfying
∆u = −V H^{d−1} ⌞ {x_d = 0} in B(0, 1), u = 0 on ∂B(0, 1), (148)
with V smooth. The above equation can be differentiated with respect to all multi-indices β ∈ N^{d−1} representing tangential directions, showing that ∂^β u ∈ H¹(B(0, 1)). Rearranging (145) to extract ∂²_d u from the Laplacian, we have that u belongs to H²(B(0, 1) \ {x_d = 0}). To control the higher derivatives, note that by the comment regarding multi-indices we already have ∂_i ∂_j ∂_d u ∈ L²(Ω) for all i, j ≠ d. Furthermore, differentiating (148) with respect to the i-th direction, where i ∈ {1, …, d−1}, and repeating the previous argument shows ∂²_d ∂_i u ∈ L²(B(0, 1) \ {x_d = 0}). Finally, differentiating (145) with respect to the d-th direction away from both ∂A and ∂Ω and then extracting ∂³_d u from ∆∂_d u, we get ∂³_d u ∈ L²(B(0, 1) \ {x_d = 0}), finishing the proof of (147).
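Schematically, the bootstrap just performed controls the derivatives in the following order (our summary of the argument):

```latex
\partial_i\partial_j u,\ \partial_i\partial_j\partial_d u \ (i,j\neq d)
\;\xrightarrow{\ \text{tangential derivatives of } (148)\ }\;
\partial_d^2 u,\ \partial_d^2\partial_i u
\;\xrightarrow{\ \partial_d \text{ of } (145),\ \text{solve for } \partial_d^3 u\ }\;
\partial_d^3 u \in L^2\bigl(B(0,1)\setminus\{x_d=0\}\bigr),
```

each step using either tangential differentiation of (148) or solving the interior equation (145) for the missing normal derivative.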
Finally, we show
(∇u − ∇w)(·, t) L 2 (Ω) = 0.(149)
Note that (149) is indeed sufficient to conclude that the smooth BV solution χ is a classical solution because we already established (143), (146), (145) and (1e). For a proof of (149), we simply exploit smoothness of the evolution in combination with (137), (143), (145), (146), (1e) and (2) to obtain
d dt E[χ(·, t)] = − Ω ∇u(·, t) · ∇w(·, t) dx.
Subtracting the previous identity (in integrated form) from (25) and noting that E[χ(·, T)] ≤ E[µ_T] (due to the definitions (11) and (18i) as well as the compatibility conditions (18c) and (18d)), we get
0 ≤ ∫_0^T ∫_Ω (1/2)|∇u − ∇w|² dx dt ≤ E[χ(·, T)] − E[µ_T] ≤ 0 (150)
for a.e. T ∈ (0, T_*), which in turn proves (149). Note from (150) that E[χ(·, T)] = E[µ_T] for a.e. T ∈ (0, T_*). For general constants with a + b = c + d, the relations a ≤ c and b ≤ d together imply a = c and b = d; we use this with the coincidence of the varifold and BV energies to find that both (18c) and (18d) hold with equality. This implies that (27) holds. Assuming now that (1/c_0) µ^Ω_t ∈ M(Ω×S^{d−1}) is an integer rectifiable oriented varifold, we consider the Radon-Nikodým derivative of both sides of (18d) with respect to H^{d−1} ⌞ ∂Ω to find

(cos α) c_0 χ(·, t) = (d|µ^{∂Ω}_t|_{S^{d−1}} / d(H^{d−1} ⌞ ∂Ω))(·, t) + c_0 m(·, t),
for some integer-valued function m : ∂Ω → N ∪ {0}. Necessarily, m ≡ 0, concluding (28).
Step 3: Compactness of solution space. It is again convenient to work explicitly with potentials. More precisely, for each k ∈ N we fix a potential w k subject to item ii) of Lemma 4 with respect to the varifold solution (χ k , µ k ). By virtue of (18g) and (23), we have for all k ∈ N
E[(µ_k)_T] + (1/2) ∫_0^T (‖(∂_t χ_k)(·, t)‖²_{H^{−1}_{(0)}} + ‖w_k(·, t)‖²_{L²(Ω)}) dt ≤ E[µ_{k,0}].
By assumption, we may select a subsequence k → ∞ such that χ_{k,0} ⇀* χ_0 in BV(Ω; {0, 1}) for some χ_0 ∈ BV(Ω; {0, 1}). Since we also assumed tightness of the sequence (|∇χ_{k,0}| ⌞ Ω)_{k∈N}, it follows that along the previous subsequence we also have |∇χ_{k,0}|(Ω) → |∇χ_0|(Ω). In other words, χ_{k,0} converges strictly in BV(Ω; {0, 1}) along k → ∞ to χ_0, which in turn implies convergence of the associated traces in L¹(∂Ω; dH^{d−1}). In summary, we may deduce E[µ_{k,0}] = E[χ_{k,0}] → E[χ_0] = E[µ_0] for the subsequence k → ∞.
For the rest of the argument, a close inspection reveals that one may simply follow the reasoning from Step 2 to Step 11 of the proof of Theorem 1 as these steps do not rely on the actual procedure generating the sequence of (approximate) solutions but only on consequences derived from the validity of the associated sharp energy dissipation inequalities.
Proof of Proposition 6. We divide the proof into three steps.
Proof of item i): We start by recalling some notation from the proof of Theorem 1. For h > 0, we denoted by χ̄^h the De Giorgi interpolant (60). From the definition (78) of the approximate oriented varifold µ_{Ω,h} ∈ M((0, T_*)×Ω×S^{d−1}) and the measure |µ_{∂Ω,h}|_{S^{d−1}} ∈ M((0, T_*)×∂Ω), respectively, and an integration by parts, it then follows that

∫_{(0,T_*)×Ω×S^{d−1}} s · ζ(t)η(x) dµ_{Ω,h}(t, x, s) = −c_0 ∫_0^{T_*} ∫_Ω χ̄^h(x, t)(∇·η)(x) ζ(t) dx dt

for all η ∈ C^∞(Ω; R^d) with n_∂Ω · η = 0 on ∂Ω and all ζ ∈ C^∞_c(0, T_*), and also the corresponding identity for

∫_{(0,T_*)×Ω×S^{d−1}} s · ζ(t)ξ(x) dµ_{Ω,h}(t, x, s) + ∫_{(0,T_*)×∂Ω} ζ(t) d|µ_{∂Ω,h}|_{S^{d−1}}

with test fields ξ ∈ C^∞(Ω; R^d) satisfying n_∂Ω · ξ = cos α on ∂Ω. First taking the limit in the previous two displays for a suitable subsequence h ↓ 0 and then undoing the integration by parts in the respective right-hand sides gives the analogous identities for the limit measures, for all η ∈ C^∞(Ω; R^d) with n_∂Ω · η = 0 on ∂Ω and all ζ ∈ C^∞_c(0, T_*), and for all ξ ∈ C^∞(Ω; R^d) with n_∂Ω · ξ = cos α on ∂Ω and all ζ ∈ C^∞_c(0, T_*). The two compatibility conditions (29) and (30) now immediately follow from the previous two equalities and a localization argument in the time variable.

Proof of item ii): Let w ∈ L²(0, T_*; H¹(Ω)) be the potential from item ii) of Lemma 4. Thanks to Step 7 of the proof of Theorem 1, the relation (18f) indeed not only holds for B ∈ S_{χ(·,t)} but also for all B ∈ C¹(Ω; R^d) with B · n_∂Ω = 0 along ∂Ω. Hence, due to (22) and the trace estimate (35) from Proposition 7 for the potential w, it follows that

c_0 H_χ(·, t) = w(·, t) (151)

for almost every t ∈ (0, T_*) up to sets of (|∇χ(·, t)| ⌞ Ω)-measure zero. The asserted estimate (31) follows in turn from the trace estimate (35), the properties (23) and (24), the compatibility condition (18c), and the energy estimate (18g).

Proof of item iii): Post-processing the first variation estimate (40) from Corollary 8 by means of the trace estimate (35) for s = 2 and the energy estimate (18g) yields (32).
Proof of Proposition 7. We split the proof into three steps. In the first and second steps, we develop estimates for an approximation of the (d−1)-density of the varifold using ideas introduced by Grüter and Jost [23, Proof of Theorem 3.1] (see also Kagaya and Tonegawa [29, Proof of Theorem 3.2]), which were originally used to derive monotonicity formulas for varifolds with integrable curvature near a domain boundary. In the third step, we combine this approach with Schätzle's work [47], which derived a monotonicity formula in the interior, to obtain a monotonicity formula up to the boundary.
Step 1: Preliminaries. Since ∂Ω is compact and of class C 2 , we may choose a localization scale r = r(∂Ω) ∈ (0, 1) such that ∂Ω admits a regular tubular neighborhood of width 2r. More precisely, the map Ψ ∂Ω : ∂Ω × (−2r, 2r) → {y ∈ R d : dist(y, ∂Ω) < 2r}, (x, s) → x + sn ∂Ω (x) is a C 1 -diffeomorphism such that ∇Ψ ∂Ω L ∞ , ∇Ψ −1 ∂Ω L ∞ ≤ C. The inverse splits in form of Ψ −1 ∂Ω = (P ∂Ω , s ∂Ω ), where s ∂Ω represents the signed distance to ∂Ω oriented with respect to n ∂Ω and P ∂Ω represents the nearest point projection onto ∂Ω: P ∂Ω (x) = x − s ∂Ω (x)n ∂Ω (P ∂Ω (x)) = x − s ∂Ω (x)(∇s ∂Ω )(x) for all x ∈ R d such that dist(x, ∂Ω) < 2r.
In order to extend the argument of Schätzle [47, Proof of Lemma 2.1] up to the boundary ∂Ω, we employ the reflection technique of Grüter and Jost [23, Proof of Theorem 3.1]. To this end, we introduce further notation. First, we denote by

x̃ := 2P_∂Ω(x) − x, x ∈ R^d such that dist(x, ∂Ω) < 2r, (152)

the reflection of the point x across ∂Ω in normal direction. Further, we define the "reflected ball"

B̃_ρ(x_0) := {x ∈ R^d : dist(x, ∂Ω) < 2r, |x̃ − x_0| < ρ}, (153)

for ρ ∈ (0, r) and x_0 ∈ Ω with dist(x_0, ∂Ω) < r.
Finally, we set

ι_x(y) := (Id − n_∂Ω(P_∂Ω(x)) ⊗ n_∂Ω(P_∂Ω(x))) y − (y · n_∂Ω(P_∂Ω(x))) n_∂Ω(P_∂Ω(x)), (154)

for y ∈ R^d and x ∈ R^d with dist(x, ∂Ω) < 2r, which reflects a vector across the tangent space at P_∂Ω(x) on ∂Ω.
Step 2: A preliminary monotonicity formula. Let ρ ∈ (0, r) and x_0 ∈ Ω such that dist(x_0, ∂Ω) < r. Let η : [0, ∞) → [0, 1] be smooth and nonincreasing such that η ≡ 1 on [0, 1/2], η ≡ 0 on [1, ∞), and |η′| ≤ 4 on [0, ∞). Consider then the functional

I_{x_0}(ρ) := ∫_Ω [η(|x − x_0|/ρ) + η̃_{x_0,ρ}(x)] d|µ|_{S^{d−1}} ≥ 0,

where η̃_{x_0,ρ} : Ω → [0, 1] represents the C¹-function

η̃_{x_0,ρ}(x) := η(|x̃ − x_0|/ρ) if dist(x, ∂Ω) < 2r, and 0 else. (155)

A close inspection of the argument given by Kagaya and Tonegawa [29, Estimate (4.11) and top of page 151] reveals that we have the bound

(d/dρ)(ρ^{−(d−1)} I_{x_0}(ρ)) ≥ −C(ρ^{1−(d−1)} I′_{x_0}(ρ) + ρ^{−(d−1)} I_{x_0}(ρ)) − ρ^{−d} ∫_Ω (Id − s ⊗ s) : ∇Φ_{x_0,ρ} dµ, (156)
where the test vector field Φ x0,ρ is given by (recall the definition (155) ofη x0,ρ )
Φ_{x_0,ρ}(x) := η(|x − x_0|/ρ)(x − x_0) + η̃_{x_0,ρ}(x) ι_x(x − x_0). (157)
For any q ∈ [d, ∞), we may further post-process (156) by an application of the chain rule to the effect of

(d/dρ)(ρ^{−(d−1)} I_{x_0}(ρ))^{1/q} ≥ −(C/q)(ρ^{−(d−1)} I_{x_0}(ρ))^{1/q−1}(ρ^{1−(d−1)} I′_{x_0}(ρ) + ρ^{−(d−1)} I_{x_0}(ρ)) (158)
− (1/q)(ρ^{−(d−1)} I_{x_0}(ρ))^{1/q−1} ρ^{−d} ∫_Ω (Id − s ⊗ s) : ∇Φ_{x_0,ρ} dµ.
Since we do not yet know that the generalized mean curvature vector field of the interface supp |∇χ| ⌞ Ω is q-integrable, we cannot simply proceed as in [23, Proof of Theorem 3.1] or [29, Proof of Theorem 3.2] (at least with respect to the second right-hand side term of the previous display). To circumvent this technicality, define f(ρ) := max{(ρ^{−(d−1)} I_{x_0}(ρ))^{1/q}, 1}. Noting that by choice of q ≥ d the first right-hand side term of (158) is bounded from below by the derivative of the product ρ^{1−(d−1)/q} (I_{x_0}(ρ))^{1/q}, we integrate (158) over an interval (σ, τ) where (ρ^{−(d−1)} I_{x_0}(ρ))^{1/q} ≥ 1 to find
(1 + C_q τ) f(τ) − (1 + C_q σ) f(σ) ≥ −(1/q) ∫_σ^τ |ρ^{−d} ∫_Ω (Id − s ⊗ s) : ∇Φ_{x_0,ρ} dµ| dρ. (159)
The same bound trivially holds over intervals (σ, τ) where f(ρ) = 1, and consequently, telescoping, we have the monotonicity formula (159) for all 0 < σ < τ ≪ 1.
Step 3: Local trace estimate for the chemical potential. We first post-process the preliminary monotonicity formula (159) by estimating the associated second right-hand side term involving the first variation. To this end, we recall for instance from [29, p. 147] that the test vector field Φ_{x_0,ρ} from (157) is tangential along ∂Ω. In particular, it represents an admissible choice for testing (33):

∫_Ω (Id − s ⊗ s) : ∇Φ_{x_0,ρ} dµ = ∫_Ω χ (w(∇·Φ_{x_0,ρ}) + Φ_{x_0,ρ}·∇w) dx.
We distinguish between two cases. If ρ < dist(x_0, ∂Ω), then η̃_{x_0,ρ} ≡ 0 and, by plugging in (157) as well as the bounds for η,

|ρ^{−d} ∫_Ω χ (w(∇·Φ_{x_0,ρ}) + Φ_{x_0,ρ}·∇w) dx| ≤ Cρ^{−d} ∫_{Ω∩B_ρ(x_0)} (ρ|∇w| + |w|) dx.
If instead ρ ≥ dist(x_0, ∂Ω), straightforward arguments show |η̃_{x_0,ρ} ι_x(x − x_0)| ≤ Cρ, |∇η̃_{x_0,ρ}||ι_x(x − x_0)| ≤ C, |∇ι_x(x − x_0)| ≤ C, and supp η̃_{x_0,ρ} ⊂ B̃_ρ(x_0) ⊂ B_{5ρ}(x_0), and therefore for ρ ≥ dist(x_0, ∂Ω)

|ρ^{−d} ∫_Ω χ (w(∇·Φ_{x_0,ρ}) + Φ_{x_0,ρ}·∇w) dx| ≤ Cρ^{−d} ∫_{Ω∩B_{5ρ}(x_0)} (ρ|∇w| + |w|) dx.
To control the right-hand side of the display we argue for dimension d = 3 and note that the embeddings are stronger in dimension d = 2 (see also [47, Proof of Lemma 2.1]). Extending w in H¹(Ω) to a function in H¹({x : dist(x, ∂Ω) < 2r} ∪ Ω), we let 2* = 6 be the dimension dependent Sobolev exponent and apply Hölder's inequality to find

ρ^{−d} ∫_{Ω∩B_{5ρ}(x_0)} (ρ|∇w| + |w|) dx ≤ C(ρ^{1−d/2} ‖∇w‖_{L²(B_{5ρ}(x_0))} + ρ^{−d/2*} ‖w‖_{L^{2*}(B_{5ρ}(x_0))}) ≤ Cρ^{−1/2} ‖w‖_{H¹(Ω)},

as the exponents on ρ coincide.
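The coincidence of the exponents is a one-line check (ours), for d = 3 and 2* = 6:

```latex
\rho^{-d}\cdot\rho\cdot\rho^{d/2} = \rho^{1-d/2} = \rho^{-1/2},
\qquad
\rho^{-d}\cdot\rho^{d(1-1/2^*)} = \rho^{-d/2^*} = \rho^{-3/6} = \rho^{-1/2},
```

where the powers ρ^{d/2} and ρ^{d(1−1/2*)} come from Hölder's inequality applied on the ball B_{5ρ}(x_0) of volume of order ρ^d.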
Thus, for all 0 < ρ < r/5, due to the previous case study and the estimate above,

|ρ^{−d} ∫_Ω (Id − s ⊗ s) : ∇Φ_{x_0,ρ} dµ| ≤ C ‖w‖_{H¹(Ω)} ρ^{β−1} (160)
for some β ∈ (0, 1) (accounting also for d = 2). Inserting (160) back into (159) finally yields that the function

ρ ↦ (1 + C_q ρ) max{(ρ^{−(d−1)} I_{x_0}(ρ))^{1/q}, 1} + Cβ^{−1} ‖w‖_{H¹(Ω)} ρ^β

is nondecreasing in (0, r/5). In particular, since η ≡ 1 on [0, 1/2], we obtain one-sided Ahlfors regularity for the varifold in the form

ρ^{−(d−1)} |µ|_{S^{d−1}}(B_{ρ/2}(x_0) ∩ Ω) ≤ C_{q,r,d} (1 + max{|µ|_{S^{d−1}}(Ω), ‖w‖^q_{H¹(Ω)}}) (161)

for some C_{q,r,d} ≥ 1 and all q ≥ d. The estimate (161) is sufficient to apply the trace theory (as in [39]) for the BV function |w|^s, and the asserted local estimate for the L^s-norm of the trace of the potential w on supp |µ|_{S^{d−1}} now follows as in Schätzle [47, Proof of Theorem 1.3].
Proof of Corollary 8. In view of the Gibbs-Thomson law (33) and by defining the Radon-Nikodým derivative ρ_Ω := (c_0|∇χ| ⌞ Ω)/(|µ|_{S^{d−1}} ⌞ Ω) ∈ [0, 1], we recall the fact that H_Ω := ρ_Ω (w/c_0)(∇χ/|∇χ|) ∈ L¹(Ω, d|µ|_{S^{d−1}}) represents the generalized mean curvature vector of µ with respect to tangential variations, i.e.,

∫_Ω (Id − s ⊗ s) : ∇B dµ = −∫_Ω H_Ω · B d|µ|_{S^{d−1}} for all B ∈ C¹(Ω) with (B · n_∂Ω)|_∂Ω ≡ 0.

A recent result of De Masi [17, Theorem 1.1] therefore ensures that there exists H_∂Ω ∈ L^∞(∂Ω, d|µ|_{S^{d−1}}) with the property that H_∂Ω(x) ⊥ Tan_x ∂Ω for |µ|_{S^{d−1}} ⌞ ∂Ω-a.e. x ∈ Ω, and a bounded Radon measure σ_µ ∈ M(∂Ω) such that

δµ(B) = −∫_Ω (H_Ω + H_∂Ω) · B d|µ|_{S^{d−1}} + ∫_∂Ω B · n_∂Ω dσ_µ

for all "normal variations" B ∈ C¹(Ω) in the sense that B(x) ⊥ Tan_x ∂Ω for all x ∈ ∂Ω. There moreover exists a constant C = C(Ω) > 0 (depending only on the second fundamental form of the domain boundary ∂Ω) such that

‖H_∂Ω‖_{L^∞(∂Ω, d|µ|_{S^{d−1}})} ≤ C, σ_µ(∂Ω) ≤ C(|µ|_{S^{d−1}}(Ω) + ‖H_Ω‖_{L¹(Ω, d|µ|_{S^{d−1}})}).

In particular, by a splitting argument into "tangential" and "normal" components of a general variation B ∈ C¹(Ω), we deduce that the varifold µ is of bounded first variation in Ω with representation (39). The asserted bounds (40)-(42) are finally consequences of the two bounds from the previous display, the representation of the first variation from (39), and the definition of H_Ω.
2.2. Further properties of varifold solutions. The purpose of this subsection is to collect a variety of further results complementing our main existence result, Theorem 1. Proofs of these are postponed until Subsection 3.4.

Lemma 3 (Interpretation as a De Giorgi metric slope). Let χ ∈ BV(Ω; {0, 1}) and µ ∈ M(Ω×S^{d−1})
Corollary 8 (First variation estimate up to the boundary). In the setting of Proposition 7, the varifold µ is in fact of bounded variation on Ω. More precisely, there exist H_Ω, H_∂Ω and σ_µ with the properties (39)-(42).
Lemma 10. Let χ ∈ M_0, w ∈ H^1_{(0)}, and µ ∈ M(Ω×S^{d−1}) be an oriented varifold such that

δµ(B) = ∫_Ω χ ∇·(wB) dx for all B ∈ S_χ. (64)

Then there is λ ∈ R such that

δµ(B) = ∫_Ω χ ∇·((w+λ)B) dx for all B ∈ C¹(Ω; R^d) with (B · n_∂Ω)|_∂Ω ≡ 0,

and there exists C = C(Ω, d, c_0, m_0) such that

‖w+λ‖_{H¹(Ω)} ≤ C(1 + |∇χ|(Ω))(|µ|(Ω) + ‖∇w‖_{L²(Ω)}).
, (90) and (87). Integrating (86) on (s, t), using optimality of the interpolant (60) at s in the form of f (s) ≤ E[χ(0)] = E[χ 0 ], and using monotonicity of f from (88), we have
Further by [3, Corollary 9.1.4], for the trace operator Tr supp |∇χ| defined Cap 2 quasieverywhere for H 1 (Ω)-functions, (48) is the same as ker Tr supp |∇χ|
c_0 |∇χ(·, t)| ⌞ Ω = |µ^Ω_t|_{S^{d−1}} ⌞ Ω, |µ^Ω_t|_{S^{d−1}} ⌞ ∂Ω = 0, (27)
(cos α) c_0 χ(·, t) H^{d−1} ⌞ ∂Ω = |µ^{∂Ω}_t|_{S^{d−1}} (28)
for a.e. t ∈ (0, T_*).
Acknowledgements. The authors thank Helmut Abels for pointing out his paper [2] to them and Olli Saari for insightful discussion. This project has received funding from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2047/1 - 390685813.
On sharp interface limits for diffuse interface models for two-phase flows. H Abels, D Lengeler, Interfaces Free Bound. 163H. Abels and D. Lengeler. On sharp interface limits for diffuse interface models for two-phase flows. Interfaces Free Bound., 16(3):395-418, 2014.
Existence of weak solutions for a non-classical sharp interface model for a two-phase flow of viscous, incompressible fluids. H Abels, M Röger, Ann. Inst. H. Poincaré Anal. Non Linéaire. 266H. Abels and M. Röger. Existence of weak solutions for a non-classical sharp interface model for a two-phase flow of viscous, incompressible fluids. Ann. Inst. H. Poincaré Anal. Non Linéaire, 26(6):2403-2424, 2009.
Function Spaces and Potential Theory. Grundlehren der mathematischen Wissenschaften. D R Adams, L I Hedberg, SpringerBerlin; HeidelbergD. R. Adams and L. I. Hedberg. Function Spaces and Potential Theory. Grundlehren der mathematischen Wissenschaften. Springer Berlin, Heidelberg, 1996.
Convergence of the Cahn-Hilliard equation to the Hele-Shaw model. N D Alikakos, P W Bates, X Chen, Arch. Ration. Mech. Anal. 1282N. D. Alikakos, P. W. Bates, and X. Chen. Convergence of the Cahn-Hilliard equation to the Hele-Shaw model. Arch. Ration. Mech. Anal., 128(2):165-205, 1994.
On the first variation of a varifold. W K Allard, Ann. Math. 953417W. K. Allard. On the first variation of a varifold. Ann. Math., 95(3):417, 1972.
Functions of Bounded Variation and Free Discontinuity Problems (Oxford Mathematical Monographs). L Ambrosio, N Fusco, D Pallara, Oxford University PressL. Ambrosio, N. Fusco, and D. Pallara. Functions of Bounded Variation and Free Disconti- nuity Problems (Oxford Mathematical Monographs). Oxford University Press, 2000.
L Ambrosio, N Gigli, G Savaré, Gradient Flows. BaselBirkhäuser-VerlagL. Ambrosio, N. Gigli, and G. Savaré. Gradient Flows. Birkhäuser-Verlag, Basel, 2005.
Un théoreme de compacité. J.-P Aubin, C. R. Acad. Sci. Paris. 25624J.-P. Aubin. Un théoreme de compacité. C. R. Acad. Sci. Paris, 256(24):5042-5044, 1963.
The dynamics of nucleation for the Cahn-Hilliard equation. P Bates, P Fife, SIAM J. Appl. Math. 53P. Bates and P. Fife. The dynamics of nucleation for the Cahn-Hilliard equation. SIAM J. Appl. Math., 53, 1993.
Local minimization, variational evolution and Γ-convergence. A Braides, Lecture Notes in Mathematics. 2094SpringerA. Braides. Local minimization, variational evolution and Γ-convergence, volume 2094 of Lecture Notes in Mathematics. Springer, Cham, 2014.
Mullins-Sekerka as the Wasserstein flow of the perimeter. A Chambolle, T Laux, Proc. Amer. Math. Soc. 1497A. Chambolle and T. Laux. Mullins-Sekerka as the Wasserstein flow of the perimeter. Proc. Amer. Math. Soc., 149(7):2943-2956, 2021.
Traces and extensions of bounded divergence-measure fields on rough open sets. G Chen, Q Li, M Torres, Indiana Univ. Math. J. 691G. Chen, Q. Li, and M. Torres. Traces and extensions of bounded divergence-measure fields on rough open sets. Indiana Univ. Math. J., 69(1):229-264, 2020.
Generation and propagation of interfaces for reaction-diffusion equations. X Chen, 10.1016/0022-0396(92)90146-EJ. Differential Equations. 961X. Chen. Generation and propagation of interfaces for reaction-diffusion equations. J. Dif- ferential Equations, 96(1):116-141, 1992. doi:10.1016/0022-0396(92)90146-E.
Global asymptotic limit of solutions of the Cahn-Hilliard equation. X Chen, J. Differential Geom. 442X. Chen. Global asymptotic limit of solutions of the Cahn-Hilliard equation. J. Differential Geom., 44(2):262-311, 1996.
A Course in Functional Analysis. J B Conway, Graduate Texts in Mathematics. SpringerJ. B. Conway. A Course in Functional Analysis. Graduate Texts in Mathematics. Springer New York, NY, 2007.
An introduction to Γ-convergence. G Maso, Progress in Nonlinear Differential Equations and their Applications. Boston, MABirkhäuser Boston, Inc8G. Dal Maso. An introduction to Γ-convergence. Progress in Nonlinear Differential Equations and their Applications, 8. Birkhäuser Boston, Inc., Boston, MA, 1993.
Rectifiability of the free boundary for varifolds. L De Masi, Indiana Univ. Math. J. 70L. De Masi. Rectifiability of the free boundary for varifolds. Indiana Univ. Math. J., 70:2603- 2651, 2021.
Partial differential equations. L Evans, Graduate Studies in Mathematics. 19American Mathematical SocietyL. Evans. Partial differential equations, volume 19 of Graduate Studies in Mathematics. American Mathematical Society, Providence, RI, 1998.
Weak-strong uniqueness for the Navier-Stokes equation for two fluids with surface tension. J Fischer, S Hensel, Arch. Ration. Mech. Anal. 2362J. Fischer and S. Hensel. Weak-strong uniqueness for the Navier-Stokes equation for two fluids with surface tension. Arch. Ration. Mech. Anal., 236(2):967-1087, 2020.
A weak-strong uniqueness principle for the Mullins-Sekerka equation. J Fischer, S Hensel, T Laux, T Simon, 2022In preparationJ. Fischer, S. Hensel, T. Laux, and T. Simon. A weak-strong uniqueness principle for the Mullins-Sekerka equation. In preparation, 2022.
The local structure of the energy landscape in multiphase mean curvature flow: Weak-strong uniqueness and stability of evolutions. J Fischer, S Hensel, T Laux, T M Simon, arXiv:2003.05478v22020arXiv preprintJ. Fischer, S. Hensel, T. Laux, and T. M. Simon. The local structure of the energy landscape in multiphase mean curvature flow: Weak-strong uniqueness and stability of evolutions. arXiv preprint, 2020. arXiv:2003.05478v2.
Curvature driven interface evolution. H Garcke, Jahresber Dtsch Math-Ver. 1152H. Garcke. Curvature driven interface evolution. Jahresber Dtsch Math-Ver, 115(2):63-100, 2013.
Allard type regularity results for varifolds with free boundaries. M Grüter, J Jost, Ann. Scuola Norm. Sup. Pisa Cl. Sci. 134M. Grüter and J. Jost. Allard type regularity results for varifolds with free boundaries. Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4), 13(1):129-169, 1986.
BV solutions to mean curvature flow with constant contact angle: Allen-Cahn approximation and weak-strong uniqueness. In (minor) revision at Indiana Univ. S Hensel, T Laux, arXiv:2112.11150Math. J. S. Hensel and T. Laux. BV solutions to mean curvature flow with constant contact angle: Allen-Cahn approximation and weak-strong uniqueness. In (minor) revision at Indiana Univ. Math. J., 2021. arXiv:2112.11150.
A new varifold solution concept for mean curvature flow: Convergence of the Allen-Cahn equation and weak-strong uniqueness. S Hensel, T Laux, arXiv:2109.042332021arXiv preprintS. Hensel and T. Laux. A new varifold solution concept for mean curvature flow: Con- vergence of the Allen-Cahn equation and weak-strong uniqueness. arXiv preprint, 2021. arXiv:2109.04233.
Weak-strong uniqueness for the mean curvature flow of double bubbles. To appear at Interfaces Free Bound. S Hensel, T Laux, arXiv:2108.017332021S. Hensel and T. Laux. Weak-strong uniqueness for the mean curvature flow of double bubbles. To appear at Interfaces Free Bound., 2021. arXiv:2108.01733.
Weak-strong uniqueness for the Navier-Stokes equation for two fluids with ninety degree contact angle and same viscosities. In (minor) revision at. S Hensel, A Marveggio, arXiv:2112.11154J. Math. Fluid Mech. S. Hensel and A. Marveggio. Weak-strong uniqueness for the Navier-Stokes equation for two fluids with ninety degree contact angle and same viscosities. In (minor) revision at J. Math. Fluid Mech., 2021. arXiv:2112.11154.
Convergence of the Allen-Cahn equation to Brakke's motion by mean curvature. T Ilmanen, J. Differential Geom. 382T. Ilmanen. Convergence of the Allen-Cahn equation to Brakke's motion by mean curvature. J. Differential Geom., 38(2):417-461, 1993.
A fixed contact angle condition for varifolds. T Kagaya, Y Tonegawa, Hiroshima Math. J. 472T. Kagaya and Y. Tonegawa. A fixed contact angle condition for varifolds. Hiroshima Math. J., 47(2):139-153, 2017.
The Hele-Shaw flow as the sharp interface limit of the Cahn-Hilliard equation with disparate mobilities. M Kroemer, T Laux, arXiv:2111.145052021arXiv preprintM. Kroemer and T. Laux. The Hele-Shaw flow as the sharp interface limit of the Cahn-Hilliard equation with disparate mobilities. arXiv preprint, 2021. arXiv:2111.14505.
A Gamma-convergence approach to the Cahn-Hilliard equation. N Q Le, Calc. Var. Partial Differential Equations. 324N. Q. Le. A Gamma-convergence approach to the Cahn-Hilliard equation. Calc. Var. Partial Differential Equations, 32(4):499-522, 2008.
On the convergence of the Ohta-Kawasaki equation to motion by nonlocal Mullins-Sekerka law. N Q Le, SIAM J. Math. Anal. 424N. Q. Le. On the convergence of the Ohta-Kawasaki equation to motion by nonlocal Mullins- Sekerka law. SIAM J. Math. Anal., 42(4):1602-1638, 2010.
Non-Homogeneous Boundary Value Problems and Applications, I, volume 181 of Grundlehren der mathematischen Wissenschaften. J Lions, E Magenes, Springer-VerlagBerlin HeidelbergJ. Lions and E. Magenes. Non-Homogeneous Boundary Value Problems and Applications, I, volume 181 of Grundlehren der mathematischen Wissenschaften. Springer-Verlag Berlin Heidelberg, 1972.
Quelques méthodes de résolution des problemes aux limites non linéaires. J.-L Lions, Dunod Paris23J.-L. Lions. Quelques méthodes de résolution des problemes aux limites non linéaires. Dunod Paris, 23, 1969.
Implicit time discretization for the mean curvature flow equation. S Luckhaus, T Sturzenhecker, Calc. Var. Partial Differential Equations. 32S. Luckhaus and T. Sturzenhecker. Implicit time discretization for the mean curvature flow equation. Calc. Var. Partial Differential Equations, 3(2):253-271, 1995.
A rigorous sharp interface limit of a diffuse interface model related to tumor growth. S Melchionna, E Rocca, J. Nonlinear Sci. 27S. Melchionna and E. Rocca. A rigorous sharp interface limit of a diffuse interface model related to tumor growth. J. Nonlinear Sci., 27:847-872, 2017.
Varifold solutions of a sharp interface limit of a diffuse interface model for tumor growth. S Melchionna, E Rocca, Interfaces Free Bound. 194S. Melchionna and E. Rocca. Varifold solutions of a sharp interface limit of a diffuse interface model for tumor growth. Interfaces Free Bound., 19(4):571-590, 2017.
Integral inequalities of Poincaré and Wirtinger type. N G Meyers, Arch. Rational Mech. Anal. 682N. G. Meyers. Integral inequalities of Poincaré and Wirtinger type. Arch. Rational Mech. Anal., 68(2):113-120, 1978.
Integral inequalities of Poincaré and Wirtinger type for BV functions. N G Meyers, W P Ziemer, Amer. J. Math. 996N. G. Meyers and W. P. Ziemer. Integral inequalities of Poincaré and Wirtinger type for BV functions. Amer. J. Math., 99(6):1345-1360, 1977.
The gradient theory of phase transitions and the minimal interface criterion. L Modica, Arch. Rational Mech. Anal. 982L. Modica. The gradient theory of phase transitions and the minimal interface criterion. Arch. Rational Mech. Anal., 98(2):123-142, 1987.
Gradient theory of phase transitions with boundary contact energy. L Modica, Annales de l'I.H.P. Analyse non linéaire. 45L. Modica. Gradient theory of phase transitions with boundary contact energy. Annales de l'I.H.P. Analyse non linéaire, 4(5):487-512, 1987.
Un esempio di Γ-convergenza. L Modica, S Mortola, Boll. Un. Mat. Ital. B. 145L. Modica and S. Mortola. Un esempio di Γ-convergenza. Boll. Un. Mat. Ital. B (5), 14(1):285-299, 1977.
Front migration in the nonlinear Cahn-Hilliard equation. R L Pego, Proc. R. Soc. A: Math. Phys. Eng. Sci. 422R. L. Pego. Front migration in the nonlinear Cahn-Hilliard equation. Proc. R. Soc. A: Math. Phys. Eng. Sci., 422:261 -278, 1989.
Solutions for the Stefan problem with Gibbs-Thomson law by a local minimisation. M Röger, Interfaces Free Bound. 61M. Röger. Solutions for the Stefan problem with Gibbs-Thomson law by a local minimisation. Interfaces Free Bound., 6(1):105-133, 2004.
Existence of weak solutions for the Mullins-Sekerka flow. M Röger, SIAM J. Math. Anal. 371M. Röger. Existence of weak solutions for the Mullins-Sekerka flow. SIAM J. Math. Anal., 37(1):291-301, 2005.
Gamma-convergence of gradient flows with applications to Ginzburg-Landau. E Sandier, S Serfaty, Comm. Pure Appl. Math. 5712E. Sandier and S. Serfaty. Gamma-convergence of gradient flows with applications to Ginzburg-Landau. Comm. Pure Appl. Math., 57(12):1627-1672, 2004.
Hypersurfaces with mean curvature given by an ambient Sobolev function. R Schätzle, J. Differential Geom. 583R. Schätzle. Hypersurfaces with mean curvature given by an ambient Sobolev function. J. Differential Geom., 58(3):371-420, 2001.
Gamma-convergence of gradient flows on Hilbert and metric spaces and applications. S Serfaty, Discrete Contin. Dyn. Syst. -A. 314S. Serfaty. Gamma-convergence of gradient flows on Hilbert and metric spaces and applica- tions. Discrete Contin. Dyn. Syst. -A, 31(4):1427-1451, 2011.
Compact sets in the space L p. J Simon, O, TB). Annali di Matematica Pura ed Applicata146J. Simon. Compact sets in the space L p (O, T ; B). Annali di Matematica Pura ed Applicata, 146:65-96, 01 1986.
Analysis of a Variational Model for Lithium-Ion Batteries. K Stinson, Carnegie Mellon UniversityPhD dissertationK. Stinson. Analysis of a Variational Model for Lithium-Ion Batteries. PhD dissertation, Carnegie Mellon University, 2021.
| [] |
[
"Dipole moments of the Electron, Neutrino and Neutron in the MSSM without R-parity Symmetry",
"Dipole moments of the Electron, Neutrino and Neutron in the MSSM without R-parity Symmetry"
] | [
"S A Abel \nService de Physique Théorique\nCEA-SACLAY\n91191Gif-sur-YvetteFrance\n",
"A Dedes \nRutherford Appleton Laboratory\nOX11 0QXChilton, DidcotUK\n",
"H K Dreiner \nRutherford Appleton Laboratory\nOX11 0QXChilton, DidcotUK\n"
] | [
"Service de Physique Théorique\nCEA-SACLAY\n91191Gif-sur-YvetteFrance",
"Rutherford Appleton Laboratory\nOX11 0QXChilton, DidcotUK",
"Rutherford Appleton Laboratory\nOX11 0QXChilton, DidcotUK"
] | [] | We show that in the MSSM without R-parity symmetry there are no new contributions to electron and neutron electric dipole moments (EDMs) at 1-loop induced by the R-parity violating Yukawa couplings. Non-zero EDMs for the electron and neutron first arise at the 2-loop level. As an example we estimate the contribution of a two-loop graph which induces electron EDMs. On the other hand, we show that the (Majorana) neutrino electric and magnetic transition moments are non-zero even at the 1-loop level. Constraints on the R-parity violating couplings are derived from the existing bounds on the neutrino dipole moments. | 10.1088/1126-6708/2000/05/013 | [
"https://export.arxiv.org/pdf/hep-ph/9912429v2.pdf"
] | 15,232,761 | hep-ph/9912429 | d6879f9d88b05a15f17934aedd32aa8f70691bad |
Dipole moments of the Electron, Neutrino and Neutron in the MSSM without R-parity Symmetry
21 Dec 1999 March 27, 2022
S A Abel
Service de Physique Théorique
CEA-SACLAY
91191Gif-sur-YvetteFrance
A Dedes
Rutherford Appleton Laboratory
OX11 0QXChilton, DidcotUK
H K Dreiner
Rutherford Appleton Laboratory
OX11 0QXChilton, DidcotUK
Dipole moments of the Electron, Neutrino and Neutron in the MSSM without R-parity Symmetry
21 Dec 1999 March 27, 2022
We show that in the MSSM without R-parity symmetry there are no new contributions to electron and neutron electric dipole moments (EDMs) at 1-loop induced by the R-parity violating Yukawa couplings. Non-zero EDMs for the electron and neutron first arise at the 2-loop level. As an example we estimate the contribution of a two-loop graph which induces electron EDMs. On the other hand, we show that the (Majorana) neutrino electric and magnetic transition moments are non-zero even at the 1-loop level. Constraints on the R-parity violating couplings are derived from the existing bounds on the neutrino dipole moments.
Introduction
The electric dipole moment (EDM) d f , and magnetic dipole moment (MDM) µ f , of a spin-1/2 particle can be defined by the form factors appearing in the decomposition of the matrix element of the electromagnetic current [1]:
⟨f(p′)|J_µ|f(p)⟩ = ū(p′) Γ^{γff}_µ(q) u(p), (1)
where
Γ^{γff}_µ(q) = ie { γ_µ [V_f(q²) − A_f(q²) γ_5] + q^ν σ_{µν} [i µ_f(q²)/e − (d_f(q²)/e) γ_5] }, (2)
and q = p′ − p. This formula arises after making use of Lorentz invariance, the Gordon identities and the fact that the external photons and fermions are on-shell. The operator which violates CP-symmetry, L_EDM = −(i/2) d_f ψ̄ σ^{µν} γ_5 ψ F_{µν}, is non-renormalizable and of dimension five. It reduces to the effective dipole interaction L_EDM = d_f σ · E in the non-relativistic limit.
Experimental searches for electron and neutron EDMs currently provide some of the most severe constraints on new models of particle physics:
|d_e| ≤ 4.3 × 10⁻²⁷ e cm [2], (3)
|d_n| ≤ 6.3 × 10⁻²⁶ e cm [3]. (4)
All of the contributions to the EDMs or MDMs must be ultraviolet finite because they are non-renormalizable interactions. In addition, the interactions flip the chirality of the external fermions and thus break SU(2)_L invariance. The chirality flip then comes from the fermion masses, which in turn come from the spontaneous breakdown of electroweak gauge symmetry. By itself this is able to generate MDMs but not EDMs, for which CP-violation is needed. In the SM the required source of CP-violation resides in the complexity of the Yukawa couplings, which is parameterized by the CKM-phase. However, the CKM-phase has only a tiny contribution of 10⁻³⁰ e cm to the neutron EDM [1,5]. The CKM-phase can also penetrate the lepton sector and generate EDMs for the leptons at higher loops, but it has been shown that these contributions vanish to three loops [6]. Consequently EDMs are a sensitive test of CP violation beyond the SM. If neutrinos are massive (as they indeed appear to be) then CP-symmetry can also be violated in the leptonic sector and one expects neutrino EDMs as well. These EDMs are induced by either a CKM-like phase for Dirac neutrinos or by the three phases for Majorana neutrinos in the leptonic mixing matrix. Since one needs a chirality flip at the EM vertex in order to generate EDMs, Majorana neutrinos cannot have diagonal EDMs or MDMs. They can only have transition electric or magnetic dipole moments [7], i.e. a photon vertex associated with two different neutrino flavours. The experimental bounds on the neutrino dipole moments are divided into two categories. The "Earth bound" constraints:
|µ_ν| ≤ 1.5 × 10⁻¹⁰ µ_B [8], (5)
|d_{ν_τ}| ≤ 5.2 × 10⁻¹⁷ e cm [9], (6)

and the cosmological ones:

|µ_ν| ≤ 3 × 10⁻¹² µ_B [10], (7)
|d_ν| ≤ 2.5 × 10⁻²² e cm [11]. (8)
These bounds can be used either for Dirac (diagonal EDMs or MDMs) or for Majorana (transition EDMs or MDMs) neutrinos.
In the SM the EDMs for the leptons due to a possible CP-violation in the leptonic sector are too small to be significant. There is a tendency for the various contributions to cancel or to be proportional to V l V * l . The MDMs of the Dirac neutrinos in the SM with a right handed singlet were calculated almost twenty years ago [12], and it was found that only a tiny loop induced magnetic moment µ ν ≃ 3 × 10 −19 µ B (m ν /1eV) arises. Thus we conclude that in the SM the corrections to the electron, neutron and neutrino EDMs and MDMs are very small and are consistent with the data.
In the MSSM, apart from the CKM-phase or possible CP-violation in the leptonic sector, there are additional sources of CP-violation [13]. The soft breaking masses and couplings can in general be complex, and their phases generate electron and neutron EDMs even at the one-loop level in diagrams involving internal squark/sleptons, charginos or gluinos [14]. The current experimental bounds on the EDMs constrain those phases to be less than ∼ 10 −2 , unless the phases are flavour off-diagonal, there are cancellations between various contributions or the superpartner masses are of order of 1 TeV [15]. In the case of the neutrino MDM only small corrections have been found in the MSSM with conserved R-parity symmetry [16] and the result turns out to be similar to the case of the SM (∼ 10 −19 µ B ).
Models that violate R-parity can in principle induce additional contributions to all these parameters [17]. In this paper we examine the effect of breaking R-parity on CP violating parameters. We extend previous results by presenting a complete calculation of electron, neutron and neutrino EDM/MDMs in the MSSM without R-parity symmetry.
Before tackling the analysis in detail, we should make clear where our results differ from the previous estimates appearing in the literature. First, we show that any new contributions to the electron and neutron EDMs are small and in fact appear only at two loops. In particular this means that constraints are only on products of 4 or more R-parity violating couplings. This result is in disagreement with the existing analysis in the literature [18].
Contributions to neutrino EDMs and MDMs can occur at one-loop, however. Babu and Mohapatra [19] were the first to consider the MDM of the neutrino in the context of the MSSM with broken R-parity symmetry. They found contributions of order ∼ 10 −11 µ B from the new loop graphs and thus a possible solution to the Solar neutrino problem through the mechanism suggested in ref. [20]. We find results that are smaller than theirs by a factor of 8. Barbieri et. al [21] have also calculated the neutrino MDMs and although we agree numerically with their result to within an order of magnitude, we find that their formula for the neutrino MDMs is unclear. For example, it is not obvious from their analysis that the diagonal neutrino MDM vanishes. Finally, very recently the neutrino MDMs were calculated in ref. [22] in a notation (mass insertion) following closely that of ref. [21]. We agree numerically to within an order of magnitude with these previous estimates of the MDMs. Here we also consider for the first time neutrino EDMs and determine the corresponding bounds.
Electron and neutron EDMs
Despite claims in the literature to the contrary [18], the leading contributions to EDMs occur at two loops. In this section we show this using a combination of inspection and power counting arguments.
First consider the extra contributions to the electron EDM from the λLLE interactions. The EDM is found from the q → 0 limit of d_f in the matrix element of eq. (1). Hence the relevant diagrams have one external left-handed fermion, one external right-handed fermion, and one photon. Let us denote the number of chiral superfields in the diagram by n_L and n_E, and the number of antichiral superfields by n_{L*} and n_{E*}. Adding n_λ of the LLE vertices to a diagram adds 2n_λ to n_L and n_λ to n_E. In addition we allow D_L and D_E propagators of the respective fields. Each L propagator removes one L and one L*, and similarly for the E propagators. Finally, we allow D_m propagators with a mass insertion, which changes the helicity on a line. Each of these removes one L* and one E*. (We also allow D_{m*} conjugate propagators.) Finally we note that gauge boson insertions do not change the number of E, E*, L or L* fields.
Insisting that the final diagram has n_L = 1, n_E = 1, n_{L*} = 0, n_{E*} = 0 yields four equations:

n_L = 1 = 2n_λ − D_L − D_{m*},
n_E = 1 = n_λ − D_E − D_{m*},
n_{L*} = 0 = 2n_{λ*} − D_L − D_m,
n_{E*} = 0 = n_{λ*} − D_E − D_m, (9)

giving

n_λ = n_{λ*}, D_m = D_{m*} + 1, (10)

i.e. we need at least one mass insertion to flip helicity. Calling the number of non-gauge vertices V = n_λ + n_{λ*}, the number of non-gauge propagators I = D_L + D_E + D_m + D_{m*} and the number of non-gauge external legs E = 2, we can now use the standard power counting result

E = 3V − 2I, L = I − V + 1, (11)

where L is the number of loops. A little more algebra then gives

L = n_λ, I = 3n_λ − 1. (12)
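The bookkeeping in (9)-(12) is mechanical and can be verified symbolically; a minimal sketch of ours (the variable names are our own, not from the paper):

```python
from sympy import symbols, solve, Eq

# Consistency check of the power counting in eqs. (9)-(12).
n_lam, n_lams, D_L, D_E, D_m, D_ms = symbols(
    'n_lambda n_lambda_star D_L D_E D_m D_m_star', integer=True)

eqs = [Eq(2*n_lam - D_L - D_ms, 1),   # n_L  = 1
       Eq(n_lam - D_E - D_ms, 1),     # n_E  = 1
       Eq(2*n_lams - D_L - D_m, 0),   # n_L* = 0
       Eq(n_lams - D_E - D_m, 0)]     # n_E* = 0

sol = solve(eqs, [n_lams, D_L, D_E, D_m], dict=True)[0]
print(sol)   # yields n_lambda_star = n_lambda and D_m = D_m_star + 1, eq. (10)

# loop counting, eqs. (11)-(12): V = n_lam + n_lams, E = 3V - 2I, L = I - V + 1
V = n_lam + sol[n_lams]
I = sol[D_L] + sol[D_E] + sol[D_m] + D_ms   # total internal propagators
print(I.simplify(), (I - V + 1).simplify()) # 3*n_lambda - 1 and L = n_lambda
```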
Now consider one loop diagrams. The above tells us that they must have n_λ = n_{λ*} = 1 and that they must include at least one mass insertion. Inspection now shows that there are no irreducible diagrams of this kind that can give a contribution. Indeed, let us consider the contribution from the apparently offending diagram shown in fig. 1.

Figure 1: A one-loop diagram for neutron EDMs showing explicitly the required helicity flips. This diagram does not contribute to the EDM in R-parity violating models of supersymmetry because of the absence of the crossed out vertex.

The relevant interaction terms come from

δL = (1/2) λ_{ijk} [ẽ_{Lj} ē_k P_L ν_i + ν̃_{Li} ē_k P_L e_j + ẽ*_{Rk} ν̄^c_i P_L e_j − (i ↔ j)] + h.c.
+ λ′_{ijk} [d̃_{Lj} d̄_k P_L ν_i + ν̃_{Li} d̄_k P_L d_j + d̃*_{Rk} ν̄^c_i P_L d_j − ũ_{Lj} d̄_k P_L e_i − ẽ_{Li} d̄_k P_L u_j − d̃*_{Rk} ē_i P_L u_j] + h.c. (13)
where P L are projection operators. In general, for Lagrangians of the form
φ* (a ψ̄_2 P_R ψ_1 + b ψ̄_2 P_L ψ_1) (14)
where ψ and φ are generic fermions and scalars, the one loop contributions to the fermion EDMs are proportional to Im(a*b) (see the third reference in [14]). Thus the same scalar has to couple to both left and right helicities of a given fermion. For example, in the MSSM the one loop diagram with an internal chargino gives a contribution to the down EDM because it contains both h̃_1, which couples to ũ d_R, and h̃_2, which couples to ũ d_L (note the helicity flip required on the up-squark). There is an additional contribution in which, instead of h̃_2, the wino couples to ũ d_L. On the other hand the gluino gives a one loop contribution because it is a chargeless particle that can couple to both helicities thanks to its large Majorana mass. Indeed one can check that these three EDM contributions vanish if µH_1H_2 = 0, g_2 = 0 and m_g̃ = 0. Considering the R-parity violating one-loop diagrams with an internal selectron, it is clear that, since there are no interactions that involve u_R, there can be no contribution to the u or d EDMs. (Note that there are extensions of the MSSM that do include such an interaction; however these also involve additional multiplets such as isosinglet down quarks coming from the 27 of E_6 [23].) Likewise, λ′ only couples the electron to u_L or d_L (and their conjugates), so that there is also no contribution to the electron EDM from the λ′ vertex. The diagrams with internal (s)neutrinos can give EDMs only if (like the gluino) the neutrino has a large (∆L = 2) Majorana mass, which of course it does not. Finally we see that the only P_R projection from the λ vertex acts on the neutrino, so that this vertex is also unable to contribute to electron or quark EDMs.
In fact the first EDM contributions occur at two loops and hence must have at least 4 λ or λ ′ vertices. Examples of the leading diagrams are shown in fig. 2, where the additional photon line may be attached to any internal (s)electron or (s)quark. The EDM can be found by extracting the leading linear term in q. For the example where the photon line is attached to the internal electron, we find
Γ µ = e ijlmn m l λ 1mn λ * jln λ * iml λ ij1 e α e β × d 4 p (2π) 4 d 4 k (2π) 4 1 k 2 − m 2 e L 1 p 2 − m 2 e R ×q ρ 2γ ρ γ σ p σ (k + p) µ k 2 ((k + p) 2 − m 2 l ) 2 p 2 + 4γ µ γ σ p σ (k + p) ρ k 2 ((k + p) 2 − m 2 l ) 3 + 4γ σ γ ν p σ k ν k ρ (k + p) µ k 4 ((k + p) 2 − m 2 l )p 2 + γ σ γ µ γ ρ γ ν k σ p ν k 2 ((k + p) 2 − m 2 l )p 2 αγ P L γβ (15)
The F 3 term comes by, for example, writing γ µ γ ν = −iσ µν + g µν . The evaluation of this integral by numerical methods is particularly difficult due to the presence of a kinematical singularity [24,25]. Instead we note that the full calculation is similar to that of the anomalous magnetic moment of the muon presented in ref. [25]. For the present paper it is therefore sufficient to estimate the resulting EDM as
d e = ijlmn Im e (4π) 4 mẽ2 (a + b log z l )m l λ 1mn λ * jln λ * iml λ ij1(16)
where
z l = m 2 l /mẽ2,(17)
and a and b are constants of O(1 − 10). Putting in numbers we find that
d e ≈ 1.4 × 10 −22 ecm 100 GeV mẽ 2 × Im ijlmn (a + b log(z l )) m l m τ λ 1mn λ * jln λ * iml λ ij1 . (18)
Comparing this number with the experimental constraint of d e < 10 −28 [2] we find a bound
Im ijlmn m l m τ λ 1mn λ * jln λ * iml λ ij1 < ∼ 10 −6 ,(19)
where we conservatively take a, b = 1. The equivalent diagram for the neutron yields
d_n ≈ 1.4 × 10⁻²⁰ e cm (100 GeV / m_ẽ)² × Im Σ_{ijlmn} (m_l / m_t) λ_{1mn} λ*_{jln} λ*_{iml} λ_{ij1}, (20)

and comparing with experiment gives

Im Σ_{ijlmn} (m_l / m_t) λ_{1mn} λ*_{jln} λ*_{iml} λ_{ij1} ≲ 3 × 10⁻⁶. (21)
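Both numerical prefactors quoted in (18) and (20) can be reproduced from the stated mass ratios; a quick back-of-envelope check of ours (using ħc to convert GeV⁻¹ to cm):

```python
import math

# d ~ e/(4 pi)^4 * m_fermion / m_selectron^2, converted to e*cm via hbar*c.
hbarc = 1.9733e-14            # GeV * cm
m_sel = 100.0                 # GeV, scalar mass scale
m_tau, m_top = 1.777, 174.0   # GeV

for m_f in (m_tau, m_top):
    d = m_f / m_sel**2 / (4.0 * math.pi) ** 4 * hbarc
    print(f"{d:.2e} e*cm")    # ~1.4e-22 and ~1.4e-20, as quoted
```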
Given the strong bounds already existing on products of couplings [26], it is clear that these constraints from the neutron and electron EDMs are far less important than previously estimated [18]. In particular, since they involve products of 4 couplings it is also clear that they should easily be satisfied within any particular model.
Neutrino MDMs and EDMs
The exception in the discussion of the previous section was the neutrino which can get E(M)DM contributions even at one-loop. The corresponding diagrams contributing to the neutrino MDM and EDM are shown in fig. 3. The neutrino MDMs and EDMs in the MSSM without R-parity can be written as:
µ^ν_{ij} = (e/32π²) ℜe { Σ_{a=1}^{2} (U_{ẽ_k})_{1a} (U*_{ẽ_k})_{2a} Σ_{l,k=1}^{3} (λ_{ikl} λ_{jlk} − λ_{jkl} λ_{ilk}) m_{e_l} f(m²_{e_l}, m²_{ẽ_{k_a}})
+ Σ_{a=1}^{2} (U_{d̃_k})_{1a} (U*_{d̃_k})_{2a} Σ_{l,k=1}^{3} (λ′_{ikl} λ′_{jlk} − λ′_{jkl} λ′_{ilk}) m_{d_l} f(m²_{d_l}, m²_{d̃_{k_a}}) } (22)

d^ν_{ij} = −(e/32π²) ℑm { Σ_{a=1}^{2} (U_{ẽ_k})_{1a} (U*_{ẽ_k})_{2a} Σ_{l,k=1}^{3} (λ_{ikl} λ_{jlk} − λ_{jkl} λ_{ilk}) m_{e_l} f(m²_{e_l}, m²_{ẽ_{k_a}})
+ Σ_{a=1}^{2} (U_{d̃_k})_{1a} (U*_{d̃_k})_{2a} Σ_{l,k=1}^{3} (λ′_{ikl} λ′_{jlk} − λ′_{jkl} λ′_{ilk}) m_{d_l} f(m²_{d_l}, m²_{d̃_{k_a}}) } (23)
where the function f(x, y) is

f(x, y) = (1/y)(2 + ln(y/x)). (24)

The matrix U_ẽ (U_d̃) diagonalizes the slepton (down squark) mass matrix and is given in terms of the mixing angle θ_{ẽ_i} (θ_{d̃_i}) by

U_{ẽ_i/d̃_i} = [ cos θ_{ẽ_i/d̃_i}   −sin θ_{ẽ_i/d̃_i}
                sin θ_{ẽ_i/d̃_i}    cos θ_{ẽ_i/d̃_i} ]. (25)
If the SUSY soft-breaking masses of the slepton (squark) doublets, m_{L̃_i} (m_{Q̃_i}, m_{Ũ_i}), are equal to those of the right-handed singlets, m_{ẽ_i} (m_{d̃_i}), then to a very good approximation (the D-term contributions to the mixing angle are always small) we have sin 2θ_{ẽ_i/d̃_i} ≃ 1. Motivated also by the fact that the bounds on R-parity violating couplings are usually given using this simplification, we shall henceforth impose it. Thus, by expanding the sum over the mass eigenstates of the sleptons (squarks), we obtain
$$\mu^\nu_{ij} = \frac{e}{64\pi^2}\,\mathrm{Re}\Biggl[\sum_{l,k=1}^{3}\bigl(\lambda_{ikl}\lambda_{jlk}-\lambda_{jkl}\lambda_{ilk}\bigr)\,m_{e_l}\Bigl(f\bigl(m^2_{e_l},m^2_{\tilde e_{k_1}}\bigr)-f\bigl(m^2_{e_l},m^2_{\tilde e_{k_2}}\bigr)\Bigr) + \sum_{l,k=1}^{3}\bigl(\lambda'_{ikl}\lambda'_{jlk}-\lambda'_{jkl}\lambda'_{ilk}\bigr)\,m_{d_l}\Bigl(f\bigl(m^2_{d_l},m^2_{\tilde d_{k_1}}\bigr)-f\bigl(m^2_{d_l},m^2_{\tilde d_{k_2}}\bigr)\Bigr)\Biggr] \qquad (26)$$

$$d^\nu_{ij} = -\frac{e}{64\pi^2}\,\mathrm{Im}\Biggl[\sum_{l,k=1}^{3}\bigl(\lambda_{ikl}\lambda_{jlk}-\lambda_{jkl}\lambda_{ilk}\bigr)\,m_{e_l}\Bigl(f\bigl(m^2_{e_l},m^2_{\tilde e_{k_1}}\bigr)-f\bigl(m^2_{e_l},m^2_{\tilde e_{k_2}}\bigr)\Bigr) + \sum_{l,k=1}^{3}\bigl(\lambda'_{ikl}\lambda'_{jlk}-\lambda'_{jkl}\lambda'_{ilk}\bigr)\,m_{d_l}\Bigl(f\bigl(m^2_{d_l},m^2_{\tilde d_{k_1}}\bigr)-f\bigl(m^2_{d_l},m^2_{\tilde d_{k_2}}\bigr)\Bigr)\Biggr] \qquad (27)$$
Note that the diagrams in the first row of fig. 3 differ from those in the second row by a sign (due to the photon vertex) and by an interchange of the indices i ↔ j (see eqs. (22), (23)).
Some remarks are in order here:
• The diagonal elements of the MDMs and EDMs of the neutrinos vanish, i.e., μ^ν_{ii} = d^ν_{ii} = 0. This is of course a general statement for Majorana neutrinos; i ≠ j is assumed below.
• For k = l the contributions to eqs. (26), (27), for both the neutrino MDMs and EDMs, vanish.
• If one R-parity violating coupling dominates over the others, then again the contributions to the neutrino MDMs and EDMs vanish. It is known [26] that even if we assume one coupling at a time at the GUT scale, a number of lepton-number violating couplings appear at the electroweak scale, since there is no symmetry (lepton number) to protect them. However, we find that the effect on the neutrino MDMs and EDMs is tiny [27].
• If the slepton and squark mass eigenstates are nearly degenerate, then the neutrino MDMs and EDMs are much smaller than the experimental constraints.
If there are no other CP-violating sources (such as SUSY CP-phases) apart from the CKM phase, one might still expect some transmission of this phase into the EDMs of the neutrinos. Here we prove that there is no such effect. Without loss of generality, we assume that the CP-violating phase appears in the CKM matrix through the down-quark Yukawa couplings. Then, after the redefinition of the fields [28,26], we obtain
$$\lambda'_{ijk} = \tilde\lambda'_{ilm}\,(V_{\mathrm{CKM}})_{mk}\,(V^\dagger_{\mathrm{CKM}})_{jl}, \qquad (28)$$
where λ̃′_{ilm} on the right-hand side is a real coupling, whence
$$\mathrm{Im}\sum_{l,k=1}^{3}\bigl(\lambda'_{ikl}\lambda'_{jlk}-\lambda'_{jkl}\lambda'_{ilk}\bigr) = \mathrm{Im}\sum_{l,k=1}^{3}\bigl(\tilde\lambda'_{ikl}\tilde\lambda'_{jlk}-\tilde\lambda'_{jkl}\tilde\lambda'_{ilk}\bigr) = 0. \qquad (29)$$
Hence there are no neutrino EDMs coming from the CKM-phase contribution. Following precisely the same arguments, we can prove that the neutrino EDMs vanish even if we assume that there is CP violation in the leptonic sector (from the three Majorana phases). We now consider the contributions to the neutrino MDMs. The importance of each term in eq. (26) depends on which are the dominant R-parity violating couplings and also on the degeneracy of the slepton and squark mass eigenstates. Here we shall assess the maximum contribution of the RPV couplings to the neutrino MDMs by taking one of the two slepton/squark mass eigenstates, e.g. m_{ẽ_2}, m_{d̃_2}, to be in the decoupling region (the function f(x, y) goes to zero for large y). Beacom and Vogel [8] have recently shown that for Majorana neutrinos with two flavours the neutrino MDMs are given by
$$\mu^2_{\nu_e} = |\mu_{12}|^2, \qquad (30)$$
for either vacuum or MSW mixing, and the bound obtained from SuperKamiokande solar neutrino data [29] is

$$|\mu_{\nu_e}| \le 1.5\times 10^{-10}\,\mu_B \quad (90\%\ \mathrm{CL}). \qquad (31)$$

By using this bound and assuming that the sleptons and squarks of each generation are almost degenerate, i.e., m_ẽ = m_μ̃ = m_τ̃ and m_d̃ = m_s̃ = m_b̃, we find
$$\Bigl|\, \lambda_{121}\lambda_{212}\bigl[m_e f(m^2_e,m^2_{\tilde e}) - m_\mu f(m^2_\mu,m^2_{\tilde e})\bigr] + \lambda_{131}\lambda_{213}\bigl[m_e f(m^2_e,m^2_{\tilde e}) - m_\tau f(m^2_\tau,m^2_{\tilde e})\bigr] + \lambda_{123}\lambda_{232}\bigl[m_\tau f(m^2_\tau,m^2_{\tilde e}) - m_\mu f(m^2_\mu,m^2_{\tilde e})\bigr] + \bigl(\lambda'_{121}\lambda'_{212} - \lambda'_{221}\lambda'_{112}\bigr)\bigl[m_d f(m^2_d,m^2_{\tilde d}) - m_s f(m^2_s,m^2_{\tilde d})\bigr] + \bigl(\lambda'_{131}\lambda'_{213} - \lambda'_{231}\lambda'_{113}\bigr)\bigl[m_d f(m^2_d,m^2_{\tilde d}) - m_b f(m^2_b,m^2_{\tilde d})\bigr] + \bigl(\lambda'_{132}\lambda'_{223} - \lambda'_{123}\lambda'_{232}\bigr)\bigl[m_s f(m^2_s,m^2_{\tilde d}) - m_b f(m^2_b,m^2_{\tilde d})\bigr] \,\Bigr| \le 10^{-4}. \qquad (32)$$
For m_ẽ = m_d̃ = 100 GeV and one dominant pair of R-parity violating couplings at a time, we obtain the following bounds:
$$\mathrm{Re}(\lambda_{121}\lambda_{212}) < 0.58 \qquad (33)$$
$$\mathrm{Re}(\lambda_{131}\lambda_{213}) < 0.059 \qquad (34)$$
$$\mathrm{Re}(\lambda_{123}\lambda_{232}) < 0.063 \qquad (35)$$
$$\mathrm{Re}(\lambda'_{121}\lambda'_{212}),\ \mathrm{Re}(\lambda'_{221}\lambda'_{112}) < 0.60 \qquad (36)$$
$$\mathrm{Re}(\lambda'_{131}\lambda'_{213}),\ \mathrm{Re}(\lambda'_{231}\lambda'_{113}),\ \mathrm{Re}(\lambda'_{132}\lambda'_{223}),\ \mathrm{Re}(\lambda'_{123}\lambda'_{232}) < 0.030. \qquad (37)$$
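For orientation, the following back-of-the-envelope script (ours, not from the paper) reproduces the order of magnitude of these bounds from eqs. (24), (26), and (31). It assumes the loop function as reconstructed in eq. (24), one decoupled heavy mass eigenstate, and the right-hand side of eq. (32) expressed in GeV^{-1}:

```python
import math

def f(x, y):
    # Loop function of eq. (24); x, y are masses squared in GeV^2.
    return (2.0 + math.log(y / x)) / y

m_e, m_mu, m_tau = 0.000511, 0.10566, 1.77686   # charged-lepton masses, GeV
m_sel2 = 100.0**2                               # slepton mass squared, GeV^2

# Right-hand side of eq. (32): 1.5e-10 mu_B rewritten as a bound in GeV^-1,
# using mu_B = e/(2 m_e) and the e/(64 pi^2) prefactor of eq. (26).
rhs = 1.5e-10 * 64 * math.pi**2 / (2 * m_e)     # ~9e-5 GeV^-1

def coupling_bound(m1, m2):
    # One dominant pair: |lambda*lambda| |m1 f(m1^2) - m2 f(m2^2)| <= rhs.
    return rhs / abs(m1 * f(m1**2, m_sel2) - m2 * f(m2**2, m_sel2))

print(f"{coupling_bound(m_e, m_mu):.2f}")    # ~0.56, cf. eq. (33): 0.58
print(f"{coupling_bound(m_e, m_tau):.3f}")   # ~0.052, cf. eq. (34): 0.059
print(f"{coupling_bound(m_tau, m_mu):.3f}")  # ~0.057, cf. eq. (35): 0.063
```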
If we now compare these bounds with those shown in ref. [26], we find that they are all far more relaxed than the constraints obtained from other processes (in some cases even more relaxed than the individual bounds on the corresponding RPV couplings). Thus we conclude that the contribution of the R-parity violating couplings to the neutrino MDMs is rather small. It is possible in general to start with complex RPV couplings. Assuming no neutrino mixing here, the induced neutrino EDMs are given by [10]
$$d_\nu^2 = \frac{1}{2}\sum_{i,j=1}^{3}|d_{ij}|^2 = |d_{12}|^2 + |d_{13}|^2 + |d_{23}|^2. \qquad (38)$$
By considering the cosmological bound of [11], d_ν = 2.5 × 10^{-22} e cm, and assuming one dominant pair of RPV couplings at a time, we obtain (for m_ẽ = m_d̃ = 100 GeV):
$$\mathrm{Im}(\lambda_{i21}\lambda_{j12}) < 0.05 \qquad (39)$$
$$\mathrm{Im}(\lambda_{i31}\lambda_{j13}) < 0.004 \qquad (40)$$
$$\mathrm{Im}(\lambda_{i23}\lambda_{j32}) < 0.005 \qquad (41)$$
$$\mathrm{Im}(\lambda'_{i21}\lambda'_{j12}) < 0.06 \qquad (42)$$
$$\mathrm{Im}(\lambda'_{i31}\lambda'_{j13}),\ \mathrm{Im}(\lambda'_{i32}\lambda'_{j23}) < 0.0024. \qquad (43)$$
These new EDM bounds conclude our discussion of the neutrino MDMs and EDMs.
Conclusions
We have shown that in the MSSM without R-parity symmetry it is impossible to generate additional electron and neutron EDMs at one loop from the R-parity violating Yukawa couplings. EDMs for the electron and neutron first arise at the two-loop level, and we estimated the contribution of the new two-loop graphs. We find that the resulting constraints are on products of at least four R-parity violating Yukawa couplings:
$$\mathrm{Im}\sum_{ijlmn}\frac{m_l}{m_\tau}\,\lambda_{1mn}\lambda^*_{jln}\lambda^*_{iml}\lambda_{ij1} \;\lesssim\; 10^{-6}, \qquad \mathrm{Im}\sum_{ijlmn}\frac{m_l}{m_t}\,\lambda_{1mn}\lambda^*_{jln}\lambda^*_{iml}\lambda_{ij1} \;\lesssim\; 3\times 10^{-6}. \qquad (44)$$
Conversely we find that (Majorana) neutrino electric and magnetic transition moments are non-zero even at the 1-loop level. Constraints on the R-parity violating couplings were derived using the current bounds on neutrino dipole moments.
Figure 2: The leading two-loop contributions to the electron and neutron EDMs; EDMs from R-parity violation require at least two loops and four R-parity violating couplings.
Figure 3: Diagrams contributing to the neutrino MDM and EDM from the R-parity violating coupling λ_{ijk} L_i L_j Ē_k. The equivalent diagrams with the λ′_{ijk} L_i Q_j D̄_k vertex are obtained by replacing e with d.
Here we assume that the soft SUSY CP-phases are small. Additional contributions to the neutrino EDMs are possible if they are large.
The main contribution to the EDMs comes from the QCD θ-angle. Here we will assume (for alternatives see the discussion in ref. [4]) a Peccei-Quinn symmetry, which is able to set this parameter to zero (albeit at the price of an axion).
We have made use of the approximation m_l ≪ m_l̃ and m_q ≪ m_q̃.
There are no neutrino MDMs in this case. The accelerator bound of [9] is almost five orders of magnitude more relaxed than the cosmological one and does not constrain the RPV couplings at all.
Note added in proof: Whilst in the final stages of preparation, ref. [30] appeared, which draws the same conclusion concerning the electron and neutron EDMs.
W. Bernreuther and M. Suzuki, Rev. Mod. Phys. 63 (1991) 313.
K. Abdullah et al., Phys. Rev. Lett. 65 (1990) 2340;
E. Commins et al., Phys. Rev. A 50 (1994) 2960;
B.E. Sauer, J. Wang and E.A. Hinds, Phys. Rev. Lett. 74 (1995) 1554;
J. Chem. Phys. 105 (1996) 7412.
P.G. Harris et al., Phys. Rev. Lett. 82 (1999) 904.
A. Dedes and M. Pospelov, hep-ph/9912293.
I.B. Khriplovich and S.K. Lamoreaux, "CP Violation Without Strangeness", Springer, 1997.
M. Pospelov, private communication.
L. Wolfenstein, Phys. Lett. B107 (1981) 77;
J. Schechter and J.W.F. Valle, Phys. Rev. D24 (1981) 1883;
B. Kayser, Phys. Rev. D26 (1982) 1662;
R. Shrock, Nucl. Phys. B206 (1982) 359;
J.F. Nieves, Phys. Rev. D26 (1982) 3152.
J.F. Beacom and P. Vogel, hep-ph/9907383.
R. Escribano and E. Masso, Phys. Lett. B395 (1997) 369, hep-ph/9609423.
G.G. Raffelt, Phys. Rev. Lett. 64 (1990) 2856.
J.A. Morgan and D.B. Farrant, Phys. Lett. B128 (1983) 431.
B.W. Lee and R.E. Shrock, Phys. Rev. D16 (1977) 1444;
W.J. Marciano and A.I. Sanda, Phys. Lett. 67B (1977) 303.
For a review, see, e.g.: A. Masiero and L. Silvestrini, lecture given at the 'International School on Subnuclear Physics, 35th Course', Erice, Italy, 1997, hep-ph/9711401.
Y. Kizukuri and N. Oshimo, Phys. Rev. D46 (1992) 3025;
S. Bertolini and F. Vissani, Phys. Lett. B324 (1994) 164, hep-ph/9311293; S.A. Abel, W.N. Cottingham and I.B. Whittingham, Phys. Lett. B370 (1996) 106, hep-ph/9511326;
A. Romanino and A. Strumia, Nucl. Phys. B490 (1997) 3;
T. Ibrahim and P. Nath, Phys. Rev. D57 (1998) 478;
T. Falk and K.A. Olive, Phys. Lett. B439 (1998) 71;
T. Falk, A. Ferstl and K.A. Olive, Phys. Rev. D59 (1999) 055009;
T. Ibrahim and P. Nath, Phys. Rev. D58 (1998) 111301;
M. Brhlik, G.J. Good and G.L. Kane, Phys. Rev. D59 (1999) 115004;
A. Bartl, T. Gajdosik, W. Porod, P. Stockinger and H. Stremnitzer, preprint UWThPh-1998-63, HEPHY-PUB 705, March 1999, hep-ph/9903402; T. Falk, K.A. Olive, M. Pospelov and R. Roiban, hep-ph/9904393; S. Pokorski, J. Rosiek and C.A. Savoy, hep-ph/9906206. Some of the two-loop contributions to the electron and neutron EDMs in the MSSM have been found in: D. Chang, W. Keung and A. Pilaftsis, Phys. Rev. Lett. 82 (1999) 900, hep-ph/9811202; A. Pilaftsis, hep-ph/9909485; D. Chang, W. Chang and W. Keung, hep-ph/9910465; A. Pilaftsis, hep-ph/9912253.
S. Dimopoulos and G. Giudice, Phys. Lett. B357 (1995) 573;
S.A. Abel and J.-M. Frère, Phys. Rev. D55 (1997) 1623;
S.A. Abel, Phys. Lett. B410 (1997) 173;
S. Khalil, T. Kobayashi and A. Masiero, Phys. Rev. D60 (1999) 075003, hep-ph/9903544;
G.C. Branco, F. Cagarrinho and F. Kruger, Phys. Lett. B459 (1999) 224, hep-ph/9904379;
D.A. Demir, Phys. Rev. D60 (1999) 095007, hep-ph/9905571;
S. Khalil and T. Kobayashi, Phys. Lett. B460 (1999) 341, hep-ph/9906374;
M. Brhlik, L. Everett, G.L. Kane and J. Lykken, Phys. Rev. Lett. 83 (1999) 2124, hep-ph/9905215;
M. Brhlik, L. Everett, G.L. Kane, S.F. King and O. Lebedev, hep-ph/9909480; see Y. Nir, hep-ph/9911321.
S.N. Biswas, A. Goyal and J.N. Passi, Phys. Rev. D28 (1983) 671;
T.M. Aliev, Yad. Fiz. 44 (1986) 1043;
J. Liu, Phys. Rev. D35 (1987) 3447; K.L. Ng, Z. Phys. C48 (1990) 289.
For a review on R-parity violation see, e.g., H. Dreiner, hep-ph/9707435;
G. Bhattacharyya, Nucl. Phys. Proc. Suppl. 52A (1997) 83, hep-ph/9608415; ibid., hep-ph/9709395.
M. Frank and H. Hamidian, J. Phys. G24 (1998) 2203, hep-ph/9706510.
K.S. Babu and R.N. Mohapatra, Phys. Rev. Lett. 64 (1990) 1705.
L.B. Okun, M.B. Voloshin and M.I. Vysotsky, Sov. J. Nucl. Phys. 44 (1986) 440;
R. Cisneros, Astrophys. Space Sci. 10 (1971) 87.
R. Barbieri, M.M. Guzzo, A. Masiero and D. Tommasini, Phys. Lett. B252 (1990) 251.
G. Bhattacharyya, H.V. Klapdor-Kleingrothaus and H. Pas, Phys. Lett. B463 (1999) 77, hep-ph/9907432.
J.A. Grifols and J. Sola, Phys. Lett. B189 (1987) 63.
P.A. Baikov and D.J. Broadhurst, New Computing Techniques in Physics Research IV (1995) 167.
L.V. Avdeev, J. Fleischer, M.Yu. Kalmykov and M.N. Tentyukov, Comput. Phys. Commun. 107 (1997) 155.
B.C. Allanach, A. Dedes and H.K. Dreiner, Phys. Rev. D60 (1999) 075014, hep-ph/9906209.
For this we have made use of the computations described in B.C. Allanach, A. Dedes and H.K. Dreiner, Phys. Rev. D60 (1999) 056002, hep-ph/9902251.
K. Agashe and M. Graesser, Phys. Rev. D54 (1996) 4445, hep-ph/9510439.
SuperKamiokande Collaboration, Y. Fukuda et al., Phys. Rev. Lett. 81 (1998) 1562.
R.M. Godbole, S. Pakvasa, S.D. Rindani and X. Tata, hep-ph/9912315.
| [] |
[
"Flood risk map from hydrological and mobility data: a case study in São Paulo (Brazil)",
"Flood risk map from hydrological and mobility data: a case study in São Paulo (Brazil)",
"Flood risk map from hydrological and mobility data: a case study in São Paulo (Brazil)",
"Flood risk map from hydrological and mobility data: a case study in São Paulo (Brazil)"
] | [
"Lívia Rodrigues Tomás \nNational Center for Monitoring and Early Warning of Natural Disasters (Cemaden)\n12245-230São José dos Campos, São PauloBrazil\n",
"| Giovanni ",
"Guarnieri Soares \nNational Institute for Space Research (INPE)\n12227-010São José dos Campos, São PauloBrazil\n",
"| Aurelienne ",
"A S Jorge \nNational Institute for Space Research (INPE)\n12227-010São José dos Campos, São PauloBrazil\n",
"| Jeferson ",
"Feitosa Mendes \nDepartment of Environmental Engineering\nSão Paulo State University\n12245-00São PauloBrazil\n",
"| Vander ",
"L S Freitas \nDepartment of Computing\nFederal University of Ouro Preto\nOuro Preto\n\nMinas Gerais\n35400-000Brazil\n\nNational Center for Monitoring and Early Warning of Natural Disasters (Cemaden)\nCorrespondence Lívia Rodrigues Tomás\nBrazil\n",
"Leonardo B L Santos \nNational Center for Monitoring and Early Warning of Natural Disasters (Cemaden)\n12245-230São José dos Campos, São PauloBrazil\n\nNational Institute for Space Research (INPE)\n12227-010São José dos Campos, São PauloBrazil\n",
"Lívia Rodrigues Tomás \nNational Center for Monitoring and Early Warning of Natural Disasters (Cemaden)\n12245-230São José dos Campos, São PauloBrazil\n",
"| Giovanni ",
"Guarnieri Soares \nNational Institute for Space Research (INPE)\n12227-010São José dos Campos, São PauloBrazil\n",
"| Aurelienne ",
"A S Jorge \nNational Institute for Space Research (INPE)\n12227-010São José dos Campos, São PauloBrazil\n",
"| Jeferson ",
"Feitosa Mendes \nDepartment of Environmental Engineering\nSão Paulo State University\n12245-00São PauloBrazil\n",
"| Vander ",
"L S Freitas \nDepartment of Computing\nFederal University of Ouro Preto\nOuro Preto\n\nMinas Gerais\n35400-000Brazil\n\nNational Center for Monitoring and Early Warning of Natural Disasters (Cemaden)\nCorrespondence Lívia Rodrigues Tomás\nBrazil\n",
"Leonardo B L Santos \nNational Center for Monitoring and Early Warning of Natural Disasters (Cemaden)\n12245-230São José dos Campos, São PauloBrazil\n\nNational Institute for Space Research (INPE)\n12227-010São José dos Campos, São PauloBrazil\n"
] | [
"National Center for Monitoring and Early Warning of Natural Disasters (Cemaden)\n12245-230São José dos Campos, São PauloBrazil",
"National Institute for Space Research (INPE)\n12227-010São José dos Campos, São PauloBrazil",
"National Institute for Space Research (INPE)\n12227-010São José dos Campos, São PauloBrazil",
"Department of Environmental Engineering\nSão Paulo State University\n12245-00São PauloBrazil",
"Department of Computing\nFederal University of Ouro Preto\nOuro Preto",
"Minas Gerais\n35400-000Brazil",
"National Center for Monitoring and Early Warning of Natural Disasters (Cemaden)\nCorrespondence Lívia Rodrigues Tomás\nBrazil",
"National Center for Monitoring and Early Warning of Natural Disasters (Cemaden)\n12245-230São José dos Campos, São PauloBrazil",
"National Institute for Space Research (INPE)\n12227-010São José dos Campos, São PauloBrazil",
"National Center for Monitoring and Early Warning of Natural Disasters (Cemaden)\n12245-230São José dos Campos, São PauloBrazil",
"National Institute for Space Research (INPE)\n12227-010São José dos Campos, São PauloBrazil",
"National Institute for Space Research (INPE)\n12227-010São José dos Campos, São PauloBrazil",
"Department of Environmental Engineering\nSão Paulo State University\n12245-00São PauloBrazil",
"Department of Computing\nFederal University of Ouro Preto\nOuro Preto",
"Minas Gerais\n35400-000Brazil",
"National Center for Monitoring and Early Warning of Natural Disasters (Cemaden)\nCorrespondence Lívia Rodrigues Tomás\nBrazil",
"National Center for Monitoring and Early Warning of Natural Disasters (Cemaden)\n12245-230São José dos Campos, São PauloBrazil",
"National Institute for Space Research (INPE)\n12227-010São José dos Campos, São PauloBrazil"
] | [] | Cities increasingly face flood risk primarily due to extensive changes of the natural land cover to built-up areas with impervious surfaces. In urban areas, flood impacts come mainly from road interruption. This paper proposes an urban flood risk map from hydrological and mobility data, considering the megacity of São Paulo, Brazil, as a case study. We estimate the flood susceptibility through the Height Above the Nearest Drainage algorithm; and the potential impact through the exposure and vulnerability components. We aggregate all variables into a regular grid and then classify the cells of each component into three classes: Moderate, High, and Very High. All components, except the flood susceptibility, have few cells in the Very High class. The flood susceptibility component reflects the presence of watercourses, and it has a strong influence on the location of those cells classified as Very High. K E Y W O R D S flood risk, exposure, vulnerability, urban mobility, road interruption * Equally contributing authors. | 10.1111/tgis.12962 | [
"https://export.arxiv.org/pdf/2204.05982v1.pdf"
] | 248,119,052 | 2204.05982 | 4e75c85ad0fa8878a2f188306ade527026ac2e32 |
Flood risk map from hydrological and mobility data: a case study in São Paulo (Brazil)
Lívia Rodrigues Tomás
National Center for Monitoring and Early Warning of Natural Disasters (Cemaden)
12245-230São José dos Campos, São PauloBrazil
| Giovanni
Guarnieri Soares
National Institute for Space Research (INPE)
12227-010São José dos Campos, São PauloBrazil
| Aurelienne
A S Jorge
National Institute for Space Research (INPE)
12227-010São José dos Campos, São PauloBrazil
| Jeferson
Feitosa Mendes
Department of Environmental Engineering
São Paulo State University
12245-00São PauloBrazil
| Vander
L S Freitas
Department of Computing
Federal University of Ouro Preto
Ouro Preto
Minas Gerais
35400-000Brazil
National Center for Monitoring and Early Warning of Natural Disasters (Cemaden)
Correspondence Lívia Rodrigues Tomás
Brazil
Leonardo B L Santos
National Center for Monitoring and Early Warning of Natural Disasters (Cemaden)
12245-230São José dos Campos, São PauloBrazil
National Institute for Space Research (INPE)
12227-010São José dos Campos, São PauloBrazil
Flood risk map from hydrological and mobility data: a case study in São Paulo (Brazil)
Cities increasingly face flood risk primarily due to extensive changes of the natural land cover to built-up areas with impervious surfaces. In urban areas, flood impacts come mainly from road interruption. This paper proposes an urban flood risk map from hydrological and mobility data, considering the megacity of São Paulo, Brazil, as a case study. We estimate the flood susceptibility through the Height Above the Nearest Drainage algorithm; and the potential impact through the exposure and vulnerability components. We aggregate all variables into a regular grid and then classify the cells of each component into three classes: Moderate, High, and Very High. All components, except the flood susceptibility, have few cells in the Very High class. The flood susceptibility component reflects the presence of watercourses, and it has a strong influence on the location of those cells classified as Very High.
KEYWORDS: flood risk, exposure, vulnerability, urban mobility, road interruption
* Equally contributing authors.
| INTRODUCTION
Cities increasingly face a variety of hazards primarily due to extensive changes of the natural land cover to built-up areas with impervious surfaces, along with climate change and inadequate management [1,2,3,4]. The potential for a hazard to become a disaster depends on the degree of exposure of a population and its physical or economic assets, coupled with their respective vulnerability [5].
Floods have severely impacted many cities, being one of the most frequent and damaging natural hazards worldwide [6]. Floods affect millions of people around the globe and result in significant disruption to the built environment of a community [7,8]. In Brazil, sudden and gradual floods account for 50% of the disaster occurrences recorded in recent years [9].
In the municipality of São Paulo, Brazil, the largest city in Latin America, almost 12.4 million people live in 1,521 km² [10], and there are 8.7 million registered vehicles, corresponding to 8.17% of all vehicles in Brazil [11]. On average, a São Paulo inhabitant spends 2 hours and 25 minutes on daily commutes [12]. Vale [13] estimates that the opportunity costs arising from urban immobility can reach 1.4 billion dollars per year in São Paulo.
This megacity is experiencing some consequences of climate change: more frequent heavy rains and floods, higher temperatures, and decreased air humidity [5]. In addition to economic losses, damage caused by floods impairs the mobility of people who live in or pass through flooded areas. In São Paulo, several factors related to the hydrographic basin, local catchments, topography, and land use and occupation, for instance, cause floods [14,15].
According to data from the Traffic Engineering Company (CET), the city averaged 77 and 85 kilometers of traffic jams in the morning and afternoon peak hours, respectively, in 2019. Massive traffic events can result from the increased number of vehicles, unplanned urbanization, accidents on the roads, heavy precipitation, and flooding. Heavy precipitation usually results in slower and more dangerous traffic conditions, since it causes poor visibility and changes road friction [16,15].
For most cities in developing countries, collecting reliable and timely data is a challenging task, especially for the construction of risk scenarios from a multidisciplinary perspective. Integrating knowledge from multiple fields better reflects the geographic environment in which various natural processes, human activities, and information interactions exist. In this sense, methodologies combining hydrology, mobility, and Geographic Information Systems (GIS) have proved effective in many case studies for modeling complex urban systems [1]. Furthermore, GIS are of great help to analyze data and produce meaningful information for urban management and resilience planning. This paper presents an urban flood risk map combining elements from natural geographies, such as hydrological indexes, and elements from human geographies, such as urban mobility data, in both cases using different sources and geographic units. To estimate the exposure (E) component, we consider the resident population and the population that works and studies within the study area. The vulnerability (V) component takes into account the local vulnerability (LV) and the network vulnerability (NV) of the road network. The flood susceptibility (F S) is estimated by the HAND (Height Above the Nearest Drainage) algorithm.
| THEORETICAL BACKGROUND
We can use the relationship between hazard and vulnerability to discuss risk scenarios. Hazard refers to a potentially harmful natural process or phenomenon occurring in a given location and within a specified period. Vulnerability is the set of processes and conditions resulting from physical, social, economic, and environmental factors, which increases the susceptibility of a community (element at risk) to the impact of hazards. Vulnerability comprises both physical aspects (resistance of buildings and infrastructure protections) and human factors, such as economic, social, political, technical, cultural, educational, and institutional [17,18].
The United Nations Office for Disaster Risk Reduction [17] defines disaster risk as the potential loss of life, injury, or destroyed or damaged assets that could occur to a system, society, or community in a specific period, determined stochastically as a function of hazard, exposure, vulnerability, and capacity.
Other variables can be included in risk analysis such as number of deaths, number of people affected as a result of a disaster, demographic density (inhabitant/km²), poverty index, elderly population, municipal human development index, number of events per year, total resident population and municipality area [19].
According to Kron [20], the scientific community widely agrees that risk is the product of a hazard and its consequences. There is no risk in a region where there are no people or values that can be affected by a natural phenomenon. Therefore, the risk is always present in urbanized areas with people, constructions, and road infrastructure.
Three components determine the risk [21]:
• the hazard: the threatening natural event including its probability of occurrence;
• the exposure: the values/humans that are present at the location involved;
• the vulnerability: the lack of resistance to damaging/destructive forces.
In the case of the hazard being flooding, this is a hydrological phenomenon caused by surface runoff exceeding the capacity of urban drainage systems, forming accumulations of water in impermeable areas [22]; it is usually associated with rain. According to Walesh [23], there are two main types of floods caused by rainfall. The first is a large amount of rainfall occurring at a relatively low intensity, over a long period, and over a large area. The second is caused by high-intensity thunderstorms occurring over small regions.
According to Kron [20], the three main types of flooding are storm surge, river flood, and flash flood. Flash floods occur when water quickly sweeps over an area, which is challenging to deal with, and it is not easy to predict the amount of rain expected within a given area over a short period [24].
When floods occur in urban areas during heavy rainfall, causing momentary accumulation of rainwater in certain places due to a deficiency in the drainage system, it is called Surface Water Flooding (SWF). It includes pluvial flooding, sewer flooding, flooding from small open-channel and culverted urban watercourses, and overland flows from groundwater springs. The predominant cause of SWF is short-duration intense rainfall, occurring locally [25].
The impacts of floods occur at different spatiotemporal scales: from intra-urban mobility to inter-urban, in a period that can vary from hours to months or even years, in some cases [24,26]. Regarding SWF, the impacts on daily activities come mainly from infrastructure disruption, such as inundated roads that block people's daily routes. People often stand up to high risk when commuting in bad weather since they cannot cancel regular trips [27].
The characteristics of urban drainage hinder the management of SWF. Land use has a significant influence on surface water behavior, wherein the presence of built-up areas raises the volume of surface water runoff [28,29].
Several researchers have analyzed the interactions between elements of a risk framework in recent years. Beden and Keskin [30] produced a flood map and evaluated the flood risks in case of insufficient flow data in Turkey;
Kazmierczak and Cavan [28] estimated the SWF risk, encompassing hazard, vulnerability, and exposure, to urban communities in Greater Manchester, UK. Ramkar and Yadav [31] created a flood risk index in data-scarce river basins using the Analytical Hierarchy Process and a GIS approach. Zokagoa et al. [32] mapped the flood risk using uncertainty propagation analysis on a peak discharge in Quebec.
Beden and Keskin [30] produced a hydraulic model and flood map of the Ceviz Stream in Turkey, where many people live and work and where there are a large number of private properties and public infrastructure. The authors used the MIKE FLOOD software to develop the hydraulic model. The modeling phase generated water depth, velocity, and flood extent data, used as parameters in the flood risk assessment. Kazmierczak and Cavan [28] explored the spatial distribution of SWF, the vulnerability of communities to flooding, and the characteristics of the physical environment and land use that affect people's exposure to flooding. They used four indicators for the vulnerability of people to flooding and analyzed the presence and spatial distribution of SWF areas, land use types, green cover, and housing to perform a spatial association between hazard, vulnerability, and exposure. Their results indicate that some of the most vulnerable people are at high risk of flooding due to the socioeconomic characteristics of the population, the spatial distribution of the hazard, and the land use and housing types present in the area. Ramkar and Yadav [31] developed a flood risk index map using an integrated approach of geospatial techniques and a multiple-criteria decision-making technique. The flood risk index was calculated by integrating the flood hazard and vulnerability maps: the flood hazard map considered slope, distance from the main river, land use, land cover, soil, drainage density, and rainfall, while the vulnerability index took into account population density, crop production, and the density of road-river intersections; the flood risk map results from the multiplication of the two. Zokagoa et al. [32] produced a probabilistic flood map using uncertainty propagation in real flood events in Quebec. They adopted a Monte Carlo method to generate an ensemble of results from different combinations of uncertain input parameters; the weighted average of the ensemble results is then used to derive a probabilistic floodplain map. Other authors estimated a single component of the flood risk, such as Liu et al. [27], who estimated the exposure, and Praharaj et al. [33], Kasmalkar et al. [34], and Lu et al. [35], who estimated the impact of flooding on transport infrastructure.
Liu et al. [27] proposed an approach to estimate flood-affected populations by combining mobility patterns with multi-source data. They used the Gravity model and Monte Carlo simulation, together with points of interest and building footprint data, to model automobile commute patterns. The locations of impassable roads were retrieved from social media data with real-time inundation records and from flood hazard maps with inundation depths. Finally, they estimated the affected population using the difference in commute time between no-flood and flood conditions. Praharaj et al. [33] quantified the transportation impacts of recurring flooding using a predictive model. Traffic volumes and flood incidents were estimated through a combination of agency-provided and crowdsourced data; the hydrological data include rain and tidal gauge records. The authors concluded that the impact of recurring flooding events on transportation networks is local; thus, they do not recommend a citywide or regional analysis, due to the heterogeneous effects of flooding across various links.
Kasmalkar et al. [34] integrated a traffic model with flood maps to simulate regional traffic patterns in the San Francisco Bay Area in the presence of coastal flooding. They used a road network model, origin-destination commuter data for weekday morning commutes, number of employees who reside and work in a given census block, and flood maps. Their model scale is associated with census blocks that range from 100 m to 10 km in length. Traffic analyses were conducted on zones that range from 1 to 50 km in length. Their model highlights traffic flow disruption caused by flooding.
Lu et al. [35] proposed a road prioritization methodology based on a location-based accessibility index. This index measures the network-wide performance before and after transportation network interdiction and quantifies the degree of network degradation. The methodology is applied to a road network threatened by flood risk from storm surge, sea-level rise, and intense precipitation. The results show that some infrastructure is critical to adjacent areas, while some becomes important to a broader region. The references cited in this section demonstrate the diversity of existing methods for estimating flood risk and its associated components. There are stochastic and deterministic methods, methods that use pre-existing data for one or more components, and methods that estimate all components.
There is no single method that is better for all cases; rather, the best method is the one better adapted to the characteristics of the study area. Building upon the literature review, we present the material and methods adopted in this work.
| MATERIAL AND METHODS
| Study Area and Datasets
The study area comprises 115.3 km² of the Tamanduateí river basin within the municipality of São Paulo, the capital of the state of the same name and the leading financial center of Brazil (Figure 1). More than 80% of the Tamanduateí riverbed is impermeable, and floods along the riverbanks are constant [36]. This basin presents a high number of extreme rainfall events in the city [37]. We use data from different sources and geographic topologies; therefore, we chose the Brazilian statistical grid [39] to integrate and analyze the data. Geographically and socially speaking, this unit is arbitrary: it has no meaning that can be transported to the real world, as it does not consider the distribution of any underlying process or phenomenon [40]. However, the cells perfectly serve the purpose of a receptacle, remaining stable over time, presenting a regular and simple shape, with small enough dimensions to act as bricks in the construction of any desired geographic outline [41,42], in addition to meeting the demands of data dissemination for small areas and integrating incompatible geometries or different administrative boundaries.
FIGURE 2: Study area districts and the top 6 roads in flood frequency in 2019.
The Brazilian statistical grid [39] has cells with dimensions of 1 km x 1 km in rural areas and 200 m x 200 m in urban areas. Except for one cell, the entire study area is urban. In this work, we use the attribute "resident population" by cell and the geometry of the grid. We use two vector data to select the cells from the grid that form the study area: São Paulo municipality boundary and Tamanduateí river Basin Boundary. Their intersection gives 3,087 cells from which we exploit the following data:
• Resident population by cell from 2010 Brazilian census [39];
• Geographic coordinates of Workplaces and the number of people who work at each location from Origin Destination Research (OD Database) [43];
• Geographic coordinates of Educational institutions and the number of people who study at each location from OD Database [43];
• Flood registries in 2019 obtained from Emergency Management Center of the city of São Paulo -CGE website [44];
• Road system vector data from Center for Metropolitan Studies -CEM [45];
• SRTM Digital Elevation Model -DEM [46].
In the first step of the methodology, we perform an exploratory analysis to better understand the characteristics of the datasets and the relationships between the analyzed variables and then define the best conceptual model. Thereafter, we estimate the F S and potential impact (P I ) components to calculate the urban flood risk (R ). The P I comes from the E and V components.
| Flood Susceptibility
Topography is a hydrologic driver, defining the speed and direction of flows, which in turn define the hydrological relations between different points within a basin. Considering a DEM represented by a grid, the simplest and most widely used method to determine flow directions is the D8 (eight flow directions) method [47]. The flow from each grid point goes to the steepest downward slope within the set of eight possible directions around it. Based on this approach, it is possible to determine a synthetic drainage network under a drainage threshold: a cell is considered part of the drainage network if the upstream area associated with it is equal to or greater than the threshold. Details in [48,49].
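To make the D8 rule concrete, here is a minimal sketch (ours, not the TerraHidro implementation) that assigns each interior cell of a toy DEM the direction of its steepest downward neighbor:

```python
import numpy as np

# The 8 neighbor offsets (D8) and their distances (diagonals are sqrt(2) away).
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
DISTANCES = [2**0.5, 1.0, 2**0.5, 1.0, 1.0, 2**0.5, 1.0, 2**0.5]

def d8_flow_directions(dem):
    """Return, for each interior cell, the index (0-7) of the steepest
    downward D8 neighbor, or -1 for pits (no lower neighbor)."""
    rows, cols = dem.shape
    direction = np.full((rows, cols), -1, dtype=int)
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            slopes = [(dem[i, j] - dem[i + di, j + dj]) / dist
                      for (di, dj), dist in zip(OFFSETS, DISTANCES)]
            steepest = int(np.argmax(slopes))
            if slopes[steepest] > 0:        # only strictly downward flow
                direction[i, j] = steepest
    return direction

dem = np.array([[9, 8, 7], [8, 5, 4], [7, 4, 1]], dtype=float)
print(d8_flow_directions(dem))  # the center cell drains to its SE neighbor
```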
Assuming that all points belong to a flow path and that all flow paths are associated with respective drainage points, it is possible to define the Height Above the Nearest Drainage (HAND) of any given point. Therefore, the HAND map is a robust and versatile digital terrain model normalized by the drainage network. Low HAND's values represent points with altimetry close to the altimetry of the nearest drainage element. Thus the lower the HAND, the more significant the flood probability of occurrence [48,49]. We use this well-established concept to create the classes of flood susceptibility.
We use the SRTM DEM [46] as input to the TerraHidro plugin of the TerraView software [50], with a drainage threshold of 5,000 cells, to obtain the HAND raster with a spatial resolution of 30 meters. Since the HAND raster has a different geometry and spatial resolution from the grid we use to aggregate the variables, each grid cell receives the minimum HAND value among those that intersect it. The HAND values (in meters) aggregated in the grid cells are then classified, as detailed in Section 3.5, to obtain the F S classes.
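A minimal sketch of this per-cell aggregation, assuming the rasterstats package and hypothetical file names (grid.gpkg, hand.tif):

```python
import geopandas as gpd
from rasterstats import zonal_stats

# Each grid cell receives the minimum HAND value among the intersecting
# pixels, as described in the text; file names are illustrative only.
stats = zonal_stats("grid.gpkg", "hand.tif", stats=["min"])

grid = gpd.read_file("grid.gpkg")
grid["hand_min"] = [s["min"] for s in stats]
grid.to_file("grid_hand.gpkg", driver="GPKG")
```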
| Exposure
Damage caused by disasters impairs the mobility and accessibility of people living in or moving through the affected areas. Considering these two situations, we use the resident population, initially aggregated by grid cell, and the number of people who work or study in the study area (obtained from OD Database), disregarding people who work from home, to represent people moving through the study area. People's main reasons for moving are work and study, justifying our choice.
The São Paulo Subway Company conducted the OD Research in 2017, which encompasses all 39 municipalities of the metropolitan region. The OD Database is available on the São Paulo Subway Company website [43]. The geographic coordinates of workplaces and study institutions and the number of people who commutes to each point are available in the database. Using these fields, we create a new vector data containing points representing workplaces and study institutions with the attribute "number of people that commutes to each point".
The commuting pattern results in a particular picture of exposure, with variation in space and time. The consequences of a disaster vary in size, if it occurs on a typical workday, a weekend, or holiday, for example. On a typical working day, roads have higher traffic, streets are much more crowded, and people are at schools or work instead of being home. Thus, people's exposure is directly related to the time a disaster occurs, influencing the degree of exposure. Legeard, as cited in Veyret [51], argued that, once this dynamic nature of mobility processes is well known, it is possible to produce exposure maps by time slots:
• Daily exposure: Commercial period, except for peak hours.
• Peak exposure: Period of active movement in transportation net (roads, collective transportation, stations).
• Night exposure: When the population is concentrated in residential areas.
Then, we aggregate the number of commuters, from point vector data, into the regular grid. Each grid cell then holds the sum of commuters and resident population, which constitutes the exposed population.
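A sketch of this aggregation with geopandas; the file and column names are hypothetical:

```python
import geopandas as gpd

grid = gpd.read_file("grid.gpkg")            # has a "resident_pop" column
points = gpd.read_file("od_points.gpkg")     # workplaces and study institutions
points = points.to_crs(grid.crs)             # both layers must share a CRS

# Spatial join: each OD point inherits the index of the cell containing it.
joined = gpd.sjoin(points, grid, how="inner", predicate="within")
commuters = joined.groupby("index_right")["commuters"].sum()

# Exposed population = residents + people who work or study in the cell.
grid["exposed_pop"] = grid["resident_pop"] + commuters.reindex(grid.index).fillna(0)
```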
| Vulnerability
Data on extreme rainfall events in 2019, obtained from the CGE, were used to estimate LV. We chose 2019 because it was the year before the beginning of mobility restrictions due to SARS-CoV-2 in Brazil.
The start and end times of road blocking and the address (street name and intersection) where flooding occurs are available on the CGE website [44]. We tabulated this information and extracted the Universal Transverse Mercator (UTM) coordinates for each point, using vector data of the São Paulo municipality's street network. Then, we created vector data with 1,165 points representing traffic blocks due to floods. Ultimately, we intersected the points with the regular grid to select the occurrences in the study area.
There were 1,165 traffic blocks due to floods in the municipality of São Paulo in 2019 [44], of which 280 were inside the study area. These points correspond to locations where streets needed to be blocked due to extreme rainfall events. We sum the street-block durations in each cell to find the LV. Figure 3 illustrates a street block caused by an extreme rainfall event on the Tamanduateí River, in the city of São Paulo, in March 2019.
FIGURE 3: Street block caused by an extreme rainfall event on the Tamanduateí River in March 2019 [52].
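Computationally, LV reduces to summing road-block durations per grid cell; a sketch under hypothetical file and column names:

```python
import geopandas as gpd
import pandas as pd

blocks = gpd.read_file("flood_blocks.gpkg")  # 280 points inside the study area
blocks["duration_h"] = (
    pd.to_datetime(blocks["end_time"]) - pd.to_datetime(blocks["start_time"])
).dt.total_seconds() / 3600.0

grid = gpd.read_file("grid.gpkg")
joined = gpd.sjoin(blocks, grid, how="inner", predicate="within")

# LV: total hours of street blocking accumulated in each cell during 2019.
lv = joined.groupby("index_right")["duration_h"].sum()
grid["LV"] = lv.reindex(grid.index).fillna(0.0)
```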
The second variable used to calculate V is the NV, which depends on the road system vector data [45]; this file has 23,702 lines representing the roads. In addition to geometry, it has alphanumeric attributes, such as the road hierarchy (or road function classification), used in the exploratory analysis of Section 4.1.
Roads serve as an essential means for people to travel from one place to another, connecting regions and neighborhoods and giving access to properties. The characteristics of the road system can be investigated through a complex network, a structure in which connections (edges) link pairs of elements (nodes), forming a graph with particular topological properties [53].
We use the Gis4Graph tool [54] to build a geographical graph from the road system. The adopted methodology is based on the concept of a (geo)graph, a graph whose vertices are attached to geographical locations and whose edges have an intrinsic spatial dependence [55]. The process consists of identifying the intersections between the lines. As a result, each road becomes a node, and each intersection defines an edge in our graph. Furthermore, the tool produces vector data with the resulting network, making it possible to analyze it in a GIS environment [54]. The (geo)graph approach has been applied in a variety of different domains [55,53,56,57,58].
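The construction can be sketched as follows, with every road (polyline) becoming a node and two roads linked when their geometries intersect. This is a simplified stand-in for Gis4Graph, not the tool itself, and it assumes shapely >= 2.0 and a default integer index:

```python
import geopandas as gpd
import networkx as nx
from shapely.strtree import STRtree

roads = gpd.read_file("roads.gpkg")          # one (multi)line geometry per road

g = nx.Graph()
g.add_nodes_from(roads.index)

# Spatial index keeps the pairwise intersection test close to linear time.
tree = STRtree(roads.geometry.values)
for i, geom in roads.geometry.items():
    for j in tree.query(geom, predicate="intersects"):
        if i < j:                            # skip self-matches and duplicates
            g.add_edge(i, int(j))

print(g.number_of_nodes(), g.number_of_edges())
```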
The role of a road system is to provide adequate fluidity for the travel demand. The operation of the road system is subject to hazards that can negatively affect its level of service; such hazards can be related, for example, to meteorological events that affect vehicular traffic. The analysis of vulnerability in a road system can be understood as an assessment of the behavior of the road network when it suffers interference from adverse (unexpected or undesirable) events in the elements that comprise it [59].
In this study, we estimate the topological vulnerability of the (geo)graph to understand how the network behaves in the absence of a road, for whatever reason, using the igraph module for Python [60]. The method consists of evaluating performance changes in the network when disconnecting nodes [61]. We calculate the NV to find important links and components in the (geo)graph. We use the network's global efficiency as the performance measure, where the efficiency between two nodes is the inverse of their shortest-path distance [62]. Thus, we start by calculating the network's global efficiency as the baseline performance. Then, we go node by node, disconnect it, and recalculate the global efficiency for each case.
We calculate the node vulnerability for each node, independently of what happens in other nodes, as follows
$$nV(i) = \frac{E(G) - E(G_i)}{E(G)}, \qquad (1)$$
where nV(i) is the node vulnerability, E(G) is the original global efficiency, and E(G_i) is the global efficiency of G after the disconnection of node i.
Since our network has about 20,000 nodes and 50,000 edges, calculating the shortest paths after removing each node is a resource- and time-demanding task, which we solve via High-Performance Computing, using Python's multiprocessing module. The computing architecture we use consists of two AMD Ryzen Threadripper 3960X 24-core processors, 128 GB of RAM, and an NVIDIA Titan Xp.
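The procedure can be sketched with python-igraph as below; this is our reconstruction, not the authors' exact code (which is available in their repository [64]), and distances() may be called shortest_paths() in older igraph versions:

```python
import igraph as ig
import numpy as np

def global_efficiency(g):
    """Average of 1/d(u, v) over ordered node pairs (0 for unreachable)."""
    d = np.asarray(g.distances(), dtype=float)   # shortest-path length matrix
    np.fill_diagonal(d, np.inf)                  # exclude u == v pairs
    inv = np.where(np.isfinite(d), 1.0 / d, 0.0) # 1/inf -> 0 for unreachable
    n = g.vcount()
    return inv.sum() / (n * (n - 1))

def node_vulnerability(g, v, base_eff):
    """Eq. (1): relative efficiency drop when node v is disconnected."""
    h = g.copy()
    h.delete_edges(h.incident(v))                # disconnect v, keep n fixed
    return (base_eff - global_efficiency(h)) / base_eff

g = ig.Graph.Erdos_Renyi(n=50, p=0.1)            # toy stand-in for the road graph
e0 = global_efficiency(g)
nv = [node_vulnerability(g, v, e0) for v in range(g.vcount())]
print(max(nv))
```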
Using this method, we obtain a line vector file with node vulnerabilities ranging between 0 and 1; a lower value indicates a node that is less relevant to the network's robustness. The set of node vulnerabilities forms the NV. Each grid cell receives the maximum node vulnerability among the lines that intersect it, considering their spatial location. The aggregation follows
$$NV(i) = \max\{\, nV(j) : j \in C_i \,\}, \qquad (2)$$
where NV(i) is the network vulnerability, nV(j) is the node vulnerability, i is the cell index, and C_i is the set of streets that intersect cell i.
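The aggregation of eq. (2) is again a spatial join, now keeping the maximum per cell (hypothetical file and column names):

```python
import geopandas as gpd

roads = gpd.read_file("roads_nv.gpkg")  # line layer with a "node_vuln" column
grid = gpd.read_file("grid.gpkg")

# Eq. (2): each cell keeps the largest vulnerability among intersecting streets.
joined = gpd.sjoin(roads, grid, how="inner", predicate="intersects")
nv = joined.groupby("index_right")["node_vuln"].max()
grid["NV"] = nv.reindex(grid.index).fillna(0.0)
```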
The combination of the LV and NV to form the V component is detailed in the next section.
| Component Classification
After we aggregate the E , LV , NV , and F S components into grid cells, we classify the cells of each component into three classes: Moderate, High, and Very High. We use the Jenks Natural Breaks algorithm [63] to define the thresholds of the classes (except for the LV ). They are based on natural groupings inherent in the data. The algorithm identifies breakpoints by minimizing the variances within each class. In this way, similar points are grouped.
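A sketch of this classification step, assuming the jenkspy package (whose jenks_breaks signature has varied across versions):

```python
import numpy as np
import jenkspy

def classify_three(values):
    """Split values into Moderate / High / Very High via Jenks natural breaks."""
    breaks = jenkspy.jenks_breaks(list(values), n_classes=3)  # [min, b1, b2, max]
    labels = np.full(len(values), "Moderate", dtype=object)
    labels[values > breaks[1]] = "High"
    labels[values > breaks[2]] = "Very High"
    return labels

# Toy stand-in for one component; note that for F S the ordering is inverted
# (a LOWER HAND value means a HIGHER susceptibility class).
exposure = np.random.lognormal(mean=5.0, sigma=1.0, size=3087)
print(dict(zip(*np.unique(classify_three(exposure), return_counts=True))))
```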
The distribution of the LV is very skewed: 96%, or 2,971 of the cells, have no flooding occurrence and therefore no duration. So, we classify all cells with a value of less than one hour as Moderate. The remaining cells were manually divided by observing the frequency graph (Figure 12).
The city of São Paulo is highly urbanized and dense. The city strongly attracts trips from neighboring cities, and its transport network carries high volumes of vehicles and people daily. According to the OD Survey [43], the number of trips exceeds 25 million on a typical day. Given this scenario, we propose that the risk classification begins at the Moderate level, because it already warrants attention and monitoring.
First, we cross the LV with NV to find V classes, according to Table 1. Next, we combine V with E classes to find the P I . Finally, the R comes from the crossing between P I and F S . The results of each combination are presented in Section 4.
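Since Table 1 itself is not reproduced in this extraction, the sketch below uses a generic lookup matrix; taking the higher of the two classes is a plausible placeholder, and the paper's actual rule may differ:

```python
import numpy as np

CLASSES = ["Moderate", "High", "Very High"]
LEVEL = {c: i for i, c in enumerate(CLASSES)}

# Placeholder combination rule: the crossed class is the maximum of the two.
# Replace this 3x3 matrix with the actual Table 1 to reproduce the paper.
CROSS = np.maximum.outer(np.arange(3), np.arange(3))

def cross(a, b):
    return CLASSES[CROSS[LEVEL[a], LEVEL[b]]]

v = cross("High", "Moderate")   # V from crossing LV with NV
pi = cross(v, "Very High")      # PI from crossing V with E
r = cross(pi, "High")           # R from crossing PI with FS
print(v, pi, r)                 # High, Very High, Very High
```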
| RESULTS AND DISCUSSION
| Exploratory Analysis
Here are the results of the exploratory analysis of the data before aggregation into the grid. These results guide the method decision and give us an overview of the variables. We analyze the HAND, population, floods, and network vulnerability index.
The HAND raster has a spatial resolution of 30 m and 129,057 cells, with values between 0 and 127 m (Figure 4). Observing the density of workers, it is possible to notice two kernels, the biggest one in the historic center, in the Sé and República districts (Figure 5). Concerning study locations, there are 1,040 points, resulting in 1,390,969 students. Although it does not form a kernel, the concentration of students is more significant in five districts (numbers 7, 26, 49, 66, and 78) (Figure 5). The population that works and studies in the study area totals 4,760,743 people, disregarding people who work from home.
Regarding the road hierarchy, 64% of flooding occurred in arterial roads, 21% occurred in Collectors roads, 9% occurred in local roads; and 5% in Fast Lane (Table 2). Roads with the highest frequency of flooding are: Rio Branco (Table 3). These roads are highlighted in Figure 2. The 280 flood points are located in 116 of the 3,087 grid cells. When we consider the duration of these events, we have 3,006 cells with a value of less than one hour. The LV map reflects this characteristic. However, it is essential to highlight that the non-occurrence of flooding in a given place does not mean the absence of future occurrences at this location.
| Analysis of components
NV has 2.3% of the cells in the Very High class and 16.3% of the cells in the High class. The location of these cells reflects the topology of the input data, and we can relate them to the road axes (Figure 13). As in the previous graphics, smaller values are the most frequent (Figure 14). In the area with the highest concentration of cells in the Very High class (Figure 20), the predominant land use is commercial (commerce and services). This is an area with great attraction for travel because of this predominant typology.
FIGURE 20: Area with the highest concentration of cells in the Very High class.
| CONCLUSIONS AND PERSPECTIVES
This article proposes an urban flood risk map from a multidisciplinary perspective, integrating data from different sources and geographic units into a regular grid. The methodology allows the components to be used in an isolated or integrated way, with several possible combinations. In our study, all components, except F S, have few cells in the Very High class (a minimum of 0.5% in the case of LV and a maximum of 2.3% for NV). The F S component reflects the presence of watercourses, and it has a strong influence on the location of cells classified as High. The cells classified as Very High risk are primarily in downtown, a built-up area with impervious surfaces.
In addition, the F S can be used to better position teams when it starts to rain, and the potential impact can help prioritize response when there are two simultaneous floods. The flood risk map enables policymakers to figure out where to allocate resources, and it is also helpful for emergency managers, the exposed population, land-use planners, and infrastructure owners; they are all potential users of flood maps. Flood maps are a vital tool in flood risk management, including communication: they convey the spatial distribution of flood risk more effectively than other forms of presentation, increasing the awareness of decision-makers and people at risk. Furthermore, flood maps can serve as a basis for deriving flood insurance premiums or allow disaster managers to prepare for emergencies.
Exploratory analysis helps not only to delineate the methodology but also to understand how the variables behave.
This tool allows us to know the parts that make up the urban space and how they are related. In our study, flooding has a seasonal behavior with more expressive numbers in the summer. The exposed population will also vary according to the day and time, especially in areas with a predominance of commerce and services. These variations can be explored in a dynamic risk map, with seasonal or fluctuating risks throughout the day.
Another critical issue is determining the areal unit of analysis, since each variable has a different boundary. We work with point (workplaces, study places, flooding), line (road system), and raster (statistical grid and HAND) data, which are incompatible representations. We chose to aggregate these data into square grid cells to handle official data and harmonize the different spatial layers. This topic should receive attention in any such methodology, as it will always recur in multidisciplinary studies.
The methodology can also be adapted, including new variables, to better reflect the dynamics of complex systems.
We made an advance in estimating the population exposed to floods by including the population that works or studies in the analyzed area. The aggregation of the population took place in the final destinations of the trip (workplaces and study institutions) without considering the route taken.
As a future investigation, we suggest including dynamic data, which considers how the population moves until reaching the final destination, whether by volumetric vehicle counting, crowdsourced data, or specific mobility surveys.
To improve the calculation of the network vulnerability index, we are developing, as future work, a per-event index, in which a single event can affect more than one network element at the same time.
Figure 1 also shows the occurrences of flooding with road disruptions in the Tamanduateí basin in 2019. The roads with the highest frequency of flooding in 2019 are highlighted in Figure 2.
FIGURE 1: Location of the study area in the Tamanduateí basin, with the flood occurrences of 2019.
In this hierarchical order, the city of São Paulo has some official administrative divisions: municipality, sub-prefectures, and districts. The study area covers 25 districts (some of them partially), as shown in Figure 2. The Sé and República districts form the historical center, the original nucleus of the city, where approximately 600 thousand people circulate per day [38]. Some famous places in the city, such as the Sé Square, the Anhangabaú Valley, and the Municipal Market, are located in the Sé District (Figure 20). Although they are not a unit of analysis, districts are used throughout the text and maps to highlight relevant locations in the results.
The average value of the cells is 29.40 m. Cells with a value equal to zero, those with the presence of watercourses, have the highest frequency, with 3,310 cells or 2.6%. Values between 0 and 10 m represent 25% of the cells; Figure 18 shows that smaller values are the most frequent. The resident population totals 1,637,265 and is spread throughout the study area, with only 7.7% of the cells (or 237 cells) without population. The cell with the highest value has 5,597 inhabitants, and the average value is 530. Considering the density of inhabitants, it is possible to notice some kernels, the most prominent being in the Bela Vista, Consolação, and República districts (Figure 5). Analyzing the OD survey data concerning workplaces, there are 4,560 points, of which 4,527 have different geographic coordinates. A total of 3,369,774 people work at these points. The place with the highest number of workers has 13,361 people, and the average value is 739 workers.
FIGURE 4: HAND raster in meters.
Figure 5: Kernel density of inhabitants, workers, and students.

From January 1st to December 31st, 2019, there were 280 points of flooding with interdiction of roads in our study area, of which 48% started in the time slot from 4 pm to 7 pm (Figure 6). Considering this time slot, about 10.336 million people traveled in the Metropolitan Region of São Paulo (RMSP), of which 4,963,926 were leaving work, and 3,787,260 people were leaving or heading to an educational institution, according to the OD database [43].
Figure 6: Number of floods per start time (hour).

Figure 7: Number of floods per month.

The roads with the highest flood frequency (Table 3) include Mércurio (10%), Prof. Abraão de Morais (9%), Prof. Luiz Ignacio Anhaia Mello (7%), do Estado (6%), and da Bandeira Square (4%). Except for da Bandeira Square, these roads are all arterial in the road hierarchy.

Table 3: Number of floods by road (top 6 roads in flood frequency).

Figure 8: Roads vulnerability index.

Crossing the location of the flood points with watercourses, it is observed that the flood points are, on average, 300 meters from a watercourse; 50% of these points are up to 124 meters away from a watercourse. When crossing the location of the flood points with the HAND, the average value is 10.55 m, and 74% of the points are up to 10 m. The crossing of the points with the watercourses and the HAND indicates the presence of two types of flooding: SWF and river flooding. The network is composed of 23,702 nodes. If one road gives access to another, there is an edge between them. There are 52,921 edges in the network. The most significant topological distance (number of edges) between one road and another is 39 edges, the network's diameter. On average, the distance between roads is 7.22 intersections, and a road gives access to 4.46 others. Figure 8 shows the road vulnerability index, where lower values correspond to less relevant nodes in the network. The least relevant nodes are those that, when removed, have less impact on the collective characteristics of the network. All vulnerability data (code, inputs, and outputs) is available in the GitHub repository [64].
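The node-removal idea behind the vulnerability index can be sketched as follows. The fragment uses networkx and takes global efficiency as the "collective characteristic", in the spirit of Latora and Marchiori; the paper's own pipeline uses igraph and may define the index differently, and the random graph below merely stands in for the real road network.

import networkx as nx

G = nx.erdos_renyi_graph(50, 0.1, seed=42)   # stand-in for the road graph
E0 = nx.global_efficiency(G)

vulnerability = {}
for node in G.nodes:
    H = G.copy()
    H.remove_node(node)                      # drop one road and its links
    vulnerability[node] = (E0 - nx.global_efficiency(H)) / E0

most_critical = max(vulnerability, key=vulnerability.get)
print(most_critical, vulnerability[most_critical])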
Observing the distribution of cells of the E component, 0.9% are in the Very High class and 9.3% in the High class. These cells are mainly in the northwest portion of the study area, mainly in the historical center and its surroundings (Figure 9). It is possible to notice that cells with fewer people are more frequent and are in the Moderate class, and that the greater the number of people per cell, the lower the frequency (Figure 10).

Figure 9: Exposure map.

Figure 10: Population distribution plot.

LV has 0.5% of the cells in the Very High class and 2.1% of the cells in the High class. These cells are spatially dispersed, but most of them are located in the center of the city (Figure 11). Floods with a shorter duration (up to 1 hour) are more frequent, occupying the Moderate class. Flooding lasting between 1 hour and 10 hours is in the High class, and flooding that lasts longer than 10 hours is in the Very High class. The longer the duration, the lower the frequency (Figure 12).
Figure 13: Network vulnerability map.

Figure 14: Network vulnerability distribution plot.

The V has only 0.6% of the cells in the Very High class and 2.7% of the cells in the High class, with a higher concentration in the center (Figure 15). It is still possible to perceive some road axes; however, the recurrent location of the flooding causes the scatter of these cells. The integration of the E and V components forms the PI, which has 0.3% of the cells in the Very High class and 1.9% of the cells in the High class, located in the same portion as the V but with a smaller number of cells (Figure 16). The FS component behaves opposite to the others, with a more significant number of cells (48.1%) in the Very High class and 33.3% of the cells in the High class, summing up to 81.4% together. This high number in these two classes demonstrates the proximity of cells to watercourses (Figure 17). Values up to 15 m are in the Very High class; values between 15 m and 38 m are in the High class, and values above 38 m are in the Moderate class. This means that the lower the HAND, the more significant the FS (Figure 18).

Figure 17: Flood susceptibility map.

Figure 18: HAND distribution plot.

Finally, the R has 1.2% of the cells in the Very High class and 47.5% of the cells in the High class (Figure 19). 21 of the 37 cells belonging to the Very High class are located in the historic center of the city, covering the Sé Square, the Anhangabaú Valley, and the Municipal Market (Figure 20). Although FS strongly influences R, R has only 37 cells in the Very High class. Even if the FS is Very High, the R will only be classified as Very High if the PI is at least classified as High. Therefore, the two components have equal importance in the R estimation. The Secretariat of Urban Development of São Paulo established an aggregation methodology resulting from the crossing between the "use" and "pattern" values for each registered property, generating 16 types of land use. The classification is made by fiscal block, considering the predominant land use (greater than or equal to 60%) of the properties that make up the block [65].
Figure 20 shows the land use typology (left) and satellite image (right) of the central area with the highest concentration of cells classified as Very High risk. Analyzing this clipping, it is possible to notice that it is a built-up area with impervious surfaces and a high density of commercial buildings (50% of the blocks are classified as commerce).

Figure 19: Flood risk map.
Table 1 shows the resulting class when combining two classes (a lookup implementation is sketched after the table).

Classes    | Moderate | High      | Very High
Moderate   | Moderate | Moderate  | High
High       | Moderate | High      | Very High
Very High  | High     | Very High | Very High

Table 1: Resulting class when combining two classes.
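Because the combination rule of Table 1 is not a simple arithmetic on class ranks (Moderate combined with High yields Moderate, yet High combined with Very High yields Very High), a direct lookup is the natural implementation. A minimal sketch, assuming the symmetric table reconstructed above:

COMBINE = {
    ("Moderate", "Moderate"): "Moderate",
    ("Moderate", "High"): "Moderate",
    ("Moderate", "Very High"): "High",
    ("High", "High"): "High",
    ("High", "Very High"): "Very High",
    ("Very High", "Very High"): "Very High",
}

def combine(a, b):
    """Resulting class when combining two component classes (Table 1)."""
    return COMBINE.get((a, b)) or COMBINE[(b, a)]

assert combine("High", "Moderate") == "Moderate"
assert combine("Very High", "High") == "Very High"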
Table 4 summarizes the component results by cells. Except for FS, all components have few cells in the Very High class (minimum value of 0.5% for LV, and maximum value of 2.3% for NV). The results are also presented in maps and graphs. The maps allow us to visualize the geographic location of the classes and their neighborhood relationships. The graphs, in turn, allow us to observe the ranges used in the division of the classes of each component (x-axis) and relate them to the frequency (y-axis). The colored divisions of the graphs are compatible with the respective maps.

The E component sums 6.4 million people, of which 48% is in the Moderate class, 36% is in the High class, and 16% is in the Very High class.
ACKNOWLEDGEMENTS
We thank Professor Carlos Augusto Morales Rodriguez for the valuable initial discussions about flood data in São Paulo. We also thank the Intelligent Systems Computing Laboratory (CSILab) of the Federal University of Ouro Preto for sharing computational resources, and the financing institutions that supported the development of this research.

CONFLICT OF INTEREST
No potential conflict of interest was reported by the author(s).
REFERENCES
Li M, Huang Q, Wang L, Yin J, Wang J. Modeling the traffic disruption caused by pluvial flash flood on intra-urban road network. Transactions in GIS 2018;22(1):311-322.
Yin J, Yu D, Yin Z, Wang J, Xu S. Modelling the anthropogenic impacts on fluvial flood risks in a coastal mega-city: A scenario-based case study in Shanghai, China. Landscape and Urban Planning 2015;136(4):144-155.
Wu X, Yu D, Chen Z, Wilby R. An evaluation of the impacts of land surface modification, storm sewer development, and rainfall variation on waterlogging risk in Shanghai. Natural Hazards 2012;63:305-323.
Ogden F, Pradhan N, Downer C, Zahner J. Relative importance of impervious area, drainage density, width function, and subsurface storm drainage on flood runoff from an urbanized catchment. Water Resources Research 2011;47(12):1-12.
Dickson E, Baker JL, Hoornweg D, Tiwari A. Urban Risk Assessments: an approach for understanding disaster and climate risk in cities. The World Bank; 2012.
Quirós E, Gagnon AS. Validation of flood risk maps using open source optical and radar satellite imagery. Transactions in GIS 2020;p. 1-19.
Jongman B, et al. Declining vulnerability to river floods and the global benefits of adaptation. Proc Natl Acad Sci 2015;112:E2271-E2280.
Nofal OM, van de Lindt JW. High-resolution flood risk approach to quantify the impact of policy change on flood losses at community-level. International Journal of Disaster Risk Reduction 2021;62:102429.
Fundação Instituto Brasileiro de Geografia e Estatística (IBGE). Panorama of Brazilian municipalities; 2020. Accessed: 2021-04-08. https://cidades.ibge.gov.br/brasil/sp/sao-paulo/panorama.
Ministério da Infraestrutura do Brasil. Frota de Veículos 2020; 2020. Accessed: 2021-04-08. https://www.gov.br/infraestrutura/pt-br/assuntos/transito/conteudo-denatran/frota-de-veiculos-2020.
IBOPE Inteligência, Pesquisa de Opinião Pública. Viver em São Paulo: Mobilidade Urbana; 2019. Accessed: 2021-04-08. https://www.nossasaopaulo.org.br/wp-content/uploads/2019/09/Pesquisa_ViverEmSP_MobilidadeUrbana_completa_2019.pdf.
Vale RCC. Quanto custa a imobilidade urbana em São Paulo? POLICY 2020;(3).
López ASV, Rodriguez CAM. Flash Flood Forecasting in São Paulo Using a Binary Logistic Regression Model. Atmosphere 2020;11:473.
Simoyama F, Tomás LR, Pinto F, Santos L, Salles Neto L. Optimal rain gauge allocation to reduce rainfall impacts on urban mobility: a spatial sensitivity analysis; submitted.
Litzinger P, Navratil G, Sivertun Å, Knorr D. Using weather information to improve route planning. In: Bridging the Geographic Information Sciences. Springer; 2012. p. 199-214.
United Nations International Strategy for Disaster Reduction (UNISDR). Terminology on Disaster Risk Reduction; 2009. Accessed: 2021-12-09. https://www.preventionweb.net/files/7817_UNISDRTerminologyEnglish.pdf.
At risk: natural hazards, people's vulnerability, and disasters. 2nd ed. Routledge; 2004.
United Nations Development Programme (UNDP). Reducing disaster risk: a challenge for development. John S. Swift Co; 2004. https://www.undp.org/publications/reducing-disaster-risk-challenge-development.
Kron W. Flood Risk = Hazard • Values • Vulnerability. Water International 2005;30(1):58-68.
Crichton D. What can cities do to increase resilience? Phil Trans R Soc A 2007;365:2731-2739.
Ministério da Integração Nacional, Secretaria Nacional de Defesa Civil, Centro Nacional de Gerenciamento de Riscos e Desastres (CENAD). Anuário Brasileiro de Desastres Naturais; 2014.
Walesh SG. Urban water management. John Wiley and Sons, Inc.; 1989.
Glago FJ. Flood Disaster Hazards: Causes, Impacts and Management: A State-of-the-Art Review. In: Farsangi EN, editor. Natural Hazards: impacts, adjustments and resilience. IntechOpen; 2021. p. 1-18.
Falconer RH, Cobby D, Smyth P, Astle G, Dent J, Golding B. Pluvial flooding: new approaches in flood warning, mapping and risk management. Journal of Flood Risk Management 2009;2(3):198-208.
Londe LR, Santos LBL, Marchezini V. Rotas de prevenção: a redução de vulnerabilidade a desastres no âmbito das infraestruturas de transporte e da mobilidade urbana no Brasil. In: Marchezini V, Wisner B, Londe LR, Saito SMO, editors. Reduction of vulnerability to disasters: from knowledge to action, vol. 1. Rima Editora; 2017. p. 517-528.
Liu X, Yang S, Ye T, An R, Chen C. A new approach to estimating flood-affected populations by combining mobility patterns with multi-source data: A case study of Wuhan, China. International Journal of Disaster Risk Reduction 2021;55:102106.
Surface water flooding risk to urban communities: Analysis of vulnerability, hazard and exposure. Landscape and Urban Planning 2011;103(2):185-197.
Gill SE, Handley JF, Ennos AR, Pauleit S. Adapting cities for climate change: the role of green infrastructure. Built Environment 2007;33:115-133.
Beden N, Ulke Keskin A. Flood map production and evaluation of flood risks in situations of insufficient flow data. Nat Hazards 2021;105:2381-2408.
Ramkar P, Yadav SM. Flood risk index in data-scarce river basins using the AHP and GIS approach. Nat Hazards 2021;109:1119-1140.
Zokagoa J, Soulaïmani A, Dupuis P. Flood risk mapping using uncertainty propagation analysis on a peak discharge: case study of the Mille Iles River in Quebec. Nat Hazards 2021;107:285-310.
Praharaj S, Chen TD, Zahura FT, Behl M, Goodall J. Estimating impacts of recurring flooding on roadway networks: a Norfolk, Virginia case study. Nat Hazards 2021;107:2363-2387.
Kasmalkar IG, Serafin KA, Miao Y, Bick IA, Ortolano L, Ouyang D, et al. When floods hit the road: Resilience to flood-related traffic disruption in the San Francisco Bay Area and beyond. Science Advances 2020;6(32):eaba2423.
Lu QC, Peng ZR, Zhang J. Identification and Prioritization of Critical Transportation Infrastructure: Case Study of Coastal Flooding. Journal of Transportation Engineering 2015;141(3):04014082.
Pereira Filho AJ, dos Santos CC. Modeling a densely urbanized watershed with an artificial neural network, weather radar and telemetric data. Journal of Hydrology 2006;317(1-2):31-48.
Coelho TAS. Análise geoespacial e mapeamento da densidade de pontos de alagamento em vias públicas do município de São Paulo, entre 2008 e 2013. PhD thesis, Universidade Estadual de Campinas; 2016.
Prefeitura de São Paulo. Historic Center Manual: Maintenance, Conservation, Renovation and Restoration; 2021. https://gestaourbana.prefeitura.sp.gov.br/wp-content/uploads/2021/05/Cartilha_REV2021-compactado.pdf.
Fundação Instituto Brasileiro de Geografia e Estatística (IBGE). Grade Estatística Censo 2010; 2016. Accessed: 2021-04-08. https://www.ibge.gov.br/geociencias/downloads-geociencias.html.
Grasland C, Madelin M. The Modifiable Areas Unit Problem. Final Report of ESPON Project 3.4.1. ESPON (European Spatial Planning Observation Network); 2006. https://www.espon.eu/sites/default/files/attachments/espon343_maup_final_version2_nov_2006.pdf.
Norman P, Rees P, Boyle P. Achieving Data Compatibility over Space and Time: Creating Consistent Geographical Zones. International Journal of Population Geography 2003;9:365-386.
Guzmán JM, Schensul D, Zhang S. Understanding Vulnerability and Adaptation Using Census Data. In: Martine G, Schensul D, editors. The Demography of Adaptation to Climate Change. UNFPA, IIED and El Colegio de México; 2013. https://www.unfpa.org/sites/default/files/pub-pdf/The%20Demography%20of%20Adaptation%20to%20Climate%20Change.pdf.
METRO, Companhia do Metropolitano de São Paulo. Pesquisa Origem-Destino 2017 RMSP; 2019. http://www.metro.sp.gov.br/pesquisa-od/.
São Paulo's Emergency Management Center (CGE). Flood records in São Paulo, Brazil in 2019; 2021. https://www.cgesp.org/v3/.
Center for Metropolitan Studies (CEM). Digital Geo-referenced Cartographic Base of Roads System in the Metropolitan Region of São Paulo (RMSP), 2020 Edition; 2021. https://centrodametropole.fflch.usp.br/pt-br/download-de-dados?f%5B0%5D=facets_temas%3Asistema%20viario.
Farr TG, Rosen PA, Caro E, Crippen R, Duren R, Hensley S, et al. The Shuttle Radar Topography Mission. Reviews of Geophysics 2007;45(2).
O'Callaghan JF, Mark DM. The extraction of drainage networks from digital elevation data. Computer Vision, Graphics, and Image Processing 1984;28(3):323-344.
Rennó CD, Nobre AD, Cuartas LA, Soares JV, Hodnett MG, Tomasella J, et al. HAND, a new terrain descriptor using SRTM-DEM: Mapping terra-firme rainforest environments in Amazonia. Remote Sensing of Environment 2008;112(9):3469-3481.
Nobre AD, Cuartas LA, Hodnett M, Rennó CD, Rodrigues G, Silveira A, et al. Height Above the Nearest Drainage: a hydrologically relevant new terrain model. Journal of Hydrology 2011;404(1):13-29.
National Institute for Space Research (INPE). TerraLib and TerraView; 2021. Accessed: 2021-08-23. http://www.dpi.inpe.br/terralib5/wiki/doku.php.
Veyret Y. Os riscos: o homem como agressor e vítima do meio ambiente. Contexto; 2007.
Jornal Zona Sul. Temporal alagou região do Ipiranga e fez vítimas fatais; 2019. Accessed: 2021-04-08.
Santos LBL, Londe LR, de Carvalho TJ, Menasché DS, Vega-Oliveros DA. About Interfaces Between Machine Learning, Complex Networks, Survivability Analysis, and Disaster Risk Reduction. In: Bacelar Lima Santos L, Galante Negri R, de Carvalho TJ, editors. Cham: Springer International Publishing; 2019. p. 185-215.
Jorge AAS, Rossato M, Bacelar RB, Santos LBL. GIS4Graph: a tool for analyzing (geo)graphs applied to study efficiency in a street network. Brazilian Symposium on Geoinformatics; 2017.
Santos LBL, Jorge AAS, Rossato M, Santos JD, Candido OA, Seron W, et al. (geo)graphs: Complex Networks as a shapefile of nodes and a shapefile of edges for different applications; 2017.
Lima Santos LB, Carvalho LM, Seron W, Coelho FC, Macau EE, Quiles MG, et al. How do urban mobility (geo)graph's topological properties fill a map? Appl Netw Sci 2019;(4).
Ceron W, Santos LBL, Neto GD, Quiles MG, Candido OA. Community Detection in Very High-Resolution Meteorological Networks. IEEE Geoscience and Remote Sensing Letters 2020;17(11):2007-2010.
Lamosa JD, Tomás LR, Quiles MG, Londe LR, Santos LBL, Macau EEN. Topological indexes and community structure for urban mobility networks: Variations in a business day. PLOS ONE 2021;16(3):1-17.
Murray AT, Grubesic TH. Overview of Reliability and Vulnerability in Critical Infrastructure. In: Murray AT, Grubesic TH, editors. Critical Infrastructure: Reliability and Vulnerability. Springer; 2007. p. 1-8.
The igraph Core Team. igraph. Zenodo; 2020. https://zenodo.org/record/3630268.
Latora V, Marchiori M. Vulnerability and protection of infrastructure networks. Physical Review E 2005;71(1):015103.
Goldshtein V, Koganov GA, Surdutovich GI. Vulnerability and Hierarchy of Complex Networks; 2004.
Jenks GF, Caspall FC. Error on Choroplethic Maps: Definition, Measurement, Reduction. Annals of the Association of American Geographers 1971;61(2):217-244.
Santos GG. Tamanduatei Vulnerability; 2022. https://github.com/gioguarnieri/Tamanduatei_Vulnerability.
São Paulo City Hall. Digital map of the city of São Paulo; 2021. Accessed: 2021-09-15. http://geosampa.prefeitura.sp.gov.br/PaginasPublicas/_SBC.aspx.
| [
"https://github.com/gioguarnieri/Tamanduatei_Vulnerability."
] |
[
"In Search of SUSY",
"In Search of SUSY"
] | [
"W De Boer \nInst. für Experimentelle Kernphysik\nUniversität Karlsruhe Postfach\n6980 D-76128Karlsruhe\n"
] | [
"Inst. für Experimentelle Kernphysik\nUniversität Karlsruhe Postfach\n6980 D-76128Karlsruhe"
] | [] | Electroweak precision tests of the SM and MSSM as well as Searches for Supersymmetric Particles and Higgs bosons at LEP II and their significance within the MSSM are discussed. | null | [
"https://export.arxiv.org/pdf/hep-ph/9705309v2.pdf"
] | 119,488,617 | hep-ph/9705309 | fbdb6cd5b2c139f5e7958a9adf5910c312e13250 |
In Search of SUSY
May 1997 March, 1997
W De Boer
Inst. für Experimentelle Kernphysik
Universität Karlsruhe Postfach
6980 D-76128Karlsruhe
In Search of SUSY
arXiv:hep-ph/9705309v2, 14 May 1997; March 1997
Electroweak precision tests of the SM and MSSM as well as Searches for Supersymmetric Particles and Higgs bosons at LEP II and their significance within the MSSM are discussed.
Introduction
Although at present the Standard Model (SM) shows good agreement with all available data, many questions can only be answered by assuming new physics beyond the SM. An excellent candidate for new physics is the supersymmetric extension of the SM (MSSM), which was found to describe the electroweak data equally well. In addition, the MSSM allows:

• Unification of the gauge coupling constants;
• Unification of the Yukawa couplings;
• Natural occurrence of the Higgs mechanism at a low scale;
• Cancellation of the quadratic divergences in the radiative corrections of the SM • Relic abundance of dark matter.
After the discovery that unification within the SM is excluded by the precise measurements of the coupling constants at LEP I [1,2,3], a flood of papers on these subjects has emerged; some recent contributions of the groups involved are given in refs. [4]-[12]. It is surprising that one can find a region of parameter space within the minimal SUSY model where all the independent constraints mentioned above can be fulfilled simultaneously.
The paper has been organized as follows: first the electroweak precision tests of the SM and MSSM are discussed, followed by the corresponding restrictions on the MSSM parameter space, both from the searches and the unification conditions.
Electroweak Precision Tests of the SM and MSSM
In this section an equivalent analysis of the electroweak data, both in the SM and its supersymmetric extension, is described, using the actual electroweak data from the Tevatron, LEP and SLC [13], the measurement of BR(b→sγ)/BR(b→ceν) from CLEO [14], and limits on the masses of supersymmetric particles. The observed b → sγ decay rate is 30% below the SM prediction, while the decay Z⁰ → bb̄ is about 1.8σ above the SM prediction. In the MSSM light stops and light charginos increase R_b [15]-[23] and decrease the b → sγ rate, so both observations can be brought into agreement with the MSSM for the same region of parameter space. However, as will be shown, the resulting χ² value for the MSSM fits is only marginally lower. In addition, the splitting in the stop sector has to be unnaturally high, so it remains to be seen if these effects are real or due to a fluctuation. Further details of the procedure and extensive references are given elsewhere [24].
Standard Model Fits
The SM cross sections and asymmetries are completely determined by M_Z, m_t, m_H, G_F, α, and α_s. From the combined CDF and D0 data m_t has been determined to be 175 ± 6 GeV [25], so the parameters with the largest uncertainties are m_H and α_s. The error on the fine-structure constant α is limited by the uncertainty in the hadronic cross section in e⁺e⁻ annihilation at low energies, which is used to determine the vacuum polarization contributions to α. This error was taken into account by considering α to be a free parameter in the fit and constraining it to the value 1/α = 128.89 ± 0.09 [26]. If this error is not taken into account, the error on the Higgs mass is underestimated by 30%. Using the input values discussed in the introduction yields the SM fit results discussed below.

Figure 1: Dependence of the SM sin²Θ^lept_eff on the Higgs mass. The top mass m_t = 175 ± 6 GeV was varied within its error, as shown by the dashed band labelled SM. The SLD and the LEP measurements of sin²Θ^lept_eff are also shown as horizontal bands. The SLD value yields a Higgs mass below the recent limits from direct Higgs searches at LEP (shaded area).
Minor deviations from the EWWG fit results [13] are due to the incorporation of the b → sγ data from CLEO [14], which are important for the MSSM fits described below. From the SM fit parameters one can derive the value of the electroweak mixing parameter in the MS-bar scheme: sin²θ_MS-bar = 0.2316 ± 0.0004, which is within errors equal to sin²Θ^lept_eff. The main contributions to the χ²/d.o.f. = 18.5/15 originate from sin²Θ^lept_eff from SLD (Δχ² = 4.9), R_b (Δχ² = 3.1) and A^b_FB (Δχ² = 3.5), but the overall SM agreement is good: χ²/d.o.f. = 18.5/15 corresponds to a probability of 24%.
The low value of sin²Θ^lept_eff from SLD as compared to the LEP value yields a Higgs mass below the lower limit on the SM Higgs mass from direct searches, as demonstrated in fig. 1. The LEP data alone without SLD yield m_H ≈ 240 GeV, while sin²Θ^lept_eff from SLD corresponds to m_H ≈ 15 GeV, as indicated by the squares in fig. 1. The latter value is excluded by the 95% C.L. lower limit of 63.9 GeV from the LEP experiments [27,28]. The different values of sin²Θ^lept_eff from LEP and SLD translate into different predictions for M_W, as shown in fig. 2. The present M_W measurements, including the preliminary value from the LEP II measurements [29], lie in between these predictions.
MSSM Fits and Comparison with the SM
As mentioned in the introduction, the MSSM can increase the value of R b , which experimentally is slightly above the SM value. The major additional contributions originate from vertex contributions with light charginos and light right handed stops in the low tan β scenario and light higgses for large tan β values. Since the large tan β scenario does not improve R b significantly [24], it will not be discussed here anymore. The R b dependence on chargino and stop masses is shown in fig. 3. The experimental value R b = 0.2178 ± 0.0011 is clearly above the SM value of 0.2158 and can be obtained for charginos around 85 GeV and the lightest stop mass around 50 GeV (best fit results, [24]), although the second stop mass has to be heavy, i.e. well above m t .
As will be discussed in the next section, such a large splitting in the stop sector is difficult to obtain in the MSSM if one requires unification of the left- and right-handed stop squarks at the GUT scale. The final analysis of the available LEP data will tell whether the present preliminary value of R_b indeed stays above the SM value. In counting the d.o.f. the insensitive (and fixed) parameters were ignored [24].
It is interesting to note that the predicted value of m W tends to be higher in the MSSM than in the SM, especially for light stops, as shown in fig. 5.
Another interesting point is the α_s(M_Z) value. An increase in R_b implies an increase in the total width of the Z⁰ boson, which can be compensated by a decrease in the QCD corrections, i.e. α_s. However, since R_b is only marginally above the SM value, the fitted value of α_s(M_Z) agrees between the SM and MSSM within the error bars. Note that the α_s crisis has disappeared after the LEP value from the total cross section came down and the values from both lattice calculations and deep inelastic scattering went up [30].
The Minimal SuperSymmetric Model (MSSM)
Supersymmetry presupposes a symmetry between fermions and bosons, which can only be realized in nature by assuming for every particle of the SM with spin j a supersymmetric partner (sparticle) with spin j − 1/2. These spartners must have the same masses and couplings as the particles if supersymmetry is an exact symmetry in nature. However, since the sparticles have not been observed so far, supersymmetry must be broken. The MSSM can be obtained from the SM by replacing the known fields with superfields, which include the spin-0 sfermions and the spin-1/2 gauginos.
In addition, supersymmetry requires two complex SU(2) doublets for the Higgs sector instead of only one in the SM. The reasons are twofold: (a) in the SM one can give mass to the down-type quarks and leptons by using the complex conjugate of the Higgs doublet; since the Higgses in the MSSM are part of the bosonic fields, one cannot just take the complex conjugate of only a part of the superfield structure, so one needs separate Higgs doublets for up- and down-type fermions; (b) the superpartners of the Higgses are fermions, which contribute to the triangle anomalies unless the total hypercharge equals zero, which requires the introduction of two SU(2) Higgs doublets with opposite hypercharge. Since the top quark is much heavier than the bottom quark, the Yukawa corrections to the mass terms of the two Higgs doublets are very different, thus breaking the symmetry between them. These radiative corrections automatically lead to the Higgs mechanism of spontaneous electroweak symmetry breaking at a scale far below the unification scale, as discussed in many reviews [4].
Another difference between the interactions in the SM and MSSM arises from the triple vertices: in the SM a spin-1/2 fermion cannot couple to two other fermions, since this would violate conservation of angular momentum (or, more generally, Lorentz invariance). For spin-0 particles such triple vertices are allowed, so a fermion can couple to a sfermion and a fermion! Such vertices with three (s)fermions violate lepton and/or baryon number. They can be avoided in the MSSM by introducing an additional multiplicative quantum number, called R-parity, defined as:
R = (−1)^(3(B−L)+2S).   (1)
This quantity is +1 for SM particles and −1 for the supersymmetric partners, because of the change in the spin S. R-parity conservation forbids the coupling of a fermion to a sfermion and a fermion, since the final state would have R = (−1)·(+1) = −1, thus eliminating the dangerous baryon- and lepton-number violating vertices. It is usually assumed that R-parity is conserved exactly, since the experimental limits on the R-parity violating couplings are very severe. R-parity conservation implies that:
• sparticles can only be produced in pairs;
• the lightest supersymmetric particle is stable, since its decay into normal matter would violate R-parity.
Figure 4: Comparison of the SM (χ²/d.o.f. = 18.5/15) and MSSM fits to the LEP/SLD observables N_ν, M_Z, Γ_Z, σ_had, R_l, A^l_FB, R_b, R_c, A^b_FB, A^c_FB, A_b, A_c, A_τ, A_e, M_W and sin²Θ^lept_eff; the largest pulls occur for sin²Θ^lept_eff, A^b_FB and R_b.
• the interactions of particles and sparticles can be different. For example, the photon couples to electron-positron pairs, but the photino does not couple to selectron-spositron pairs, since in the latter case the R-parity would change from −1 to +1. In other words, each triple vertex must have two sparticles attached to it, thus forbidding triple vertices in which a fermion couples to a sfermion and another fermion.
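A quick numerical check of eq. (1) for a few standard (B, L, S) assignments; the quantum numbers below are the usual textbook values, not taken from the paper:

def r_parity(B, L, S):
    return (-1) ** round(3 * (B - L) + 2 * S)

print(r_parity(B=1/3, L=0, S=1/2))  # quark:   +1
print(r_parity(B=1/3, L=0, S=0))    # squark:  -1
print(r_parity(B=0, L=1, S=1/2))    # lepton:  +1
print(r_parity(B=0, L=0, S=1))      # photon:  +1
print(r_parity(B=0, L=0, S=1/2))    # photino: -1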
Obviously SUSY cannot be an exact symmetry of nature; else the supersymmetric partners would have the same mass as the normal particles. In the absence of a fundamental understanding of the origin of supersymmetry breaking one considers all breaking terms, which do not introduce quadratic divergences. This cancellation between fermions and bosons in the loop corrections is one of the great advantages of the MSSM, since it allows one to calculate radiative corrections up to the unification scale without divergences.
The breaking terms consist of the gaugino mass terms, the scalar mass terms, the trilinear (A-term) interactions amongst the scalars and the analogous bilinear (B-term) interactions [31].
If one assumes that SUSY is broken due to the universal gravitational interactions one needs only a few independent SUSY breaking parameters at the unification scale: a common mass m 1/2 for the gauginos, a common mass m 0 for the scalars, a common trilinear interaction A 0 and a bilinear coupling B 0 .
Figure 5: m_W and m_t from direct (Tevatron and LEP II) and indirect measurements, in comparison with the SM (shaded area) and MSSM (crossed area) predictions. The uncertainty of the SM prediction originates from the unknown Higgs mass, while for the MSSM it is mainly due to the uncertainty in the stop mass, since the Higgs mass is quite well predicted in the MSSM. The highest m_W is obtained for the lightest stop mass.

In addition to these soft breaking terms one needs to specify the ratio tan β of the two Higgs VEVs and a supersymmetric Higgsino mixing parameter µ. The minimization conditions of the Higgs potential, requiring a non-trivial minimum for electroweak symmetry breaking [32], yield a relation between the bilinear coupling B_0 and tan β and determine the value of µ², so finally the SUSY mass spectrum in this supergravity-inspired scenario is determined by the following parameters:
m_0, m_1/2, tan β, A_0, sign(µ).   (2)
As will be shown in the next section, tan β has only two solutions from the known top mass, while sign(µ) and A_0 do not influence the mass spectrum strongly (except for the mixing in the stop sector, which can change the lightest Higgs mass by 10-15 GeV), so the main variables for the prediction of the SUSY mass spectrum are m_0 and m_1/2. The various MSSM masses and couplings have to be evolved via the renormalization group equations (RGE) from their common values at the unification scale to the electroweak scale. This involves solving typically 26 coupled differential equations with common values as boundary conditions at M_GUT (t = ln(M/M_GUT)² = 0):
scalars:   m̃²_Q = m̃²_U = m̃²_D = m̃²_L = m̃²_E = m²_0;   (3)
gauginos:  M_i = m_1/2,  i = 1, 2, 3;   (4)
couplings: α̃_i(0) = α̃_GUT,  i = 1, 2, 3.   (5)
Here M_1, M_2, and M_3 are the gaugino masses of the U(1), SU(2) and SU(3) groups. One has, however, to take into account the mixing between the various states.
Gaugino-Higgsino Mass Terms: Charginos and Neutralinos
Gauginos and Higgsinos both have spin j = 1/2, so the mass eigenstates can be different from the interaction eigenstates because of the non-diagonal mass terms. The partners of the two neutral gauge bosons and the two neutral Higgs bosons are the four neutralinos χ̃⁰_i (i = 1,...,4) after mixing; correspondingly, the charginos χ̃±_i (i = 1, 2) are mixtures of the wino and the charged higgsino. The neutralino mixing is described by the following mass matrix:
M^(0) =
⎛  M_1                  0                   −M_Z cos β sin θ_W    M_Z sin β sin θ_W ⎞
⎜  0                    M_2                  M_Z cos β cos θ_W   −M_Z sin β cos θ_W ⎟
⎜ −M_Z cos β sin θ_W    M_Z cos β cos θ_W    0                   −µ                 ⎟
⎝  M_Z sin β sin θ_W   −M_Z sin β cos θ_W   −µ                    0                 ⎠ .   (6)
The physical neutralino masses M_χ̃⁰_i are obtained as the eigenvalues of this matrix after diagonalization. For the charginos one has similarly:
M^(c) = ⎛ M_2            √2 M_W sin β ⎞
        ⎝ √2 M_W cos β   µ            ⎠ .   (7)
The M_1 and M_2 terms are the gaugino masses at low energies. They are linked to their common value at the GUT scale (m_1/2) by the renormalization group equations. Numerically one finds at the weak scale:
M_3(g̃) ≈ 2.7 m_1/2,   (8)
M_2(M_Z) ≈ 0.8 m_1/2,   (9)
M_1(M_Z) ≈ 0.4 m_1/2,   (10)
µ(M_Z) ≈ 0.63 µ(0).   (11)
Since the gluinos obtain corrections from the strong coupling constant α_3, they grow heavier than the gauginos of the SU(2) ⊗ U(1) group. In the case favoured by the fit discussed below one finds µ ≫ M_2 > M_W, in which case the chargino mass eigenstates are approximately M_2 and µ, and the four neutralino mass eigenstates are |M_1|, |M_2|, |µ|, and |µ|, respectively. In other words, the neutralinos and charginos do not mix strongly, so the lightest chargino is wino-like, while the LSP is bino-like, which has consequences for dark matter searches.
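This hierarchy is easy to verify by diagonalizing the mass matrices (6) and (7) numerically. The sketch below uses one illustrative parameter point (M_1, M_2, µ and tan β are assumptions, not the paper's fit values); the physical neutralino masses are the absolute eigenvalues of the symmetric matrix (6), while the chargino masses are the singular values of (7).

import numpy as np

MZ, MW = 91.19, 80.4
sw = np.sqrt(0.2316)                 # sin(theta_W) from sin^2 = 0.2316
cw = np.sqrt(1 - sw**2)
M1, M2, mu, tanb = 100.0, 200.0, 500.0, 1.6
b = np.arctan(tanb)

Mn = np.array([
    [M1, 0.0, -MZ*np.cos(b)*sw,  MZ*np.sin(b)*sw],
    [0.0, M2,  MZ*np.cos(b)*cw, -MZ*np.sin(b)*cw],
    [-MZ*np.cos(b)*sw,  MZ*np.cos(b)*cw, 0.0, -mu],
    [ MZ*np.sin(b)*sw, -MZ*np.sin(b)*cw, -mu, 0.0]])

Mc = np.array([[M2, np.sqrt(2)*MW*np.sin(b)],
               [np.sqrt(2)*MW*np.cos(b), mu]])

print(np.sort(np.abs(np.linalg.eigvalsh(Mn))))       # ~ |M1|, |M2|, |mu|, |mu|
print(np.sort(np.linalg.svd(Mc, compute_uv=False)))  # ~ M2 and |mu|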
Squark and Slepton Masses
The non-negligible Yukawa couplings cause a mixing between the electroweak eigenstates and the mass eigenstates of the third generation particles. The mixing matrix for the stop sector is:
⎛ m̃²_tL                 m_t (A_t − µ cot β) ⎞
⎝ m_t (A_t − µ cot β)    m̃²_tR              ⎠ .   (12)
The mass eigenstates are the eigenvalues of this matrix. Similar matrices exist for the sbottom and the stau, except that in the off-diagonal elements m_t is replaced by m_b (m_τ) and cot β is replaced by tan β, so the mixing effects are smaller, unless tan β is large. For the first and second generation the mixing can be neglected, since the off-diagonal terms are proportional to the quark masses of the first and second generation.
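A minimal numerical illustration of the stop mixing in eq. (12); the soft masses and A_t below are assumptions, chosen only to produce a light, mostly right-handed stop together with a heavy one:

import numpy as np

mt, At, mu, tanb = 175.0, 300.0, 500.0, 1.6
mtL2, mtR2 = 600.0**2, 250.0**2      # diagonal soft terms in GeV^2
off = mt * (At - mu / tanb)          # off-diagonal element, m_t (A_t - mu cot(beta))

M2 = np.array([[mtL2, off],
               [off, mtR2]])
m_light, m_heavy = np.sqrt(np.sort(np.linalg.eigvalsh(M2)))
print(m_light, m_heavy)              # light and heavy stop masses in GeV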
The squark and slepton masses are assumed to all have the same value at the GUT scale. However, in contrast to the sleptons, the squarks get radiative corrections from virtual gluons, which make them heavier than the sleptons at low energies.
CMSSM and R b
An increase in R_b requires one (mainly right-handed) stop to be light and the other one to be heavy (see the previous section). If both were light, then all other squarks would likely be light as well, which would upset the good agreement between the SM and the electroweak data. A large mass splitting in the stop sector needs a very artificial fine tuning of the few free parameters in the Constrained MSSM, which connects unified masses and couplings at the GUT scale to their values at the electroweak scale via the RGE, as will be discussed in the next section. This is obvious from the mixing matrix in the squark sector (see eq. 12): if one of the diagonal elements is much larger than m_t, the off-diagonal terms of order m_t will not cause a mixing, and the difference between the left- and right-handed stops has to come from the evolution of the diagonal terms, which depend on the Yukawa couplings for top and bottom (Y_t, Y_b) and the trilinear couplings A_t(b). For low tan β, Y_b is negligible, while A_t and Y_t are not free parameters, since they run to fixed-point solutions [33], i.e. become independent of their values at the GUT scale. Therefore there is little freedom to adjust these parameters within the CMSSM in order to get a large splitting between the left- and right-handed stops.
In addition, problems arise with electroweak symmetry breaking, since this requires the Higgs mixing parameter µ to be much larger than the gaugino masses [33], while R_b requires low values of µ for a significant enhancement (since the chargino has to be preferably Higgsino-like). In conclusion, within the CMSSM an enhancement of R_b above the SM value is practically excluded; only if all squark and gaugino masses are taken as free parameters, without considering the RGE and common values at the GUT scale, can one obtain an improvement in R_b.
Low energy Constraints in the CMSSM
Within the Constrained Minimal Supersymmetric Model (CMSSM) it is possible to predict the low energy gauge couplings and masses of the 3 generation particles from the few supergravity inspired parameters at the GUT scale. The main ones are m 0 and m 1/2 as discussed in section 3, eq. 2. Moreover, the CMSSM predicts electroweak symmetry breaking due to large radiative corrections from the Yukawa couplings, thus relating the Z 0 boson mass to the top quark mass via the renormalization group equations (RGE). In addition, the cosmological constraints on the lifetime of the universe are considered in the fits. The new precise measurements of the strong coupling constant and the top mass as well as higher order calculations of the b → sγ rate exclude perfect fits in the CMSSM, although the discrepancies from the best fit parameters are below the 2σ level.
In this analysis the coupling constants were taken from the fits described in the first section. The new higher order calculations for the important b → sγ rate indicate that next to leading log (QCD) corrections increase the SM value by about 10% [34]. This can be simulated in the lowest level calculation by choosing a renormalization scale µ = 0.65m b , which will be done in the following. Here we repeat an update of a previous analysis [35] with the new input values mentioned above. The input data and fitted parameters have been summarized in table 1.
Constraints from Gauge Coupling Unification
The most restrictive constraints are the coupling constant unification and the requirement that the unification scale has to be above 10 15 GeV from the proton lifetime limits, assuming decay via s-channel exchange of heavy gauge bosons. They exclude the SM [2] as well as many other models [36,37].
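The unification test itself is essentially a one-loop exercise: evolve the three inverse couplings upward from M_Z with the SM and MSSM beta coefficients and compare. The sketch below uses the input values quoted in the text and the standard one-loop coefficients; for brevity it ignores thresholds (e.g. running with SM coefficients up to the SUSY scale first), so it is only a qualitative check.

import numpy as np

alpha_em, s2w, alpha_s = 1/128.9, 0.2316, 0.120
a1 = (5/3) * alpha_em / (1 - s2w)    # GUT-normalized U(1) coupling
a2 = alpha_em / s2w
inv0 = np.array([1/a1, 1/a2, 1/alpha_s])

b_sm   = np.array([41/10, -19/6, -7])   # one-loop SM coefficients
b_mssm = np.array([33/5, 1.0, -3.0])    # one-loop MSSM coefficients

def run(inv_alpha, b, t):            # t = ln(Q / M_Z)
    return inv_alpha - b * t / (2 * np.pi)

t = np.log(2e16 / 91.19)             # evolve up to ~2*10^16 GeV
print(run(inv0, b_sm, t))            # SM: the three values stay apart
print(run(inv0, b_mssm, t))          # MSSM: nearly equal, ~1/24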
Constraints from the top mass
In the MSSM the top mass is given by:
m²_t = 4π Y_t v² tan²β / (1 + tan²β).   (13)
input data          (min. χ²)   fit parameters
α_1, α_2, α_3         ⇒         M_GUT, α_GUT
m_t                   ⇒         Y⁰_t
m_b, m_τ              ⇒         Y⁰_b = Y⁰_τ
M_Z                   ⇒         m_0, m_1/2, µ, tan β
b → sγ                ⇒         A_0
τ_universe            ⇒         (relic density constraint)

Table 1: Summary of input data and fit parameters for the global fit from ref. [35]. All parameters were fitted simultaneously in order to take care of the correlations, but the GUT scale M_GUT and the corresponding coupling constant α_GUT are mainly determined from gauge coupling unification, tan β and the Yukawa couplings Y⁰_(t,b,τ) at the GUT scale from the masses of the third generation, and µ from electroweak symmetry breaking (EWSB). For the low tan β scenario the trilinear coupling A_0 is not very relevant, but for large tan β it is determined by b → sγ and b-τ unification. The scalar and gaugino masses (m_0, m_1/2) enter in all observables.

The top Yukawa coupling Y_t is given by the RGE, which shows a fixed-point behaviour, i.e. its low energy value is independent of its value at the GUT scale [4] and is determined only by the known gauge couplings. Since the VEV of the Higgs field, v = 174 GeV, is known from the Z⁰ mass, all parameters except tan β are known, so the MSSM predicts the top (pole) mass to be:
m²_t ≈ (205 GeV)² sin²β.   (14)
The measured top mass of 175 ± 6 GeV then fixes sin β and leaves two solutions for tan β: a low one around tan β ≈ 1.6 and a high one, for which the bottom Yukawa coupling becomes comparable to the top Yukawa coupling.
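Inverting eq. (14) for the measured top mass indeed reproduces the low tan β solution used throughout the text:

import numpy as np

mt = 175.0
sinb = mt / 205.0
tanb = sinb / np.sqrt(1 - sinb**2)
print(tanb)   # ~1.64, the low tan(beta) solution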
Electroweak Symmetry Breaking (EWSB)
Radiative corrections can trigger spontaneous symmetry breaking in the electroweak sector. In this case the Higgs potential does not have its minimum for all fields equal zero, but the minimum is obtained for non-zero vacuum expectation values of the fields. Minimization of the Higgs potential yields:
M²_Z / 2 = [m²_1 + Σ_1 − (m²_2 + Σ_2) tan²β] / (tan²β − 1),   (15)
where m_1,2 are the mass terms in the Higgs potential and Σ_1 and Σ_2 their radiative corrections. Note that the radiative corrections are needed, since unification at the GUT scale with m_1 = m_2 would lead to M²_Z < 0. In order to obtain M²_Z > 0 one needs m²_2 + Σ_2 < m²_1 + Σ_1, which happens at low energy since Σ_2 (Σ_1) contains large negative corrections proportional to Y_t (Y_b) and Y_t ≫ Y_b. Electroweak symmetry breaking for the large tan β scenario is not so easy, since eq. 15 can be rewritten as:
tan²β = (m²_1 + Σ_1 + ½ M²_Z) / (m²_2 + Σ_2 + ½ M²_Z).   (16)
For large tan β, Y_t ≈ Y_b, so Σ_1 ≈ Σ_2 (see fig. 6). Eq. 16 then requires the starting values of m_1 and m_2 to be different in order to obtain a large value of tan β, which could happen if the symmetry group above the GUT scale has a larger rank than the SM, like e.g. SO(10) [38]. In this case the quartic D-terms generated by the breaking of the larger gauge group can split m_1 and m_2 already at the GUT scale. Alternatively, one has to assume the simplest GUT group SU(5), which has the same rank as the SM, so no additional groups are needed to break SU(5) and consequently no D-terms are generated. In this case EWSB can only be generated if Y_b is sufficiently below Y_t, in which case the different running of m_1 and m_2 is sufficient to generate EWSB. The resulting SUSY mass spectrum is not very sensitive to the two alternatives for obtaining m²_1 + Σ_1 > m²_2 + Σ_2: either through a splitting between m_1 and m_2 already at the GUT scale via D-terms, or by generating a difference via the radiative corrections.
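A small numerical look at eq. (15) illustrates the role of the radiative corrections: with degenerate Higgs mass terms and Σ_1 = Σ_2 = 0 the right-hand side is negative, while a large negative Σ_2, of the kind generated by the top Yukawa coupling, turns it positive. All numbers are illustrative assumptions, not fit values.

def MZ2(m1sq, m2sq, s1, s2, tanb):
    t2 = tanb**2
    return 2 * (m1sq + s1 - (m2sq + s2) * t2) / (t2 - 1)

m0sq = 200.0**2                              # common Higgs mass term at the GUT scale
print(MZ2(m0sq, m0sq, 0.0, 0.0, 1.6))        # negative: no symmetry breaking
print(MZ2(m0sq, m0sq, 0.0, -45000.0, 1.6))   # positive once Sigma_2 < 0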
Discussion of the remaining constraints
In fig. 7 the total χ² distribution is shown as a function of m_0 and m_1/2 for the two values of tan β determined above. One observes minima at (m_0, m_1/2) around (200, 270) GeV and (800, 90) GeV, as indicated by the stars. These curves were still produced with the data from last year. With the new coupling constants one finds slightly different minima, as given in table 2. In this case the minimum χ² is not as good, since the fit wants α_s ≈ 0.125, i.e. about 1.6σ above the measured LEP value, and the calculated b → sγ rate is above the experimental value too, if one takes as renormalization scale µ ≈ 0.65 m_b. At this scale the next higher order corrections, as calculated by [34], are minimal. The contours in fig. 7 show the regions excluded by the different constraints used in the analysis:
LSP Constraint: The requirement that the LSP is neutral excludes the regions with small m_0 and relatively large m_1/2, since in this case one of the scalar staus becomes the LSP after mixing via the off-diagonal elements in the mass matrix. The LSP constraint is especially effective in the high tan β region, since the off-diagonal element in the stau mass matrix is proportional to A_τ m_0 − µ tan β.
b → sγ Rate: At low tan β the b → sγ rate is close to its SM value for most of the plane. The charginos and/or the charged Higgses are only light enough at small values of m_0 and m_1/2 to contribute significantly. The trilinear couplings were found to play a negligible role for low tan β. However, for large tan β the trilinear coupling needs to be left free, since it is difficult to fit simultaneously b → sγ, m_b and m_τ. The reason is that the corrections to m_b are large for large values of tan β, due to the large contributions from g̃-q̃ and χ̃±-t̃ loops proportional to µ tan β; they become of the order of 10-20%. In order to obtain m_b(M_Z) as low as 2.84 GeV, these corrections have to be negative, thus requiring µ to be negative. The b → sγ rate is too large in most of the parameter region for large tan β, because of the dominant chargino contribution, which is proportional to A_t µ. For positive (negative) values of A_t µ this leads to a larger (smaller) branching ratio BR(b → sγ) than for the Standard Model with two Higgs doublets. In order to reduce this rate one needs A_t(M_Z) > 0 for µ < 0. Since for large tan β A_t does not show a fixed-point behaviour, this is possible.
Relic Density: The long lifetime of the universe requires a mass density below the critical density, else the overclosed universe would have collapsed long ago. This requires that the contribution from the LSP to the relic density be below the critical density, which can be achieved if the annihilation rate is high enough. Annihilation into electron-positron pairs proceeds either through t-channel selectron exchange or through s-channel Z⁰ exchange with a strength given by the Higgsino component of the lightest neutralino. For the low tan β scenario the value of µ from EWSB is large [35]. In this case there is little mixing between the higgsino- and gaugino-type neutralinos, as is apparent from the neutralino mass matrix: for |µ| ≫ M_1 ≈ 0.4 m_1/2 the mass of the LSP is simply 0.4 m_1/2 and the "bino" purity is 99% [35]. For the high tan β scenario µ is much smaller and the Higgsino admixture becomes larger. This leads to an enhancement of χ̃⁰χ̃⁰ annihilation via s-channel Z boson exchange, thus reducing the relic density. As a result, in the large tan β case the constraint Ωh₀² < 1 is almost always satisfied, unlike in the case of low tan β.
Discovery Potential at LEP II
All LEP experiments have been searching for the sparticles and Higgs bosons predicted by the MSSM. Table 2 shows that charginos, neutralinos and the lightest Higgs belong to the lightest particles in the MSSM, so we will concentrate on these searches and show only a few typical results for each experiment, keeping in mind that the other experiments usually have similar results in the same channel.
Charginos are expected to be easy to discover, since they will be pair-produced with a large cross section of several pb and lead to events with characteristic decays similar to W± pairs, plus missing energy. The typical limits are close to the beam-energy limit, as shown in Fig. 8 by recent results from DELPHI [39]. Since the chargino mass depends on the SUSY parameters µ, M_2 and tan β, these limits can be shown as contours in the µ-M_2 plane for a given value of tan β, as shown in Fig. 9 for LEP data at 161 and 172 GeV centre-of-mass energies from OPAL [40]. If one assumes the GUT relation M_2 ≈ 2M_1 (cf. eqs. 9 and 10), the neutralino limits are related to the chargino limits. Combining these with direct neutralino searches, both at LEP I and LEP II, L3 finds a lower limit on the neutralino mass of 24.6 GeV [41], as shown in Fig. 10.

The Higgs mass is a function of the pseudoscalar Higgs mass m_A, tan β and the top mass via the radiative corrections. Higgs bosons can be produced through Higgs-strahlung, e⁺e⁻ → hZ, and associated production, e⁺e⁻ → hA. The first cross section is proportional to sin²(β − α), the second to cos²(β − α), so the total cross section is independent of the mixing angle β − α. If one searches for both processes one can find a Higgs limit independent of tan β, as shown for the ALEPH data in fig. 11 (from ref. [42]).
The Higgs mass depends on the top mass as shown in fig. 12. Here the most significant second order corrections to the Higgs mass have been incorporated [43], which reduce the Higgs mass by about 15 GeV [44]. In this case the Higgs mass is below 90 GeV, provided the top mass is below 180 GeV (see fig. 12), which implies that the foreseen LEP energy of 192 GeV is sufficient to cover the whole parameter space.
Figure 8: Chargino limits from DELPHI for 4 different cases: for heavy sneutrinos with stable and unstable neutralinos the chargino mass is above 84.5 GeV at 95% C.L. (upper part); as indicated, the unstable neutralino is assumed to decay into a photon and gravitino. For light sneutrinos the negative interference between s- and t-channel reduces the cross section, thus leading to worse limits as shown in the bottom part. It is assumed that the lightest chargino is non-degenerate with the LSP. In case the lightest chargino is Higgsino-like, implying µ < M_2 (see eq. 7), the chargino can be degenerate with the LSP, but the t-channel sneutrino is suppressed in this case. If the degeneracy is less than 5 GeV, limits rather close to the kinematic limit are obtained. From [39].
Summary
In summary, in the Constrained Minimal Supersymmetric Model (CMSSM) the allowed region of the GUT scale parameters and the corresponding SUSY mass spectra for the low and high tan β scenario have been determined from a combined fit to the low energy data on couplings, quark and lepton masses of the third generation, the electroweak scale M_Z, b → sγ, and the lifetime of the universe. The new precise determinations of the strong coupling constant, α_s = 0.120 ± 0.003, are slightly below the preferred CMSSM fit value of about 0.125. In addition, the observed b → sγ value of (2.32 ± 0.6) · 10⁻⁴ is below the predicted value, at least for the SM (3.2 · 10⁻⁴) and the low tan β scenario of the MSSM.
The lightest particles preferred by these fits are charginos and Higgses. The charginos are preferably light in case of the high tan β scenario, while the lightest Higgs will be within reach of LEP II in case of the low tan β scenario (see fig. 6). So the low tan β scenario of the CMSSM can be confirmed or excluded at LEP II (provided the top mass is indeed below 180 GeV), while the complete parameter space for the high tan β scenario will only become accessible at future accelerators.
It should be noted that recent speculation about evidence for SUSY from the eeγγ event observed by the CDF collaboration [45], the too high value of R_b [13,24] and the ALEPH 4-jet events [46] has not been confirmed so far:
• if the single CDF eeγγ + E_miss event would originate from selectron pair production with the two gammas coming from neutralino decay into either the LSP or gravitino, one would expect anomalous inclusive p̄p → γγ + E_miss + X production, which has not been observed [47].
• The R_b anomaly is reduced to "a-less-than-2σ-effect" [13].
• The anomalous ALEPH 4-jet events have not been confirmed by the other three LEP Collaborations [48].
The fit results are compared with the Standard Model fits in fig. 4. The Standard Model χ²/d.o.f. = 18.5/15 corresponds to a probability of 24%, the MSSM χ²/d.o.f. = 16.1/12 to a probability of 19%.
Figure 3: R_b as a function of the stop and chargino masses for tan β = 1.6. The experimental value R_b = 0.2178 ± 0.0011 is clearly above the SM value of 0.2158 and can be obtained for light charginos and stops.
Figure 4: Fit results normalized to the SM and MSSM (tan β = 1.6) values. The difference in χ²/d.o.f. between the SM and MSSM originates mainly from the W-masses. The vertical lines indicate the predictions from the LEP I and SLD electroweak data determining sin²Θ^lept_eff and the data points represent the various direct measurements of M_W.
The maximum possible top mass is around 205 GeV and a top mass of 175 GeV corresponds to tan β ≈ 1.5, as shown in the top part of fig. 6. For large values of tan β the bottom and τ Yukawa couplings become large too (see middle part of fig. 6) and the top Yukawa coupling cannot be predicted from the gauge couplings alone. However, if one assumes b-τ unification (Y_b = Y_τ at the GUT scale), one finds a second large tan β solution, as shown in the top part of fig. 6 too, so for m_t = 175 ± 6 GeV only two regions of tan β give an acceptable χ², as shown in the bottom part of fig. 6.
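A hedged sketch of the infrared fixed point behaviour invoked here (one-loop MSSM-like running with only the top Yukawa and the strong coupling kept; the beta-function coefficients are textbook one-loop values, and the starting values are assumptions rather than the ones used in the fit): widely different GUT-scale Yukawa couplings flow to nearly the same weak-scale value, which is why m_t effectively fixes tan β at low tan β.

```python
# Sketch: one-loop running of Y_t with only QCD kept,
# dY_t/dt = Y_t/(16 pi^2) * (6 Y_t^2 - 16/3 g3^2), t = ln(Q),
# dg3/dt = -3 g3^3 / (16 pi^2)  (MSSM one-loop b3 = -3). Illustrative only.
import numpy as np

def run_down(Yt_gut, g3_gut=0.7, t_span=np.log(2e16 / 91.0), steps=20000):
    Yt, g3 = Yt_gut, g3_gut
    dt = -t_span / steps                    # integrate from the GUT scale down
    for _ in range(steps):
        Yt += dt * Yt / (16 * np.pi**2) * (6 * Yt**2 - (16.0/3.0) * g3**2)
        g3 += dt * (-3.0) * g3**3 / (16 * np.pi**2)
    return Yt

for Yt0 in (0.5, 1.0, 2.0, 3.0):
    print(f"Y_t(GUT) = {Yt0:3.1f} -> Y_t(M_Z) = {run_down(Yt0):.3f}")
# The weak-scale values cluster: the infrared fixed point erases the
# GUT-scale spread of the top Yukawa coupling.
```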
Figure 6: The top quark mass as a function of tan β (top). The middle part shows the corresponding values of the Yukawa couplings at the GUT scale and the lower part the χ² values. For tan β < 20 the Yukawa coupling of the b-quark Y_b is small compared to Y_t, in which case the top quark mass is given by the infrared fixed point solution of Y_t. For large values of tan β, Y_t is reduced by the negative corrections from Y_b and Y_τ, which were assumed to have common values at the GUT scale (b-τ unification). If the top constraint (m_t = 175 ± 6 GeV, horizontal band) is not applied, all values of tan β are allowed (thin dotted lines at the bottom), but if the top mass is constrained to the experimental value, only the regions around tan β ≈ 1.7 and tan β ≈ 35 are allowed.
Figure 7: Contours of the χ² distribution for the low and high tan β solutions in the m_0 versus m_{1/2} plane. The different shades indicate steps of ∆χ² = 1, so basically only the light shaded region is allowed. The stars indicate the optimum solution. Contours enclose domains excluded by the particular constraints used in the analysis.
Figure 9: M_2 versus µ from OPAL for LEP data from √s = 161 and 172 GeV. Note that for chargino/neutralino searches the reach in parameter space increases only linearly with energy, in contrast to the Higgs searches. From [40].
Figure 10: The mass of the lightest stable neutralino, assumed to be the LSP, is related to the lightest chargino mass via the RGE equations, which connect common gaugino masses at the GUT scale to the electroweak scale. Combining the search limits of chargino and neutralino pair production leads to a lower mass limit of 24.6 GeV for the invisible LSP for all values of tan β, provided the lightest sneutrino is heavy. For light sneutrinos the negative t- and s-channel interference reduces the chargino cross section, thus reducing the LSP limit. From [41].
Figure 11: tan β versus the Higgs mass from ALEPH [42]. The dashed area is excluded by the search for the hZ and hA final states, which require both m_h and m_A to be above 62.5 GeV at 95% C.L. The dark regions are excluded in case of large mixing in the stop sector, the solid line in case of no mixing. In the constrained MSSM the mixing is usually small, so for small tan β the combined data from all LEP experiments will exclude the low tan β scenario. The region for 2 < tan β < 40 is excluded from the solution of the RGE for the top Yukawa coupling, as shown in fig. 6.
Figure 12: The mass of the lightest CP-even Higgs as a function of the top mass at Born level (dotted lines), including complete one-loop contributions of all particles (dashed lines). Two-loop contributions reduce the one-loop corrections significantly, as shown by the dashed area (the upper boundary corresponds to µ > 0, the lower one to µ < 0). The solid line just below the dashed line is the one-loop prediction from the third generation only, which apparently gives the main contribution.
Table 2: Values of the fitted SUSY parameters (upper part) and corresponding SUSY masses (lower part) for low and high tan β solutions using the new input data discussed in the text.
interaction (D-) terms in the Higgs potential can generate quadratic mass terms, if the Higgs fields develop non-zero VEVs after spontaneous symmetry breaking.
Acknowledgements I want to thank my colleagues from the LEP groups for helpful discussions and/or making available data prior to publication, especially Glen Cowan, Ralf Ehret, Dmitri Kazakov, Michael Kobel, Sachio Komamiya, Marco Pieri, Silvie Rosier, Michael Schmitt, and Ulrich Schwickerath.
J. Ellis, S. Kelley and D. V. Nanopoulos, Phys. Lett. B260 (1991) 131.
U. Amaldi, W. de Boer and H. Fürstenau, Phys. Lett. B260 (1991) 447.
P. Langacker and M. Luo, Phys. Rev. D44 (1991) 817.
For a review and original references see H. P. Nilles, Phys. Rep. 110 (1984) 1;
H. E. Haber and G. L. Kane, Phys. Rep. 117 (1985) 75;
R. Barbieri, Riv. Nuovo Cim. 11 (1988) 1;
W. de Boer, Prog. in Nucl. and Part. Phys. 33 (1994) 201.
G. G. Ross and R. G. Roberts, Nucl. Phys. B377 (1992) 571.
M. Carena, S. Pokorski and C. E. M. Wagner, Nucl. Phys. B406 (1993) 59;
M. Olechowski and S. Pokorski, Nucl. Phys. B404 (1993) 590; and private communication.
F. Anselmo, L. Cifarelli, A. Peterman and A. Zichichi, Il Nuovo Cimento 105 (1992) 1179, and references therein.
S. Kelley, J. L. Lopez, D. V. Nanopoulos, H. Pois and K. Yuan, Phys. Rev. D47 (1993) 2468.
W. de Boer, R. Ehret and D. Kazakov, Z. Phys. C67 (1995) 647.
L. E. Ibáñez and G. G. Ross, CERN-TH-6412-92 (1992), appeared in Perspectives on Higgs Physics, ed. G. Kane, p. 229, and references therein.
R. Arnowitt and P. Nath, Phys. Rev. Lett. 69 (1992) 725;
Phys. Lett. B287 (1992) 89;
for a review, see P. Langacker, Univ. of Penn. Preprint UPR-0539-T.
G. L. Kane, C. Kolda, L. Roszkowski and J. D. Wells, Phys. Rev. D49 (1994) 6173.
The LEP Coll., CERN Preprint CERN-PPE/96-183.
CLEO Collaboration, R. Ammar et al., Phys. Rev. Lett. 74 (1995) 2885.
Proceedings of the Workshop Physics at LEP2, Editors G. Altarelli, T. Sjöstrand and F. Zwirner, Vol. 1, CERN 96-01.
M. Boulware and D. Finnell, Phys. Rev. D44 (1991) 2054.
P. H. Chankowski and S. Pokorski, Nucl. Phys. B475 (1996) 3.
J. Ellis, J. L. Lopez and D. V. Nanopoulos, Phys. Lett. B372 (1996) 95.
D. Garcia and J. Sola, Phys. Lett. B354 (1995) 335.
G. L. Kane, R. G. Stuart and J. D. Wells, Phys. Lett. B354 (1995) 350.
J. D. Wells, C. Kolda and G. L. Kane, Phys. Lett. B338 (1993) 219.
D. Garcia, R. Jimenez and J. Sola, Phys. Lett. B347 (1995) 309;
Phys. Lett. B347 (1995) 321.
D. Garcia and J. Sola, Phys. Lett. B357 (1995) 349.
W. de Boer, A. Dabelstein, W. Hollik, W. Mösle and U. Schwickerath, Updated Global Fits of the SM and MSSM to Electroweak Precision Data, hep-ph/9609209, and references therein.
F. Abe et al., CDF Collaboration, Phys. Rev. Lett. 74 (1995) 2626; S. Abachi et al., DØ Collaboration, Phys. Rev. Lett. 74 (1995) 2632. An updated top mass (m_t = 175 ± 6 GeV/c²) from the combined CDF and D0 data was given by P. Tipton, invited talk at the 28th Int. Conf. on High Energy Physics, Warsaw, July 1996.
S. Eidelman and F. Jegerlehner, Z. Phys. C67 (1995) 585;
H. Burkhardt and B. Pietrzyk, Phys. Lett. B356 (1995) 398.
R. M. Barnett et al., Phys. Rev. D54 (1996) 1.
D. Buskulic et al., ALEPH Coll., Phys. Lett. B313 (1993) 312.
The Electroweak Working Group for the LEP Coll., M_W determinations presented at Moriond, Note LEPWWG 97-01, April 1997;
M. Schmelling, plenary talk at the 28th Int. Conf. on High Energy Physics, Warsaw, July 1996.
L. Girardello and M. T. Grisaru, Nucl. Phys. B194 (1984) 419.
K. Inoue, A. Kakuto, H. Komatsu and S. Takeshita, Prog. Theor. Phys. 68 (1982) 927; Err. ibid. 70 (1983) 330;
L. E. Ibáñez and C. Lopéz, Phys. Lett. 126B (1983) 54; Nucl. Phys. B233 (1984) 511;
L. Alvarez-Gaumé, J. Polchinsky and M. Wise, Nucl. Phys. B221 (1983) 495;
J. Ellis, J. S. Hagelin, D. V. Nanopoulos and K. Tamvakis, Phys. Lett. 125B (1983) 275;
G. Gamberini, G. Ridolfi and F. Zwirner, Nucl. Phys. B331 (1990) 331.
W. de Boer et al., Combined Fit of Low Energy Constraints to Minimal Supersymmetry and Discovery Potential at LEP II, hep-ph/9603350; W. de Boer et al., Predictions of SUSY masses in the minimal supersymmetric grand unified theory, Z. Phys. C67 (1995) 647-664.
C. Greub and T. Hurth, SLAC-PUB-7267, ITP-SB-96-46, hep-ph/9608449;
M. Misiak, talk given at the 28th Int. Conf. on High Energy Physics, Warsaw, July 1996.
W. de Boer et al., Z. Phys. C71 (1996) 415.
U. Amaldi, W. de Boer, P. H. Frampton, H. Fürstenau and J. T. Liu, Phys. Lett. B281 (1992) 374.
H. Murayama and T. Yanagida, Preprint Tohoku University TU-370 (1991);
T. G. Rizzo, Phys. Rev. D45 (1992) 3903;
T. Moroi, H. Murayama and T. Yanagida, Preprint Tohoku University TU-438 (1993).
H. Murayama, M. Olechowski and S. Pokorski, Phys. Lett. B371 (1996) 57, and ref. therein.
F. Richard for the DELPHI Coll., presented at the CERN seminar on Feb. 25th, 1997.
S. Komamiya for the OPAL Coll., presented at the CERN seminar on Feb. 25th, 1997.
M. Pieri for the L3 Coll., presented at the CERN seminar on Feb. 25th, 1997.
G. Cowan for the ALEPH Coll., presented at the CERN seminar on Feb. 25th, 1997.
M. Carena et al., hep-ph/9602250;
P. Chankowski, S. Pokorski and J. Rosiek, Phys. Lett. B281 (1992) 100;
M. Carena, J. R. Espinosa, M. Quiros and C. E. M. Wagner, CERN Preprint CERN-TH/95-45;
M. Carena, M. Quiros and C. E. M. Wagner, CERN Preprint CERN-TH/95-157; R. Hempfling and A. Hoang, Phys. Lett. B331 (1994) 99.
A. V. Gladyshev et al., hep-ph/9603346, and references therein.
S. Dimopoulos et al., Phys. Rev. D54 (1996) 3283;
S. Ambrosiano et al., Phys. Rev. D54 (1996) 5395;
Phys. Rev. D55 (1997) 1372;
J. L. Lopez and D. V. Nanopoulos, Phys. Rev. D55 (1997) 4450; Mod. Phys. Lett. A10 (1996) 2473.
ALEPH Coll., D. Buskulic et al., Z. Phys. C71 (1996) 179;
P. H. Chankowski, D. Choudhury and S. Pokorski, Phys. Lett. B389 (1996) 677;
D. Kumar and R. M. Godbole, hep-ph/9605460.
B. Carithers for the CDF Coll., invited talk at the DPG-Tagung, March 20, 1997, Munich.
D. Schlatter, Conclusions from the LEP WG on ALEPH 4-jet events, presented at the CERN Seminar on Feb. 25th, 1997.
| [] |
[
"Ground State Properties of One Dimensional S = 1/2 Heisenberg Model with Dimerization and Quadrumerization",
"Ground State Properties of One Dimensional S = 1/2 Heisenberg Model with Dimerization and Quadrumerization"
] | [
"Wei Chen *[email protected]**[email protected] \nDepartment of Physics\nSaitama University\n338-0825UrawaSaitama\n",
"Kazuo Hida \nDepartment of Physics\nSaitama University\n338-0825UrawaSaitama\n"
] | [
"Department of Physics\nSaitama University\n338-0825UrawaSaitama",
"Department of Physics\nSaitama University\n338-0825UrawaSaitama"
] | [] | The one dimensional S = 1/2 Heisenberg model with dimerization and quadrumerization is studied by means of the numerical exact diagonalization of finite size systems. Using the phenomenological renormalization group and finite size scaling law, the ground state phase diagram is obtained in the isotropic case. It exhibits a variety of the ground states which contains the S = 1 Haldane state, S = 1 dimer state and S = 1/2 dimer state as limiting cases. The gap exponent ν is also calculated which coincides with the value for the dimerization transition of the isotropic Heisenberg chain. In the XY limit, the phase diagram is obtained analytically and the comparison is made with the isotropic case. | 10.1143/jpsj.67.2910 | [
"https://export.arxiv.org/pdf/cond-mat/9804149v1.pdf"
] | 119,072,374 | cond-mat/9804149 | 0d38af6b7d07aaa56ef83cfb8560e89ed8dec844 |
Ground State Properties of One Dimensional S = 1/2 Heisenberg Model with Dimerization and Quadrumerization
15 Apr 1998
Wei Chen *[email protected]**[email protected]
Department of Physics
Saitama University
338-0825UrawaSaitama
Kazuo Hida
Department of Physics
Saitama University
338-0825UrawaSaitama
Ground State Properties of One Dimensional S = 1/2 Heisenberg Model with Dimerization and Quadrumerization
15 Apr 1998 (Received March 24, 2022). Typeset using JPSJ.sty <ver.1.0b>. Keywords: Heisenberg chain, dimerization, quadrumerization, spin gap, exact diagonalization, phenomenological renormalization group, ground state phase diagram, dimer phase, Haldane phase.
The one dimensional S = 1/2 Heisenberg model with dimerization and quadrumerization is studied by means of the numerical exact diagonalization of finite size systems. Using the phenomenological renormalization group and finite size scaling law, the ground state phase diagram is obtained in the isotropic case. It exhibits a variety of the ground states which contains the S = 1 Haldane state, S = 1 dimer state and S = 1/2 dimer state as limiting cases. The gap exponent ν is also calculated which coincides with the value for the dimerization transition of the isotropic Heisenberg chain. In the XY limit, the phase diagram is obtained analytically and the comparison is made with the isotropic case.
§1. Introduction
Recently, the S = 1/2 antiferromagnetic Heisenberg chains (AFHC) with modulated spatial structures have attracted a great deal of attention. Although the uniform S = 1/2 AFHC can be solved exactly using the Bethe Ansatz method and is known to have a gapless ground state, 1) this state is unstable against the dimerization leading to the spin-Peierls state which has the spin gap. 2,3,4,5) Other kinds of spatial modulation of the exchange coupling arising from the underlying lattice structures can also induce various kinds of spin gap phases.
On the other hand, no spin-Peierls instability is expected in the S = 1 AFHC due to the presence of the Haldane gap. 6) Instead, the Haldane-dimer phase transition takes place at finite strength of dimerization. 7,8) One of the authors (KH) has pointed out the connection between the dimer state of the S = 1/2 dimerized Heisenberg chain and the Haldane state of the S = 1 AFHC. 9) In this context, the dimerization in the S = 1 chain corresponds to the quadrumerization in the dimerized S = 1/2 chain. From this point of view, in the present work, we aim to get deeper insight into the nature of the Haldane-dimer transition in the S = 1 AFHC from the investigation of the ground state properties of the S = 1/2 Heisenberg chain with dimerization and quadrumerization.
On the other hand, in the neighbourhood of the S = 1/2 uniform Heisenberg point, both dimerization and quadrumerization are expected to produce the energy gap. It is possible, however, that the coexistence of these two periodicities does not always enhance the gap additively but may rather reduce the gap due to the competition between them, leading to a gapless state in spite of the spatial nonuniformity. We also intend to investigate such competition of two periodicities in this model. This paper is organized as follows. In the next section, the model Hamiltonian is defined. The numerical results are presented in §3 for the isotropic case. From the phenomenological renormalization group and finite size scaling analysis of the numerically calculated energy gaps, the ground state phase diagram is obtained in §3. The critical exponent ν of the energy gap is also estimated taking the logarithmic corrections into account. The numerical results are compared with the perturbation theory. In §4, the ground state phase diagram is obtained analytically for the XY case using the Jordan-Wigner transformation. The comparison is made with the isotropic case. The last section is devoted to summary and discussion.
§2. Model Hamiltonian
The Hamiltonian of the one dimensional dimerized and quadrumerized S = 1/2 Heisenberg chain is given by
$$H = \sum_{l=1}^{2N} j\left(S^x_{2l-1}S^x_{2l} + S^y_{2l-1}S^y_{2l} + \Delta S^z_{2l-1}S^z_{2l}\right) + \sum_{l=1}^{2N-1}\left(1+(-1)^{l-1}\delta\right)\left(S^x_{2l}S^x_{2l+1} + S^y_{2l}S^y_{2l+1} + \Delta S^z_{2l}S^z_{2l+1}\right) \qquad (2.1)$$
where 1 − j (−∞ ≤ j ≤ ∞) and δ (−1 ≤ δ ≤ 1) represent the degree of dimerization and quadrumerization, respectively. The open boundary condition is assumed. The anisotropy parameter is denoted by ∆. In the present work, we concentrate ourselves on the cases ∆ = 1 (isotropic case) and 0 (XY case). We also take δ ≥ 0 without loss of generality. Let us discuss the ground state of this model in some limiting cases for ∆ = 1. First, our model tends to the S = 1 AFHC for large negative j. Therefore the ground state is the VBS-like Haldane phase or the S = 1 dimer phase according as δ < δ_c or δ > δ_c, where δ_c is the critical value which depends on j. For j → −∞, δ_c tends to 0.25, 7,8) which is the value for the S = 1 AFHC. In terms of the spin-1/2 language, these phases can also be described as follows. For small δ (Haldane phase), the spin pairs connected by the 1 + δ-bonds and 1 − δ-bonds form local singlet pairs as schematically shown in Fig. 1(a). For large δ (S = 1 dimer phase), the spins connected by 1 + δ-bonds (S_{4l+2} and S_{4l+3}) form singlet pairs strongly. Mediated by the fluctuation within these singlet pairs, an effective antiferromagnetic coupling is induced between the spins S_{4l+1} and S_{4l+4}, which leads to the 4-spin local singlets as shown in Fig. 1(b). It should be noted that the above picture in spin-1/2 language remains valid even for positive j as far as j < 1.
Secondly, we discuss the neighbourhood of δ = 0 and j ≃ 1. For δ = 0, the singlet pairs reside on the j-bonds or the 1 ± δ bonds according as j > 1 or < 1, corresponding to the S = 1/2 dimerized state with opposite parity. For j < 1, the dimer configuration is the same as in Fig. 1(a), while it is shown in Fig. 1(c) for j > 1.
§3. Numerical Results for ∆ = 1
The Hamiltonian is numerically diagonalized to calculate the energy gap G(N, j, δ) in the open chain for 4N = 12, 16, 20 and 24 using the Lanczos algorithm. The open chain is used so that the critical point can be easily discerned. For concreteness, let us take the leftmost and rightmost bonds as j-bonds. According to the spin structure discussed in §2, for small values of δ there remain two S = 1/2 residual spins at the two ends of the chain, leading to the 4-fold quasi-degeneracy as far as j < 1. In this case, the energy gap G(N, j, δ) decreases exponentially with N. On the other hand, for large δ there remain no residual spins. Therefore the energy gap G(N, j, δ) remains finite in the thermodynamic limit. Thus the product N G(N, j, δ) decreases (increases) with N for δ < δ_c (δ > δ_c). Thanks to this situation, we can accurately determine the critical point using the phenomenological renormalization group method.
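As a hedged illustration of this step (plain Python with scipy, not the authors' TITPACK code; the couplings and chain length are placeholder values), the sketch below builds the S = 1/2 Hamiltonian of Eq. (2.1) as a sparse matrix for a small open chain and gets the two lowest levels with a Lanczos-type solver:

```python
# Sketch: sparse S=1/2 chain with open-end bond pattern j, 1+d, j, 1-d, ...,
# two lowest levels via Lanczos (scipy eigsh). Illustrative parameters only.
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import eigsh

def gap(nsites=12, j=0.9, d=0.5, Delta=1.0):
    # bond i connects sites i, i+1; pattern j, (1+d), j, (1-d), j, (1+d), ...
    bonds = [j if i % 2 == 0 else (1 + d if (i // 2) % 2 == 0 else 1 - d)
             for i in range(nsites - 1)]
    dim = 2 ** nsites
    H = lil_matrix((dim, dim))
    for s in range(dim):                       # basis state = bit string
        for i, c in enumerate(bonds):
            szi = 0.5 if (s >> i) & 1 else -0.5
            szj = 0.5 if (s >> (i + 1)) & 1 else -0.5
            H[s, s] += c * Delta * szi * szj            # S^z S^z term
            if ((s >> i) & 1) != ((s >> (i + 1)) & 1):  # spin-flip term, weight c/2
                H[s ^ (1 << i) ^ (1 << (i + 1)), s] += c * 0.5
    e = eigsh(H.tocsr(), k=2, which='SA', return_eigenvectors=False)
    return abs(e[1] - e[0])

print(gap())
```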
If we search for the critical point varying δ with fixed j, the phenomenological renormalization group equation for the finite size critical point δ_c(N_1, N_2) reads 10)

$$N_1\, G(N_1, j, \delta_c(N_1,N_2)) = N_2\, G(N_2, j, \delta_c(N_1,N_2)). \qquad (3.1)$$

These values are extrapolated to N → ∞ assuming that the finite size correction to the critical value is proportional to $\left(\frac{N_1+N_2}{2}\right)^{-1/\nu}$, where ν is the critical exponent of the correlation length or, equivalently, the energy gap. This assumption is based on the finite size scaling hypothesis. 10) At j = 1 and δ = 0, the exponent ν is known to be 2/3 with logarithmic corrections to the pure power-law behavior. 2,3,5) Assuming that ν remains constant over the whole phase boundary, we take ν = 2/3 in the extrapolation of the critical point. This assumption is plausible because the system remains SU(2) symmetric over the whole phase boundary.
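A minimal sketch of solving Eq. (3.1) numerically (the gap function here is a stand-in toy scaling form, not the Lanczos data; δ_c = 0.35 is an arbitrary placeholder): for each pair (N_1, N_2) the finite size critical point is the root of N_1 G(N_1, δ) − N_2 G(N_2, δ).

```python
# Sketch: crossing of N*G(N, delta) curves defines the finite size critical
# point (eq. 3.1). Toy form N*G = exp(N^{1/nu} (delta - delta_c)): it
# decreases with N for delta < delta_c and increases for delta > delta_c,
# exactly as described in the text, so every pair crosses at delta_c.
import math
from scipy.optimize import brentq

DELTA_C, NU = 0.35, 2.0 / 3.0       # placeholder values for the toy model

def NG(N, delta):
    return math.exp(N ** (1.0 / NU) * (delta - DELTA_C))

def crossing(N1, N2):
    return brentq(lambda d: NG(N1, d) - NG(N2, d), 0.0, 1.0)

for pair in [(12, 16), (16, 20), (20, 24)]:
    print(pair, round(crossing(*pair), 6))   # 0.35 for each pair
```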
To check this assumption numerically, we estimate the value of the exponent ν from the numerical data using the following formula of Spronken et al., 5)

$$\nu(N_1,N_2) = \frac{\ln(N_2/N_1)}{\ln\left\{\left(\frac{\ln N_1}{\ln N_2}\right)^{1/2}\frac{N_2\, G'(N_2)}{N_1\, G'(N_1)}\right\}} \qquad (3.2)$$
which takes into account the logarithmic corrections due to the SU(2) symmetry of the isotropic Heisenberg model. Here G'(N) denotes the derivative of G(N, j, δ) with respect to j or δ at j = j_c(N_1, N_2) or δ = δ_c(N_1, N_2). Fig. 3 and Fig. 4 show examples of the extrapolation procedure of the critical point and critical exponent ν for δ = 0.5. The cross is the critical point j_c ≃ 0.889 and exponent ν ≃ 0.670 in the thermodynamic limit. The δ-dependence of ν is shown in Fig. 5. The horizontal line is ν = 2/3. It is verified that ν ≃ 2/3 over the whole phase boundary except for the neighbourhood of j = 0 and δ = 1, where the numerical accuracy becomes worse. In this region, however, the phase boundary and the critical exponent can be determined by perturbation theory with respect to j, as follows.
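A hedged numerical reading of Eq. (3.2) (again with a toy derivative in place of the diagonalization data; the (ln N)^{1/2} factor below is chosen so that the marginal correction the formula divides out is present in the toy):

```python
# Sketch: eq. (3.2) applied to a toy derivative N*G'(N) = N^{1/nu} (ln N)^{1/2};
# the (ln N1 / ln N2)^{1/2} factor in (3.2) removes this marginal SU(2)
# correction, so the estimate returns nu for every size pair.
import math

NU = 2.0 / 3.0

def NGprime(N):
    return N ** (1.0 / NU) * math.sqrt(math.log(N))

def nu_estimate(N1, N2):
    num = math.log(N2 / N1)
    den = math.log(math.sqrt(math.log(N1) / math.log(N2))
                   * NGprime(N2) / NGprime(N1))
    return num / den

for pair in [(12, 16), (16, 20), (20, 24)]:
    print(pair, nu_estimate(*pair))   # 0.6666... for each pair
```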
For small j, the spins connected by the 1 + δ-bonds form strong singlet pairs. Between the spins S_{4l+1} and S_{4l+4}, the effective coupling j_eff is generated, mediated by the quantum fluctuation in the singlet pairs on the 1 + δ-bonds. This is calculated as
$$j_{\rm eff} = \frac{1}{2}\,\frac{j^2}{1+\delta} \qquad (3.3)$$
up to the second order in j. When 1 − δ = j_eff, the effective Hamiltonian is the uniform spin-1/2 AFHC, which has a gapless ground state. Thus the phase boundary is given by
$$\frac{1}{2}\,j^2 + \delta^2 = 1, \qquad (3.4)$$
and the gap exponent ν = 2/3. The ground state phase diagram of the present model is shown in Fig. 6 by the solid line. The perturbation result is shown by the dashed line. The numerical result is consistent with the perturbation theory.
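As a small consistency check of this step (sympy; only Eqs. (3.3) and (3.4) enter), the algebra can be verified symbolically:

```python
# Sketch: the condition 1 - delta = j_eff with j_eff = j^2 / (2*(1 + delta))
# (eq. 3.3) is the same curve as eq. (3.4), j^2/2 + delta^2 = 1.
import sympy as sp

j, d = sp.symbols('j delta', real=True)
condition = sp.expand(2 * (1 - d) * (1 + d) - j**2)          # denominators cleared
boundary = sp.expand(-2 * (sp.Rational(1, 2) * j**2 + d**2 - 1))
print(sp.simplify(condition - boundary))   # 0: the two conditions coincide
```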
On the other hand, Takano 11) has derived the gapless line of the present model as δ = (1 − j)^{1/2} using the mapping onto the nonlinear σ-model. This is in qualitative agreement with our result, as shown in Fig. 6 by the dash-dotted line.
Each region of the phase diagram is illustrated as follows. When j is positive and close to unity, the phases inside and outside the solid line are the S = 1/2 dimer phases with different parity. The phase boundary is the gapless line where the energy gap disappears due to the competition of dimerization and quadrumerization. When j is negative, the ground state is the Haldane state inside the solid line and the S = 1 dimer state outside the solid line. The S = 1/2 dimer phase is thus connected with the Haldane phase continuously inside the solid line even in the presence of quadrumerization δ. On the other hand, the connection between the opposite parity S = 1/2 dimer phase and the S = 1 dimer phase outside the solid line is interrupted by the point δ = 1 and j = 0. At this point the ground state consists of isolated dimers on the 1 + δ-bonds and free spins on the 4l-th and (4l + 1)-th sites. In the presence of small but finite j, however, the spin structure does not depend on the sign of j because the effective coupling j_eff is proportional to j² according to Eq. (3.3). Therefore we may conclude that the S = 1/2 dimer phase with j > 1 and the S = 1 dimer phase also belong to a single phase.
§4. Analytic Results for ∆ = 0
For ∆ = 0 (XY model), our model reduces to the half-filled noninteracting spinless fermion system by the Jordan-Wigner transformation. After the Fourier transformation, the single particle excitation spectrum is determined by the eigenvalues of the 4 × 4 matrix
$$\frac{1}{2}\begin{pmatrix} 0 & (1-\delta)e^{ik} & 0 & j\,e^{-ik} \\ (1-\delta)e^{-ik} & 0 & j\,e^{ik} & 0 \\ 0 & j\,e^{-ik} & 0 & (1+\delta)e^{ik} \\ j\,e^{ik} & 0 & (1+\delta)e^{-ik} & 0 \end{pmatrix}, \qquad (4.1)$$
where k is the momentum of the excitation. The eigenvalues are given by

$$\varepsilon = \pm\,\varepsilon_\pm(k), \qquad (4.2)$$

where

$$\varepsilon_\pm(k) \equiv \frac{1}{2}\left[\,j^2 + (1+\delta^2) \pm 2\sqrt{j^2\cos^2 2k + j^2\delta^2\sin^2 2k + \delta^2}\,\right]^{1/2}. \qquad (4.3)$$
In the half filled case, the negative branch is filled and the excitation energy is determined by ε_±(k). The energy gap G is determined from the minimum of ε_−(k) as
$$G = \frac{1}{2}\left[\,j^2 + (1+\delta^2) - 2\sqrt{j^2 + \delta^2}\,\right]^{1/2}. \qquad (4.4)$$
Setting G = 0, the critical line can be calculated exactly as
$$j^2 + \delta^2 = 1, \qquad (4.5)$$
and the critical exponent is ν = 1. The phase boundary is a circle, as shown in Fig. 6 by the dotted line. When j is positive, the phase diagrams of the isotropic case and the XY case look similar. For negative values of j, however, these two models behave quite differently. In the isotropic case, the Haldane-like phase extends to j → −∞ for small δ, while it ends up at j = −1 in the XY case. The phase at j → −∞ is continuously connected with the S = 1 dimer phase. This can be understood in the following way.
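A hedged numerical cross-check of Eqs. (4.1)-(4.5) (numpy; the matrix below is the reconstruction given above, so this doubles as a sanity check of that reconstruction): diagonalize the Bloch matrix on a k-grid, take the smallest positive band as the gap, and confirm it closes on the circle j² + δ² = 1.

```python
# Sketch: single-particle gap of the XY chain from the 4x4 Bloch matrix (4.1)
# versus the closed form (4.4); the gap closes on the circle j^2 + delta^2 = 1.
import numpy as np

def gap_numeric(j, d, nk=2001):
    gmin = np.inf
    for k in np.linspace(-np.pi, np.pi, nk):   # grid includes k = 0
        e = np.exp(1j * k)
        M = 0.5 * np.array([[0, (1 - d) * e, 0, j / e],
                            [(1 - d) / e, 0, j * e, 0],
                            [0, j / e, 0, (1 + d) * e],
                            [j * e, 0, (1 + d) / e, 0]])
        gmin = min(gmin, np.linalg.eigvalsh(M)[2])  # smallest positive band
    return gmin

def gap_closed(j, d):
    return 0.5 * np.sqrt(j**2 + 1 + d**2 - 2 * np.sqrt(j**2 + d**2))

for j, d in [(0.5, 0.5), (0.6, 0.8), (0.9, 0.3)]:
    print((j, d), gap_numeric(j, d), gap_closed(j, d))
# (0.6, 0.8) lies on j^2 + d^2 = 1: both expressions give a vanishing gap.
```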
The coupling on the j-bond can be rewritten as,
$$j\left(S^x_{2l-1}S^x_{2l} + S^y_{2l-1}S^y_{2l} + \Delta S^z_{2l-1}S^z_{2l}\right) = j\,\mathbf{S}_{2l-1}\!\cdot\!\mathbf{S}_{2l} + j(\Delta-1)S^z_{2l-1}S^z_{2l} = j\,\mathbf{S}_{2l-1}\!\cdot\!\mathbf{S}_{2l} + \frac{j(\Delta-1)}{2}\left\{\left(S^z_{2l-1}+S^z_{2l}\right)^2 - \frac{1}{2}\right\}.$$
If we regard S_{2l−1} + S_{2l} as a single S = 1 spin operator, the last term corresponds to a single-site anisotropy term, which takes a large positive value for j → −∞ and 0 < ∆ < 1. Therefore the ground state at j → −∞ is a large-D-like phase in the XY case. Considering that no phase boundary exists between the large-D phase and the dimer phase in the S = 1 AFHC, 12) it is reasonable that we find no phase transition as a function of δ for large enough negative j in the present model.
§5. Summary and Discussion
The ground state phase diagram of the dimerized and quadrumerized spin-1/2 Heisenberg chain is calculated by the numerical diagonalization method. In the isotropic case, the critical points are determined using the phenomenological renormalization group and the finite size scaling hypothesis. The gap exponent ν is estimated to be close to 2/3 over the whole phase boundary, taking the logarithmic corrections into account. It is suggested that the phase transition of the present model belongs to the same universality class as the S = 1/2 dimerization transition, as expected from the symmetry of the system. The numerical results are consistent with the perturbation theory for δ ≃ 1 and j ≃ 0.
It is found that the Haldane phase is connected with the S = 1/2 dimer phase even in the presence of the quadrumerization δ. The S = 1 dimer phase is found to be connected with the S = 1/2 dimer phase with opposite parity. At first glance, the spin structures of the last two states might appear different. The difference of these two spin structures is, however, local and can be removed via a local reconstruction of a 4-spin singlet without closing the energy gap. Therefore these two states are continuously connected with each other. Actually, no evidence of a phase transition is observed in the numerical results.
In the isotropic case, our problem always has the SU(2) symmetry and this fixes the value of the critical exponents. If we violate the SU(2) symmetry by introducing an anisotropic exchange interaction, we may expect a wider variety of critical phenomena. Considering the big difference between the phase diagrams of the XY case and the isotropic case, it must be of interest to investigate the intermediate region 0 < ∆ < 1 in detail. The study of this problem is in progress and will be reported elsewhere.
Another way of breaking the SU(2) symmetry is to introduce a magnetic field. The effect of the magnetic field is of special interest in relation to the magnetization plateau, which can be regarded as a field-induced Haldane gap problem. 13,14,12) This is also under investigation and will be reported elsewhere.
We thank H. Nakano and K. Takano for useful discussions and fruitful comments. We are also grateful to H. Nishimori for the program package TITPACK version 2 for the diagonalization of spin-1/2 systems. The numerical calculation is performed using the HITAC S820 at the Information Processing Center of Saitama University and the FACOM VPP500 at the Supercomputer Center of Institute for Solid State Physics, University of Tokyo.
We have the corresponding equation for j_c(N_1, N_2) if the roles of j and δ are interchanged. The finite size critical points are obtained as the intersections of N_1 G(N_1, j, δ) and N_2 G(N_2, j, δ), as shown in Fig. 2. Three intersections for (N_1, N_2) = (12, 16), (16, 20) and (20, 24) are represented by double circles.
Fig. 1. The schematic spin structure for (a) j < 1, δ < δ_c, (b) j < 1, δ > δ_c and (c) δ = 0, j > 1.
Fig. 2. The j-dependence of N G(N, j, δ) with δ = 0.5 for 4N = 12, 16, 20 and 24. The intersections (double circles) are the finite size critical points.
Fig. 3. The extrapolation procedure of the finite size critical point j_c for δ = 0.5.
Fig. 4. The extrapolation procedure of the finite size critical exponent ν for δ = 0.5.
Fig. 5. The δ dependence of the numerically obtained critical exponent ν. Filled (open) circles represent the values for j > 0 (j < 0).
Fig. 6. The phase diagram of the isotropic model (solid line) and the XY model (dotted line). The dashed line is the result of the perturbation calculation for δ ≃ 1 and j ≃ 0. The dash-dotted line is the phase boundary obtained from the nonlinear σ model. 11)
H. Bethe, Z. Phys. 71 (1931) 205.
M. C. Cross and D. S. Fisher, Phys. Rev. B 19 (1979) 402.
J. L. Black and V. J. Emery, Phys. Rev. B 23 (1981) 429.
T. Nakano and H. Fukuyama, J. Phys. Soc. Jpn. 49 (1980) 1679.
G. Spronken, B. Fourcade and Y. Lépine, Phys. Rev. B 33 (1986) 1886.
F. D. M. Haldane, Phys. Lett. 93A (1983) 464;
Phys. Rev. Lett. 50 (1983) 1153.
Y. Kato and A. Tanaka, J. Phys. Soc. Jpn. 63 (1994) 1277.
A. Kitazawa, K. Nomura and K. Okamoto, Phys. Rev. Lett. 76 (1996) 4038.
K. Hida, Phys. Rev. B 45 (1992) 2207.
M. N. Barber, in Phase Transitions and Critical Phenomena, Vol. 8, eds. C. Domb and J. L. Lebowitz (Academic Press, 1983) p. 146.
K. Takano, preprint (1998).
T. Tonegawa, T. Nakao and M. Kaburagi, J. Phys. Soc. Jpn. 65 (1996) 3317.
M. Yamanaka, M. Oshikawa and I. Affleck, Phys. Rev. Lett. 78 (1997) 1984.
K. Totsuka, Phys. Rev. B 57 (1998) 3454.
| [] |
[
"A Brane model with two asymptotic regions",
"A Brane model with two asymptotic regions"
] | [
"Musongela Lubo \nThe Abdus Salam International Centre for Theoretical Physics I\nC.T.P P.O.Box 58634100TriesteItaly\n"
] | [
"The Abdus Salam International Centre for Theoretical Physics I\nC.T.P P.O.Box 58634100TriesteItaly"
] | [] | Some brane models rely on a generalization of the Melvin magnetic universe including a complex scalar field among the sources. We argue that the geometric interpretation of Kip.S.Thorne of this geometry restricts the kind of potential a complex scalar field can display to keep the same asymptotic behavior. While a finite energy is not obtained for a Mexican hat potential in this interpretation, this is the case for a potential displaying a broken phase and an unbroken one. We use for technical simplicity and illustrative purposes an ad hoc potential which however shares some features with those obtained in some supergravity models. We construct a sixth dimensional cylindrically symmetric solution which has two asymptotic regions: the Melvin-like metric on one side and a flat space displaying a conical singularity on the other. The causal structure of the configuration is discussed. Unfortunately, gravity is not localized on the brane. PACS numbers:I. INTRODUCTIONAmong the most important characteristics of cosmic strings is the existence of a symmetry axis and the concentration of energy around this axis [1]. Taking gravity into account, the existence of a symmetry axis implies cylindrical symmetry for the metric as well. The static cylindrically symmetric solutions of Einstein equations in vacuum in 4D are of the Kasner type: they are parameterized by three constants obeying two constraints. The vanishing of the energy-momentum tensor in the asymptotic region implies that the geometry must approach a Kasner line there. As the energy momentum tensor corresponding to these axial configurations implies the invariance of the metric under boosts, one is left with only two Kasner geometries : a flat space presenting a conical singularity and a Melvin like space [2]. The first case leads to the well known cosmic strings. The second solution, written in a particular system of coordinates, displays circles of decreasing circumferences for increasing "radii"ρ. This feature has been analyzed by Kip.S.Thorne[3,4]. The interpretation is that ρ = ∞ is the point at infinity on the symmetry axis.The Melvin solution has high dimensional generalizations which can be used to build brane models[5,6]. In recent works, complex scalar fields have been incorporated into the picture[7]. In this article we address the same question. However, we use the Kip S.Thorne interpretation to fix the boundary conditions; the coordinate ρ being the point at infinity on the symmetry axis, the angular coordinate is not well defined there, just as for the polar coordinates at the origin on the plane. A cylindrically symmetric complex field must thus vanish at that point. We obtain that no static cylindrical solution can be obtained with the usual Higgs potential. We then exhibit a toy model for which this can be achieved and discuss its characteristics. This paper is organized as follows. In the second section we review the geometric interpretation of The Melvin magnetic solution in four dimensions. The third section incorporates the scalar field, set the boundary conditions and displays the numerical solution obtained. First, an Abelian-Higgs Lagrangian is coupled to the Einstein-Hilbert one. Looking for an axially symmetric configuration which displays the second special Kasner line element far from the source , the preceding section imposes the vanishing of the vector and the scalar field as the coordinate ρ goes to infinity. 
This results in a divergent inert mass per unit length if the v.e.v of the Higgs field does not vanish. If it does, one simply recovers the Melvin solution. We then construct, for illustrative purposes, a potential for which the behavior of the scalar is non trivial. This potential has two minima, one of them being at zero. This has similarities with some potentials obtained in some supergravity [7] models and some non commutative models[8]. * Electronic address: [email protected] | 10.1103/physrevd.71.044026 | [
"https://export.arxiv.org/pdf/hep-th/0408089v2.pdf"
] | 118,963,042 | hep-th/0408089 | 8f841031819ae9b4edf771f269e26cb10f590065 |
A Brane model with two asymptotic regions
7 Feb 2005
Musongela Lubo
The Abdus Salam International Centre for Theoretical Physics I
C.T.P P.O.Box 58634100TriesteItaly
A Brane model with two asymptotic regions
7 Feb 2005 (Dated: March 27, 2022). PACS numbers:
Some brane models rely on a generalization of the Melvin magnetic universe including a complex scalar field among the sources. We argue that the geometric interpretation of Kip S. Thorne of this geometry restricts the kind of potential a complex scalar field can display to keep the same asymptotic behavior. While a finite energy is not obtained for a Mexican hat potential in this interpretation, this is the case for a potential displaying a broken phase and an unbroken one. We use for technical simplicity and illustrative purposes an ad hoc potential which however shares some features with those obtained in some supergravity models. We construct a six-dimensional cylindrically symmetric solution which has two asymptotic regions: the Melvin-like metric on one side and a flat space displaying a conical singularity on the other. The causal structure of the configuration is discussed. Unfortunately, gravity is not localized on the brane.
PACS numbers:
I. INTRODUCTION
Among the most important characteristics of cosmic strings is the existence of a symmetry axis and the concentration of energy around this axis [1]. Taking gravity into account, the existence of a symmetry axis implies cylindrical symmetry for the metric as well. The static cylindrically symmetric solutions of Einstein equations in vacuum in 4D are of the Kasner type: they are parameterized by three constants obeying two constraints. The vanishing of the energy-momentum tensor in the asymptotic region implies that the geometry must approach a Kasner line there. As the energy momentum tensor corresponding to these axial configurations implies the invariance of the metric under boosts, one is left with only two Kasner geometries: a flat space presenting a conical singularity and a Melvin-like space [2]. The first case leads to the well known cosmic strings. The second solution, written in a particular system of coordinates, displays circles of decreasing circumferences for increasing "radii" ρ. This feature has been analyzed by Kip S. Thorne [3,4]. The interpretation is that ρ = ∞ is the point at infinity on the symmetry axis. The Melvin solution has higher dimensional generalizations which can be used to build brane models [5,6]. In recent works, complex scalar fields have been incorporated into the picture [7]. In this article we address the same question. However, we use the Kip S. Thorne interpretation to fix the boundary conditions; the coordinate value ρ = ∞ being the point at infinity on the symmetry axis, the angular coordinate is not well defined there, just as for the polar coordinates at the origin on the plane. A cylindrically symmetric complex field must thus vanish at that point. We obtain that no static cylindrical solution can be obtained with the usual Higgs potential. We then exhibit a toy model for which this can be achieved and discuss its characteristics. This paper is organized as follows. In the second section we review the geometric interpretation of the Melvin magnetic solution in four dimensions. The third section incorporates the scalar field, sets the boundary conditions and displays the numerical solution obtained. First, an Abelian-Higgs Lagrangian is coupled to the Einstein-Hilbert one. Looking for an axially symmetric configuration which displays the second special Kasner line element far from the source, the preceding section imposes the vanishing of the vector and the scalar field as the coordinate ρ goes to infinity.
This results in a divergent inert mass per unit length if the v.e.v. of the Higgs field does not vanish. If it does, one simply recovers the Melvin solution. We then construct, for illustrative purposes, a potential for which the behavior of the scalar is non trivial. This potential has two minima, one of them being at zero. This has similarities with some potentials obtained in some supergravity models [7] and some non-commutative models [8]. * Electronic address: [email protected]
The fourth section shows that the trajectories of massive particles in this geometry are bounded. We also study massless particle trajectories and discuss the causal structure of the solution. An appendix is devoted to the way the numerical approximation has been computed.
The coupling of scalar fields to gravity leads to many classical solutions [9,10,11,12,13,14,15,16]. The introduction of a scalar field among the sources leading to a Melvin type universe has been considered before [17]. There are three main differences between our work and the previous papers. Firstly, the scalar field considered here is not a dilaton, so that its coupling to gravity is ordinary. The potentials involved are not the same. Secondly, the scalar field here is complex, contrary to [17] where it is real. The vanishing of the scalar field on the symmetry axis is not required when it is real. On the contrary, this becomes mandatory when it is complex, just like for the Higgs field on the cosmic string core. But the most important difference is that here we have a configuration with two asymptotic regions. Like in [37], we do not have a delta function for the brane.
II. THE MELVIN SOLUTION IN 4D.
In this section we review the geometric interpretation of the Melvin solution. To keep the discussion as simple as possible, we actually analyze its asymptotic limit in four dimensions. The conclusions are however the same in the presence of extra dimensions and the addition of matter.
We will study a system consisting of a self gravitating Maxwell system in d dimensions. The classical field equations are derived from the action

$$S = \int d^d x\,\sqrt{|g|}\left[-\frac{1}{4}F_{ab}F^{ab} + \frac{R}{16\pi G}\right]. \qquad (1)$$
The solution which generalizes the Melvin universe in d dimensions is given by the following expressions of the metric and the Maxwell tensor:
$$ds^2 = \left(1+\frac{\rho^2}{a^2}\right)^{2/(d-3)}\eta_{\mu\nu}dx^\mu dx^\nu - d\rho^2 - \rho^2\left(1+\frac{\rho^2}{a^2}\right)^{-2}d\phi^2, \qquad F = B_0\left(1+\frac{\rho^2}{a^2}\right)^{-2}\rho\,d\rho\wedge d\phi, \qquad (2)$$
a and B_0 being dependent constants. This solution has been used in the brane world scenarios. One of its important properties is that the brane can have positive tension and the closure of the bulk provides a singularity-free boundary condition for solutions that contain black holes and gravitational waves on the brane [5,6]. The characteristic of this metric on which we will put the emphasis is the fact that the circumference of circles obtained by letting only φ vary tends to zero as the coordinate ρ goes to infinity. The Lagrangian from which the solution given in Eq. (2) follows did not contain any scalar field. The question we address in this paper is: if one adds a scalar field to the picture, which kind of potential allows the same behavior for the metric? We will argue that a geometric interpretation of this property of the metric gives an important restriction.
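A quick numerical illustration of this property (plain Python; a = 1 is an arbitrary choice for the example): the proper circumference C(ρ) of the circles of constant ρ grows, peaks, and then decays to zero.

```python
# Sketch: circumference of constant-rho circles in the metric (2),
# C(rho) = 2*pi*rho*(1 + rho^2/a^2)^(-1); it vanishes as rho -> infinity.
import math

a = 1.0  # arbitrary scale for the illustration
for rho in (0.1, 0.5, 1.0, 2.0, 10.0, 100.0):
    C = 2 * math.pi * rho / (1 + rho**2 / a**2)
    print(f"rho = {rho:6.1f}   C(rho) = {C:.4f}")
# Maximum at rho = a; C ~ 2*pi*a^2/rho afterwards, so rho = infinity closes up.
```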
The features observed by Kip S. Thorne for the Melvin solution [18,19] are already present in a vacuum solution: the Kasner line element [2], which is obtained by taking a = 0 in Eq. (2). The simplest example where this behavior can be analyzed is the sphere. Introducing the polar stereographic coordinates (r, θ), the metric of a sphere of radius σ reads [20]
$$ds^2 = \frac{16\sigma^4}{(4\sigma^2+r^2)^2}\,dr^2 + \frac{16\sigma^4 r^2}{(4\sigma^2+r^2)^2}\,d\theta^2. \qquad (3)$$
For large values of the coordinate r, the coefficient g_θθ becomes a decreasing function. If one interprets r as the radius, a circle of infinite radius turns out to be of null length. This is obvious since r = ∞ corresponds to the point at infinity on the plane, which is mapped into the north pole by the stereographic projection; r = ∞ is just the north pole. When the coordinate r vanishes, one has another circle displaying a vanishing circumference: the south pole. The two are on the symmetry axis. Introducing the variable r_* = 2σ arctan(r/2σ), the metric reads

$$ds^2 = dr_*^2 + \sigma^2\sin^2\!\left(\frac{r_*}{\sigma}\right)d\theta^2. \qquad (4)$$
The relation between r_* and r is bijective provided that r_* ∈ [0, πσ]. The points located on the symmetry axis once again are those for which the coefficient g_θθ vanishes.
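A hedged symbolic check of this change of variables (sympy; nothing beyond Eqs. (3) and (4) enters): substituting r = 2σ tan(r_*/2σ) into the metric coefficients of Eq. (3) reproduces dr_*² + σ² sin²(r_*/σ) dθ².

```python
# Sketch: verify that r = 2*sigma*tan(rs/(2*sigma)) turns metric (3) into (4).
import sympy as sp

sigma = sp.symbols('sigma', positive=True)
rs = sp.symbols('r_s', positive=True)
r = 2 * sigma * sp.tan(rs / (2 * sigma))
pref = 16 * sigma**4 / (4 * sigma**2 + r**2)**2   # common factor in eq. (3)
g_rr = sp.simplify(pref * sp.diff(r, rs)**2)      # dr^2 coefficient in rs
g_tt = sp.simplify(pref * r**2)                   # dtheta^2 coefficient

print(sp.simplify(g_rr - 1))                                     # 0 -> dr_*^2
print(sp.simplify(g_tt - sigma**2 * sp.sin(rs / sigma)**2))      # 0 -> (4)
```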
III. THE 6D EXTENSION WITH A COMPLEX SCALAR FIELD.
Before proceeding, let us point out that the tangential Maxwell vector field in the Melvin solution vanishes when ρ goes to infinity, in accord with the geometric interpretation. We now wish to include a scalar field in the picture, in the presence of extra dimensions. We choose a six-dimensional model essentially because in this case one can naturally obtain chiral fermions.
Let us first consider a scalar displaying a Higgs potential; the matter action is then
$$S = \int d^6x\,\sqrt{-g}\left[\frac{1}{2}D_\mu\Phi\,D^\mu\Phi^* - \frac{\lambda}{4}\left(\Phi\Phi^* - v^2\right)^2\right]. \qquad (5)$$
The U(1) charge e is embodied in the covariant derivative $D_\mu\Phi = \partial_\mu\Phi - ieA_\mu\Phi$.
For a static cylindrically symmetric configuration, the ansatz can be given the form
$$ds^2 = \beta^2(\rho)\,\eta_{\mu\nu}dx^\mu dx^\nu - \gamma^2(\rho)\,d\rho^2 - \alpha^2(\rho)\,d\phi^2 \quad (\mu = 0,\dots,3), \qquad \Phi = v f(\rho)e^{i\phi}, \qquad A_\phi = \frac{1}{e}\bigl(1 - p(\rho)\bigr). \qquad (6)$$
The cosmic string solution has been extensively studied in the literature. In that configuration, the smoothness of the geometry on the symmetry axis is guaranteed by going to the gauge γ(ρ) = 1 and imposing the boundary conditions [2]
$$\alpha(0) = 0, \qquad \alpha'(0) = 1, \qquad (7)$$
while the matter fields are non singular on the core provided that
$$f(0) = 0, \qquad p(0) = 1. \qquad (8)$$
The vanishing of the energy density in the asymptotic region implies
$$f(\infty) = 1, \qquad p(\infty) = 0. \qquad (9)$$
What happens if we want the metric to display the same asymptotic behavior as in Eq. (2) when the coordinate ρ goes to infinity? In the previous section, we argued that ρ = ∞ is the point at infinity on the symmetry axis. To have a regular cylindrically symmetric configuration, the Higgs and the tangential vector field must vanish there:
$$f(\infty) = 0 \qquad \text{and} \qquad p(\infty) = 1. \qquad (10)$$
Extracting the expression of the integrand of the inertial mass from Eq.(5) one has
$$\epsilon(\rho) = \sqrt{|g|}\left[\frac{1}{2}g^{\rho\rho}|D_\rho\Phi|^2 + \frac{1}{2}g^{\phi\phi}|D_\phi\Phi|^2 + \frac{1}{4}F_{\rho\phi}F^{\rho\phi} + \frac{\lambda}{4}\left(\Phi^*\Phi - v^2\right)^2\right]. \qquad (11)$$
In the asymptotic region (i.e. ρ → ∞) one has √−g ∼ ρ^{7/3}; the volume element is not bounded. The first three terms decrease in the asymptotic region provided that f(ρ) and p(ρ) approach constants there; this is already satisfied by Eq. (10). The contribution of the Higgs potential in this part of the space is reduced to the integral of ρ^{7/3} λv⁴. This has a chance to converge only when v = 0. Then, the parameterization given in Eq. (6) does not apply; one can however define a dimensionless function associated to the Higgs field by using the Newton constant. Doing this, we obtained a vanishing scalar field for any value of the parameters. Physically this can be understood as follows. Forcing the scalar field to go from zero to zero as ρ goes from zero to infinity, one obtains that it vanishes identically since there is no source. Such a source would be for example a local maximum of the potential, but as the vacuum expectation value vanishes, such a maximum does not exist. In fact, one recovers the Melvin universe.
Is it possible to construct a solution with a non vanishing scalar field? To do this we need a potential which vanishes with the scalar field so that the minimum is attained at spatial infinity. We also need a local maximum which will correspond to a source. These conditions are for example satisfied by the gauge invariant potential
$$V(\Phi) = \lambda\,e^{w^2\Phi\Phi^*}\,\Phi\Phi^*\left(\Phi\Phi^* - v^2\right)^2. \qquad (12)$$
The maximum is attained at $\Phi = \pm v\sqrt{\sqrt{2}-1}$ while there are three minima, at Φ = 0, ±v. The U(1) symmetry is broken spontaneously in the last two vacua and preserved in the first one. We disregard the renormalizability since we are interested only in classical solutions. This potential, although purely ad hoc, shares with the one appearing in [7],
$$V(\phi) = 2e^{\phi\bar\phi}(\phi\bar\phi)^{p-1}\left[2(p + \phi\bar\phi)^2 - 5\phi\bar\phi\right], \qquad (13)$$
the fact that it is the product of an exponential and a polynomial. The difference is the fact that in Eq. (13), one needs to have p ≥ 1 to have the vanishing value of the field as a vacuum but then there is no other vacuum. In [7], it was argued that the potential of Eq. (13) could be seen as inspired from some supergravity model, with a particular choice of the Kähler structure. Let us also mention that in models in which one works with non commutative spaces such as M × M_n, where M denotes the Minkowski space and M_n the set of n × n matrices, one also obtains potentials displaying symmetric vacua. The attitude adopted here is like the one concerning analytical solutions for self gravitating domain walls [22,23,24]. One knows that a potential which is a cosine of a scalar field achieves the desired goal, although it is not renormalizable. In the same way, one introduces ad hoc potentials for quintessence [25,26,27]. Some of them are negative powers of a scalar field and so lead to non renormalizable theories. Our only aim is to show that solutions with finite energy exist for specific potentials. Moreover, some potentials displaying symmetric vacua have been used as candidates for dark matter [28,29]. We now wish to construct a solution which interpolates between two vacua, say |Φ| = v and |Φ| = 0. Our previous discussion tells us that the region where the field goes to its unbroken phase can not be Melvin-like. As we simply require cylindrical symmetry, we can choose that asymptotic region to be like the far region of a cosmic string.
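A hedged check of the stated extrema (sympy, with x = ΦΦ*; the choice w²v² = 1 matches the value σ = 1 used in the numerical solution below and is an assumption for this check):

```python
# Sketch: extrema of V = exp(x/v^2) * x * (x - v^2)^2 with x = |Phi|^2
# (lambda dropped; w^2 = 1/v^2 assumed, i.e. sigma = 1).
import sympy as sp

x, v = sp.symbols('x v', positive=True)
V = sp.exp(x / v**2) * x * (x - v**2)**2
crit = sp.solve(sp.Eq(sp.diff(V, x), 0), x)
print(crit)    # contains v**2 and v**2*(sqrt(2) - 1)
# x = v^2 is the broken minimum (V = 0); x = v^2*(sqrt(2) - 1) is the local
# maximum, i.e. |Phi| = v*sqrt(sqrt(2) - 1); x = 0 is the symmetric minimum
# (V(0) = 0 with V >= 0, a boundary minimum rather than a stationary point).
```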
The Einstein equations will be written in the form
$$R^b_a = -8\pi G\left(T^b_a - \frac{1}{4}\,\delta^b_a\,T\right) \qquad (14)$$
where the factor 1/4 comes from the dimension of the space time. In six dimensions, asking our action to be a pure number means the self coupling λ and the gauge coupling e are dimensionful. We can thus write our equations in terms of the dimensionless parameters
$$\mu = Gv^2, \qquad \nu = \lambda^2 G^{-3}, \qquad \tau = e^2 v \qquad \text{and} \qquad \sigma = w^2 v^2. \qquad (15)$$
Note that here the dimension of v is the inverse of a length squared. We will use the dimensionless length x given by
$$\rho = Lx, \qquad \text{where} \qquad L = \frac{1}{2\sqrt{\pi}\,\mu^{5/4}\nu^{1/4}}\,\frac{1}{\sqrt{v}}, \qquad (16)$$
and the dimensionless functions
$$\alpha(\rho) = \sqrt{\frac{\pi\mu}{\tau}}\,\frac{1}{\sqrt{v}}\,A(x), \qquad \beta(\rho) = B(x), \qquad \gamma(\rho) = \bar\gamma(x), \qquad f(\rho) = F(x), \qquad p(\rho) = P(x). \qquad (17)$$
The field equations read
$$e^{\sigma F^2}\,BF^2\bar\gamma^2(1-F^2)^2 + \frac{A'B'}{A} + 3\frac{B'^2}{B} - \frac{B'\bar\gamma'}{\bar\gamma} - 2\frac{BP'^2}{A^2} + B'' = 0, \qquad (18)$$

$$\frac{1}{4}e^{\sigma F^2}\,BF^2\bar\gamma^2(1-F^2)^2 + 2\pi\mu BF'^2 - \frac{BA'\bar\gamma'}{4A\bar\gamma} - \frac{B'\bar\gamma'}{\bar\gamma} + \frac{3BP'^2}{2A^2} + \frac{BA''}{4A} + B'' = 0, \qquad (19)$$

$$e^{\sigma F^2}\,BF^2\bar\gamma^2(1-F^2)^2 + \frac{2\tau}{\pi\mu^{5/2}\sqrt{\nu}}\,\frac{F^2\bar\gamma^2 P^2}{A} + 4\frac{A'B'}{B} - \frac{A'\bar\gamma'}{\bar\gamma} + 6\frac{P'^2}{A} + A'' = 0, \qquad (20)$$

$$\frac{\tau}{2\pi\mu^{5/2}\sqrt{\nu}}\,F^2\bar\gamma^2 P - \frac{A'P'}{A} + 4\frac{B'P'}{B} - \frac{\bar\gamma' P'}{\bar\gamma} + P'' = 0, \qquad (21)$$

$$\frac{1}{2\pi\mu}e^{\sigma F^2}\,F\bar\gamma^2\left(1 + (\sigma-4)F^2 + (3-2\sigma)F^4 + \sigma F^6\right) - \frac{\tau}{4\pi^2\mu^{7/2}\sqrt{\nu}}\,\frac{F\bar\gamma^2 P^2}{A^2} + \frac{A'F'}{A} + 4\frac{B'F'}{B} - \frac{F'\bar\gamma'}{\bar\gamma} + F'' = 0, \qquad (22)$$

where all functions depend on x and primes denote derivatives with respect to x.
We now relax the assumption that the coordinate ρ goes from zero to infinity but rather take it to go from minus to plus infinity. This is like parameterizing every point of the sphere not by θ, φ but by φ and its distance from the equator on a meridian. The upper hemisphere would have positive ρ while the lower would correspond to negative values of that length coordinate. In our case, the region where ρ → −∞ will correspond to a cosmic string-like geometry while ρ → ∞ will be associated with the Melvin-like behavior.
So, the boundary conditions are that when ρ → −∞, one is in the far region of the cosmic string solution:
A(x) ∼ x ,   B(x) ∼ 1 ,   γ̃(x) ∼ 1 ,   (23)
while for ρ → ∞ one enters the asymptotic region of the Melvin solution:
A(x) ∼ a²/x ,   B(x) ∼ γ̃(x) ∼ (x²/a²)^{1/3} .   (24)
To give an illustration, we need to solve the above set of coupled nonlinear differential equations. This has been done for the simplest choice of the parameters, µ = ν = τ = σ = 1; the details concerning the numerical treatment are given in the appendix. The behavior of the different fields is displayed in FIGS. 1, 2 and 3. Roughly speaking, the region around ρ = 0 is where the transition between the two regimes takes place; it is also the place where the energy is concentrated.
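To illustrate the kind of two-point boundary-value problem being solved, here is a minimal sketch, ours rather than the authors' code, using a standard boundary-value solver on a drastically simplified single profile equation whose potential has the same symmetric-vacua structure as Eq. (12); the boundary conditions play the role of Eqs. (23)–(24) on a truncated interval.

```python
# Toy boundary-value problem: F'' = dU/dF with U = F^2 (F^2 - 1)^2 / 2, whose
# vacua F = 0 and |F| = 1 mimic those of the potential (12). The conditions
# F(-L) = 1, F(L) = 0 stand in for the two asymptotic regimes (23)-(24).
import numpy as np
from scipy.integrate import solve_bvp

L = 10.0

def rhs(x, y):
    F, dF = y
    return np.vstack([dF, F * (F**2 - 1.0) * (3.0 * F**2 - 1.0)])

def bc(ya, yb):
    return np.array([ya[0] - 1.0, yb[0]])

x = np.linspace(-L, L, 201)
y0 = np.vstack([0.5 * (1.0 - np.tanh(x)), -0.5 / np.cosh(x)**2])  # kink guess
sol = solve_bvp(rhs, bc, x, y0)
print(sol.status, sol.message, "F(0) =", float(sol.sol(0.0)[0]))
```

The full problem replaces this single equation by the coupled system (18)–(22) with the asymptotics (23)–(24), but the relaxation strategy is the same.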
The trapping of gravity is ensured when the condition
∫ dx⁴ dx⁵ g₀₀ √|g| < ∞   (25)
is satisfied. From the asymptotic behavior of the metric, one sees this is not true for the solution we constructed. Let us also remark that a change of coordinates can now be made so that, for example, γ̃ = 1.
IV. THE CLASSICAL TRAJECTORIES.
Let us first consider the case of massive particles. The metric specified in Eq. (6) has cyclic coordinates; its geodesics are consequently characterized by constants of motion. Introducing the proper time per unit mass τ, the energy-momentum relation reads
(dρ/dτ)² = [−1 − k₁²/α²(ρ) + k²/β²(ρ)] / γ²(ρ) .   (26)
A physical motion is characterized by a real velocity. The asymptotic behavior of the metric displayed in Eqs. (23), (24) shows that the trajectory of a massive particle never attains the point at infinity on the Melvin branch; however, it has access to the region at infinity on the string branch. The causal structure is found by analyzing particular null geodesics. The bounded null coordinates in the new background are given by ū = const or v̄ = const, with
ū = arctan[t − σ(ρ)] ,   v̄ = arctan[t + σ(ρ)] ,   (27)
where
σ(ρ) = ∫₀^ρ dξ γ(ξ)/β(ξ) .   (28)
In FIG.4, we have drawn the Penrose-Carter diagram of the solution.
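Given numerical profiles for β and γ, the coordinate (28) and the compactified null coordinates (27) are straightforward quadratures. The sketch below is a hedged illustration of ours, with toy stand-ins for β and γ that only mimic the asymptotics (23)–(24); in practice one would interpolate the numerical solution.

```python
# Numerical evaluation of Eqs. (27)-(28) with placeholder metric profiles.
import numpy as np
from scipy.integrate import cumulative_trapezoid

def beta(rho):
    return np.where(rho < 0.0, 1.0, (1.0 + rho**2) ** (1.0 / 3.0))

def gamma(rho):
    return np.where(rho < 0.0, 1.0,
                    ((2.0 + rho**2) / 2.0) ** (1.0 / 3.0))

rho = np.linspace(-20.0, 20.0, 4001)
sigma = cumulative_trapezoid(gamma(rho) / beta(rho), rho, initial=0.0)
sigma -= np.interp(0.0, rho, sigma)   # fix the integration constant: sigma(0) = 0

t = 0.0
u_bar = np.arctan(t - sigma)          # Eq. (27): values compactified to (-pi/2, pi/2)
v_bar = np.arctan(t + sigma)
print(u_bar[[0, -1]], v_bar[[0, -1]])
```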
The embedding of the two-dimensional metric containing only t and ρ in Euclidean space can be realized by the surface of revolution
Z(r) = ∫₀^r dy √(c²(y) − 1) ,   c(y) = γ(α⁻¹(y)) (d/dy)(α⁻¹(y)) .   (29)
The limiting behavior of the metric shows that in the region ρ → −∞,
Z(r) ∼ const · r ;   (30)
no restriction is imposed on r and the surface is a cone. On the contrary, as ρ → ∞,
Z(r) ∼
there is a maximal circumference.
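The embedding (29) can likewise be evaluated numerically. The sketch below is our own toy illustration, with an assumed string-branch radius profile α(ρ) of deficit-angle type, and recovers the conical behavior (30).

```python
# Toy evaluation of Eq. (29) on the conical (string) branch; alpha(rho) is an
# assumed profile with asymptotic slope 1/2 (a deficit angle), inverted
# numerically on a monotonic piece.
import numpy as np
from scipy.integrate import cumulative_trapezoid

rho = np.linspace(-20.0, -0.01, 2001)
alpha = 1.0 + 0.5 * np.sqrt(1.0 + rho**2)   # toy radius profile, decreasing here
gamma = np.ones_like(rho)                   # gamma ~ 1 on this branch, Eq. (23)

order = np.argsort(alpha)                   # invert alpha(rho): r increasing
r, rho_of_r, g = alpha[order], rho[order], gamma[order]
c = g * np.gradient(rho_of_r, r)            # c(r) = gamma d(alpha^{-1})/dr
z = cumulative_trapezoid(np.sqrt(np.maximum(c * c - 1.0, 0.0)), r, initial=0.0)
print(z[-1] / (r[-1] - r[0]))               # ~ sqrt(3): Z ~ const * r, a cone (30)
```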
V. CONCLUSIONS.
We have constructed a six-dimensional cylindrically symmetric self-gravitating configuration. It has two asymptotic regions: one corresponding to a flat spacetime with a deficit angle and the other to the second special Kasner line element. The part of the geometry which displays a deficit angle can be realized as a cone with a smoothed apex. The second special Kasner geometry, on the other hand, can be seen as a tube with a decreasing radius. Gluing the two, one obtains something close to a funnel. The causal structure of this geometry was studied. It should be stressed that such a configuration, with two topologically different boundary regions, is not possible for static spherically symmetric configurations; the Birkhoff theorem forbids it.
The potential considered is nonrenormalizable, but our discussion shows that it is one of the simplest which allows boundary conditions compatible with the geometric interpretation of the second special Kasner geometry.
As in [32], we have built a configuration which has two different asymptotic regions. In our case we have a space with a conical singularity on one side and a Melvin-like solution on the other, while in the preceding case there are two AdS₅ spacetimes glued together along a three-brane. The fact that our solution is not asymptotically flat and the non-localization of the four-dimensional graviton are similar to [33]. Among the priorities which should be addressed if a more realistic model is built along these lines is the construction of realistic Abelian and non-Abelian four-dimensional models [34,35].
What we have learned in this work is basically that if one wants to include a complex scalar field possessing a winding number in a Melvin-like solution, one needs a particular kind of potential. Our solution does not trap gravity. Nevertheless, the model may still have some phenomenological interest. Although we have not made the appropriate analytic computations here, one cannot rule out at this stage the possibility of having a quasi-localized four-dimensional graviton on the brane, as in [36]. In that model, it was shown that Newton's law of gravity is valid only between two length scales fixed by the theory.
Rather than computing the integral, we approximated the surface to which it corresponds by a sum of rectangles. The values of the parameters we found are given below. Plotting the functions ODE_k(x), one finds an error of order 10⁻² on the entire real axis.
FIG. 1: The scalar and vector fields are plotted in terms of the coordinate x.
FIG. 2: From top to bottom, the components A(x), B(x), γ̃(x).
FIG. 3: The energy density ε(x).
FIG. 4: The causal structure of the solution. The time-like infinities are I+ (ū = v̄ = π/2) and I− (ū = v̄ = −π/2). As r can change sign, there are two space infinities: Io> (−π/2, π/2) and Io< (π/2, −π/2). The curves which begin at I− and end at I+ correspond to fixed values of r, while the others correspond to fixed t.
…823008493904919 , b1 = 0.9933291170842984 , b2 = 0.42807374544606963 , b3 = 0.7966722551728198 , b4 = 1.2646707186251818 , g0 = 0.7842562408108704 , g1 = −0.2977638255573934 , g2 = −0.0349623024909614 , g3 = 0.8326686788126545 , g4 = 0.14292786831126408 .   (A5)
The constructed solution interpolates between them. The minimum corresponding to a vanishing value of the field is attained in the Kasner-like asymptotic region, while the non-vanishing value corresponds to a flat space presenting an angular deficit. The classical trajectories of neutral particles in this geometry are analyzed in section four. We show …

Acknowledgments

We thank A. D. Dolgov, S. Shankaranarayanan, R. Jeannerot and I. Dorsner for useful criticisms.

APPENDIX A: NUMERICAL CONSIDERATIONS.

Our numerical approximation relies on a symbolic approximation of the fields which can then be improved by a relaxation method. The function F(x) goes from 1 to 0 when the argument goes from −∞ to ∞. The ansatz is taken to be (A1); in the same way, one has (A2). The asymptotic behavior of the metric displayed in Eqs. (23), (24) is taken into account by the functions (A3) and (A4). These parameters are not all independent, due to the fact that one constant (a) drives the asymptotic behavior of the functions α(ρ), β(ρ), γ(ρ) and P(ρ) simultaneously (see Eq. (2)). This has been taken into account. For simplicity, we introduce α5 = √a5. For the true solution, all the right-hand members of the equations given in Eqs. (18)–(22), which we denote ODE_1(x), …, ODE_5(x), vanish. To obtain an initial approximation for the relaxation method, the idea is to look for the values of the coefficients f0, …, g5 for which the integral

eq = ∫_{−∞}^{∞} dx [ODE_1²(x) + ODE_2²(x) + ODE_3²(x) + ODE_4²(x) + ODE_5²(x)]

is minimal.
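The initialization step can be sketched as follows; this is an illustrative reimplementation of ours, with a stand-in residual in place of the full system (18)–(22).

```python
# The integral of the summed squared residuals is replaced by a rectangle sum
# on a truncated grid and minimized over the ansatz coefficients. The residual
# below is a stand-in for ODE_1(x), ..., ODE_5(x) evaluated on (A1)-(A4).
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-20.0, 20.0, 801)   # truncation of the real axis
dx = x[1] - x[0]

def residuals(coeffs):
    a, b = coeffs
    F = 0.5 * (1.0 - np.tanh(a * x))                 # toy ansatz for F(x)
    d2F = np.gradient(np.gradient(F, dx), dx)
    return [d2F - b * F * (F**2 - 1.0)]              # stand-in residual

def eq(coeffs):
    # rectangle-rule approximation of the integral of the squared residuals
    return sum(dx * np.sum(r * r) for r in residuals(coeffs))

best = minimize(eq, x0=[1.0, 1.0], method="Nelder-Mead")
print(best.x, best.fun)
```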
A. Vilenkin and E. P. S. Shellard, Cosmic Strings and Other Topological Defects, Cambridge University Press, Cambridge (1994).
K. S. Thorne, Phys. Rev. B139 (1965) 244.
K. S. Thorne, Phys. Rev. B138 (1965) 251.
G. W. Gibbons and D. L. Wiltshire, Nucl. Phys. B287 (1987) 717.
J. Louko and D. L. Wiltshire, JHEP 0202 (2002) 007.
B. de Carlos and J. M. Moreno, JHEP 0311 (2003) 040.
M. Dubois-Violette, J. Madore and R. Kerner, Class. Quant. Grav. 6 (1989) 1709.
S. Fernando, Gen. Rel. Grav. 36 (2004) 71.
T. Ghosh and P. Mitra, Class. Quant. Grav. 20 (2003) 1403.
W. Graf, Phys. Rev. D67 (2003) 024002.
G. Clement, D. Gal'tsov and C. Leygnac, Phys. Rev. D67 (2003) 024012.
G. W. Gibbons, D. Ida and T. Shiromizu, Phys. Rev. D66 (2002) 044010.
D. Grumiller, W. Kummer and D. V. Vassilevich, Phys. Rept. 369 (2002) 327.
Y. Brihaye and B. Hartmann, Phys. Lett. B534 (2002) 137.
S. S. Gubser, A. A. Tseytlin and M. S. Volkov, JHEP 0109 (2001) 017.
G. W. Gibbons and K. Maeda, Nucl. Phys. B298 (1988) 741.
M. A. Melvin, Phys. Lett. 8 (1964) 65.
M. A. Melvin, Phys. Rev. B225 (1965) 139.
B. A. Dubrovin, A. T. Fomenko and S. D. Novikov, Géométrie contemporaine: Méthodes et Applications, Éd. Mir, Moscou (1987).
C. W. Misner, K. S. Thorne and J. A. Wheeler, Gravitation, W. H. Freeman and Company, San Francisco (1973).
G. Goetz, J. Math. Phys. 31 (1990) 2683.
R. Gass and M. Mukherjee, Phys. Rev. D60 (1999) 065011.
A. Wang and P. S. Letelier, Phys. Rev. D51 (1995) 6612.
R. R. Caldwell, R. Dave and P. J. Steinhardt, Phys. Rev. Lett. 80 (1998) 1582.
I. Zlatev, L. Wang and P. J. Steinhardt, Phys. Rev. Lett. 82 (1999) 896.
P. Brax and J. Martin, Phys. Rev. D61 (2000) 103502.
C. Beck, hep-ph/0310479.
S. Weinberg, Rev. Mod. Phys. 61 (1989) 1.
M. Christensen, A. L. Larsen and Y. Verbin, Phys. Rev. D60 (1999) 125012.
Y. Brihaye and M. Lubo, Phys. Rev. D62 (2000) 085004.
A. Gruppuso, E. Roessl and M. Shaposhnikov, JHEP 0408 (2004) 011.
M. Shaposhnikov, P. Tinyakov and K. Zuleta, Phys. Rev. D70 (2004) 104019.
S. Randjbar-Daemi and M. Shaposhnikov, JHEP 0304 (2003) 016.
M. Laine, H. B. Meyer, K. Rummukainen and M. Shaposhnikov, JHEP 0301 (2003) 068.
R. Gregory, V. A. Rubakov and S. M. Sibiryakov, Phys. Rev. Lett. 84 (2000) 5928.
M. Gogberashvili and P. Midodashvili, Phys. Lett. B515 (2001) 447.
| [] |
[
"An operationalistic reformulation of Einstein's equivalence principle",
"An operationalistic reformulation of Einstein's equivalence principle"
] | [
"Vladik Kreinovich ",
"R R Zapatrin "
] | [] | [] | Einstein's equivalence principle is formulated in terms of the accuracy of measurements and its dependence on the size of the area of measurement. It is shown that different refinements of the statement 'the spacetime is locally flat' lead to different conclusions about the spacetime geometry. | null | [
"https://export.arxiv.org/pdf/gr-qc/9705085v1.pdf"
] | 693,984 | gr-qc/9705085 | 8e95c34648368ad624c82dbfd8a5a0852bed9851 |
An operationalistic reformulation of Einstein's equivalence principle
May 1997
Vladik Kreinovich
R R Zapatrin
An operationalistic reformulation of Einstein's equivalence principle
arXiv:gr-qc/9705085v1 30 May 1997
Einstein's equivalence principle is formulated in terms of the accuracy of measurements and its dependence on the size of the area of measurement. It is shown that different refinements of the statement 'the spacetime is locally flat' lead to different conclusions about the spacetime geometry.
1 Introduction
Analyzing gravitational phenomena, Einstein used the following postulate (which he called the equivalence principle): whatever measurements we perform inside some spacetime region, we cannot distinguish between the case when there is a homogeneous gravitational field and the case when all bodies in this region have constant acceleration with respect to some inertial frame. (And since any field can be considered homogeneous in a small enough region, this principle can be applied to a neighborhood of any point.)
Einstein concluded from this principle that the spacetime metric is pseudo-Riemannian and that, in the absence of all fields other than gravity, test particles travel along geodesics of this metric [1].
Yet V. A. Fock [2] noticed that this formulation is not exact enough: according to general relativity, the presence of gravitation means that spacetime is curved, i.e. the curvature tensor is nonzero, R_{ijkl} ≠ 0. This is valid in any frame, in particular in a uniformly accelerated one. Hence in the presence of gravity R_{ijkl} ≠ 0, while in a uniformly accelerated frame R_{ijkl} = 0, and this can be distinguished experimentally by emitting a "cloud" of particles endowed with clocks in various directions with various speeds. With the help of the clocks one can determine the proper time ds along every trajectory and then calculate the metric. By numerical differentiation of the metric we can obtain the values of R_{ijkl} and then compare them with zero.
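The numerical-differentiation step in this argument can be made concrete with a small sketch. The code below is our own illustration, not from the paper: instead of raw clock readings it starts from an already reconstructed metric, here a unit 2-sphere, and estimates one curvature component by central finite differences, which is enough to distinguish it from zero.

```python
# Estimate a Riemann tensor component of a given 2D metric by finite
# differences, mimicking the "measure, then differentiate" procedure.
import numpy as np

def metric(q):
    theta, phi = q
    return np.array([[1.0, 0.0], [0.0, np.sin(theta)**2]])  # unit sphere

def christoffel(q, h=1e-5):
    g = metric(q)
    ginv = np.linalg.inv(g)
    dg = np.zeros((2, 2, 2))                 # dg[c, a, b] = d_c g_ab
    for c in range(2):
        dq = np.zeros(2); dq[c] = h
        dg[c] = (metric(q + dq) - metric(q - dq)) / (2 * h)
    Gamma = np.zeros((2, 2, 2))              # Gamma[a, b, c] = Gamma^a_bc
    for a in range(2):
        for b in range(2):
            for c in range(2):
                Gamma[a, b, c] = 0.5 * sum(
                    ginv[a, d] * (dg[b, d, c] + dg[c, d, b] - dg[d, b, c])
                    for d in range(2))
    return Gamma

def riemann_component(q, h=1e-4):
    # R^a_{bcd} = d_c Gamma^a_bd - d_d Gamma^a_bc + Gamma Gamma terms,
    # evaluated for (a, b, c, d) = (theta, phi, theta, phi)
    d1 = (christoffel(q + [h, 0]) - christoffel(q - [h, 0])) / (2 * h)
    d2 = (christoffel(q + [0, h]) - christoffel(q - [0, h])) / (2 * h)
    Gm = christoffel(q)
    a, b, c, d = 0, 1, 0, 1
    return (d1[a, b, d] - d2[a, b, c]
            + sum(Gm[a, e, c] * Gm[e, b, d] - Gm[a, e, d] * Gm[e, b, c]
                  for e in range(2)))

q = np.array([1.0, 0.3])
print(riemann_component(q))   # ~ sin(theta)^2: clearly nonzero
```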
That is why most authors do postulate the Riemannian metric within the strict mathematical account of general relativity. We would like to give here more profound and, at the same time, mathematically stricter grounds for this fact.
The main drawback of the traditional definition of the Riemannian geometry of spacetime is that it is formulated in terms of lengths (viz. proper times) of idealized infinitely small intervals rather than real ones of finite size. Besides that, this definition demands the lengths of space-like intervals to be determined, which is not desirable from the operationalistic point of view. Some authors (see, e.g., [5]) give an equivalent definition involving only proper time along finite parts of time-like curves, and postulate the metric to be Riemannian in the sense of this definition. This is more operationalistic, but still not motivated physically.
In our paper we show that one can reformulate the original Einstein equivalence principle in such a way that both the Riemannian metric of spacetime and the geodesic motion of test particles are obtained from it.
Considering only the gravitational field, this result is of little interest, but the question becomes essential in the presence of non-gravitational fields: it constrains the choice of the covariant analogue of an equation. For example, in [4] it is asserted that the conformally invariant scalar field equation □φ + (1/6)Rφ = 0 comes into contradiction with the equivalence principle since it contains the scalar curvature R (a more detailed analysis of this issue can be found in [3]). Nevertheless, such reasoning does not seem convincing: for example, the usual Maxwell equations in curved space do not contain the curvature explicitly, and they undoubtedly agree with the equivalence principle. However, if we differentiate both parts of the equations twice and exchange F_{kl;ij} for F_{kl;ji} + R_{ij}{}^p{}_l F_{kp} + R_{ij}{}^p{}_k F_{pl}, we obtain equations containing the curvature explicitly. Thus the presence of the curvature tensor in an equation does not at all mean a violation of the equivalence principle. The formulation of the equivalence principle proposed below allows one to solve this problem in a physically meaningful and mathematically strict way.
The idea of our reformulation is the following. The Fock experiment with the cloud of particles described above is idealized, since all real measurements have errors. Therefore all the values calculated from these measurements, in particular the curvature, have errors too. Thus, if the error of the so-calculated curvature tensor is large enough (of the order of the curvature itself), it is not possible to determine whether the genuine value of the curvature tensor is equal to zero or not. Meanwhile, Einstein's principle claims that any real (rather than exact) measurement performed in a small enough region will not allow us to distinguish a real (possibly curved) space from a flat one.
2 Mathematical formulation
We begin with a formalization of the basic notions.
Definition 1 A spacetime region M is called ε-small with respect to some fixed frame iff for any i and any a, b ∈ M

|x^i(a) − x^i(b)| < ε .
This definition depends on the coordinate frame; however, the reformulation of the equivalence principle based on it turns out to be frame-independent.
The spacetime properties determine the relative motion of uncharged particles. There are devices to measure coordinates and other kinematic features of the motion: time, velocity (e.g. using the Doppler effect), acceleration, etc. However, one mostly measures time or length (e.g. the Doppler measurement of velocity involves determining a frequency, i.e. the time interval between neighboring maxima). So, in what follows we shall consider only time and length measurements. Clearly, the measurement of small intervals of time and length can be performed with a smaller absolute error. Let us denote by λ(ε) the error of our measurements in an ε-small region.
Definition 2 A spacetime is a triple (M, Γ, τ) with M a smooth manifold, Γ a family of smooth curves on M (trajectories of test particles), and, for any γ ∈ Γ, a smooth function τ: γ × γ → R (the proper time along γ) such that

τ(a, c) = τ(a, b) + τ(b, c)   whenever   γ⁻¹(a) < γ⁻¹(b) < γ⁻¹(c) .

The flat spacetime is the triple (M₀, Γ₀, τ₀), where M₀ = R⁴, Γ₀ is the set of all time-like straight lines and τ₀ is the Minkowski metric.

Definition 3 A spacetime (M, Γ, τ) is called λ-flat if for any point m ∈ M there exists a frame such that, for all sufficiently small ε > 0, all coordinate and time measurements in any ε-small region of M coincide (up to an error ≤ λ(ε)) with the analogous results in the flat spacetime.
The final formulation of the equivalence principle must not, of course, depend on the accessible devices (i.e. on the kind of the function λ). Thus, instead of a single function we must operate with a class of such functions Λ = {λ}. We shall assume the possibility of refining any measurement; namely, together with every λ, the class Λ is assumed to contain also the function kλ for every 0 < k < 1.
Definition 4 A spacetime is said to be Λ-flat if it is λ-flat for all λ ∈ Λ.
The formulation of the equivalence principle we propose is the following:
the spacetime is Λ-flat.

If we exclude degenerate cases (Λ is too large, so that Fock's reasoning is valid, or Λ is too small, so that Λ-flatness implies nothing), the proposed formulation yields a basis for both the Riemannian metric and geodesic motion.
3 Main results
Theorem 1 For any class Λ of functions λ: R₊ → R₊ such that λ ∈ Λ implies kλ ∈ Λ for any positive k ≤ 1, one of the following statements is valid:
A. Any spacetime is Λ-flat.
B. Only pseudo-Riemannian spacetimes are Λ-flat and the class of curves Γ is arbitrary.
C. Only pseudo-Riemannian spacetimes are Λ-flat and Γ is the set of timelike geodesics.
D. Only flat spacetime is Λ-flat.
The proof will be organized according to the following plan: the cases A, B, C and D of the theorem are distinguished by which of the conditions ∃λ ∈ Λ: lim_{ε→0} λ(ε)/ε < +∞, ∃λ ∈ Λ: lim_{ε→0} λ(ε)/ε² < +∞ and ∃λ ∈ Λ: lim_{ε→0} λ(ε)/ε³ < +∞ hold. [The original shows this case distinction as a diagram.]

Lemma 1 If there exists λ ∈ Λ for which

lim_{ε→0} λ(ε)/ε = K < +∞ ,   (1)
then any Λ-flat spacetime is pseudo-Riemannian.
Proof. Consider a curve γ ∈ Γ and a point a ∈ M in a coordinate frame {x^i}.
Let a have the coordinates {x^i₀} in this frame. In accordance with the definition of lim inf, there exists a sequence ε_n such that λ(ε_n)/ε_n tends to K. Let all ε_n be small enough (this can be assumed with no loss of generality); then for any n all measurements in the region
|x^i − x^i₀| < ε_n/2
with error not greater than λ(ε_n) coincide with the same measurements in flat spacetime; in particular,
|δτ − δτ₀| ≤ λ(ε_n) ,   where   δτ = τ(x₀ + δx, x₀) ,   δτ₀ = τ₀(x₀ + δx, x₀) ,
and τ, τ₀ are the metrics along two geodesics, both passing through the region described above.
If x₀ + δx lies on the boundary of the region, then |δx^i| ≥ Cε_n for some constant C; thus
|δτ/δx^i − δτ₀/δx^i| ≤ λ(ε_n)/(Cε_n) ,   therefore   δτ₀/δx^i − λ(ε_n)/(Cε_n) ≤ δτ/δx^i ≤ δτ₀/δx^i + λ(ε_n)/(Cε_n) .
So when n → ∞ (and ǫ n → 0)
dτ₀/dx^i − K/C ≤ lim inf δτ/δx^i ≤ lim sup δτ/δx^i ≤ dτ₀/dx^i + K/C .
All the above reasonings are valid for any kλ with k < 1, hence

dτ₀/dx^i − kK/C ≤ lim inf δτ/δx^i ≤ lim sup δτ/δx^i ≤ dτ₀/dx^i + kK/C .

Since k can be taken arbitrarily small, we have lim δτ/δx^i = dτ₀/dx^i, i.e. at any point in some coordinate frame the metric of our spacetime coincides with the Minkowskian one; that is why it is pseudo-Riemannian. ✷

Lemma 2 If for any λ ∈ Λ

lim_{ε→0} λ(ε)/ε = +∞ ,   (2)

then any spacetime is Λ-flat.
Proof.
(2) implies λ(ε)/ε → +∞ as ε → 0. Hence for arbitrary N we have λ(ε) > Nε beginning from some ε. In particular, this holds for N > sup |dτ/dx^i|, hence δτ < N δx^i ≤ Nε < λ(ε), thus |δτ₀ − δτ| < λ(ε) for any λ ∈ Λ. ✷

Lemma 3 If there exists λ ∈ Λ such that

lim_{ε→0} λ(ε)/ε² < +∞ ,   (3)
then in any Λ-flat spacetime the set Γ is a set of geodesics.
Proof is similar to that of Lemma 1, but uses the second derivatives of τ:

|d²x^i/dτ² − d²x^i/dτ₀²| ≤ λ(ε)/ε² ,

hence in some coordinate frame d²x^i/dτ² = 0, thus D²x^i/ds² = 0. ✷

Lemma 4 If for any λ ∈ Λ

lim_{ε→0} λ(ε)/ε² = +∞ ,   (4)
then any pseudo-Riemannian space with any set of trajectories Γ is Λ-flat.
Proof is similar to that of Lemma 2. We obtain that

λ(ε) > |D²x^i/ds²| = |D²x^i/ds² − 0| = |D²x^i/ds² − D²x₀^i/ds²| . ✷

Lemma 5 If for some λ ∈ Λ

lim_{ε→0} λ(ε)/ε³ < +∞ ,   (5)
then any Λ-flat spacetime is flat.
Proof. In this case Fock's reasoning is valid: in terms of d³τ/dx_i³ we can determine the curvature tensor with arbitrarily small error. ✷

Lemma 6 If for any λ ∈ Λ lim_{ε→0} λ(ε)/ε³ = +∞ (6) and for some λ ∈ Λ lim_{ε→0} λ(ε)/ε² < +∞ (7),
then any pseudo-Riemannian space with the set of geodesics Γ is Λ-flat.
Proof is carried out likewise. This completes the proof of the main theorem. ✷
The physical meaning of the results obtained is the following: lim λ(ε)/ε < +∞ means the possibility of arbitrarily exact measurement of velocities, lim λ(ε)/ε² < +∞ the possibility of arbitrarily exact measurement of accelerations, and lim λ(ε)/ε³ < +∞ the possibility of arbitrarily exact measurement of derivatives of accelerations. So the physical meaning of the result we obtained is the following: the equivalence principle is valid only for measuring velocities and accelerations (at any point they can be turned to zero by a suitable choice of the coordinate frame), but not for derivatives of accelerations (which correspond to invariant tidal forces).
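As a toy illustration of this trichotomy, assume power-law error classes λ(ε) = ε^p (an assumption made purely for this sketch; the theorem applies to arbitrary classes Λ). The four cases then correspond to ranges of the exponent p:

```python
# Toy case analysis of Theorem 1 for power-law error classes lambda(eps) = eps**p.
def case(p: float) -> str:
    if p >= 3:      # some lambda with lim lambda(eps)/eps**3 < +inf  (Lemma 5)
        return "D: only flat spacetime is Lambda-flat"
    if p >= 2:      # lim lambda/eps**3 = +inf, but lim lambda/eps**2 < +inf
        return "C: pseudo-Riemannian spacetimes, Gamma = time-like geodesics"
    if p >= 1:      # lim lambda/eps**2 = +inf, but lim lambda/eps < +inf
        return "B: pseudo-Riemannian spacetimes, Gamma arbitrary"
    return "A: any spacetime is Lambda-flat"   # lim lambda/eps = +inf

for p in (0.5, 1.5, 2.5, 3.5):
    print(p, "->", case(p))
```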
4 Some remarks on other fields
We require the results of all measurements (including trajectories of particles traveling under other fields) performed in an ε-small region (see Def. 1) to coincide up to λ(ε) with the results of analogous experiments in flat spacetime. We also require this to hold for all functions λ of a class Λ containing a function λ satisfying lim λ(ε)/ε² < +∞ and containing no function λ with lim λ(ε)/ε³ < +∞. Let us demonstrate that the equivalence principle in this formulation holds for the equation of a radiating charged particle and for theories with a conformally invariant scalar field, and does not hold for the scalar-tensor Brans-Dicke theory.
The equation describing the motion of a radiating charged particle contains second derivatives of its velocity, ü^i = d²u^i/ds², viz. third derivatives of coordinates, measured up to λ(ε)/ε³. However, λ(ε)/ε³ tends to infinity for any λ ∈ Λ, hence the smaller the region, the greater the vagueness in determining ü^i; thus any equation containing ü^i does not contradict our equivalence principle (including equations containing the curvature explicitly).
On the conformally invariant scalar field: as we already mentioned, the only way to measure the curvature is by exploring its influence on trajectories of particles, in accordance with the formula ẍ^i = φ^{,i}. Since ẍ^i is measured up to λ(ε)/ε³, the measurement of φ^{,i} has the same accuracy, hence □φ = (φ^{,i})_{,i} is measured up to λ(ε)/ε³ and, in accordance with the preceding reasoning, any equation containing □φ does not contradict the operational equivalence principle.
In Brans-Dicke theory the principle does not hold since micro-black holes do not move along geodesics there.
5 Locally almost isotropic space is almost uniform: on the physical meaning of Schur's theorem
Schur's theorem asserts that if a space is locally isotropic (i.e. at any point the curvature tensor singles out no preferred directions),
R_{ijkl} = K(x)(g_{ik}g_{jl} − g_{il}g_{jk}) ,   (8)
then the space is homogeneous.
Since all real measurements are inexact, their results can establish only local almost-isotropy. To what extent can we then consider the space to be homogeneous? This problem was raised in the 1960s by Yu. A. Volkov.
The main difficulty in solving the problem is that the usual proof of Schur's theorem is based on the Bianchi identities applied to (8), where after contraction we obtain K_{,i} = 0, hence K = const. However, the fact that curvatures along different directions are almost equal does not guarantee that K_{;i} is small enough. So, if we regard a space with almost equal curvatures along all directions as a locally isotropic one, we cannot immediately deduce any uniformity. The goal of this section is to answer this question, taking local almost-isotropy to mean the closeness of the results of all measurements along any direction.
Definition 5 A spacetime region M is called ε-small if for any a, b ∈ M: |x^i(a) − x^i(b)| ≤ ε and ρ(a, b) ≤ ε,
where ρ is a metric on M .
Definition 6 By the (x^i)-distance between points a, b ∈ M we shall mean max_i {|x^i(a) − x^i(b)|, ρ(a, b)}. The ε-neighborhood of a point a is the set of all points m of M such that the (x^i)-distance between m and a does not exceed ε. All geometric and kinematic measurements, as already mentioned in section 2, are reduced to the measurement of the metric, the distances and the proper time intervals along trajectories of particles.
Definition 7 A spacetime region M is ε-locally isotropic if for any a ∈ M one can define the action of an appropriate rotation group so that a is invariant under this action and all results obtained in the ε-neighborhood of a coincide, up to λ(ε), with the results obtained after the action of any element of the group.
Definition 8 In an analogous way we shall call a region δ-uniform if one can define an action of a translation group on this region so that the results of any measurement on a system of N particles coincide, with precision δ, with the analogous measurement on the system obtained from the first one by applying to it any element of the group. A region is called ε-locally δ-uniform if all the above is valid for measurements in ε-neighborhoods of any two points of the region. Further we shall consider only geodesically connected domains.
The problem now is to find the least δ (in terms of ε, λ and L) such that any ε-locally λ-isotropic region of size L is δ-uniform.
Theorem 2 Any ε-locally λ-isotropic region of size L is:

1. Cε-locally (CLλ/ε)-uniform,
2. C(L/ε)²λ-uniform,

where C = const, and there are spaces for which these estimates cannot be improved.
Sketch of the Proof. 1). Since the local metric of a Riemannian space is close to the Euclidean one, there exists some C of order 1 such that a Cε-neighborhood can be inscribed into the intersection of the ε-neighborhoods of two points at distance ε.
2). Consider two Cε-neighborhoods of arbitrary points x, y ∈ M. Let x₁, y₁ be points on their boundaries (in the pseudo-Riemannian case we choose these points so that xx₁ and yy₁ are of the same kind). Since M is geodesically connected, x₁ and y₁ can be connected by a geodesic whose (x^i)-length does not exceed L (see Definition 6). It can be divided into L/ε parts of (x^i)-length of order ε. Then we approximate the geodesic by a broken line so that the intervals x₁z₁, z₁z₂, …, zₙy₁ have (x^i)-length ε. Now let us rotate the Cε-neighborhood of x so that it appears entirely inside the intersection of the ε-neighborhoods of x₁ and z₁. Then we rotate it around z₁, so that it gets into the ε-neighborhood of z₂, and so on until the ε-neighborhood of the point y. The results of all measurements change by no more than λ at each step, hence the results in the neighborhoods of x₁ and y₁ do not differ by more than λL/ε.
Here is an example when the estimate cannot be refined: when all the differences of measurements are of the same sign, i.e. the curvature varies monotonically from x₁ to y₁.
3). In a similar way we obtain the result for the global uniformity. A curve interval of length ∼ L is composed of ∼ CL/ε intervals of length ∼ Cε. The results of measurements along these intervals are indistinguishable up to Lλ/ε, hence the error in time or length measurement along the whole curve does not exceed (CL/ε) · (Lλ/ε) = C(L/ε)²λ. This completes the proof of Theorem 2.
6 Physical interpretation of the results and possible applications
If we perform in an ε-small region (see Definition 5) a measurement with error λ, we know the metric g_ij ∼ δτ/δx^j up to λ/ε, the Christoffel symbols Γ^i_jk ∼ ∂g_ik/∂x^j ∼ δ²τ/(δx^j)² up to λ/ε², and the curvature up to λ/ε³. At any point we can choose a coordinate frame making Γ^i_jk zero; therefore measurements with error λ ∼ ε² do not allow us to distinguish the non-isotropic case from a locally isotropic (and even from a flat) one, i.e. any Riemannian space is ε-locally Cε²-isotropic. For such a distinction the error of the curvature must not exceed its value K, so the ratio λ/(Kε²) characterizes the relative error of the measurement of local isotropy.
The relative error of measuring the local isotropy is the same. Therefore local isotropy implies local uniformity with an L/ε times greater error. Hence, if λ ∼ ε³, we obtain (for small enough ε) the ε²-uniformity established above for any Riemannian metric. To obtain non-trivial information on local uniformity one must have λ ≤ ε⁴, which corresponds to the possibility of a sufficiently exact measurement of the curvature tensor and its derivatives, viz. tidal forces and their spacetime gradients. Mathematically this means that if both R_ijkl and R_ijkl;m are almost isotropic, then the space is almost uniform.
Thus the physical result is the following: if accelerations and tidal forces are locally isotropic, then nothing can be said about the uniformity of the region. However, if the gradients of the tidal forces are also isotropic, then the space is locally uniform.
Imagine we verify the isotropy at a few points (e.g. close to the Earth), which are in general points like any others in space. If it happens that at these points the space is ε-locally λ-isotropic, then it is natural to assume ε-local λ-isotropy everywhere, and the results obtained give us an estimate of its homogeneity.
A. Einstein, The Meaning of Relativity, 4th edition, Princeton University Press, Princeton, New Jersey (1955).
V. A. Fock, Theorie von Raum, Zeit und Gravitation, Akademie-Verlag, Berlin (1960).
A. A. Grib and E. A. Poberii, On the difference between conformal and minimal coupling in general relativity, Helvetica Physica Acta 68 (1995) 380–395.
A. P. Lightman, W. H. Press, R. H. Price and S. A. Teukolsky, Problem Book in Relativity and Gravitation, Princeton University Press, Princeton, New Jersey (1975).
H.-J. Treder, Gravitationstheorie und Äquivalenzprinzip, Akademie-Verlag, Berlin (1971).
[
"Λ and Σ 0 Pair Production in Two-Photon Collisions at LEP",
"Λ and Σ 0 Pair Production in Two-Photon Collisions at LEP"
] | [
"B \nEUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH\n\n\nI. Physikalisches Institut, RWTH\nFRG § III. Physikalisches Institut, RWTH\nFRG §\nD-52056, D-52056Aachen, Aachen\n\nNational Institute for High Energy Physics, NIKHEF\nUniversity of Amsterdam\nNL-1009 DB AmsterdamThe Netherlands\n\nUniversity of Michigan\nAnn Arbor48109MIUSA\n\nLaboratoire d'Annecy-le-Vieux de Physique des Particules, LAPP,IN2P3-CNRS\nBP 110, F-74941 Annecy-le-Vieux CEDEXFrance\n\nInstitute of Physics\nUniversity of Basel\nCH-4056BaselSwitzerland\n\nLouisiana State University\nBaton Rouge70803LAUSA\n\nInstitute of High Energy Physics, IHEP\n100039BeijingChina △\n\nFRG § 9 University of Bologna and INFN-Sezione di Bologna\nHumboldt University\nD-10099, I-40126Berlin, BolognaItaly\n\nTata Institute of Fundamental Research\nMumbai (Bombay) 400 005India\n\nNortheastern University\n02115BostonMAUSA\n\nInstitute of Atomic Physics\nUniversity of Bucharest\nR-76900BucharestRomania\n\nCentral Research Institute for Physics of the Hungarian Academy of Sciences, H-1525 Budapest 114\nHungary ‡\n\nMassachusetts Institute of Technology\n02139CambridgeMAUSA\n\nPanjab University\n160, 014ChandigarhIndia\n\n¶ 17 INFN Sezione di Firenze\nKLTE-ATOMKI\nH-4010DebrecenHungary\n\nUniversity of Florence\nI-50125FlorenceItaly\n\nEuropean Laboratory for Particle Physics, CERN, CH-1211\nGeneva 23Switzerland\n\nWorld Laboratory, FBLJA Project, CH-1211\nGeneva 23Switzerland\n\nUniversity of Geneva\nCH-1211 Geneva 4Switzerland\n\nChinese University of Science and Technology\nUSTC\n230, 029Hefei, AnhuiChina △\n\nUniversity of Lausanne\nCH-1015LausanneSwitzerland\n\nInstitut de Physique Nucléaire de Lyon, IN2P3-CNRS\nUniversité Claude Bernard\nF-69622VilleurbanneFrance\n\nCentro de Investigaciones Energéticas, Medioambientales y Tecnológicas, CIEMAT\nFlorida Institute of Technology\nSpain♭ 2528040, 32901Madrid, MelbourneE-, FLUSA\n\nINFN-Sezione di Milano\nI-20133MilanItaly\n\nInstitute of Theoretical and Experimental Physics, ITEP\nMoscowRussia\n\nINFN-Sezione di Napoli\nUniversity of Naples\nI-80125NaplesItaly\n\nDepartment of Physics\nUniversity of Cyprus\nNicosiaCyprus\n\nUniversity of Nijmegen and NIKHEF\n6525 EDNijmegenNLThe Netherlands\n\nCalifornia Institute of Technology\n91125PasadenaCAUSA\n\nINFN-Sezione di Perugia and Università Degli Studi di Perugia\nI-06100PerugiaItaly\n\nNuclear Physics Institute, St. Petersburg\nRussia\n\nCarnegie Mellon University\n15213PittsburghPAUSA\n\nINFN-Sezione di Napoli\nUniversity of Potenza\nI-85100PotenzaItaly\n\nPrinceton University\n08544PrincetonNJUSA\n\nUniversity of Californa\n92521RiversideCAUSA\n\nINFN-Sezione di Roma\nUniversity of Rome, \"\nLa Sapienza\", I-, 39 University and INFN, Salerno, I-84100 Salerno00185RomeItaly, Italy\n\nUniversity of California\nSan Diego92093CAUSA\n\nCentral Lab. of Mechatronics and Instrumentation, BU-1113 Sofia\nBulgarian Academy of Sciences\nBulgaria\n\nThe Center for High Energy Physics\nKyungpook National University\n702-701TaeguRepublic of Korea\n\nPurdue University\nWest Lafayette47907INUSA\n\nPaul Scherrer Institut\nPSI\nSwitzerland 45 DESY, FRG 46 Eidgenössische Technische Hochschule, ETH Zürich5232, D-15738, CH-8093Villigen, Zeuthen, ZürichCHSwitzerland\n\nFRG 48\nUniversity of Hamburg\nD-22761Hamburg\n\nNational Central University\nChung-LiTaiwan, China\n\nDepartment of Physics\nNational Tsing Hua University\nTaiwan, China\n"
] | [
"EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH\n",
"I. Physikalisches Institut, RWTH\nFRG § III. Physikalisches Institut, RWTH\nFRG §\nD-52056, D-52056Aachen, Aachen",
"National Institute for High Energy Physics, NIKHEF\nUniversity of Amsterdam\nNL-1009 DB AmsterdamThe Netherlands",
"University of Michigan\nAnn Arbor48109MIUSA",
"Laboratoire d'Annecy-le-Vieux de Physique des Particules, LAPP,IN2P3-CNRS\nBP 110, F-74941 Annecy-le-Vieux CEDEXFrance",
"Institute of Physics\nUniversity of Basel\nCH-4056BaselSwitzerland",
"Louisiana State University\nBaton Rouge70803LAUSA",
"Institute of High Energy Physics, IHEP\n100039BeijingChina △",
"FRG § 9 University of Bologna and INFN-Sezione di Bologna\nHumboldt University\nD-10099, I-40126Berlin, BolognaItaly",
"Tata Institute of Fundamental Research\nMumbai (Bombay) 400 005India",
"Northeastern University\n02115BostonMAUSA",
"Institute of Atomic Physics\nUniversity of Bucharest\nR-76900BucharestRomania",
"Central Research Institute for Physics of the Hungarian Academy of Sciences, H-1525 Budapest 114\nHungary ‡",
"Massachusetts Institute of Technology\n02139CambridgeMAUSA",
"Panjab University\n160, 014ChandigarhIndia",
"¶ 17 INFN Sezione di Firenze\nKLTE-ATOMKI\nH-4010DebrecenHungary",
"University of Florence\nI-50125FlorenceItaly",
"European Laboratory for Particle Physics, CERN, CH-1211\nGeneva 23Switzerland",
"World Laboratory, FBLJA Project, CH-1211\nGeneva 23Switzerland",
"University of Geneva\nCH-1211 Geneva 4Switzerland",
"Chinese University of Science and Technology\nUSTC\n230, 029Hefei, AnhuiChina △",
"University of Lausanne\nCH-1015LausanneSwitzerland",
"Institut de Physique Nucléaire de Lyon, IN2P3-CNRS\nUniversité Claude Bernard\nF-69622VilleurbanneFrance",
"Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas, CIEMAT\nFlorida Institute of Technology\nSpain♭ 2528040, 32901Madrid, MelbourneE-, FLUSA",
"INFN-Sezione di Milano\nI-20133MilanItaly",
"Institute of Theoretical and Experimental Physics, ITEP\nMoscowRussia",
"INFN-Sezione di Napoli\nUniversity of Naples\nI-80125NaplesItaly",
"Department of Physics\nUniversity of Cyprus\nNicosiaCyprus",
"University of Nijmegen and NIKHEF\n6525 EDNijmegenNLThe Netherlands",
"California Institute of Technology\n91125PasadenaCAUSA",
"INFN-Sezione di Perugia and Università Degli Studi di Perugia\nI-06100PerugiaItaly",
"Nuclear Physics Institute, St. Petersburg\nRussia",
"Carnegie Mellon University\n15213PittsburghPAUSA",
"INFN-Sezione di Napoli\nUniversity of Potenza\nI-85100PotenzaItaly",
"Princeton University\n08544PrincetonNJUSA",
"University of Californa\n92521RiversideCAUSA",
"INFN-Sezione di Roma\nUniversity of Rome, \"\nLa Sapienza\", I-, 39 University and INFN, Salerno, I-84100 Salerno00185RomeItaly, Italy",
"University of California\nSan Diego92093CAUSA",
"Central Lab. of Mechatronics and Instrumentation, BU-1113 Sofia\nBulgarian Academy of Sciences\nBulgaria",
"The Center for High Energy Physics\nKyungpook National University\n702-701TaeguRepublic of Korea",
"Purdue University\nWest Lafayette47907INUSA",
"Paul Scherrer Institut\nPSI\nSwitzerland 45 DESY, FRG 46 Eidgenössische Technische Hochschule, ETH Zürich5232, D-15738, CH-8093Villigen, Zeuthen, ZürichCHSwitzerland",
"FRG 48\nUniversity of Hamburg\nD-22761Hamburg",
"National Central University\nChung-LiTaiwan, China",
"Department of Physics\nNational Tsing Hua University\nTaiwan, China"
] | [] | Strange baryon pair production in two-photon collisions is studied with the L3 detector at LEP. The analysis is based on data collected at e+e− centre-of-mass energies from 91 GeV to 208 GeV, corresponding to an integrated luminosity of 844 pb−1. The processes γγ → ΛΛ and γγ → Σ0Σ0 are identified. Their cross sections as a function of the γγ centre-of-mass energy are measured and results are compared to predictions of the quark-diquark model. | 10.1016/s0370-2693(02)01781-1 | [
"https://export.arxiv.org/pdf/hep-ex/0204025v1.pdf"
] | 11,362,516 | hep-ex/0204025 | 61f2e70b2bdadabb9fa5631dd04b799ceafed8be |
Λ and Σ0 Pair Production in Two-Photon Collisions at LEP
arXiv:hep-ex/0204025v1 19 Apr 2002
February 6, 2002
Submitted to Phys. Lett.
The L3 Collaboration
Strange baryon pair production in two-photon collisions is studied with the L3 detector at LEP. The analysis is based on data collected at e+e− centre-of-mass energies from 91 GeV to 208 GeV, corresponding to an integrated luminosity of 844 pb−1. The processes γγ → ΛΛ and γγ → Σ0Σ0 are identified. Their cross sections as a function of the γγ centre-of-mass energy are measured and results are compared to predictions of the quark-diquark model.
Introduction
Electron-positron colliders are a suitable place for the study of two-photon interactions, via the process e+e− → e+e−γ*γ* → e+e−X, where γ* denotes a virtual photon. The outgoing electron and positron carry almost the full beam energy and are usually undetected, due to their small transverse momenta. The final state X has, therefore, a low mass relative to the e+e− centre-of-mass energy, √s. The small photon virtuality allows the extraction of the cross section σ(γγ → X) in real photon collisions, once the photon flux is calculated by QED [1]. The process γγ → baryon antibaryon is sensitive to the quark structure of the baryon. Calculations of the cross section for this process were performed using the hard scattering approach [2]. Due to the failure of a three-quark calculation [3] to correctly predict the γγ → pp cross section in the GeV region [4], an alternative quark-diquark model was proposed [5]. This model includes non-perturbative effects through the use of diquarks, a qq bound state within the baryon [6].
In this letter we present the first measurements at LEP of the cross sections σ(γγ → ΛΛ) and σ(γγ → Σ0Σ0). We analysed a total integrated luminosity of 844 pb−1 collected with the L3 detector [7]. Out of this sample, 157 pb−1 were collected around the Z peak and 687 pb−1 at centre-of-mass energies from 161 GeV to 208 GeV. The analysis is based on the central tracking system and the high-resolution BGO electromagnetic calorimeter. The events are selected by the track triggers [8].
Monte Carlo events are generated [9] for each beam energy and for each process within the formalism of Reference 1. A uniform spectrum as a function of the two-photon mass, Wγγ, from threshold to 5 GeV is used. The two-body final states ΛΛ and Σ0Σ0 are generated isotropically in the centre-of-mass of the γγ system. The events are then passed through the full L3 detector simulation using the GEANT [10] and GHEISHA [11] programs and are reconstructed following the same procedure as for the data. Time-dependent detector inefficiencies, as monitored during the data-taking period, are taken into account.
The CLEO, TPC/2γ and VENUS collaborations [12-14] searched for the reaction e+e− → e+e−ΛΛ at √s = 10.6 GeV, 14.5 GeV and 58 GeV, respectively. Only CLEO and TPC/2γ observe a signal. No results for the e+e− → e+e−Σ0Σ0 cross section have been reported so far. Our results are compared to these experiments and to theoretical predictions of the quark-diquark model.
Event selection
In order to study the ΛΛ and Σ0Σ0 final states, the Σ0 → Λγ, Σ0 → Λγ, Λ → pπ− and Λ → pπ+ decays are considered. The preselection of events is based on charged tracks and proceeds as follows:
• There must be four charged tracks in the tracking chamber with a net charge of zero. These tracks must be reconstructed from at least 12 hits out of a maximum of 62.
• There must be two secondary vertices at a distance from the primary interaction vertex greater than 3 mm in the transverse plane.
• The angle in the transverse plane between the flight direction of each Λ candidate and the total momentum vector of the two outgoing tracks must be less than 0.15 rad.
• Events with a secondary vertex due to a photon conversion are rejected. A conversion is identified if, assigning the electron mass to the two tracks, their effective mass is below 0.05 GeV.
Λ identification
The two secondary vertices are assigned to the Λ and Λ decays. At each vertex, the p or the p is identified as the highest-momentum track. Monte Carlo studies show that this is the correct assignment in more than 99% of the cases. To suppress the dominant e+e− → e+e−K0sK0s → e+e−π+π−π+π− background, the following criteria are applied:
• The dE/dx measurement must be consistent with the Λ or Λ hypothesis. A confidence level CL > 0.001 is required for the proton, antiproton and pion candidates. This cut rejects 85% of the K0sK0s background. • The particle assignment is considered to be correct if either a) at least one of the tracks associated to a proton or an antiproton has more than 30 hits and a dE/dx confidence level ratio CL(p)/CL(π) > 10, or b) the ratio between the electromagnetic transverse energy, ET, and the transverse momentum, pT, of the antiproton candidate is greater than 0.7. This cut eliminates 70% of the pions and keeps 77% of the antiproton signal, as shown in Figure 1.
The dE/dx identification has a high discriminating power for particles with momentum below 700 MeV, whereas the ET/pT cut becomes more efficient for higher-momentum antiprotons. These criteria suppress 83% of the remaining K0sK0s background. • If two K0s candidates are reconstructed, the event is rejected. A K0s candidate is defined as a system with a reconstructed π+π− mass within a ±30 MeV interval around the nominal K0s mass. Only 1% of the original K0sK0s background remains after this cut.
In addition to the previous requirements, a cut |cos θ*| < 0.6 is applied, where θ* is the polar angle of the Λ in the two-photon centre-of-mass system, to match the experimental acceptance with the range of the theoretical predictions. Clean Λ and Λ signals are observed in the distributions of the pπ− and pπ+ masses, presented in Figures 2a and 2b. The Λ and Λ masses are found to be mΛ = 1.113 ± 0.006 GeV and mΛ = 1.115 ± 0.006 GeV, respectively, in agreement with the nominal value of 1.116 GeV [15]. The final sample contains 66 inclusive ΛΛ candidates. They are selected within a radius of 40 MeV around the nominal Λ mass in the plane of the effective masses m(pπ−) vs. m(pπ+), shown in Figures 2c and 2d. The remaining K0sK0s contamination is estimated by Monte Carlo simulation to be less than 1%. The normalisation of the K0sK0s Monte Carlo background is determined from our previous measurement of this channel [16]. Within the available statistics, the hypothesis of an isotropic distribution of the ΛΛ signal is verified.
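The cut flow described above can be summarized in a short sketch; this is not the collaboration's software, and the event fields below are illustrative assumptions.

```python
# Schematic of the inclusive Lambda-pair selection described above.
from dataclasses import dataclass

M_LAMBDA = 1.116  # GeV, nominal Lambda mass [15]

@dataclass
class Candidate:
    m_p_pim: float         # m(p pi-) in GeV
    m_pbar_pip: float      # m(pbar pi+) in GeV
    cos_theta_star: float  # Lambda polar angle in the two-photon CM system
    n_kshort: int          # reconstructed K0s candidates (pi+pi- within 30 MeV)

def keep(c: Candidate) -> bool:
    # 40 MeV circle around (M_LAMBDA, M_LAMBDA) in the mass plane
    in_circle = ((c.m_p_pim - M_LAMBDA) ** 2
                 + (c.m_pbar_pip - M_LAMBDA) ** 2) ** 0.5 < 0.040
    return c.n_kshort < 2 and abs(c.cos_theta_star) < 0.6 and in_circle

print(keep(Candidate(1.113, 1.115, 0.2, 0)))   # True: enters the 66-event sample
```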
Σ0 identification
The reconstruction of Σ0 and Σ0 candidates is performed by combining the selected Λ and Λ with photon candidates. A photon candidate is defined as a shower in the electromagnetic calorimeter with at least two adjacent crystals and an energy between 50 MeV and 200 MeV. Monte Carlo studies show that 91% of the photons emitted by a Σ0 have an energy below 200 MeV, which is compatible with the nominal Σ0 mass of 1.193 GeV. Photon isolation criteria are also applied: there must be no charged tracks within 200 mrad around the photon direction, and the cosine of the angle between the antiproton and the photon directions must be less than 0.8. To identify the Σ0, the mass difference ∆m = m(pπγ) − m(pπ) is used, as presented in Figure 3. A Σ0 candidate corresponds to the mass interval 47 MeV < ∆m < 107 MeV. Out of the 66 selected ΛΛ events, 31 have Σ0 candidates.
Exclusive ΛΛ and Σ0Σ0 identification
In order to select the events from the exclusive reactions γγ → ΛΛ and γγ → Σ0Σ0, the transverse momentum of the four charged particles, PT, is required to be less than 0.5 GeV. This cut rejects events containing contributions from other final states such as Ξ0Ξ0 or Σ0(1385)Σ0(1385).
Some of these states can still pass the PT cut, but their contribution to the final sample is negligible, given the magnitude of their cross sections [6]. The PT requirement rejects less than 8% of the events corresponding to the exclusive final states ΛΛ and Σ0Σ0. Since the photons emitted by the Σ0 candidates have a low energy, they give a small contribution to the total transverse momentum imbalance. The final sample contains 33 events. The numbers of selected events for the different e+e− centre-of-mass energies are listed in Table 1. A typical ΛΛ event is shown in Figure 4. The relative proportions of the ΛΛ and Σ0Σ0 final states in the sample are determined as follows. An event is labelled as Σ0Σ0-like if a Σ0 or a Σ0 candidate is observed, and as ΛΛ-like otherwise. With these criteria, 19 ΛΛ-like and 14 Σ0Σ0-like events are found in the data. The true fractions r_j (j = ΛΛ, Σ0Σ0) of the two components are determined by an extended maximum-likelihood fit with the constraint r_ΛΛ + r_Σ0Σ0 = 1. The likelihood function to be maximized is:
L = [n_t^{N_t} e^{−n_t} / N_t!] ∏_i [n_i^{N_i} e^{−n_i} / N_i!] ,
where N_t and n_t correspond, respectively, to the total numbers of observed and expected events, and N_i and n_i to the numbers of observed and expected i-like events. The latter is given by:
n_i = (Σ_j p_ij r_j) n_t ,
where p_ij is the probability of identifying an event corresponding to the final state j as i-like. The relative probabilities p_ij are determined by Monte Carlo and shown in Table 2, together with their statistical uncertainties. The fractions r_j and the numbers of events n_t r_j obtained by the fit are given in Table 3. The cross sections for the γγ → ΛΣ0 and γγ → Σ0Λ processes are predicted to be negligible compared to the other channels [6]. In order to test this assumption, an analysis with the three components ΛΛ, ΛΣ0 + Σ0Λ and Σ0Σ0 is also carried out. The ΛΣ0 + Σ0Λ fraction is measured to be compatible with zero within a large uncertainty.
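For illustration, the fit can be reproduced numerically from the central values quoted in Table 2 and the observed counts; the sketch below is our own (not the collaboration's code) and recovers r_ΛΛ ≈ 0.38, consistent with Table 3.

```python
# Extended maximum-likelihood fit of (n_t, r_LL), with r_LL + r_S0S0 = 1.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

N = np.array([19.0, 14.0])            # observed LL-like and S0S0-like events
P = np.array([[0.880, 0.391],         # p_ij: rows = identified i, cols = generated j
              [0.120, 0.609]])

def nll(params):
    nt, r_ll = params
    r = np.array([r_ll, 1.0 - r_ll])  # constraint built in
    n = (P @ r) * nt                  # expected i-like counts
    logL = N.sum() * np.log(nt) - nt - gammaln(N.sum() + 1.0)
    logL += np.sum(N * np.log(n) - n - gammaln(N + 1.0))
    return -logL

res = minimize(nll, x0=[33.0, 0.5], bounds=[(1.0, 200.0), (1e-3, 1.0 - 1e-3)])
print("n_t, r_LL =", res.x)           # ~ (33, 0.38), cf. Table 3
```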
Results
The production cross sections σ(e+e− → e+e−ΛΛ) and σ(e+e− → e+e−Σ0Σ0) are measured as a function of the centre-of-mass energy. They refer to the following phase-space cuts: the effective mass of the ΛΛ pair, mΛΛ, less than 3.5 GeV, |cos θ*| < 0.6 and PT < 0.5 GeV. In the cross section determination it is assumed that the fractions r_i are independent of √s. The results are summarised in Table 4.
The detection efficiency is determined by Monte Carlo for each data taking period. It takes into account the Λ → pπ branching ratio and track geometrical acceptance (≃ 6%), the baryon identification criteria (≃ 26%) and the track trigger efficiency (≃ 10%). The efficiency of higher level triggers (≃ 90%) is estimated from the data themselves, using prescaled events. The contribution of the different selection cuts to the detection efficiency is detailed in Table 5. The total efficiencies for each data set are listed in Table 1.
The dominant source of systematic uncertainty is the selection procedure (7%); other sources are the finite Monte Carlo statistics (5%) and the determination of the trigger efficiency (3%). The Monte Carlo contribution includes the uncertainty on the p ij probabilities used in the determination of the fractions r i .
The cross sections σ(γγ → ΛΛ) and σ(γγ → Σ0Σ0) in real photon collisions are extracted as a function of Wγγ by deconvoluting the two-photon luminosity function and the form factor [17]. They are presented in Table 6. For the γγ → Σ0Σ0 case, the number of selected events as a function of Wγγ is obtained from the corresponding mΛΛ distribution, within a 4.0% uncertainty. The efficiencies and luminosity functions are evaluated for each Wγγ interval and centre-of-mass energy. The efficiencies increase with Wγγ, reflecting the expected rise in the detector acceptance. The trigger and track identification efficiencies do not depend on Wγγ. An additional systematic uncertainty of 5%, due to the choice of the photon form factor, is included. Figure 5a compares the present σ(γγ → ΛΛ) measurement with that of CLEO. The mass dependence of CLEO is steeper than the one we observe. Our data, fitted with a function of the form σ ∝ W^−n, give n = 7.6 ± 3.9. The quark-diquark model predicts n = 6, and a three-quark model n = 10 [18]. In Figures 5b and 5c, the γγ → ΛΛ and γγ → Σ0Σ0 cross section measurements are compared to the predictions of recent quark-diquark model calculations [6]. This model considers three different distribution amplitudes (DA) for the diquarks. The absolute predictions using the standard distribution amplitude (Standard DA) reproduce our data well. The asymptotic DA [19] and DZ-DA [20] models are excluded.
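A minimal sketch of such a power-law fit is given below; the Wγγ points and uncertainties are placeholders, not the measured values of Table 6, so the fitted exponent is illustrative only.

```python
# Weighted least-squares fit of sigma ~ A * W^-n to placeholder data points.
import numpy as np
from scipy.optimize import curve_fit

W = np.array([2.4, 2.9, 3.4])         # GeV, illustrative bin centres
sig = np.array([60.0, 14.0, 4.5])     # pb, illustrative cross sections
dsig = np.array([20.0, 6.0, 2.5])     # pb, illustrative uncertainties

def power_law(W, A, n):
    return A * W ** (-n)

popt, pcov = curve_fit(power_law, W, sig, sigma=dsig, p0=[1.0e4, 7.0],
                       absolute_sigma=True)
print("n = %.1f +- %.1f" % (popt[1], np.sqrt(pcov[1, 1])))
```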
Table 1: Integrated luminosity, overall efficiency and number of selected e+e− → e+e−ΛΛ and e+e− → e+e−Σ0Σ0 events for each data taking period (columns: √s (GeV), Luminosity (pb−1), Efficiency (%), Events; numerical entries not recovered). The efficiency refers to the phase-space cuts: 2.23 < m_ΛΛ < 3.5 GeV, |cos θ*| < 0.6 and P_T < 0.5 GeV. The quoted uncertainties are statistical.
Table 2: Numbers of observed events N_i identified as ΛΛ-like and Σ0Σ0-like, and relative probabilities p_ij of identifying an event generated in the final state j as i-like.

Identified as | N_i | p_ij (%), generated as ΛΛ | p_ij (%), generated as Σ0Σ0
ΛΛ-like | 19 | 88.0 ± 0.8 | 39.1 ± 1.0
Σ0Σ0-like | 14 | 12.0 ± 0.8 | 60.9 ± 1.0
Table 3: Results of the fit for the fractions r_j and the numbers n_t r_j of ΛΛ and Σ0Σ0 final states.

Final state | Fraction r_j | Events n_t r_j
ΛΛ | 0.38 ± 0.18 | 12.5 ± 6.1
Σ0Σ0 | 0.62 ± 0.18 | 20.5 ± 6.5
Table 4: The e+e− → e+e−ΛΛ and e+e− → e+e−Σ0Σ0 cross sections for 2.23 < m_ΛΛ < 3.5 GeV, |cos θ*| < 0.6 and P_T < 0.5 GeV. The first uncertainty is statistical, the second systematic.

√s (GeV) | σ(e+e− → e+e−ΛΛ) (pb) | σ(e+e− → e+e−Σ0Σ0) (pb)
91 | 0.81 ± 0.48 ± 0.07 | 1.33 ± 0.61 ± 0.12
161−208 | 0.75 ± 0.39 ± 0.07 | 1.23 ± 0.43 ± 0.11

Table 5: Average acceptance of the different selection cuts used in the analysis (numerical entries not recovered). The overall efficiency also takes into account the acceptance of the trigger system (≃ 80%).

Figure 1: Distribution of the ratio of the transverse energy deposited in the electromagnetic calorimeter, E_T, and the transverse momentum, p_T, for pions and antiprotons. The π− data distribution is obtained from a high purity K0sK0s sample [16]. The π− Monte Carlo distribution corresponds to a simulated e+e− → e+e−K0sK0s sample normalized to the number of K0sK0s observed in data [16]. The antiproton distribution is obtained from simulated e+e− → e+e−ΛΛ events, with arbitrary normalization.

Figure 5: Measurements of the γγ → ΛΛ and γγ → Σ0Σ0 cross sections as a function of W_γγ. In a) the γγ → ΛΛ cross section is compared to the one obtained by CLEO [4]. The dashed line shows the power-law fit described in the text. In b) and c) the σ(γγ → ΛΛ) and σ(γγ → Σ0Σ0) measurements are compared to the calculations of Reference 6. Statistical and systematic uncertainties are added in quadrature.
Figure 2: Effective mass distribution of the a) pπ− system and b) pπ+ system. The two-dimensional distribution is shown in c) and d). A radius of 40 MeV around the nominal mass value of m_Λ defines the inclusive ΛΛ sample.

Figure 3: Distribution of the mass difference between the Λγ (pπγ) and the Λ (pπ) systems. All possible combinations of a photon and a Λ or Λ candidate are shown.
Figure 4: A typical γγ → ΛΛ event in the L3 detector, displayed in the transverse plane. It illustrates the higher momentum of the proton and antiproton in the Λ and Λ decays, and the separation in the electromagnetic calorimeter between the large antiproton signal and the small energy deposits of pions and protons.
Table 6: The γγ → ΛΛ and γγ → Σ0Σ0 cross sections (pb) as a function of W_γγ (GeV) for |cos θ*| < 0.6 and P_T < 0.5 GeV (numerical entries not recovered). The central value W_γγ of each bin corresponds to an average according to a W^{−8} distribution. The first uncertainty is statistical, the second systematic.
Acknowledgments

We thank C. F. Berger and W. Schweiger for very useful discussions and for providing us their theoretical predictions.

Author List

The L3 Collaboration: P. Achard, O. Adriani, M. Aguilar-Benitez, J. Alcaraz, G. Alemanni, J. Allaby, A. Aloisio, M. G. Alviggi, H. Anderhub, V. P. Andreev, F. Anselmo, A. Arefiev, T. Azemoon, T. Aziz, ...
[1] V. M. Budnev et al., Phys. Rep. 15 (1974) 181.
[2] S. J. Brodsky and J. P. Lepage, Phys. Rev. D 22 (1980) 2157.
[3] G. Farrar, E. Maina and F. Neri, Nucl. Phys. B 259 (1985) 702; Nucl. Phys. B 263 (1986) 746.
[4] CLEO Collaboration, M. Artuso et al., Phys. Rev. D 50 (1994) 5484.
[5] M. Anselmino, F. Caruso, P. Kroll and W. Schweiger, Int. J. Mod. Phys. A 4 (1989) 5213.
[6] C. F. Berger, B. Lechner and W. Schweiger, Fizika B 8 (1999) 371; C. F. Berger, Exclusive Two-Photon Reactions in the Few-GeV Region, Diploma Thesis, Graz University, 1997; C. F. Berger and W. Schweiger, private communication.
[7] L3 Collaboration, B. Adeva et al., Nucl. Instr. Meth. A 289 (1990) 35; L3 Collaboration, O. Adriani et al., Phys. Rep. 236 (1993) 1; M. Chemarin et al., Nucl. Instr. Meth. A 349 (1994) 345; M. Acciarri et al., Nucl. Instr. Meth. A 351 (1994) 300; I. C. Brock et al., Nucl. Instr. Meth. A 381 (1996) 236; A. Adam et al., Nucl. Instr. Meth. A 383 (1996) 342.
[8] P. Béné et al., Nucl. Instr. Meth. A 306 (1991) 150; D. Haas et al., Nucl. Instr. Meth. A 420 (1991) 101.
[9] F. L. Linde, Charm Production in Two-Photon Collisions, Ph.D. Thesis, Rijksuniversiteit Leiden, 1988.
[10] R. Brun et al., GEANT 3.15, preprint CERN DD/EE/84-1 (Revised 1987).
[11] H. Fesefeldt, RWTH Aachen report PITHA 85/2, 1985.
[12] CLEO Collaboration, S. Anderson et al., Phys. Rev. D 56 (1997) 2485.
[13] TPC/2γ Collaboration, H. Aihara et al., Phys. Rev. D 40 (1989) 2772.
[14] VENUS Collaboration, S. Uehara et al., Z. Phys. C 69 (1996) 597.
[15] Particle Data Group, D. E. Groom et al., Eur. Phys. J. C 15 (2000) 1.
[16] L3 Collaboration, M. Acciarri et al., Phys. Lett. B 501 (2001) 173.
[17] G. A. Schuler, hep-ph/9610406, CERN-TH/96-297.
[18] S. J. Brodsky and G. R. Farrar, Phys. Rev. Lett. 31 (1973) 1153.
[19] P. Kroll, M. Schürmann and W. Schweiger, Z. Phys. A 338 (1991) 339.
[20] Z. Dziembowski, Phys. Rev. D 37 (1988) 2030.
| [] |
[
"Hard exclusive electroproduction of a pion in the backward region",
"Hard exclusive electroproduction of a pion in the backward region"
] | [
"J P Lansberg \nCentre de Physique Théorique\nÉcole Polytechnique\nCNRS\n91128PalaiseauFrance\n",
"B Pire \nCentre de Physique Théorique\nÉcole Polytechnique\nCNRS\n91128PalaiseauFrance\n",
"L Szymanowski \nCentre de Physique Théorique\nÉcole Polytechnique\nCNRS\n91128PalaiseauFrance\n\nPhysique Théorique Fondamentale\nUniversité de Liège\n17 Allée du 6 Août, Bâtiment B5aB-4000Liège-1Belgium\n\nSoltan Institute for Nuclear Studies\nWarsawPoland\n"
] | [
"Centre de Physique Théorique\nÉcole Polytechnique\nCNRS\n91128PalaiseauFrance",
"Centre de Physique Théorique\nÉcole Polytechnique\nCNRS\n91128PalaiseauFrance",
"Centre de Physique Théorique\nÉcole Polytechnique\nCNRS\n91128PalaiseauFrance",
"Physique Théorique Fondamentale\nUniversité de Liège\n17 Allée du 6 Août, Bâtiment B5aB-4000Liège-1Belgium",
"Soltan Institute for Nuclear Studies\nWarsawPoland"
] | [] | We study the scaling regime of pion electroproduction in the backward region, eN → e ′ N ′ π. We compute the leading-twist amplitude in the kinematical region, where it factorises into a shortdistance matrix element and long-distance dominated nucleon Distribution Amplitudes and nucleon to pion Transition Distribution Amplitudes. Using the chiral limit of the latter, we obtain a first estimate of the cross section, which may be experimentally studied at JLab or Hermes. | 10.1103/physrevd.75.074004 10.1103/physrevd.77.019902 | [
"https://export.arxiv.org/pdf/hep-ph/0701125v3.pdf"
] | 12,458,153 | hep-ph/0701125 | fb4869453f3e7d262454d7af8f8424ec2c89ab80 |
Hard exclusive electroproduction of a pion in the backward region
arXiv:hep-ph/0701125v3 9 Nov 2007
J. P. Lansberg, B. Pire and L. Szymanowski

Centre de Physique Théorique, École Polytechnique, CNRS, 91128 Palaiseau, France
Physique Théorique Fondamentale, Université de Liège, 17 Allée du 6 Août, Bâtiment B5a, B-4000 Liège-1, Belgium
Soltan Institute for Nuclear Studies, Warsaw, Poland
PACS numbers: 13.60.Le, 13.60.-r, 12.38.Bx
We study the scaling regime of pion electroproduction in the backward region, eN → e′N′π. We compute the leading-twist amplitude in the kinematical region where it factorises into a short-distance matrix element and long-distance dominated nucleon Distribution Amplitudes and nucleon to pion Transition Distribution Amplitudes. Using the chiral limit of the latter, we obtain a first estimate of the cross section, which may be experimentally studied at JLab or Hermes.
I. INTRODUCTION
In [1,2], we introduced the framework to study backward pion electroproduction
γ⋆(q) N(p_1) → N′(p_2) π(p_π),  (1)
on a proton (or neutron) target, in the Bjorken regime (q^2 large and q^2/(2 p_1·q) fixed), in terms of Transition Distribution Amplitudes (TDAs), as well as the reaction N(p_1) N(p_2) → γ⋆(q) π(p_π) in the near forward region. This extended the concept of Generalised Parton Distributions. Such an extension of the GPD framework has already been advocated in the pioneering work of [3]. The TDAs involved in the description of Deeply-Virtual Compton Scattering (DVCS) in the backward kinematics,
γ⋆(q) N(p_1) → N′(p_2) γ(p_γ),  (2)
and the reaction N(p_1) N(p_2) → γ⋆(q) γ(p_γ) in the near forward region were given in [4]. This followed the same lines as in [5], where we have argued that factorisation theorems [6] for exclusive processes apply to the case of the reaction π−π+ → γ*γ in the kinematical regime where the off-shell photon is highly virtual (of the order of the energy squared of the reaction) but the momentum transfer t is small. Besides, in this simpler mesonic case, a perturbative limit may be obtained [7] for the γ⋆ to ρ transition. For the γ → π one, we have recently shown [8] that experimental analysis of processes such as γ⋆γ → ρπ and γ⋆γ → ππ, which involve the latter TDAs, could be carried out; e.g. the background from Bremsstrahlung is small if not absent, and rates are sizable at present e+e− facilities.
Whereas in the pion to photon case, models used for GPDs [9,10,11,12] could be applied to TDAs since they are defined from matrix elements of the same quark-antiquark operators, this is not obvious for the nucleon to meson or photon TDAs, which are defined from matrix elements of a three-quark operator. Before estimates based on models such as the meson-cloud model [13] become available, it is important to use model-independent information coming from general theorems. We will use here the constraints for the proton to pion TDAs derived in the chiral limit.
The structure of this paper is the following: first, we recall the necessary kinematics related to hard electroproduction of a pion as well as the definitions of the proton to pion TDAs, which enter the description of the latter process in the backward region; secondly, we establish the limiting constraints on the TDAs in the chiral limit, when the final-state pion is soft; thirdly, we calculate the hard contribution for the process; then, extrapolating the limiting value of the TDAs to the large-ξ region, we give a first evaluation of the unpolarised cross section, restricting the analysis of the hard part to the sole Efremov-Radyushkin-Brodsky-Lepage (ERBL) region, where all three quarks struck by the virtual photon carry positive light-cone momentum fractions of the target proton. This analysis is motivated by the experimental conditions [14,15,16] of JLab and Hermes at moderate electron energies. Related processes with three-quark exchanges in the hard scattering were recently studied in [17], similarly to what was proposed in [18].
II. KINEMATICS AND DEFINITIONS
A. The electroproduction process eP → e′P′π0

Let us first recall the kinematics for electron-proton collisions (see e.g. [19]). As usual, we shall work in the one-photon-exchange approximation and consider the differential cross section for γ⋆(q)P(p_1) → P′(p_2)π0(p_π) in the center-of-mass frame of the pion and the final-state proton (see the kinematics in Fig. 1). The photon flux Γ is defined in the Hand convention to be
\Gamma = \frac{\alpha_{em}}{2\pi^2}\, \frac{E_{e'}}{E_e}\, \frac{W^2 - M^2}{2 M Q^2}\, \frac{1}{1 - \epsilon}\,,  (3)
with E_e the energy of the initial electron in the lab frame (beam energy), E_{e'} that of the scattered electron, W the invariant mass of the P′π0 pair, M the proton mass, Q^2 the virtuality of the exchanged photon (Q^2 = −q^2 = −(p_e − p_{e'})^2) and \epsilon = (1 − y)/(1 − y + y^2/2), with y = (q·p_1)/(p_e·p_1), its linear polarisation parameter. The five-fold differential cross section for the process eP → e′P′π0 can then be reduced to a two-fold one, expressible in the center-of-mass frame of the P′π0 pair, times the flux factor Γ:
\frac{d^5\sigma}{dE_{e'}\, d^2\Omega_e\, d^2\Omega^*_\pi} = \Gamma\, \frac{d^2\sigma}{d^2\Omega^*_\pi}\,,  (4)
where Ω_e is the differential solid angle for the scattered electron in the lab frame and Ω*_π is the differential solid angle for the pion in the P′π0 center-of-mass frame, such that dΩ*_π = dϕ d cos θ*_π. θ*_π is defined as the polar angle between the virtual photon and the pion in the latter system. ϕ is the azimuthal angle between the electron plane and the plane of the process γ⋆P → P′π0 (hadronic plane); ϕ = 0 when the pion is emitted in the half plane containing the outgoing electron (see also Fig. 1).

In general, we have contributions from different polarisations of the photon. For that reason, we define four polarised cross sections, which do not depend on ϕ but only on W, Q^2 and θ*_π: d^2σ_T, d^2σ_L, d^2σ_TL and d^2σ_TT. The ϕ dependence is therefore made explicit, since
\frac{d^2\sigma}{d\Omega^*_\pi} = \frac{d^2\sigma_T}{d\Omega^*_\pi} + \epsilon\, \frac{d^2\sigma_L}{d\Omega^*_\pi} + \sqrt{2\epsilon(1+\epsilon)}\, \frac{d^2\sigma_{TL}}{d\Omega^*_\pi} \cos\varphi + \epsilon\, \frac{d^2\sigma_{TT}}{d\Omega^*_\pi} \cos 2\varphi\,.  (5)
As we shall show below, at the leading-twist accuracy, the QCD mechanism considered here contributes only to d^2σ_T.

B. The subprocess γ⋆P → P′π0
In the scaling regime, the amplitude for γ⋆P(p_1) → P′(p_2)π(p_π) in the backward kinematics (namely small u = (p_π − p_1)^2 = Δ^2, i.e. cos θ*_π close to −1) then involves the TDAs T(x_i, ξ, Δ^2), where x_i (i = 1, 2, 3) denote the light-cone momentum fractions carried by the participant quarks and ξ is the skewedness parameter, such that 2ξ = x_1 + x_2 + x_3.
The amplitude is then a convolution of the proton DAs, a perturbatively calculable hard-scattering amplitude and the TDAs, defined from the Fourier transform of a matrix element of a three-quark light-cone operator between a proton and a meson state. We have shown that these TDAs obey QCD evolution equations, which follow from the renormalisation-group equation of the three-quark operator. Their Q^2 dependence is thus completely under control.
FIG. 2: The factorisation of the process γ⋆P → P′π into proton distribution amplitudes (DA), the hard-subprocess amplitude (M_h) and proton → pion transition distribution amplitudes (TDA).
The momenta of the process γ⋆P → P′π are defined as shown in Fig. 1 and Fig. 2. The z-axis is chosen along the initial nucleon and the virtual photon momenta, and the x−z plane is identified with the collision or hadronic plane (Fig. 1). Then, we define the light-cone vectors p and n (p^2 = n^2 = 0) such that 2 p·n = 1, as well as P = (p_1 + p_π)/2, Δ = p_π − p_1 and its transverse component Δ_T (Δ_T·Δ_T = Δ_T^2 < 0), which we choose to be along the x-axis. From those, we define ξ in the usual way as ξ = −(Δ·n)/(2P·n). We can then express the momenta of the particles through their Sudakov decomposition and, keeping the first-order corrections in the masses and Δ_T^2, we have:
p_1 = (1+\xi)\, p + \frac{M^2}{1+\xi}\, n,
q \simeq -2\xi \Big[ 1 + \frac{\Delta_T^2 - M^2}{Q^2} \Big]\, p + \frac{Q^2}{2\xi \big[ 1 + \frac{\Delta_T^2 - M^2}{Q^2} \big]}\, n,
p_\pi = (1-\xi)\, p + \frac{m_\pi^2 - \Delta_T^2}{1-\xi}\, n + \Delta_T,
p_2 \simeq -2\xi\, \frac{\Delta_T^2 - M^2}{Q^2}\, p + \Big[ \frac{Q^2}{2\xi \big[ 1 + \frac{\Delta_T^2 - M^2}{Q^2} \big]} - \frac{m_\pi^2 - \Delta_T^2}{1-\xi} + \frac{M^2}{1+\xi} \Big]\, n - \Delta_T,
\Delta = -2\xi\, p + \Big[ \frac{m_\pi^2 - \Delta_T^2}{1-\xi} - \frac{M^2}{1+\xi} \Big]\, n + \Delta_T.  (6)
The polarisation vectors of the virtual photon are chosen to be (in the P′π0 center-of-mass frame):
\varepsilon_L(q) \simeq \frac{2\xi (Q^2 + \Delta_T^2 - M^2)}{Q^3}\, p + \frac{Q^3}{2\xi (Q^2 + \Delta_T^2 - M^2)}\, n, \qquad \varepsilon_{T_1}(q) = \varepsilon_x, \quad \varepsilon_{T_2}(q) = \varepsilon_y.  (7)
We also have
W^2 = (p_1 + q)^2 \simeq \frac{(1-\xi) Q^2}{2\xi} - (\Delta_T^2 - M^2),  (8)
where we have kept the leading term in Q^2 and the next-to-leading one, which does not vanish in the limit ξ → 1. This provides us with the following relation between ξ and W^2:
\xi \simeq \frac{Q^2}{Q^2 + 2(W^2 + \Delta_T^2 - M^2)}\,,  (9)
which reduces to the usual one, ξ ≃ Q^2/(Q^2 + 2W^2), when M^2 and Δ_T^2 can be neglected compared to W^2 (which is not the case in the ξ → 1 limit). Furthermore, we have the exact relation
x_B = \frac{Q^2}{2 p_1 \cdot q} = \frac{Q^2}{W^2 + Q^2 - M^2}\,,  (10)
which gives
\xi \simeq \frac{x_B}{2 - x_B}\,.  (11)
Finally, we have (neglecting the pion mass):
\Delta_T^2 = \frac{1-\xi}{1+\xi}\, u - 2 M^2\, \frac{\xi\, (1-\xi)}{(1+\xi)^2}\,.  (12)
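For orientation, the kinematical relations (9)-(12) are easy to evaluate numerically. The following short Python sketch is our own, with illustrative input values:

```python
import numpy as np

M = 0.938  # proton mass (GeV)

def skewedness(Q2, W2, DT2=0.0):
    """xi from Eq. (9); DT2 denotes Delta_T^2 (which is <= 0)."""
    return Q2 / (Q2 + 2.0 * (W2 + DT2 - M**2))

def x_bjorken(Q2, W2):
    """Exact Bjorken variable of Eq. (10)."""
    return Q2 / (W2 + Q2 - M**2)

def delta_T2(u, xi):
    """Eq. (12), neglecting the pion mass."""
    return (1 - xi) / (1 + xi) * u - 2 * M**2 * xi * (1 - xi) / (1 + xi)**2

Q2, W2 = 10.0, 2.9**2  # illustrative values above the resonance region
xi, xB = skewedness(Q2, W2), x_bjorken(Q2, W2)
print(xi, xB / (2 - xB))  # the two numbers agree, as stated in Eq. (11)
```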
In Ref. [1], we have defined the leading-twist proton to pion (P → π) transition distribution amplitudes from the Fourier transform of the matrix element
\langle \pi |\, \epsilon_{ijk}\, q^{i'}_\alpha(z_1 n)\, [z_1; z_0]_{i',i}\, q^{j'}_\beta(z_2 n)\, [z_2; z_0]_{j',j}\, q^{k'}_\gamma(z_3 n)\, [z_3; z_0]_{k',k}\, | P \rangle.  (13)
The brackets [z_i; z_0] in Eq. (13) account for the insertion of a path-ordered gluonic exponential along the straight line connecting an arbitrary initial point z_0 n and a final one z_i n:
[z_i; z_0] \equiv \mathcal{P} \exp\Big[ i g \int_0^1 dt\, (z_i - z_0)\, n^\mu A_\mu\big( n [t z_i + (1-t) z_0] \big) \Big],  (14)
which provides QCD gauge invariance for such a nonlocal operator and equals unity in a light-like (axial) gauge.
The leading-twist TDAs for the p → π0 transition, V^{pπ0}_i(x_i, ξ, Δ^2), A^{pπ0}_i(x_i, ξ, Δ^2) and T^{pπ0}_i(x_i, ξ, Δ^2), are defined here [footnote 1] as [footnote 2]:

4F\, \langle \pi^0(p_\pi) |\, \epsilon_{ijk}\, u^i_\alpha(z_1 n)\, u^j_\beta(z_2 n)\, d^k_\gamma(z_3 n)\, | P(p_1, s_1) \rangle =
i\, \frac{f_N}{f_\pi} \Big[ V^{p\pi^0}_1 (\slashed{p} C)_{\alpha\beta} (N^+)_\gamma + A^{p\pi^0}_1 (\slashed{p}\gamma_5 C)_{\alpha\beta} (\gamma_5 N^+)_\gamma + T^{p\pi^0}_1 (\sigma_{p\mu} C)_{\alpha\beta} (\gamma^\mu N^+)_\gamma
+ M^{-1} V^{p\pi^0}_2 (\slashed{p} C)_{\alpha\beta} (\slashed{\Delta}_T N^+)_\gamma + M^{-1} A^{p\pi^0}_2 (\slashed{p}\gamma_5 C)_{\alpha\beta} (\gamma_5 \slashed{\Delta}_T N^+)_\gamma + M^{-1} T^{p\pi^0}_2 (\sigma_{p\Delta_T} C)_{\alpha\beta} (N^+)_\gamma
+ M^{-1} T^{p\pi^0}_3 (\sigma_{p\mu} C)_{\alpha\beta} (\sigma^{\mu\Delta_T} N^+)_\gamma + M^{-2} T^{p\pi^0}_4 (\sigma_{p\Delta_T} C)_{\alpha\beta} (\slashed{\Delta}_T N^+)_\gamma \Big],  (15)

[Footnote 1: The present definitions differ from those of [1] by constant multiplicative factors and by the definition of σ^{μν}.]
[Footnote 2: In the following, we shall use the notation F ≡ (p·n)^3 \int_{-\infty}^{\infty} \prod_j dz_j/(2\pi)^3\, e^{i \sum_k x_k z_k\, p\cdot n}.]

where σ^{μν} = \frac{1}{2}[γ^μ, γ^ν] with σ^{pμ} = p_ν σ^{νμ}, etc., C is the charge conjugation matrix, and N^+ is the large component of the nucleon spinor (N = (\slashed{n}\slashed{p} + \slashed{p}\slashed{n}) N = N^- + N^+, with N^+ \sim p_1^+ and N^- \sim 1/p_1^+). f_π is the pion decay constant (f_π = 131 MeV) and f_N has been estimated through QCD sum rules to be of order 5.2 · 10^{−3} GeV^2 [20]. All the TDAs V_i, A_i and T_i are dimensionless. Note that the first three terms in (15) are the only ones surviving the limit Δ_T → 0.
III. THE SOFT-PION LIMIT
We now derive the general limit of these three contributing TDAs at Δ_T = 0 in the soft-pion limit, when ξ gets close to 1 (see also [21]). In that limit, the soft-meson theorems [22] derived from current algebra apply [18], which allows us to express these 3 TDAs in terms of the 3 Distribution Amplitudes (DAs) of the corresponding baryon. In the case of the proton DA [20],
V^p(x_i), A^p(x_i) and T^p(x_i) are defined such that
4F_{DA}\, \langle 0 |\, \epsilon_{ijk}\, u^i_\alpha(z_1 n)\, u^j_\beta(z_2 n)\, d^k_\gamma(z_3 n)\, | P(p, s) \rangle = f_N \Big[ V^p(x_i)\, (\slashed{p} C)_{\alpha\beta} (\gamma_5 N^+)_\gamma + A^p(x_i)\, (\slashed{p}\gamma_5 C)_{\alpha\beta} (N^+)_\gamma + T^p(x_i)\, (\sigma_{p\mu} C)_{\alpha\beta} (\gamma^\mu \gamma_5 N^+)_\gamma \Big],  (16)
with F_{DA} ≡ (p·n)^3 \int_{-\infty}^{\infty} \prod_j dz_j/(2\pi)^3\, e^{i \sum_k x_k z_k\, p\cdot n}, where p is here the proton momentum.
Inspired by [18], which considered the related case of the distribution amplitude of the proton-meson system, we use the soft pion theorems [22] to write:
\langle \pi^a(p_\pi) | \mathcal{O} | N(p_1, s_1) \rangle = -\frac{i}{f_\pi}\, \langle 0 | [Q^a_5, \mathcal{O}] | N(p_1, s_1) \rangle + \frac{i g_A}{4 f_\pi\, p_1 \cdot p_\pi} \sum_{s'_1} \bar{N}(p_1, s'_1)\, \slashed{p}_\pi \gamma_5 \tau^a N(p_1, s_1)\, \langle 0 | \mathcal{O} | N(p_1, s'_1) \rangle.  (17)
The second term, which takes care of the nucleon pole term, does not contribute at threshold and will not be considered in the following.
For the transition p → π0, Q^a_5 = Q^3_5 and \mathcal{O} = u_\alpha u_\beta d_\gamma. Since the commutator of the chiral charge Q_5 with the quark field ψ (τ^a being a Pauli matrix) is
[Q^a_5, \psi] = -\frac{\tau^a}{2}\, \gamma_5\, \psi,  (18)
the first term in the rhs of Eq. (17) gives three terms, from (γ_5 u)_α u_β d_γ, u_α (γ_5 u)_β d_γ and u_α u_β (γ_5 d)_γ. The corresponding multiplication by γ_5 (or (γ_5)^T when it acts on the index β) on the vector and axial structures of the DA (Eq. (16)) gives two terms which cancel, and the third one, which remains, is the same as the one for the TDA, up to the modification that in the DA decomposition p is the proton momentum, whereas for the TDA one p is the light-cone projection of P, i.e. half the proton momentum if ξ = 1. This introduces a factor 2ξ in the relations between the 2 DAs A^p and V^p and the 2 TDAs V^{pπ0}_1 and A^{pπ0}_1, which is however cancelled by the factor one half in Eq. (18).
Concerning the tensorial structure multiplying T^p, the three terms are identical at the leading-twist accuracy and correspond to the structure multiplying T^{pπ0}_1; this gives a factor 3. Finally, an extra factor (4ξ)^{−1} appears when one goes to momentum space [21]. We eventually have, at Δ_T = 0:
V^{p\pi^0}_1(x_1, x_2, x_3, \xi, \Delta^2) = \frac{1}{4\xi}\, V^p\Big( \frac{x_1}{2\xi}, \frac{x_2}{2\xi}, \frac{x_3}{2\xi} \Big),
A^{p\pi^0}_1(x_1, x_2, x_3, \xi, \Delta^2) = \frac{1}{4\xi}\, A^p\Big( \frac{x_1}{2\xi}, \frac{x_2}{2\xi}, \frac{x_3}{2\xi} \Big),  (19)
T^{p\pi^0}_1(x_1, x_2, x_3, \xi, \Delta^2) = \frac{3}{4\xi}\, T^p\Big( \frac{x_1}{2\xi}, \frac{x_2}{2\xi}, \frac{x_3}{2\xi} \Big).
Note the factor 1/(2ξ) in the arguments of the DAs in Eq. (19). We refer to [21] for a complete discussion. Indeed, for the TDAs, the x_i are defined with respect to p (see e.g. F ≡ (p·n)^3 \int_{-\infty}^{\infty} \prod_j dz_j/(2\pi)^3\, e^{i \sum_k x_k z_k\, p\cdot n}), which tends to p_1/2 when ξ → 1. Therefore, they vary within the interval [−2, 2], whereas for the DAs the momentum fractions are defined with respect to the proton momentum p_1 and vary between 0 and 1.
Our results are comparable to the ones for the proton-pion DAs obtained in [17]. Finally, it is essential to note that these limiting values are not zero, unlike for some GPDs. Hence, we find it reasonable to conjecture that these expressions give the right order of magnitude of the TDAs for quite large values of ξ (say ξ ≥ 0.5) in a first estimate of cross sections.
IV. HARD-AMPLITUDE CALCULATION
At leading order in α_s, the amplitude M^λ_{s_1 s_2} for γ⋆(q, λ) P(p_1, s_1) → P′(p_2, s_2) π0(p_π) reads
\mathcal{M}^\lambda_{s_1 s_2} = -i\, \frac{(4\pi\alpha_s)^2\, \sqrt{4\pi\alpha_{em}}\, f_N^2}{54\, f_\pi\, Q^4} \Bigg[ \underbrace{\bar{u}(p_2, s_2)\, \slashed{\varepsilon}(\lambda)\, \gamma_5\, u(p_1, s_1)}_{\mathcal{S}^\lambda_{s_1 s_2}}\; \underbrace{\int_{-1+\xi}^{1+\xi} d^3x \int_0^1 d^3y\, \Big( 2 \sum_{\alpha=1}^{7} T_\alpha + \sum_{\alpha=8}^{14} T_\alpha \Big)}_{\mathcal{I}} - \underbrace{\varepsilon(\lambda)_\mu\, \Delta_{T,\nu}\, \bar{u}(p_2, s_2) (\sigma^{\mu\nu} + g^{\mu\nu}) \gamma_5\, u(p_1, s_1)}_{\mathcal{S}'^\lambda_{s_1 s_2}}\; \underbrace{\int_{-1+\xi}^{1+\xi} d^3x \int_0^1 d^3y\, \Big( 2 \sum_{\alpha=1}^{7} T'_\alpha + \sum_{\alpha=8}^{14} T'_\alpha \Big)}_{\mathcal{I}'} \Bigg],  (20)
where the coefficients T_α and T'_α (α = 1, ..., 14) are functions of x_i, y_j, ξ and Δ^2 and are given in Table I. In general, we have 21 diagrams: we have not drawn 7 others, which differ from the first 7 only by a permutation of the u-quark lines 1 and 2. The symmetry of the integration domain and of the DAs and TDAs with respect to the changes x_1 ↔ x_2 and y_1 ↔ y_2 therefore tells us that they give the same contributions as the first 7 diagrams. They are accounted for by a factor 2 in the last equation.
The integrals in Eq. (20) are understood with two delta functions ensuring momentum conservation:
d^3x \equiv dx_1\, dx_2\, dx_3\, \delta(2\xi - x_1 - x_2 - x_3)  (21)
and
d^3y \equiv dy_1\, dy_2\, dy_3\, \delta(1 - y_1 - y_2 - y_3).  (22)
The expression in Eq. (20) is to be compared with the leading-twist amplitude for the baryonic form factor [20]:
\mathcal{M}^\lambda \propto -i\, \bar{u}(p_2)\, \slashed{\varepsilon}(\lambda)\, u(p_1)\, \frac{\alpha_s^2\, f_N^2}{Q^4} \int_0^1 d^3x \int_0^1 d^3y\, \Big( 2 \sum_{\alpha=1}^{7} T^p_\alpha(x_i, y_j, \xi, t) + \sum_{\alpha=8}^{14} T^p_\alpha(x_i, y_j, \xi, t) \Big).  (23)
The factors T p α are very similar to the T α obtained here.
Table I (body): the coefficients T_α and T'_α of the 14 drawn diagrams; the quark-line drawings themselves are not reproduced here. The TDAs V_i, A_i, T_i carry the superscript pπ0 and the DAs V, A, T the superscript p; Q_u and Q_d denote the u- and d-quark charges. For α = 2, 6, 8 and 11 the entries vanish: T_α = T'_α = 0.

α = 1:
T_1 = −Q_u(2ξ)^2 [(V_1^{pπ0} − A_1^{pπ0})(V^p − A^p) + 4 T_1^{pπ0} T^p + 2 (Δ_T^2/M^2) T_4^{pπ0} T^p] / [(2ξ − x_1 − iε)^2 (x_3 − iε)(1 − y_1)^2 y_3]
T'_1 = −Q_u(2ξ)^2 [(V_2^{pπ0} − A_2^{pπ0})(V^p − A^p) + 2 (T_2^{pπ0} + T_3^{pπ0}) T^p] / [(2ξ − x_1 − iε)^2 (x_3 − iε)(1 − y_1)^2 y_3]

α = 3:
T_3 = Q_u(2ξ)^2 [4 T_1^{pπ0} T^p + 2 (Δ_T^2/M^2) T_4^{pπ0} T^p] / [(x_1 − iε)(2ξ − x_2 − iε)(x_3 − iε) y_1 (1 − y_1) y_3]
T'_3 = Q_u(2ξ)^2 [2 (T_2^{pπ0} + T_3^{pπ0}) T^p] / [(x_1 − iε)(2ξ − x_2 − iε)(x_3 − iε) y_1 (1 − y_1) y_3]

α = 4:
T_4 = −Q_u(2ξ)^2 [(V_1^{pπ0} − A_1^{pπ0})(V^p − A^p)] / [(x_1 − iε)(2ξ − x_3 − iε)(x_3 − iε) y_1 (1 − y_1) y_3]
T'_4 = −Q_u(2ξ)^2 [(V_2^{pπ0} − A_2^{pπ0})(V^p − A^p)] / [(x_1 − iε)(2ξ − x_3 − iε)(x_3 − iε) y_1 (1 − y_1) y_3]

α = 5:
T_5 = Q_u(2ξ)^2 [(V_1^{pπ0} + A_1^{pπ0})(V^p + A^p)] / [(x_1 − iε)(2ξ − x_3 − iε)(x_3 − iε) y_1 (1 − y_2) y_3]
T'_5 = Q_u(2ξ)^2 [(V_2^{pπ0} + A_2^{pπ0})(V^p + A^p)] / [(x_1 − iε)(2ξ − x_3 − iε)(x_3 − iε) y_1 (1 − y_2) y_3]

α = 7:
T_7 = −Q_d(2ξ)^2 [2 (V_1^{pπ0} V^p + A_1^{pπ0} A^p)] / [(x_1 − iε)(2ξ − x_3 − iε)^2 y_1 (1 − y_3)^2]
T'_7 = −Q_d(2ξ)^2 [2 (V_2^{pπ0} V^p + A_2^{pπ0} A^p)] / [(x_1 − iε)(2ξ − x_3 − iε)^2 y_1 (1 − y_3)^2]

α = 9:
T_9 = −Q_u(2ξ)^2 [(V_1^{pπ0} − A_1^{pπ0})(V^p − A^p) + 4 T_1^{pπ0} T^p + 2 (Δ_T^2/M^2) T_4^{pπ0} T^p] / [(2ξ − x_1 − iε)^2 (x_2 − iε)(1 − y_1)^2 y_2]
T'_9 = −Q_u(2ξ)^2 [(V_2^{pπ0} − A_2^{pπ0})(V^p − A^p) + 2 (T_2^{pπ0} + T_3^{pπ0}) T^p] / [(2ξ − x_1 − iε)^2 (x_2 − iε)(1 − y_1)^2 y_2]

α = 10:
T_10 = −Q_u(2ξ)^2 [(V_1^{pπ0} + A_1^{pπ0})(V^p + A^p) + 4 T_1^{pπ0} T^p + 2 (Δ_T^2/M^2) T_4^{pπ0} T^p] / [(x_1 − iε)(2ξ − x_2 − iε)^2 y_1 (1 − y_2)^2]
T'_10 = −Q_u(2ξ)^2 [(V_2^{pπ0} + A_2^{pπ0})(V^p + A^p) + 2 (T_2^{pπ0} + T_3^{pπ0}) T^p] / [(x_1 − iε)(2ξ − x_2 − iε)^2 y_1 (1 − y_2)^2]

α = 12:
T_12 = Q_d(2ξ)^2 [(V_1^{pπ0} + A_1^{pπ0})(V^p + A^p)] / [(x_1 − iε)(x_2 − iε)(2ξ − x_3 − iε) y_1 (1 − y_2) y_2]
T'_12 = Q_d(2ξ)^2 [(V_2^{pπ0} + A_2^{pπ0})(V^p + A^p)] / [(x_1 − iε)(x_2 − iε)(2ξ − x_3 − iε) y_1 (1 − y_2) y_2]

α = 13:
T_13 = −Q_d(2ξ)^2 [4 T_1^{pπ0} T^p + 2 (Δ_T^2/M^2) T_4^{pπ0} T^p] / [(x_1 − iε)(2ξ − x_1 − iε)(x_2 − iε) y_1 (1 − y_2) y_2]
T'_13 = −Q_d(2ξ)^2 [2 (T_2^{pπ0} + T_3^{pπ0}) T^p] / [(x_1 − iε)(2ξ − x_1 − iε)(x_2 − iε) y_1 (1 − y_2) y_2]

α = 14:
T_14 = Q_d(2ξ)^2 [(V_1^{pπ0} − A_1^{pπ0})(V^p − A^p)] / [(x_1 − iε)(2ξ − x_1 − iε)(x_2 − iε) y_1 y_2 (1 − y_3)]
T'_14 = Q_d(2ξ)^2 [(V_2^{pπ0} − A_2^{pπ0})(V^p − A^p)] / [(x_1 − iε)(2ξ − x_1 − iε)(x_2 − iε) y_1 y_2 (1 − y_3)]
V. CROSS-SECTION ESTIMATE FOR UNPOLARISED PROTONS
When ξ is large, the ERBL region covers most of the integration domain. This corresponds to the emission of three quarks of positive light-cone-momentum fraction off the target proton. Therefore it is legitimate to approximate the cross section only from the ERBL region. As a consequence, the integration on the momentum fractions contained in the TDAs between −1 + ξ and 1 + ξ (see Eq. (20)) can be converted into one between 0 and 1 by a change of variable and can be carried out straightforwardly.
On the other hand, we have at our disposal a reasonable estimation of the TDAs V pπ 0 1 , A pπ 0 1 and T pπ 0 1 in the large-ξ region and for vanishing ∆ T thanks to the soft pion limit (see section III). As a consequence, we have all the tools needed for a first evaluation of the unpolarised cross section for γ ⋆ P → P ′ π 0 for large ξ and when ∆ 2 T is vanishing.
The differential cross section for unpolarised protons in the proton-pion center-of-mass frame is calculated as usual from the averaged-squared amplitudes, |M i | 2 (i = T, L, T T, LT ):
\frac{d\sigma_i}{d\Omega^*_\pi} = \frac{1}{64\pi^2 W^2}\, \frac{\sqrt{(p_\pi \cdot p_2)^2 - m_\pi^2 M^2}}{\sqrt{(p_1 \cdot q)^2 + M^2 Q^2}}\, |\mathcal{M}_i|^2 \simeq \frac{W^2 - M^2}{64\pi^2 W^2 \sqrt{(W^2 + M^2 + Q^2)^2 - 4 W^2 M^2}}\, |\mathcal{M}_i|^2.  (24)
The |M_i|^2 are obtained from squaring and summing (resp. averaging) M^λ_{s_1 s_2} over the final (resp. initial) proton helicities, with the appropriate combinations of the photon helicity λ [19]. The expressions of M^λ_{s_1 s_2} are obtained from Eq. (20).
For vanishing Δ_T^2, only the spinorial structure
\mathcal{S}^\lambda_{s_1 s_2} = \bar{u}(p_2, s_2)\, \slashed{\varepsilon}(\lambda)\, \gamma_5\, u(p_1, s_1)  (25)
survives. To obtain |M_T|^2, we square the latter and sum over the proton helicities and the transverse polarisations of the photon; this gives a factor 2(1+ξ)Q^2/ξ.
On the other hand, |M_L|^2 vanishes at the leading-twist accuracy, as in the nucleon-form-factor case. The same is true for |M_LT|^2 and |M_TT|^2, since the x and y directions are not distinguishable when Δ_T^2 is vanishing.
Contrariwise, if we were to consider the spinorial structure S'^λ_{s_1 s_2}, arising when Δ_T^2 ≠ 0,
\varepsilon(\lambda)_\mu\, \Delta_{T,\nu}\, \bar{u}(p_2, s_2)\, (\sigma^{\mu\nu} + g^{\mu\nu})\, \gamma_5\, u(p_1, s_1),  (26)
then |M_TT|^2, and thus dσ_TT/dΩ*_π, would not be zero and the cross section would show a cos 2ϕ dependence.
The remaining part still to be considered is now entirely contained in the factor \mathcal{I} of Eq. (20), for which we need to choose parametrisations for the DAs and the TDAs. For the sake of coherence, we shall choose the same parametrisation for both. Since the asymptotic limit [23], V^p(x_i) = T^p(x_i) = ϕ_{as} = 120 x_1 x_2 x_3 and A^p(x_i) = 0, is known to give a vanishing proton form factor and the wrong sign to the neutron one, we shall not use it.
Note, however, that the isospin relations between the TDAs V pπ 0 1 , A pπ 0 1 and T pπ 0 1 differ from those between the DAs; the factor 3 in Eq. (19) clearly illustrates this fact. Therefore, whereas the asymptotic limit choices give a vanishing proton form factor due to a full cancellation between the 14-diagram contributions, the resulting expression will not vanish here even for the asymptotic DAs and TDAs derived in the soft limit.
Yet, we shall rather consider the more reasonable choices of V. L. Chernyak and A. R. Zhitnitsky [20] (denoted CZ), based on an analysis of QCD sum rules. Therefore, we take for the DAs:
V^p(x_i) = \phi_{as}\, [\, 11.35 (x_1^2 + x_2^2) + 8.82\, x_3^2 - 1.68\, x_3 - 2.94\, ],
A^p(x_i) = \phi_{as}\, [\, 6.72 (x_2^2 - x_1^2)\, ],  (27)
T^p(x_i) = \phi_{as}\, [\, 13.44 (x_1^2 + x_2^2) + 4.62\, x_3^2 + 0.84\, x_3 - 3.78\, ],
and for the limiting values of the TDAs:
V^{p\pi^0} = \frac{\phi_{as}}{2^5 \xi^4} \Big[ \frac{11.35}{(2\xi)^2}(x_1^2 + x_2^2) + \frac{8.82}{(2\xi)^2} x_3^2 - \frac{1.68}{2\xi} x_3 - 2.94 \Big],
A^{p\pi^0} = \frac{\phi_{as}}{2^5 \xi^4} \Big[ \frac{6.72}{(2\xi)^2}(x_2^2 - x_1^2) \Big],  (28)
T^{p\pi^0} = \frac{3\, \phi_{as}}{2^5 \xi^4} \Big[ \frac{13.44}{(2\xi)^2}(x_1^2 + x_2^2) + \frac{4.62}{(2\xi)^2} x_3^2 + \frac{0.84}{2\xi} x_3 - 3.78 \Big].
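As an internal consistency check, one can verify numerically that Eq. (28) is nothing but the soft-pion limit (19) applied to the CZ distribution amplitudes of Eq. (27). The short Python sketch below (ours) does this for the V term:

```python
import numpy as np

phi_as = lambda x1, x2, x3: 120.0 * x1 * x2 * x3

def V_p(x1, x2, x3):  # CZ proton DA, Eq. (27)
    return phi_as(x1, x2, x3) * (11.35 * (x1**2 + x2**2)
                                 + 8.82 * x3**2 - 1.68 * x3 - 2.94)

def V_tda_eq19(x1, x2, x3, xi):  # soft-pion limit, Eq. (19)
    return V_p(x1 / (2 * xi), x2 / (2 * xi), x3 / (2 * xi)) / (4.0 * xi)

def V_tda_eq28(x1, x2, x3, xi):  # Eq. (28), written out
    return phi_as(x1, x2, x3) / (2**5 * xi**4) * (
        11.35 / (2 * xi)**2 * (x1**2 + x2**2)
        + 8.82 / (2 * xi)**2 * x3**2 - 1.68 / (2 * xi) * x3 - 2.94)

xi, x1, x2 = 0.8, 0.5, 0.6
x3 = 2 * xi - x1 - x2  # the momentum fractions sum to 2*xi
print(np.isclose(V_tda_eq19(x1, x2, x3, xi), V_tda_eq28(x1, x2, x3, xi)))  # True
```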
With this choice, we get the following analytic result, valid at large values of ξ:
\frac{d\sigma}{d\Omega^*_\pi}\Big|_{\theta^*_\pi = \pi} = 0.389\, \frac{\alpha_{em}\, \alpha_s^4\, f_N^4\, \pi^3\, (W^2 - M^2)}{f_\pi^2\, W^2\, \sqrt{(W^2 + M^2 + Q^2)^2 - 4 W^2 M^2}} \times \frac{4.5 \times 10^7\, (1 + \xi)}{Q^6\, \xi}.  (29)
The algebraic factors come from the DA and TDA parametrisations (Eqs. (27) and (28)). For ξ = 0.8, the Q^2 dependence of dσ/dΩ*_π|_{θ*_π = π} is shown in Fig. 3 for α_s = 0.4. Our lack of knowledge of the TDAs unfortunately prevents us from comparing our results with existing data [14]. Indeed, these data at Q^2 = 1 GeV^2 are mostly in the resonance region (W < 1.5 GeV), except for the large-W tail of the distribution, which however corresponds to small values of the skewedness parameter (ξ < 0.3). We thus need a realistic model for the p → π TDAs at smaller values of ξ before discussing present data. The situation is more favourable at higher energies, where higher values of ξ can be attained above the resonance region, for instance at HERMES and with CEBAF at 12 GeV [24]. Our calculation of the cross section, in an admittedly quite narrow range of the parameters, can thus serve as a reasonable input to the feasibility study of backward pion electroproduction at CEBAF at 12 GeV, in the hope of reaching the scaling regime in which we are interested.
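For readers who wish to reproduce the order of magnitude of Fig. 3, a literal numerical evaluation of Eq. (29) can be sketched as follows in Python; the treatment of the 0.389 prefactor, which we take to carry the GeV^2-to-mb unit conversion, is our assumption:

```python
import numpy as np

alpha_em = 1.0 / 137.036
alpha_s  = 0.4       # value used for Fig. 3
f_N      = 5.2e-3    # GeV^2, QCD sum-rule estimate [20]
f_pi     = 0.131     # GeV
M        = 0.938     # proton mass (GeV)

def dsigma_eq29(Q2, xi):
    """Literal evaluation of Eq. (29) at theta*_pi = pi; the output unit
    depends on the interpretation of the 0.389 prefactor (assumed GeV^2.mb)."""
    W2 = M**2 + Q2 * (1.0 - xi) / (2.0 * xi)  # Eq. (9) inverted at Delta_T = 0
    return (0.389 * alpha_em * alpha_s**4 * f_N**4 * np.pi**3 * (W2 - M**2)
            / (f_pi**2 * W2 * np.sqrt((W2 + M**2 + Q2)**2 - 4.0 * W2 * M**2))
            * 4.5e7 * (1.0 + xi) / (Q2**3 * xi))

print(dsigma_eq29(Q2=10.0, xi=0.8))
```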
The corresponding results for the asymptotic choice are three orders of magnitude smaller. This shows how sensitive the amplitude is to the non-perturbative input of the DAs. This has to be paralleled with the perfect cancellations in the proton-form-factor calculation in this limit. The breaking of the isospin relations for the TDAs prevents some of these cancellations, but the full cross section is still strongly suppressed, whereas the CZ choice gives a much larger contribution, as expected.
VI. CONCLUSION
Hard exclusive electroproduction of a meson in the backward region thus opens a new window on the understanding of hadronic physics in the framework of the collinear-factorisation approach of QCD. Of course, the most important and most difficult problem to solve, in order to extract reliable and precise information on the p → π Transition Distribution Amplitudes from an incomplete set of observables such as cross sections and asymmetries, is to develop a realistic model for the TDAs. This is the subject of non-perturbative studies such as lattice simulations. We have derived the limit of these TDAs when the pion momentum is small, and we have provided a first estimate of the cross section in the kinematical regime which should be accessible at JLab at 12 GeV.
This estimate, which is unfortunately reliable only in a restricted kinematical domain, also shows an interesting sensitivity to the underlying model for the proton DA. Besides information about the pion content of the proton through the TDAs, backward pion electroproduction is therefore also likely to bring us information about the proton DAs themselves.
Finally, it is worthwhile to note that the analysis presented here could be easily extended to ep → e ′ nπ + , ep → e ′ ∆ 0 π + , ep → e ′ ∆ + π 0 , ep → e ′ ∆ ++ π − and similar reactions with a neutron target, for which data can also be expected [25].
FIG. 1: (Backward) electroproduction of a pion on a proton in the center-of-mass frame of the γ⋆-proton system.

FIG. 3: dσ/dΩ*_π|_{θ*_π = π} for ξ = 0.8 as a function of Q^2.
TABLE I: 14 of the 21 diagrams contributing to the hard-scattering amplitude, with their associated coefficients T_α and T'_α. The first seven with the u-quark lines inverted are not drawn. The crosses represent the virtual-photon vertex.
Acknowledgments

We are thankful to P. Bertin, V. Braun
[1] B. Pire and L. Szymanowski, Phys. Lett. B 622 (2005) 83 [arXiv:hep-ph/0504255]. Note that a multiplicative factor 4ξ^2 is missing in Eq. 23 of the published version.
[2] B. Pire and L. Szymanowski, PoS HEP2005 (2006) 103 [arXiv:hep-ph/0509368].
[3] L. L. Frankfurt, P. V. Pobylitsa, M. V. Polyakov and M. Strikman, Phys. Rev. D 60 (1999) 014010 [arXiv:hep-ph/9901429].
[4] J. P. Lansberg, B. Pire and L. Szymanowski, Nucl. Phys. A 782 (2007) 16 [arXiv:hep-ph/0607130].
[5] B. Pire and L. Szymanowski, Phys. Rev. D 71 (2005) 111501 [arXiv:hep-ph/0411387].
[6] J. C. Collins, L. Frankfurt and M. Strikman, Phys. Rev. D 56 (1997) 2982 [arXiv:hep-ph/9611433].
[7] B. Pire, M. Segond, L. Szymanowski and S. Wallon, Phys. Lett. B 639 (2006) 642 [arXiv:hep-ph/0605320].
[8] J. P. Lansberg, B. Pire and L. Szymanowski, Phys. Rev. D 73 (2006) 074014 [arXiv:hep-ph/0602195].
[9] B. C. Tiburzi, Phys. Rev. D 72 (2005) 094001 [arXiv:hep-ph/0508112].
[10] W. Broniowski and E. Ruiz Arriola, arXiv:hep-ph/0701243.
[11] S. Noguera, private communication.
[12] W. Broniowski and E. Ruiz Arriola, Phys. Lett. B 574 (2003) 57 [arXiv:hep-ph/0307198]; L. Theussl et al., Eur. Phys. J. A 20 (2004) 483 [arXiv:nucl-th/0211036]; A. E. Dorokhov and L. Tomio, Phys. Rev. D 62 (2000) 014016 [arXiv:hep-ph/9803329]; F. Bissey et al., Phys. Lett. B 547 (2002) 210 [arXiv:hep-ph/0207107]; F. Bissey et al., Phys. Lett. B 587 (2004) 189 [arXiv:hep-ph/0310184].
[13] B. Pasquini and S. Boffi, Phys. Rev. D 73 (2006) 094001 [arXiv:hep-ph/0601177].
[14] G. Laveissiere et al. [JLab Hall A Collaboration], Phys. Rev. C 69 (2004) 045203 [arXiv:nucl-ex/0308009].
[15] M. Ungaro et al. [CLAS Collaboration], Phys. Rev. Lett. 97 (2006) 112003 [arXiv:hep-ex/0606042]; R. W. Gothe [CLAS Collaboration], AIP Conf. Proc. 814 (2006) 278.
[16] A. Airapetian et al. [HERMES Collaboration], Phys. Lett. B 535 (2002) 85 [arXiv:hep-ex/0112022].
[17] V. M. Braun, D. Y. Ivanov, A. Lenz and A. Peters, Phys. Rev. D 75 (2007) 014021 [arXiv:hep-ph/0611386].
[18] P. V. Pobylitsa, M. V. Polyakov and M. Strikman, Phys. Rev. Lett. 87 (2001) 022001 [arXiv:hep-ph/0101279].
[19] P. J. Mulders, Phys. Rept. 185 (1990) 83.
[20] V. L. Chernyak and A. R. Zhitnitsky, Phys. Rept. 112 (1984) 173.
[21] J. P. Lansberg, B. Pire and L. Szymanowski, to appear in Phys. Rev. D (rapid communication) [arXiv:0710.1267].
[22] S. Adler and R. Dashen, Current Algebras, Benjamin, New York, 1968.
[23] G. P. Lepage and S. J. Brodsky, Phys. Rev. D 22 (1980) 2157.
[24] J. Roche, C. E. Hyde-Wright, B. Michel, C. Munoz Camacho et al. [Hall A Collaboration], arXiv:nucl-ex/0609015.
[25] M. Garçon and F. X. Girod, private communication.
| [] |
[
"Quantum convolutional neural network for classical data classification",
"Quantum convolutional neural network for classical data classification"
] | [
"Tak Hur \nDepartment of Physics\nImperial College London\nSW7 2BWLondonUK\n",
"Leeseok Kim \nDepartment of Electrical and Computer Engineering\nUniversity of New Mexico\n87131AlbuquerqueNMUSA\n",
"Daniel K Park \nSungkyunkwan University Advanced Institute of Nanotechnology\n16419SuwonRepublic of Korea\n"
] | [
"Department of Physics\nImperial College London\nSW7 2BWLondonUK",
"Department of Electrical and Computer Engineering\nUniversity of New Mexico\n87131AlbuquerqueNMUSA",
"Sungkyunkwan University Advanced Institute of Nanotechnology\n16419SuwonRepublic of Korea"
] | [] | With the rapid advance of quantum machine learning, several proposals for the quantum-analogue of convolutional neural network (CNN) have emerged. In this work, we benchmark fully parameterized quantum convolutional neural networks (QCNNs) for classical data classification. In particular, we propose a quantum neural network model inspired by CNN that only uses two-qubit interactions throughout the entire algorithm. We investigate the performance of various QCNN models differentiated by structures of parameterized quantum circuits, quantum data encoding methods, classical data pre-processing methods, cost functions and optimizers on MNIST and Fashion MNIST datasets. In most instances, QCNN achieved excellent classification accuracy despite having a small number of free parameters. The QCNN models performed noticeably better than CNN models under the similar training conditions. Since the QCNN algorithm presented in this work utilizes fully parameterized and shallow-depth quantum circuits, it is suitable for Noisy Intermediate-Scale Quantum (NISQ) devices. arXiv:2108.00661v2 [quant-ph] | 10.1007/s42484-021-00061-x | [
"https://arxiv.org/pdf/2108.00661v2.pdf"
] | 236,772,493 | 2108.00661 | aded97da3005e58fb70c321fa1b2bba7cd5affb6 |
Quantum convolutional neural network for classical data classification
Tak Hur
Department of Physics
Imperial College London
SW7 2BWLondonUK
Leeseok Kim
Department of Electrical and Computer Engineering
University of New Mexico
87131AlbuquerqueNMUSA
Daniel K Park
Sungkyunkwan University Advanced Institute of Nanotechnology
16419SuwonRepublic of Korea
With the rapid advance of quantum machine learning, several proposals for the quantum-analogue of convolutional neural network (CNN) have emerged. In this work, we benchmark fully parameterized quantum convolutional neural networks (QCNNs) for classical data classification. In particular, we propose a quantum neural network model inspired by CNN that only uses two-qubit interactions throughout the entire algorithm. We investigate the performance of various QCNN models differentiated by structures of parameterized quantum circuits, quantum data encoding methods, classical data pre-processing methods, cost functions and optimizers on MNIST and Fashion MNIST datasets. In most instances, QCNN achieved excellent classification accuracy despite having a small number of free parameters. The QCNN models performed noticeably better than CNN models under the similar training conditions. Since the QCNN algorithm presented in this work utilizes fully parameterized and shallow-depth quantum circuits, it is suitable for Noisy Intermediate-Scale Quantum (NISQ) devices.
I. INTRODUCTION
Machine learning techniques with artificial neural networks are ubiquitous in modern society, as the ability to make reliable predictions from vast amounts of data is essential in various domains of science and technology. A convolutional neural network (CNN) is one such example, especially for data with a large number of features. It effectively captures spatial correlations within data and learns important features [1], which is shown to be useful for many pattern recognition problems, such as image classification, signal processing, and natural language processing [2]. It has also opened the path to Generative Adversarial Networks (GANs) [3]. CNNs are also rising as a useful tool for scientific research, such as in high energy physics [4,5], gravitational wave detection [6] and statistical physics [7]. By all means, the computational power required for the success of machine learning algorithms increases with the volume of data, which is growing at an overwhelming rate. With the potential of quantum computers to outperform any foreseeable classical computer for solving certain computational tasks, quantum machine learning (QML) has emerged as a potential solution to address the challenge of handling an ever-increasing amount of data. For example, several innovative quantum machine learning algorithms have been proposed to offer speedups over their classical counterparts [8-13]. Motivated by the benefits of CNN and the potential power of QML, Quantum Convolutional Neural Network (QCNN) algorithms have been developed [14-22] (see Appendix A for a brief summary and comparison of other approaches to QCNN). Previous constructions of QCNN have reported success either in developing efficient quantum arithmetic operations that exactly implement the basic functionalities of classical CNN, or in developing parameterized quantum circuits inspired by key characteristics of CNN. While the former likely requires fault-tolerant quantum devices, the latter has been focused on quantum data classification. In particular, Cong et al. proposed a fully parameterized quantum circuit (PQC) architecture inspired by CNN and demonstrated its success for some quantum many-body problems [14]. However, a study of fully parameterized QCNN performing pattern recognition, such as classification, on classical data is missing.

[* These authors contributed equally to this work and are listed in alphabetical order. † [email protected]]
In this work, we present a fully parameterized quantum circuit model for QCNN that solves supervised classification problems on classical data. In a similar vein to [14], our model only uses two-qubit interactions throughout the entire algorithm in a systematic way. The PQC models, also known as variational quantum circuits [23], are attractive since they are expected to be suitable for Noisy Intermediate-Scale Quantum (NISQ) hardware [24,25]. Another advantage of QCNN models for NISQ computing is their intrinsically shallow circuit depth. Furthermore, QCNN models studied in this work exploit entanglement, which is a global property, and hence have the potential to transcend classical CNN, which is only able to capture local correlations. We benchmark the performance of the parameterized QCNN with respect to several variables, such as quantum data encoding methods, structures of parameterized quantum circuits, cost functions, and optimizers, using two standard datasets, namely MNIST and Fashion MNIST, on Pennylane [26]. The quantum encoding benchmark also examines classical dimensionality reduction methods, which are essential for early quantum computers with a limited number of logical qubits. The various QCNN models tested in this work employ a small number of free parameters, ranging from 12 to 51. Nevertheless, all QCNN models produced high classification accuracy, with the best case being about 99% for MNIST and about 94% for Fashion MNIST. Moreover, we discuss a QCNN model that only requires nearest-neighbour qubit interactions, which is a desirable feature for NISQ computing. Comparing the classification performances of QCNN and CNN models shows that QCNN is more favorable than CNN under similar training conditions for both benchmarking datasets.
The remainder of the paper is organized as follows. Section II sets the theoretical framework of this work by describing the classification problem, the QCNN algorithm, and various methods for encoding classical data as a quantum state. Section III describes variables of the QCNN model, such as parameterized quantum circuits, cost functions, and classical data pre-processing methods, that are subject to our benchmarking study. Section IV compares and presents the performance of various designs of QCNN for binary classification of MNIST and Fashion MNIST datasets. Conclusions are drawn and directions for future work are suggested in Section V.
II. THEORETICAL FRAMEWORK
A. Classification
Classification is an example of pattern recognition, which is a fundamental problem in data science that can be effectively addressed via machine learning. The goal of L-class classification is to infer the class label of an unseen data point x̃ ∈ C^N, given a labelled data set D = {(x_1, y_1), ..., (x_M, y_M)} ⊂ C^N × {0, 1, ..., L − 1}.
The classification problem can be solved by training a parameterized quantum circuit. Hereinafter, we refer to fully parameterized quantum circuits trained for machine learning tasks as Quantum Neural Networks (QNNs). For this supervised classification task, a QNN is trained by optimizing the parameters of quantum gates so as to minimize the cost function
C(\theta) = \sum_{i=1}^{M} \alpha_i\, c\big(y_i, f(x_i, \theta)\big), where f(x_i, θ) is the machine learning model defined by the set of parameters θ that predicts the label of x_i, c(a, b) quantifies the dissimilarity between a and b, and α_i is a weight that satisfies \sum_{i=1}^{M} \alpha_i = 1. After the training is finished, the class label for the unseen data point x̃ is determined as ỹ = f(x̃, θ*), where θ* = arg min_θ C(θ). If the problem is restricted to binary classification (i.e. L = 2), the class label can be inferred from a single-qubit von Neumann measurement. For example, the sign of the expectation value of the σ_z observable can represent the binary label [27]. Hereinafter, we focus on binary classification; potential future work towards multiclass classification will be discussed in Sec. V.
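A minimal sketch of how such a cost can be evaluated classically from single-qubit measurement outcomes is given below. The mean-squared-error choice of c and all function names are our own illustration, not a prescription of the text:

```python
import numpy as np

def predict_label(expval_z):
    """Infer the binary class label from the sign of <sigma_z>."""
    return 0 if expval_z >= 0 else 1

def cost(expvals, labels, weights=None):
    """C(theta) = sum_i alpha_i c(y_i, f(x_i, theta)) with a mean-squared-error
    choice of c: labels {0, 1} are mapped to target expectations {+1, -1}."""
    expvals, labels = np.asarray(expvals), np.asarray(labels)
    targets = 1.0 - 2.0 * labels
    if weights is None:
        weights = np.full(len(labels), 1.0 / len(labels))  # uniform alpha_i
    return float(np.sum(weights * (expvals - targets) ** 2))

print(cost([0.7, -0.2, 0.9], [0, 1, 0]))
```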
B. Quantum Convolutional Neural Network
An interesting family of quantum neural networks utilizes tree-like (or hierarchical) structures [28], with which the number of qubits from a preceding layer is reduced by a factor of two for the subsequent layer. Such architectures consist of O(log(n)) layers for n input qubits, thereby permitting shallow circuit depth. Moreover, they can avoid one of the most critical problems in PQC-based algorithms, known as the "barren plateau", thereby guaranteeing trainability [29]. These structures also make a natural connection to tensor networks, which serve as a useful ground for exploring many-body physics, neural networks, and the interplay between them.
The progressive reduction of the number of qubits is analogous to the pooling operation in CNN. A distinct feature of the QCNN architecture is the translational invariance, which forces the blocks of parameterized quantum gates to be identical within a layer. The quantum state resulting from the ith layer of QCNN can be expressed as
|\psi_i(\theta_i)\rangle\langle\psi_i(\theta_i)| = \mathrm{Tr}_{B_i}\big( U_i(\theta_i)\, |\psi_{i-1}\rangle\langle\psi_{i-1}|\, U_i(\theta_i)^\dagger \big),  (1)
where Tr_{B_i}(·) is the partial trace operation over the subsystem B_i ∈ C^{2^{n/2^i}}, U_i is the parameterized unitary gate operation that includes quantum convolution and the gate part of pooling, and |ψ_0⟩ = |0⟩^{⊗n}. Following the existing nomenclature, we refer to the structure (or template) of the parameterized quantum circuit as ansatz. In our QCNN architecture, U_i always consists of two-qubit quantum circuit blocks, and the convolution and pooling parts each use identical quantum circuit blocks within a given layer. Since a two-qubit gate requires at most 15 parameters [30], in the ith layer, consisting of l_i > 0 independent convolutional filters and one pooling operation, the maximum number of parameters subject to optimization is 15(l_i + 1). Then the total number of parameters is at most 15 \sum_{i=1}^{\log_2(n)} (l_i + 1) if the convolution and pooling operations are iterated until only one qubit remains. One can also consider an interesting hybrid architecture in which the QCNN layers are stacked until m qubits remain and then a classical neural network takes over from the m-qubit measurement outcomes. In this case, the number of quantum circuit parameters is less than the maximum number given above. Usually, l_i is set to be a constant. Therefore, the number of parameters subject to optimization grows as O(log(n)), which is an exponential reduction compared to the general hierarchical structure discussed in Ref. [28]. This also implies that the number of parameters can be suppressed double-exponentially with the size of the classical data if the exponentially large state space is fully utilized for encoding the classical data. An example quantum circuit for a QCNN algorithm with eight qubits for binary classification is depicted in Fig. 1. Generalizing Fig. 1 to larger systems can be done simply by connecting all neighboring qubits with the two-qubit parameterized gates in the translationally invariant way.
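The parameter-count bound above is simple enough to tabulate directly; a minimal sketch (our own helper, not from the original text) follows:

```python
from math import log2

def max_num_parameters(n_qubits, filters_per_layer):
    """Upper bound 15 * sum_i (l_i + 1) when convolution + pooling layers
    halve the register from n_qubits down to a single qubit."""
    n_layers = int(log2(n_qubits))
    assert len(filters_per_layer) == n_layers, "one l_i per layer expected"
    return 15 * sum(l + 1 for l in filters_per_layer)

# Example: the 8-qubit circuit of Fig. 1 with l1 = l2 = l3 = 2 filters per layer
print(max_num_parameters(8, [2, 2, 2]))  # -> 135
```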
FIG. 1. A schematic of the QCNN algorithm for an example of 8 input qubits. The quantum circuit consists of three parts: quantum data encoding (green rectangle), convolutional filters (blue rounded rectangle), and pooling (red circle). The quantum data encoding is fixed in a given structure of QCNN, while the convolutional filter and pooling use parameterized quantum gates. There are three layers in this example, and in each layer, multiple convolutional filters can be applied. The number of filters for the ith layer is denoted by l_i. In each layer, the convolutional filter applies the same two-qubit ansatz to nearest-neighbour qubits in a translationally-invariant way. Similarly, pooling operations within the layer use the same ansatz. In this example, the pooling operation is represented as a controlled unitary transformation, which is activated when the control qubit is 1. However, general controlled operations can also be considered. The measurement outcome of the quantum circuit is used to calculate the user-defined cost function. The classical computer is used to compute the new set of parameters based on the gradient, and the quantum circuit parameters are updated accordingly for the subsequent round.
The optimization of the gate parameters can be carried out by iteratively updating the parameters based on the gradient of the cost function until some termination condition is reached. The cost-function gradient can be calculated classically or by using a quantum computer via the parameter-shift rule [31-33]. When the parameter-shift rule is used, QCNN requires an exponentially smaller number of quantum circuit executions compared to the general hierarchical structures inspired by tensor networks (e.g. the tree tensor network) in Ref. [28]. While the latter uses O(n) runs, the former only uses O(log(n)) runs.
C. Quantum data encoding
Many machine learning techniques transform input data X into a different space to make it easier to work with. This transformation φ : X → X′ is often called a feature map. In quantum computing, the same analogy can be applied to perform a quantum feature map, which acts as φ : X → H, where the vector space H is a Hilbert space [34]. In fact, such feature mapping is mandatory when one uses quantum machine learning on classical data, since classical data must be encoded as a quantum state [9,11,35-38]. The quantum feature map x ∈ X → |φ(x)⟩ ∈ H is equivalent to applying a unitary transformation U_φ(x) to the initial state |0⟩^{⊗n} to produce U_φ(x)|0⟩^{⊗n} = |φ(x)⟩, where n is the number of qubits.
This refers to the green rectangle in Fig. 1.
There exist numerous structures of U φ (x) to encode the classical input data x into a quantum state. In this work, we benchmark the performance of the QCNN algorithm with several different quantum data encoding techniques. These techniques are explained in detail in this section.
Amplitude encoding
One of the most general approaches to encode classical data as a quantum state is to associate normalized input data with the probability amplitudes of a quantum state. This encoding scheme is known as amplitude encoding (AE). The amplitude encoding represents input data x = (x_1, ..., x_N)^T of dimension N = 2^n as amplitudes of an n-qubit quantum state |φ(x)⟩ as
U_\phi(x) : x \in \mathbb{R}^N \rightarrow |\phi(x)\rangle = \frac{1}{\|x\|} \sum_{i=1}^{N} x_i\, |i\rangle,  (2)
where |i⟩ is the ith computational basis state. Clearly, with amplitude encoding, a quantum computer can represent exponentially many classical data. This can be of great advantage in QCNN algorithms. Since the number of parameters subject to optimization scales as O(log(n)) (see Sec. II B), the amplitude encoding reduces the number of parameters doubly-exponentially with the size (i.e. dimension) of the classical data. However, the quantum circuit depth for amplitude encoding usually grows as O(poly(N)), although there exists a method to reduce it to O(log(N)) at the cost of increasing the number of qubits to O(N) [38].
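Since the benchmarks in this work are run on Pennylane [26], a minimal amplitude-encoding sketch in that framework may be useful; the device choice and toy input below are our own:

```python
import pennylane as qml
import numpy as np

n_qubits = 3  # encodes N = 2**3 = 8 classical features
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def amplitude_encode(x):
    # Implements Eq. (2); normalize=True divides by ||x|| internally
    qml.AmplitudeEmbedding(features=x, wires=range(n_qubits), normalize=True)
    return qml.state()

x = np.arange(1.0, 9.0)     # toy 8-dimensional input
print(amplitude_encode(x))  # amplitudes proportional to x
```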
Qubit encoding
The computational overhead of amplitude encoding motivates qubit encoding, which uses a constant quantum circuit depth while using O(N) qubits. The qubit encoding embeds one classical data point x_i, rescaled to lie between 0 and π, into a single qubit as |φ(x_i)⟩ = cos(x_i/2)|0⟩ + sin(x_i/2)|1⟩ for i = 1, ..., N. Hence, the qubit encoding maps input data x = (x_1, ..., x_N)^T to N qubits as
U_\phi(x) : x \in \mathbb{R}^N \rightarrow |\phi(x)\rangle = \bigotimes_{i=1}^{N} \Big( \cos\frac{x_i}{2}\, |0\rangle + \sin\frac{x_i}{2}\, |1\rangle \Big),  (3)
where
x i ∈ [0, π) for all i. The encoding circuit can be expressed with a unitary operator U φ (x) = N j=1 U xj where U xj = e −i x j 2 σy := cos xj 2 − sin xj 2 sin xj 2 cos xj 2 .
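A minimal sketch of qubit encoding, assuming the features have already been rescaled to [0, π), is given below; it is equivalent to Pennylane's AngleEmbedding template with rotation="Y".

    import pennylane as qml
    import numpy as np

    n_qubits = 8
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def encode(x):
        for i in range(n_qubits):
            qml.RY(x[i], wires=i)   # prepares cos(x_i/2)|0> + sin(x_i/2)|1> on qubit i
        return qml.state()

    x = np.random.uniform(0, np.pi, n_qubits)
    state = encode(x)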
Dense qubit encoding
In principle, since the quantum state of one qubit can be described with two real-valued parameters, two classical data values can be encoded in one qubit. Thus, the qubit encoding described above can be generalized to encode two classical data values per qubit by using rotations around two orthogonal axes [39]. By choosing these to be the $x$ and $y$ axes of the Bloch sphere, this method, which we refer to as dense qubit encoding, encodes $x_j = (x_{j1}, x_{j2})$ into a single qubit as
$$|\phi(x_j)\rangle = e^{-i\frac{x_{j2}}{2}\sigma_y}\, e^{-i\frac{x_{j1}}{2}\sigma_x}\, |0\rangle.$$
Hence, dense qubit encoding maps $N$-dimensional input data $x = (x_1, \ldots, x_N)^T$ to $N/2$ qubits as
$$U_\phi(x) : x \in \mathbb{R}^N \rightarrow |\phi(x)\rangle = \bigotimes_{j=1}^{N/2} e^{-i\frac{x_{N/2+j}}{2}\sigma_y}\, e^{-i\frac{x_j}{2}\sigma_x}\,|0\rangle. \qquad (4)$$
Note that there is freedom in choosing which pair of classical data values is encoded in each qubit. In this work, we chose the pairing shown in Eq. (4), but one may instead choose to encode $x_{2j-1}$ and $x_{2j}$ in the $j$th qubit.
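A minimal sketch of dense qubit encoding following the pairing of Eq. (4) is given below; the input dimension N = 16 is an assumption for illustration.

    import pennylane as qml
    import numpy as np

    N = 16                      # input dimension (assumed)
    n_qubits = N // 2
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def encode(x):
        for j in range(n_qubits):
            qml.RX(x[j], wires=j)             # e^{-i x_j sigma_x / 2}
            qml.RY(x[n_qubits + j], wires=j)  # e^{-i x_{N/2+j} sigma_y / 2}
        return qml.state()

    x = np.random.uniform(0, np.pi, N)
    state = encode(x)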
Hybrid encoding
As shown in the previous sections, amplitude encoding is advantageous when the quantum circuit width (i.e., the number of qubits) is considered, while qubit encoding is advantageous when the quantum circuit depth is considered. These two encoding schemes represent the extreme ends of the quantum circuit complexities for loading classical data into a quantum system. In this section, we introduce simple hybrid encoding methods that compromise between these two extremes. In essence, hybrid encoding applies amplitude encoding to a number of independent blocks of qubits in parallel. Let us denote the number of qubits in each independent block that amplitude-encodes classical data by m. Then each block can encode $O(2^m)$ classical data values. Let us also denote the number of such blocks by b. Then the quantum system of b blocks contains $b2^m$ classical data values. The first hybrid encoding, which we refer to as hybrid direct encoding (HDE), can be expressed as
$$U_\phi(x) : x \in \mathbb{R}^N \rightarrow |\phi(x)\rangle = \bigotimes_{j=1}^{b} \frac{1}{\|x_j\|}\sum_{i=1}^{2^m} x_{ij}\,|i\rangle_j. \qquad (5)$$
Note that each block can have a different normalization constant, and hence the amplitudes may not be a faithful representation of the data unless the normalization constants have similar values. To circumvent this problem, we also introduce hybrid angle encoding (HAE), which can be expressed as
$$|\phi(x)\rangle = \bigotimes_{k=1}^{b} \sum_{i=1}^{2^m} \prod_{j=0}^{m-1} \cos^{1-i_j}\!\left(x_{g(j),k}\right)\sin^{i_j}\!\left(x_{g(j),k}\right) |i\rangle_k, \qquad (6)$$
where $i \in \{0,1\}^m$ is the binary representation of i, with $i_j$ being the (j+1)th bit of the bit string, $x_{j,k}$ represents the jth element of the data assigned to the kth block of qubits, and $g(j) = 2^j + \sum_{l=0}^{j-1} i_l 2^l$. In this case, having b blocks of m qubits allows $b(2^m - 1)$ classical data values to be encoded. The performance of these hybrid encoding methods will be compared in Sec. IV.
Since the hybrid methods are parallelized, the quantum circuit depth is reduced to $O(2^m)$ with $m < N$, while the number of qubits is $O(mN/2^m)$. Therefore, the hybrid encoding algorithms use fewer qubits than qubit encoding and a shallower quantum circuit than amplitude encoding. Finding the best trade-off between the quantum circuit width and depth (i.e., the choice of m) depends on the specific details of the given quantum hardware.
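A minimal sketch of hybrid direct encoding, assuming b = 2 blocks of m = 4 qubits as in the benchmarks of Sec. IV, is given below; each block is amplitude-encoded independently, which makes the per-block normalization of Eq. (5) explicit.

    import pennylane as qml
    import numpy as np

    b, m = 2, 4                                  # two blocks of four qubits
    dev = qml.device("default.qubit", wires=b * m)

    @qml.qnode(dev)
    def encode(x):
        for k in range(b):
            block = x[k * 2**m:(k + 1) * 2**m]
            qml.AmplitudeEmbedding(features=block,
                                   wires=range(k * m, (k + 1) * m),
                                   normalize=True)   # each block gets its own 1/||x_k||
        return qml.state()

    x = np.random.rand(b * 2**m)                 # 32 features on 8 qubits
    state = encode(x)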
III. BENCHMARK VARIABLES
A. Ansatz
An important step in the construction of a QCNN model is the choice of ansatz. In general, the QCNN structure is flexible enough to use an arbitrary two-qubit unitary operation in each convolutional filter and each pooling step. However, we constrain our design such that all convolutional filters use the same ansatz, and likewise all pooling operations share a common ansatz (which differs from that of the convolutional filters). We later show that the QCNN with a fixed ansatz provides excellent results for the benchmarking datasets. While using a different ansatz for each filter could be an interesting route to further improvements, it would increase the number of parameters to be optimized.
In the following, we introduce the set of convolutional and pooling ansatze (i.e., parameterized quantum circuit templates) used in our QCNN models.
Convolution filter
Parameterized quantum circuits for convolutional layers in QCNN are composed of different configurations of single-qubit and two-qubit gate operations. Most circuit diagrams in Fig. 2 are inspired by past studies. For instance, circuit 1 is used as the parameterized quantum circuit for training a tree tensor network (TTN) [28]. Circuits 2, 3, 4, 5, 7, and 8 are taken from the work by Sim et al. [40], which includes an analysis of the expressibility and entangling capability of four-qubit parameterized quantum circuits. We modified these quantum circuits to two-qubit forms to use them as building blocks of the convolutional layer, which always consists of two qubits. Circuits 7 and 8 are reduced versions of circuits that recorded the best expressibility in that study. Circuit 2 is a two-qubit version of the quantum circuit that exhibited the best entangling capability. Circuits 3, 4, and 5 are drawn from circuits with balanced significance in both expressibility and entangling capability. Circuit 6 was developed as a candidate two-body Variational Quantum Eigensolver (VQE) entangler in Ref. [41]. This circuit is also known to be able to implement an arbitrary SO(4) gate [42]. In fact, a full VQE entangler can be constructed by linearly arranging the SO(4) gates across the input qubits. Since this structure is similar to the structure of convolutional layers in QCNN, the SO(4) gate is a natural candidate for the convolutional layer. Circuit 9 represents the parameterization of an arbitrary SU(4) gate [20,30].
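To illustrate how a convolutional layer shares a single ansatz across the register, the following is a minimal sketch on 8 qubits; the two-parameter filter shown here is one plausible reading of convolutional circuit 1 (two R_y rotations and a CNOT), and the even/odd pairing with a wrap-around pair is an assumption about the layer layout rather than a specification from Fig. 2.

    import pennylane as qml

    def conv_filter(params, wires):
        # A two-qubit filter: Ry on each qubit followed by a CNOT (circuit-1 style).
        qml.RY(params[0], wires=wires[0])
        qml.RY(params[1], wires=wires[1])
        qml.CNOT(wires=wires)

    def conv_layer(params, wires):
        # Translational invariance: the same `params` are reused by every filter.
        n = len(wires)
        for i in range(0, n, 2):   # pairs (0,1), (2,3), ...
            conv_filter(params, [wires[i], wires[(i + 1) % n]])
        for i in range(1, n, 2):   # pairs (1,2), (3,4), ..., including the boundary pair
            conv_filter(params, [wires[i], wires[(i + 1) % n]])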
Pooling
The pooling layer applies parameterized quantum gates to two qubits and traces out one of them to reduce the two-qubit state to a one-qubit state. As with the choice of ansatz for the convolutional filter, there is a variety of possible two-qubit circuits for the pooling layer. In this work, we choose a simple two-qubit circuit with two free parameters for the pooling layer. The circuit is shown in Fig. 3.
Applying parameterized gates in the pooling step in conjunction with convolutional circuit 9 might be redundant, since the latter is already an arbitrary SU(4) gate. Thus, for convolutional circuit 9, we test two QCNN constructions, with and without the parameterized two-qubit circuit in the pooling layer. In the latter, the pooling layer only consists of tracing out one qubit.
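A minimal sketch of the Fig. 3 pooling ansatz is shown below; the control-on-0 rotation is implemented by sandwiching the control qubit with X gates, and "tracing out" amounts to simply ignoring the control qubit in later layers, since only the sink qubits are used downstream.

    import pennylane as qml

    def pooling(params, source, sink):
        qml.CRZ(params[0], wires=[source, sink])  # Rz(theta_1) activated when source is 1
        qml.PauliX(wires=source)
        qml.CRX(params[1], wires=[source, sink])  # Rx(theta_2) activated when source is 0
        qml.PauliX(wires=source)
        # The `source` qubit is not used afterwards, which is equivalent to tracing it out.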
B. Cost function
The variational parameters of the ansatz are updated to minimize the cost function calculated on the training data set. In this benchmark study, we test the performance of QCNN models with two different cost functions, namely the mean squared error and the cross-entropy loss.
Mean Squared Error
Before training the QCNN, we map the original class labels {0, 1} to {1, −1}, respectively, to associate them with the eigenvalues of the qubit observables. Then the mean squared error (MSE) between predictions and class labels becomes
$$C(\theta) = \frac{1}{N}\sum_{i=1}^{N}\left(\hat{M}_z(\psi_i(\theta)) - \tilde{y}_i\right)^2, \qquad (7)$$
where $\hat{M}_z(\psi_i) = \langle\psi_i|\sigma_z|\psi_i\rangle$ is the Pauli-Z expectation value of the one-qubit state extracted from the QCNN for the ith training data, and $\tilde{y}_i \in \{1, -1\}$ is the label of the corresponding training data (i.e., $\tilde{y}_i = 1 - 2y_i$). Since the QCNN performs a single-qubit measurement in the Z basis, the final state can be thought of as a mixed state $a_i|0\rangle\langle 0| + b_i|1\rangle\langle 1|$. Minimizing the cost function above with respect to θ then corresponds to forcing $a_i$ to be as large as possible relative to $b_i$ if the ith training data is labelled 0, and vice versa if it is labelled 1.
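A minimal sketch of Eq. (7) as a training cost is given below; qcnn is assumed to be a QNode that returns the Pauli-Z expectation value of the readout qubit.

    from pennylane import numpy as np

    def mse_cost(params, X_batch, y_batch, qcnn):
        loss = 0.0
        for x, y in zip(X_batch, y_batch):
            y_tilde = 1 - 2 * y                       # map labels {0, 1} -> {1, -1}
            loss = loss + (qcnn(params, x) - y_tilde) ** 2
        return loss / len(X_batch)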
Cross-Entropy Loss
Cross-entropy loss is widely used in training classical neural networks. It measures the performance of a classification model whose output is a probability between 0 and 1. Owing to the probabilistic nature of quantum mechanics, one can define a cross-entropy loss from the probabilities of measuring computational basis states in the single-qubit measurement of the QCNN. The cross-entropy loss can be expressed as
$$C(\theta) = -\sum_{i=1}^{N}\Big[\, y_i \log\big(\Pr[\psi_i(\theta) = 1]\big) + (1 - y_i)\log\big(\Pr[\psi_i(\theta) = 0]\big)\Big], \qquad (8)$$
where $y_i \in \{0, 1\}$ is the class label and $\Pr[\psi_i(\theta) = y_i]$ is the probability of measuring the computational basis state $|y_i\rangle$ from the QCNN circuit.
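A minimal sketch of the cross-entropy cost is given below; qcnn_probs is assumed to be a QNode returning qml.probs on the readout qubit, i.e., the pair [Pr(0), Pr(1)], and a small eps guards the logarithms.

    from pennylane import numpy as np

    def cross_entropy_cost(params, X_batch, y_batch, qcnn_probs, eps=1e-12):
        loss = 0.0
        for x, y in zip(X_batch, y_batch):
            p0, p1 = qcnn_probs(params, x)
            loss = loss - (y * np.log(p1 + eps) + (1 - y) * np.log(p0 + eps))
        return loss / len(X_batch)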
C. Classical data pre-processing
The size of quantum circuits that can be reliably executed on NISQ devices is limited due to noise and the technical challenges of building quantum hardware. Thus, the encoding schemes for high-dimensional data usually require a number of qubits beyond the current capabilities of quantum devices. Therefore, classical dimensionality reduction techniques will be useful in near-term applications of quantum machine learning. In this work, we pre-process data with three classical dimensionality reduction techniques, namely bilinear interpolation, principal component analysis (PCA) [43], and autoencoding (AutoEnc) [44]. For the simulations presented in the following section, amplitude encoding is used only with bilinear interpolation, while all other encoding schemes are tested with PCA and autoencoding. Bilinear interpolation and PCA are carried out using tf.image.resize from TensorFlow and sklearn.decomposition.PCA from scikit-learn, respectively. Autoencoders are capable of modelling complex non-linear functions, while PCA is a simple linear transformation with cheaper and faster computation. Since the pre-processing step should not produce too much computational overhead or result in overfitting, we train a simple autoencoder with one hidden layer. The data in the latent space (i.e., the hidden layer) is then fed to the quantum circuits.
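A minimal sketch of the two simpler pre-processing routes is given below, assuming 28×28 images in a numpy array of shape (n, 28, 28); the 16×16 bilinear target (256 values for 8-qubit amplitude encoding) and the feature counts are assumptions for illustration.

    import numpy as np
    import tensorflow as tf
    from sklearn.decomposition import PCA

    def bilinear_features(images, target=(16, 16)):
        # tf.image.resize expects a channel axis; resize and flatten to 256 features.
        x = tf.image.resize(images[..., np.newaxis].astype("float32"), target,
                            method="bilinear")
        return tf.reshape(x, (images.shape[0], -1)).numpy()

    def pca_features(train_images, test_images, n_components=8):
        # Fit PCA on the flattened training split only, then transform both splits.
        pca = PCA(n_components=n_components)
        train = pca.fit_transform(train_images.reshape(len(train_images), -1))
        test = pca.transform(test_images.reshape(len(test_images), -1))
        return train, test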
IV. SIMULATION
A. QCNN results overview
This section reports the classical simulation results of the QCNN algorithm for binary classification carried out with Pennylane [26]. The test is performed with two standard datasets, namely MNIST and Fashion MNIST, under various conditions as described in the previous section. Note that the MNIST and Fashion MNIST datasets are 28×28 image data, each with ten classes. Our benchmark focuses on binary classification, and hence we select classes 0 and 1 for both datasets.
The variational parameters in the QCNN ansatze are optimized by minimizing the cost function with an optimizer provided in Pennylane [26]. In particular, we tested the Adam [45] and Nesterov momentum [46] optimization algorithms. At each iteration, we create a small batch by randomly selecting data from the training set. Compared to training on the full data set, training on mini-batches not only reduces simulation time but also helps the gradients escape from local minima. For both the Adam and Nesterov momentum optimizers, the batch size was 25 and the learning rate was 0.01. We also fixed the number of iterations at 200 to speed up the training process. Note that training can alternatively be stopped under different conditions, such as when the validation set accuracy does not increase for a predetermined number of consecutive runs [28]. The numbers of training (test) data are 12665 (2115) and 12000 (2000) for the MNIST and Fashion MNIST datasets, respectively. Tables I and II show the mean classification accuracy and one standard deviation obtained from five instances with random initialization of parameters. The number of random initializations is chosen to be the same as that of Ref. [28]. The results are obtained for various QCNN models with different convolutional and pooling circuits and data encoding strategies. When benchmarking the hybrid encoding schemes (i.e., HDE and HAE), we used two blocks of four qubits, which results in 32 and 30 features encoded in 8 qubits, respectively. For all results presented here, training is done with the cross-entropy loss. Similar results are obtained when MSE is used for training, and we present the MSE results in Appendix C. Here we only report the classification results obtained with the Nesterov optimizer, since it consistently provided better convergence. The ansatze in the tables are listed in the same order as the list of convolutional circuits shown in Fig. 2. The last row of each table (i.e., Ansatz 9b) contains the results when the QCNN circuit only consists of convolutional circuit 9 without any unitary gates in the pooling step.
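The following is a minimal sketch of the training loop just described, assuming cost(params, X_batch, y_batch) wraps one of the cost functions of Sec. III B around the QCNN circuit.

    import pennylane as qml
    from pennylane import numpy as np

    def train(cost, params, X_train, y_train,
              steps=200, batch_size=25, stepsize=0.01):
        opt = qml.NesterovMomentumOptimizer(stepsize=stepsize)
        for _ in range(steps):
            # Draw a random mini-batch at every iteration, as described above.
            idx = np.random.randint(0, len(X_train), (batch_size,))
            X_batch, y_batch = X_train[idx], y_train[idx]
            params = opt.step(lambda p: cost(p, X_batch, y_batch), params)
        return params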
The simulation results show that all ansatze perform reasonably well, while those with more free parameters tend to produce higher scores. Since all ansatze perform reasonably well, one may choose the ansatz with the smallest number of free parameters to save training time. For example, by choosing ansatz 4 and amplitude encoding, one can achieve 97.8% classification accuracy using only 24 free parameters in total, instead of achieving 98.4% at the cost of increasing the number of free parameters to 51. It is also important to note that most of the results obtained with hybrid direct encoding and PCA are considerably worse than the others. This is due to the normalization problem discussed in Sec. II C 4, which motivated the development of hybrid angle encoding. We observed that the normalization problem is negligible in the case of autoencoding. The simulation results clearly demonstrate that the normalization problem is resolved by hybrid angle encoding as expected, and reasonably good results can be obtained with this method. For MNIST, HAE with PCA provides the best solution among the hybrid encoding schemes on average. On the other hand, for Fashion MNIST, HDE with autoencoding provides the best solution among the hybrid encoding schemes on average.
In the following, we also compare the two classical dimensionality reduction methods by presenting the overall mean accuracy and standard deviation obtained by averaging over all ansatze and random initializations. The averaged values are presented in Tab. III. As discussed before, HDE with PCA does not perform well for either dataset due to the data normalization issue. Aside from this case, interestingly, PCA works better than autoencoding for the MNIST data, and vice versa for the Fashion MNIST data, suggesting that the choice of the classical pre-processing method should be data-dependent.
Finally, we also examine how the classification performance improves as the number of convolutional filters in each layer increases. For simplicity, we set the number of convolutional filters in each layer to be the same, i.e., $l_1 = l_2 = l_3 = L$ (see Fig. 1 for the definition of $l_i$). Without loss of generality, we pick two ansatze and five encodings. For the ansatze, we choose the one with the smallest number of free parameters and another with arbitrary SU(4) operations. These are circuit 2 and circuit 9b, which use 12 and 45 parameters in total, respectively. For data encoding, we tested amplitude, qubit, and dense encoding. The qubit and dense encodings are further grouped under the two classical dimensionality reduction techniques, PCA and autoencoding. Since the qubit and dense encodings load 8 and 16 features, respectively, we label them as PCA8, AutoEnc8, PCA16, and AutoEnc16 based on the number of features and the dimensionality reduction technique. The classification accuracies for L = {1, 2, 3} are plotted in Fig. 4. The simulation results show that in some cases the classification accuracy can be improved by increasing the number of convolutional filters. For example, the classification accuracy for the MNIST data improves from about 86% to 96% when circuit 2 and dense encoding with autoencoding are used. For Fashion MNIST, the classification accuracy improves from about 88% to 90% when circuit 2 and amplitude encoding are used, and from about 86% to 90% when circuit 2 and qubit encoding with PCA are used. However, we do not observe a general trend with respect to the number of convolutional filters. In particular, the relationship between the classification accuracy and L is less obvious for circuit 9b. We speculate that this is attributable to the fact that circuit 9b implements an arbitrary SU(4), i.e., an arbitrary two-qubit gate, and hence repeated application of an arbitrary SU(4) is redundant.
B. Boundary conditions of the QCNN circuit
The general structure of the QCNN shown in Fig. 1 uses two-qubit gates between the first (top) and last (bottom) qubits, which can be thought of as imposing a periodic boundary condition. One may notice that all-to-all connectivity can be established even without connecting the boundaries. Thus, we tested the classification performance of a QCNN architecture without the two-qubit gates that close the loop. We refer to this case as the open-boundary QCNN. Without loss of generality, we tested QCNNs with two different ansatze: convolutional circuit 2 (Ansatz 2 in Tabs. I and II), which uses the smallest number of free parameters, and convolutional circuit 9, which implements an arbitrary SU(4). In the case of the latter, pooling was done without parameterized gates, and hence the ansatz is equivalent to ansatz 9b in Tabs. I and II. By imposing the open-boundary condition in conjunction with ansatz 9b, one can modify the qubit arrangement of the QCNN circuit so as to use nearest-neighbour qubit interactions only. For an example of an 8-qubit QCNN circuit, the modified structure is depicted in Fig. 5. Such a design is particularly advantageous for NISQ devices that have limited physical qubit connectivity. For example, if one employs the qubit or dense encoding method, the QCNN algorithm can be implemented with a one-dimensional chain of physical qubits.
The simulation results are presented in Tab. IV for the MNIST and Fashion MNIST datasets. These results are attained with one convolutional filter per layer, i.e., $l_1 = l_2 = l_3 = 1$. The simulation results demonstrate that, for the two ansatze tested, the classification performance of open- and periodic-boundary QCNN circuits is similar. Although the number of free parameters is the same under these conditions, depending on the specifications of the quantum hardware, such as the qubit connectivity, the open-boundary QCNN circuit can have a shallower depth. The open-boundary circuit with ansatz 9b is even more attractive for NISQ devices, since the convolutional operations can be done with only nearest-neighbour qubit interactions, as mentioned above.

TABLE III. Comparison of the classical dimensionality reduction methods for angle encoding, dense encoding, and hybrid encoding. For each encoding scheme, classification results from all instances (i.e., various ansatze and random initializations of parameters) are averaged to produce the mean and standard deviation.
C. Comparison to CNN
We now compare the classification results of QCNN to those of a classical CNN. Our goal is to compare the classification accuracy of the two given a similar number of parameters subject to optimization. To make a fair comparison, we fix all hyperparameters of the two methods to be the same, except that we used the Adam optimizer for CNN since it performed significantly better than the Nesterov momentum optimizer. A detailed description of the classical CNN architecture is provided in Appendix B.
It is important to note that a CNN can be trained effectively with such a small number of parameters only when the number of nodes in the input layer is small. Therefore, the CNN results are only comparable to those of the qubit and dense encoding cases, which require 8 and 16 classical input nodes, respectively. We designed four different CNN models with 26, 34, 44, and 56 free parameters to make them comparable to the QCNN models. In these cases, a dimensionality reduction technique must be applied first. For hybrid and amplitude encoding, which require relatively simpler data pre-processing, the number of nodes in the CNN input layer would be too large to be trained with a small number of parameters as in QCNN.
Comparing the values in Tab. V with the QCNN results, one can see that the QCNN models perform better than their corresponding CNN models for the MNIST dataset. The same conclusion also holds for the Fashion MNIST dataset, except for the CNN models with 44 and 56 parameters, which achieve similar performance to their corresponding QCNN models. Another noticeable result is that the QCNN models have considerably smaller standard deviations than the CNN models on average. This implies that the QCNN models not only achieve higher classification accuracy than the CNN models under similar training conditions but are also less sensitive to the random initialization of the free parameters.
In Fig. 6, we present two representative examples of the cross-entropy loss as a function of the number of training iterations. For simplicity, we show such data for two cases in MNIST data classification: circuit 9b with qubit encoding and autoencoding, and circuit 9b with dense encoding and PCA. Considering the number of free parameters, these cases are comparable to the CNN models with 8 inputs and autoencoding and with 16 inputs and PCA, respectively. Recall that the mean classification accuracy and one standard deviation in QCNN (CNN) is 96.6 ± 2.2 (90.4 ± 13.4) for the first case, and 98.3 ± 0.5 (93.0 ± 13.4) for the second case. Figure 6 shows that in both cases the QCNN models are trained faster than the CNN models, with the advantage manifesting more clearly in the first case. Furthermore, the standard deviations in the QCNN models are significantly smaller than those of the CNN models.
V. CONCLUSION
Fully parameterized quantum convolutional neural networks open promising avenues for near-term applications of quantum machine learning and data science. This work presented an extensive benchmark of QCNN for solving classification problems on classical data, a fundamental task in pattern recognition. The QCNN algorithm can be tailored through many variables, such as the structure of the parameterized quantum circuits (i.e., ansatze) for convolutional filters and pooling operators, quantum data encoding methods, classical data pre-processing methods, cost functions, and optimizers. To improve the utility of QCNN for classical data, we also introduced new data encoding schemes, namely hybrid direct encoding and hybrid angle encoding, with which the exchange between quantum circuit depth and width for state preparation can be configured. With diverse combinations of the aforementioned variables, we tested 8-qubit QCNN models for binary classification of the MNIST and Fashion MNIST datasets by simulation with Pennylane. The QCNN models tested in this work operated with a small number of free parameters, ranging from 12 to 51. Despite the small number of free parameters, QCNN produced high classification accuracy in all instances, with the best case being close to 99% for MNIST and 94% for Fashion MNIST. We also compared the QCNN results to CNN and observed that QCNN performed noticeably better than CNN under similar training conditions for both benchmarking datasets. The comparison between QCNN and CNN is only valid for the qubit and dense encoding cases, in which the number of input qubits grows linearly with the dimension of the input data. With amplitude or hybrid encoding, the number of input qubits is substantially smaller than the dimension of the data, and hence there is no classical analogue. We speculate that the advantage of QCNN lies in its ability to exploit entanglement, which is a global effect, while CNN is only capable of capturing local correlations.
The QCNN architecture proposed in this work can be generalized to L-class classification through one-vs-one or one-vs-all strategies. It also remains interesting future work to examine the construction of a multi-class classifier by leaving $\log_2(L)$ qubits for measurement in the output layer. Another interesting direction is to optimize the data encoding via the training methods provided in Ref. [47]. However, since QCNN itself can be viewed as a feature reduction technique, it is not clear whether introducing another layer of variational quantum circuits for data encoding would help until a thorough investigation is carried out. Understanding the underlying principle of the quantum advantage demonstrated in this work also remains to be done. One way to study this is by testing QCNN models with a set of data that does not exhibit local correlations but contains some global feature, while analyzing the amount of entanglement created in the QCNN circuit. Since the circuit depth grows only logarithmically with the number of input qubits and the gate parameters are learned, the QCNN model is expected to be suitable for NISQ devices. However, verification through real-world experiments and noisy simulations remains to be done. Furthermore, testing the classification performance as the QCNN models grow bigger remains interesting future work. Finally, the application of the proposed QCNN algorithms to other real-world datasets, such as those relevant to high-energy physics and medical diagnosis, is of significant importance.

ACKNOWLEDGMENTS

We thank the Quantum Open Source Foundation, as this work was initiated under the Quantum Computing Mentorship program.
DATA AVAILABILITY
The source code used in this study is available at https://github.com/takh04/QCNN.
CONFLICT OF INTEREST
The authors declare that they have no conflict of interest.
Appendix A: Related works
The term Quantum Convolutional Neural Network (QCNN) appears in several places, but it refers to a number of different frameworks. Several proposals have been made in the past to reproduce classical CNN on a quantum circuit by imitating the basic arithmetic of the convolutional layer for a given filter [15,19,21]. Although these algorithms have the potential to achieve exponential speedups over their classical counterparts in the asymptotic limit, they require an efficient means to implement quantum random access memory (QRAM), expensive subroutines such as the linear combination of unitaries or quantum phase estimation with extra qubits, and they work only for specific types of quantum data embedding. Another branch of CNN-inspired QML algorithms focuses on implementing the convolutional filter as a parameterized quantum circuit, which can be stacked by inserting a classical pooling layer in between [16][17][18]. Following the nomenclature of [17], we refer to this approach as the quanvolutional neural network to distinguish it from QCNN. The potential quantum advantage of using quanvolutional layers lies in the fact that quantum computers can access kernel functions in high-dimensional Hilbert spaces much more efficiently than classical computers. In quanvolutional NNs, a challenge is to find a good structure for the parametric quantum circuit, in which the number of qubits equals the size of the filter. This approach is also limited to qubit encoding, since each layer requires a quantum embedding, which has a non-negligible cost. Furthermore, stacking quanvolutional layers via pooling requires each parameterized quantum circuit to be measured multiple times to gather measurement statistics.
Variational quantum circuits with a hierarchical structure consisting of O(log(n)) layers do not exhibit the "barren plateau" problem [29]. In other words, the precision required in the measurement grows at most polynomially with the system size. This result guarantees the trainability of the fully parameterized QCNN models studied in this work when their parameters are randomly initialized. Furthermore, numerical calculations in Ref. [29] show that the cost function gradient vanishes at a slower rate (with n, the number of initial qubits) when all unitary operators in the same layer are identical, as in QCNN [14]. The hierarchical structure inspired by tensor networks, without translational invariance, was first introduced in Ref. [28]. The hierarchical quantum circuit can be combined with a classical neural network, as demonstrated in Ref. [48].
We note in passing that there exist several works proposing quantum versions of the perceptron for binary classification [49][50][51]. While our QCNN model differs from them in that it implements the entire network as a parameterized quantum circuit, interesting future work is to investigate the alternative approach of constructing a complex network of quantum artificial neurons developed in the previous works.

Appendix B: Classical CNN architecture

In order to compare the classification accuracy of CNN and QCNN under fair conditions, we fixed the hyperparameters used in the optimization step to be the same, including the number of iterations, batch size, optimizer type, and learning rate. In addition, we modified the structure of the CNN so that its number of parameters subject to optimization is as close as possible to that used in QCNN. For example, since QCNN attains the best results with about 40 to 50 free parameters, we adjust the CNN structure accordingly. This led us to two CNNs, one with an input shape of (8, 1, 1) and another with an input shape of (16, 1, 1). In order to populate the small number of input nodes for MNIST and Fashion MNIST classification, PCA and autoencoding are used for data pre-processing, as in QCNN. The CNNs go through convolutional and pooling stages twice, followed by a fully connected layer. The number of free parameters used in the CNN models is 26 or 44 for the case of 8 input nodes and 34 or 56 for the case of 16 input nodes. The training also mimics that of QCNN. For every iteration step, 25 data points are randomly selected from the training data set and trained via the Adam optimizer with a learning rate of 0.01. We also fixed the number of iterations at 200, as done for QCNN. The numbers of training (test) data are 12665 (2115) and 12000 (2000) for the MNIST and Fashion MNIST datasets, respectively.
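For concreteness, the following is a minimal Keras sketch of a CNN of this flavour with 8 input nodes; the exact layer widths and kernel sizes here are assumptions for illustration, not the authors' architecture, so the printed parameter count will differ from the 26/44 quoted above.

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(8, 1)),
        tf.keras.layers.Conv1D(2, kernel_size=2, activation="relu"),
        tf.keras.layers.MaxPooling1D(pool_size=2),
        tf.keras.layers.Conv1D(2, kernel_size=2, activation="relu"),
        tf.keras.layers.MaxPooling1D(pool_size=2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
                  loss="binary_crossentropy", metrics=["accuracy"])
    model.summary()   # prints the (small) total parameter count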
Appendix C: QCNN simulation results for MSE loss
In Sec. IV of the main text, we presented the Pennylane simulation results of QCNN trained with the cross-entropy loss. When MSE is used as the cost function, similar results are obtained. We report the classification results for the MNIST and Fashion MNIST data attained from QCNN models trained with MSE in Tabs. VI and VII.
TABLE VIII. Mean accuracy and one standard deviation of the classification for 0 and 1 in the MNIST dataset when the HQC model is trained with cross-entropy loss. The mean and the standard deviation are obtained from five repetitions with random initialization of parameters. The first column shows the ansatz label. The second column shows the total number of parameters that are subject to optimization. For qubit, dense and hybrid encoding, two rows indicate the values obtained with different classical data pre-processing, namely PCA and autoencoding, respectively. The best result under each quantum data encoding method is written in bold.
FIG. 2. Parameterized quantum circuits used in the convolutional layer (convolutional circuits 1-9). $R_i(\theta)$ is a rotation around the i axis of the Bloch sphere by an angle θ, and H is the Hadamard gate. $U3(\theta, \phi, \lambda)$ is an arbitrary single-qubit gate that can be expressed as $U3(\theta, \phi, \lambda) = R_z(\phi)R_x(-\pi/2)R_z(\theta)R_x(\pi/2)R_z(\lambda)$.
FIG. 3. Parameterized quantum circuit used in the pooling layer. The pooling layer applies two controlled rotations, $R_z(\theta_1)$ and $R_x(\theta_2)$, each activated when the control qubit is 1 (filled circle) or 0 (open circle), respectively. The control (first) qubit is traced out after the gate operations in order to reduce the dimension.
TABLE II. Mean accuracy and one standard deviation of the classification for 0 (t-shirt/top) and 1 (trouser) in the Fashion MNIST dataset when the model is trained with the cross-entropy loss. The mean and the standard deviation are obtained from five repetitions with random initialization of parameters. The first column shows the ansatz label. The second column shows the total number of parameters that are subject to optimization. For qubit, dense, and hybrid encoding, two rows indicate the values obtained with different classical data pre-processing, namely PCA and autoencoding, respectively. The best result under each quantum data encoding method is written in bold.
FIG. 4. Classification accuracy vs. the number of convolutional filters L for the MNIST and Fashion MNIST datasets. The number of filters in each layer is set to be equal, i.e., $l_1 = l_2 = l_3 = L$ (see Fig. 1 for the definition of $l_i$). The simulation is carried out with two ansatze, circuit 2 and circuit 9b, and five encoding schemes: amplitude encoding (AE), qubit encoding with PCA (PCA8) and with autoencoding (AutoEnc8), and dense encoding with PCA (PCA16) and with autoencoding (AutoEnc16).
FIG. 6. Cross-entropy loss as a function of the number of training iterations. The QCNN models use circuit 9b as the ansatz. (a) The QCNN model with qubit encoding and autoencoding is compared to the 8-input CNN model. (b) The QCNN model with dense encoding and PCA is compared to the 16-input CNN model.
FIG. 7. A schematic of the CNN used in this work for comparison with the classification performance of QCNN. To make the comparison as fair as possible, the number of free parameters is adjusted to be similar to that used in QCNN, which leads to starting with a small number of input nodes. While we used two CNN structures with 8 and 16 input nodes, the figure shows the structure with 8 input nodes as an example.
TABLE I. Mean accuracy and one standard deviation of the classification for 0 and 1 in the MNIST dataset when the model is trained with the cross-entropy loss. The mean and the standard deviation are obtained from five repetitions with random initialization of parameters. The first column shows the ansatz label and the second column the total number of parameters subject to optimization. For the qubit, dense, and hybrid encodings, the two values in each cell (PCA / AutoEnc) correspond to the two classical data pre-processing methods, namely PCA and autoencoding, respectively.

Ansatz | # params | Amplitude  | Qubit (PCA / AutoEnc)   | Dense (PCA / AutoEnc)   | HDE (PCA / AutoEnc)     | HAE (PCA / AutoEnc)
1      | 12       | 96.8 ± 5.3 | 98.0 ± 0.4 / 91.4 ± 2.3 | 97.6 ± 1.1 / 88.4 ± 9.2 | 68.7 ± 5.1 / 88.4 ± 2.6 | 97.9 ± 0.3 / 77.7 ± 6.0
2      | 12       | 94.5 ± 3.1 | 98.2 ± 4.5 / 98.2 ± 6.6 | 98.2 ± 0.5 / 85.6 ± 4.5 | 62.2 ± 3.2 / 93.4 ± 5.4 | 94.7 ± 2.1 / 80.0 ± 4.0
3      | 18       | 93.8 ± 4.4 | 98.5 ± 0.2 / 93.3 ± 3.8 | 96.9 ± 1.6 / 95.8 ± 1.7 | 76.4 ± 2.7 / 95.3 ± 3.4 | 98.1 ± 0.2 / 84.4 ± 0.7
4      | 24       | 97.8 ± 2.4 | 98.2 ± 0.4 / 98.5 ± 1.2 | 98.2 ± 0.4 / 97.2 ± 1.1 | 70.2 ± 1.3 / 96.6 ± 1.0 | 98.0 ± 0.3 / 90.4 ± 3.8
5      | 24       | 96.7 ± 2.1 | 98.3 ± 0.4 / 94.9 ± 2.1 | 98.1 ± 0.5 / 96.0 ± 1.3 | 72.6 ± 5.7 / 93.5 ± 1.3 | 98.0 ± 0.1 / 86.5 ± 5.0
6      | 24       | 97.2 ± 2.2 | 98.1 ± 0.4 / 97.7 ± 1.0 | 98.1 ± 0.3 / 93.4 ± 0.5 | 77.4 ± 1.7 / 97.0 ± 2.0 | 98.3 ± 0.2 / 86.9 ± 7.3
7      | 36       | 98.3 ± 2.2 | 98.2 ± 0.3 / 93.7 ± 4.5 | 98.7 ± 2.4 / 95.1 ± 1.6 | 74.6 ± 3.2 / 97.2 ± 2.2 | 98.2 ± 0.1 / 90.2 ± 3.0
8      | 36       | 98.1 ± 0.7 | 98.3 ± 0.4 / 96.9 ± 2.4 | 98.7 ± 0.1 / 95.4 ± 2.8 | 79.7 ± 1.6 / 96.6 ± 1.7 | 98.3 ± 0.1 / 89.1 ± 2.6
9a     | 51       | 98.4 ± 0.2 | 98.4 ± 0.5 / 96.4 ± 2.3 | 98.7 ± 0.4 / 96.7 ± 1.4 | 78.1 ± 2.8 / 97.8 ± 2.2 | 98.2 ± 0.2 / 87.0 ± 5.3
9b     | 45       | 98.3 ± 0.2 | 97.7 ± 0.6 / 96.6 ± 2.2 | 98.3 ± 0.5 / 96.5 ± 1.7 | 77.4 ± 2.5 / 98.0 ± 1.2 | 98.1 ± 0.1 / 88.5 ± 2.8
FIG. 5. A QCNN circuit with the open-boundary condition and no gate operations for pooling. In this case, the QCNN circuit can be constructed with nearest-neighbour qubit interactions only.

TABLE V. Mean classification accuracy and one standard deviation obtained with classical CNN for classifying 0 and 1 in the MNIST and Fashion MNIST datasets. Each column is named with the pre-processing method (PCA or AutoEnc). These results directly compare to the second and third columns of Tabs. I and II denoted by Qubit and Dense.

Dataset       | # params | Input size | PCA         | AutoEnc
MNIST         | 26       | 8          | 91.0 ± 12.7 | 82.7 ± 15.2
MNIST         | 34       | 16         | 97.0 ± 3.5  | 83.5 ± 15.5
MNIST         | 44       | 8          | 93.3 ± 13.2 | 90.4 ± 13.4
MNIST         | 56       | 16         | 93.0 ± 13.4 | 95.5 ± 2.3
Fashion MNIST | 26       | 8          | 82.2 ± 16.6 | 86.8 ± 12.7
Fashion MNIST | 34       | 16         | 78.8 ± 19.1 | 79.0 ± 19.0
Fashion MNIST | 44       | 8          | 89.4 ± 3.9  | 92.4 ± 2.8
Fashion MNIST | 56       | 16         | 91.9 ± 2.0  | 93.6 ± 2.2
The hierarchical structure inspired by tensor networks, named the hierarchical quantum classifier (HQC), was first introduced in Ref. [28]. The HQC therein does not enforce translational invariance, and hence the number of free parameters subject to optimization grows as O(n) for a quantum circuit with n input qubits. Although the simulation presented in the main manuscript aims to benchmark the classification performance of the QML model in which the number of parameters grows as O(log(n)), we also report the simulation results of the HQC with the tree tensor network (TTN) structure [28] in this supplementary section for interested readers. The TTN classifier does not employ parameterized quantum gates for pooling. Thus, for certain ansatze, the number of parameters differs from that of the QCNN models. For example, although convolutional circuit 2 in Fig. 2 has two free parameters, only one of them is effective, since one of the qubits is traced out as soon as the parameterized gate is applied. For brevity, here we only report the results obtained with the cross-entropy loss, but similar results can be obtained with MSE. As can be seen from Tab. VIII and Tab. IX, the number of effective parameters (i.e., the second column) grows faster than that of the QCNN models. An interesting observation is that there is no clear trend as the number of parameters is increased beyond 42, which is close to the maximum number of parameters used in QCNN. In other words, there is no clear motivation to increase the number of free parameters beyond 42 or so when seeking to improve the classification performance. Studying overfitting under growth of the number of parameters remains an interesting open problem.

TABLE IX. Mean accuracy and one standard deviation of the classification for 0 (t-shirt/top) and 1 (trouser) in the Fashion MNIST dataset when the HQC model is trained with cross-entropy loss. The mean and the standard deviation are obtained from five repetitions with random initialization of parameters. The first column shows the ansatz label. The second column shows the total number of parameters that are subject to optimization. For qubit, dense, and hybrid encoding, two rows indicate the values obtained with different classical data pre-processing, namely PCA and autoencoding, respectively. The best result under each quantum data encoding method is written in bold.
[1] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
[2] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436-444, May 2015.
[3] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS'14, pages 2672-2680, Cambridge, MA, USA, 2014. MIT Press.
[4] A. Aurisano, A. Radovic, D. Rocco, A. Himmel, M. D. Messier, E. Niner, G. Pawloski, F. Psihas, A. Sousa, and P. Vahle. A convolutional neural network neutrino event classifier. Journal of Instrumentation, 11(09):P09001, September 2016.
[5] R. Acciarri et al. Convolutional neural networks applied to neutrino events in a liquid argon time projection chamber. Journal of Instrumentation, 12(03):P03011, March 2017.
[6] Daniel George and E. A. Huerta. Deep learning for real-time gravitational wave detection and parameter estimation: Results with Advanced LIGO data. Physics Letters B, 778:64-70, 2018.
[7] Akinori Tanaka and Akio Tomiya. Detection of phase transition via convolutional neural networks. Journal of the Physical Society of Japan, 86(6):063001, 2017.
[8] Seth Lloyd, Masoud Mohseni, and Patrick Rebentrost. Quantum algorithms for supervised and unsupervised machine learning. arXiv preprint arXiv:1307.0411, 2013.
[9] Seth Lloyd, Masoud Mohseni, and Patrick Rebentrost. Quantum principal component analysis. Nature Physics, 10(9):631-633, 2014.
[10] Nathan Wiebe, Ashish Kapoor, and Krysta M. Svore. Quantum algorithms for nearest-neighbor methods for supervised and unsupervised learning. Quantum Info. Comput., 15(3-4):316-356, March 2015.
[11] Patrick Rebentrost, Masoud Mohseni, and Seth Lloyd. Quantum support vector machine for big data classification. Phys. Rev. Lett., 113:130503, September 2014.
[12] Iordanis Kerenidis, Jonas Landman, Alessandro Luongo, and Anupam Prakash. q-means: A quantum algorithm for unsupervised machine learning. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
[13] Carsten Blank, Daniel K. Park, June-Koo Kevin Rhee, and Francesco Petruccione. Quantum classifier with tailored quantum kernel. npj Quantum Information, 6(1):1-7, 2020.
[14] Iris Cong, Soonwon Choi, and Mikhail D. Lukin. Quantum convolutional neural networks. Nature Physics, 15(12):1273-1278, December 2019.
[15] Iordanis Kerenidis, Jonas Landman, and Anupam Prakash. Quantum algorithms for deep convolutional neural networks. arXiv preprint arXiv:1911.01117, 2019.
[16] Junhua Liu, Kwan Hui Lim, Kristin L. Wood, Wei Huang, Chu Guo, and He-Liang Huang. Hybrid quantum-classical convolutional neural networks. Science China Physics, Mechanics & Astronomy, 64(9):290311, 2021.
[17] Maxwell Henderson, Samriddhi Shakya, Shashindra Pradhan, and Tristan Cook. Quanvolutional neural networks: powering image recognition with quantum circuits. Quantum Machine Intelligence, 2(1):2, June 2020.
[18] Samuel Yen-Chi Chen, Tzu-Chieh Wei, Chao Zhang, Haiwang Yu, and Shinjae Yoo. Quantum convolutional neural networks for high energy physics data analysis. arXiv preprint arXiv:2012.12177, 2020.
[19] YaoChong Li, Ri-Gui Zhou, RuQing Xu, Jia Luo, and WenWen Hu. A quantum deep convolutional neural network for image recognition. Quantum Science and Technology, 5(4):044003, July 2020.
[20] Ian MacCormack, Conor Delaney, Alexey Galda, Nidhi Aggarwal, and Prineha Narang. Branching quantum convolutional neural networks. arXiv preprint arXiv:2012.14439, December 2020.
[21] ShiJie Wei, YanHu Chen, ZengRong Zhou, and GuiLu Long. A quantum convolutional neutral network on NISQ devices. arXiv preprint arXiv:2104.06918, April 2021.
[22] S. Mangini, F. Tacchino, D. Gerace, D. Bajoni, and C. Macchiavello. Quantum computing models for artificial neural networks. EPL (Europhysics Letters), 134(1):10002, April 2021.
[23] M. Cerezo, Andrew Arrasmith, Ryan Babbush, Simon C. Benjamin, Suguru Endo, Keisuke Fujii, Jarrod R. McClean, Kosuke Mitarai, Xiao Yuan, Lukasz Cincio, and Patrick J. Coles. Variational quantum algorithms. Nature Reviews Physics, 3(9):625-644, 2021.
[24] John Preskill. Quantum Computing in the NISQ era and beyond. Quantum, 2:79, August 2018.
[25] Kishor Bharti, Alba Cervera-Lierta, Thi Ha Kyaw, Tobias Haug, Sumner Alperin-Lea, Abhinav Anand, Matthias Degroote, Hermanni Heimonen, Jakob S. Kottmann, Tim Menke, Wai-Keong Mok, Sukin Sim, Leong-Chuan Kwek, and Alán Aspuru-Guzik. Noisy intermediate-scale quantum (NISQ) algorithms. arXiv preprint arXiv:2101.08448, 2021.
[26] Ville Bergholm, Josh Izaac, Maria Schuld, Christian Gogolin, M. Sohaib Alam, Shahnawaz Ahmed, Juan Miguel Arrazola, Carsten Blank, Alain Delgado, Soran Jahangiri, Keri McKiernan, Johannes Jakob Meyer, Zeyue Niu, Antal Száva, and Nathan Killoran. Pennylane: Automatic differentiation of hybrid quantum-classical computations. arXiv preprint arXiv:1811.04968, 2020.
[27] Daniel K. Park, Carsten Blank, and Francesco Petruccione. Robust quantum classifier with minimal overhead. In 2021 International Joint Conference on Neural Networks (IJCNN), pages 1-7, 2021.
[28] Edward Grant, Marcello Benedetti, Shuxiang Cao, Andrew Hallam, Joshua Lockhart, Vid Stojevic, Andrew G. Green, and Simone Severini. Hierarchical quantum classifiers. npj Quantum Information, 4(1):65, December 2018.
[29] Arthur Pesah, M. Cerezo, Samson Wang, Tyler Volkoff, Andrew T. Sornborger, and Patrick J. Coles. Absence of barren plateaus in quantum convolutional neural networks. Phys. Rev. X, 11:041011, October 2021.
[30] Farrokh Vatan and Colin Williams. Optimal quantum circuits for general two-qubit gates. Phys. Rev. A, 69:032315, March 2004.
[31] Jun Li, Xiaodong Yang, Xinhua Peng, and Chang-Pu Sun. Hybrid quantum-classical approach to quantum optimal control. Phys. Rev. Lett., 118:150503, April 2017.
[32] K. Mitarai, M. Negoro, M. Kitagawa, and K. Fujii. Quantum circuit learning. Phys. Rev. A, 98:032309, September 2018.
[33] Maria Schuld, Ville Bergholm, Christian Gogolin, Josh Izaac, and Nathan Killoran. Evaluating analytic gradients on quantum hardware. Phys. Rev. A, 99:032331, March 2019.
[34] Maria Schuld and Nathan Killoran. Quantum machine learning in feature Hilbert spaces. Phys. Rev. Lett., 122:040504, February 2019.
[35] Vittorio Giovannetti, Seth Lloyd, and Lorenzo Maccone. Quantum random access memory. Phys. Rev. Lett., 100:160501, April 2008.
[36] Daniel K. Park, Francesco Petruccione, and June-Koo Kevin Rhee. Circuit-based quantum random access memory for classical data. Scientific Reports, 9(1):3949, 2019.
[37] T. M. L. Veras, I. C. S. De Araujo, K. D. Park, and A. J. da Silva. Circuit-based quantum random access memory for classical data with continuous amplitudes. IEEE Transactions on Computers, pages 1-1, 2020.
[38] Israel F. Araujo, Daniel K. Park, Francesco Petruccione, and Adenilton J. da Silva. A divide-and-conquer algorithm for quantum state preparation. Scientific Reports, 11(1):6329, March 2021.
[39] Ryan LaRose and Brian Coyle. Robust data encodings for quantum classifiers. Phys. Rev. A, 102:032420, September 2020.
[40] Sukin Sim, Peter D. Johnson, and Alán Aspuru-Guzik. Expressibility and entangling capability of parameterized quantum circuits for hybrid quantum-classical algorithms. Advanced Quantum Technologies, 2(12):1900070, 2019.
[41] Robert M. Parrish, Edward G. Hohenstein, Peter L. McMahon, and Todd J. Martínez. Quantum computation of electronic transitions using a variational quantum eigensolver. Phys. Rev. Lett., 122:230401, June 2019.
[42] Hai-Rui Wei and Yao-Min Di. Decomposition of orthogonal matrix and synthesis of two-qubit and three-qubit orthogonal gates. Quantum Inf. Comput., 12(3-4):262-270, 2012.
[43] Ian T. Jolliffe. Principal Component Analysis. Springer Series in Statistics. Springer-Verlag New York, 2nd edition, 2002.
[44] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. http://www.deeplearningbook.org.
[45] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2017.
[46] Y. Nesterov. A method for solving the convex programming problem with convergence rate O(1/k^2). Proceedings of the USSR Academy of Sciences, 269:543-547, 1983.
[47] Seth Lloyd, Maria Schuld, Aroosa Ijaz, Josh Izaac, and Nathan Killoran. Quantum embeddings for machine learning. arXiv preprint arXiv:2001.03622, 2020.
[48] Rui Huang, Xiaoqing Tan, and Qingshan Xu. Variational quantum tensor networks classifiers. Neurocomputing, 452:89-98, 2021.
[49] Francesco Tacchino, Chiara Macchiavello, Dario Gerace, and Daniele Bajoni. An artificial neuron implemented on an actual quantum processor. npj Quantum Information, 5(1):26, 2019.
[50] Stefano Mangini, Francesco Tacchino, Dario Gerace, Chiara Macchiavello, and Daniele Bajoni. Quantum computing model of an artificial neuron with continuously valued input data. Machine Learning: Science and Technology, 1(4):045008, October 2020.
[51] Cláudio A. Monteiro, Gustavo I. S. Filho, Matheus Hopper J. Costa, Fernando M. de Paula Neto, and Wilson R. de Oliveira. Quantum neuron with real weights. Neural Networks, 143:698-708, 2021.
Subjective Quality Database and Objective Study of Compressed Point Clouds with 6DoF Head-mounted Display

Xinju Wu, Yun Zhang, Senior Member, IEEE, Chunling Fan, Junhui Hou, Senior Member, IEEE, and Sam Kwong, Fellow, IEEE

DOI: 10.1109/tcsvt.2021.3101484 | arXiv: 2008.02501

Abstract—In this paper, we focus on subjective and objective Point Cloud Quality Assessment (PCQA) in an immersive environment and study the effect of geometry and texture attributes in compression distortion. Using a Head-Mounted Display (HMD) with six degrees of freedom, we establish a subjective PCQA database, named SIAT Point Cloud Quality Database (SIAT-PCQD). Our database consists of 340 distorted point clouds compressed by the MPEG point cloud encoder with the combination of 20 sequences and 17 pairs of geometry and texture quantization parameters. The impact of distorted geometry and texture attributes is further discussed in this paper. Then, we propose two projection-based objective quality evaluation methods, i.e., a weighted view projection based model and a patch projection based model. Our subjective database and findings can be used in point cloud processing, transmission, and coding, especially for virtual reality applications. The subjective dataset has been released in a public repository (DOI: 10.21227/ad8d-7r28).

Index Terms—Point clouds, subjective quality assessment, quality metrics, virtual reality, six degrees of freedom (6DoF).
I. INTRODUCTION
IN recent years, a significant advance in Two-Dimensional (2D) video has been the improvement of resolution, which has evolved from Standard Definition (SD) and High Definition (HD) to Ultra High Definition (UHD). The visual information perceived by humans has been enriched accordingly. Meanwhile, people increasingly value the experience of watching videos and, in particular, prefer active interaction. Extended Reality (XR) technology, encompassing Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR), has attracted much attention with the emergence of the Head-Mounted Display (HMD). In XR, the degrees of freedom refer to head movement in space, with three corresponding to rotational movement and the other three corresponding to translational movement. However, 360-degree videos only support Three Degrees of Freedom (3DoF), tracking users' rotational motion but not their translational movement. To keep pace with the consumption of visual media, novel concepts of immersive media have emerged, such as 3DoF+, which enables additional limited translational movement, and Six Degrees of Freedom (6DoF), which allows both rotational and translational motion. In a VR environment with 6DoF, observers use eye and head movement to explore scenes, which is utterly different from traditional 2D viewing. Many types of data are applicable to 6DoF, such as simple proxy geometry, voxels, and point clouds.
A point cloud is a collection of Three-Dimensional (3D) points in 3D space without spatial connections or ordering relations [1]. Each point consists of a geometry attribute, i.e., the 3D position (x, y, z), and other attributes like color, reflectivity, and opacity, denoted by vectors. According to the Point Cloud Compression (PCC) group of MPEG, point clouds can be categorized as static, dynamic, and dynamically acquired point clouds [1]. Similar to the relationship between videos and images, a dynamic point cloud treats each static point cloud as a frame, showing the movement of a 3D object or scene over time. Targeting XR applications, static or dynamic point clouds can be captured in a studio full of high-speed cameras, especially for contents like people and objects. Targeting applications like autonomous driving, dynamically acquired point clouds are mainly obtained by LIDAR sensors on top of a moving vehicle, enabling dynamic environment perception in robot navigation. For a superior perceptual experience, point clouds should be captured densely and with high precision. Unlike meshes, point clouds have no spatial connectivity or ordering relations, excluding the concepts of edges, faces, or polygons. The massive number of points and the inherent characteristics of disjunction and disorder pose challenges for point cloud processing and compression.
For the storage and transmission of point clouds, efficient compression frameworks have been studied extensively. In [2], a time-varying point cloud codec was first proposed in a 3D tele-immersive system and later adopted as the anchor of the MPEG PCC standards. The emerging PCC standard comprises two classes of compression for different categories of point clouds. One is Video-based PCC (V-PCC), targeting dynamic point clouds, and the other is Geometry-based PCC (G-PCC). G-PCC is the combination of Surface PCC (S-PCC) for static point clouds and LIDAR PCC (L-PCC) for dynamically acquired point clouds. Nowadays, PCC [1] is a trending and intriguing research topic.
Both processing and compression may induce various distortions in point clouds, including changes in the number of points and in the values of positions and colors. The degradation of the contents might influence users' perception. Therefore, subjective and objective assessment is a targeted and valid way to reflect the quality of point clouds. Nowadays, point cloud evaluation is still a sophisticated and challenging problem involving numerous factors such as degradation, rendering methods, display equipment, evaluation methodologies, quality of sources, and so forth.
A normative and effective framework for Point Cloud Quality Assessment (PCQA) is necessary but has not yet been fully explored. Drawing on Image Quality Assessment (IQA) and Video Quality Assessment (VQA), many papers on PCQA have appeared since 2017. However, the early works [3]-[8] only focused on colorless point clouds and simple types of degradation, while point clouds with color and codecs like V-PCC are the prevailing trend. Also, some works used a virtual camera moving around point clouds to render videos, and those videos were then displayed on a 2D screen for subjects to evaluate, which lacked human interaction.
In this work, we concentrate on subjective PCQA in an immersive 6DoF VR environment to study the effect of geometry and texture attributes in compression distortion. First, contents like human figures and inanimate objects are selected, showing the variety of the test sequences. A total of 17 different combinations of geometry and texture Quantization Parameters (QPs) over 20 sequences are used to create 340 distorted point clouds with the V-PCC codec. Our subjective database, named SIAT Point Cloud Quality Database (SIAT-PCQD) [9], is available in a public repository. Second, we analyze essential factors in the visual quality of point clouds in detail, including geometry and texture attributes from the perspective of different QP levels and types of sequences. Finally, we propose two projection-based objective quality evaluation methods, i.e., a weighted view projection based model and a patch projection based model. The remainder of this paper is organized as follows. Section II reviews related work on subjective and objective point cloud evaluation. Section III details our subjective quality evaluation test, including data preparation, rendering techniques, equipment and environment, and evaluation methodology. Section IV presents the data processing and the results of our subjective experiment. In Section V, we propose two projection-based objective quality evaluation methods, and their performance evaluation is given in Section VI. Finally, the conclusions of our work and the challenges of PCQA are outlined in Section VII.
II. RELATED WORKS
In this section, related works in point cloud evaluation are described in the aspects of subjective and objective assessment.
A. Subjective Point Cloud Quality Assessment
Early works on subjective point cloud quality assessment appeared in large numbers from 2014 onward. In [12], distortions like down-sampling and noise generation were considered to study the relationship between 3D point cloud models and human visual perception. Later, some works [4]-[8] evaluated the quality of colorless point clouds and built the initial workflow of subjective point cloud evaluation. In [3] [6], AR devices were first used in work on point clouds. However, only the geometry of the point clouds was evaluated in the subjective quality assessment. Also, the point clouds were processed with Gaussian noise and octree-pruning degradation, and the later promising point cloud codecs were not considered. In this early stage, only limited types of degradation were examined, and some prevailing codecs were absent from these works.
Alexiou et al. [5] compared the Double Stimulus Impairment Scale (DSIS) and the Absolute Category Rating (ACR) methodologies while evaluating point cloud geometry under Gaussian noise and octree-pruning degradation. They compared the visualization of raw point clouds and point clouds after surface reconstruction [7], and then obtained similar results using 2D and 3D monitors to display the projected contents of point clouds [8]. Javaheri et al. performed a subjective and objective evaluation of point cloud denoising algorithms in [4]. Different methodologies and ways of displaying were further studied in this period, but most of the subjective datasets still only used point cloud geometry as the evaluation content.
Subsequently, colored point clouds under the primitive octree-pruning codec and degradations like noise and down-sampling were further explored. Javaheri et al. [13] performed subjective and objective PCQA with the octree-based compression scheme available in the Point Cloud Library (PCL) and a graph-based compression scheme. They created a spiral virtual camera path moving around the point cloud sequences from a full view to a closer view, and the generated videos were evaluated by subjects. In [14], point cloud evaluation experiments were conducted in three different laboratories, and it was found that removing points regularly was more acceptable to subjects. The quality scores, obtained over various point clouds with geometry and color information, showed a high correlation with objective metrics. Yang et al. [15] proposed the SJTU-PCQA database, with point clouds degraded by octree-based compression, color noise, geometry noise, and scaling, and they also developed an objective metric based on projection.
Later on, the advanced point cloud codecs developed by MPEG were introduced into subjective PCQA studies. In [16], Zerman et al. first considered V-PCC for colored point clouds and rendered the point clouds in Unity without interaction. They found that texture distortion is more critical than geometric distortion in the human figure database they created. They also found that the number of points severely affects geometric quality metrics rather than perceptual quality. Su et al. [17] built a point cloud database of diverse contents and applied down-sampling, Gaussian noise, and three state-of-the-art PCC algorithms to create distorted point clouds. They first explicitly defined types of distortions for point clouds: geometry distortions include hollow, geometry noise, hole, shape distortion, collapse, and gap and blur; texture distortions include texture noise, blocking, blur, and color bleeding. Javaheri et al. [18] created a subjective database named IRPC and studied the impacts of different coding and rendering solutions on the perceptual quality. For the comparison of dynamic point clouds and meshes, the perceptual quality of compressed 3D sequences was explored in [19] [20], and Cao et al. [20] first studied the impact of observation distance on perceptual quality. Recently, Perry et al. [21] confirmed the superior compression performance of MPEG V-PCC compared to MPEG G-PCC on static contents. Thus, researchers considered prevailing point cloud codecs like V-PCC and G-PCC on diverse colored point clouds. However, the point clouds were rendered as video sequences by individual camera tracks around the point cloud centers and then displayed on a planar screen with passive interaction. As for interaction, a quality evaluation methodology for colored and voxelized point clouds was proposed in [10]. In that experiment, subjects visualized the contents through a renderer and interacted with the point clouds by zooming, rotating, and translating with the mouse. Alexiou et al. [11] focused on the evaluation of test conditions defined by MPEG for core experiments and conducted two additional rate allocation experiments for the geometry and color encoding modules. A new software tool, which supports interaction and can be used in web applications for point cloud rendering, was developed. In a word, these works enabled interaction by allowing subjects to operate on point clouds displayed on a 2D monitor, namely the desktop condition. With the rapid development of computer graphics and related technologies, 3D data can now be fully exhibited in VR applications. User quality evaluation of dynamic point clouds in VR was performed in [22] [23]. Subramanyam et al. [22] first compared two VR viewing conditions enabling 3DoF and 6DoF by assessing the quality of dynamic digital human point clouds, and the later work [23] developed the PointXR toolbox that can host experiments under variants of protocols in the VR environment.
In our work, a subjective quality evaluation experiment was conducted with active interaction in a 6DoF VR environment. We aim to explore the impact of the degradation of point clouds' geometry and texture attributes on visual quality in future VR applications. Thus, seventeen distortion rates for compression degradation are set in our subjective experiment, far more rate levels than in other works. A summary of the available subjective PCQA databases is shown in Table I.
B. Objective Point Cloud Quality Assessment
Objective quality assessment of point clouds aims to create an accurate mathematical model to predict the quality of point clouds. According to the work [10], the state-of-the-art objective evaluation metrics of point clouds can be classified into two categories: point-based metrics and projection-based metrics.
Point-based metrics are mainly the point-to-point error (D1) [24], the point-to-plane error (D2) [25], and the plane-to-plane error [26] for geometric errors. D1 measures the Euclidean distances between corresponding point pairs, indicating how far the points of the distorted point cloud moved away from their original positions. Considering local plane properties, D2 [25] computes the projected errors along the normal direction at the associated point, imposing a larger penalty on points far from the perceived local plane surface. The plane-to-plane metric [26] focuses on the angular similarity between the tangent planes of associated points, where tangent planes indicate the linear approximation of the surface. All of them are full-reference metrics based on geometric errors, regardless of human vision.
To further improve the prediction accuracy, point-based metrics have been extended by extracting features in geometry and texture. Alexiou et al. [26] introduced an objective quality metric based on the angular similarity between tangent planes. Meynet et al. [27] extended the Mesh Structural Distortion Measure (MSDM) metric designed for 3D meshes to point clouds as PC-MSDM. Javaheri et al. successively proposed novel geometry quality metrics for point clouds based on the popular geometry quality metric PSNR [28], a generalization of the Hausdorff distance [29], and the Mahalanobis distance to measure the correspondence between a point and a distribution [30], respectively. After exploring the geometry quality of point clouds, researchers focused on extracting color features and combining the geometry and color domains. Rafael et al. computed the direct distance between points [31]. They adapted the Local Binary Pattern (LBP) descriptor to process local regions, and the histograms of the LBP outputs were compared to obtain the final score [32]. Meynet et al. linearly combined geometry-based features, i.e., curvature, and color-based features, i.e., lightness, chroma, and hue, and proposed the Point Cloud Quality Metric (PCQM) [33]. Color histograms and color correlograms were utilized and combined with geometry metrics to provide a global quality score [34]. Yang et al. resampled the reference point cloud to extract key points, constructed a local graph centered at the key points, and aggregated color gradient features to form the method named GraphSIM [35]. Alexiou et al. [36] focused on structural similarity and local distributions of point cloud attributes reflecting topology and color. Viola et al. [37] proposed a reduced-reference metric by extracting statistical features in the geometry, color, and normal vector domains. However, operating on point clouds as non-structured data, e.g., computing curvature and resampling, is time-consuming and computationally expensive.
Projection-based metrics project both the reference and the test point clouds onto the six planes of their bounding boxes [10], or onto more planes, and then compute the average scores of the structured data, i.e., the projected images, with state-of-the-art image quality metrics. In [38], projection-based objective quality assessment was extended by assigning weights to perspectives based on user interactivity data; the authors found that additional views make little difference. Yang et al. [15] proposed a projection-based method via perspective projection onto six planes and extracted global and local features from the depth and color images obtained by projection. However, occlusion and dislocation are inevitable when projecting point clouds onto large planes. Thus, we build a subjective PCQA database in an immersive 6DoF VR environment and propose two projection-based objective evaluation methods.
III. SUBJECTIVE EXPERIMENT FOR PCQA

In this section, we describe the details of our point cloud subjective quality assessment experiment, covering dataset preparation, processing, equipment and test environment, and evaluation methodology.
A. Dataset Preparation
To better explore how people perceive point clouds, we chose contents like human figures and inanimate objects. As shown in Fig. 1 and Table II, 20 static sequences were selected for our test. The human category consists of six full-body figures and four upper-body figures, while the other category includes ten different inanimate objects.
To show the diversity of the point clouds, we considered two characteristics, i.e., Spatial Information (SI) [45] and Colorfulness (CF) [46]. We projected each source point cloud onto the six views of its bounding box to apply SI and CF. Similar to video contents in [45], we took the maximum value among the six views as the final SI for a sequence. Fig. 2 shows the distribution of the 20 sequences along the horizontal (CF) and vertical (SI) axes. The dispersed state in CF/SI shows the diversity of our contents in the color/space domain. In particular, the luminance of the sequence Banana is generally higher than the others, causing a difference in the CF measurement.
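As a concrete illustration of this content characterization, the following is a minimal Python sketch of SI and CF on projected views. It assumes the common ITU-T P.910 definition of SI (standard deviation of the Sobel-filtered luminance plane), the Hasler-Süsstrunk colorfulness index, a simple channel-mean approximation of luminance, and — as an assumption — that CF is also aggregated by the maximum over views; the random arrays merely stand in for the six bounding-box projections.

```python
import numpy as np
from scipy import ndimage

def spatial_information(luma):
    # ITU-T P.910 SI: standard deviation of the Sobel-filtered luminance.
    sh = ndimage.sobel(luma.astype(np.float64), axis=0)
    sv = ndimage.sobel(luma.astype(np.float64), axis=1)
    return float(np.std(np.hypot(sh, sv)))

def colorfulness(rgb):
    # Hasler-Suesstrunk CF on an RGB image with values in [0, 255].
    r, g, b = (rgb[..., c].astype(np.float64) for c in range(3))
    rg, yb = r - g, 0.5 * (r + g) - b
    return float(np.hypot(np.std(rg), np.std(yb))
                 + 0.3 * np.hypot(np.mean(rg), np.mean(yb)))

# Toy stand-ins for the six bounding-box projections of one sequence.
rng = np.random.default_rng(0)
views = [rng.integers(0, 256, (512, 512, 3)).astype(np.uint8) for _ in range(6)]
si = max(spatial_information(v.mean(axis=2)) for v in views)  # max over views
cf = max(colorfulness(v) for v in views)                      # assumed aggregation
print(si, cf)
```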
B. Processing
Before compression, a preprocessing procedure is required for the point cloud sequences to minimize the impact of additional influencing factors. Fig. 3 shows the whole workflow before conducting the experiment, which mainly includes preprocessing, encoding, and rendering.
• Preprocessing: The sequences mentioned above are selected from different repositories, which means their sizes, positions, and orientations vary. However, we want the point clouds to be exhibited at life size to achieve realistic tele-immersive scenarios. To deal with this issue, we normalize the sequences in the preprocessing stage so that the point clouds remain within a similar bounding box (600, 1000, 400). The source models have been processed with sub-sampling, rotation, translation, and scaling, except for the four sequences Longdress, Redandblack, Loot, and Soldier from the 8i Voxelized Full Bodies Database. Additionally, the point cloud encoder V-PCC cannot handle decimals, so the point positions were rounded and the duplicate points were then removed (a minimal numpy sketch of this step is given after this list). In particular, integer conversion was unnecessary for the four upper body figure sequences from the Microsoft database, so we just adjusted their positions and orientations in the rendering software.
• Encoding: Distorted versions were generated using the state-of-the-art MPEG PCC reference software Test Model Category 2 version 7.0 (TMC2v7.0). The V-PCC method takes advantage of an advanced 2D video codec after projecting point clouds into frames. First, a point cloud is split into patches by clustering normal vectors. The obtained patches are packed into images, and the gaps between patches are padded to reduce pixel residuals. Then the projected images of the sequences are compressed using the HEVC reference software HM16.18. More information about the framework of V-PCC can be found in [1].
QP determines the step size for quantizing the transformed coefficients in codecs. In V-PCC, a pair of parameters, namely geometry QP and texture QP, regulates how much detail is preserved in the geometry and texture attributes of point clouds. As geometry QP increases, points deviate from their original positions. As texture QP increases, color details are aggregated. Similar to the Common Test Conditions (CTC) document from MPEG PCC [24], the gaps of the geometry and texture QPs were set to 4 and 5, ranging from 20 to 32 and from 27 to 42, respectively. As shown in Table II, geometry QP ranks first in each pair, and texture QP second. We chose a losslessly compressed version as our reference content.
• Rendering: Point clouds are appropriate for representing the complete view of objects and scenes in immersive applications with 6DoF. Thus, we developed an actively interactive VR experiment software for subjects to observe point cloud models in the 6DoF environment. Observers are permitted to explore a room freely and observe the point clouds from any angle without occlusion. In this condition, the views displayed within the HMD are consistent with the observers' body and head movements, resulting in an immersive feeling of perception. Our experiment software was developed in Unity (version 5.6.5f1), exploiting the SteamVR plugin (version 1.2.3) to connect the VR headset. Point Cloud Viewer and Tools (version 2.20) helped us import and view the point cloud data inside Unity. After preprocessing, the point clouds are rescaled to a similar size to exhibit realistic tele-immersive scenarios. Besides, geometric coding distortions can be masked by surface reconstruction [18]; thus, raw point clouds are presented using the default point size. Notably, large point cloud files might take up too much memory and cause the system to hang, so we first packed all the resources related to rendering point clouds, like prefabs and meshes, and loaded the asset bundles dynamically and asynchronously to improve software stability.
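Below is a minimal numpy sketch of the preprocessing step from the first bullet, assuming a single uniform scale into the (600, 1000, 400) bounding box followed by the integer rounding and duplicate removal described above; the per-sequence rotations and manual orientation adjustments are omitted.

```python
import numpy as np

def preprocess(points, colors, target=(600.0, 1000.0, 400.0)):
    # Shift to the origin and scale uniformly into the target bounding box.
    pts = points - points.min(axis=0)
    scale = np.min(np.asarray(target) / pts.max(axis=0))
    # V-PCC cannot encode decimals: round to integers, then drop duplicates.
    pts = np.round(pts * scale).astype(np.int64)
    _, first = np.unique(pts, axis=0, return_index=True)
    keep = np.sort(first)  # keep the first occurrence of each voxel
    return pts[keep], colors[keep]

rng = np.random.default_rng(1)
pts, cols = rng.random((10000, 3)) * 100.0, rng.random((10000, 3))
pts_q, cols_q = preprocess(pts, cols)
print(pts_q.max(axis=0), len(pts_q))
```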
C. Equipment and Environment
HTC Vive devices with an HMD and two hand controllers were used by every subject to interact in our test. The headset features a resolution of 1080 × 1200 pixels per eye, i.e., 2160 × 1200 pixels in total, and a 110-degree field of view. As for the virtual environment setting, backgrounds with complex textures or contrasting colors and settings with high or low light intensity would enhance the contrast between the point clouds and the environment, as shown in Fig. 4. Thus, we preferred a comfortable environment with mild contrast as the scene of our experiment. Following the recommendations of Recommendation ITU-R BT.500-13 [47], a room with gray walls was created as the virtual environment for the experiment. Meanwhile, the scene was lit by a virtual lamp on the ceiling, centered above the models. The lamp is set as an area light with an intensity value of 2 in Unity to simulate ordinary room lighting.
For 2D images and videos, the best way of presentation is display on a planar screen. For volumetric media like meshes and point clouds, it is more natural to exhibit the 3D models in virtual 3D worlds. In VR applications, users are allowed to navigate virtual scenarios freely. Therefore, in our test, observers can walk freely in the room to watch the 3D point clouds, which differs from watching images or videos, where subjects only stand or sit in a fixed position. Fig. 5 shows views of our experimental system from different angles using the HMD.
It is known that distance influences perceptual quality, as observers are more sensitive to errors from a close distance and focus more on the whole from a long distance. If a subject always stands too close to the point cloud, local views of sparse 3D points are perceived, which disturbs the quality evaluation. So we set a recommended distance of two meters away from the models. We suggested that subjects stand at this distance initially and told them that they could then walk freely to navigate and perceive. Eventually, subjects were asked to come back to the initial position, i.e., the fixed position of a scoreboard, to give scores, so that the last view would not bias a subject towards a score. Notably, we made observers walk physically in our experiment instead of teleporting them with the controller pointer, because observers are more likely to suffer sickness or nausea when the perceived views switch frequently and are inconsistent with their body movements.
D. Subjective Evaluation Methodology
Two protocols for subjective PCQA are widely adopted by researchers, namely the ACR methodology and the DSIS methodology, as shown in Table I. Subjects tend to rate explicitly according to the visual quality under ACR and according to relative differences of perception under DSIS. The double stimulus methodology, DSIS, is more consistent in terms of identifying the level of impairment [5]. The reason is that point clouds are displayed as collections of points, and the spaces between contents may habitually be recognized as "holes"; subjects are likely to give lower scores when the reference point cloud is absent. Besides, the recently published standardization ITU-T P.919 [48] for immersive 360° video on HMDs mentions that DSIS is statistically more reliable than ACR. Therefore, our experiment used the DSIS methodology and displayed the reference and the distorted point clouds side by side, as shown in Fig. 5. In addition, a hidden reference for each sequence was added, as in several point cloud assessment experiments [3], [5]-[8], [10], [11], [14], [21]. In other words, a pair of reference point clouds was displayed once for each sequence without the distorted point cloud. Finally, scores from the continuous 5-scale rating were normalized to integer values between 0 and 100 [16], [17]. It should be noted that side-by-side display is commonly used in the subjective quality assessment of holographic data like light fields [49], [50] and point clouds [3], [5]-[8], [10], [11], [14], [16], [21]. Moreover, the hidden reference records the subjective scores for the reference point cloud sequences to reduce the effect of sequence content.
Thirty-eight subjects, aged from 22 to 35, were involved in each session of our subjective test. Seven are experts in video compression or quality assessment. According to the contents, our experiment was separated into two sessions: human figures and inanimate objects. 22 males and 16 females participated in the first session, while 27 males and 11 females participated in the second session; 29 subjects took part in both sessions. All subjects were instructed in the procedures, the operation, and some points of attention during the test guidance. Before the primary test, a color blindness test was performed, and a training phase was conducted for each subject through the same procedure as the primary test but using an extra sequence, queen. Every session took about 1 hour with a mandatory break of 5 to 10 minutes to avoid fatigue, presenting the distorted versions in random order. Compared with SJTU-PCQA [15] and other databases [22] [23], our database has 20 point cloud sequences, one type of distortion, and 17 distortion levels, and is thus diverse in sequences and distortion levels. Moreover, our database explores visual quality on an HMD under the 6DoF viewing condition.
IV. DATA PROCESSING AND ANALYSIS
In this section, Differential Mean Opinion Scores (DMOS) are calculated as the final scores after outlier detection over subjects and samples. The correlation between the human figure session and the inanimate object session is analyzed. Then, the impacts of different sequences and texture/geometry QPs are discussed.
A. Outlier Detection & DMOS
For comparison across point clouds obtained from different methods, the raw scores were first converted to the difference score between the reference and the distorted point clouds,
$$d_{i,j} = s_{\mathrm{ref}_{i,j}} - s_{i,j}, \qquad (1)$$
where $s_{\mathrm{ref}_{i,j}}$ and $s_{i,j}$ stand for the subjective ratings of the original and the distorted point cloud, respectively, given by subject $i$ on sample $j$, with $i \in [1, \cdots, N]$, $j \in [1, \cdots, M]$, and $N$ and $M$ being the numbers of subjects and samples. $d_{i,j}$ is the difference score of subject $i$ on sample $j$, and the scores of a subject over all samples are represented as $s_i = (s_{i,1}, \cdots, s_{i,M})$. The outlier detection procedure from the ITU-R BT.500-13 recommendation [47] was then used to remove scores generated by unreliable subjects; in our study, no subjects were rejected by this procedure. In order to unify the different rating scales across subjects, the difference scores of each observer were transformed to z-scores, as in [51], using that observer's mean and standard deviation,
$$z_{i,j} = \frac{d_{i,j} - \bar{d}_i}{\sigma_i}, \qquad (2)$$
where $\bar{d}_i$ and $\sigma_i$ denote the mean value and the standard deviation of subject $i$'s difference scores. The scores assigned by a subject are thereby normalized to zero mean and unit variance. It is known that 99% of the values of a normally distributed sample fall within three standard deviations of the mean. Therefore, the z-scores are rescaled by a linear mapping [52],
$$\hat{z}_{i,j} = \frac{z_{i,j} + 3}{6}. \qquad (3)$$
Finally, DMOS of each test point cloud is computed as the mean of the rescaled z-scores as
$$\mathrm{DMOS}_j = \frac{1}{N} \sum_{i=1}^{N} \hat{z}_{i,j}. \qquad (4)$$
In particular, the screening of observers and the score processing procedure were performed separately for each session.
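The score processing of Eqs. (1)-(4) reduces to a few numpy lines. The sketch below assumes score matrices indexed as [subject, sample] and uses random toy data in place of the real ratings; the outlier screening of [47] is omitted.

```python
import numpy as np

def dmos(raw, ref):
    # raw[i, j]: subject i's score for distorted cloud j;
    # ref[i, j]: the score for the matching hidden reference.
    d = ref - raw                                                           # Eq. (1)
    z = (d - d.mean(axis=1, keepdims=True)) / d.std(axis=1, keepdims=True)  # Eq. (2)
    z = (z + 3.0) / 6.0                                                     # Eq. (3)
    return z.mean(axis=0)                                                   # Eq. (4)

rng = np.random.default_rng(2)
raw = rng.uniform(0, 100, (38, 340))   # 38 subjects, 340 distorted clouds
ref = rng.uniform(60, 100, (38, 340))
print(dmos(raw, ref)[:5])
```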
B. Correlation between Two Sessions
To avoid fatigue, our tests were separated into two sessions by content category, i.e., human figures and inanimate objects. We examine the correlation between the DMOSs of these two sessions and use a linear function and a logistic function to fit the data. As shown in Fig. 6, a black point denotes a rate level, shown as (geometry QP, texture QP). The R-square values of the linear fitting and the logistic fitting are 0.973 and 0.9628, respectively. This indicates a linear correlation between the two sessions in our test, which means that there is no significant difference caused by different subjects and processes. It can be seen that the DMOSs increase when the geometry and texture QPs are enlarged, and the degradation of point clouds compressed with medium QP values is more visually distinguishable. In addition, the boxplot of the z-scores of subjects in both sessions is shown in Fig. 7. The z-scores have stable distributions in both sessions, with minimum and maximum values of -2.4095 and 2.8213 in session 1, and of -2.6614 and 2.8863 in session 2.
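A minimal sketch of this cross-session fitting, assuming a four-parameter logistic and scipy's least-squares fitter in place of whatever tool was actually used; x and y are toy stand-ins for the per-rate DMOS pairs of the two sessions.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, a, b, c, d):
    return a / (1.0 + np.exp(-b * (x - c))) + d

def r_square(y, y_hat):
    return 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)

rng = np.random.default_rng(3)
x = np.linspace(0.2, 0.9, 17)                 # session-1 DMOS per rate (toy)
y = logistic(x, 0.8, 8.0, 0.55, 0.1) + 0.01 * rng.standard_normal(17)

r2_lin = r_square(y, np.polyval(np.polyfit(x, y, 1), x))
p, _ = curve_fit(logistic, x, y, p0=[1.0, 5.0, 0.5, 0.0], maxfev=10000)
r2_log = r_square(y, logistic(x, *p))
print(r2_lin, r2_log)
```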
C. Analysis
In Fig. 8, we compare the visual quality of different sequences under the QP pairs defined in the CTC from MPEG PCC. For all sequences, there is only slight visual quality degradation from (20,27) to (24,32). Observers become sensitive to the compression distortion when the QP pairs rise from (28,37) to (36,47). Besides, the lower values and gentle changes, denoted by dotted lines in Fig. 8(a), indicate that it is harder for subjects to perceive the distortions in the upper body figure set, owing to the quality of the source point clouds. Fig. 9 plots varying geometry and texture QPs against the subjective scores: varying texture QPs can be seen within each group of Fig. 9(a), and varying geometry QPs within each group of Fig. 9(b). Three findings from Fig. 9 are listed as follows. 1) Group 3 and Group 4 in Fig. 9(b) represent texture QP values of 37 and 47, respectively. The values in Group 4 are much higher than those in Group 3, meaning that perceptual quality falls off as texture QP rises from 37 to 47.
2) The slope of Group 4 in Fig. 9(b) is smaller than those of the other groups in Fig. 9(b). In other words, when texture QP equals 47 and geometry QP is between 20 and 36, the variation of the subjective scores within the group is relatively smaller than in the groups with other texture QP values like 0, 27, and 37. 3) In addition, the slopes in Fig. 9(a) are higher than the slopes in Fig. 9(b).
V. PROPOSED PROJECTION-BASED OBJECTIVE MODELS
Based on our subjective PCQA database, we evaluate the performance of the popular point-based objective methods used by MPEG, as shown in Table III, where bold denotes the best performance for each correlation coefficient, i.e., each column. In particular, predicted objective scores are obtained through nonlinear regression, according to [53]. Then, based on Recommendation ITU-T P.1401 [54], performance evaluation metrics are adopted, including Pearson's Linear Correlation Coefficient (PLCC), the Spearman Rank Order Correlation Coefficient (SROCC), the Kendall Rank Order Correlation Coefficient (KROCC), and the Root Mean Square Error (RMSE).
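The evaluation protocol can be sketched in Python as follows; the five-parameter logistic mapping is an assumption modeled on the common VQEG-style regression (the exact function of [53], [54] may differ), and scipy.stats supplies the three correlation coefficients.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr, kendalltau

def logistic5(x, b1, b2, b3, b4, b5):
    # Maps raw objective scores onto the subjective scale before PLCC/RMSE.
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (x - b3)))) + b4 * x + b5

def evaluate(obj, dmos):
    p0 = [np.max(dmos), 1.0, np.mean(obj), 0.1, 0.1]
    beta, _ = curve_fit(logistic5, obj, dmos, p0=p0, maxfev=20000)
    pred = logistic5(obj, *beta)
    return {"PLCC": pearsonr(pred, dmos)[0],
            "SROCC": spearmanr(obj, dmos)[0],   # rank metrics need no mapping
            "KROCC": kendalltau(obj, dmos)[0],
            "RMSE": float(np.sqrt(np.mean((pred - dmos) ** 2)))}

rng = np.random.default_rng(4)
obj = rng.uniform(0, 1, 340)   # toy objective scores and toy DMOS below
dmos = 0.2 + 0.6 / (1.0 + np.exp(-5.0 * (obj - 0.5))) + 0.02 * rng.standard_normal(340)
print(evaluate(obj, dmos))
```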
It can be seen that the objective scores show a low correlation with the subjective scores. D1 MSE reaches 0.3136 in PLCC and 0.3963 in SROCC, while D2 MSE shows correlations of 0.3498 in PLCC and 0.4125 in SROCC. The reason may be that D1 and D2 only consider geometry and ignore color information. Moreover, D1/D2 and YUV color are error-based models that do not take the characteristics of human eyes into account, although YUV color, which compares texture attributes after point matching, represents visual quality slightly better than the geometry attributes (D1/D2). Therefore, we aim to predict the visual quality of point clouds by mimicking how people perceive the world.
A. Proposed Weighted View Projection Based PCQA Method
The widely used projection-based method [10] projects a point cloud onto the six planes of its bounding box, treating every plane as equally important regardless of the significance of the different views. Through the subjective experiment, we found that the size of a bounding-box plane may relate to visual quality, as a larger region has a better chance of capturing subjects' visual attention by showing more details of the content. For instance, observers tend to ignore the top and bottom views of a human figure sequence while spending more time on various views of the object models, such as the front view of the sequence ULB_Unicorn, the top view of the sequence grass, and the side views of the sequence Nike. Examples of the projected images obtained from the sequences longdress and grass are exhibited in Fig. 10. The process of the view projection based model can be formulated as
$$S_{\mathrm{final}} = \sum_{i=1}^{6} w_i \cdot Q\left(P_i(pc_{\mathrm{ref}}),\, P_i(pc_{\mathrm{dist}})\right), \qquad (5)$$
where $S_{\mathrm{final}}$ denotes the objective score of this projection-based metric, $P_i(\cdot)$ indicates projecting a reference point cloud $pc_{\mathrm{ref}}$ or a distorted point cloud $pc_{\mathrm{dist}}$ onto a bounding box plane, and $i \in [1, \ldots, 6]$ corresponds to the front, back, left, right, top, and bottom views, respectively. $Q(\cdot)$ denotes computing a visual quality score using one of the existing IQA methods.
Here, we propose a weighted view projection based PCQA method by setting the weight of each view, $w_i$, to the ratio of the area of the corresponding bounding-box plane to the total area of the six planes, i.e., $w_i = c_i / \sum_{i=1}^{6} c_i$, where $c_i$ is the area of a plane of the bounding box. This is consistent with the viewing habits of subjects: the content stays in the center while the subject moves around it, so a larger view has a larger chance of being seen.
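A minimal sketch of Eq. (5)'s weighting, assuming the bounding box is given as (width, height, depth) and that the six per-view IQA scores have already been computed by some 2D metric; with the Longdress box from Table II, this reproduces the weights annotated in Fig. 10 (about 0.24 for front/back, 0.19-0.20 for left/right, and 0.07 for top/bottom).

```python
import numpy as np

def view_weights(bbox):
    # w_i = c_i / sum(c_i): face area over total bounding-box surface area,
    # ordered front, back, left, right, top, bottom.
    w, h, d = bbox
    areas = np.array([w * h, w * h,   # front, back
                      d * h, d * h,   # left, right
                      w * d, w * d],  # top, bottom
                     dtype=np.float64)
    return areas / areas.sum()

def weighted_score(per_view_iqa, bbox):
    # Eq. (5): weighted sum of Q(P_i(pc_ref), P_i(pc_dist)) over the views.
    return float(np.dot(view_weights(bbox), per_view_iqa))

print(np.round(view_weights((356, 1003, 296)), 2))   # Longdress bounding box
print(weighted_score([0.91, 0.90, 0.88, 0.89, 0.80, 0.78], (356, 1003, 296)))
```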
This model is easy to operate, time-saving, and computationally cheap. However, view projection may cause occlusion, and the geometry may be under-represented or incompletely expressed: observers can see a point cloud from every angle, and six planes are inadequate to represent all the views sensed by human eyes.
B. Proposed Patch Projection Based PCQA Method
To capture more details of the different views of a point cloud, it is better to segment the point cloud into smaller parts and then project them onto planes. A smaller connected part, namely a patch, is composed of 3D points with similar normal vectors. To reduce the self-occlusion brought by view projection, we propose an objective PCQA model based on patch projection, as shown in Fig. 11. The whole process has two parts, i.e., patch generation and quality prediction. In patch generation, the reference and the distorted point clouds are converted into two geometry images and two texture images, each of which consists of non-overlapping patches. In quality prediction, the visual quality of the geometry and texture images is computed to represent the visual quality of the distorted point cloud. Fig. 12 depicts the process of 3D to 2D patch projection for a reference point cloud. For the reference point cloud, patches are obtained by clustering 3D points according to the direction of their normal vectors and segmenting the connected components. Then all patches are inserted into a blank image grid in the packing process to create one geometry image and one texture image.
However, since the geometry distortion may change positions and normal vectors, the patches of the distorted point cloud generated by the procedure in Fig. 12 may differ from those of the reference point cloud. The first and second columns in Fig. 13 show one texture image and one geometry image obtained from the reference and the distorted point clouds, respectively. We can observe significant mismatches between the images generated from the reference and the distorted point clouds. These mismatches mean that conventional full-reference 2D IQA methods are no longer applicable to the geometry and texture images of the distorted point cloud.
To handle this mismatch problem, we design a point matching based patch generation for the distorted point cloud. Algorithm 1 describes the point matching based patch generation algorithm. At first, the corresponding points in the reference and the distorted point clouds are found by the nearest neighbor algorithm. Then, new patches are assigned by the reference patches, and each point in these new patches is replaced by its corresponding point in the distorted point cloud. In a word, the new patches maintain the contours and placement information of the reference patches, but each point in the new patches is taken from the distorted point cloud. Finally, the new patches are viewed as distorted patches. Based on the reference patch information and the correspondences between the points of the reference and distorted point clouds, the distorted patches match the reference patches correspondingly. The third column in Fig. 13 shows the geometry and texture images generated from the distorted point cloud using Algorithm 1. We can see that the patches in the third column match those in the first column accurately. Consequently, we can use conventional 2D IQA methods for the quality prediction. In quality prediction, the geometry and texture images are each evaluated by an effective IQA metric, and the geometry and texture quality scores are then fused by addition to obtain the final score. The process can be formulated as

$$S_{\mathrm{final}} = a \cdot Q\left(T_1(pc_{\mathrm{ref}}),\, T_2(pc_{\mathrm{ref}}, pc_{\mathrm{dist}})\right) + b \cdot Q\left(G_1(pc_{\mathrm{ref}}),\, G_2(pc_{\mathrm{ref}}, pc_{\mathrm{dist}})\right), \qquad (6)$$

where $T_1(\cdot)$ and $G_1(\cdot)$ indicate projecting a reference point cloud $pc_{\mathrm{ref}}$ and generating a texture image and a geometry image, respectively. Similarly, $T_2(\cdot)$ and $G_2(\cdot)$ denote projecting a distorted point cloud $pc_{\mathrm{dist}}$ and creating the texture and geometry images with the same patch placement as $T_1(\cdot)$ and $G_1(\cdot)$. $Q(\cdot)$ denotes computing the visual quality score of the geometry or texture images with one of the IQA methods. The fusion of the geometry and texture scores then gives the final score of a point cloud, $S_{\mathrm{final}}$. In the joint bit allocation between geometry and color for V-PCC [55], the distortion model for point clouds is a linear combination of the geometry distortion and the texture distortion, with the weighting factor of the texture distortion set to 0.75 or 0.5. Thus, we chose a value between 0.5 and 0.75 for the texture parameter, namely 0.6. Here, $a$ and $b$ are set to 0.6 and 0.4, respectively.
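Eq. (6)'s fusion step is then a one-liner once the two image-level scores exist; in the sketch below, a plain PSNR stands in for the paper's IQA submetrics (an assumption purely for illustration), and the random images stand in for the packed texture/geometry images.

```python
import numpy as np

def psnr(ref, dist, peak=255.0):
    # Toy stand-in for Q(.): PSNR between two projected images.
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def fuse(q_texture, q_geometry, a=0.6, b=0.4):
    # Eq. (6): S_final = a * Q(texture images) + b * Q(geometry images).
    return a * q_texture + b * q_geometry

rng = np.random.default_rng(5)
tex_ref = rng.integers(0, 256, (256, 256))
geo_ref = rng.integers(0, 256, (256, 256))
tex_dist = np.clip(tex_ref + rng.integers(-5, 6, (256, 256)), 0, 255)
geo_dist = np.clip(geo_ref + rng.integers(-3, 4, (256, 256)), 0, 255)
print(fuse(psnr(tex_ref, tex_dist), psnr(geo_ref, geo_dist)))
```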
Following the V-PCC mechanism, we implemented our patch projection based PCQA method on top of TMC2, with changes made to generate the distorted patches according to Algorithm 1. Point matching is achieved by the KNN algorithm with K = 1, i.e., the nearest neighbor. In the point matching process, for each point $p_1$ in the reference point cloud, we find its nearest neighbor $p_2$ in the distorted point cloud, as illustrated in Fig. 14.
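A sketch of the K = 1 matching and the patch substitution of Algorithm 1, using scipy's k-d tree; the patch index sets here are toy placeholders for the patches that V-PCC's normal-based segmentation would produce.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_points(ref_pts, dist_pts):
    # For every reference point, find its nearest distorted point (K = 1).
    _, idx = cKDTree(dist_pts).query(ref_pts, k=1)
    return idx  # idx[p]: index of the distorted point matched to ref point p

def distorted_patches(ref_patches, dist_pts, idx):
    # Keep the reference patch layout, substituting each point with its match.
    return [dist_pts[idx[patch]] for patch in ref_patches]

rng = np.random.default_rng(6)
ref = rng.random((1000, 3))
dist = ref + 0.01 * rng.standard_normal((1000, 3))   # toy distortion
patches = [np.arange(0, 500), np.arange(500, 1000)]  # toy patch index sets
new_patches = distorted_patches(patches, dist, match_points(ref, dist))
print(len(new_patches), new_patches[0].shape)
```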
VI. EXPERIMENTAL RESULTS AND ANALYSIS
To evaluate the performance of the two proposed projection-based PCQA methods, two datasets, SIAT-PCQD and vsenseVVDB [16], were used. In addition to the conventional D1 and D2 metrics in Table III, two further kinds of benchmark schemes were compared. One is point-based PCQA metrics, including PC-MSDM [27], PC-ASIM [26], PCQM [33], and PointSSIM [36]. The other is the projection-based metric [10]. In the proposed projection-based PCQA methods and the benchmark scheme [10], 11 different 2D IQA submetrics were used for the quality prediction, which are error based (NQM [56]), structural similarity based (SSIM [51], MS-SSIM [57], IW-SSIM [58], GSM [59], GMSD [60], RFSIM [61], SR-SIM [62], VSI [63]), and natural scene statistics based (IFC [64], VIF [53]). PLCC, SROCC, KROCC and RMSE [54] were used to evaluate the performance of the different PCQA metrics.

A. Evaluation for View Projection Based PCQA

Table IV shows the correlation between subjective scores and objective scores of different projection-based PCQA methods. An average gain $\rho_G$ between the proposed scheme and the benchmark scheme is computed as

$$\rho_G = \frac{1}{N} \sum_{i=1}^{N} \left(X_i - X_{\mathrm{org}_i}\right), \qquad (7)$$
and the gain ratio $\rho_R$ is evaluated as

$$\rho_R = \frac{\sum_{i=1}^{N} \left(X_i - X_{\mathrm{org}_i}\right)}{\sum_{i=1}^{N} X_{\mathrm{org}_i}}, \qquad (8)$$
where $X_{\mathrm{org}_i}$ and $X_i$ denote the PLCC, SROCC, KROCC and RMSE of the PCQA [10] and the proposed PCQA using submetric $i$, and $N = 11$ since 11 submetrics were tested. Table IV shows the PLCC, SROCC, KROCC and RMSE comparison between the proposed weighted view projection based PCQA and the benchmark schemes. In the table, bold denotes the average gain and the gain ratio of the proposed method, and underlining denotes the best performance in each scheme. We have two key observations. 1) Compared with the view projection based method [10], the proposed weighted view projection based method achieves an increase of 0.0285, 0.0379, and 0.0446 in PLCC on average (6.70%, 10.12%, 14.31%) for the Human figure subset, the Inanimate object subset, and ALL in SIAT-PCQD, respectively. Similar gains can be found for SROCC, KROCC, and RMSE, which shows that the proposed weighted view projection based method is more effective than the benchmark [10]. 2) When comparing the effectiveness of different submetrics, the proposed weighted view projection based method performs best when using IFC [64] and VIF [53].
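For reference, Eqs. (7) and (8) amount to the following few lines, where the two arrays hold a benchmark score and a proposed score for each of the N = 11 submetrics (toy values here).

```python
import numpy as np

def gains(x_org, x_new):
    # Eq. (7): average gain; Eq. (8): gain ratio over the submetrics.
    x_org, x_new = np.asarray(x_org, float), np.asarray(x_new, float)
    return np.mean(x_new - x_org), np.sum(x_new - x_org) / np.sum(x_org)

rho_g, rho_r = gains([0.35, 0.14, 0.54], [0.40, 0.21, 0.49])  # toy PLCCs
print(rho_g, rho_r)
```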
B. Evaluation for Patch Projection Based PCQA
The bottom rows of Table IV show the PLCC, SROCC, KROCC and RMSE of the proposed patch projection based PCQA. We have four key observations. 1) Compared with the view projection based method [10], the proposed patch projection based method achieves an increase of 0.3162, 0.2912, and 0.3426 in PLCC on average (74.42%, 77.72%, 109.94%) for the Human figure subset, the Inanimate object subset, and ALL in SIAT-PCQD, respectively. Similar gains can be found for SROCC, KROCC, and RMSE, and they are significantly higher than those of the proposed weighted view projection based method. 2) When comparing the effectiveness of different submetrics, the proposed patch projection based method achieves the best results when using IW-SSIM [58], whose PLCC is 0.8101, 0.8994, and 0.8181 for the two subsets and the whole SIAT-PCQD database. 3) As for the point-based methods, the PLCCs of PC-MSDM [27], PC-ASIM [26], PCQM [33], and PointSSIM [36] are 0.1814, 0.2374, 0.6539, and 0.7808 on SIAT-PCQD, respectively. Compared with them, the proposed patch projection based method using IW-SSIM and VSI achieves higher PLCCs of 0.8181 and 0.8063, respectively. 4) Compared with D1, D2, and YUV in Table III, the proposed patch projection based method significantly outperforms these schemes in PLCC, SROCC, KROCC, and RMSE on SIAT-PCQD.
To further validate the effectiveness of the proposed patch projection based PCQA in comparison with the view projection based scheme [10], another point cloud database, named vsenseVVDB [16], was tested, and Table V shows the results. We observe that 1) the average gain and the gain ratio are 0.4478 and 112.78% in PLCC, respectively, with similar gains for SROCC, KROCC, and RMSE; and 2) across the different submetrics of the proposed patch projection based method, the PLCCs range from 0.5545 to 0.9477, with SR-SIM and GSM achieving the top two PLCC values. The proposed patch projection based PCQA is significantly better than the view projection based scheme on vsenseVVDB.
The comparative studies over seven PCQA methods and two databases illustrate that the proposed patch projection based method is more effective. Its first advantage is that it relieves the occlusion problem of view projection by segmenting the point cloud into smaller parts. Secondly, the framework of the proposed patch projection based method can take advantage of advanced 2D IQA metrics for PCQA.
VII. CONCLUSIONS
In this paper, a subjective point cloud quality assessment experiment in an immersive virtual reality environment with a head-mounted display was conducted, and a weighted view projection based objective method and a patch projection based objective method were proposed. The impacts of sequences and geometry and texture quantization parameters were discussed in the analyses of the subjective experiment.
The proposed patch projection based method improves the correlation between the predicted scores, obtained with image quality assessment metrics, and the subjective scores, because the method reduces the occluded areas during projection. Nowadays, point cloud assessment is still an intricate and challenging problem involving many factors. Our subjective database and findings can be used in perception-based point cloud processing, transmission, and coding, especially for virtual reality applications.
Fig. 1. The thumbnail images of the sequences used in our experiment: (a) longdress, (b) redandblack, (c) loot, (d) soldier, (e) The20sMaria, (f) UlliWegner, (g) Ricardo, (h) Phil, (i) Andrew, (j) Sarah, (k) facade, (l) house_without_roof, (m) ULB_Unicorn, (n) romanoillamp, (o) biplane, (p) Nike, (q) banana, (r) grass, (s) bush, (t) AngelSeated.
Fig. 2. Distribution of spatial information and colorfulness of the 20 source sequences in the dataset.
Fig. 3. The workflow before conducting the subjective experiment, including preprocessing, encoding, and rendering.
Fig. 4. Scene settings in the virtual environment. (a) wallpaper background, (b) brick background, (c) black background, (d) white background, (e) directional light, (f) point light, (g) spot light, (h) light intensity = 1, (i) light intensity = 2, (j) the adopted setting.

Fig. 5. Views of the experiment system in the HMD from some angles. (a) back, (b) left, (c) right.
Fig. 6. Correlation between the human figure session and the inanimate object session.

Fig. 7. The boxplot of z-scores in two sessions. (a) the human figure session, (b) the inanimate object session.
Fig. 8. Comparing the visual quality of different sequences under the QP pairs defined in the CTC from MPEG PCC. (a) the human figure session, (b) the inanimate object session.
Fig. 10. Six view images through orthographic projection obtained from the sequence Longdress (the first row) and the sequence Grass (the second row). The view images, in order, are the front, back, left, right, top, and bottom views.

Fig. 11. The framework of the proposed patch projection based PCQA method.
Fig. 12. The process of 3D to 2D patch projection for a point cloud.
Fig. 13. Texture/geometry images obtained from the reference and the distorted point clouds. (a) from the reference point cloud, (b) from the distorted point cloud without point matching, (c) from the distorted point cloud with point matching.
Algorithm 1: Point matching based patch generation
Data: reference point set A, distorted point set B, reference patches T1
Result: distorted patches T2
1  foreach point p ∈ A do
2      MatchedPoint[p] ← NearestNeighbor(p, B);
3  end
4  new patches T ← T1;
5  foreach patch t ∈ T do
6      foreach point p ∈ t do
7          p ← MatchedPoint[p];
8      end
9  end
10 T2 ← T;
Fig. 14. Point matching between reference and distorted point clouds.
TABLE I
SUMMARY OF SUBJECTIVE PCQA DATABASES.

| Method | Sequence set | Status of sequences | Colored | Degradation | #Distorted point clouds (seq × dist × rate) | Display | Interaction | Methodology |
|---|---|---|---|---|---|---|---|---|
| Alexiou et al. [3] | objects | static | × | Gaussian noise, octree-pruning | 5 × 2 × 4 = 40 | AR | | DSIS |
| Torlig et al. [10] | objects, humans | | | octree-based compression & JPEG | 7 × 9 = 63 | 2D monitor | | DSIS |
| Alexiou et al. [11] | objects, humans | | | V-PCC, G-PCC | 9 × (5 + 6) = 99 | 2D monitor | | DSIS |
| Javaheri et al. [4] | objects | static | × | 2 outlier removal algorithms, 3 denoising algorithms | 4 × 5 × 3 = 60 | 2D monitor | × | DSIS |
| Alexiou and Ebrahimi [5] | | | | Gaussian noise, octree-pruning | 5 × 2 × 4 = 40 | 2D monitor | | DSIS, ACR |
| Alexiou and Ebrahimi [6] | | | | | 5 × 2 × 4 = 40 | 2D monitor | | DSIS |
| Alexiou et al. [7] | | | | octree-pruning | 7 × 1 × 4 = 28 | 2D monitor | | DSIS |
| Alexiou et al. [8] | | | | | 7 × 1 × 4 = 28 | 2D/3D monitor | | DSIS |
| Zhang et al. [12] | objects | static | | down-sampling, geometry noise, color noise | 1 × (6 + 7 + 12) = 25 | 2D monitor | × | - |
| Javaheri et al. [13] | objects, humans | static | | octree-pruning, graph-based compression | 6 × 2 × 3 = 36 | | | DSIS |
| da Silva Cruz et al. [14] | objects, humans, scenes | static | | octree-pruning, projection-based encoder | 8 × 2 × 3 = 48 | | | DSIS |
| SJTU-PCQA [15] | objects, humans | static | | octree-based compression, color noise, geometry noise, scaling | 10 × 7 × 6 = 420 | | | ACR |
| vsenseVVDB [16] | humans | dynamic | | down-sampling, V-PCC | 2 × 2 × 4 = 16 | | | DSIS, PWC |
| Su et al. [17] | objects | static | | down-sampling, Gaussian noise, V-PCC, S-PCC, L-PCC | 20 × (3 + 9 + 9 + 12 + 4) = 740 | | | DSIS |
| IRPC [18] | objects, humans | static | | PCL, G-PCC, V-PCC | 6 × 3 × 3 = 54 | | | DSIS |
| vsenseVVDB2 [19] | humans | dynamic | | Mesh: Draco+JPEG; Point clouds: G-PCC, V-PCC | 8 × (6 × 2 + 5) = 136 | | | ACR-HR |
| Cao et al. [20] | humans | dynamic | | Mesh: TFAN + FFmpeg; Point clouds: V-PCC + FFmpeg | 4 × 1 × 5 = 20 | | | ACR |
| Stuart et al. [21] | objects, humans | static | | G-PCC, V-PCC | 6 × (2 + 1) × 5 = 90 | 2D/3D monitor | | DSIS |
| Subramanyam et al. [22] | humans | dynamic | | the MPEG anchor, V-PCC | 8 × 2 × 4 = 64 | HMD | | ACR-HR |
| PointXR [23] | objects | static | | G-PCC | 5 × (1 + 1) × 4 = 40 | | | DSIS |
| Proposed SIAT-PCQD [9] | objects, humans | static | | V-PCC | 20 × 1 × 17 = 340 | | | DSIS |
TABLE II
SUMMARY OF PRE-PROCESSED TEST SEQUENCES.

| Sequence | Category | Source | Pre-processing | #Points | Geometry Precision | Bounding Box |
|---|---|---|---|---|---|---|
| Redandblack [39] | Full body figures | MPEG 1 / JPEG 2 | No | 729,133 | 10 bits | (393, 977, 232) |
| Longdress [39] | Full body figures | MPEG 1 / JPEG 2 | No | 765,821 | 10 bits | (356, 1003, 296) |
| Loot [39] | Full body figures | MPEG 1 / JPEG 2 | No | 784,142 | 10 bits | (352, 992, 354) |
| Soldier [39] | Full body figures | MPEG 1 / JPEG 2 | No | 1,059,810 | 10 bits | (360, 1016, 405) |
| The20sMaria [40] | Full body figures | MPEG 1 | Yes | 950,423 | 10 bits | (405, 908, 324) |
| UlliWegner [41] | Full body figures | MPEG 1 | Yes | 598,448 | 10 bits | (376, 997, 258) |
| Ricardo [42] | Upper body figures | JPEG 2 | No | 960,703 | 10 bits | (446, 364, 178) |
| Phil [42] | Upper body figures | JPEG 2 | No | 1,660,959 | 10 bits | (441, 464, 394) |
| Andrew [42] | Upper body figures | JPEG 2 | No | 1,276,312 | 10 bits | (392, 444, 297) |
| Sarah [42] | Upper body figures | JPEG 2 | No | 1,355,867 | 10 bits | (486, 467, 348) |
| Facade | Inanimate objects | MPEG 1 | Yes | 292,169 | 10 bits | (555, 375, 75) |
| House without roof | Inanimate objects | MPEG 1 | Yes | 581,213 | 10 bits | (488, 481, 455) |
| ULB Unicorn | Inanimate objects | MPEG 1 | Yes | 1,086,944 | 10 bits | (571, 361, 303) |
| Romanoillamp [43] | Inanimate objects | JPEG 2 | Yes | 343,186 | 10 bits | (517, 355, 352) |
| Biplane [44] | Inanimate objects | JPEG 2 | Yes | 400,972 | 10 bits | (439, 569, 410) |
| Nike | Inanimate objects | Sketchfab 3 | Yes | 186,960 | 10 bits | (303, 213, 303) |
| Banana | Inanimate objects | Sketchfab 3 | Yes | 145,243 | 10 bits | (201, 337, 102) |
| Grass | Inanimate objects | Sketchfab 3 | Yes | 724,725 | 10 bits | (494, 159, 434) |
| Bush | Inanimate objects | Sketchfab 3 | Yes | 1,211,816 | 10 bits | (587, 400, 435) |
| AngelSeated | Inanimate objects | Sketchfab 3 | Yes | 770,184 | 10 bits | (543, 942, 305) |

All sequences share the same set of (geometry QP, texture QP) pairs: (20,27), (20,37), (20,47), (28,27), (28,37), (28,47), (36,27), (36,37), (36,47), (24,32), (32,42), (0,0), (20,0), (28,0), (36,0), (0,27), (0,37), (0,47). A QP of 0 (bold in the original table) denotes lossless compression.
1 https://mpeg.chiariglione.org/tags/point-cloud
2 https://jpeg.org/plenodb/
3 https://sketchfab.com/
TABLE III
PERFORMANCE OF POINT-BASED OBJECTIVE POINT CLOUD QUALITY METRICS (PLCC / SROCC / KROCC / RMSE).

| Category | Metric | All | Human figure session | Inanimate object session |
|---|---|---|---|---|
| D1 [24] | p2point MSE | 0.3136 / 0.3963 / 0.2761 / 0.1224 | 0.3850 / 0.4553 / 0.3299 / 0.1221 | 0.2549 / 0.3517 / 0.2459 / 0.1213 |
| D1 [24] | p2point Hausdorff | 0.2980 / 0.3791 / 0.2620 / 0.1231 | 0.3823 / 0.4514 / 0.3261 / 0.1223 | 0.2311 / 0.3215 / 0.2228 / 0.1220 |
| D1 [24] | PSNR-p2point MSE | 0.2849 / 0.3279 / 0.2252 / 0.1236 | 0.3103 / 0.3957 / 0.2770 / 0.1258 | 0.3205 / 0.2794 / 0.1898 / 0.1188 |
| D1 [24] | PSNR-p2point Hausdorff | 0.2825 / 0.3273 / 0.2268 / 0.1237 | 0.2996 / 0.3894 / 0.2745 / 0.1262 | 0.3178 / 0.2827 / 0.1940 / 0.1189 |
| D2 [25] | p2plane MSE | 0.3498 / 0.4125 / 0.2947 / 0.1208 | 0.4151 / 0.4759 / 0.3548 / 0.1204 | 0.2971 / 0.3542 / 0.2504 / 0.1198 |
| D2 [25] | p2plane Hausdorff | 0.3218 / 0.3862 / 0.2679 / 0.1221 | 0.3993 / 0.4622 / 0.3378 / 0.1213 | 0.2323 / 0.3138 / 0.2135 / 0.1220 |
| D2 [25] | PSNR-p2plane MSE | 0.3111 / 0.3463 / 0.2381 / 0.1225 | 0.3631 / 0.4371 / 0.3069 / 0.1233 | 0.2906 / 0.2781 / 0.1840 / 0.1200 |
| D2 [25] | PSNR-p2plane Hausdorff | 0.3013 / 0.3419 / 0.2363 / 0.1229 | 0.3471 / 0.4039 / 0.2826 / 0.1241 | 0.3177 / 0.2933 / 0.2006 / 0.1189 |
| YUV [24] | PSNR-Y | 0.3443 / 0.3481 / 0.2318 / 0.1211 | 0.5920 / 0.5295 / 0.3609 / 0.1067 | 0.3991 / 0.4338 / 0.2945 / 0.1150 |
| YUV [24] | PSNR-U | 0.3883 / 0.4097 / 0.2790 / 0.1188 | 0.4902 / 0.4527 / 0.3079 / 0.1153 | 0.4549 / 0.5195 / 0.3653 / 0.1117 |
| YUV [24] | PSNR-V | 0.4373 / 0.4378 / 0.3035 / 0.1160 | 0.4502 / 0.4244 / 0.3007 / 0.1182 | 0.4677 / 0.4840 / 0.3342 / 0.1109 |
| YUV [24] | PSNR-YUV | 0.4336 / 0.4544 / 0.3098 / 0.1162 | 0.5230 / 0.5058 / 0.3507 / 0.1128 | 0.4794 / 0.5417 / 0.3799 / 0.1101 |
[Fig. 9. Varying geometry/texture QP against subjective scores. (a) grouping by geometry QP, (b) grouping by texture QP. Both panels plot (geometry QP, texture QP) against DMOS.]
[Figure residue: two sets of weights W_i = (0.24, 0.24, 0.19, 0.19, 0.07, 0.07) and W_i = (0.11, 0.11, 0.10, 0.10, 0.29, 0.29).]
TABLE IV
PERFORMANCE OF POINT-BASED AND PROJECTION-BASED OBJECTIVE POINT CLOUD QUALITY METRICS. Each cell lists PLCC / SROCC / KROCC / RMSE.

| PCQA method | Submetric | All | Human figure session | Inanimate object session |
|---|---|---|---|---|
| PC-MSDM [27] | - | 0.1814 / 0.1470 / 0.0991 / 0.1293 | 0.3003 / 0.2981 / 0.2125 / 0.1289 | 0.1902 / 0.0024 / 0.0008 / 0.1253 |
| PC-ASIM [26] | - | 0.2374 / 0.2695 / 0.1780 / 0.1277 | 0.3526 / 0.3382 / 0.2228 / 0.1265 | 0.1375 / 0.2389 / 0.1660 / 0.1264 |
| PCQM [33] | - | 0.6539 / 0.6666 / 0.4825 / 0.0994 | 0.6391 / 0.6538 / 0.4757 / 0.1040 | 0.6811 / 0.6882 / 0.5069 / 0.0934 |
| PointSSIM [36] | - | 0.7808 / 0.6955 / 0.5086 / 0.0821 | 0.7294 / 0.6152 / 0.4449 / 0.0925 | 0.8785 / 0.8033 / 0.6180 / 0.0610 |
| View Projection Based [10] | GMSD | 0.3487 / 0.2713 / 0.1874 / 0.1208 | 0.4511 / 0.3034 / 0.2148 / 0.1181 | 0.4172 / 0.3555 / 0.2502 / 0.1140 |
| View Projection Based [10] | GSM | 0.1447 / 0.1828 / 0.1277 / 0.1276 | 0.3645 / 0.1993 / 0.1387 / 0.1232 | 0.2586 / 0.2476 / 0.1723 / 0.1212 |
| View Projection Based [10] | IFC | 0.5378 / 0.5123 / 0.3528 / 0.1087 | 0.6669 / 0.6126 / 0.4360 / 0.0986 | 0.5477 / 0.5635 / 0.3979 / 0.1050 |
| View Projection Based [10] | IW-SSIM | 0.4298 / 0.4044 / 0.2714 / 0.1164 | 0.5620 / 0.5453 / 0.3723 / 0.1095 | 0.3684 / 0.3368 / 0.2228 / 0.1166 |
| View Projection Based [10] | MS-SSIM | 0.1647 / 0.2302 / 0.1575 / 0.1272 | 0.4247 / 0.2867 / 0.2042 / 0.1198 | 0.2570 / 0.2625 / 0.1831 / 0.1212 |
| View Projection Based [10] | NQM | 0.2014 / 0.2357 / 0.1629 / 0.1263 | 0.2658 / 0.2998 / 0.1977 / 0.1276 | 0.2512 / 0.2992 / 0.2070 / 0.1214 |
| View Projection Based [10] | RFSIM | 0.2269 / 0.2352 / 0.1550 / 0.1256 | 0.3315 / 0.3715 / 0.2364 / 0.1248 | 0.3167 / 0.3145 / 0.2158 / 0.1190 |
| View Projection Based [10] | SR-SIM | 0.2405 / 0.2888 / 0.1960 / 0.1251 | 0.4114 / 0.3491 / 0.2342 / 0.1206 | 0.4560 / 0.4408 / 0.2966 / 0.1117 |
| View Projection Based [10] | SSIM | 0.1027 / 0.1344 / 0.0938 / 0.1283 | 0.0204 / 0.1330 / 0.0935 / 0.1323 | 0.1793 / 0.2295 / 0.1566 / 0.1234 |
| View Projection Based [10] | VIF | 0.5284 / 0.5564 / 0.3836 / 0.1095 | 0.5580 / 0.5549 / 0.3914 / 0.1098 | 0.5862 / 0.6324 / 0.4382 / 0.1016 |
| View Projection Based [10] | VSI | 0.3445 / 0.2920 / 0.1985 / 0.1210 | 0.4649 / 0.4144 / 0.2874 / 0.1172 | 0.3791 / 0.3751 / 0.2536 / 0.1161 |
| Proposed Weighted View Projection Based | GMSD | 0.3959 / 0.3069 / 0.2103 / 0.1184 | 0.3936 / 0.3298 / 0.2321 / 0.1216 | 0.5012 / 0.4105 / 0.2826 / 0.1085 |
| Proposed Weighted View Projection Based | GSM | 0.2135 / 0.2122 / 0.1469 / 0.1260 | 0.3647 / 0.2213 / 0.1556 / 0.1232 | 0.3346 / 0.3161 / 0.2150 / 0.1182 |
| Proposed Weighted View Projection Based | IFC | 0.4941 / 0.4657 / 0.3154 / 0.1121 | 0.6716 / 0.6211 / 0.4497 / 0.0980 | 0.4411 / 0.4236 / 0.2806 / 0.1126 |
| Proposed Weighted View Projection Based | IW-SSIM | 0.4394 / 0.4112 / 0.2784 / 0.1158 | 0.5480 / 0.5356 / 0.3653 / 0.1107 | 0.3871 / 0.3386 / 0.2212 / 0.1157 |
| Proposed Weighted View Projection Based | MS-SSIM | 0.3301 / 0.2585 / 0.1777 / 0.1217 | 0.4399 / 0.3020 / 0.2183 / 0.1188 | 0.3834 / 0.3214 / 0.2249 / 0.1159 |
| Proposed Weighted View Projection Based | NQM | 0.2209 / 0.2312 / 0.1601 / 0.1257 | 0.3120 / 0.2773 / 0.1848 / 0.1257 | 0.2623 / 0.3015 / 0.2060 / 0.1211 |
| Proposed Weighted View Projection Based | RFSIM | 0.2278 / 0.2415 / 0.1616 / 0.1255 | 0.3265 / 0.3673 / 0.2372 / 0.1251 | 0.3486 / 0.3056 / 0.2109 / 0.1176 |
| Proposed Weighted View Projection Based | SR-SIM | 0.3308 / 0.2921 / 0.1983 / 0.1217 | 0.4235 / 0.3439 / 0.2335 / 0.1199 | 0.4216 / 0.4308 / 0.2872 / 0.1138 |
| Proposed Weighted View Projection Based | SSIM | 0.2294 / 0.1715 / 0.1205 / 0.1255 | 0.3501 / 0.1507 / 0.1063 / 0.1240 | 0.3601 / 0.3248 / 0.2210 / 0.1170 |
| Proposed Weighted View Projection Based | VIF | 0.5474 / 0.5595 / 0.3852 / 0.1079 | 0.5630 / 0.5594 / 0.3967 / 0.1094 | 0.6106 / 0.6210 / 0.4277 / 0.0993 |
| Proposed Weighted View Projection Based | VSI | 0.3315 / 0.2922 / 0.1993 / 0.1216 | 0.4416 / 0.4110 / 0.2865 / 0.1187 | 0.3837 / 0.3434 / 0.2312 / 0.1158 |
| Proposed Weighted View Projection Based | Average gain | 0.0446 / 0.0090 / 0.0061 / -0.0013 | 0.0285 / 0.0045 / 0.0054 / -0.0006 | 0.0379 / 0.0072 / 0.0013 / -0.0014 |
| Proposed Weighted View Projection Based | Ratio | 14.31% / 2.92% / 2.88% / -1.10% | 6.70% / 1.26% / 2.19% / -0.49% | 10.12% / 1.96% / 0.50% / -1.26% |
| Proposed Patch Projection Based | GMSD | 0.7360 / 0.5923 / 0.4208 / 0.0873 | 0.7993 / 0.6169 / 0.4476 / 0.0795 | 0.6992 / 0.5947 / 0.4257 / 0.0897 |
| Proposed Patch Projection Based | GSM | 0.6536 / 0.5408 / 0.3777 / 0.0976 | 0.7968 / 0.6056 / 0.4416 / 0.0800 | 0.5394 / 0.4592 / 0.3225 / 0.1056 |
| Proposed Patch Projection Based | IFC | 0.2925 / 0.2404 / 0.1664 / 0.1233 | 0.3440 / 0.1353 / 0.0936 / 0.1243 | 0.3400 / 0.3824 / 0.2785 / 0.1180 |
| Proposed Patch Projection Based | IW-SSIM | 0.8181 / 0.6966 / 0.5183 / 0.0742 | 0.8101 / 0.6505 / 0.4761 / 0.0776 | 0.8994 / 0.8563 / 0.6783 / 0.0548 |
| Proposed Patch Projection Based | MS-SSIM | 0.6772 / 0.5180 / 0.3632 / 0.0949 | 0.7796 / 0.5369 / 0.3827 / 0.0829 | 0.5945 / 0.4924 / 0.3482 / 0.1009 |
| Proposed Patch Projection Based | NQM | 0.5100 / 0.5082 / 0.3523 / 0.1109 | 0.7855 / 0.6783 / 0.5119 / 0.0819 | 0.5172 / 0.5547 / 0.3979 / 0.1074 |
| Proposed Patch Projection Based | RFSIM | 0.6144 / 0.5493 / 0.3798 / 0.1017 | 0.7636 / 0.6345 / 0.4535 / 0.0854 | 0.5841 / 0.5281 / 0.3716 / 0.1018 |
| Proposed Patch Projection Based | SR-SIM | 0.7761 / 0.6539 / 0.4731 / 0.0813 | 0.8040 / 0.6527 / 0.4796 / 0.0787 | 0.8349 / 0.7748 / 0.5816 / 0.0690 |
| Proposed Patch Projection Based | SSIM | 0.5498 / 0.4280 / 0.2915 / 0.1077 | 0.6259 / 0.4521 / 0.3116 / 0.1032 | 0.5436 / 0.4257 / 0.2997 / 0.1053 |
| Proposed Patch Projection Based | VIF | 0.6043 / 0.5404 / 0.3751 / 0.1027 | 0.6914 / 0.5757 / 0.4025 / 0.0956 | 0.7749 / 0.6925 / 0.5225 / 0.0793 |
| Proposed Patch Projection Based | VSI | 0.8063 / 0.6807 / 0.5013 / 0.0763 | 0.7994 / 0.6359 / 0.4626 / 0.0795 | 0.8922 / 0.8429 / 0.6540 / 0.0567 |
| Proposed Patch Projection Based | Average gain | 0.3426 / 0.2368 / 0.1757 / -0.0253 | 0.3162 / 0.1912 / 0.1504 / -0.0303 | 0.2912 / 0.2316 / 0.1897 / -0.0257 |
| Proposed Patch Projection Based | Ratio | 109.94% / 77.07% / 82.90% / -21.22% | 74.42% / 53.58% / 60.91% / -25.82% | 77.72% / 62.56% / 73.53% / -22.79% |
TABLE V
PERFORMANCE OF THE PROPOSED PATCH PROJECTION BASED OBJECTIVE METHOD IN THE VSENSEVVDB [16] DATABASE.

| Projection-based PCQA method | Submetric | PLCC | SROCC | KROCC | RMSE |
|---|---|---|---|---|---|
| View Projection Based [10] | GMSD | 0.2529 | 0.2813 | 0.1984 | 24.2996 |
| View Projection Based [10] | GSM | 0.3468 | 0.3484 | 0.2452 | 23.5575 |
| View Projection Based [10] | IFC | 0.6891 | 0.6799 | 0.4777 | 18.2002 |
| View Projection Based [10] | IW-SSIM | 0.469 | 0.4485 | 0.3239 | 22.183 |
| View Projection Based [10] | MS-SSIM | 0.2549 | 0.2831 | 0.1943 | 24.9611 |
| View Projection Based [10] | NQM | 0.5382 | 0.2666 | 0.1943 | 21.1682 |
| View Projection Based [10] | RFSIM | 0.4416 | 0.4415 | 0.3279 | 22.5341 |
| View Projection Based [10] | SR-SIM | 0.2319 | 0.3333 | 0.2308 | 24.4312 |
| View Projection Based [10] | SSIM | 0.2396 | 0.275 | 0.1984 | 24.3844 |
| View Projection Based [10] | VIF | 0.4885 | 0.4653 | 0.3279 | 21.9154 |
| View Projection Based [10] | VSI | 0.4154 | 0.2692 | 0.1903 | 24.5985 |
| Proposed Patch Projection Based | GMSD | 0.7101 | 0.824 | 0.6316 | 17.6826 |
| Proposed Patch Projection Based | GSM | 0.9383 | 0.8939 | 0.7523 | 8.6856 |
| Proposed Patch Projection Based | IFC | 0.8732 | 0.8346 | 0.6518 | 12.2409 |
| Proposed Patch Projection Based | IW-SSIM | 0.8998 | 0.8254 | 0.6316 | 10.9596 |
| Proposed Patch Projection Based | MS-SSIM | 0.8941 | 0.8225 | 0.6342 | 11.2509 |
| Proposed Patch Projection Based | NQM | 0.8273 | 0.7983 | 0.587 | 14.108 |
| Proposed Patch Projection Based | RFSIM | 0.9011 | 0.9322 | 0.7611 | 23.8582 |
| Proposed Patch Projection Based | SR-SIM | 0.9477 | 0.8951 | 0.7264 | 8.0182 |
| Proposed Patch Projection Based | SSIM | 0.5545 | 0.5907 | 0.4228 | 20.9014 |
| Proposed Patch Projection Based | VIF | 0.8389 | 0.8254 | 0.6437 | 13.6709 |
| Proposed Patch Projection Based | VSI | 0.9094 | 0.9051 | 0.7658 | 10.4448 |
| Proposed Patch Projection Based | Average gain | 0.4478 | 0.4596 | 0.3908 | -9.1284 |
| Proposed Patch Projection Based | Ratio | 112.78% | 123.54% | 147.78% | -39.81% |
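For reference, the fitness statistics reported in Tables III-V (PLCC, SROCC, KROCC, RMSE) between objective scores and DMOS can be computed as sketched below, assuming scipy; in practice, PLCC and RMSE are usually evaluated after the nonlinear logistic mapping recommended by ITU-T P.1401 [54], which is omitted here for brevity:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

def fitness_metrics(objective_scores, dmos):
    """Return (PLCC, SROCC, KROCC, RMSE) for one quality metric."""
    x = np.asarray(objective_scores, dtype=float)
    y = np.asarray(dmos, dtype=float)
    plcc, _ = pearsonr(x, y)      # linear correlation
    srocc, _ = spearmanr(x, y)    # rank-order correlation
    krocc, _ = kendalltau(x, y)   # pairwise rank agreement
    rmse = float(np.sqrt(np.mean((x - y) ** 2)))
    return plcc, srocc, krocc, rmse
```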
REFERENCES
[1] S. Schwarz, M. Preda, V. Baroncini, M. Budagavi, P. Cesar, P. A. Chou, R. A. Cohen, M. Krivokuća, S. Lasserre, Z. Li, J. Llach, K. Mammou, R. Mekuria, O. Nakagami, E. Siahaan, A. Tabatabai, A. M. Tourapis, and V. Zakharchenko, "Emerging MPEG standards for point cloud compression," IEEE Trans. Emerg. Sel. Topics Circuits Syst., vol. 9, no. 1, pp. 133-148, March 2019.
[2] R. Mekuria, K. Blom, and P. Cesar, "Design, implementation, and evaluation of a point cloud codec for tele-immersive video," IEEE Trans. Circuits Syst. Video Technol., vol. 27, no. 4, pp. 828-842, 2016.
[3] E. Alexiou, E. Upenik, and T. Ebrahimi, "Towards subjective quality assessment of point cloud imaging in augmented reality," in 2017 IEEE 19th Int. Workshop Multimedia Signal Process. (MMSP), 2017, pp. 1-6.
[4] A. Javaheri, C. Brites, F. Pereira, and J. Ascenso, "Subjective and objective quality evaluation of 3D point cloud denoising algorithms," in 2017 IEEE Int. Conf. Multimedia Expo Workshops (ICMEW), 2017, pp. 1-6.
[5] E. Alexiou and T. Ebrahimi, "On the performance of metrics to predict quality in point cloud representations," in Appl. Digit. Image Process. XL, vol. 10396, 2017, p. 103961H.
[6] E. Alexiou and T. Ebrahimi, "Impact of visualisation strategy for subjective quality assessment of point clouds," in 2018 IEEE Int. Conf. Multimedia Expo Workshops (ICMEW), 2018, pp. 1-6.
[7] E. Alexiou, A. M. Pinheiro, C. Duarte, D. Matković, E. Dumić, L. A. da Silva Cruz, L. G. Dmitrović, M. V. Bernardo, M. Pereira, and T. Ebrahimi, "Point cloud subjective evaluation methodology based on reconstructed surfaces," in Appl. Digit. Image Process. XLI, vol. 10752, 2018, p. 107520H.
[8] E. Alexiou, T. Ebrahimi, M. V. Bernardo, M. Pereira, A. Pinheiro, L. A. D. S. Cruz, C. Duarte, L. G. Dmitrovic, E. Dumic, D. Matkovics, and others, "Point cloud subjective evaluation methodology based on 2D rendering," in 2018 10th Int. Conf. Quality Multimedia Experience (QoMEX), 2018, pp. 1-6.
[9] X. Wu, Y. Zhang, C. Fan, J. Hou, and S. Kwong, "SIAT-PCQD: Subjective point cloud quality database with 6DoF head-mounted display," 2021. [Online]. Available: https://dx.doi.org/10.21227/ad8d-7r28
[10] E. M. Torlig, E. Alexiou, T. A. Fonseca, R. L. de Queiroz, and T. Ebrahimi, "A novel methodology for quality assessment of voxelized point clouds," in Appl. Digit. Image Process. XLI, vol. 10752, 2018, p. 107520I.
[11] E. Alexiou, I. Viola, T. M. Borges, T. A. Fonseca, R. L. de Queiroz, and T. Ebrahimi, "A comprehensive study of the rate-distortion performance in MPEG point cloud compression," APSIPA Trans. Signal Inf. Process., vol. 8, p. e27, 2019.
[12] J. Zhang, W. Huang, X. Zhu, and J.-N. Hwang, "A subjective quality evaluation for 3D point cloud models," in 2014 Int. Conf. Audio, Language and Image Process., 2014, pp. 827-831.
[13] A. Javaheri, C. Brites, F. Pereira, and J. Ascenso, "Subjective and objective quality evaluation of compressed point clouds," in 2017 IEEE 19th Int. Workshop Multimedia Signal Process. (MMSP), 2017, pp. 1-6.
[14] L. A. da Silva Cruz, E. Dumić, E. Alexiou, J. Prazeres, R. Duarte, M. Pereira, A. Pinheiro, and T. Ebrahimi, "Point cloud quality evaluation: Towards a definition for test conditions," in 2019 11th Int. Conf. Quality Multimedia Experience (QoMEX), 2019, pp. 1-6.
[15] Q. Yang, H. Chen, Z. Ma, Y. Xu, R. Tang, and J. Sun, "Predicting the perceptual quality of point cloud: A 3D-to-2D projection-based exploration," IEEE Trans. Multimedia, pp. 1-1, 2020.
[16] E. Zerman, P. Gao, C. Ozcinar, and A. Smolic, "Subjective and objective quality assessment for volumetric video compression," Electron. Imag., vol. 2019, no. 10, pp. 323-1, 2019.
[17] H. Su, Z. Duanmu, W. Liu, Q. Liu, and Z. Wang, "Perceptual quality assessment of 3D point clouds," in 2019 IEEE Int. Conf. Image Process. (ICIP), 2019, pp. 3182-3186.
[18] A. Javaheri, C. Brites, F. Pereira, and J. Ascenso, "Point cloud rendering after coding: Impacts on subjective and objective quality," arXiv preprint arXiv:1912.09137, 2019.
[19] E. Zerman, C. Ozcinar, P. Gao, and A. Smolic, "Textured mesh vs coloured point cloud: A subjective study for volumetric video compression," in 2020 Int. Conf. Quality Multimedia Experience (QoMEX), 2020, pp. 1-6.
[20] K. Cao, Y. Xu, and P. Cosman, "Visual quality of compressed mesh and point cloud sequences," IEEE Access, vol. 8, pp. 171203-171217, 2020.
[21] S. Perry, H. P. Cong, L. A. da Silva Cruz, J. Prazeres, M. Pereira, A. Pinheiro, E. Dumic, E. Alexiou, and T. Ebrahimi, "Quality evaluation of static point clouds encoded using MPEG codecs," in 2020 IEEE Int. Conf. Image Process. (ICIP), 2020, pp. 3428-3432.
[22] S. Subramanyam, J. Li, I. Viola, and P. Cesar, "Comparing the quality of highly realistic digital humans in 3DoF and 6DoF: A volumetric video case study," in 2020 IEEE Conf. Virtual Reality and 3D User Interfaces (VR), 2020, pp. 127-136.
[23] E. Alexiou, N. Yang, and T. Ebrahimi, "PointXR: A toolbox for visualization and subjective evaluation of point clouds in virtual reality," in 2020 Int. Conf. Quality Multimedia Experience (QoMEX), 2020, pp. 1-6.
[24] S. Schwarz and D. Flynn, "Common test conditions for point cloud compression," ISO/IEC JTC1/SC29/WG11, Macau, Tech. Rep. N17995, October 2018.
[25] D. Tian, H. Ochimizu, C. Feng, R. Cohen, and A. Vetro, "Geometric distortion metrics for point cloud compression," in 2017 IEEE Int. Conf. Image Process. (ICIP), 2017, pp. 3460-3464.
[26] E. Alexiou and T. Ebrahimi, "Point cloud quality assessment metric based on angular similarity," in 2018 IEEE Int. Conf. Multimedia Expo (ICME), 2018, pp. 1-6.
[27] G. Meynet, J. Digne, and G. Lavoué, "PC-MSDM: A quality metric for 3D point clouds," in 2019 11th Int. Conf. Quality Multimedia Experience (QoMEX), 2019, pp. 1-3.
[28] A. Javaheri, C. Brites, F. Pereira, and J. Ascenso, "Improving PSNR-based quality metrics performance for point cloud geometry," in 2020 IEEE Int. Conf. Image Process. (ICIP), 2020, pp. 3438-3442.
[29] A. Javaheri, C. Brites, F. Pereira, and J. Ascenso, "A generalized Hausdorff distance based quality metric for point cloud geometry," in 2020 Int. Conf. Quality Multimedia Experience (QoMEX), 2020, pp. 1-6.
[30] A. Javaheri, C. Brites, F. Pereira, and J. Ascenso, "Mahalanobis based point to distribution metric for point cloud geometry quality evaluation," IEEE Signal Process. Lett., vol. 27, pp. 1350-1354, 2020.
[31] R. Diniz, P. G. Freitas, and M. C. Q. Farias, "Multi-distance point cloud quality assessment," in 2020 IEEE Int. Conf. Image Process. (ICIP), 2020, pp. 3443-3447.
[32] R. Diniz, P. G. Freitas, and M. C. Q. Farias, "Towards a point cloud quality assessment model using local binary patterns," in 2020 Int. Conf. Quality Multimedia Experience (QoMEX), 2020, pp. 1-6.
[33] G. Meynet, Y. Nehmé, J. Digne, and G. Lavoué, "PCQM: A full-reference quality metric for colored 3D point clouds," in 2020 Int. Conf. Quality Multimedia Experience (QoMEX), 2020, pp. 1-6.
[34] I. Viola, S. Subramanyam, and P. Cesar, "A color-based objective quality metric for point cloud contents," in 2020 Int. Conf. Quality Multimedia Experience (QoMEX), 2020, pp. 1-6.
[35] Q. Yang, Z. Ma, Y. Xu, Z. Li, and J. Sun, "Inferring point cloud quality via graph similarity," arXiv preprint arXiv:2006.00497, 2020.
[36] E. Alexiou and T. Ebrahimi, "Towards a point cloud structural similarity metric," in 2020 IEEE Int. Conf. Multimedia Expo Workshops (ICMEW), 2020, pp. 1-6.
[37] I. Viola and P. Cesar, "A reduced reference metric for visual quality evaluation of point cloud contents," IEEE Signal Process. Lett., vol. 27, pp. 1660-1664, 2020.
[38] E. Alexiou and T. Ebrahimi, "Exploiting user interactivity in quality assessment of point cloud imaging," in 2019 11th Int. Conf. Quality Multimedia Experience (QoMEX), 2019, pp. 1-6.
[39] E. d'Eon, B. Harrison, T. Myers, and P. A. Chou, "8i voxelized full bodies (a voxelized point cloud dataset)," ISO/IEC JTC1/SC29/WG11, Geneva, Tech. Rep. m40059/M74006, January 2017.
[40] T. Ebner, I. Feldmann, O. Schreer, P. Kauff, E. Feiler, F. Govaere, K. Costa-Zahn, and F. Mrongowius, "HHI point cloud dataset of moving actress," ISO/IEC JTC1/SC29/WG11, Gwangju, Korea, Tech. Rep. m42152, January 2018.
[41] T. Ebner, I. Feldmann, O. Schreer, P. Kauff, and T. v. Unger, "HHI point cloud dataset of a boxing trainer," ISO/IEC JTC1/SC29/WG11, Ljubljana, Slovenia, Tech. Rep. m42921, July 2018.
[42] C. Loop, Q. Cai, S. O. Escolano, and P. A. Chou, "Microsoft voxelized upper bodies - a voxelized point cloud dataset," available at https://jpeg.org/plenodb/pc/microsoft/ (accessed Apr. 4, 2020).
[43] University of São Paulo, "Emerging image modalities representation and compression," available at http://uspaulopc.di.ubi.pt/ (accessed Apr. 4, 2020).
[44] EPFL, "ScanLAB projects point cloud data sets," available at http://grebjpeg.epfl.ch/jpeg_pc/index_Bi-plane.html (accessed Apr. 4, 2020).
[45] ITU-T, "Subjective video quality assessment methods for multimedia applications," Rec. P.910, 2008.
[46] S. Winkler, "Analysis of public image and video databases for quality assessment," IEEE J. Sel. Topics Signal Process., vol. 6, no. 6, pp. 616-625, 2012.
[47] ITU-R, "Methodology for the subjective assessment of the quality of television pictures," Rec. BT.500-13, 2012.
[48] ITU-T, "Subjective test methodologies for 360° video on head-mounted displays," Rec. P.919, 2020.
[49] I. Viola, M. Řeřábek, and T. Ebrahimi, "Comparison and evaluation of light field image coding approaches," IEEE J. Sel. Topics Signal Process., vol. 11, no. 7, pp. 1092-1106, 2017.
[50] C. Perra, "Assessing the quality of experience in viewing rendered decompressed light fields," Multimedia Tools and Appl., vol. 77, no. 16, pp. 21771-21790, 2018.
[51] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, pp. 600-612, 2004.
[52] K. Seshadrinathan, R. Soundararajan, A. C. Bovik, and L. K. Cormack, "Study of subjective and objective quality assessment of video," IEEE Trans. Image Process., vol. 19, no. 6, pp. 1427-1441, 2010.
[53] H. R. Sheikh and A. C. Bovik, "Image information and visual quality," IEEE Trans. Image Process., vol. 15, no. 2, pp. 430-444, 2006.
[54] ITU-T, "Methods, metrics and procedures for statistical evaluation, qualification and comparison of objective quality prediction models," Rec. P.1401, 2020.
[55] Q. Liu, H. Yuan, J. Hou, R. Hamzaoui, and H. Su, "Model-based joint bit allocation between geometry and color for video-based 3D point cloud compression," IEEE Trans. Multimedia, pp. 1-1, 2020.
[56] N. Damera-Venkata, T. D. Kite, W. S. Geisler, B. L. Evans, and A. C. Bovik, "Image quality assessment based on a degradation model," IEEE Trans. Image Process., vol. 9, no. 4, pp. 636-650, 2000.
[57] Z. Wang, E. P. Simoncelli, and A. C. Bovik, "Multiscale structural similarity for image quality assessment," in 37th Asilomar Conf. Signals, Syst. Computers, vol. 2, 2003, pp. 1398-1402.
[58] Z. Wang and Q. Li, "Information content weighting for perceptual image quality assessment," IEEE Trans. Image Process., vol. 20, no. 5, pp. 1185-1198, 2011.
[59] A. Liu, W. Lin, and M. Narwaria, "Image quality assessment based on gradient similarity," IEEE Trans. Image Process., vol. 21, no. 4, pp. 1500-1512, 2012.
[60] W. Xue, L. Zhang, X. Mou, and A. C. Bovik, "Gradient magnitude similarity deviation: A highly efficient perceptual image quality index," IEEE Trans. Image Process., vol. 23, no. 2, pp. 684-695, 2014.
[61] L. Zhang, L. Zhang, and X. Mou, "RFSIM: A feature based image quality assessment metric using Riesz transforms," in 2010 IEEE Int. Conf. Image Process. (ICIP), 2010, pp. 321-324.
[62] L. Zhang and H. Li, "SR-SIM: A fast and high performance IQA index based on spectral residual," in 2012 IEEE Int. Conf. Image Process. (ICIP), 2012, pp. 1473-1476.
[63] L. Zhang, Y. Shen, and H. Li, "VSI: A visual saliency-induced index for perceptual image quality assessment," IEEE Trans. Image Process., vol. 23, no. 10, pp. 4270-4281, 2014.
[64] H. R. Sheikh, A. C. Bovik, and G. de Veciana, "An information fidelity criterion for image quality assessment using natural scene statistics," IEEE Trans. Image Process., vol. 14, no. 12, pp. 2117-2128, 2005.
Xinju Wu received the B.S. degree in China Uni-
| [] |
[
"Projection-free Distributed Online Learning with Sublinear Communication Complexity",
"Projection-free Distributed Online Learning with Sublinear Communication Complexity"
] | [
"Yuanyu Wan [email protected] ",
"Guanghui Wang [email protected] ",
"Wei-Wei Tu [email protected] ",
"Lijun Zhang [email protected] ",
"\n4Paradigm Inc\nNational Key Laboratory for Novel Software Technology\nNanjing University\n210023, 100000Nanjing, BeijingChina, China\n",
"\nNational Key Laboratory for Novel Software Technology\nNanjing University\n210023NanjingChina\n"
] | [
"4Paradigm Inc\nNational Key Laboratory for Novel Software Technology\nNanjing University\n210023, 100000Nanjing, BeijingChina, China",
"National Key Laboratory for Novel Software Technology\nNanjing University\n210023NanjingChina"
] | [] | To deal with complicated constraints via locally light computations in distributed online learning, a recent study has presented a projection-free algorithm called distributed online conditional gradient (D-OCG), and achieved an $O(T^{3/4})$ regret bound for convex losses, where $T$ is the number of total rounds. However, it requires $T$ communication rounds, and cannot utilize the strong convexity of losses. In this paper, we propose an improved variant of D-OCG, namely D-BOCG, which can attain the same $O(T^{3/4})$ regret bound with only $O(\sqrt{T})$ communication rounds for convex losses, and a better regret bound of $O(T^{2/3}(\log T)^{1/3})$ with fewer $O(T^{1/3}(\log T)^{2/3})$ communication rounds for strongly convex losses. The key idea is to adopt a delayed update mechanism that reduces the communication complexity, and to redefine the surrogate loss function in D-OCG for exploiting the strong convexity. Furthermore, we provide lower bounds to demonstrate that the $O(\sqrt{T})$ communication rounds required by D-BOCG are optimal (in terms of $T$) for achieving the $O(T^{3/4})$ regret with convex losses, and that the $O(T^{1/3}(\log T)^{2/3})$ communication rounds required by D-BOCG are near-optimal (in terms of $T$) for achieving the $O(T^{2/3}(\log T)^{1/3})$ regret with strongly convex losses up to polylogarithmic factors. Finally, to handle the more challenging bandit setting, in which only the loss value is available, we incorporate the classical one-point gradient estimator into D-BOCG, and obtain similar theoretical guarantees. (OCO), a multi-round game between a learner and an adversary (Zinkevich, 2003), and achieved a regret bound of $O(T^{3/4})$ for convex losses, where $T$ is the number of total rounds. In each round, OCG updates the learner by utilizing one linear optimization step to minimize a surrogate loss function. Different from CG, which requires that all data related to the objective function be given beforehand, OCG only requires a single data point per round. Recently, Zhang et al. (2017) further proposed D-OCG by extending OCG into a more practical scenario: distributed OCO over a network. It is well motivated by many distributed applications such as multi-agent coordination and distributed tracking in sensor networks (Li et al., 2002;Xiao et al., 2007;Nedić et al., 2009;Duchi et al., 2011;Yang et al., 2019). Specifically, by defining the network as an undirected graph, each node of the graph represents a local learner, and can only communicate with its neighbors. The key idea of D-OCG is to maintain OCG for each local learner, and update it according to the local gradient as well as those received from its neighbors in each round. Compared with projection-based distributed algorithms (Ram et al., 2010;Hosseini et al., 2013;Yan et al., 2013), D-OCG significantly reduces the time cost for solving high-dimensional problems with complicated constraints, because it only utilizes one linear optimization step for each update of local learners. Moreover, D-OCG is more scalable than OCG, since it can utilize many locally light computation resources to handle large-scale problems. However, there exist two interesting questions about D-OCG. First, the local learners of D-OCG communicate with their neighbors to share the local gradients in each round, so it requires $T$ communication rounds in total. Since the communication overhead is often the performance bottleneck in distributed systems, it is natural to ask whether the communication complexity of D-OCG can be reduced without increasing its regret.
Second, similar to OCG in the standard OCO, Zhang et al. (2017) have proved that D-OCG in the distributed OCO achieves an $O(T^{3/4})$ regret bound for convex losses. Note that recent studies (Garber and Kretzu, 2021;Wan and Zhang, 2021) in the standard OCO have proposed variants of OCG to attain better regret for strongly convex losses. It is thus natural to ask whether the strong convexity can also be utilized to improve the regret of D-OCG in the distributed OCO. In this paper, we provide affirmative answers to these two questions by developing an improved variant of D-OCG, namely distributed block online conditional gradient (D-BOCG), which can attain the same $O(T^{3/4})$ regret bound with only $O(\sqrt{T})$ communication rounds for convex losses, and a better regret bound of $O(T^{2/3}(\log T)^{1/3})$ with fewer $O(T^{1/3}(\log T)^{2/3})$ communication rounds for strongly convex losses. Compared with the original D-OCG, there exist three critical changes. • To further utilize the strong convexity of losses, a more general surrogate loss function is introduced in our D-BOCG, which is inspired by the surrogate loss function used in strongly convex variants of OCG (Garber and Kretzu, 2021;Wan and Zhang, 2021) and is able to cover that used in D-OCG.
"https://arxiv.org/pdf/2103.11102v2.pdf"
] | 249,642,747 | 2103.11102 | 16b74aea7270a325d68dafc1d19425572feb7234 |
Projection-free Distributed Online Learning with Sublinear Communication Complexity
Yuanyu Wan [email protected]
Guanghui Wang [email protected]
Wei-Wei Tu [email protected]
Lijun Zhang [email protected]
4Paradigm Inc
National Key Laboratory for Novel Software Technology
Nanjing University
210023, 100000Nanjing, BeijingChina, China
National Key Laboratory for Novel Software Technology
Nanjing University
210023NanjingChina
Projection-free Distributed Online Learning with Sublinear Communication Complexity
Projection-free, Distributed Online Learning, Communication Complexity, Conditional Gradient
To deal with complicated constraints via locally light computations in distributed online learning, a recent study has presented a projection-free algorithm called distributed online conditional gradient (D-OCG), and achieved an $O(T^{3/4})$ regret bound for convex losses, where $T$ is the number of total rounds. However, it requires $T$ communication rounds, and cannot utilize the strong convexity of losses. In this paper, we propose an improved variant of D-OCG, namely D-BOCG, which can attain the same $O(T^{3/4})$ regret bound with only $O(\sqrt{T})$ communication rounds for convex losses, and a better regret bound of $O(T^{2/3}(\log T)^{1/3})$ with fewer $O(T^{1/3}(\log T)^{2/3})$ communication rounds for strongly convex losses. The key idea is to adopt a delayed update mechanism that reduces the communication complexity, and to redefine the surrogate loss function in D-OCG for exploiting the strong convexity. Furthermore, we provide lower bounds to demonstrate that the $O(\sqrt{T})$ communication rounds required by D-BOCG are optimal (in terms of $T$) for achieving the $O(T^{3/4})$ regret with convex losses, and that the $O(T^{1/3}(\log T)^{2/3})$ communication rounds required by D-BOCG are near-optimal (in terms of $T$) for achieving the $O(T^{2/3}(\log T)^{1/3})$ regret with strongly convex losses up to polylogarithmic factors. Finally, to handle the more challenging bandit setting, in which only the loss value is available, we incorporate the classical one-point gradient estimator into D-BOCG, and obtain similar theoretical guarantees. (OCO), a multi-round game between a learner and an adversary (Zinkevich, 2003), and achieved a regret bound of $O(T^{3/4})$ for convex losses, where $T$ is the number of total rounds. In each round, OCG updates the learner by utilizing one linear optimization step to minimize a surrogate loss function. Different from CG, which requires that all data related to the objective function be given beforehand, OCG only requires a single data point per round. Recently, Zhang et al. (2017) further proposed D-OCG by extending OCG into a more practical scenario: distributed OCO over a network. It is well motivated by many distributed applications such as multi-agent coordination and distributed tracking in sensor networks (Li et al., 2002;Xiao et al., 2007;Nedić et al., 2009;Duchi et al., 2011;Yang et al., 2019). Specifically, by defining the network as an undirected graph, each node of the graph represents a local learner, and can only communicate with its neighbors. The key idea of D-OCG is to maintain OCG for each local learner, and update it according to the local gradient as well as those received from its neighbors in each round. Compared with projection-based distributed algorithms (Ram et al., 2010;Hosseini et al., 2013;Yan et al., 2013), D-OCG significantly reduces the time cost for solving high-dimensional problems with complicated constraints, because it only utilizes one linear optimization step for each update of local learners. Moreover, D-OCG is more scalable than OCG, since it can utilize many locally light computation resources to handle large-scale problems. However, there exist two interesting questions about D-OCG. First, the local learners of D-OCG communicate with their neighbors to share the local gradients in each round, so it requires $T$ communication rounds in total. Since the communication overhead is often the performance bottleneck in distributed systems, it is natural to ask whether the communication complexity of D-OCG can be reduced without increasing its regret.
Second, similar to OCG in the standard OCO, Zhang et al. (2017) have proved that D-OCG in the distributed OCO achieves an $O(T^{3/4})$ regret bound for convex losses. Note that recent studies (Garber and Kretzu, 2021;Wan and Zhang, 2021) in the standard OCO have proposed variants of OCG to attain better regret for strongly convex losses. It is thus natural to ask whether the strong convexity can also be utilized to improve the regret of D-OCG in the distributed OCO. In this paper, we provide affirmative answers to these two questions by developing an improved variant of D-OCG, namely distributed block online conditional gradient (D-BOCG), which can attain the same $O(T^{3/4})$ regret bound with only $O(\sqrt{T})$ communication rounds for convex losses, and a better regret bound of $O(T^{2/3}(\log T)^{1/3})$ with fewer $O(T^{1/3}(\log T)^{2/3})$ communication rounds for strongly convex losses. Compared with the original D-OCG, there exist three critical changes. • To further utilize the strong convexity of losses, a more general surrogate loss function is introduced in our D-BOCG, which is inspired by the surrogate loss function used in strongly convex variants of OCG (Garber and Kretzu, 2021;Wan and Zhang, 2021) and is able to cover that used in D-OCG.
Introduction
Conditional gradient (CG) (Frank and Wolfe, 1956;Jaggi, 2013) (also known as Frank-Wolfe) is a simple yet efficient offline algorithm for solving high-dimensional problems with complicated constraints. To find a feasible solution, instead of performing the time-consuming projection step, CG utilizes the linear optimization step, which can be carried out much more efficiently. For example, in the matrix completion problem (Hazan and Kale, 2012), where the feasible set consists of all matrices with bounded trace norm, the projection step needs to compute the singular value decomposition (SVD) of a matrix. In contrast, the linear optimization step in CG only requires computing the top singular vector pair of a matrix, which is at least an order of magnitude faster than the SVD. Due to the emergence of large-scale problems, online conditional gradient (OCG) (Hazan and Kale, 2012;Hazan, 2016) (also known as online Frank-Wolfe) was proposed for online convex optimization (OCO).
• To reduce the communication complexity, our D-BOCG adopts a delayed update mechanism, which divides the total $T$ rounds into a smaller number of equally-sized blocks, and only updates the local learners once per block. In this way, the local learners only need to communicate with their neighbors once for each block, and the total number of communication rounds is reduced from $T$ to the number of blocks.
• According to the delayed update mechanism, the number of updates in our D-BOCG could be much smaller than that in D-OCG, which brings a new challenge: performing only 1 linear optimization step per update, as in D-OCG, would increase the $O(T^{3/4})$ regret of D-OCG for convex losses. To address this problem, we perform iterative linear optimization steps for each update. Specifically, the number of linear optimization steps for each update is set to be the same as the block size, which ensures that the total number of linear optimization steps required by our D-BOCG is the same as that required by D-OCG.
Note that the delayed update mechanism and the iterative linear optimization steps in the last two changes are borrowed from Garber and Kretzu (2020), who employed them to improve projection-free bandit convex optimization. In contrast, we apply them to the distributed setting considered here. Furthermore, to complement the theoretical guarantees of our D-BOCG, for any distributed online algorithm with $C$ communication rounds, we provide an $\Omega(T/\sqrt{C})$ lower regret bound with convex losses, and an $\Omega(T/C)$ lower regret bound with strongly convex losses, respectively. These lower bounds imply that the $O(\sqrt{T})$ communication rounds required by D-BOCG are optimal (in terms of $T$) for achieving the $O(T^{3/4})$ regret with convex losses, and that the $O(T^{1/3}(\log T)^{2/3})$ communication rounds required by D-BOCG are near-optimal (in terms of $T$) for achieving the $O(T^{2/3}(\log T)^{1/3})$ regret with strongly convex losses up to polylogarithmic factors. To the best of our knowledge, we are the first to study lower regret bounds for distributed online algorithms with limited communication rounds. Finally, to handle the more challenging bandit setting, we propose distributed block bandit conditional gradient (D-BBCG) by combining D-BOCG with the classical one-point gradient estimator (Flaxman et al., 2005), which can approximate the gradient with a single loss value. Our theoretical analysis first reveals that, in expectation, D-BBCG can also attain a regret bound of $O(T^{3/4})$ with $O(\sqrt{T})$ communication rounds for convex losses, and a regret bound of $O(T^{2/3}(\log T)^{1/3})$ with $O(T^{1/3}(\log T)^{2/3})$ communication rounds for strongly convex losses. Moreover, for convex losses, we show that D-BBCG enjoys a high-probability regret bound of $O(T^{3/4}(\log T)^{1/2})$ with $O(\sqrt{T})$ communication rounds. A preliminary version of this paper was presented at the 37th International Conference on Machine Learning in 2020 (Wan et al., 2020). In this paper, we have significantly enriched the preliminary version by adding the following extensions.
• Different from Wan et al. (2020) that only considered convex losses, we generalize D-BOCG and D-BBCG to further exploit the strong convexity, and establish improved theoretical guarantees for strongly convex losses.
• Different from Wan et al. (2020) that only studied upper regret bounds, we provide lower bounds on the regret of distributed online algorithms with limited communication rounds for convex losses as well as strongly convex losses.
• We provide more experiments including new results for distributed online binary classification, different networks topologies, different network sizes, and three additional datasets.
Related Work
In this section, we briefly review existing projection-free algorithms for OCO and the distributed OCO.
Projection-free Algorithms for OCO
OCO is a general framework for online learning, which covers a variety of problems such as online portfolio selection (Blum and Kalai, 1999;Agarwal et al., 2006), online routing (Awerbuch and Kleinberg, 2004, 2008), online metric learning (Jain et al., 2008;Tsagkatakis and Savakis, 2011), and learning with expert advice (Cesa-Bianchi et al., 1997). It is generally viewed as a repeated game between a learner and an adversary. In each round $t$, the learner first chooses a decision $x(t)$ from a convex decision set $K \subseteq \mathbb{R}^d$. Then, the adversary reveals a convex function $f_t(x): K \to \mathbb{R}$, which incurs a loss $f_t(x(t))$ to the learner. The goal of the learner is to minimize the regret with respect to any fixed optimal decision, which is defined as
$$R_T = \sum_{t=1}^{T} f_t(x(t)) - \min_{x\in K}\sum_{t=1}^{T} f_t(x).$$
OCG (Hazan and Kale, 2012;Hazan, 2016) is the first projection-free algorithm for OCO, which enjoys a regret bound of $O(T^{3/4})$ for convex losses and updates via the following linear optimization step
$$v = \operatorname*{argmin}_{x\in K} \nabla F_t(x(t))^\top x, \qquad x(t+1) = x(t) + s_t(v - x(t)) \tag{1}$$
where
$$F_t(x) = \eta\sum_{k=1}^{t-1}\nabla f_k(x(k))^\top x + \|x - x(1)\|_2^2 \tag{2}$$
is a surrogate loss function, and $s_t$, $\eta$ are two parameters. Recent studies have proposed to improve the regret of OCG by exploiting additional curvature of the loss functions, including smoothness and strong convexity. For convex and smooth losses, Hazan and Minasyan (2020) proposed the online smooth projection-free algorithm, and obtained an expected regret bound of $O(T^{2/3})$ as well as a high-probability regret bound of $O(T^{2/3}\log T)$. If the losses are $\alpha$-strongly convex, Wan and Zhang (2021) proposed a strongly convex variant of OCG by redefining the surrogate loss function as
$$F_t(x) = \sum_{k=1}^{t-1}\left(\nabla f_k(x(k))^\top x + \frac{\alpha}{2}\|x - x(k)\|_2^2\right) \tag{3}$$
and using a line search rule to select the original parameter $s_t$ in (1). This algorithm enjoys a regret bound of $O(T^{2/3})$ for strongly convex losses, and a very similar algorithm was concurrently proposed by Garber and Kretzu (2021). Moreover, when the decision set is polyhedral (Garber and Hazan, 2016) or smooth (Levy and Krause, 2019), projection-free algorithms have been proposed that enjoy an $O(\sqrt{T})$ regret bound for convex losses and an $O(\log T)$ regret bound for strongly convex losses, respectively. If the decision set is strongly convex, Wan and Zhang (2021) have proved that OCG achieves an $O(T^{2/3})$ regret bound for convex losses, and that the strongly convex variant of OCG achieves an $O(\sqrt{T})$ regret bound for strongly convex losses.
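As an illustration only (not code from the cited papers), the update (1) on the surrogate (2) can be sketched as follows in Python; `lmo` is an assumed linear-optimization oracle over $K$, and the step-size schedule is left as a parameter:

```python
import numpy as np

def ocg_step(x_t, grad_sum, x_1, eta, s_t, lmo):
    """One OCG round for the surrogate (2):
    F_t(x) = eta * grad_sum^T x + ||x - x_1||_2^2."""
    grad_F = eta * grad_sum + 2.0 * (x_t - x_1)  # gradient of (2) at x(t)
    v = lmo(grad_F)                              # linear optimization step of (1)
    return x_t + s_t * (v - x_t)                 # convex combination stays in K
```

For instance, over an $\ell_2$ ball of radius $R$, the oracle is simply `lmo = lambda g: -R * g / np.linalg.norm(g)`.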
Furthermore, OCG has been extended to the more challenging bandit setting, where only the loss value is available to the learner. Due to the lack of the gradient, Chen et al. (2019) proposed a bandit variant of OCG by combining it with the one-point gradient estimator (Flaxman et al., 2005), which can approximate the gradient with a single loss value. For convex losses, the bandit variant of OCG achieves an expected regret bound of $O(T^{4/5})$, which is worse than the $O(T^{3/4})$ regret bound of OCG. Later, by dividing the total rounds into several equally-sized blocks and performing iterative linear optimization steps in each block, Garber and Kretzu (2020) improved the bandit variant of OCG, and reduced the expected regret bound for convex losses from $O(T^{4/5})$ to $O(T^{3/4})$. For strongly convex losses, Garber and Kretzu (2021) further developed a projection-free bandit algorithm that attains an expected regret bound of $O(T^{2/3}\log T)$.
We also note that Chen et al. (2018) developed a projection-free algorithm for another interesting setting, where the learner has access to stochastic gradients.
Projection-free Algorithms for the Distributed OCO
According to previous studies (Hosseini et al., 2013;Zhang et al., 2017), distributed OCO is a variant of OCO over a network defined by an undirected graph $G = (V, E)$, where $V = [n]$ is the node set and $E \subseteq V\times V$ is the edge set. Different from OCO, where only 1 learner exists, in the distributed OCO each node $i\in V$ is a local learner, and can only communicate with its immediate neighbors $N_i = \{j\in V \mid (i,j)\in E\}$.
In each round $t$, each local learner $i\in V$ chooses a decision $x_i(t)\in K$, and then it receives a convex loss function $f_{t,i}(x): K\to\mathbb{R}$ chosen by the adversary. Moreover, the global loss function $f_t(x)$ is defined as the sum of the local loss functions
$$f_t(x) = \sum_{j=1}^{n} f_{t,j}(x).$$
The goal of each local learner $i\in V$ is to minimize the regret measured by the global loss with respect to the optimal fixed decision, which is defined as
$$R_{T,i} = \sum_{t=1}^{T} f_t(x_i(t)) - \min_{x\in K}\sum_{t=1}^{T} f_t(x).$$
Since the local loss function $f_{t,i}(x)$ is only available to the local learner $i$, to achieve this global goal for all local learners, it is necessary to utilize both their local gradients and those received from their neighbors. Therefore, to make OCG distributed, Zhang et al. (2017) first introduced a non-negative weight matrix $P\in\mathbb{R}^{n\times n}$, and redefined the surrogate loss function $F_t(x)$ in OCG as
$$F_{t,i}(x) = \eta z_i(t)^\top x + \|x - x_1(1)\|_2^2 \tag{4}$$
for each local learner $i$ by replacing $\sum_{k=1}^{t-1}\nabla f_k(x(k))$ with $z_i(t)$, where $z_i(1) = 0$ and
$$z_i(t+1) = \sum_{j\in N_i} P_{ij} z_j(t) + \nabla f_{t,i}(x_i(t)). \tag{5}$$
Note that the matrix $P$ is also referred to as a gossip, consensus, or averaging matrix (Bénézit et al., 2007;Tsianos and Rabbat, 2012;Koloskova et al., 2019). Moreover, $z_i(t)$ is a weighted sum of historical local gradients and those received from the neighbors, which can be viewed as an approximation of the sum of global gradients and is critical for minimizing the global regret. Then, with a time-varying parameter $s_t$, they proposed D-OCG, which updates as follows:
$$\text{for each local learner } i\in V:\quad v_i = \operatorname*{argmin}_{x\in K} \nabla F_{t,i}(x_i(t))^\top x,\qquad x_i(t+1) = x_i(t) + s_t(v_i - x_i(t)) \tag{6}$$
and established a regret bound of $O(T^{3/4})$ for convex losses. However, in each round $t$, each local learner $i$ needs to compute $z_i(t+1)$ by communicating with its neighbors, which requires $T$ communication rounds in total.
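To make the interplay of (4)-(6) concrete, the following is a minimal sketch (our illustration, not the authors' code) of one D-OCG round for all $n$ learners, with decisions and dual sums stacked as $(n, d)$ arrays and `lmo` an assumed linear-optimization oracle over $K$:

```python
import numpy as np

def docg_round(X, Z, grads, P, x1, eta, s_t, lmo):
    """X[i] = x_i(t), Z[i] = z_i(t), grads[i] = grad f_{t,i}(x_i(t))."""
    X_next = np.empty_like(X)
    for i in range(X.shape[0]):
        grad_F = eta * Z[i] + 2.0 * (X[i] - x1)  # gradient of (4) at x_i(t)
        v = lmo(grad_F)
        X_next[i] = X[i] + s_t * (v - X[i])      # update (6)
    Z_next = P @ Z + grads                       # gossip accumulation (5)
    return X_next, Z_next
```

The line `P @ Z` is exactly the per-round neighbor communication that D-BOCG will reduce to once per block.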
Preliminaries
In this section, we introduce necessary preliminaries including standard definitions, common assumptions, and basic algorithmic ingredients.
Definitions
We first recall the standard definitions for smooth and strongly convex functions (Boyd and Vandenberghe, 2004).
Definition 1 Let $f(x): K\to\mathbb{R}$ be a function over $K$. It is called $\beta$-smooth over $K$ if for all $x\in K$, $y\in K$,
$$f(y) \le f(x) + \nabla f(x)^\top (y - x) + \frac{\beta}{2}\|y - x\|_2^2.$$
Definition 2 Let $f(x): K\to\mathbb{R}$ be a function over $K$. It is called $\alpha$-strongly convex over $K$ if for all $x\in K$, $y\in K$,
$$f(y) \ge f(x) + \nabla f(x)^\top (y - x) + \frac{\alpha}{2}\|y - x\|_2^2.$$
Let $f(x): K\to\mathbb{R}$ be $\alpha$-strongly convex over $K$ and $x^* = \operatorname*{argmin}_{x\in K} f(x)$. According to Hazan and Kale (2012), since the first-order optimality condition implies $\nabla f(x^*)^\top (x - x^*) \ge 0$, for any $x\in K$ it holds that
$$\frac{\alpha}{2}\|x - x^*\|_2^2 \le f(x) - f(x^*). \tag{7}$$
Algorithm 1 CG
1: Input: feasible set $K$, $L$, $F(x)$, $x_{in}$
2: $c_0 = x_{in}$
3: for $\tau = 0, \ldots, L-1$ do
4:   $v_\tau \in \operatorname*{argmin}_{x\in K} \nabla F(c_\tau)^\top x$
5:   $s_\tau = \operatorname*{argmin}_{s\in[0,1]} F(c_\tau + s(v_\tau - c_\tau))$
6:   $c_{\tau+1} = c_\tau + s_\tau (v_\tau - c_\tau)$
7: end for
8: return $x_{out} = c_L$
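The following is a minimal, runnable sketch of Algorithm 1 (our illustration); `F`, `grad_F`, and `lmo` are assumed callables, and the exact line search of step 5 is approximated by a grid:

```python
import numpy as np

def conditional_gradient(F, grad_F, lmo, x_in, L):
    """A sketch of Algorithm 1 (CG) with an approximate line search."""
    c = np.asarray(x_in, dtype=float).copy()
    for _ in range(L):
        v = lmo(grad_F(c))                   # step 4: linear optimization
        d = v - c
        s_grid = np.linspace(0.0, 1.0, 101)  # step 5: grid line search
        vals = [F(c + s * d) for s in s_grid]
        s_star = s_grid[int(np.argmin(vals))]
        c = c + s_star * d                   # step 6
    return c                                 # step 8: x_out = c_L
```

For a quadratic surrogate with Hessian $c_2 I$ (such as (10) below), step 5 even has the closed form $s_\tau = \min\{1, \max\{0, -\nabla F(c_\tau)^\top d_\tau / (c_2\|d_\tau\|_2^2)\}\}$ with $d_\tau = v_\tau - c_\tau$.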
Assumptions
Then, similar to previous studies on OCO and the distributed OCO, we introduce the following assumptions.
Assumption 1 At each round $t$, each local loss function $f_{t,i}(x)$ is $G$-Lipschitz over $K$, i.e., $|f_{t,i}(x) - f_{t,i}(y)| \le G\|x - y\|_2$ for any $x\in K$, $y\in K$.
Assumption 2 The convex decision set $K$ is full dimensional and contains the origin. Moreover, there exist two constants $r, R > 0$ such that $r\mathcal{B}^d \subseteq K \subseteq R\mathcal{B}^d$, where $\mathcal{B}^d$ denotes the unit Euclidean ball centered at the origin in $\mathbb{R}^d$.
Assumption 3 The non-negative weight matrix $P\in\mathbb{R}^{n\times n}$ is supported on the graph $G = (V, E)$, symmetric, and doubly stochastic, which satisfies
• $P_{ij} > 0$ only if $(i,j)\in E$;
• $\sum_{j=1}^{n} P_{ij} = \sum_{j\in N_i} P_{ij} = 1$, $\forall i\in V$; $\sum_{i=1}^{n} P_{ij} = \sum_{i\in N_j} P_{ij} = 1$, $\forall j\in V$.
Moreover, the second largest singular value of $P$, denoted by $\sigma_2(P)$, is strictly smaller than 1.
The non-negative weight matrix P in Assumption 3 will be utilized to model the communication between the local learners in the distributed OCO.
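Assumption 3 is satisfied, for instance, by the well-known Metropolis weights on a connected graph; the sketch below (our illustration, not part of the paper) builds such a $P$ from a 0/1 adjacency matrix and numerically checks the assumption:

```python
import numpy as np

def metropolis_weights(A):
    """Symmetric doubly stochastic P supported on the graph of A."""
    n = A.shape[0]
    deg = A.sum(axis=1)
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if A[i, j]:
                P[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        P[i, i] = 1.0 - P[i].sum()   # rows (hence columns) sum to one
    return P

# sanity check on a 4-node connected graph
A = np.array([[0, 1, 0, 1], [1, 0, 1, 1], [0, 1, 0, 1], [1, 1, 1, 0]])
P = metropolis_weights(A)
assert np.allclose(P, P.T) and np.allclose(P.sum(axis=1), 1.0)
sigma2 = np.linalg.svd(P, compute_uv=False)[1]  # second largest singular value
assert sigma2 < 1.0
```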
Assumption 4 At each round $t$, each local loss function $f_{t,i}(x)$ is $\alpha$-strongly convex over $K$.
Note that if Assumption 4 holds with $\alpha = 0$, it reduces to the case with only convex losses. Moreover, according to previous studies (Flaxman et al., 2005;Garber and Kretzu, 2020), the following assumption is required in the bandit setting.
Assumption 5 At each round $t$, each local loss function $f_{t,i}(x)$ is bounded over $K$, i.e., $|f_{t,i}(x)| \le M$ for any $x\in K$. Moreover, all local loss functions are chosen beforehand, i.e., the adversary is oblivious.
Algorithmic Ingredients
Next, we present conditional gradient (CG) (Frank and Wolfe, 1956;Jaggi, 2013), which will be utilized to minimize the surrogate loss functions in our algorithms. Given a function $F(x): K\to\mathbb{R}$ and an initial point $c_0 = x_{in}\in K$, it iteratively performs the linear optimization step
$$v_\tau \in \operatorname*{argmin}_{x\in K} \nabla F(c_\tau)^\top x, \qquad c_{\tau+1} = c_\tau + s_\tau(v_\tau - c_\tau)$$
for $\tau = 0, \ldots, L-1$, where $L$ is the number of iterations and $s_\tau$ is selected by the line search
$$s_\tau = \operatorname*{argmin}_{s\in[0,1]} F(c_\tau + s(v_\tau - c_\tau)).$$
The detailed procedures of CG are summarized in Algorithm 1, and its convergence rate is presented in the following lemma.
Lemma 1 (Derived from Theorem 1 of Jaggi (2013)) If $F(x): K\to\mathbb{R}$ is a convex and $\beta$-smooth function and $\|x\|_2 \le R$ for any $x\in K$, Algorithm 1 with $L \ge 1$ ensures
$$F(x_{out}) - F(x^*) \le \frac{8\beta R^2}{L + 2},$$
where $x^* \in \operatorname*{argmin}_{x\in K} F(x)$.
According to Lemma 1, when $L$ is large enough, CG can output a point $x_{out}$ such that the approximation error $F(x_{out}) - F(x^*)$ is very small. As a result, with an appropriate $L > 1$, we can minimize our surrogate loss functions more accurately than by performing only 1 linear optimization step, which is critical for achieving our desired regret bounds with only sublinear communication complexity. Moreover, CG has been employed to develop projection-free algorithms with improved regret bounds for bandit convex optimization (Garber and Kretzu, 2020, 2021). In this paper, we introduce it into the distributed OCO to propose projection-free algorithms with sublinear communication complexity for the full information and bandit settings, respectively. Finally, we introduce a standard technique called the one-point gradient estimator (Flaxman et al., 2005), which can approximate the gradient with a single loss value and will be utilized in the bandit setting. For a function $f(x)$, its $\delta$-smoothed version is defined as
$$\hat{f}_\delta(x) = \mathbb{E}_{u\sim \mathcal{B}^d}\left[f(x + \delta u)\right]$$
and satisfies the following lemma.
Lemma 2 (Lemma 1 in Flaxman et al. (2005)) Let $\delta > 0$. Then
$$\nabla \hat{f}_\delta(x) = \mathbb{E}_{u\sim \mathcal{S}^d}\left[\frac{d}{\delta} f(x + \delta u)\, u\right],$$
where $\mathcal{S}^d$ denotes the unit sphere in $\mathbb{R}^d$.
Lemma 2 provides an unbiased estimator of the gradient $\nabla \hat{f}_\delta(x)$ that only requires observing the single value $f(x + \delta u)$. Note that there also exist two-point and $(d+1)$-point gradient estimators (Agarwal et al., 2010;Duchi et al., 2015), which can approximate the gradient more accurately than the one-point gradient estimator. However, by querying two or $(d+1)$ points per round, Agarwal et al. (2010) have recovered, in expectation, the best regret bounds established in the full information setting for both convex and strongly convex losses, which actually implies that the bandit setting with two or $(d+1)$ points is almost as simple as the full information setting. So, in this paper, we only consider the most challenging bandit setting, where only one point is available per round. Moreover, we will show that the one-point gradient estimator is sufficient for projection-free algorithms to obtain theoretical guarantees similar to those obtained in the full information setting.
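A minimal sketch of the estimator in Lemma 2 (`f` is an assumed loss-value callback; names are illustrative):

```python
import numpy as np

def one_point_grad(f, x, delta, rng=None):
    """Single-query estimate whose expectation equals grad f_delta(x)."""
    if rng is None:
        rng = np.random.default_rng()
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)            # u is uniform on the unit sphere S^d
    d = x.size
    return (d / delta) * f(x + delta * u) * u
```

Note that the queried point $x + \delta u$ must remain feasible; as in Flaxman et al. (2005), this is ensured by playing within a slightly shrunk set such as $(1 - \delta/r)K$, using the $r$ of Assumption 2.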
Distributed Block Online Conditional Gradient (D-BOCG)
In this section, we present our D-BOCG with the corresponding theoretical guarantees for convex losses and strongly convex losses.
The Algorithm
First, to reduce the communication complexity of D-OCG, we divide the total $T$ rounds into $B$ blocks of size $K$, where $K$ is a parameter and we assume that $B = T/K$ is an integer without loss of generality. In this way, each block $m\in[B]$ consists of a set of rounds $T_m = \{(m-1)K+1, \ldots, mK\}$.
For each local learner $i\in V$, its decision in each block $m$ stays the same and is denoted by $x_i(m)$. The local gradient of local learner $i$ in each round $t\in T_m$ is denoted by $g_i(t) = \nabla f_{t,i}(x_i(m))$, and the cumulative gradient of local learner $i$ in each block $m$ is denoted by
$$g_i(m) = \sum_{t\in T_m} g_i(t).$$
Then, we describe how to compute $x_i(m)$ for each local learner in each block $m$. Initially, we set $x_i(1) = x_{in}$ for each local learner $i$, where $x_{in}$ is arbitrarily chosen from $K$. For any $m > 1$, following Zhang et al. (2017), we can update the decision $x_i(m)$ by approximately minimizing a surrogate loss function $F_{m-1,i}(x)$. One may adopt a surrogate loss function similar to (4) used by D-OCG. However, it was only designed for convex losses, and cannot utilize the strong convexity. To address this limitation, we define a more general surrogate loss function for $\alpha$-strongly convex losses, which can utilize the strong convexity for $\alpha > 0$ and also covers that used in D-OCG for $\alpha = 0$. To aid understanding, we start by introducing our surrogate loss function for the simple case with $n = 1$, and then extend it to the general case for any $n \ge 1$.
The simple case In the simple case with n = 1, the distributed OCO reduces to the standard OCO, and we only need to define a surrogate loss function F m,1 (x) for each block m ∈ [B]. Note that when we assume that all local losses are α-strongly convex, the cumulative loss function t∈Tm f t,1 (x) in each block m is αK-strongly convex, because of |T m | = K. Therefore, to utilize the strong convexity, inspired by (3), we can define F m,1 (x) as
$$F_{m,1}(x) = \sum_{k=1}^{m-1}\Big[\tilde{g}_1(k)^\top x + \frac{\alpha K}{2}\|x - x_1(k)\|_2^2\Big] + h\|x - x_{in}\|_2^2$$
where $h$ is a parameter that allows us to recover the surrogate loss function (2) for convex losses when $\alpha = 0$, though it does not exist in (3). Since $\|x_1(k)\|_2^2$ does not affect the minimizer of the function $F_{m,1}(x)$, we further simplify $F_{m,1}(x)$ to
$$F_{m,1}(x) = \sum_{k=1}^{m-1}(\tilde{g}_1(k) - \alpha Kx_1(k))^\top x + \frac{(m-1)\alpha K}{2}\|x\|_2^2 + h\|x - x_{in}\|_2^2.$$
By initializing $z_1(1) = 0$ and computing $z_1(m+1)$ as
$$z_1(m+1) = z_1(m) + \tilde{g}_1(m) - \alpha Kx_1(m) \tag{8}$$
we can rewrite the above $F_{m,1}(x)$ as
$$F_{m,1}(x) = z_1(m)^\top x + \frac{(m-1)\alpha K}{2}\|x\|_2^2 + h\|x - x_{in}\|_2^2.$$
The general case Note that $F_{m,1}(x)$ and $z_1(m)$ in the simple case only contain the information of local learner 1, which cannot be used to achieve a regret bound measured by the global loss in the general case. Therefore, inspired by (5) used in D-OCG, in the general case we update $z_i(m+1)$ as
$$z_i(m+1) = \sum_{j\in N_i} P_{ij}z_j(m) + \tilde{g}_i(m) - \alpha Kx_i(m) \tag{9}$$
for each local learner $i$, where $P$ is a non-negative weight matrix satisfying Assumption 3. Different from (8), this update further incorporates the information from the neighbors of local learner $i$. Moreover, in each block $m$, the surrogate loss function for each local learner $i$ is defined as
$$F_{m,i}(x) = z_i(m)^\top x + \frac{(m-1)\alpha K}{2}\|x\|_2^2 + h\|x - x_{in}\|_2^2. \tag{10}$$
Obviously, for convex losses with $\alpha = 0$, this $F_{m,i}(x)$ is equivalent to (4) in D-OCG by setting $K = 1$ and $h = 1/\eta$. Finally, we need to specify how to compute $x_i(m+1)$ by approximately minimizing $F_{m,i}(x)$ defined in (10). Similar to the update rule (6) of D-OCG, one may simply perform 1 linear optimization step on the above $F_{m,i}(x)$ to compute $x_i(m+1)$ for any block $m$ and local learner $i$. However, this naive update would make the regret worse than the $O(T^{3/4})$ bound established by D-OCG for convex losses, since the number of updates is decreased. To address this problem, we invoke CG for each update as
$$x_i(m+1) = \mathrm{CG}(\mathcal{K}, L, F_{m,i}(x), x_i(m)) \tag{11}$$
where L is an appropriate parameter. The detailed procedures of our algorithm are presented in Algorithm 2, and it is called distributed block online conditional gradient (D-BOCG).
Algorithm 2 D-BOCG
1: Input: feasible set $\mathcal{K}$, $x_{in} \in \mathcal{K}$, $\alpha$, $h$, $L$, and $K$
2: Initialization: choose $x_1(1) = \cdots = x_n(1) = x_{in}$ and set $z_1(1) = \cdots = z_n(1) = 0$
3: for $m = 1, \dots, T/K$ do
4: for each local learner $i \in V$ do
5: define $F_{m,i}(x) = z_i(m)^\top x + \frac{(m-1)\alpha K}{2}\|x\|_2^2 + h\|x - x_{in}\|_2^2$
6: $\tilde{g}_i(m) = 0$
7: for $t = (m-1)K+1, \dots, mK$ do
8: play $x_i(m)$ and observe $g_i(t) = \nabla f_{t,i}(x_i(m))$
9: $\tilde{g}_i(m) = \tilde{g}_i(m) + g_i(t)$
10: end for
11: $x_i(m+1) = \mathrm{CG}(\mathcal{K}, L, F_{m,i}(x), x_i(m))$ //This step can be executed in parallel to the above for loop.
12: $z_i(m+1) = \sum_{j\in N_i} P_{ij}z_j(m) + \tilde{g}_i(m) - \alpha Kx_i(m)$
13: end for
14: end for

Remark 1 We first note that Algorithm 2 requires $(T/K)L$ linear optimization steps in total. We can limit the total number of linear optimization steps to $T$ by simply setting $L = K$, which is the same as that required by D-OCG (Zhang et al., 2017). Moreover, it is also important to note that at step 11 in Algorithm 2, the computation of $x_i(m+1)$ only depends on $x_i(m)$ and $z_i(m)$, which are available at the beginning of block $m$. Therefore, step 11 in Algorithm 2 can be executed in parallel to the for loop from steps 7 to 10, which implies that the $L$ linear optimization steps utilized to compute $x_i(m+1)$ can be uniformly allocated over all $K$ rounds in block $m$, instead of only the last round of the block. Specifically, Algorithm 2 with $L = K$ only needs to perform 1 linear optimization step per round, which is computationally as efficient as D-OCG. This parallel strategy is significant, because without it, Algorithm 2 would need to stop at the end of each block $m$ and wait until $L$ linear optimization steps are completed. It has also been utilized by Garber and Kretzu (2020, 2021) when developing improved projection-free algorithms for bandit convex optimization.
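To complement the pseudocode, the following Python sketch (a minimal single-machine simulation of ours, not the authors' implementation; lmo, grads, P, and neighbors are assumed inputs) makes the interplay between the block structure, the CG subroutine, and the gossip update (9) explicit:

import numpy as np

def cg(lmo, grad_F, x, L):
    # Conditional gradient: L linear optimization steps on the surrogate F.
    for s in range(L):
        v = lmo(grad_F(x))                 # v = argmin over K of <grad F(x), v>
        x = x + (2.0 / (s + 2)) * (v - x)  # standard step size 2/(s+2)
    return x

def d_bocg(lmo, grads, P, neighbors, x_in, T, K, L, alpha, h):
    # grads[t][i](x): local gradient of f_{t,i} at x; neighbors[i] includes i itself.
    n, d = P.shape[0], x_in.shape[0]
    x = [x_in.copy() for _ in range(n)]
    z = [np.zeros(d) for _ in range(n)]
    for m in range(1, T // K + 1):
        g_tilde = [np.zeros(d) for _ in range(n)]
        for t in range((m - 1) * K, m * K):
            for i in range(n):
                g_tilde[i] += grads[t][i](x[i])   # play x_i(m), accumulate K gradients
        x_new, z_new = [], []
        for i in range(n):
            # gradient of F_{m,i}(x) = z_i(m)^T x + ((m-1)*alpha*K/2)||x||^2 + h||x - x_in||^2
            grad_F = lambda y, zi=z[i], mm=m: zi + (mm - 1) * alpha * K * y + 2 * h * (y - x_in)
            x_new.append(cg(lmo, grad_F, x[i], L))
            z_new.append(sum(P[i, j] * z[j] for j in neighbors[i]) + g_tilde[i] - alpha * K * x[i])
        x, z = x_new, z_new
    return x

In an actual distributed deployment, the inner loop over $i$ runs on separate nodes, and the $L$ calls to lmo inside cg can be spread over the $K$ rounds of the block, exactly as described in Remark 1.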
Theoretical Guarantees
In the following, we present theoretical guarantees of our D-BOCG. To illustrate the effect of the CG method, we start with the following regret bound, whose first term explicitly depends on the approximation error of the CG method.
Theorem 1 Under Assumptions 1, 2, 3, and 4, for any i ∈ V , Algorithm 2 ensures
$$R_{T,i} \le 3nGK\sum_{m=2}^{B}\sqrt{\frac{2\epsilon_m}{(m-2)\alpha K + 2h}} + 3nGK\sum_{m=2}^{B}\frac{K(G+\alpha R)\sqrt{n}}{((m-2)\alpha K + 2h)(1-\sigma_2(P))} + n\sum_{m=1}^{B}\frac{4K^2(G+2\alpha R)^2}{m\alpha K + 2h} + 4nhR^2$$
where $\epsilon_m = \max_{i\in[n]}\big(F_{m-1,i}(x_i(m)) - \min_{x\in\mathcal{K}}F_{m-1,i}(x)\big)$.
Remark 2 Note that D-BOCG with $K = L = 1$, $h = 1/\eta$, and $\alpha = 0$ reduces to D-OCG (Zhang et al., 2017). In that case, according to their analysis (see the proofs of Lemmas 2 and 4 in Zhang et al. (2017) for further details), we can prove that $\epsilon_m/h = O(\sqrt{1/m})$ by setting $h = \Omega(GT^{3/4})$, where the constant $G$ in $h$ is introduced due to the upper bound of $F_{m,i}(x) - F_{m-1,i}(x)$. If we further consider $L = 1$, $\alpha = 0$, and $K = \sqrt{T}$, we can similarly prove that $\epsilon_m/h = O(\sqrt{1/m})$ by setting $h = \Omega(GKB^{3/4})$, since now the upper bound of $F_{m,i}(x) - F_{m-1,i}(x)$ is on the order of $O(GK)$ and the maximum $m$ is changed from $T$ to $B$. However, this will make the first term in the above regret bound worse than $O(T^{3/4})$, as shown below:
$$3nGK\sum_{m=2}^{B}\sqrt{\frac{2\epsilon_m}{2h}} = O\Big(K\sum_{m=2}^{B}\frac{1}{m^{1/4}}\Big) = O(KB^{3/4}) = O(T^{7/8}).$$
Therefore, to keep the $O(T^{3/4})$ regret for convex losses with $K = \sqrt{T}$, we need to use more linear optimization steps. According to Lemma 1, if $L$ is large enough, the approximation error $\epsilon_m$ can be very small. More specifically, by combining Theorem 1 with Lemma 1, we have the following theorem.
Theorem 2 Under Assumptions 1, 2, 3, and 4, for any i ∈ V , Algorithm 2 ensures
$$R_{T,i} \le \frac{12nGRT}{\sqrt{L+2}} + \sum_{m=2}^{B}\frac{3nGK^2(G+\alpha R)\sqrt{n}}{((m-2)\alpha K + 2h)(1-\sigma_2(P))} + \sum_{m=1}^{B}\frac{4nK^2(G+2\alpha R)^2}{m\alpha K + 2h} + 4nhR^2.$$
Remark 3 Note that Theorem 2 with $K = L = 1$, $h = 1/\eta$, and $\alpha = 0$ cannot recover the $O(T^{3/4})$ regret bound of D-OCG, because $\frac{12nGRT}{\sqrt{L+2}}$ would be on the order of $O(T)$. The main reason is that for the CG method, the approximation error bound in Lemma 1 is too loose when $L = 1$. According to Zhang et al. (2017), instead of using Lemma 1, a more complicated analysis is required to bound the approximation error when only 1 linear optimization step is utilized. To recover the $O(T^{3/4})$ regret bound with $K = L = 1$, $h = 1/\eta$, and $\alpha = 0$, one potential way is to extend the analysis of Zhang et al. (2017) from the case with $L = 1$ to the case with any $L$. However, we find that this extension is highly non-trivial, and note that Theorem 2 is sufficient to achieve our desired regret bounds and communication complexity.
For convex losses, we can simplify Theorem 2 to the following corollary.
Corollary 1 Under Assumptions 1, 2, 3, and 4 with $\alpha = 0$, for any $i \in V$, Algorithm 2 with $\alpha = 0$, $K = L = \sqrt{T}$, and $h = \frac{n^{1/4}T^{3/4}G}{\sqrt{1-\sigma_2(P)}R}$ has
$$R_{T,i} \le \Big(12n + 2\sqrt{1-\sigma_2(P)}\,n^{3/4} + \frac{11}{2}n^{5/4}(1-\sigma_2(P))^{-1/2}\Big)GRT^{3/4}.$$
Remark 4 The above corollary shows that our D-BOCG can enjoy the $O(T^{3/4})$ regret bound with only $O(\sqrt{T})$ communication rounds for convex losses. By contrast, D-OCG (Zhang et al., 2017) obtains the $O(T^{3/4})$ regret bound with a larger number of $T$ communication rounds for convex losses.
Moreover, for strongly convex losses, we can simplify Theorem 2 to the following corollary.
Corollary 2 Under Assumptions 1, 2, 3, and 4 with $\alpha > 0$, for any $i \in V$, Algorithm 2 with $\alpha > 0$, $K = L = T^{2/3}(\ln T)^{-2/3}$, and $h = \alpha K$ ensures
$$R_{T,i} \le \Big(\frac{3n^{3/2}G(G+\alpha R)}{\alpha(1-\sigma_2(P))} + \frac{4n(G+2\alpha R)^2}{\alpha}\Big)T^{2/3}\big((\ln T)^{-2/3} + (\ln T)^{1/3}\big) + 12nGRT^{2/3}(\ln T)^{1/3} + 4nR^2\alpha T^{2/3}(\ln T)^{-2/3}.$$
Remark 5 The above corollary shows that our D-BOCG can enjoy a regret bound of $O(T^{2/3}(\log T)^{1/3})$ with $O(T^{1/3}(\log T)^{2/3})$ communication rounds for strongly convex losses.
Compared with Corollary 1, both the regret and communication complexity of our D-BOCG are improved by utilizing the strong convexity.
Remark 6 Besides the dependence on T , Corollaries 1 and 2 also explicitly show how the regret of our D-BOCG depends on the network size n and the spectral gap 1 − σ 2 (P ). First, the dependence on n shows that the regret of our D-BOCG will be larger on larger networks for convex losses and strongly convex losses. Second, the spectral gap actually reflects the connectivity of the network: a larger spectral gap value implies better connectivity (Duchi et al., 2011;Zhang et al., 2017). Therefore, the dependence on the spectral gap implies that the regret of our D-BOCG will be smaller on "well connected" networks than on "poorly connected" networks for convex losses and strongly convex losses. More specifically, with a particular choice of the matrix P , Duchi et al. (2011) have bounded the spectral gap for several classes of interesting networks, such as 1 − σ 2 (P ) = Ω(1) for expanders and the complete graph, and 1 − σ 2 (P ) = Ω(1/n 2 ) for the cycle graph (see Section 3.2 in Duchi et al. (2011) for details). By replacing the dependence on 1 − σ 2 (P ) with that on n, our Corollaries 1 and 2 further imply that
• in the case with convex losses, the regret of D-BOCG can be bounded by O(n 5/4 T 3/4 ) for "well connected" networks and O(n 9/4 T 3/4 ) for "poorly connected" networks;
• in the case with strongly convex losses, the regret of D-BOCG can be bounded by O(n 3/2 T 2/3 (log T ) 1/3 ) for "well connected" networks and O(n 7/2 T 2/3 (log T ) 1/3 ) for "poorly connected" networks.
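To give a feeling for the role of the spectral gap in Remark 6, the following sketch (ours; the weight rule matches the one used later in the experiments, and the helper names are our own) computes $1-\sigma_2(P)$ for a complete graph and a cycle graph of the same size:

import numpy as np

def metropolis_weights(adj):
    # P_ij = 1/max(|N_i|, |N_j|) for j in N_i (neighborhoods counting the node itself),
    # P_ii = 1 - sum of the off-diagonal entries; P is symmetric and doubly stochastic.
    n = adj.shape[0]
    size = adj.sum(axis=1) + 1                 # |N_i|, including node i
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adj[i, j]:
                P[i, j] = 1.0 / max(size[i], size[j])
        P[i, i] = 1.0 - P[i].sum()
    return P

def spectral_gap(P):
    return 1.0 - np.linalg.svd(P, compute_uv=False)[1]   # 1 - sigma_2(P)

n = 20
complete = np.ones((n, n)) - np.eye(n)
cycle = np.zeros((n, n))
for i in range(n):
    cycle[i, (i + 1) % n] = cycle[i, (i - 1) % n] = 1
print(spectral_gap(metropolis_weights(complete)))  # Omega(1): "well connected"
print(spectral_gap(metropolis_weights(cycle)))     # Omega(1/n^2): "poorly connected"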
Analysis
In the following, we only provide the proofs of Theorems 1 and 2. The proofs of Corollary 1 and Corollary 2 can be found in the Appendix.
Proof of Theorem 1
We start this proof by defining several auxiliary variables. Let $\bar{z}(m) = \frac{1}{n}\sum_{i=1}^{n}z_i(m)$ for $m \in [B+1]$, and let $d_i(m) = \tilde{g}_i(m) - \alpha Kx_i(m)$ and $\bar{d}(m) = \frac{1}{n}\sum_{i=1}^{n}d_i(m)$ for $m \in [B]$. According to Assumption 3, we have
$$\bar{z}(m+1) = \frac{1}{n}\sum_{i=1}^{n}z_i(m+1) = \frac{1}{n}\sum_{i=1}^{n}\Big(\sum_{j\in N_i}P_{ij}z_j(m) + \tilde{g}_i(m) - \alpha Kx_i(m)\Big) = \frac{1}{n}\sum_{j=1}^{n}\sum_{i=1}^{n}P_{ij}z_j(m) + \bar{d}(m) = \bar{z}(m) + \bar{d}(m). \tag{12}$$
Then, we define $\bar{x}(1) = x_{in}$ and $\bar{x}(m+1) = \mathrm{argmin}_{x\in\mathcal{K}}\bar{F}_m(x)$ for any $m \in [B+1]$, where
$$\bar{F}_m(x) = \bar{z}(m)^\top x + \frac{(m-1)\alpha K}{2}\|x\|_2^2 + h\|x - x_{in}\|_2^2.$$
Similarly, we define $\tilde{x}_i(m+1) = \mathrm{argmin}_{x\in\mathcal{K}}F_{m,i}(x)$ for any $m \in [B+1]$, where
$$F_{m,i}(x) = z_i(m)^\top x + \frac{(m-1)\alpha K}{2}\|x\|_2^2 + h\|x - x_{in}\|_2^2$$
is defined in Algorithm 2. Then, we derive an upper bound of $\|x_i(m) - \bar{x}(m)\|_2$ for any $m \in [B]$. If $m = 1$, according to the definition and Algorithm 2, it is easy to verify that
$$\|x_i(m) - \bar{x}(m)\|_2 = 0. \tag{13}$$
For any $B \ge m \ge 2$, due to $\epsilon_m = \max_{i\in[n]}\big(F_{m-1,i}(x_i(m)) - \min_{x\in\mathcal{K}}F_{m-1,i}(x)\big)$, we have
$$\|x_i(m) - \bar{x}(m)\|_2 \le \|x_i(m) - \tilde{x}_i(m)\|_2 + \|\tilde{x}_i(m) - \bar{x}(m)\|_2 \le \sqrt{\frac{2F_{m-1,i}(x_i(m)) - 2F_{m-1,i}(\tilde{x}_i(m))}{(m-2)\alpha K + 2h}} + \|\tilde{x}_i(m) - \bar{x}(m)\|_2 \le \sqrt{\frac{2\epsilon_m}{(m-2)\alpha K + 2h}} + \|\tilde{x}_i(m) - \bar{x}(m)\|_2 \tag{14}$$
where the second inequality is due to the fact that $F_{m-1,i}(x)$ is $((m-2)\alpha K + 2h)$-strongly convex and (7). To further bound $\|\tilde{x}_i(m) - \bar{x}(m)\|_2$ in (14), we introduce the following two lemmas.

Lemma 3 (Derived from the proof of Lemma 6 in Zhang et al. (2017)) For any $i \in [n]$, let $d_i(1), \dots, d_i(m) \in \mathbb{R}^d$ be a sequence of vectors. Let $z_i(1) = 0$, $z_i(m+1) = \sum_{j\in N_i}P_{ij}z_j(m) + d_i(m)$, and $\bar{z}(m) = \frac{1}{n}\sum_{i=1}^{n}z_i(m)$ for $m \in [B]$, where $P$ satisfies Assumption 3. For any $i \in [n]$ and $m \in [B]$, assuming $\|d_i(m)\|_2 \le \widetilde{G}$ where $\widetilde{G} > 0$ is a constant, we have
$$\|z_i(m) - \bar{z}(m)\|_2 \le \frac{\widetilde{G}\sqrt{n}}{1-\sigma_2(P)}.$$

Lemma 4 (Lemma 5 in Duchi et al. (2011)) Let $\Pi_{\mathcal{K}}(u, \eta) = \mathrm{argmin}_{x\in\mathcal{K}}\,\eta u^\top x + \|x\|_2^2$. We have
$$\|\Pi_{\mathcal{K}}(u, \eta) - \Pi_{\mathcal{K}}(v, \eta)\|_2 \le \frac{\eta}{2}\|u - v\|_2.$$
According to Assumptions 1 and 2, for any $m \in [B]$, we have
$$\|d_i(m)\|_2 = \Big\|\sum_{t\in\mathcal{T}_m}g_i(t) - \alpha Kx_i(m)\Big\|_2 \le \sum_{t\in\mathcal{T}_m}\|g_i(t)\|_2 + \alpha K\|x_i(m)\|_2 \le K(G+\alpha R) \tag{15}$$
where $\mathcal{T}_m = \{(m-1)K+1, \dots, mK\}$. By applying Lemma 3 with $\|d_i(m)\|_2 \le K(G+\alpha R)$, for any $m \in [B]$, we have
$$\|z_i(m) - \bar{z}(m)\|_2 \le \frac{K(G+\alpha R)\sqrt{n}}{1-\sigma_2(P)}. \tag{16}$$
Moreover, for any $B \ge m \ge 2$, we notice that
$$\tilde{x}_i(m) = \mathrm{argmin}_{x\in\mathcal{K}}\,z_i(m-1)^\top x + \frac{(m-2)\alpha K}{2}\|x\|_2^2 + h\|x-x_{in}\|_2^2 = \mathrm{argmin}_{x\in\mathcal{K}}\,\frac{2}{(m-2)\alpha K + 2h}(z_i(m-1) - 2hx_{in})^\top x + \|x\|_2^2. \tag{17}$$
Similarly, for any $B \ge m \ge 2$, we have
$$\bar{x}(m) = \mathrm{argmin}_{x\in\mathcal{K}}\,\bar{z}(m-1)^\top x + \frac{(m-2)\alpha K}{2}\|x\|_2^2 + h\|x-x_{in}\|_2^2 = \mathrm{argmin}_{x\in\mathcal{K}}\,\frac{2}{(m-2)\alpha K + 2h}(\bar{z}(m-1) - 2hx_{in})^\top x + \|x\|_2^2. \tag{18}$$
Therefore, by combining Lemma 4 with (16), for any $B \ge m \ge 2$, we have
$$\|\tilde{x}_i(m) - \bar{x}(m)\|_2 \le \frac{\|z_i(m-1) - 2hx_{in} - \bar{z}(m-1) + 2hx_{in}\|_2}{(m-2)\alpha K + 2h} = \frac{\|z_i(m-1) - \bar{z}(m-1)\|_2}{(m-2)\alpha K + 2h} \le \frac{K(G+\alpha R)\sqrt{n}}{((m-2)\alpha K + 2h)(1-\sigma_2(P))}.$$
By substituting the above inequality into (14), for any $B \ge m \ge 2$, we have
$$\|x_i(m) - \bar{x}(m)\|_2 \le \sqrt{\frac{2\epsilon_m}{(m-2)\alpha K + 2h}} + \frac{K(G+\alpha R)\sqrt{n}}{((m-2)\alpha K + 2h)(1-\sigma_2(P))}. \tag{19}$$
Then, let $u_1 = 0$ and $u_m = \sqrt{\frac{2\epsilon_m}{(m-2)\alpha K + 2h}} + \frac{K(G+\alpha R)\sqrt{n}}{((m-2)\alpha K + 2h)(1-\sigma_2(P))}$ for any $B \ge m \ge 2$. From (13) and (19), for any $m \in [B]$, it holds that $\|x_i(m) - \bar{x}(m)\|_2 \le u_m$. Next, let $x^* \in \mathrm{argmin}_{x\in\mathcal{K}}\sum_{t=1}^{T}f_t(x)$. For any $i, j \in V$, $m \in [B]$, and $t \in \mathcal{T}_m$, according to Assumptions 1 and 4, we have
$$\begin{aligned}f_{t,j}(x_i(m)) - f_{t,j}(x^*) &\le f_{t,j}(\bar{x}(m)) + G\|\bar{x}(m) - x_i(m)\|_2 - f_{t,j}(x^*)\\ &\le f_{t,j}(x_j(m)) + G\|\bar{x}(m) - x_j(m)\|_2 - f_{t,j}(x^*) + Gu_m\\ &\le \nabla f_{t,j}(x_j(m))^\top(x_j(m) - x^*) - \frac{\alpha}{2}\|x_j(m) - x^*\|_2^2 + 2Gu_m\\ &= \nabla f_{t,j}(x_j(m))^\top(x_j(m) - \bar{x}(m)) + \nabla f_{t,j}(x_j(m))^\top(\bar{x}(m) - x^*) - \frac{\alpha}{2}\|x_j(m) - x^*\|_2^2 + 2Gu_m\\ &\le \nabla f_{t,j}(x_j(m))^\top(\bar{x}(m) - x^*) - \frac{\alpha}{2}\|x_j(m) - x^*\|_2^2 + 3Gu_m\end{aligned}$$
where the third inequality is due to the strong convexity of $f_{t,j}(x)$ and the last inequality is due to
$$\nabla f_{t,j}(x_j(m))^\top(x_j(m) - \bar{x}(m)) \le \|\nabla f_{t,j}(x_j(m))\|_2\|x_j(m) - \bar{x}(m)\|_2 \le Gu_m.$$
Moreover, we note that
$$\|x_j(m) - x^*\|_2^2 = \|x_j(m) - \bar{x}(m)\|_2^2 + 2x_j(m)^\top(\bar{x}(m) - x^*) + \|x^*\|_2^2 - \|\bar{x}(m)\|_2^2 \ge 2x_j(m)^\top(\bar{x}(m) - x^*) + \|x^*\|_2^2 - \|\bar{x}(m)\|_2^2.$$
Therefore, for any $i, j \in V$, $m \in [B]$, and $t \in \mathcal{T}_m$, we have
$$f_{t,j}(x_i(m)) - f_{t,j}(x^*) - 3Gu_m \le (\nabla f_{t,j}(x_j(m)) - \alpha x_j(m))^\top(\bar{x}(m) - x^*) - \frac{\alpha}{2}(\|x^*\|_2^2 - \|\bar{x}(m)\|_2^2).$$
By summing over $t \in \mathcal{T}_m$ and $m \in [B]$, for any $i, j \in V$, we have
$$\sum_{m=1}^{B}\sum_{t\in\mathcal{T}_m}(f_{t,j}(x_i(m)) - f_{t,j}(x^*)) - 3G\sum_{m=1}^{B}\sum_{t\in\mathcal{T}_m}u_m \le \sum_{m=1}^{B}(\tilde{g}_j(m) - \alpha Kx_j(m))^\top(\bar{x}(m) - x^*) - \sum_{m=1}^{B}\frac{\alpha K}{2}(\|x^*\|_2^2 - \|\bar{x}(m)\|_2^2).$$
Furthermore, by summing over $j = 1, \dots, n$, for any $i \in V$, we have
$$\sum_{m=1}^{B}\sum_{t\in\mathcal{T}_m}\sum_{j=1}^{n}(f_{t,j}(x_i(m)) - f_{t,j}(x^*)) - 3G\sum_{m=1}^{B}\sum_{t\in\mathcal{T}_m}\sum_{j=1}^{n}u_m \le n\sum_{m=1}^{B}\Big(\bar{d}(m)^\top(\bar{x}(m) - x^*) - \frac{\alpha K}{2}(\|x^*\|_2^2 - \|\bar{x}(m)\|_2^2)\Big).$$
Then, it is easy to verify that
$$R_{T,i} = \sum_{m=1}^{B}\sum_{t\in\mathcal{T}_m}\sum_{j=1}^{n}(f_{t,j}(x_i(m)) - f_{t,j}(x^*)) \le n\sum_{m=1}^{B}\Big(\bar{d}(m)^\top(\bar{x}(m) - x^*) - \frac{\alpha K}{2}(\|x^*\|_2^2 - \|\bar{x}(m)\|_2^2)\Big) + 3nGK\sum_{m=1}^{B}u_m. \tag{20}$$
To bound $\sum_{m=1}^{B}\big(\bar{d}(m)^\top(\bar{x}(m) - x^*) - \frac{\alpha K}{2}(\|x^*\|_2^2 - \|\bar{x}(m)\|_2^2)\big)$, we introduce the following lemma.

Lemma 5 (Lemma 2.3 in Shalev-Shwartz (2011)) Let $\hat{x}_t^* = \mathrm{argmin}_{x\in\mathcal{K}}\sum_{i=1}^{t-1}f_i(x) + \mathcal{R}(x)$ for all $t \in [T]$, where $\mathcal{R}(x)$ is a strongly convex function. Then, for all $x \in \mathcal{K}$, it holds that
$$\sum_{t=1}^{T}(f_t(\hat{x}_t^*) - f_t(x)) \le \mathcal{R}(x) - \mathcal{R}(\hat{x}_1^*) + \sum_{t=1}^{T}\big(f_t(\hat{x}_t^*) - f_t(\hat{x}_{t+1}^*)\big).$$
Before applying Lemma 5, we define $\tilde{f}_m(x) = \bar{d}(m)^\top x + \frac{\alpha K}{2}\|x\|_2^2$. For any $x \in \mathcal{K}$, it is easy to verify that
$$\|\nabla\tilde{f}_m(x)\|_2 = \|\bar{d}(m) + \alpha Kx\|_2 \le \frac{1}{n}\sum_{j=1}^{n}\|d_j(m)\|_2 + \alpha K\|x\|_2 \le K(G+2\alpha R) \tag{21}$$
where the last inequality is due to Assumption 2 and (15). Moreover, according to the definition and (12), for any $m \in [B]$, we have
$$\bar{x}(m+1) = \mathrm{argmin}_{x\in\mathcal{K}}\,\bar{z}(m)^\top x + \frac{(m-1)\alpha K}{2}\|x\|_2^2 + h\|x-x_{in}\|_2^2 = \mathrm{argmin}_{x\in\mathcal{K}}\sum_{\tau=1}^{m-1}\tilde{f}_\tau(x) + h\|x-x_{in}\|_2^2.$$
By applying Lemma 5 with the loss functions $\{\tilde{f}_m(x)\}_{m=1}^{B}$, the decision set $\mathcal{K}$, and the regularizer $\mathcal{R}(x) = h\|x-x_{in}\|_2^2$, we have
$$\sum_{m=1}^{B}\big(\tilde{f}_m(\bar{x}(m+1)) - \tilde{f}_m(x^*)\big) \le h\|x^* - x_{in}\|_2^2 - h\|\bar{x}(2) - x_{in}\|_2^2 + \sum_{m=1}^{B}\big(\tilde{f}_m(\bar{x}(m+1)) - \tilde{f}_m(\bar{x}(m+2))\big) \le 4hR^2 + \sum_{m=1}^{B}\nabla\tilde{f}_m(\bar{x}(m+1))^\top(\bar{x}(m+1) - \bar{x}(m+2)) \le 4hR^2 + \sum_{m=1}^{B}K(G+2\alpha R)\|\bar{x}(m+1) - \bar{x}(m+2)\|_2 \tag{22}$$
where the last inequality is due to the Cauchy-Schwarz inequality and (21).
Note that $\bar{F}_{m+1}(x)$ is $(m\alpha K + 2h)$-strongly convex and $\bar{x}(m+2) = \mathrm{argmin}_{x\in\mathcal{K}}\bar{F}_{m+1}(x)$. For any $m \in [B]$, we have
$$\frac{m\alpha K + 2h}{2}\|\bar{x}(m+1) - \bar{x}(m+2)\|_2^2 \le \bar{F}_{m+1}(\bar{x}(m+1)) - \bar{F}_{m+1}(\bar{x}(m+2)) = \bar{F}_m(\bar{x}(m+1)) + \tilde{f}_m(\bar{x}(m+1)) - \bar{F}_m(\bar{x}(m+2)) - \tilde{f}_m(\bar{x}(m+2)) \le \tilde{f}_m(\bar{x}(m+1)) - \tilde{f}_m(\bar{x}(m+2)) \le \nabla\tilde{f}_m(\bar{x}(m+1))^\top(\bar{x}(m+1) - \bar{x}(m+2)) \le K(G+2\alpha R)\|\bar{x}(m+1) - \bar{x}(m+2)\|_2$$
where the first inequality is due to (7), the second inequality is due to $\bar{x}(m+1) = \mathrm{argmin}_{x\in\mathcal{K}}\bar{F}_m(x)$, and the last inequality is due to the Cauchy-Schwarz inequality and (21). For any $m \in [B]$, the above inequality can be simplified as
$$\|\bar{x}(m+1) - \bar{x}(m+2)\|_2 \le \frac{2K(G+2\alpha R)}{m\alpha K + 2h}.$$
Then, by combining the above inequality with (22), we have
$$\begin{aligned}\sum_{m=1}^{B}\Big(\bar{d}(m)^\top(\bar{x}(m) - x^*) - \frac{\alpha K}{2}(\|x^*\|_2^2 - \|\bar{x}(m)\|_2^2)\Big) &= \sum_{m=1}^{B}\big(\tilde{f}_m(\bar{x}(m)) - \tilde{f}_m(x^*)\big)\\ &= \sum_{m=1}^{B}\big(\tilde{f}_m(\bar{x}(m)) - \tilde{f}_m(\bar{x}(m+1))\big) + \sum_{m=1}^{B}\big(\tilde{f}_m(\bar{x}(m+1)) - \tilde{f}_m(x^*)\big)\\ &\le K(G+2\alpha R)\sum_{m=1}^{B}\big(\|\bar{x}(m) - \bar{x}(m+1)\|_2 + \|\bar{x}(m+1) - \bar{x}(m+2)\|_2\big) + 4hR^2\\ &\le 2K(G+2\alpha R)\sum_{m=1}^{B}\|\bar{x}(m+1) - \bar{x}(m+2)\|_2 + 4hR^2 \le \sum_{m=1}^{B}\frac{4K^2(G+2\alpha R)^2}{m\alpha K + 2h} + 4hR^2\end{aligned} \tag{23}$$
where the second inequality is due to $\bar{x}(2) = \mathrm{argmin}_{x\in\mathcal{K}}\bar{F}_1(x) = x_{in} = \bar{x}(1)$ and $\|\bar{x}(1) - \bar{x}(2)\|_2 = 0 \le \|\bar{x}(B+1) - \bar{x}(B+2)\|_2$.
Finally, we complete the proof by substituting the definition of u m and (23) into (20).
Proof of Theorem 2
We note that $F_{m-1,i}(x)$ is $((m-2)\alpha K + 2h)$-smooth, and according to our Algorithm 2, we have $x_i(m) = \mathrm{CG}(\mathcal{K}, L, F_{m-1,i}(x), x_i(m-1))$. Therefore, for any $B \ge m \ge 2$, by applying Lemma 1, it is easy to verify that
$$\epsilon_m = \max_{i\in[n]}\Big(F_{m-1,i}(x_i(m)) - \min_{x\in\mathcal{K}}F_{m-1,i}(x)\Big) \le \frac{8((m-2)\alpha K + 2h)R^2}{L+2}.$$
By substituting the above inequality and $K(B-1) \le KB = T$ into Theorem 1, we have
$$R_{T,i} \le \frac{12nGRT}{\sqrt{L+2}} + \sum_{m=2}^{B}\frac{3nGK^2(G+\alpha R)\sqrt{n}}{((m-2)\alpha K + 2h)(1-\sigma_2(P))} + \sum_{m=1}^{B}\frac{4nK^2(G+2\alpha R)^2}{m\alpha K + 2h} + 4nhR^2.$$
Lower Bounds
In this section, we present lower bounds regarding the communication complexity for convex losses and strongly convex losses, respectively.
Convex Losses
Following previous studies (Hosseini et al., 2013; Zhang et al., 2017), when developing distributed online algorithms, we need to upper bound the regret of all local learners simultaneously. Correspondingly, to establish a lower regret bound for these distributed online algorithms, we only need to prove that the regret of one local learner has a lower bound. For simplicity, in the following, we derive a lower regret bound for local learner 1.
For convex losses, we present a lower bound in the following theorem.
Theorem 3 Suppose $\mathcal{K} = \big[-R/\sqrt{d}, R/\sqrt{d}\big]^d$, which satisfies Assumption 2 with $R = R$ and $r = R/\sqrt{d}$. For distributed OCO with $n > 1$ local learners over $\mathcal{K}$ and any distributed online algorithm communicating $C$ rounds before round $T$, there exists a sequence of local loss functions satisfying Assumption 1 such that
$$R_{T,1} \ge \frac{nRGT}{2\sqrt{2(C+1)}}.$$
Proof In each round $t$, we simply set $f_{t,1}(x) = 0$ for local learner 1, and select $f_{t,2}(x), \dots, f_{t,n}(x)$ for the other local learners with a more careful strategy. In this way, the global loss function is
$$f_t(x) = f_{t,1}(x) + \sum_{i=2}^{n}f_{t,i}(x) = \sum_{i=2}^{n}f_{t,i}(x).$$
Since the local loss function $f_{t,i}(x)$ is only revealed to local learner $i \in [n]$, local learner 1 cannot access the global loss unless it communicates with other local learners. In this way, we can maximize the impact of communication on the regret of local learner 1. Without loss of generality, we denote the set of communication rounds by $\mathcal{C} = \{c_1, \dots, c_C\}$, where $1 \le c_1 < \cdots < c_C < T$. Let $c_0 = 0$ and $c_{C+1} = T$. Then, we can divide the total $T$ rounds into the following $C+1$ intervals:
$$[c_0+1, c_1], [c_1+1, c_2], \dots, [c_C+1, c_{C+1}].$$
For any i ∈ {0, . . . , C} and t ∈ [c i + 1, c i+1 ], we will set f t,2 (x) = · · · = f t,n (x) = h i (x), which is revealed to the local learner 1 after the decision x 1 (c i+1 ) is made. In this way, the global loss can be written as f t (x) = (n − 1)h i (x) for any i ∈ {0, . . . , C} and t ∈ [c i + 1, c i+1 ].
For any distributed online algorithm with communication rounds C = {c 1 , . . . , c C }, we denote the sequence of decisions made by the local learner 1 as x 1 (1), . . . , x 1 (T ). For any i ∈ {0, . . . , C}, we note that the decisions x 1 (c i + 1), . . . , x 1 (c i+1 ) are made before the loss function h i (x) is revealed.
Inspired by the lower bound for general OCO (Abernethy et al., 2008), we first utilize a randomized strategy to select $h_i(x)$ for any $i \in \{0, \dots, C\}$, and derive an expected lower bound for $R_{T,1}$. Specifically, we independently select $h_i(x) = w_i^\top x$ for any $i \in \{0, \dots, C\}$, where the coordinates of $w_i$ are $\pm G/\sqrt{d}$ with probability $1/2$, so that $h_i(x)$ satisfies Assumption 1. Then, it is not hard to verify that
$$\begin{aligned}\mathbb{E}_{w_0,\dots,w_C}[R_{T,1}] &= \mathbb{E}_{w_0,\dots,w_C}\Big[\sum_{t=1}^{T}f_t(x_1(t)) - \min_{x\in\mathcal{K}}\sum_{t=1}^{T}f_t(x)\Big]\\ &= \mathbb{E}_{w_0,\dots,w_C}\Big[\sum_{i=0}^{C}\sum_{t=c_i+1}^{c_{i+1}}(n-1)h_i(x_1(t)) - \min_{x\in\mathcal{K}}\sum_{i=0}^{C}\sum_{t=c_i+1}^{c_{i+1}}(n-1)h_i(x)\Big]\\ &= (n-1)\mathbb{E}_{w_0,\dots,w_C}\Big[\sum_{i=0}^{C}\sum_{t=c_i+1}^{c_{i+1}}w_i^\top x_1(t) - \min_{x\in\mathcal{K}}\sum_{i=0}^{C}(c_{i+1}-c_i)w_i^\top x\Big]\\ &= (n-1)\mathbb{E}_{w_0,\dots,w_C}\Big[-\min_{x\in\mathcal{K}}\sum_{i=0}^{C}(c_{i+1}-c_i)w_i^\top x\Big]\end{aligned}$$
where the last equality is due to $\mathbb{E}_{w_0,\dots,w_C}[w_i^\top x_1(t)] = 0$ for any $t \in [c_i+1, c_{i+1}]$.
Then, we have
$$\mathbb{E}_{w_0,\dots,w_C}[R_{T,1}] = -(n-1)\mathbb{E}_{w_0,\dots,w_C}\Big[\min_{x\in\mathcal{K}}x^\top\sum_{i=0}^{C}(c_{i+1}-c_i)w_i\Big] = -(n-1)\mathbb{E}_{w_0,\dots,w_C}\Big[\min_{x\in\{-R/\sqrt{d},R/\sqrt{d}\}^d}x^\top\sum_{i=0}^{C}(c_{i+1}-c_i)w_i\Big]$$
where the last equality is due to the fact that a linear function is minimized at the vertices of the cube. Let $\varepsilon_{01},\dots,\varepsilon_{0d},\dots,\varepsilon_{C1},\dots,\varepsilon_{Cd}$ be independent and identically distributed variables with $\Pr(\varepsilon_{ij} = \pm1) = 1/2$ for $i \in \{0,\dots,C\}$ and $j \in \{1,\dots,d\}$. Then, we have
$$\mathbb{E}_{w_0,\dots,w_C}[R_{T,1}] = -(n-1)\mathbb{E}_{\varepsilon_{01},\dots,\varepsilon_{Cd}}\Big[\sum_{j=1}^{d}-\frac{R}{\sqrt{d}}\Big|\sum_{i=0}^{C}(c_{i+1}-c_i)\varepsilon_{ij}\frac{G}{\sqrt{d}}\Big|\Big] = (n-1)RG\,\mathbb{E}_{\varepsilon_{01},\dots,\varepsilon_{C1}}\Big[\Big|\sum_{i=0}^{C}(c_{i+1}-c_i)\varepsilon_{i1}\Big|\Big] \ge \frac{(n-1)RG}{\sqrt{2}}\sqrt{\sum_{i=0}^{C}(c_{i+1}-c_i)^2} \ge \frac{(n-1)RG}{\sqrt{2}}\sqrt{\frac{(c_{C+1}-c_0)^2}{C+1}} = \frac{(n-1)RGT}{\sqrt{2(C+1)}} \tag{24}$$
where the first inequality is due to the Khintchine inequality and the second inequality is due to the Cauchy-Schwarz inequality. The expected lower bound in (24) implies that for any distributed online algorithm with communication rounds $\mathcal{C} = \{c_1, \dots, c_C\}$, there exists a particular choice of $w_0, \dots, w_C$ such that
$$R_{T,1} \ge \frac{(n-1)RGT}{\sqrt{2(C+1)}} \ge \frac{nRGT}{2\sqrt{2(C+1)}}$$
where the last inequality is due to $n-1 \ge n/2$ for any integer $n \ge 2$.
Remark 7 Theorem 3 essentially establishes an $\Omega(\sqrt{T})$ lower bound on the number of communication rounds required by any distributed online algorithm for which all local learners achieve the $O(T^{3/4})$ regret bound for convex losses, which matches (in terms of $T$) the $O(\sqrt{T})$ communication rounds required by our D-BOCG up to constant factors.
Strongly Convex Losses
For strongly convex losses, we provide a lower bound in the following theorem.
Theorem 4 Suppose $\mathcal{K} = \big[-R/\sqrt{d}, R/\sqrt{d}\big]^d$, which satisfies Assumption 2 with $R = R$ and $r = R/\sqrt{d}$. For distributed OCO with $n > 1$ local learners over $\mathcal{K}$ and any distributed online algorithm communicating at the end of $C$ rounds before round $T$, there exists a sequence of local loss functions satisfying Assumption 4 with $\alpha > 0$ and Assumption 1 with $G = 2\alpha R$, respectively, such that
$$R_{T,1} \ge \frac{\alpha nR^2T}{8(C+1)}.$$
Proof This proof is similar to that of Theorem 3. The main difference is to add a term $\frac{\alpha}{2}\|x\|_2^2$ to the previous local loss functions, which makes them $\alpha$-strongly convex. For any distributed online algorithm with $C$ communication rounds, we still denote the set of communication rounds by $\mathcal{C} = \{c_1, \dots, c_C\}$ where $1 \le c_1 < \cdots < c_C < T$, and the sequence of decisions made by local learner 1 by $x_1(1), \dots, x_1(T)$. Let $c_0 = 0$ and $c_{C+1} = T$. Then, we can divide the total $T$ rounds into $C+1$ intervals
$$[c_0+1, c_1], [c_1+1, c_2], \dots, [c_C+1, c_{C+1}].$$
In each round $t$, for local learner 1, we simply set $f_{t,1}(x) = \frac{\alpha}{2}\|x\|_2^2$, which satisfies Assumption 1 with $G = 2\alpha R$ and Assumption 4. Moreover, for any $i \in \{0, \dots, C\}$ and $t \in [c_i+1, c_{i+1}]$, we set $f_{t,2}(x) = \cdots = f_{t,n}(x) = h_i(x)$. Specifically, we independently select
$$h_i(x) = w_i^\top x + \frac{\alpha}{2}\|x\|_2^2$$
for any i ∈ {0, . . . , C}, where the coordinates of w i are ±αR/ √ d with probability 1/2. It is easy to verify that h i (x) satisfies Assumption 1 with G = 2αR and Assumption 4, respectively.
Note that local learner 1 does not communicate with other local learners between rounds $c_i+1$ and $c_{i+1}$. Therefore, the decisions $x_1(c_i+1), \dots, x_1(c_{i+1})$ are independent of $w_i$. Let
$$\bar{w} = \frac{1}{\alpha T}\sum_{i=0}^{C}(c_{i+1}-c_i)w_i.$$
In this way, the total loss for any $x \in \mathcal{K}$ is equal to
$$\sum_{t=1}^{T}f_t(x) = \sum_{t=1}^{T}\Big(\sum_{j=2}^{n}f_{t,j}(x) + \frac{\alpha}{2}\|x\|_2^2\Big) = \sum_{i=0}^{C}(c_{i+1}-c_i)\Big((n-1)w_i^\top x + \frac{\alpha n}{2}\|x\|_2^2\Big) = \alpha(n-1)T\bar{w}^\top x + \frac{\alpha nT}{2}\|x\|_2^2 = \frac{\alpha T}{2}\Big(\Big\|\sqrt{n}x + \frac{n-1}{\sqrt{n}}\bar{w}\Big\|_2^2 - \Big\|\frac{n-1}{\sqrt{n}}\bar{w}\Big\|_2^2\Big). \tag{25}$$
Since the absolute value of each element of $w_i$ is equal to $\alpha R/\sqrt{d}$, we note that the absolute value of each element of $-\frac{n-1}{n}\bar{w}$ is bounded by
$$\frac{n-1}{n\alpha T}\sum_{i=0}^{C}(c_{i+1}-c_i)\frac{\alpha R}{\sqrt{d}} = \frac{(n-1)R}{n\sqrt{d}} \le \frac{R}{\sqrt{d}}$$
which implies that $-\frac{n-1}{n}\bar{w}$ belongs to $\mathcal{K} = \big[-R/\sqrt{d}, R/\sqrt{d}\big]^d$. By combining with (25), we have
$$\mathrm{argmin}_{x\in\mathcal{K}}\sum_{t=1}^{T}f_t(x) = -\frac{n-1}{n}\bar{w} \quad\text{and}\quad \min_{x\in\mathcal{K}}\sum_{t=1}^{T}f_t(x) = -\frac{\alpha T}{2}\Big\|\frac{n-1}{\sqrt{n}}\bar{w}\Big\|_2^2.$$
Then, we have
$$\begin{aligned}\mathbb{E}_{w_0,\dots,w_C}\Big[\sum_{t=1}^{T}f_t(x_1(t)) - \min_{x\in\mathcal{K}}\sum_{t=1}^{T}f_t(x)\Big] &= \mathbb{E}_{w_0,\dots,w_C}\Big[\sum_{i=0}^{C}\sum_{t=c_i+1}^{c_{i+1}}\Big((n-1)w_i^\top x_1(t) + \frac{\alpha n}{2}\|x_1(t)\|_2^2\Big) + \frac{\alpha T}{2}\Big\|\frac{n-1}{\sqrt{n}}\bar{w}\Big\|_2^2\Big]\\ &\ge \mathbb{E}_{w_0,\dots,w_C}\Big[\sum_{i=0}^{C}\sum_{t=c_i+1}^{c_{i+1}}(n-1)w_i^\top x_1(t) + \frac{\alpha(n-1)^2T}{2n}\|\bar{w}\|_2^2\Big]\\ &= \mathbb{E}_{w_0,\dots,w_C}\Big[\frac{\alpha(n-1)^2T}{2n}\|\bar{w}\|_2^2\Big]\end{aligned} \tag{26}$$
where the inequality is due to $\frac{\alpha n}{2}\|x\|_2^2 \ge 0$ for any $x$, and the last equality is due to $\mathbb{E}_{w_0,\dots,w_C}[w_i^\top x_1(t)] = 0$ for any $t \in [c_i+1, c_{i+1}]$.
Let $\varepsilon_{01},\dots,\varepsilon_{0d},\dots,\varepsilon_{C1},\dots,\varepsilon_{Cd}$ be independent and identically distributed variables with $\Pr(\varepsilon_{ij} = \pm1) = 1/2$ for $i \in \{0,\dots,C\}$ and $j \in \{1,\dots,d\}$. Then, we have
$$\begin{aligned}\mathbb{E}_{w_0,\dots,w_C}\Big[\frac{\alpha(n-1)^2T}{2n}\|\bar{w}\|_2^2\Big] &= \frac{(n-1)^2}{2\alpha nT}\mathbb{E}_{w_0,\dots,w_C}\Big[\Big\|\sum_{i=0}^{C}(c_{i+1}-c_i)w_i\Big\|_2^2\Big]\\ &= \frac{(n-1)^2}{2\alpha nT}\mathbb{E}_{\varepsilon_{01},\dots,\varepsilon_{Cd}}\Big[\sum_{j=1}^{d}\Big(\sum_{i=0}^{C}(c_{i+1}-c_i)\varepsilon_{ij}\frac{\alpha R}{\sqrt{d}}\Big)^2\Big]\\ &= \frac{\alpha(n-1)^2R^2}{2nT}\mathbb{E}_{\varepsilon_{01},\dots,\varepsilon_{C1}}\Big[\Big(\sum_{i=0}^{C}(c_{i+1}-c_i)\varepsilon_{i1}\Big)^2\Big]\\ &\ge \frac{\alpha(n-1)^2R^2}{2nT}\sum_{i=0}^{C}(c_{i+1}-c_i)^2 \ge \frac{\alpha(n-1)^2R^2}{2nT}\cdot\frac{(c_{C+1}-c_0)^2}{C+1} = \frac{\alpha(n-1)^2R^2T}{2n(C+1)}\end{aligned} \tag{27}$$
where the first inequality is due to the Khintchine inequality, and the second inequality is due to the Cauchy-Schwarz inequality. By combining (26) with (27), we derive an expected lower bound as
$$\mathbb{E}_{w_0,\dots,w_C}[R_{T,1}] = \mathbb{E}_{w_0,\dots,w_C}\Big[\sum_{t=1}^{T}f_t(x_1(t)) - \min_{x\in\mathcal{K}}\sum_{t=1}^{T}f_t(x)\Big] \ge \frac{\alpha(n-1)^2R^2T}{2n(C+1)}$$
which implies that for any distributed online algorithm with communication rounds $\mathcal{C} = \{c_1, \dots, c_C\}$, there exists a particular choice of $w_0, \dots, w_C$ such that
$$R_{T,1} \ge \frac{\alpha(n-1)^2R^2T}{2n(C+1)} \ge \frac{\alpha nR^2T}{8(C+1)}$$
where the last inequality is due to $n-1 \ge n/2$ for any integer $n \ge 2$.
Remark 8 Theorem 4 essentially establishes an $\Omega(T^{1/3}(\log T)^{-1/3})$ lower bound on the number of communication rounds required by any distributed online algorithm for which all local learners achieve the $O(T^{2/3}(\log T)^{1/3})$ regret bound for strongly convex losses, which almost matches (in terms of $T$) the $O(T^{1/3}(\log T)^{2/3})$ communication rounds required by our D-BOCG up to polylogarithmic factors.
Discussions
Besides the dependence on $T$, the lower bounds in our Theorems 3 and 4 also depend on the network size $n$. When the number of communication rounds is limited to $O(\sqrt{T})$ and the losses are convex, Theorem 3 provides a lower regret bound of $\Omega(nT^{3/4})$, but our D-BOCG only achieves a regret bound of $O(n^{5/4}(1-\sigma_2(P))^{-1/2}T^{3/4})$ as shown in Corollary 1. Similarly, when the number of communication rounds is limited to $O(T^{1/3}(\log T)^{2/3})$ and the losses are strongly convex, Theorem 4 provides a lower regret bound of $\Omega(nT^{2/3}(\log T)^{-2/3})$, but our D-BOCG only achieves a regret bound of $O(n^{3/2}(1-\sigma_2(P))^{-1}T^{2/3}(\log T)^{1/3})$ as shown in Corollary 2. So, in terms of the dependence on $n$ and $1-\sigma_2(P)$, there still exist some gaps between our upper bounds and lower bounds. To eliminate these gaps, one potential way is to reduce the dependence of the upper bounds on $n$, and to establish lower bounds depending on the spectral gap $1-\sigma_2(P)$ by carefully considering the topology of the network, which is non-trivial and will be investigated in the future. Moreover, we note that in the proofs of Theorems 3 and 4, only the regret of local learner 1 is analyzed. It is also interesting to ask whether the regret of every other local learner $i \ne 1$ simultaneously has a lower bound similar to that of local learner 1. Unfortunately, the answer is negative when we utilize the sequence of local losses selected in the proofs of Theorems 3 and 4. Let us consider a distributed online algorithm that directly computes $x_i(t+1) \in \mathrm{argmin}_{x\in\mathcal{K}}f_{t,i}(x)$. Following the notation used in the proof of Theorem 3, for $x^* \in \mathrm{argmin}_{x\in\mathcal{K}}\sum_{t=1}^{T}f_t(x)$, the regret of its local learner $i \ne 1$ can be upper bounded as
$$\sum_{t=1}^{T}f_t(x_i(t)) - \sum_{t=1}^{T}f_t(x^*) = \sum_{j=0}^{C}\sum_{t=c_j+1}^{c_{j+1}}(n-1)w_j^\top(x_i(t) - x^*) \le \sum_{j=0}^{C}(n-1)w_j^\top(x_i(c_j+1) - x^*) \le 2(n-1)(C+1)RG$$
where the first inequality is due to $x_i(t) \in \mathrm{argmin}_{x\in\mathcal{K}}w_j^\top x$ for $c_{j+1} \ge t > c_j+1$, and the last inequality is due to the fact that $x_i(c_j+1) \in \mathcal{K}$ and the coordinates of $w_j$ belong to $\{\pm G/\sqrt{d}\}$. This regret bound is smaller than the lower bound $\frac{nRGT}{2\sqrt{2(C+1)}}$ when $C$ is small. A similar result can be derived when we use the sequence of local losses selected in the proof of Theorem 4. However, as discussed before, deriving a lower bound for one local learner is sufficient in this paper. So, we leave the problem of simultaneously lower bounding the regret of all local learners as future work.
An Extension of D-BOCG to the Bandit Setting
In this section, we extend our D-BOCG to the bandit setting, where only the loss value is available to each local learner. The main idea is to combine D-BOCG with the one-point gradient estimator (Flaxman et al., 2005).
The Algorithm
By combining our D-BOCG with the one-point gradient estimator, our algorithm for the bandit setting is outlined in Algorithm 3 and named distributed block bandit conditional gradient (D-BBCG), where $0 < \delta \le r$ and
$$\mathcal{K}_\delta = (1-\delta/r)\mathcal{K} = \{(1-\delta/r)x \mid x \in \mathcal{K}\}.$$
Algorithm 3 D-BBCG
1: Input: feasible set $\mathcal{K}$, $\delta$, $x_{in} \in \mathcal{K}_\delta$, $\alpha$, $h$, $L$, and $K$
2: Initialization: choose $x_1(1) = \cdots = x_n(1) = x_{in}$ and set $z_1(1) = \cdots = z_n(1) = 0$
3: for $m = 1, \dots, T/K$ do
4: for each local learner $i \in V$ do
5: define $F_{m,i}(x) = z_i(m)^\top x + \frac{(m-1)\alpha K}{2}\|x\|_2^2 + h\|x - x_{in}\|_2^2$
6: $\tilde{g}_i(m) = 0$
7: for $t = (m-1)K+1, \dots, mK$ do
8: play $y_i(t) = x_i(m) + \delta u_i(t)$ where $u_i(t) \sim \mathbb{S}^d$
9: observe $f_{t,i}(y_i(t))$ and compute $g_i(t) = \frac{d}{\delta}f_{t,i}(y_i(t))u_i(t)$
10: $\tilde{g}_i(m) = \tilde{g}_i(m) + g_i(t)$
11: end for
12: $x_i(m+1) = \mathrm{CG}(\mathcal{K}_\delta, L, F_{m,i}(x), x_i(m))$ //This step can be executed in parallel to the above for loop.
13: $z_i(m+1) = \sum_{j\in N_i}P_{ij}z_j(m) + \tilde{g}_i(m) - \alpha Kx_i(m)$
14: end for
15: end for
Comparing D-BBCG with D-BOCG, there exist three differences, as follows. First, in line 8 of D-BBCG, the actual decision $y_i(t)$ is $x_i(m)$ plus a random perturbation $\delta u_i(t)$, where $u_i(t) \sim \mathbb{S}^d$. Second, in line 9 of D-BBCG, we can only observe the loss value $f_{t,i}(y_i(t))$ instead of the gradient $\nabla f_{t,i}(x_i(m))$, and adopt the one-point gradient estimator to approximate the gradient as
$$g_i(t) = \frac{d}{\delta}f_{t,i}(y_i(t))u_i(t).$$
Third, to ensure $y_i(t) \in \mathcal{K}$, in line 12 of D-BBCG, we perform $x_i(m+1) = \mathrm{CG}(\mathcal{K}_\delta, L, F_{m,i}(x), x_i(m))$ by replacing $\mathcal{K}$ in line 11 of D-BOCG with the smaller set $\mathcal{K}_\delta \subseteq \mathcal{K}$, which keeps $x_i(m)$ in $\mathcal{K}_\delta$. Because of Assumption 2 and $0 < \delta \le r$, it is easy to verify that $x + \delta u \in \mathcal{K}$ for any $x \in \mathcal{K}_\delta$ and $u \sim \mathbb{S}^d$ by utilizing the fact that $r\mathbb{B}^d \subseteq \mathcal{K}$.
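As a quick sanity check on this shrinking argument (our own sketch; the L1 ball and all constants are illustrative assumptions), the following code verifies numerically that the perturbed decision $y = x + \delta u$ with $x \in \mathcal{K}_\delta$ never leaves $\mathcal{K}$:

import numpy as np

rng = np.random.default_rng(0)
d, tau = 5, 10.0
r = tau / np.sqrt(d)        # r * B^d is the largest Euclidean ball inside the L1 ball K
delta = 0.5 * r             # any 0 < delta <= r works

for _ in range(100000):
    x = rng.standard_normal(d)
    x = (tau * rng.random()) * x / np.abs(x).sum()  # random point of K = {x : ||x||_1 <= tau}
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)                          # random direction on the unit sphere
    y = (1 - delta / r) * x + delta * u             # shrink to K_delta, then perturb
    assert np.abs(y).sum() <= tau + 1e-9            # y stays inside K
print("all perturbed decisions remain feasible")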
Theoretical Guarantees
In the following, we present theoretical guarantees of our D-BBCG. We first provide expected regret bounds of D-BBCG for convex losses and strongly convex losses, respectively.

Theorem 5 Let $\alpha = 0$, $h = \frac{n^{1/4}dMT^{3/4}}{\sqrt{1-\sigma_2(P)}R}$, $\delta = cT^{-1/4}$, and $K = L = \sqrt{T}$, where $c > 0$ is a constant such that $\delta \le r$. Under Assumptions 1, 2, 3, and 5, for any $i \in V$, Algorithm 3 ensures
$$\mathbb{E}[R_{T,i}] = O\big(n^{5/4}(1-\sigma_2(P))^{-1/2}T^{3/4}\big).$$

Theorem 6 Let $\alpha > 0$, $h = \alpha K$, $\delta = cT^{-1/3}(\ln T)^{1/3}$, and $K = L = T^{2/3}(\ln T)^{-2/3}$, where $c > 0$ is a constant such that $\delta \le r$. Under Assumptions 1, 2, 3, 4, and 5, for any $i \in V$, Algorithm 3 ensures
$$\mathbb{E}[R_{T,i}] = O\big(n^{3/2}(1-\sigma_2(P))^{-1}T^{2/3}(\log T)^{1/3}\big).$$

Moreover, for convex losses, we also establish the following high-probability regret bound.
Theorem 7 Let $\alpha = 0$, $h = \frac{n^{1/4}\xi_T dMT^{3/4}}{\sqrt{1-\sigma_2(P)}R}$, $\delta = cT^{-1/4}$, and $K = L = \sqrt{T}$, where $c > 0$ is a constant such that $\delta \le r$, and $\xi_T = 1 + \sqrt{8\ln\frac{n\sqrt{T}}{\gamma}}$, where $0.5 > \gamma > 0$ is a constant. Under Assumptions 1, 2, 3, and 5, for any $i \in V$, with probability at least $1-2\gamma$, Algorithm 3 has
$$R_{T,i} = O\big(n^{5/4}(1-\sigma_2(P))^{-1/2}T^{3/4}\xi_T\big).$$
Remark 10 While the above theorem presents a high-probability regret bound for convex losses, it is hard to extend it to strongly convex losses. We note that according to the proof of Theorem 7, the high-probability regret bound of D-BBCG contains a term $O(K\sqrt{B\log(1/\gamma)})$, where the factor $K$ is incurred by the delayed update mechanism and the factor $\sqrt{B\log(1/\gamma)}$ is incurred by the application of the classical Azuma concentration inequality (Azuma, 1967). If we consider strongly convex losses, we would like to set $K = T^{2/3}(\ln T)^{-2/3}$ to control the communication complexity, but in this case the term $O(K\sqrt{B\log(1/\gamma)})$ is worse than the expected regret bound in Theorem 6. Therefore, to extend the above theorem to strongly convex losses, we may need some novel techniques to improve the term $O(K\sqrt{B\log(1/\gamma)})$, which will be investigated in the future.
Analysis
In the following, we only provide the proofs of Theorems 5 and 6. The proof of Theorem 7 can be found in the Appendix.
Proof of Theorems 5 and 6
Similar to the proof of Theorem 1, we first define several auxiliary variables. Let $\bar{z}(m) = \frac{1}{n}\sum_{i=1}^{n}z_i(m)$ for $m \in [B+1]$, and let $d_i(m) = \tilde{g}_i(m) - \alpha Kx_i(m)$ and $\bar{d}(m) = \frac{1}{n}\sum_{i=1}^{n}d_i(m)$ for $m \in [B]$. Then, we define $\bar{x}(1) = x_{in}$ and $\bar{x}(m+1) = \mathrm{argmin}_{x\in\mathcal{K}_\delta}\bar{F}_m(x)$ for any $m \in [B+1]$, where
$$\bar{F}_m(x) = \bar{z}(m)^\top x + \frac{(m-1)\alpha K}{2}\|x\|_2^2 + h\|x - x_{in}\|_2^2.$$
Similarly, we define $\tilde{x}_i(m+1) = \mathrm{argmin}_{x\in\mathcal{K}_\delta}F_{m,i}(x)$ for any $m \in [B+1]$, where
$$F_{m,i}(x) = z_i(m)^\top x + \frac{(m-1)\alpha K}{2}\|x\|_2^2 + h\|x - x_{in}\|_2^2$$
is defined in Algorithm 3. Moreover, we need to introduce the following lemmas.
Lemma 6 Let $d_i(m) = \tilde{g}_i(m) - \alpha Kx_i(m)$ for $m \in [B]$. Under Assumptions 1, 2, and 5, for any $i \in V$ and $m \in [B]$, Algorithm 3 ensures that
$$\mathbb{E}[\|d_i(m)\|_2]^2 \le \mathbb{E}[\|d_i(m)\|_2^2] \le 2K\Big(\frac{dM}{\delta}\Big)^2 + 2K^2G^2 + 2(\alpha KR)^2.$$

Lemma 7 (Derived from the proof of Lemma 6 in Zhang et al. (2017)) For any $i \in [n]$, let $d_i(1), \dots, d_i(m) \in \mathbb{R}^d$ be a sequence of vectors. Let $z_i(1) = 0$, $z_i(m+1) = \sum_{j\in N_i}P_{ij}z_j(m) + d_i(m)$, and $\bar{z}(m) = \frac{1}{n}\sum_{i=1}^{n}z_i(m)$ for $m \in [B]$, where $P$ satisfies Assumption 3. For any $i \in V$ and $m \in [B]$, assuming $\mathbb{E}[\|d_i(m)\|_2] \le \widetilde{G}$ where $\widetilde{G} > 0$ is a constant, we have
$$\mathbb{E}[\|z_i(m) - \bar{z}(m)\|_2] \le \frac{\widetilde{G}\sqrt{n}}{1-\sigma_2(P)}.$$

Lemma 8 (Lemma 2.6 in Hazan (2016) and Lemma 6 in Wan et al. (2021)) Let $f(x): \mathbb{R}^d \to \mathbb{R}$ be $\alpha$-strongly convex and $G$-Lipschitz over a convex and compact set $\mathcal{K} \subset \mathbb{R}^d$. Then, its $\delta$-smoothed version $\hat{f}_\delta(x) = \mathbb{E}_{u\sim\mathbb{B}^d}[f(x+\delta u)]$ has the following properties:
• $\hat{f}_\delta(x)$ is $\alpha$-strongly convex over $\mathcal{K}_\delta$;
• $|\hat{f}_\delta(x) - f(x)| \le \delta G$ for any $x \in \mathcal{K}_\delta$;
• $\hat{f}_\delta(x)$ is $G$-Lipschitz over $\mathcal{K}_\delta$.
Then, we derive an upper bound of $\mathbb{E}[\|x_i(m) - \bar{x}(m)\|_2]$ for any $m \in [B]$. If $m = 1$, according to the definition and Algorithm 3, it is easy to verify that
$$\mathbb{E}[\|x_i(m) - \bar{x}(m)\|_2] = \mathbb{E}[0] = 0. \tag{28}$$
For any $B \ge m \ge 2$, we note that $F_{m-1,i}(x)$ is $((m-2)\alpha K + 2h)$-smooth, and Algorithm 3 ensures $x_i(m) = \mathrm{CG}(\mathcal{K}_\delta, L, F_{m-1,i}(x), x_i(m-1))$. According to Lemma 1 and Assumption 2, for $B \ge m \ge 2$, it is easy to verify that
$$F_{m-1,i}(x_i(m)) - F_{m-1,i}(\tilde{x}_i(m)) \le \frac{8((m-2)\alpha K + 2h)R^2}{L+2}.$$
Then, for any $B \ge m \ge 2$, it is easy to verify that
$$\|x_i(m) - \bar{x}(m)\|_2 \le \|x_i(m) - \tilde{x}_i(m)\|_2 + \|\tilde{x}_i(m) - \bar{x}(m)\|_2 \le \sqrt{\frac{2F_{m-1,i}(x_i(m)) - 2F_{m-1,i}(\tilde{x}_i(m))}{(m-2)\alpha K + 2h}} + \|\tilde{x}_i(m) - \bar{x}(m)\|_2 \le \frac{4R}{\sqrt{L+2}} + \|\tilde{x}_i(m) - \bar{x}(m)\|_2 \tag{29}$$
where the second inequality is due to the fact that $F_{m-1,i}(x)$ is also $((m-2)\alpha K + 2h)$-strongly convex and (7). Moreover, for any $B \ge m \ge 2$, similar to (17) and (18), we have
$$\tilde{x}_i(m) = \mathrm{argmin}_{x\in\mathcal{K}_\delta}\,z_i(m-1)^\top x + \frac{(m-2)\alpha K}{2}\|x\|_2^2 + h\|x-x_{in}\|_2^2 = \mathrm{argmin}_{x\in\mathcal{K}_\delta}\,\frac{2}{(m-2)\alpha K + 2h}(z_i(m-1) - 2hx_{in})^\top x + \|x\|_2^2$$
and
$$\bar{x}(m) = \mathrm{argmin}_{x\in\mathcal{K}_\delta}\,\bar{z}(m-1)^\top x + \frac{(m-2)\alpha K}{2}\|x\|_2^2 + h\|x-x_{in}\|_2^2 = \mathrm{argmin}_{x\in\mathcal{K}_\delta}\,\frac{2}{(m-2)\alpha K + 2h}(\bar{z}(m-1) - 2hx_{in})^\top x + \|x\|_2^2.$$
Therefore, for any $B \ge m \ge 2$, by applying Lemma 4, we have
$$\|\tilde{x}_i(m) - \bar{x}(m)\|_2 \le \frac{\|z_i(m-1) - 2hx_{in} - \bar{z}(m-1) + 2hx_{in}\|_2}{(m-2)\alpha K + 2h} = \frac{\|z_i(m-1) - \bar{z}(m-1)\|_2}{(m-2)\alpha K + 2h}.$$
By further combining with (29), for any $B \ge m \ge 2$, we have
$$\mathbb{E}[\|x_i(m) - \bar{x}(m)\|_2] \le \frac{4R}{\sqrt{L+2}} + \mathbb{E}[\|\tilde{x}_i(m) - \bar{x}(m)\|_2] \le \frac{4R}{\sqrt{L+2}} + \frac{\mathbb{E}[\|z_i(m-1) - \bar{z}(m-1)\|_2]}{(m-2)\alpha K + 2h} \le \frac{4R}{\sqrt{L+2}} + \frac{\sqrt{2K(dM/\delta)^2 + 2K^2G^2 + 2(\alpha KR)^2}\,\sqrt{n}}{((m-2)\alpha K + 2h)(1-\sigma_2(P))} \tag{30}$$
where the last inequality is due to
$$\mathbb{E}[\|z_i(m-1) - \bar{z}(m-1)\|_2] \le \frac{\sqrt{2K(dM/\delta)^2 + 2K^2G^2 + 2(\alpha KR)^2}\,\sqrt{n}}{1-\sigma_2(P)}$$
which is derived by combining Lemma 6 with Lemma 7. Let $u_1 = 0$ and
$$u_m = \frac{4R}{\sqrt{L+2}} + \frac{\sqrt{2K(dM/\delta)^2 + 2K^2G^2 + 2(\alpha KR)^2}\,\sqrt{n}}{((m-2)\alpha K + 2h)(1-\sigma_2(P))}$$
for any $B \ge m \ge 2$. From (28) and (30), for any $m \in [B]$, it holds that $\mathbb{E}[\|x_i(m) - \bar{x}(m)\|_2] \le u_m$. Let $x^* \in \mathrm{argmin}_{x\in\mathcal{K}}\sum_{t=1}^{T}f_t(x)$, $\hat{x}^* = (1-\delta/r)x^*$, and let $\hat{f}_{t,j,\delta}(x)$ denote the $\delta$-smoothed version of $f_{t,j}(x)$. For any $i, j \in V$, $m \in [B]$, and $t \in \mathcal{T}_m$, by applying Lemma 8, we have
$$\begin{aligned}\mathbb{E}[\hat{f}_{t,j,\delta}(x_i(m)) - \hat{f}_{t,j,\delta}(\hat{x}^*)] &\le \mathbb{E}[\hat{f}_{t,j,\delta}(\bar{x}(m)) - \hat{f}_{t,j,\delta}(\hat{x}^*) + G\|\bar{x}(m) - x_i(m)\|_2]\\ &\le \mathbb{E}[\hat{f}_{t,j,\delta}(x_j(m)) - \hat{f}_{t,j,\delta}(\hat{x}^*) + G\|\bar{x}(m) - x_j(m)\|_2] + Gu_m\\ &\le \mathbb{E}\Big[\nabla\hat{f}_{t,j,\delta}(x_j(m))^\top(x_j(m) - \hat{x}^*) - \frac{\alpha}{2}\|x_j(m) - \hat{x}^*\|_2^2\Big] + 2Gu_m\\ &= \mathbb{E}\Big[\nabla\hat{f}_{t,j,\delta}(x_j(m))^\top(\bar{x}(m+1) - \hat{x}^*) - \frac{\alpha}{2}\|x_j(m) - \hat{x}^*\|_2^2\Big] + 2Gu_m + \mathbb{E}[\nabla\hat{f}_{t,j,\delta}(x_j(m))^\top(x_j(m) - \bar{x}(m+1))]\\ &\le \mathbb{E}\Big[\nabla\hat{f}_{t,j,\delta}(x_j(m))^\top(\bar{x}(m+1) - \hat{x}^*) - \frac{\alpha}{2}\|x_j(m) - \hat{x}^*\|_2^2\Big] + 2Gu_m + \mathbb{E}[G(\|x_j(m) - \bar{x}(m)\|_2 + \|\bar{x}(m) - \bar{x}(m+1)\|_2)]\\ &\le \mathbb{E}\Big[(\nabla\hat{f}_{t,j,\delta}(x_j(m)) - \alpha x_j(m))^\top(\bar{x}(m+1) - \hat{x}^*) - \frac{\alpha}{2}(\|\hat{x}^*\|_2^2 - \|\bar{x}(m+1)\|_2^2)\Big] + \mathbb{E}[G\|\bar{x}(m) - \bar{x}(m+1)\|_2] + 3Gu_m\end{aligned} \tag{31}$$
where the first two inequalities are due to the fact that $\hat{f}_{t,j,\delta}(x)$ is $G$-Lipschitz over $\mathcal{K}_\delta$, the third inequality is due to the strong convexity of $\hat{f}_{t,j,\delta}(x)$, and the last inequality is due to
$$\|x_j(m) - \hat{x}^*\|_2^2 = \|x_j(m) - \bar{x}(m+1)\|_2^2 + 2x_j(m)^\top(\bar{x}(m+1) - \hat{x}^*) + \|\hat{x}^*\|_2^2 - \|\bar{x}(m+1)\|_2^2 \ge 2x_j(m)^\top(\bar{x}(m+1) - \hat{x}^*) + \|\hat{x}^*\|_2^2 - \|\bar{x}(m+1)\|_2^2.$$
Moreover, it is not hard to verify that
$$\begin{aligned}R_{T,i} &= \sum_{m=1}^{B}\sum_{t\in\mathcal{T}_m}\sum_{j=1}^{n}(f_{t,j}(x_i(m) + \delta u_i(t)) - f_{t,j}(x^*))\\ &\le \sum_{m=1}^{B}\sum_{t\in\mathcal{T}_m}\sum_{j=1}^{n}\big((f_{t,j}(x_i(m)) + G\|\delta u_i(t)\|_2) - (f_{t,j}(\hat{x}^*) - G\|\hat{x}^* - x^*\|_2)\big)\\ &\le \sum_{m=1}^{B}\sum_{t\in\mathcal{T}_m}\sum_{j=1}^{n}\Big(f_{t,j}(x_i(m)) - f_{t,j}(\hat{x}^*) + G\delta + \frac{\delta GR}{r}\Big)\\ &\le \sum_{m=1}^{B}\sum_{t\in\mathcal{T}_m}\sum_{j=1}^{n}\big((\hat{f}_{t,j,\delta}(x_i(m)) + \delta G) - (\hat{f}_{t,j,\delta}(\hat{x}^*) - \delta G)\big) + \delta nGT + \frac{\delta nGRT}{r}\\ &= \sum_{m=1}^{B}\sum_{t\in\mathcal{T}_m}\sum_{j=1}^{n}(\hat{f}_{t,j,\delta}(x_i(m)) - \hat{f}_{t,j,\delta}(\hat{x}^*)) + 3\delta nGT + \frac{\delta nGRT}{r}\end{aligned} \tag{32}$$
where the first inequality is due to Assumption 1, the second inequality is due to $x^* \in \mathcal{K}$ and Assumption 2, and the third inequality is due to Lemma 8.
By combining (31) with (32), we have
$$\mathbb{E}[R_{T,i}] \le \sum_{m=1}^{B}\sum_{t\in\mathcal{T}_m}\sum_{j=1}^{n}\mathbb{E}\Big[(\nabla\hat{f}_{t,j,\delta}(x_j(m)) - \alpha x_j(m))^\top(\bar{x}(m+1) - \hat{x}^*) - \frac{\alpha}{2}(\|\hat{x}^*\|_2^2 - \|\bar{x}(m+1)\|_2^2)\Big] + nKG\sum_{m=1}^{B}\mathbb{E}[\|\bar{x}(m) - \bar{x}(m+1)\|_2] + 3nKG\sum_{m=1}^{B}u_m + 3\delta nGT + \frac{\delta nGRT}{r}. \tag{33}$$
Let $\tilde{f}_m(x) = \bar{d}(m)^\top x + \frac{\alpha K}{2}\|x\|_2^2$. Due to Lemma 2, we have
$$\begin{aligned}\sum_{m=1}^{B}\sum_{t\in\mathcal{T}_m}\sum_{j=1}^{n}\mathbb{E}\Big[(\nabla\hat{f}_{t,j,\delta}(x_j(m)) - \alpha x_j(m))^\top(\bar{x}(m+1) - \hat{x}^*) - \frac{\alpha}{2}(\|\hat{x}^*\|_2^2 - \|\bar{x}(m+1)\|_2^2)\Big] &= \sum_{m=1}^{B}\sum_{j=1}^{n}\mathbb{E}\Big[(\tilde{g}_j(m) - \alpha Kx_j(m))^\top(\bar{x}(m+1) - \hat{x}^*) - \frac{\alpha K}{2}(\|\hat{x}^*\|_2^2 - \|\bar{x}(m+1)\|_2^2)\Big]\\ &= n\sum_{m=1}^{B}\mathbb{E}\Big[\bar{d}(m)^\top(\bar{x}(m+1) - \hat{x}^*) - \frac{\alpha K}{2}(\|\hat{x}^*\|_2^2 - \|\bar{x}(m+1)\|_2^2)\Big]\\ &= n\sum_{m=1}^{B}\mathbb{E}\big[\tilde{f}_m(\bar{x}(m+1)) - \tilde{f}_m(\hat{x}^*)\big].\end{aligned} \tag{34}$$
According to the definition and (12), we have
$$\bar{x}(m+1) = \mathrm{argmin}_{x\in\mathcal{K}_\delta}\,\bar{z}(m)^\top x + \frac{(m-1)\alpha K}{2}\|x\|_2^2 + h\|x-x_{in}\|_2^2 = \mathrm{argmin}_{x\in\mathcal{K}_\delta}\sum_{\tau=1}^{m-1}\tilde{f}_\tau(x) + h\|x-x_{in}\|_2^2.$$
By applying Lemma 5 with the loss functions $\{\tilde{f}_m(x)\}_{m=1}^{B}$, the decision set $\mathcal{K}_\delta$, and the regularizer $\mathcal{R}(x) = h\|x-x_{in}\|_2^2$, we have
$$\sum_{m=1}^{B}\big(\tilde{f}_m(\bar{x}(m+1)) - \tilde{f}_m(\hat{x}^*)\big) \le h\|\hat{x}^* - x_{in}\|_2^2 - h\|\bar{x}(2) - x_{in}\|_2^2 + \sum_{m=1}^{B}\big(\tilde{f}_m(\bar{x}(m+1)) - \tilde{f}_m(\bar{x}(m+2))\big) \le 4hR^2 + \sum_{m=1}^{B}\|\nabla\tilde{f}_m(\bar{x}(m+1))\|_2\|\bar{x}(m+1) - \bar{x}(m+2)\|_2 \le 4hR^2 + \sum_{m=1}^{B}\|\bar{d}(m) + \alpha K\bar{x}(m+1)\|_2\|\bar{x}(m+1) - \bar{x}(m+2)\|_2. \tag{35}$$
Note that $\bar{F}_{m+1}(x)$ is $(m\alpha K + 2h)$-strongly convex and $\bar{x}(m+2) = \mathrm{argmin}_{x\in\mathcal{K}_\delta}\bar{F}_{m+1}(x)$. For any $m \in [B]$, we have
$$\frac{m\alpha K + 2h}{2}\|\bar{x}(m+1) - \bar{x}(m+2)\|_2^2 \le \bar{F}_{m+1}(\bar{x}(m+1)) - \bar{F}_{m+1}(\bar{x}(m+2)) = \bar{F}_m(\bar{x}(m+1)) + \tilde{f}_m(\bar{x}(m+1)) - \bar{F}_m(\bar{x}(m+2)) - \tilde{f}_m(\bar{x}(m+2)) \le \nabla\tilde{f}_m(\bar{x}(m+1))^\top(\bar{x}(m+1) - \bar{x}(m+2)) \le \|\bar{d}(m) + \alpha K\bar{x}(m+1)\|_2\|\bar{x}(m+1) - \bar{x}(m+2)\|_2$$
where the first inequality is due to (7), and the second inequality is due to $\bar{x}(m+1) = \mathrm{argmin}_{x\in\mathcal{K}_\delta}\bar{F}_m(x)$ and the convexity of $\tilde{f}_m(x)$. Moreover, for any $m \in [B]$, the above inequality can be simplified as
$$\|\bar{x}(m+1) - \bar{x}(m+2)\|_2 \le \frac{2\|\bar{d}(m) + \alpha K\bar{x}(m+1)\|_2}{m\alpha K + 2h}. \tag{36}$$
By combining (33), (34), (35), and (36), we have
$$\begin{aligned}\mathbb{E}[R_{T,i}] &\le 4nhR^2 + n\sum_{m=1}^{B}\mathbb{E}\Big[\frac{2\|\bar{d}(m) + \alpha K\bar{x}(m+1)\|_2^2}{m\alpha K + 2h}\Big] + nKG\sum_{m=1}^{B}\mathbb{E}[\|\bar{x}(m) - \bar{x}(m+1)\|_2] + 3nKG\sum_{m=1}^{B}u_m + 3\delta nGT + \frac{\delta nGRT}{r}\\ &\le n\sum_{m=1}^{B}\mathbb{E}\Big[\frac{2\|\bar{d}(m) + \alpha K\bar{x}(m+1)\|_2^2}{m\alpha K + 2h}\Big] + nKG\sum_{m=2}^{B}\mathbb{E}\Big[\frac{2\|\bar{d}(m-1) + \alpha K\bar{x}(m)\|_2}{(m-1)\alpha K + 2h}\Big] + 3nKG\sum_{m=1}^{B}u_m + 3\delta nGT + \frac{\delta nGRT}{r} + 4nhR^2\\ &\le n\sum_{m=1}^{B}\mathbb{E}\Big[\frac{2\|\bar{d}(m) + \alpha K\bar{x}(m+1)\|_2^2}{m\alpha K + 2h}\Big] + nKG\sum_{m=1}^{B}\mathbb{E}\Big[\frac{2\|\bar{d}(m) + \alpha K\bar{x}(m+1)\|_2}{m\alpha K + 2h}\Big] + 3nKG\sum_{m=1}^{B}u_m + 3\delta nGT + \frac{\delta nGRT}{r} + 4nhR^2\end{aligned} \tag{37}$$
where the second inequality is derived by bounding $\|\bar{x}(m) - \bar{x}(m+1)\|_2$ using (36) for $m > 1$ and $\bar{x}(2) = \mathrm{argmin}_{x\in\mathcal{K}_\delta}\bar{F}_1(x) = x_{in} = \bar{x}(1)$ for $m = 1$.
With the above inequality, we can establish the specific regret bound for convex losses and strongly convex losses, respectively.
Convex Losses
We first consider the case with convex losses, in which the parameters of our Algorithm 3 are set to $\alpha = 0$, $K = L = \sqrt{T}$, $h = \frac{n^{1/4}dMT^{3/4}}{\sqrt{1-\sigma_2(P)}R}$, and $\delta = cT^{-1/4}$. Because of $\alpha = 0$, $K = \sqrt{T}$, and $\delta = cT^{-1/4}$, we have
$$\mathbb{E}[\|\bar{d}(m) + \alpha K\bar{x}(m+1)\|_2^2] = \mathbb{E}[\|\bar{d}(m)\|_2^2] \le 2K\Big(\frac{dM}{\delta}\Big)^2 + 2K^2G^2 + 2(\alpha KR)^2 = \Big(\frac{2d^2M^2}{c^2} + 2G^2\Big)T$$
where the inequality is due to Lemma 6. Therefore, with $\alpha = 0$, $K = \sqrt{T}$, $h = \frac{n^{1/4}dMT^{3/4}}{\sqrt{1-\sigma_2(P)}R}$, and $\delta = cT^{-1/4}$, we have
$$n\sum_{m=1}^{B}\mathbb{E}\Big[\frac{2\|\bar{d}(m) + \alpha K\bar{x}(m+1)\|_2^2}{m\alpha K + 2h}\Big] \le \Big(\frac{d^2M^2}{c^2} + G^2\Big)\frac{2n^{3/4}\sqrt{1-\sigma_2(P)}RT^{3/4}}{dM} = O(n^{3/4}T^{3/4}). \tag{38}$$
Similarly, with $\alpha = 0$, $K = \sqrt{T}$, $h = \frac{n^{1/4}dMT^{3/4}}{\sqrt{1-\sigma_2(P)}R}$, and $\delta = cT^{-1/4}$, we have
$$nKG\sum_{m=1}^{B}\mathbb{E}\Big[\frac{2\|\bar{d}(m) + \alpha K\bar{x}(m+1)\|_2}{m\alpha K + 2h}\Big] \le \sqrt{\frac{2d^2M^2}{c^2} + 2G^2}\,\frac{n^{3/4}\sqrt{1-\sigma_2(P)}GRT^{3/4}}{dM} = O(n^{3/4}T^{3/4}). \tag{39}$$
Note that $u_1 = 0$ and $u_m = \frac{4R}{\sqrt{L+2}} + \frac{\sqrt{2K(dM/\delta)^2 + 2K^2G^2 + 2(\alpha KR)^2}\sqrt{n}}{((m-2)\alpha K + 2h)(1-\sigma_2(P))}$ for any $B \ge m \ge 2$. With $\alpha = 0$, $K = L = \sqrt{T}$, $h = \frac{n^{1/4}dMT^{3/4}}{\sqrt{1-\sigma_2(P)}R}$, and $\delta = cT^{-1/4}$, we have
$$3nKG\sum_{m=1}^{B}u_m = 3nKG\sum_{m=2}^{B}\Big(\frac{4R}{\sqrt{L+2}} + \frac{\sqrt{2K(dM/\delta)^2 + 2K^2G^2}\sqrt{n}}{2h(1-\sigma_2(P))}\Big) \le \frac{12nK(B-1)GR}{\sqrt{L+2}} + \frac{3n^{3/2}K(B-1)G\sqrt{2K(dM/\delta)^2 + 2K^2G^2}}{2h(1-\sigma_2(P))} \le 12nGRT^{3/4} + \sqrt{\frac{2d^2M^2}{c^2} + 2G^2}\,\frac{3n^{5/4}GRT^{3/4}}{2dM\sqrt{1-\sigma_2(P)}} = O\big(n^{5/4}(1-\sigma_2(P))^{-1/2}T^{3/4}\big). \tag{40}$$
Moreover, with $K = \sqrt{T}$, $h = \frac{n^{1/4}dMT^{3/4}}{\sqrt{1-\sigma_2(P)}R}$, and $\delta = cT^{-1/4}$, we have
$$3\delta nGT + \frac{\delta nGRT}{r} + 4nhR^2 = 3cnGT^{3/4} + \frac{cnGRT^{3/4}}{r} + \frac{4n^{5/4}dMRT^{3/4}}{\sqrt{1-\sigma_2(P)}} = O\big(n^{5/4}(1-\sigma_2(P))^{-1/2}T^{3/4}\big). \tag{41}$$
Finally, by combining (37), (38), (39), (40), and (41), our Algorithm 3 with $\alpha = 0$, $K = L = \sqrt{T}$, $h = \frac{n^{1/4}dMT^{3/4}}{\sqrt{1-\sigma_2(P)}R}$, and $\delta = cT^{-1/4}$ ensures
$$\mathbb{E}[R_{T,i}] = O\big(n^{5/4}(1-\sigma_2(P))^{-1/2}T^{3/4}\big)$$
for convex losses, which completes the proof of Theorem 5.
Strongly Convex Losses
We continue to consider the case with strongly convex losses, in which the parameters of our Algorithm 3 are set to $\alpha > 0$, $K = L = T^{2/3}(\ln T)^{-2/3}$, $\delta = cT^{-1/3}(\ln T)^{1/3}$, and $h = \alpha K$. With $K = T^{2/3}(\ln T)^{-2/3}$ and $\delta = cT^{-1/3}(\ln T)^{1/3}$, we have
$$\mathbb{E}[\|\bar{d}(m) + \alpha K\bar{x}(m)\|_2]^2 \le \mathbb{E}[\|\bar{d}(m) + \alpha K\bar{x}(m)\|_2^2] \le 2\mathbb{E}[\|\bar{d}(m)\|_2^2] + 2\mathbb{E}[\|\alpha K\bar{x}(m)\|_2^2] \le 4K\Big(\frac{dM}{\delta}\Big)^2 + 4K^2G^2 + 6(\alpha KR)^2 = \Big(\frac{4d^2M^2}{c^2} + 4G^2 + 6\alpha^2R^2\Big)\Big(\frac{T}{\ln T}\Big)^{4/3}$$
where the third inequality is due to Lemma 6 and Assumption 2.
For brevity, let $C = \frac{4d^2M^2}{c^2} + 4G^2 + 6\alpha^2R^2$. With $\alpha > 0$, $K = T^{2/3}(\ln T)^{-2/3}$, $\delta = cT^{-1/3}(\ln T)^{1/3}$, and $h = \alpha K$, we have
$$n\sum_{m=1}^{B}\mathbb{E}\Big[\frac{2\|\bar{d}(m) + \alpha K\bar{x}(m+1)\|_2^2}{m\alpha K + 2h}\Big] \le \frac{2nC}{\alpha K}\Big(\frac{T}{\ln T}\Big)^{4/3}\sum_{m=1}^{B}\frac{1}{m+2} \le \frac{2nC}{\alpha K}\Big(\frac{T}{\ln T}\Big)^{4/3}\sum_{m=1}^{B}\frac{1}{m} \le \frac{2nCT^{2/3}}{\alpha(\ln T)^{2/3}}(1+\ln B) \le \frac{2nCT^{2/3}}{\alpha(\ln T)^{2/3}} + \frac{2nCT^{2/3}(\ln T)^{1/3}}{\alpha} = O(nT^{2/3}(\log T)^{1/3}) \tag{42}$$
where the last inequality is due to $B \le T$. Similarly, with $\alpha > 0$, $K = T^{2/3}(\ln T)^{-2/3}$, $\delta = cT^{-1/3}(\ln T)^{1/3}$, and $h = \alpha K$, we have
$$nKG\sum_{m=1}^{B}\mathbb{E}\Big[\frac{2\|\bar{d}(m) + \alpha K\bar{x}(m+1)\|_2}{m\alpha K + 2h}\Big] \le \frac{2nG\sqrt{C}}{\alpha}\Big(\frac{T}{\ln T}\Big)^{2/3}\sum_{m=1}^{B}\frac{1}{m+2} \le \frac{2nG\sqrt{C}}{\alpha}\Big(\frac{T}{\ln T}\Big)^{2/3}\sum_{m=1}^{B}\frac{1}{m} \le \frac{2nG\sqrt{C}}{\alpha}\Big(\frac{T}{\ln T}\Big)^{2/3}(1+\ln B) = O(nT^{2/3}(\log T)^{1/3}). \tag{43}$$
Moreover, with $\alpha > 0$, $K = L = T^{2/3}(\ln T)^{-2/3}$, $\delta = cT^{-1/3}(\ln T)^{1/3}$, and $h = \alpha K$, we have
$$u_m = \frac{4R}{\sqrt{L+2}} + \frac{\sqrt{2K(dM/\delta)^2 + 2K^2G^2 + 2(\alpha KR)^2}\sqrt{n}}{m\alpha K(1-\sigma_2(P))} \le \frac{4R(\ln T)^{1/3}}{T^{1/3}} + \frac{\sqrt{Cn}}{\sqrt{2}\,m\alpha(1-\sigma_2(P))}$$
for any $B \ge m \ge 2$.
Then, with $u_1 = 0$, $\alpha > 0$, $K = T^{2/3}(\ln T)^{-2/3}$, $\delta = cT^{-1/3}(\ln T)^{1/3}$, and $h = \alpha K$, we have
$$3nKG\sum_{m=1}^{B}u_m \le \frac{12nK(B-1)GR(\ln T)^{1/3}}{T^{1/3}} + \frac{3n\sqrt{Cn}G}{\sqrt{2}\alpha(1-\sigma_2(P))}\Big(\frac{T}{\ln T}\Big)^{2/3}\sum_{m=1}^{B}\frac{1}{m} \le 12nGRT^{2/3}(\ln T)^{1/3} + \frac{3n\sqrt{Cn}G}{\sqrt{2}\alpha(1-\sigma_2(P))}\Big(\frac{T}{\ln T}\Big)^{2/3}(1+\ln T) = O\big(n^{3/2}(1-\sigma_2(P))^{-1}T^{2/3}(\log T)^{1/3}\big). \tag{44}$$
Moreover, with $\alpha > 0$, $K = T^{2/3}(\ln T)^{-2/3}$, $\delta = cT^{-1/3}(\ln T)^{1/3}$, and $h = \alpha K$, we have
$$3\delta nGT + \frac{\delta nGRT}{r} + 4nhR^2 = \Big(3cnG + \frac{cnGR}{r}\Big)T^{2/3}(\ln T)^{1/3} + 4n\alpha R^2T^{2/3}(\ln T)^{-2/3} = O(nT^{2/3}(\ln T)^{1/3}). \tag{45}$$
Finally, by combining (37), (42), (43), (44), and (45), our Algorithm 3 with $\alpha > 0$, $K = L = T^{2/3}(\ln T)^{-2/3}$, $\delta = cT^{-1/3}(\ln T)^{1/3}$, and $h = \alpha K$ ensures
$$\mathbb{E}[R_{T,i}] = O\big(n^{3/2}(1-\sigma_2(P))^{-1}T^{2/3}(\log T)^{1/3}\big)$$
for α-strongly convex losses, which completes the proof of Theorem 6.
Proof of Lemma 6
We first notice that
$$\|d_i(m)\|_2^2 = \|\tilde{g}_i(m) - \alpha Kx_i(m)\|_2^2 \le 2\|\tilde{g}_i(m)\|_2^2 + 2\|\alpha Kx_i(m)\|_2^2 \le 2\|\tilde{g}_i(m)\|_2^2 + 2(\alpha KR)^2$$
where the last inequality is due to Assumption 2. Moreover, it is easy to provide an upper bound of $\mathbb{E}[\|\tilde{g}_i(m)\|_2^2]$ by following the proof of Lemma 5 in Garber and Kretzu (2020). We include the detailed proof for completeness. Let $t_j = (m-1)K + j$ for $j = 1, \dots, K$. We have
$$\begin{aligned}\mathbb{E}\big[\|\tilde{g}_i(m)\|_2^2 \,\big|\, x_i(m)\big] &= \mathbb{E}\Big[\sum_{j=1}^{K}g_i(t_j)^\top g_i(t_j)\,\Big|\,x_i(m)\Big] + \mathbb{E}\Big[\sum_{j=1}^{K}\sum_{k\in[K], k\ne j}g_i(t_j)^\top g_i(t_k)\,\Big|\,x_i(m)\Big]\\ &= \mathbb{E}\Big[\sum_{j=1}^{K}\|g_i(t_j)\|_2^2\,\Big|\,x_i(m)\Big] + \sum_{j=1}^{K}\sum_{k\in[K], k\ne j}\mathbb{E}[g_i(t_j)\,|\,x_i(m)]^\top\mathbb{E}[g_i(t_k)\,|\,x_i(m)]\\ &\le K\Big(\frac{dM}{\delta}\Big)^2 + \sum_{j=1}^{K}\sum_{k\in[K], k\ne j}\big\|\mathbb{E}[g_i(t_j)\,|\,x_i(m)]\big\|_2\big\|\mathbb{E}[g_i(t_k)\,|\,x_i(m)]\big\|_2\\ &\le K\Big(\frac{dM}{\delta}\Big)^2 + (K^2-K)G^2 \le K\Big(\frac{dM}{\delta}\Big)^2 + K^2G^2\end{aligned} \tag{46}$$
where the second inequality is due to Lemmas 2 and 8. Therefore, we have
$$\mathbb{E}[\|d_i(m)\|_2^2] \le 2\mathbb{E}[\|\tilde{g}_i(m)\|_2^2] + 2(\alpha KR)^2 = 2\mathbb{E}\big[\mathbb{E}[\|\tilde{g}_i(m)\|_2^2\,|\,x_i(m)]\big] + 2(\alpha KR)^2 \le 2K\Big(\frac{dM}{\delta}\Big)^2 + 2K^2G^2 + 2(\alpha KR)^2.$$
Moreover, according to Jensen's inequality, we have $\mathbb{E}[\|d_i(m)\|_2]^2 \le \mathbb{E}[\|d_i(m)\|_2^2]$.
Experiments
In this section, we perform simulation experiments on the multiclass classification problem and the binary classification problem to verify the performance of our proposed algorithms.
Datasets and Topologies of the Networks
We conduct experiments on four publicly available datasets — aloi, news20, a9a, and ijcnn1 — from the LIBSVM repository (Chang and Lin, 2011); the details of these datasets are summarized in Table 1.

Table 1: Details of the datasets.
Dataset   #Features   #Classes   #Examples
a9a       123         2          32,561
ijcnn1    22          2          49,990
aloi      128         1,000      108,000
news20    62,061      20         15,935

Specifically, aloi and news20 are used in the multiclass classification problem, and the other two datasets are used in the binary classification problem. For any dataset, let $T_e$ denote the number of examples. We first divide it into $n$ equally-sized parts, where each part contains $\lfloor T_e/n\rfloor$ examples, and then distribute them onto the $n$ computing nodes in the network, where $n = 9$ for the multiclass classification problem and $n = 100$ for the binary classification problem. Moreover, each part of the dataset will be reused $n$ times, which implies that the number of rounds $T$ is equal to $n\lfloor T_e/n\rfloor$. To model the distributed network, we use three types of graphs: a complete graph, a two-dimensional grid graph, and a cycle graph. The complete graph is a "well connected" network, where each node is connected to all other nodes. In contrast, the cycle graph is a "poorly connected" network, each node of which is only connected to two other nodes. Moreover, in the two-dimensional grid graph, each node not on the boundary is connected to its four nearest neighbors in axis-aligned directions. Its connectivity lies between that of the complete graph and the cycle graph.
For the weight matrix $P$, we first compute $P_{ij}$ for $i \ne j$ as
$$P_{ij} = \begin{cases}0, & \text{if } j \notin N_i,\\ 1/\max(|N_i|, |N_j|), & \text{if } j \in N_i.\end{cases}$$
Then, we compute $P_{ii} = 1 - \sum_{q\in N_i, q\ne i}P_{iq}$. In this way, we can ensure that $P$ satisfies Assumption 3 for all three types of graphs.
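As a concrete instance (our own sketch, reusing the metropolis_weights helper from the earlier spectral-gap example), the following code builds the two-dimensional grid used below and checks that the resulting P indeed satisfies Assumption 3:

import numpy as np

def grid_adjacency(rows, cols):
    # 2D grid: each non-boundary node is linked to its 4 axis-aligned neighbors.
    n = rows * cols
    adj = np.zeros((n, n))
    for r in range(rows):
        for c in range(cols):
            i = r * cols + c
            if r + 1 < rows:
                adj[i, i + cols] = adj[i + cols, i] = 1
            if c + 1 < cols:
                adj[i, i + 1] = adj[i + 1, i] = 1
    return adj

P = metropolis_weights(grid_adjacency(10, 10))  # n = 100 nodes, as in the binary task
assert np.allclose(P, P.T)                      # symmetric
assert np.allclose(P.sum(axis=1), 1.0)          # doubly stochastic
assert (P >= 0).all()                           # non-negative entries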
Multiclass Classification
Following Zhang et al. (2017), we first compare our D-BOCG against their D-OCG by conducting experiments on distributed online multiclass classification. Let $k$ be the number of features, and let $v$ be the number of classes. In the $t$-th round, after receiving a single example $e_i(t) \in \mathbb{R}^k$, each local learner $i$ chooses a decision matrix $X_i(t) = [x_1; x_2; \dots; x_v] \in \mathbb{R}^{v\times k}$ from the convex set
$$\mathcal{K} = \{X \in \mathbb{R}^{v\times k} \mid \|X\|_* \le \tau\}$$
where $\|X\|_*$ denotes the trace norm of $X$ and $\tau$ is set to 50. Note that $X_i(t)$ can be utilized to predict the class label of $e_i(t)$ by computing $\mathrm{argmax}_{\ell\in[v]}x_\ell^\top e_i(t)$. Then, the true class label $y_i(t) \in \{1, \dots, v\}$ is revealed, which incurs the multivariate logistic loss
$$f_{t,i}(X_i(t)) = \ln\Big(1 + \sum_{\ell\ne y_i(t)}e^{x_\ell^\top e_i(t) - x_{y_i(t)}^\top e_i(t)}\Big).$$
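For reference, here is a minimal sketch (ours, not the authors' code) of the two problem-specific ingredients D-BOCG needs in this task: the multivariate logistic loss, and the linear optimization oracle over the trace-norm ball, which returns $-\tau u_1v_1^\top$ for the top singular pair $(u_1, v_1)$ of the surrogate gradient:

import numpy as np

def multiclass_logistic_loss(X, e, y):
    # X: (v, k) decision matrix, e: (k,) example, y: true label in {0, ..., v-1}
    scores = X @ e
    margins = np.delete(scores - scores[y], y)   # x_l^T e - x_y^T e for l != y
    return np.log(1.0 + np.exp(margins).sum())

def trace_norm_lmo(G, tau):
    # argmin over {V : ||V||_* <= tau} of <G, V> is -tau * u1 v1^T,
    # where (u1, v1) is the top singular pair of G.
    U, _, Vt = np.linalg.svd(G, full_matrices=False)
    return -tau * np.outer(U[:, 0], Vt[0, :])

Only the leading singular pair is needed, which is what makes the linear optimization step much cheaper than projecting onto the trace-norm ball (a full singular value decomposition followed by a projection of the spectrum).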
The average loss of node $i$ at the $t$-th round is defined as
$$AL(t, i) = \frac{1}{tn}\sum_{q=1}^{t}\sum_{j=1}^{n}f_{q,j}(X_i(q)). \tag{47}$$
For both methods, we simply initialize $X_i(1) = 0^{v\times k}$, $\forall i \in [n]$. According to Zhang et al. (2017), we set $s_t = 1/\sqrt{t}$ and $\eta = cT^{-3/4}$ for D-OCG by tuning the constant $c$. Because the multivariate logistic loss is not strongly convex, the parameters of our D-BOCG are selected according to Corollary 1. Specifically, we set $\alpha = 0$, $K = L = \sqrt{T}$, and $h = T^{3/4}/c$ by tuning the constant $c$. For both methods, the constant $c$ is selected from $\{0.01, \dots, 1\mathrm{e}5\}$. Fig. 1 shows the comparison of our D-BOCG and D-OCG on distributed online multiclass classification over the complete graph. We find that the average loss of the worst local node in D-BOCG decreases faster than that of the worst local node in D-OCG as the number of communication rounds increases, which verifies our theoretical results on the regret bound and communication complexity of D-BOCG. Furthermore, Fig. 2 shows comparisons of D-BOCG on distributed online multiclass classification over different graphs. We find that as the graph connectivity improves, the convergence of our D-BOCG is slightly improved, which is also consistent with our theoretical results on the regret bound of D-BOCG.
Binary Classification
We also consider the problem of binary classification in the distributed online learning setting. In the $t$-th round, each local learner $i$ receives a single example $e_i(t) \in \mathbb{R}^d$ and chooses a decision $x_i(t) \in \mathbb{R}^d$ from the convex set
$$\mathcal{K} = \{x \in \mathbb{R}^d \mid \|x\|_1 \le \tau\}$$
where $\tau$ is set to 10. Then, the true class label $y_i(t) \in \{-1, 1\}$ is revealed, and it suffers the regularized hinge loss
$$f_{t,i}(x_i(t)) = \max\big(1 - y_i(t)e_i(t)^\top x_i(t), 0\big) + \lambda\|x_i(t)\|_2^2$$
where $\lambda$ is set to 0.1. Similar to (47), the average loss of node $i$ at the $t$-th round is defined as
$$AL(t, i) = \frac{1}{tn}\sum_{q=1}^{t}\sum_{j=1}^{n}f_{q,j}(x_i(q)).$$
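Analogously, a minimal sketch (ours) of the per-round ingredients for this task: a subgradient of the regularized hinge loss, and the linear optimization oracle over the L1 ball, which puts all its mass on the coordinate of the gradient with the largest magnitude:

import numpy as np

def hinge_subgradient(x, e, y, lam=0.1):
    # subgradient of max(1 - y * e^T x, 0) + lam * ||x||_2^2 at x
    g = 2.0 * lam * x
    if 1.0 - y * (e @ x) > 0:
        g = g - y * e
    return g

def l1_ball_lmo(g, tau):
    # argmin over {v : ||v||_1 <= tau} of g^T v = -tau * sign(g_j) * e_j, j = argmax |g_j|
    j = int(np.argmax(np.abs(g)))
    v = np.zeros_like(g)
    v[j] = -tau * np.sign(g[j])
    return v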
Note that the regularized hinge loss is $2\lambda$-strongly convex. To utilize the strong convexity, we can set the parameters of D-BOCG according to Corollary 2. Moreover, to show the advantage of utilizing the strong convexity, we also run D-BOCG with the parameters in Corollary 1, which only exploit convexity. To distinguish between these two instances of D-BOCG, we denote D-BOCG with the parameters in Corollary 2 as D-BOCG_sc, and D-BOCG with the parameters in Corollary 1 as D-BOCG_c. For D-OCG, D-BOCG_c, and D-BOCG_sc, we simply initialize $x_i(1) = \tau\mathbf{1}/d$, $\forall i \in [n]$, where $\mathbf{1}$ denotes the vector with each entry equal to 1. The parameters of D-OCG are set in the same way as in the previous experiments, and the parameters of D-BOCG_c are set in the same way as those of D-BOCG in the previous experiments. For D-BOCG_sc, according to Corollary 2, we set $\alpha = 2\lambda$ and $K = L = T^{2/3}(\ln T)^{-2/3}$. Although we use $h = \alpha K$ in Corollary 2, in the experiments we set $h = c\alpha K$ by tuning the constant $c$ from $\{1, 2, 3, 4, 5\}$. It is easy to verify that the modified $h$ only affects the constant factor of the original regret bound in Corollary 2. Fig. 3 shows comparisons of D-OCG, D-BOCG_c, and D-BOCG_sc on distributed online binary classification over the complete graph. First, the average loss of the worst local node in D-BOCG_c and D-BOCG_sc decreases faster than that of the worst local node in D-OCG as the number of communication rounds increases, which again validates our advantage in communication complexity. Moreover, our D-BOCG_sc outperforms D-BOCG_c, which further validates the advantage of utilizing the strong convexity. Figs. 4 and 5 show comparisons of D-BOCG_c and D-BOCG_sc on distributed online binary classification over different graphs. We find that the effect of the graph connectivity is similar to that presented in Fig. 2, though the number of nodes increases from 9 to 100.
Then, to verify the performance of our D-BBCG, we compare it with our D-BOCG. Note that D-BBCG only uses approximate gradients generated by the one-point gradient estimator, the performance of which is highly affected by the dimensionality. Therefore, to make a fair comparison, we only use ijcnn1, whose dimensionality is relatively small. Specifically, we denote D-BBCG with the parameters in Theorem 5 as D-BBCG_c, and D-BBCG with the parameters in Theorem 6 as D-BBCG_sc. According to Theorems 5 and 6, we set $\alpha = 0$ for D-BBCG_c and $\alpha = 2\lambda$ for D-BBCG_sc, and initialize $x_i(1) = (1 - \delta\sqrt{d}/\tau)\mathbf{1}/d$, $\forall i \in [n]$, for both D-BBCG_c and D-BBCG_sc. Since D-BBCG_c and D-BBCG_sc are randomized algorithms, we repeat them 10 times and report the average results. Fig. 6 shows comparisons of D-BOCG_c, D-BOCG_sc, D-BBCG_c, and D-BBCG_sc on distributed online binary classification for ijcnn1. For all three types of graphs, we find that D-BBCG_c is worse than D-BOCG_c and D-BBCG_sc is worse than D-BOCG_sc, which is reasonable because D-BBCG_c and D-BBCG_sc work in the more challenging bandit setting. Moreover, D-BBCG_sc is better than D-BBCG_c, which validates the advantage of utilizing the strong convexity in the bandit setting.
Conclusion and Future Work
In this paper, we first propose a projection-free algorithm called D-BOCG for distributed online convex optimization. Our analysis shows that D-BOCG enjoys an O(T 3/4 ) regret bound with O( √ T ) communication rounds for convex losses, and a better regret bound of O(T 2/3 (log T ) 1/3 ) with fewer O(T 1/3 (log T ) 2/3 ) communication rounds for strongly convex losses. In the case with convex losses, the O(T 3/4 ) regret bound of D-BOCG matches the best result established by the existing projection-free algorithm with T communication rounds, and the O( √ T ) communication rounds required by D-BOCG match (in terms of T ) the lower bound for any distributed online algorithm attaining the O(T 3/4 ) regret. In the case with strongly convex losses, we also provide a lower bound to show that the O(T 1/3 (log T ) 2/3 ) communication rounds required by D-BOCG are nearly optimal (in terms of T ) for obtaining the O(T 2/3 (log T ) 1/3 ) regret bound up to polylogarithmic factors. Furthermore, to handle the bandit setting, we propose a bandit variant of D-BOCG, namely D-BBCG, and obtain similar theoretical guarantees.
Besides the future work discussed in Section 5.3 and Remark 10, there are still several open problems to be investigated. First, in the standard OCO, Hazan and Minasyan (2020) have proposed a projection-free algorithm that obtains an expected regret bound of O(T 2/3 ) for convex and smooth losses. It is interesting to extend their algorithm to the distributed setting studied in this paper. However, their algorithm is not based on conditional gradient, which makes the extension non-trivial. Second, in this paper, the weight matrix P is assumed to be symmetric and doubly stochastic. It is appealing to consider a more practical scenario, in which P could be asymmetric or only column (or row) stochastic (Yang et al., 2019;Yi et al., 2020). Finally, we will investigate whether the regret bound for the full information setting can be improved if a few projections are allowed. We note that O(log T ) projections are sufficient to achieve the optimal convergence rate for stochastic optimization of smooth and strongly convex functions (Zhang et al., 2013).
Appendix A. Proof of Corollaries 1 and 2
Corollary 1 can be proved by substituting $\alpha = 0$, $K = L = \sqrt{T}$, and $h = \frac{n^{1/4}T^{3/4}G}{\sqrt{1-\sigma_2(P)}R}$ into Theorem 2, as follows:
$$R_{T,i} \le \frac{12nGRT}{\sqrt{T+2}} + \sum_{m=2}^{B}\frac{3n^{5/4}T^{1/4}GR}{2\sqrt{1-\sigma_2(P)}} + \sum_{m=1}^{B}2\sqrt{1-\sigma_2(P)}\,n^{3/4}T^{1/4}GR + \frac{4n^{5/4}T^{3/4}GR}{\sqrt{1-\sigma_2(P)}} \le 12nGRT^{3/4} + \frac{11n^{5/4}T^{3/4}GR}{2\sqrt{1-\sigma_2(P)}} + 2\sqrt{1-\sigma_2(P)}\,n^{3/4}T^{3/4}GR$$
where the last inequality is due to $B-1 < B = T/K = \sqrt{T}$.

Corollary 2 can be proved by substituting $\alpha > 0$, $K = L = T^{2/3}(\ln T)^{-2/3}$, and $h = \alpha K$ into Theorem 2, as follows:
$$\begin{aligned}R_{T,i} &\le \frac{12nGRT}{\sqrt{L}} + \sum_{m=2}^{B}\frac{3nGK(G+\alpha R)\sqrt{n}}{m\alpha(1-\sigma_2(P))} + \sum_{m=1}^{B}\frac{4nK(G+2\alpha R)^2}{(m+2)\alpha} + 4n\alpha KR^2\\ &\le \frac{12nGRT}{\sqrt{L}} + \Big(\frac{3nG(G+\alpha R)\sqrt{n}}{\alpha(1-\sigma_2(P))} + \frac{4n(G+2\alpha R)^2}{\alpha}\Big)\sum_{m=1}^{B}\frac{K}{m} + 4n\alpha KR^2\\ &\le 12nGRT^{2/3}(\ln T)^{1/3} + \Big(\frac{3nG(G+\alpha R)\sqrt{n}}{\alpha(1-\sigma_2(P))} + \frac{4n(G+2\alpha R)^2}{\alpha}\Big)K(1+\ln B) + 4n\alpha KR^2\\ &\le \Big(\frac{3n^{3/2}G(G+\alpha R)}{\alpha(1-\sigma_2(P))} + \frac{4n(G+2\alpha R)^2}{\alpha}\Big)T^{2/3}\big((\ln T)^{-2/3} + (\ln T)^{1/3}\big) + 12nGRT^{2/3}(\ln T)^{1/3} + 4nR^2\alpha T^{2/3}(\ln T)^{-2/3}\end{aligned}$$
where the last inequality is due to $K = T^{2/3}(\ln T)^{-2/3}$ and $\ln B \le \ln T$.
Appendix B. Proof of Theorem 7
In the beginning, we define several auxiliary variables. Let $\bar{z}(m)=\frac{1}{n}\sum_{i=1}^{n}z_i(m)$ for any $m\in[B+1]$ and $\bar{g}(m)=\frac{1}{n}\sum_{i=1}^{n}g_i(m)$ for any $m\in[B]$. Then, we define $\bar{x}(1)=x_{\mathrm{in}}$ and $\bar{x}(m+1)=\operatorname{argmin}_{x\in\mathcal{K}_\delta}\bar{F}_m(x)$ for any $m\in[B+1]$, where
$$\bar{F}_m(x)=\bar{z}(m)^\top x+h\|x-x_{\mathrm{in}}\|_2^2\,.$$
Similarly, we define $\tilde{x}_i(m+1)=\operatorname{argmin}_{x\in\mathcal{K}_\delta}F_{m,i}(x)$ for any $m\in[B+1]$, where
$$F_{m,i}(x)=z_i(m)^\top x+h\|x-x_{\mathrm{in}}\|_2^2$$
is defined in Algorithm 3 when $\alpha=0$.
Moreover, let $x^*\in\operatorname{argmin}_{x\in\mathcal{K}}\sum_{t=1}^{T}f_t(x)$ and $\hat{x}^*=(1-\delta/r)x^*$. For any $j\in V$ and $t\in[T]$, we denote the $\delta$-smoothed version of $f_{t,j}(x)$ by $\hat{f}_{t,j,\delta}(x)$. Note that as in (32), we have proved that Algorithm 3 ensures
$$R_{T,i}\le\sum_{m=1}^{B}\sum_{t\in\mathcal{T}_m}\sum_{j=1}^{n}\left(\hat{f}_{t,j,\delta}(x_i(m))-\hat{f}_{t,j,\delta}(\hat{x}^*)\right)+3\delta nGT+\frac{\delta nGRT}{r}\,. \tag{48}$$
To bound the term $\sum_{m=1}^{B}\sum_{t\in\mathcal{T}_m}\sum_{j=1}^{n}(\hat{f}_{t,j,\delta}(x_i(m))-\hat{f}_{t,j,\delta}(\hat{x}^*))$ in (48), we assume that for all $i\in V$ and $m=1,\dots,B$, Algorithm 3 ensures that
$$\|g_i(m)\|_2\le\hat{G}=\frac{\xi_T dM\sqrt{K}}{\delta}+KG$$
with $\xi_T=1+\sqrt{8\ln\frac{nB}{\gamma}}$, as established by Lemma 10 below.
Then, we can derive an upper bound of $\|x_i(m)-\bar{x}(m)\|_2$. For any $B\ge m\ge 2$, we note that $F_{m-1,i}(x)$ is $2h$-smooth, and Algorithm 3 ensures
$$x_i(m)=\mathrm{CG}\big(\mathcal{K}_\delta,\,L,\,F_{m-1,i}(x),\,x_i(m-1)\big)\,.$$
According to Lemma 1, Assumption 2, and $\mathcal{K}_\delta\subseteq\mathcal{K}$, for $B\ge m\ge 2$, it is easy to verify that
$$F_{m-1,i}(x_i(m))-F_{m-1,i}(\tilde{x}_i(m))\le\frac{16hR^2}{L+2}\,.$$
Then, for any $B\ge m\ge 2$, we have
$$\begin{aligned}\|x_i(m)-\bar{x}(m)\|_2 &\le \|x_i(m)-\tilde{x}_i(m)\|_2+\|\tilde{x}_i(m)-\bar{x}(m)\|_2\\ &\le \sqrt{\frac{F_{m-1,i}(x_i(m))-F_{m-1,i}(\tilde{x}_i(m))}{h}}+\|\tilde{x}_i(m)-\bar{x}(m)\|_2\\ &\le \frac{4R}{\sqrt{L+2}}+\|\tilde{x}_i(m)-\bar{x}(m)\|_2\\ &\le \frac{4R}{\sqrt{L+2}}+\frac{1}{2h}\big\|z_i(m)-2hx_{\mathrm{in}}-\bar{z}(m)+2hx_{\mathrm{in}}\big\|_2\\ &\le \frac{4R}{\sqrt{L+2}}+\frac{\hat{G}\sqrt{n}}{2h(1-\sigma_2(P))}\end{aligned} \tag{49}$$
where the second inequality is due to the fact that $F_{m-1,i}(x)$ is $2h$-strongly convex and (7), the fourth inequality is due to Lemma 4, and the last inequality is due to Lemma 3.
For brevity, let $\epsilon=\frac{4R}{\sqrt{L+2}}+\frac{\hat{G}\sqrt{n}}{2h(1-\sigma_2(P))}$. By combining (49) with $x_i(1)=\bar{x}(1)=x_{\mathrm{in}}$, for any $m\in[B]$, we have
$$\|x_i(m)-\bar{x}(m)\|_2\le\epsilon\,. \tag{50}$$
For any $i,j\in V$, $m\in[B]$, and $t\in\mathcal{T}_m$, according to Lemma 8 and Assumption 1, $\hat{f}_{t,j,\delta}(x)$ is also convex and $G$-Lipschitz. Then, by combining with (50), we have
$$\begin{aligned}\hat{f}_{t,j,\delta}(x_i(m))-\hat{f}_{t,j,\delta}(\hat{x}^*) &\le \hat{f}_{t,j,\delta}(\bar{x}(m))-\hat{f}_{t,j,\delta}(\hat{x}^*)+G\|\bar{x}(m)-x_i(m)\|_2\\ &\le \hat{f}_{t,j,\delta}(x_j(m))-\hat{f}_{t,j,\delta}(\hat{x}^*)+G\|\bar{x}(m)-x_j(m)\|_2+G\epsilon\\ &\le \nabla\hat{f}_{t,j,\delta}(x_j(m))^\top(x_j(m)-\hat{x}^*)+2G\epsilon\\ &\le \nabla\hat{f}_{t,j,\delta}(x_j(m))^\top(x_j(m)-\bar{x}(m))+\nabla\hat{f}_{t,j,\delta}(x_j(m))^\top(\bar{x}(m)-\hat{x}^*)+2G\epsilon\\ &\le \|\nabla\hat{f}_{t,j,\delta}(x_j(m))\|_2\|x_j(m)-\bar{x}(m)\|_2+\nabla\hat{f}_{t,j,\delta}(x_j(m))^\top(\bar{x}(m)-\hat{x}^*)+2G\epsilon\\ &\le \nabla\hat{f}_{t,j,\delta}(x_j(m))^\top(\bar{x}(m)-\hat{x}^*)+3G\epsilon\,.\end{aligned} \tag{51}$$
By combining (48) with (51), for any $i\in V$, we have
$$R_{T,i}\le\sum_{m=1}^{B}\sum_{t\in\mathcal{T}_m}\sum_{j=1}^{n}\nabla\hat{f}_{t,j,\delta}(x_j(m))^\top(\bar{x}(m)-\hat{x}^*)+3nG\epsilon T+3\delta nGT+\frac{\delta nGRT}{r}\,.$$
Then, to bound $\sum_{m=1}^{B}\sum_{t\in\mathcal{T}_m}\sum_{j=1}^{n}\nabla\hat{f}_{t,j,\delta}(x_j(m))^\top(\bar{x}(m)-\hat{x}^*)$, we introduce the following lemma.
Lemma 9 Let $\bar{z}(m)=\frac{1}{n}\sum_{i=1}^{n}z_i(m)$ for any $m\in[B+1]$ and $\bar{g}(m)=\frac{1}{n}\sum_{i=1}^{n}g_i(m)$ for any $m\in[B]$. Define $\bar{x}(1)=x_{\mathrm{in}}$, where $x_{\mathrm{in}}$ is an input of Algorithm 3. Moreover, define $\bar{F}_m(x)=\bar{z}(m)^\top x+h\|x-x_{\mathrm{in}}\|_2^2$ and $\bar{x}(m+1)=\operatorname{argmin}_{x\in\mathcal{K}_\delta}\bar{F}_m(x)$ for any $m\in[B+1]$. Under Assumptions 1, 2, 3, and an additional assumption that $\|g_i(m)\|_2\le\hat{G}$ for any $i\in V$ and $m\in[B]$, with probability at least $1-\gamma$, Algorithm 3 with $\alpha=0$ has
$$\sum_{m=1}^{B}\sum_{t\in\mathcal{T}_m}\sum_{j=1}^{n}\nabla\hat{f}_{t,j,\delta}(x_j(m))^\top(\bar{x}(m)-\hat{x}^*)\le 2nR(KG+\hat{G})\sqrt{2B\ln\frac{1}{\gamma}}+4nhR^2+\frac{2nB\hat{G}^2}{h}$$
where $\hat{x}^*=(1-\delta/r)x^*$, $x^*\in\operatorname{argmin}_{x\in\mathcal{K}}\sum_{t=1}^{T}f_t(x)$, and $\hat{f}_{t,j,\delta}(x)$ denotes the $\delta$-smoothed version of $f_{t,j}(x)$.
Lemma 10 Under Assumptions 1 and 5, for all $i\in V$ and $m\in[B]$, Algorithm 3 has
$$\|g_i(m)\|_2\le\left(1+\sqrt{8\ln\frac{nB}{\gamma}}\right)\frac{dM\sqrt{K}}{\delta}+KG$$
with probability at least $1-\gamma$.
Then, by applying Lemma 10 with $B=T/K=\sqrt{T}$, we have
$$\Pr(\mathcal{A})\ge 1-\gamma\,. \tag{53}$$
Finally, we complete the proof by combining (52) with (53).
Appendix C. Proof of Lemmas 3 and 7
These two lemmas can be derived by following the proof of Lemma 6 in Zhang et al. (2017). For completeness, we include the detailed proof in this paper. Let $P^s$ denote the $s$-th power of $P$ and $P^s_{ij}$ denote the $j$-th entry of the $i$-th row of $P^s$ for any $s\ge 0$. Note that $P^0$ denotes the identity matrix $I_n$. For $m=1$, it is easy to verify that
$$\|z_i(m)-\bar{z}(m)\|_2=0\le\frac{\sqrt{n}\,\hat{G}}{1-\sigma_2(P)}\,. \tag{54}$$
To analyze the case with $B\ge m\ge 2$, we introduce two intermediate results from Zhang et al. (2017) and Duchi et al. (2011). First, as shown in the proof of Lemma 6 in Zhang et al. (2017), for any $B\ge m\ge 2$, we have
$$\|z_i(m)-\bar{z}(m)\|_2\le\sum_{\tau=1}^{m-1}\sum_{j=1}^{n}\left|P^{m-1-\tau}_{ij}-\frac{1}{n}\right|\|d_j(\tau)\|_2 \tag{55}$$
under Assumption 3. Second, as shown in Appendix B of Duchi et al. (2011), when $P$ is a doubly stochastic matrix, for any positive integer $s$ and any $x$ in the $n$-dimensional probability simplex, it holds that
$$\|P^s x-\mathbf{1}/n\|_1\le\sigma_2^s(P)\sqrt{n} \tag{56}$$
where $\mathbf{1}$ is the all-ones vector in $\mathbb{R}^n$. Let $e_i$ denote the $i$-th canonical basis vector in $\mathbb{R}^n$. By substituting $x=e_i$ into (56), we have
$$\|P^s e_i-\mathbf{1}/n\|_1\le\sigma_2^s(P)\sqrt{n} \tag{57}$$
for any positive integer $s$. If $s=0$, we also have
$$\|P^0 e_i-\mathbf{1}/n\|_1=\frac{2(n-1)}{n}\le\sqrt{n}=\sigma_2^0(P)\sqrt{n} \tag{58}$$
where the inequality is due to n ≥ 1.
Then, for any $B\ge m\ge 2$, by combining (55) with $\|d_j(\tau)\|_2\le\hat{G}$, we have
$$\|z_i(m)-\bar{z}(m)\|_2\le\hat{G}\sum_{\tau=1}^{m-1}\sum_{j=1}^{n}\left|P^{m-1-\tau}_{ij}-\frac{1}{n}\right|=\hat{G}\sum_{\tau=1}^{m-1}\left\|P^{m-1-\tau}e_i-\mathbf{1}/n\right\|_1 \tag{59}$$
where the first equality is due to the symmetry of $P$. Because of (57), (58), and $\sigma_2(P)<1$, for any $B\ge m\ge 2$, we have
$$\|z_i(m)-\bar{z}(m)\|_2\le\hat{G}\sum_{\tau=1}^{m-1}\sigma_2(P)^{m-1-\tau}\sqrt{n}=\left(1-\sigma_2(P)^{m-1}\right)\frac{\hat{G}\sqrt{n}}{1-\sigma_2(P)}\le\frac{\sqrt{n}\,\hat{G}}{1-\sigma_2(P)}\,. \tag{60}$$
By combining (54) and (60), we can complete the proof of Lemma 3. Furthermore, by taking the expectation on both sides of (55) and combining with $\mathbb{E}[\|d_i(m)\|_2]\le\hat{G}$, we can prove Lemma 7 in a similar way.
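As a quick sanity check (not part of the original proof), the mixing inequality (57) is easy to probe numerically. A minimal sketch in Python, assuming a lazy-ring gossip matrix as a purely illustrative choice of a symmetric, doubly stochastic $P$:

```python
import numpy as np

# Numerical illustration of the mixing bound (57):
# ||P^s e_i - (1/n) 1||_1 <= sigma_2(P)^s * sqrt(n).
n = 8
P = np.zeros((n, n))
for i in range(n):
    P[i, (i + 1) % n] = P[i, (i - 1) % n] = 0.25
    P[i, i] = 0.5                         # symmetric and doubly stochastic

sigma2 = np.sort(np.abs(np.linalg.eigvalsh(P)))[-2]   # 2nd largest singular value
x = np.eye(n)[0]                          # e_1
for s in range(1, 20):
    x = P @ x                             # x = P^s e_1
    lhs = np.abs(x - 1.0 / n).sum()
    assert lhs <= sigma2**s * np.sqrt(n) + 1e-12, (s, lhs)
print("mixing bound (57) verified for s = 1, ..., 19")
```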
Appendix D. Proof of Lemma 9
We first introduce the classical Azuma's inequality (Azuma, 1967) for martingales in the following lemma.

Lemma 11 Suppose $D_1,\dots,D_s$ is a martingale difference sequence and $|D_j|\le c_j$ almost surely. Then, for any $\Delta>0$,
$$\Pr\left[\sum_{j=1}^{s}D_j\ge\Delta\right]\le\exp\left(\frac{-\Delta^2}{2\sum_{j=1}^{s}c_j^2}\right).$$
To apply Lemma 11, with $\mathcal{T}_m=\{(m-1)K+1,\dots,mK\}$, we define
$$D_m=\sum_{j=1}^{n}\left(\sum_{t\in\mathcal{T}_m}\nabla\hat{f}_{t,j,\delta}(x_j(m))-g_j(m)\right)^\top(\bar{x}(m)-\hat{x}^*)\,. \tag{61}$$
According to Algorithm 3 and Lemma 2, we have
$$\mathbb{E}\left[D_m\,\middle|\,x_1(m),\dots,x_n(m),\bar{x}(m)\right]=0$$
which further implies that $D_1,\dots,D_B$ is a martingale difference sequence with
$$\begin{aligned}|D_m| &\le \sum_{j=1}^{n}\left\|\sum_{t\in\mathcal{T}_m}\nabla\hat{f}_{t,j,\delta}(x_j(m))-g_j(m)\right\|_2\|\bar{x}(m)-\hat{x}^*\|_2\\ &\le 2R\sum_{j=1}^{n}\left(\sum_{t\in\mathcal{T}_m}\|\nabla\hat{f}_{t,j,\delta}(x_j(m))\|_2+\|g_j(m)\|_2\right)\\ &\le 2R\sum_{j=1}^{n}\sum_{t\in\mathcal{T}_m}\|\nabla\hat{f}_{t,j,\delta}(x_j(m))\|_2+2nR\hat{G}\\ &\le 2nRKG+2nR\hat{G}\end{aligned} \tag{62}$$
where the second inequality is due to Assumption 2, and the last inequality is due to Lemma 8 and $|\mathcal{T}_m|=K$.
Then, by applying Lemma 11 with $\Delta=2nR(KG+\hat{G})\sqrt{2B\ln\frac{1}{\gamma}}$, with probability at least $1-\gamma$, we have
$$\sum_{m=1}^{B}D_m\le 2nR(KG+\hat{G})\sqrt{2B\ln\frac{1}{\gamma}} \tag{63}$$
which, combined with the definition of $D_m$ in (61), controls the first part of the decomposition
$$\sum_{m=1}^{B}\sum_{t\in\mathcal{T}_m}\sum_{j=1}^{n}\nabla\hat{f}_{t,j,\delta}(x_j(m))^\top(\bar{x}(m)-\hat{x}^*)=\sum_{m=1}^{B}D_m+n\sum_{m=1}^{B}\bar{g}(m)^\top(\bar{x}(m)-\hat{x}^*)\,. \tag{64}$$
Therefore, we still need to bound $\sum_{m=1}^{B}\bar{g}(m)^\top(\bar{x}(m)-\hat{x}^*)$. According to Assumption 3, it is easy to verify that Algorithm 3 with $\alpha=0$ ensures
$$\bar{z}(m+1)=\frac{1}{n}\sum_{i=1}^{n}\left(\sum_{j=1}^{n}P_{ij}z_j(m)+g_i(m)\right)=\bar{z}(m)+\bar{g}(m)\,.$$
Moreover, according to the definition, for any $m\in[B+1]$, we have $\bar{x}(m+1)=\operatorname{argmin}_{x\in\mathcal{K}_\delta}\bar{F}_m(x)=\operatorname{argmin}_{x\in\mathcal{K}_\delta}\bar{z}(m)^\top x+h\|x-x_{\mathrm{in}}\|_2^2$. By applying Lemma 5 with the linear loss functions $\{\bar{g}(m)^\top x\}_{m=1}^{B}$, the decision set $\mathcal{K}=\mathcal{K}_\delta$ and the regularizer $\mathcal{R}(x)=h\|x-x_{\mathrm{in}}\|_2^2$, we have
$$\sum_{m=1}^{B}\bar{g}(m)^\top(\bar{x}(m+1)-\hat{x}^*)\le h\|\hat{x}^*-x_{\mathrm{in}}\|_2^2+\sum_{m=1}^{B}\frac{\|\bar{g}(m)\|_2^2}{h}\le 4hR^2+\sum_{m=1}^{B}\frac{\|\bar{g}(m)\|_2^2}{h} \tag{65}$$
where the last inequality is due to Assumption 2.
Note that $\bar{F}_{m+1}(x)$ is $2h$-strongly convex and $\bar{x}(m+2)=\operatorname{argmin}_{x\in\mathcal{K}_\delta}\bar{F}_{m+1}(x)$. For any $m\in[B]$, we have
$$\begin{aligned}h\|\bar{x}(m+1)-\bar{x}(m+2)\|_2^2 &\le \bar{F}_{m+1}(\bar{x}(m+1))-\bar{F}_{m+1}(\bar{x}(m+2))\\ &= \bar{F}_m(\bar{x}(m+1))+\bar{g}(m)^\top\bar{x}(m+1)-\bar{F}_m(\bar{x}(m+2))-\bar{g}(m)^\top\bar{x}(m+2)\\ &\le \|\bar{g}(m)\|_2\|\bar{x}(m+1)-\bar{x}(m+2)\|_2\end{aligned}$$
where the first inequality is due to (7) and the second inequality is due to $\bar{x}(m+1)=\operatorname{argmin}_{x\in\mathcal{K}_\delta}\bar{F}_m(x)$. The above inequality implies that for any $m\in[B]$, it holds that
$$\|\bar{x}(m+1)-\bar{x}(m+2)\|_2\le\frac{\|\bar{g}(m)\|_2}{h}\,.$$
By combining with (65), we have
$$\begin{aligned}\sum_{m=1}^{B}\bar{g}(m)^\top(\bar{x}(m)-\hat{x}^*) &= \sum_{m=1}^{B}\bar{g}(m)^\top(\bar{x}(m)-\bar{x}(m+1))+\sum_{m=1}^{B}\bar{g}(m)^\top(\bar{x}(m+1)-\hat{x}^*)\\ &\le \sum_{m=2}^{B}\frac{\|\bar{g}(m)\|_2\|\bar{g}(m-1)\|_2}{h}+4hR^2+\sum_{m=1}^{B}\frac{\|\bar{g}(m)\|_2^2}{h}\end{aligned} \tag{66}$$
where the last inequality is due to $\bar{x}(1)=x_{\mathrm{in}}$ and $\bar{x}(2)=\operatorname{argmin}_{x\in\mathcal{K}_\delta}\bar{F}_1(x)=x_{\mathrm{in}}$. Since $\|g_i(m)\|_2\le\hat{G}$, for any $m\in[B]$, we also have
$$\|\bar{g}(m)\|_2=\left\|\frac{1}{n}\sum_{i=1}^{n}g_i(m)\right\|_2\le\frac{1}{n}\sum_{i=1}^{n}\|g_i(m)\|_2\le\hat{G}\,. \tag{67}$$
By substituting (67) into (66), we have
$$\sum_{m=1}^{B}\bar{g}(m)^\top(\bar{x}(m)-\hat{x}^*)\le 4hR^2+\frac{(2B-1)\hat{G}^2}{h}\le 4hR^2+\frac{2B\hat{G}^2}{h}\,. \tag{68}$$
Finally, by substituting (63) and (68) into (64), we complete the proof.
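Since Azuma's inequality (Lemma 11) carries the whole high-probability argument, a small Monte-Carlo check may be useful. The sketch below uses i.i.d. Rademacher steps, which form a valid martingale difference sequence, purely as an illustration:

```python
import numpy as np

# Monte-Carlo sanity check of Azuma's inequality:
# Pr[sum_j D_j >= Delta] <= exp(-Delta^2 / (2 sum_j c_j^2)) with |D_j| <= c_j.
rng = np.random.default_rng(1)
s, trials = 64, 100_000
c = np.ones(s)                                   # |D_j| <= 1
sums = rng.choice([-1.0, 1.0], size=(trials, s)).sum(axis=1)
for delta in (5.0, 10.0, 20.0):
    emp = (sums >= delta).mean()
    bound = np.exp(-delta**2 / (2 * c.dot(c)))
    print(f"Delta={delta:5.1f}  empirical={emp:.4f}  Azuma bound={bound:.4f}")
    assert emp <= bound + 1e-3
```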
Appendix E. Proof of Lemma 10
This proof is inspired by the proof of Theorem 12 in Gross (2011), which gave the classical Bernstein inequality for independent vector-valued random variables. However, the vector-valued random variables in this proof are only conditionally independent, and we do not need to use the Bernstein inequality to incorporate the variance information. According to Algorithm 3, for any $i\in V$ and $m=1,\dots,B$, conditioned on $x_i(m)$, the estimates
$$g_i((m-1)K+1),\dots,g_i(mK)$$
are $K$ independent random vectors. For brevity, for $j=1,\dots,K$, let
$$X_j=g_i(t_j)$$
where $t_j=(m-1)K+j$, and let
$$N=\left\|\sum_{j=1}^{K}X_j\right\|_2\,,\qquad S_j=\sum_{k\neq j}X_k\,.$$
To bound $N$ by using Lemma 11, we define $\mathcal{X}_0=\{x_i(m)\}$, $\mathcal{X}_j=\{x_i(m),X_1,\dots,X_j\}$ for $j\ge 1$ and a sequence $D_1,\dots,D_K$ as
$$D_j=\mathbb{E}[N|\mathcal{X}_j]-\mathbb{E}[N|\mathcal{X}_{j-1}]\,.$$
It is not hard to verify that
$$\mathbb{E}[D_j|\mathcal{X}_{j-1}]=\mathbb{E}\big[\mathbb{E}[N|\mathcal{X}_j]-\mathbb{E}[N|\mathcal{X}_{j-1}]\,\big|\,\mathcal{X}_{j-1}\big]=0$$
which implies that $D_1,\dots,D_K$ is a martingale difference sequence.
Then, using the triangle inequality, we have
$$N\le\|S_j\|_2+\|X_j\|_2\qquad\text{and}\qquad N\ge\|S_j\|_2-\|X_j\|_2\,. \tag{69}$$
Moreover, according to Algorithm 3 and Assumption 5, we have
$$\|X_j\|_2=\left\|\frac{d}{\delta}f_{t_j,i}(y_i(t_j))\,u_i(t_j)\right\|_2\le\frac{dM}{\delta}\,.$$
Therefore, by combining with (69), we have
$$N\le\|S_j\|_2+\frac{dM}{\delta}\qquad\text{and}\qquad N\ge\|S_j\|_2-\frac{dM}{\delta}\,.$$
Then, we have
$$D_j\le\mathbb{E}\big[\|S_j\|_2\big|\mathcal{X}_j\big]+\frac{dM}{\delta}-\mathbb{E}\big[\|S_j\|_2\big|\mathcal{X}_{j-1}\big]+\frac{dM}{\delta}=\frac{2dM}{\delta}$$
$$D_j\ge\mathbb{E}\big[\|S_j\|_2\big|\mathcal{X}_j\big]-\frac{dM}{\delta}-\mathbb{E}\big[\|S_j\|_2\big|\mathcal{X}_{j-1}\big]-\frac{dM}{\delta}=-\frac{2dM}{\delta}$$
where the above two equalities are due to $\mathbb{E}[\|S_j\|_2|\mathcal{X}_j]=\mathbb{E}[\|S_j\|_2|\mathcal{X}_{j-1}]$, because $S_j$ does not depend on $X_j$ given $x_i(m)$. Therefore, we have $|D_j|\le\frac{2dM}{\delta}$. Let $\Delta=\frac{\sqrt{K}dM}{\delta}\sqrt{8\ln\frac{nB}{\gamma}}$. Then, by applying Lemma 11, with probability at least $1-\frac{\gamma}{nB}$, we have
$$N-\mathbb{E}[N|x_i(m)]=\mathbb{E}[N|\mathcal{X}_K]-\mathbb{E}[N|\mathcal{X}_0]=\sum_{j=1}^{K}D_j\le\frac{\sqrt{K}dM}{\delta}\sqrt{8\ln\frac{nB}{\gamma}}$$
which implies that
$$\|g_i(m)\|_2=N\le\frac{\sqrt{K}dM}{\delta}\sqrt{8\ln\frac{nB}{\gamma}}+\mathbb{E}[N|x_i(m)]\le\frac{\sqrt{K}dM}{\delta}\sqrt{8\ln\frac{nB}{\gamma}}+\sqrt{\mathbb{E}[N^2|x_i(m)]}$$
where the last inequality is due to Jensen's inequality. By combining the above inequality with $N^2=\|g_i(m)\|_2^2$ and (46), with probability at least $1-\frac{\gamma}{nB}$, we have
$$\|g_i(m)\|_2\le\left(1+\sqrt{8\ln\frac{nB}{\gamma}}\right)\frac{dM\sqrt{K}}{\delta}+KG\,.$$
Finally, by using the union bound, we complete the proof for all i ∈ V and m = 1, . . . , B.
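The one-point estimator bounded in this proof is easy to illustrate in isolation. A minimal sketch, assuming a simple quadratic test loss (not one of the paper's losses), showing that $g=(d/\delta)f(x+\delta u)u$ with $u$ uniform on the unit sphere has norm at most $dM/\delta$ and that its mean estimates the gradient of the smoothed loss:

```python
import numpy as np

# One-point gradient estimator used in the bandit setting:
# g = (d/delta) f(x + delta*u) u, u uniform on the unit sphere.
rng = np.random.default_rng(2)
d, delta, n_samples = 5, 0.1, 200_000
f = lambda X: 0.5 * np.sum(X * X, axis=-1)        # smooth convex test loss
x = rng.standard_normal(d) * 0.3

U = rng.standard_normal((n_samples, d))
U /= np.linalg.norm(U, axis=1, keepdims=True)     # uniform on the sphere
G = (d / delta) * f(x + delta * U)[:, None] * U   # per-sample norm <= d*max|f|/delta
print("estimated gradient:", np.round(G.mean(axis=0), 3))
print("true gradient     :", np.round(x, 3))      # grad f(x) = x for this f
```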
Theorem 5 Let $\alpha=0$, $K=L=\sqrt{T}$, $h=\frac{n^{1/4}dMT^{3/4}}{\sqrt{1-\sigma_2(P)}R}$, and $\delta=cT^{-1/4}$, where $c>0$ is a constant such that $\delta\le r$. Under Assumptions 1, 2, 3, and 5, for any $i\in V$, Algorithm 3 ensures $\mathbb{E}[R_{T,i}]=O\big(n^{5/4}(1-\sigma_2(P))^{-1/2}T^{3/4}\big)$.

Theorem 6 Let $\alpha>0$, $K=T^{2/3}(\ln T)^{-2/3}$, $\delta=cT^{-1/3}(\ln T)^{1/3}$, and $h=\alpha K$, where $c>0$ is a constant such that $\delta\le r$. Under Assumptions 1, 2, 3, 4, and 5, for any $i\in V$, Algorithm 3 ensures $\mathbb{E}[R_{T,i}]=O\big(T^{2/3}(\ln T)^{1/3}\big)$.

Theorems 5 and 6 show that D-BBCG can attain an expected regret bound of $O(T^{3/4})$ with $O(\sqrt{T})$ communication rounds for convex losses, and an expected regret bound of $O(T^{2/3}(\log T)^{1/3})$ with $O(T^{1/3}(\log T)^{2/3})$ communication rounds for strongly convex losses, which is similar to D-BOCG in the full information setting. Moreover, we show that D-BBCG enjoys a high-probability regret bound of $O(T^{3/4}(\log T)^{1/2})$ with $O(\sqrt{T})$ communication rounds for convex losses.
Let $\bar{z}(m)=\frac{1}{n}\sum_{i=1}^{n}z_i(m)$ for $m\in[B+1]$, and let $d_i(m)=g_i(m)-\alpha Kx_i(m)$ and $\bar{d}(m)=\frac{1}{n}\sum_{i=1}^{n}d_i(m)$ for $m\in[B]$. Then, we define $\bar{x}(1)=x_{\mathrm{in}}$ and $\bar{x}(m+1)=\operatorname{argmin}_{x\in\mathcal{K}_\delta}\bar{F}_m(x)$ for any $m\in[B+1]$, where $\bar{F}_m(x)$ is the averaged counterpart of $F_{m,i}(x)$.
Now, we derive an upper bound of $\mathbb{E}[\|x_i(m)-\bar{x}(m)\|_2]$ for any $m\in[B]$. If $m=1$, according to the definition and Algorithm 3, it is easy to verify that $x_i(1)=\bar{x}(1)=x_{\mathrm{in}}$, so the distance is zero.
By combining (37), (38), (39), (40), and (41), our Algorithm 3 with $\alpha=0$, $K=L=\sqrt{T}$, $h=\frac{n^{1/4}dMT^{3/4}}{\sqrt{1-\sigma_2(P)}R}$, and $\delta=cT^{-1/4}$ ensures $\mathbb{E}[R_{T,i}]=O\big(n^{5/4}(1-\sigma_2(P))^{-1/2}T^{3/4}\big)$.
Figure 1: Comparisons of D-BOCG and D-OCG on distributed online multiclass classification over the complete graph.
Figure 2: Comparisons of D-BOCG on distributed online multiclass classification over different graphs.
Figure 3: Comparisons of D-OCG, D-BOCG c , and D-BOCG sc on distributed online binary classification over the complete graph.
Figure 4: Comparisons of D-BOCG c on distributed online binary classification over different graphs.
Figure 5: Comparisons of D-BOCG sc on distributed online binary classification over different graphs.
For D-BBCG c , we set $K=L=\sqrt{T}$, $\delta=10T^{-1/4}$, and $h=T^{3/4}/c$, where the constant $c$ is tuned from $\{0.01,\dots,1\mathrm{e}5\}$; for D-BBCG sc , we set $\alpha=2\lambda$, $K=L=T^{2/3}(\ln T)^{-2/3}$, $\delta=10T^{-1/3}(\ln T)^{1/3}$, and $h=c\alpha K$, where the constant $c$ is tuned from $\{1,2,3,4,5\}$. Moreover, we initialize
Figure 6: Comparisons of D-BOCG c , D-BOCG sc , D-BBCG c , and D-BBCG sc on distributed online binary classification for ijcnn1.
Table 1: Summary of datasets, listing for each dataset the number of features (# Features), the number of classes (# Classes), and the number of examples (# Examples).
The remaining $T_e-n\lfloor T_e/n\rfloor$ examples are not used.
According to Lemma 9, by assuming that $\|g_i(m)\|_2\le\hat{G}$ for any $i\in V$ and $m\in[B]$, with probability at least $1-\gamma$, we have the bound stated above. Let $\mathcal{A}$ denote the event that $\|g_i(m)\|_2\le\hat{G}$ for all $i\in V$ and $m\in[B]$. Because we have used the event $\mathcal{A}$ as a fact, the above result should be formulated as a statement conditioned on $\mathcal{A}$, which gives (52). Furthermore, we introduce the following lemma with respect to the probability of the event $\mathcal{A}$.
Jacob Abernethy, Peter L. Bartlett, Alexander Rakhlin, and Ambuj Tewari. Optimal strategies and minimax lower bounds for online convex games. In Proceedings of the 21st Annual Conference on Learning Theory, pages 415-423, 2008.
Alekh Agarwal, Ofer Dekel, and Lin Xiao. Optimal algorithms for online convex optimization with multi-point bandit feedback. In Proceedings of the 23rd Annual Conference on Learning Theory, pages 28-40, 2010.
Amit Agarwal, Elad Hazan, Satyen Kale, and Robert E. Schapire. Algorithms for portfolio management based on the newton method. In Proceedings of the 23rd International Conference on Machine Learning, pages 9-16, 2006.
Baruch Awerbuch and Robert D. Kleinberg. Adaptive routing with end-to-end feedback: Distributed learning and geometric approaches. In Proceedings of the 36th Annual ACM Symposium on Theory of Computing, pages 45-53, 2004.
Baruch Awerbuch and Robert D. Kleinberg. Online linear optimization and adaptive routing. Journal of Computer and System Sciences, 74(1):97-114, 2008.
Kazuoki Azuma. Weighted sums of certain dependent random variables. Tohoku Mathematical Journal, 19(3):357-367, 1967.
Florence Bénézit, Alexandros G. Dimakis, Patrick Thiran, and Martin Vetterli. Gossip along the way: order-optimal consensus through randomized path averaging. In Proceedings of the 45th Annual Allerton Conference on Communication, Control, and Computing, 2007.
Avrim Blum and Adam Kalai. Universal portfolios with and without transaction costs. Machine Learning, 35(3):193-205, 1999.
Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
Nicolò Cesa-Bianchi, Yoav Freund, David Haussler, David P. Helmbold, Robert E. Schapire, and Manfred K. Warmuth. How to use expert advice. Journal of the ACM, 44(3):427-485, 1997.
Chih-Chung Chang and Chih-Jen Lin. LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2(27):1-27, 2011.
Lin Chen, Christopher Harshaw, Hamed Hassani, and Amin Karbasi. Projection-free online optimization with stochastic gradient: From convexity to submodularity. In Proceedings of the 35th International Conference on Machine Learning, pages 814-823, 2018.
Lin Chen, Mingrui Zhang, and Amin Karbasi. Projection-free bandit convex optimization. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, pages 2047-2056, 2019.
John C. Duchi, Alekh Agarwal, and Martin J. Wainwright. Dual averaging for distributed optimization: Convergence analysis and network scaling. IEEE Transactions on Automatic Control, 57(3):592-606, 2011.
John C. Duchi, Michael I. Jordan, Martin J. Wainwright, and Andre Wibisono. Optimal rates for zero-order convex optimization: The power of two function evaluations. IEEE Transactions on Information Theory, 61(5):2788-2806, 2015.
Abraham D. Flaxman, Adam Tauman Kalai, and H. Brendan McMahan. Online convex optimization in the bandit setting: Gradient descent without a gradient. In Proceedings of the 16th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 385-394, 2005.
Marguerite Frank and Philip Wolfe. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3(1-2):95-110, 1956.
Yoav Freund, Robert E. Schapire, Yoram Singer, and Manfred K. Warmuth. Using and combining predictors that specialize. In Proceedings of the 29th Annual ACM Symposium on Theory of Computing, pages 334-343, 1997.
Dan Garber and Elad Hazan. A linearly convergent conditional gradient algorithm with applications to online and stochastic optimization. SIAM Journal on Optimization, 26(3):1493-1528, 2016.
Dan Garber and Ben Kretzu. Improved regret bounds for projection-free bandit convex optimization. In Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, pages 2196-2206, 2020.
Dan Garber and Ben Kretzu. Revisiting projection-free online learning: the strongly convex case. In Proceedings of the 24th International Conference on Artificial Intelligence and Statistics, pages 3592-3600, 2021.
David Gross. Recovering low-rank matrices from few coefficients in any basis. IEEE Transactions on Information Theory, 57(3):1548-1566, 2011.
Elad Hazan. Introduction to online convex optimization. Foundations and Trends in Optimization, 2(3-4):157-325, 2016.
Elad Hazan and Satyen Kale. Projection-free online learning. In Proceedings of the 29th International Conference on Machine Learning, pages 1843-1850, 2012.
Elad Hazan and Edgar Minasyan. Faster projection-free online learning. In Proceedings of the 33rd Annual Conference on Learning Theory, pages 1877-1893, 2020.
Saghar Hosseini, Airlie Chapman, and Mehran Mesbahi. Online distributed optimization via dual averaging. In 52nd IEEE Conference on Decision and Control, pages 1484-1489, 2013.
Martin Jaggi. Revisiting frank-wolfe: Projection-free sparse convex optimization. In Proceedings of the 30th International Conference on Machine Learning, pages 427-435, 2013.
Prateek Jain, Brian Kulis, Inderjit S. Dhillon, and Kristen Grauman. Online metric learning and fast similarity search. In Advances in Neural Information Processing Systems 21, pages 761-768, 2008.
Anastasia Koloskova, Sebastian U. Stich, and Martin Jaggi. Decentralized stochastic optimization and gossip algorithms with compressed communication. In Proceedings of the 36th International Conference on Machine Learning, pages 3478-3487, 2019.
Kfir Y. Levy and Andreas Krause. Projection free online learning over smooth sets. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics, pages 1458-1466, 2019.
Dan Li, Kerry D. Wong, Yu H. Hu, and Akbar M. Sayeed. Detection, classification and tracking of targets in distributed sensor networks. IEEE Signal Processing Magazine, 19(2):17-29, 2002.
Angelia Nedić, Alex Olshevsky, Asuman Ozdaglar, and John N. Tsitsiklis. On distributed averaging algorithms and quantization effects. IEEE Transactions on Automatic Control, 54(11):2506-2517, 2009.
S. Sundhar Ram, A. Nedić, and V. V. Veeravalli. Distributed stochastic subgradient projection algorithms for convex optimization. Journal of Optimization Theory and Applications, 147(3):516-545, 2010.
Shai Shalev-Shwartz. Online learning and online convex optimization. Foundations and Trends in Machine Learning, 4(2):107-194, 2011.
Grigorios Tsagkatakis and Andreas Savakis. Online distance metric learning for object tracking. IEEE Transactions on Circuits and Systems for Video Technology, 21(12):1810-1821, 2011.
Konstantinos I. Tsianos and Michael G. Rabbat. Distributed strongly convex optimization. In Proceedings of the 50th Annual Allerton Conference on Communication, Control, and Computing, pages 593-600, 2012.
Yuanyu Wan and Lijun Zhang. Projection-free online learning over strongly convex sets. In Proceedings of the 35th AAAI Conference on Artificial Intelligence, pages 10076-10084, 2021.
Yuanyu Wan, Wei-Wei Tu, and Lijun Zhang. Projection-free distributed online convex optimization with O(√T) communication complexity. In Proceedings of the 37th International Conference on Machine Learning, pages 9818-9828, 2020.
Yuanyu Wan, Wei-Wei Tu, and Lijun Zhang. Online strongly convex optimization with unknown delays. Machine Learning, 2021.
Lin Xiao, Stephen Boyd, and Seung-Jean Kim. Distributed average consensus with least-mean-square deviation. Journal of Parallel and Distributed Computing, 67(1):33-46, 2007.
Feng Yan, Shreyas Sundaram, S. V. N. Vishwanathan, and Yuan Qi. Distributed autonomous online learning: Regrets and intrinsic privacy-preserving properties. IEEE Transactions on Knowledge and Data Engineering, 25(11):2483-2493, 2013.
Tao Yang, Xinlei Yi, Junfeng Wu, Ye Yuan, Di Wu, Ziyang Meng, Yiguang Hong, Hong Wang, Zongli Lin, and Karl H. Johansson. A survey of distributed optimization. Annual Reviews in Control, 47:278-305, 2019.
Xinlei Yi, Xiuxian Li, Lihua Xie, and Karl H. Johansson. Distributed online convex optimization with time-varying coupled inequality constraints. IEEE Transactions on Signal Processing, 68:731-746, 2020.
Lijun Zhang, Tianbao Yang, Rong Jin, and Xiaofei He. O(log T) projections for stochastic optimization of smooth and strongly convex functions. In Proceedings of the 30th International Conference on Machine Learning, pages 1121-1129, 2013.
Wenpeng Zhang, Peilin Zhao, Wenwu Zhu, Steven C. H. Hoi, and Tong Zhang. Projection-free distributed online learning in networks. In Proceedings of the 34th International Conference on Machine Learning, pages 4054-4062, 2017.
Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the 20th International Conference on Machine Learning, pages 928-936, 2003.
| [] |
[
"Zero-mode entanglement across a conformal defect",
"Zero-mode entanglement across a conformal defect"
] | [
"Luca Capizzi \nSISSA and INFN Sezione di Trieste\nvia Bonomea 265I-34136TriesteItaly\n",
"Viktor Eisler \nInstitute of Theoretical and Computational Physics\nGraz University of Technology\nPetersgasse 16A-8010GrazAustria\n"
] | [
"SISSA and INFN Sezione di Trieste\nvia Bonomea 265I-34136TriesteItaly",
"Institute of Theoretical and Computational Physics\nGraz University of Technology\nPetersgasse 16A-8010GrazAustria"
] | [] | We consider a free-fermion chain with a conformal defect that features an extended zero mode, and study the entanglement properties in its mixed ground state. The zero-mode induced degeneracy modifies the density of states in the single-particle entanglement spectrum, which can be calculated via the full counting statistics. For a homogeneous chain, the resulting change in the Rényi entropy is derived analytically for arbitrary subsystem ratios in the thermodynamic limit. For a conformal defect located in the center, analogous results can be obtained for the half-chain entanglement. In particular, we observe parity effects for half-chains with even/odd sites, which do not decay with size. | 10.1088/1742-5468/acd68f | [
"https://export.arxiv.org/pdf/2303.10425v1.pdf"
] | 257,632,524 | 2303.10425 | e8aabc3b988585a80b3f1cfdb40415bc4cc447e2 |
Zero-mode entanglement across a conformal defect
Luca Capizzi
SISSA and INFN Sezione di Trieste
via Bonomea 265I-34136TriesteItaly
Viktor Eisler
Institute of Theoretical and Computational Physics
Graz University of Technology
Petersgasse 16A-8010GrazAustria
Zero-mode entanglement across a conformal defect
We consider a free-fermion chain with a conformal defect that features an extended zero mode, and study the entanglement properties in its mixed ground state. The zero-mode induced degeneracy modifies the density of states in the single-particle entanglement spectrum, which can be calculated via the full counting statistics. For a homogeneous chain, the resulting change in the Rényi entropy is derived analytically for arbitrary subsystem ratios in the thermodynamic limit. For a conformal defect located in the center, analogous results can be obtained for the half-chain entanglement. In particular, we observe parity effects for half-chains with even/odd sites, which do not decay with size.
spectrum. For a single zero mode, the proper ground state of the system is then a mixture of two pure states, with the zero mode either empty or occupied. Interestingly, the entropy of the mixed ground state differs from the pure state one by a nontrivial function of the subsystem ratio, which was calculated analytically for free Dirac or Majorana fermions [19][20][21]. The result has later been verified for a quantum Ising chain with a topological defect [22,23], which describes a special boundary condition that produces a zero mode [24]. One should stress that this zero mode is an extended excitation, and should not be confused with the ones that are localized at the boundaries, as found in gapped phases of various quantum chains [25][26][27].
The topological defect corresponds to perfect transmission and thus reproduces the zero-mode entropy found for a periodic chain [20]. Here we address the question, how the zero-mode contribution is altered for a defect with imperfect transmission. In fact, the presence of a defect at the subsystem boundary in free-particle chains is known to modify the logarithmic scaling of the entropy, leading to a prefactor (also dubbed as effective central charge) which depends on the transmission properties of the defect. This was first investigated numerically in free-fermion and transverse Ising chains [28][29][30], and the analytic expression of the effective central charge was found in [31,32]. On the CFT side, the entanglement across conformal defects was studied for the free boson [33] and the Ising model [34], perfectly matching the lattice results. Generalizations for CFT junctions with multiple wires were considered in [35][36][37].
In this paper we consider a free-fermion chain with a conformal defect which supports an exact zero mode, and study the corresponding finite contribution to the half-chain entropy in its mixed ground state. The conformal defect on the lattice mimics the scale-invariant properties of a conformal interface in CFT [38], and was first studied in [39]. Most importantly, it allows one to establish an exact relation between the half-chain entanglement spectrum of the defect as well as that of the homogeneous chain. We use this relation to derive an analytical prediction for the zeromode entropy, thus generalizing the studies of Ref. [22] to a chain with imperfect transmission. In particular, our result shows parity effects in terms of the half-chain length, which vanish only when the defect is completely transmissive. Our analytical predictions, derived in the thermodynamic limit, are in perfect agreement with the numerical results.
We organize our manuscript as follows. In Sec. II we introduce the model and the methods employed. In Sec. III we characterize the zero-mode entropy in the homogeneous chain for arbitrary subsystem ratios, reproducing the results of [20] in an alternative way. In Sec. IV we compute the zero-mode entropy of the half-chain in the presence of a conformal defect. We summarize and discuss our results in Sec. V, leaving some technical details of the calculations in three appendices.
II. MODEL AND METHODS
We consider hopping chains described by the Hamiltonian
$$\hat{H}=\sum_{m,n}H_{m,n}\,c^\dagger_m c_n\,, \tag{1}$$
where $c^\dagger_m$ and $c_m$ are fermionic creation/annihilation operators satisfying anticommutation relations $\{c^\dagger_m,c_n\}=\delta_{m,n}$. We shall focus on models with only nearest-neighbour hopping and local chemical potentials. The Hamiltonian (1) can be diagonalized by finding the eigenvalue decomposition of the real and symmetric hopping matrix
$$H_{m,n}=\sum_k\omega_k\,\phi_k(m)\phi_k(n)\,, \tag{2}$$
with single-particle spectrum $\omega_k$ and corresponding eigenvectors $\phi_k$. In general, the ground state of the chain is a Fermi sea with occupied modes $\omega_k<0$ for $k<k_F$. However, the situation changes if the system supports a zero mode, i.e. one has an eigenvalue $\omega_{k_F}=0$ in the spectrum. In this case, the mode occupation should be determined as a proper zero-temperature limit of the Fermi function
$$n_k=\lim_{\beta\to\infty}\frac{1}{e^{\beta\omega_k}+1}=\begin{cases}1 & \omega_k<0\,,\\[2pt] \frac{1}{2} & \omega_k=0\,,\\[2pt] 0 & \omega_k>0\,.\end{cases} \tag{3}$$
In other words, due to the zero mode and the resulting degeneracy of the spectrum, the ground stateρ becomes mixed, given by an equal superposition
$$\hat\rho=\frac{1}{2}\left(\hat\rho_0+\hat\rho_1\right) \tag{4}$$
of two pure Fermi-sea ground states $\hat\rho_0=|0\rangle\langle 0|$ and $\hat\rho_1=|1\rangle\langle 1|$, where
$$|0\rangle=\prod_{k<k_F}c^\dagger_k|\varnothing\rangle\,,\qquad |1\rangle=\prod_{k\le k_F}c^\dagger_k|\varnothing\rangle\,, \tag{5}$$
denote the states with the zero mode being empty or occupied, respectively. Here the empty state with no particles is denoted by |∅ , and c † k are the creation operators in the diagonal basis ofĤ. Our goal is to study the contribution of the zero mode to the entanglement between a subsystem A and its complement B. This is encoded in the reduced density matrixρ A = Tr Bρ , which can be written in the form [40,41]
$$\hat\rho_A=\frac{1}{Z}\exp\Big(-\sum_\kappa\varepsilon_\kappa f^\dagger_\kappa f_\kappa\Big)\,. \tag{6}$$
The single-particle entanglement spectrum $\varepsilon_\kappa$ is related via
$$\varepsilon_\kappa=\ln\frac{1-\zeta_\kappa}{\zeta_\kappa}\,,\qquad \zeta_\kappa=\frac{1}{e^{\varepsilon_\kappa}+1} \tag{7}$$
to the eigenvalues ζ κ of the reduced correlation matrix C A with elements
$$C_{mn}=\langle c^\dagger_m c_n\rangle=\sum_k n_k\,\phi_k(m)\phi_k(n)\,,\qquad m,n\in A\,. \tag{8}$$
Note that we use the symbol κ to differentiate the modes of the entanglement Hamiltonian in (6) from those of the physical Hamiltonian in (2).
With the spectrum ε κ at hand, the entanglement entropy S = −Tr(ρ A lnρ A ) is obtained as
$$S=\sum_\kappa s(\varepsilon_\kappa)\,,\qquad s(\varepsilon)=\frac{\varepsilon}{e^\varepsilon+1}+\ln(1+e^{-\varepsilon})\,. \tag{9}$$
In the thermodynamic limit of a very large subsystem, the spectrum becomes densely spaced and the sum can be replaced by an integral
$$S\to\int\mathrm{d}\varepsilon\,\rho(\varepsilon)\,s(\varepsilon)\,,\qquad \rho(\varepsilon)=\frac{\mathrm{d}\kappa}{\mathrm{d}\varepsilon}\,, \tag{10}$$
where ρ(ε) is the density of states. In order to obtain analytical results for ρ(ε) in the thermodynamic limit, it is useful to introduce the resolvent
$$R(z)=\mathrm{Tr}\left[\frac{1}{z-C_A}-\frac{1}{z}\right]. \tag{11}$$
Using the Sokhotski-Plemelj formula of complex analysis, the spectral density of C A is related to the resolvent as
$$\rho(\zeta)\equiv\mathrm{Tr}\left[\delta(\zeta-C_A)-\delta(\zeta)\right]=\frac{1}{2\pi}\lim_{\epsilon\to 0^+}\mathrm{Im}\left[R(\zeta-i\epsilon)-R(\zeta+i\epsilon)\right]. \tag{12}$$
Note that, for later convenience, a 1/z term has been subtracted in the definition (11) of the resolvent, which leads to an additional δ(ζ) contribution in the spectral density (12). This additional term, however, does not play a role since we will be interested in the difference of spectral densities.
The final step is to relate the resolvent to the full counting statistics (FCS), which is just the probability distribution of the particle number $\hat N_A=\sum_{n\in A}c^\dagger_n c_n$ within the subsystem A. The cumulant generating function of the FCS reads
$$\chi(\alpha)=\ln\mathrm{Tr}\big(\hat\rho_A e^{i\alpha\hat N_A}\big)=\mathrm{Tr}\ln\big[1-(1-e^{i\alpha})C_A\big]\,. \tag{13}$$
Introducing the variable
$$z=\frac{1}{1-e^{i\alpha}}\,, \tag{14}$$
and considering the FCS as a function χ(z), one can immediately see that the resolvent follows as
$$R(z)=\frac{\mathrm{d}}{\mathrm{d}z}\chi(z)\,. \tag{15}$$
Hence, we have directly related the spectral density of the reduced correlation matrix to the FCS, which has been extensively studied in the ground states of critical 1D systems [42][43][44][45][46][47]. Obtaining the density of the entanglement spectrum is then a simple change of variables
$$\rho(\varepsilon)=\frac{\mathrm{d}\zeta}{\mathrm{d}\varepsilon}\,\rho(\zeta)\,, \tag{16}$$
using the relations (7).
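The chain of relations (11)-(15) is simple enough to probe directly. A minimal sketch, assuming an arbitrary illustrative correlation-matrix spectrum, which checks that the derivative of $\chi(z)=\sum_\kappa\ln(1-\zeta_\kappa/z)$, i.e. Eq. (13) rewritten via (14), reproduces the subtracted resolvent of Eq. (11):

```python
import numpy as np

# chi(z) = sum_k ln(1 - zeta_k/z);  R(z) = sum_k [1/(z - zeta_k) - 1/z]
zeta = np.array([0.1, 0.35, 0.6, 0.9])        # illustrative spectrum of C_A
chi = lambda z: np.sum(np.log(1 - zeta / z))
R = lambda z: np.sum(1 / (z - zeta) - 1 / z)
z, h = 1.7, 1e-7
print((chi(z + h) - chi(z - h)) / (2 * h), R(z))   # the two should agree
```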
In the following section we study the zero-mode induced variation of the density of states in a homogeneous chain, and the resulting change in the entropy.
III. ZERO MODE IN A HOMOGENEOUS CHAIN
Let us first consider an open chain of even length 2L with some local chemical potentials at its boundaries, such that the nonvanishing entries of the hopping matrix are given by
$$H_{m,m+1}=H_{m+1,m}=-1/2\,,\qquad H_{1,1}=H_{2L,2L}=1/2\,. \tag{17}$$
It is easy to show that the eigenvalues and vectors are
$$\omega_k=-\cos\frac{\pi k}{2L}\,,\qquad \phi_k(m)=N_k\sin\left[\frac{\pi k}{2L}\left(m-1/2\right)\right], \tag{18}$$
where $k=1,\dots,2L$ and the normalization factor is given by $N_k=1/\sqrt{L}$ for $k\neq 2L$, as well as $N_{2L}=1/\sqrt{2L}$.
One has thus a single zero mode with $k=k_F=L$, and the squared amplitudes of the corresponding eigenvector are constant, $\phi_{k_F}^2(m)=1/(2L)$. Note that the existence of the zero mode is due to the special choice of boundary conditions in (17). Instead, for an open chain of length $2L$ without boundary potentials, the momenta are quantized as $\pi k/(2L+1)$, and the zero mode is absent. It would only appear for odd chain sizes; however, for a better analogy with the defect problem in the next section, we prefer to work with an even number of sites.
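These statements are straightforward to verify numerically. A minimal sketch, with illustrative sizes, which checks that (18) diagonalizes the chain (17), that the zero mode is flat, and which then evaluates the mixed-state entanglement entropy from the correlation matrix (8) and the entropy formula (9):

```python
import numpy as np

L = 30
n2 = 2 * L
H = np.diag(-0.5 * np.ones(n2 - 1), 1) + np.diag(-0.5 * np.ones(n2 - 1), -1)
H[0, 0] = H[-1, -1] = 0.5                      # boundary potentials, Eq. (17)
m = np.arange(1, n2 + 1)

# Eq. (18) should be an exact eigenvector for every k, zero mode included.
for k in (1, L, n2):
    phi = np.sin(np.pi * k * (m - 0.5) / n2)
    phi /= np.linalg.norm(phi)
    assert np.linalg.norm(H @ phi + np.cos(np.pi * k / n2) * phi) < 1e-12

phi0 = np.sin(np.pi * L * (m - 0.5) / n2) / np.sqrt(L)
print("flat zero mode:", np.allclose(phi0**2, 1 / n2))   # |phi|^2 = 1/(2L)

# Mixed-state correlation matrix, Eq. (8), with n_{k_F} = 1/2 from Eq. (3).
w, V = np.linalg.eigh(H)
nk = np.where(w < -1e-12, 1.0, np.where(np.abs(w) < 1e-12, 0.5, 0.0))
C = (V * nk) @ V.T
zeta = np.linalg.eigvalsh(C[:L, :L])           # subsystem A = [1, L]
zeta = zeta[(zeta > 1e-12) & (zeta < 1 - 1e-12)]
S = -np.sum(zeta * np.log(zeta) + (1 - zeta) * np.log(1 - zeta))
print(f"half-chain entropy S = {S:.6f}")
```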
The correlation matrix elements can be calculated explicitly using the eigenvectors in (18). In particular, the stateρ 1 has L occupied modes and one obtains
$$C_{1,mn}=\frac{\sin\left[\frac{\pi(2L+1)}{4L}(m-n)\right]}{4L\sin\left[\frac{\pi}{4L}(m-n)\right]}-\frac{\sin\left[\frac{\pi(2L+1)}{4L}(m+n-1)\right]}{4L\sin\left[\frac{\pi}{4L}(m+n-1)\right]}\,. \tag{19}$$
For the mixed state one obtains
$$C_{mn}=C_{1,mn}-\frac{1}{2L}\sin\left[\frac{\pi}{2}(m-1/2)\right]\sin\left[\frac{\pi}{2}(n-1/2)\right], \tag{20}$$
and the matrix elements $C_{0,mn}$ for the state $\hat\rho_0$ are very similar, with an extra factor two multiplying the second term. We are interested in the change of the entanglement entropy $\delta S=S-S_0$, which gives the contribution of the zero mode in the mixed state $\hat\rho$ in (4) as compared to the pure state $\hat\rho_0$. The entropies are calculated for a subsystem $A=[1,\ell]$ via the correlation matrices $C_A$ and $C_{0,A}$, respectively, using the methods introduced in section II.
In order to obtain our analytical results, we consider a thermodynamic limit by fixing the ratio $r=\ell/(2L)$ and sending $L\to\infty$. We focus on extracting the difference of the density of states $\delta\rho(\varepsilon)=\rho(\varepsilon)-\rho_0(\varepsilon)$ in the entanglement spectra. It turns out that the key object we need is the ratio of the FCS calculated for the pure states $\hat\rho_0$ and $\hat\rho_1$, with the zero mode either empty or occupied. This can be obtained via bosonization and CFT techniques, as shown in appendix A, yielding the simple result
$$\frac{\mathrm{Tr}\big(\hat\rho_1 e^{i\alpha\hat N_A}\big)}{\mathrm{Tr}\big(\hat\rho_0 e^{i\alpha\hat N_A}\big)}=\frac{\det\big[1-(1-e^{i\alpha})C_{1,A}\big]}{\det\big[1-(1-e^{i\alpha})C_{0,A}\big]}=e^{i\alpha r}\,. \tag{21}$$
The physical interpretation of (21) is that the inclusion of the zero mode simply shifts the mean particle number by r in A, but does not affect any higher order cumulants in the limit L → ∞.
Note that it seems hard to derive (21) directly on the lattice for the geometry at hand. Results for the FCS are available for an infinite chain [44], obtained via Fisher-Hartwig methods for Toeplitz determinants [48]. One could generalize it to Toeplitz + Hankel matrices using the results of Ref. [49]; however, this would correspond to a semi-infinite chain with a segment $A=[1,\ell]$ at the boundary.
Although the asymptotics of the determinant in the FCS is not directly accessible when we fix the ratio r, one could check the cumulants directly. In particular, the particle number fluctuations are given by Tr [C σ,A (1 − C σ,A )] for σ = 0, 1, and we observe numerically that their difference vanishes slowly as ln L/L for L → ∞, in an alternating fashion.
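Even though the asymptotics is not directly accessible, the determinant ratio itself can be evaluated numerically from the eigenvectors (18). A minimal sketch, with illustrative sizes, which compares the ratio to $e^{i\alpha r}$; convergence is slow, in line with the $\ln L/L$ corrections mentioned above:

```python
import numpy as np

def corr(L2, n_zero):
    """Correlation matrix with the zero-mode occupation set to n_zero."""
    m = np.arange(1, L2 + 1)
    k = np.arange(1, L2 + 1)[:, None]
    phi = np.sin(np.pi * k * (m - 0.5) / L2) / np.sqrt(L2 / 2)
    phi[L2 - 1] /= np.sqrt(2)                 # N_{2L} = 1/sqrt(2L)
    nk = (k[:, 0] < L2 // 2).astype(float)    # modes below the zero mode
    nk[L2 // 2 - 1] = n_zero                  # k = L is the zero mode
    return (phi.T * nk) @ phi

L2, r, alpha = 800, 0.5, 0.7
ell = int(r * L2)
w = 1 - np.exp(1j * alpha)
s1, ld1 = np.linalg.slogdet(np.eye(ell) - w * corr(L2, 1.0)[:ell, :ell])
s0, ld0 = np.linalg.slogdet(np.eye(ell) - w * corr(L2, 0.0)[:ell, :ell])
print((s1 / s0) * np.exp(ld1 - ld0), np.exp(1j * alpha * r))
```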
For the mixed state ρ, the change of the cumulant generating function is obtained via (21) as
$$\delta\chi(\alpha)=\chi(\alpha)-\chi_0(\alpha)=\ln\frac{1+e^{i\alpha r}}{2}\,. \tag{22}$$
The difference of the resolvents δR(z) = R(z) − R 0 (z) can be obtained using (15), by first substituting the z variable (14) and performing the derivative
$$\delta R(z)=\frac{\mathrm{d}}{\mathrm{d}z}\delta\chi(z)=-\frac{r}{z(1-z)}\,\frac{(1-z^{-1})^r}{1+(1-z^{-1})^r}\,. \tag{23}$$
The change of the spectral density then follows from the formula (12). To carry out the limit, let us first note that the expression (23) has a branch cut along z ∈ [0, 1], and thus δρ(ζ) is supported on this interval, as it should. To evaluate the jump across the branch cut, we need the limit
$$\lim_{\epsilon\to 0^+}\big(1-(\zeta\pm i\epsilon)^{-1}\big)^r=e^{\pm i\pi r}\,(\zeta^{-1}-1)^r\,. \tag{24}$$
Plugging this into (23) and (12), we arrive at
$$\delta\rho(\zeta)=\frac{r}{\pi\zeta(1-\zeta)}\,\frac{\sin(\pi r)\,(\zeta^{-1}-1)^r}{1+2\cos(\pi r)(\zeta^{-1}-1)^r+(\zeta^{-1}-1)^{2r}}\,. \tag{25}$$
Note that, apart from the branch cut, the resolvent (23) has an extra pole at z = 0. Indeed, from the z → 0 behaviour δR(z) −r/z one infers that there is an extra delta function −rδ(ζ) appearing in the spectral density. However, since the entropy density vanishes at the spectral edge ζ = 0, we will simply discard this contribution.
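As a consistency check, the continuous part of the spectral density can be integrated numerically: it should carry a total weight $r$, compensating the discarded $-r\,\delta(\zeta)$ term so that the total number of eigenvalues is unchanged. A minimal sketch:

```python
import numpy as np
from scipy.integrate import quad

# Integral of the continuous density of states (in the epsilon variable,
# see Eq. (26) below); it should equal r for every ratio r.
for r in (0.25, 0.5, 0.75):
    drho = lambda e, r=r: r * np.sin(np.pi * r) / (
        2 * np.pi * (np.cos(np.pi * r) + np.cosh(e * r)))
    val, _ = quad(drho, -np.inf, np.inf)
    print(f"r = {r:.2f}: integral of delta-rho = {val:.6f}")
```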
The change in the entanglement spectrum density is obtained via (16) by a change of variables
$$\delta\rho(\varepsilon)=\frac{r}{2\pi}\,\frac{\sin(\pi r)}{\cos(\pi r)+\cosh(\varepsilon r)}\,. \tag{26}$$
The spectral density δρ(ε) is shown in Fig. 1 for various ratios r. One observes that the density becomes more and more peaked around ε = 0 as one increases the ratio towards r → 1. Indeed, in this limit one can expand (26) to get
$$\delta\rho(\varepsilon)\simeq\frac{1}{\pi}\,\frac{\pi(1-r)}{\pi^2(1-r)^2+\varepsilon^2}\;\xrightarrow[r\to 1]{}\;\delta(\varepsilon)\,, \tag{27}$$
such that it precisely reproduces a delta function. In terms of the correlation matrix spectrum, it corresponds to the appearance of an eigenvalue ζ = 1/2. Obviously, this is simply the half-filled mode in (3), as in the limit r → 1 of a full system one has ζ κ = n k for the eigenvalues of C. One should also remark that the entropy difference S−S 1 measured from the state with an occupied zero mode produces exactly the same results. Indeed, this simply corresponds to a change r → −r in (22), but the final result (26) is manifestly symmetric under this transformation. In our numerical calculations presented in the next subsections we actually used the convention δS = S − S 1 .
A. Zero-mode entropy
We can now apply the above results to calculate the zero-mode contribution to the entropy.
Using the entropy density (9) as well as the density of states (26), one arrives at the integral
$$\delta S=\int_{-\infty}^{\infty}\mathrm{d}\varepsilon\,\delta\rho(\varepsilon)\,s(\varepsilon)\,. \tag{28}$$
It is instructive to compare the above expression to the one derived in Ref. [20] for chiral fermions in a ring geometry, which reads
$$\delta S=\pi r\int_{0}^{\infty}\mathrm{d}h\,\tanh(\pi h r)\big(\coth(\pi h)-1\big)\,. \tag{29}$$
Using the symmetry for ε → −ε, the integrals (28) and (29) are defined on the same domain but with integrands that do not match. Remarkably, however, a numerical evaluation of the integrals shows, that they reproduce the exact same function δS(r).
The origin of this mismatch can be understood as follows. The derivation in [20] is also based on the resolvent, and one can actually verify that the result for δR(z) is exactly the same as ours in (23). However, the entropy is then extracted by making use of the formula
$$\delta S=\int_{1}^{\infty}\mathrm{d}z\,(1-z)\left[\delta R(z)-\delta R(1-z)\right], \tag{30}$$
and a subsequent change of variables $h=\frac{1}{2\pi}\ln\big(\frac{z}{z-1}\big)$ leads exactly to the result (29). In other words, (30) uses the analytical regime of the resolvent to reproduce $\delta S$ by a mathematical trick.
Indeed, performing the integration over z before taking the trace in the definition of the resolvent
$\delta R$, one obtains $\delta S=S-S_1$ with
$$S=\mathrm{Tr}\big[-C_A\ln C_A-(1-C_A)\ln(1-C_A)\big]\,, \tag{31}$$
and similarly for S 1 using C 1,A . This is exactly the free-fermion formula for the entropies, and thus δS is now obtained without ever referencing the spectral density δρ(ζ). Note also the analogy between the change of variables ζ → ε and z → h.
The above treatment is, furthermore, straightforward to generalize to evaluate the Rényi entropies
$$S_n=\frac{1}{1-n}\ln\mathrm{Tr}(\hat\rho_A^n)\,, \tag{32}$$
and the corresponding zero-mode contributions δS n = S n − S 1,n . Inserting the expression for the Rényi entropy density in terms of ζ, one has
$$\delta S_n=\frac{1}{1-n}\int_{0}^{1}\mathrm{d}\zeta\,\delta\rho(\zeta)\ln\big[\zeta^n+(1-\zeta)^n\big]\,, \tag{33}$$
which can be evaluated numerically for arbitrary Rényi index n. However, a considerable simplification occurs for integer indices n ≥ 2. Indeed, in such cases there is a well-known relation between the Rényi entropy and the FCS [50]
$$S_n=\frac{1}{1-n}\sum_{p=-\frac{n-1}{2}}^{\frac{n-1}{2}}\chi(\alpha_p)\,,\qquad \alpha_p=\frac{2\pi p}{n}\,. \tag{34}$$
Since the relation is linear in the generating function, one can directly apply it to the difference, and using (22) one arrives at
$$\delta S_n=\frac{1}{1-n}\sum_{p=-\frac{n-1}{2}}^{\frac{n-1}{2}}\ln\cos\frac{\pi pr}{n}\,. \tag{35}$$
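The finite sum (35) is trivial to evaluate. A short sketch, checking the limiting values $\delta S_n\to 0$ for $r\to 0$ and $\delta S_n\to\ln 2$ for $r\to 1$ quoted below:

```python
import numpy as np

def dSn(n, r):
    # Eq. (35): p runs over (half-)integers from -(n-1)/2 to (n-1)/2.
    p = np.arange(-(n - 1) / 2, (n - 1) / 2 + 1)
    return np.sum(np.log(np.cos(np.pi * p * r / n))) / (1 - n)

for n in (2, 3, 4):
    print(n, [round(dSn(n, r), 4) for r in (0.001, 0.5, 0.999)])
print("ln 2 =", round(np.log(2), 4))
```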
The zero-mode Rényi entropies are shown in Fig. 2 and compared against numerical calculations, performed with a fixed ratio $r=\ell/(2L)$ and increasing $L$. One should note that the numerical data shows relatively strong finite-size corrections, and the entropy difference is well described by
$$\delta S_n(\ell,L)=\delta S_n(r)+(-1)^{\ell}\,a\,\ell^{-1/n}\,. \tag{36}$$
One has thus an alternation with the parity of the subsystem size and some unusual correction scaling with a power 1/n, that originates from the pure state and was noticed earlier [51]. We used the above ansatz to fit the data and extract the scaling part δS n (r). As observed in Fig. 2, the fits are in excellent agreement with the analytical results. In general, the curves for each n interpolate smoothly and monotonically between the values 0 and ln (2). A special value for r = 1/2 is δS = ln(2) − 1/2. Note that the line for n = 1 is obtained by a numerical evaluation of the integral (28), whereas the n = ∞ case follows from converting the sum (35) into an integral.
B. Spectral shift
The agreement between the analytical and numerical results is remarkable, despite the fact that the actual numerical spectra are still very far from being continuous and densely spaced for the chain sizes considered. To better understand the mechanism behind the emergence of the zeromode contribution, we shall have a closer look at the entanglement spectra. According to (10), the density of states is obtained as the derivative of the spectral function κ(ε), which is simply the inverted spectrum ε κ plotted against the integer index κ. Thus the spectral function κ(ε) simply counts the number of eigenvalues up to ε.
For the case of a pure Fermi sea, the asymptotics of such spectral functions is known for the infinite or semi-infinite hopping chain [52]. These results are based on the analysis of the discrete sine kernel from the original work of Slepian [53]. Although the correlation matrix C 1 in (19) is kind of a deformed sine kernel, we are not aware of any rigorous results for its spectral function κ 1 (ε). Nevertheless, we try to guess the result by analogy to [53], as well as from the relation of κ 1 (ε) to the entropy. Namely, for the half-filled ground stateρ 1 we put forward the ansatz
$$\kappa_1-\bar\kappa_1=\frac{\varepsilon}{2\pi^2}\ln\left[\frac{8L}{\pi}\sin(\pi r)\right]-\frac{1}{\pi}\varphi\left(\frac{\varepsilon}{2\pi}\right), \tag{37}$$
where $\bar\kappa_1=\mathrm{Tr}(\hat\rho_1\hat N_A)+1/2$ is just a constant related to the average number of particles in A, while $\varphi(z)$ is given via the Gamma function as
$$\varphi(z)=\arg\Gamma(1/2+iz)\,. \tag{38}$$
With this ansatz, the entropy evaluates to
$$S_1=\int_{-\infty}^{\infty}\mathrm{d}\varepsilon\,\frac{\mathrm{d}\kappa_1}{\mathrm{d}\varepsilon}\,s(\varepsilon)=\frac{1}{6}\ln\left[\frac{8L}{\pi}\sin(\pi r)\right]+\frac{C}{2}\,, \tag{39}$$
where the constant term follows from the non-linear part of the spectral function as
$$C=-\frac{1}{\pi^2}\int_{-\infty}^{\infty}\mathrm{d}\varepsilon\,s(\varepsilon)\,\varphi'\left(\frac{\varepsilon}{2\pi}\right)\approx 0.495\,. \tag{40}$$
The expression (39) resembles very closely the result for the open chain without boundary fields, which was studied in [51]. Indeed, the only modification is that 2(4L + 2) is replaced by 8L in the argument of the logarithm. This is motivated by the fact, that the eigenfunctions of the simple open chain vanish at sites m = 0 and m = 2L + 1, and one thus needs to add two extra sites to embed the chain and its mirror image into a periodic ring [51]. Here, instead, the eigenfunctions (18) vanish at m = 1/2 and 2L + 1/2, and one can argue that the two extra sites are not needed and the effective length of the corresponding ring is 4L. Note also, that the constant C in (40) is precisely the one that enters the entropy of the ring [54], which now appears with a factor 1/2 due to the single boundary between the subsystem A and the rest of the chain. After having motivated our ansatz (37) for the spectral function, we now compare it against exact numerical calculations in Fig. 3 for various ratios r, and observe an excellent agreement. We now move forward to study the spectral function κ(ε) of the mixed state, associated to the correlation matrix C A in (20). This is related to the matrix C 1,A by a rank-one update, which induces slight shifts between the eigenvalues ε κ and ε 1,κ in the corresponding spectra. It should be stressed, however, that in the numerics the independent variable κ is always an integer. To extract the spectral shift δκ(ε) = κ(ε) − κ 1 (ε), we have to treat ε as the independent variable, i.e.
we invert ε κ → κ(ε κ ) and subtract κ 1 (ε κ ) by using our ansatz (37). In the thermodynamic limit, the spectral shift follows by integrating the density δρ(ε) in (26), which yields
$$\delta\kappa(\varepsilon)=\frac{1}{\pi}\arctan\left[\tan\left(\frac{\pi r}{2}\right)\tanh\left(\frac{\varepsilon r}{2}\right)\right], \tag{41}$$
where the integration constant was chosen such that δκ(0) = 0. The comparison of the numerically extracted spectral shift to the analytical result (41) is shown in Fig. 4, with a remarkable agreement.
Note that, in order to bring the numerical data to a symmetric form, a constant r/2 has been added, which exactly corresponds to the differenceκ 1 −κ of the average particle number in A. To conclude this section we mention, that the zero-mode contributions can also be investigated for a periodic ring of size 2L, with boundary conditions H 1,2L = H 2L,1 = −1/2 in (1). The momenta are then quantized as πk/L with k = −L + 1, . . . , L, and one has a pair of zero modes with ±k F = L/2 for even half-chain sizes. Both of them have to be included with an occupation n ±k F = 1/2, which leads to a mixed ground state composed of four different pure states. The ansatz (37) has to be modified by replacing 8L → 4L and multiplying the r.h.s. with a factor two, that accounts for the increased density of states due to the two boundary points of A. Carrying out the numerical analysis analogously, one finds that the spectral shift δκ(ε) as well as the resulting entropy difference δS are both multiplied by a factor of two. Since the results look very similar to the ones obtained for the open chain in this section, we do not report them here.
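Since the spectral shift (41) was obtained by integrating the density (26), the two expressions can be cross-checked by a simple finite difference. A minimal sketch, with arbitrary illustrative values of $r$ and $\varepsilon$:

```python
import numpy as np

# d/d(eps) of the spectral shift, Eq. (41), should equal Eq. (26).
r, eps, h = 0.3, 1.7, 1e-6
dkappa = lambda e: np.arctan(np.tan(np.pi * r / 2) * np.tanh(e * r / 2)) / np.pi
drho = r * np.sin(np.pi * r) / (2 * np.pi * (np.cos(np.pi * r) + np.cosh(eps * r)))
print((dkappa(eps + h) - dkappa(eps - h)) / (2 * h), drho)   # should agree
```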
In the previous section we have presented an alternative derivation of the zero-mode contribution, that has been studied previously for a translational invariant system [20]. We shall now extend these results for free-fermion chains with a particular form of a defect, with the nonzero elements of the hopping matrix given by
$$H'_{m,m+1}=H'_{m+1,m}=\begin{cases}-1/2 & m\neq L\\ -\lambda/2 & m=L\end{cases}\,,\qquad H'_{L,L}=-H'_{L+1,L+1}=\frac{\sqrt{1-\lambda^2}}{2}\,, \tag{42}$$
while the boundary potentials $H'_{1,1}=H'_{2L,2L}=1/2$ are the same as in the homogeneous case. Note that we use a prime notation to distinguish the quantities defined for the defect problem from the ones of the homogeneous case with $\lambda=1$. In fact, it turns out that the eigenmodes of the two Hamiltonians are intimately related via
$$\omega'_k=\omega_k=-\cos\frac{\pi k}{2L}\,,\qquad \phi'_k(m)=\begin{cases}\alpha_k\,\phi_k(m) & m\le L\\ \beta_k\,\phi_k(m) & m>L\end{cases}\,. \tag{43}$$
In other words, the spectra are identical, supporting a zero mode at k = k F = L for arbitrary values of λ, while the eigenvectors are related to their homogeneous counterparts (18) by a rescaling that is different on the left/right hand side of the defect and is given by the factors
$$\alpha_k^2=1+(-1)^k\sqrt{1-\lambda^2}\,,\qquad \beta_k^2=1-(-1)^k\sqrt{1-\lambda^2}\,. \tag{44}$$
The situation is thus completely analogous to the case of a simple open chain, where this so-called conformal defect was studied previously [39]. The terminology derives from the fact that the transmission amplitude $s=\lambda$ is independent of the incoming momentum $k$. Furthermore, for the particular case of a half-chain bipartition, the entanglement spectra are related as
$$\cosh\frac{\varepsilon'_{\sigma,\kappa}}{2}=\frac{1}{\lambda}\cosh\frac{\varepsilon_{\sigma,\kappa}}{2}\,, \tag{45}$$
where $\sigma=0,1$ refers to the pure states in (5). The relation is proved in appendix B. One should stress that (45) holds for arbitrary particle number $N$, but only for a half-chain $A=[1,L]$; we thus restrict our attention to this case. Importantly, the main feature of the entanglement spectra for the defect is the presence of a gap between
$$\varepsilon_\pm=\pm 2\,\mathrm{acosh}(\lambda^{-1}) \tag{46}$$
where no eigenvalues are allowed.
Before turning to the FCS, let us comment on a special property of the spectra that will become important for the defect. Indeed, it turns out that the homogeneous spectrum at r = 1/2 has a particle-hole symmetry, which leads to parity effects in terms of the particle number N . For an even L, this implies that the spectra of the state with N = L contain N/2 pairs with ±ε 1,κ and ±ε 1,κ , respectively. On the other hand, for N = L − 1 particle-hole symmetry implies the existence of a single ε 0,κ = 0, with the rest of the nonzero eigenvalues coming in pairs. One then observes that the corresponding defect eigenvalue is located on the upper edge ε + of the gap, thus making the spectrum slightly asymmetric. Note that an eigenvalue on the lower edge ε − of the gap appears when considering the spectra for the right half-chain. For odd L the two cases discussed above are simply interchanged.
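Both the spectrum identity in (43) and the half-chain relation (45) can be tested numerically. A minimal sketch, written for the couplings of Eq. (42) with the same $1/2$ normalization as in (17), and with illustrative values of $L$ and $\lambda$:

```python
import numpy as np

def hopping(L, lam):
    n2 = 2 * L
    H = np.zeros((n2, n2))
    for m in range(n2 - 1):
        t = lam if m == L - 1 else 1.0       # defect on the bond (L, L+1)
        H[m, m + 1] = H[m + 1, m] = -t / 2
    H[0, 0] = H[-1, -1] = 0.5                # boundary potentials
    H[L - 1, L - 1] = np.sqrt(1 - lam**2) / 2
    H[L, L] = -np.sqrt(1 - lam**2) / 2
    return H

L, lam = 20, 0.6
w1, V1 = np.linalg.eigh(hopping(L, lam))
w0, V0 = np.linalg.eigh(hopping(L, 1.0))
print("spectra coincide:", np.abs(w1 - w0).max() < 1e-12)   # Eq. (43)

def cosh_half(V, N):                         # cosh(eps/2) for A = [1, L]
    C = V[:, :N] @ V[:, :N].T                # Fermi sea with N particles
    z = np.linalg.eigvalsh(C[:L, :L])
    z = z[(z > 1e-8) & (z < 1 - 1e-8)]       # numerically reliable part
    return np.sort(np.cosh(0.5 * np.log((1 - z) / z)))

c_def = cosh_half(V1, L)[:6]                 # state |1>, N = L particles
c_hom = cosh_half(V0, L)[:6]
print("Eq. (45) residual:", np.abs(c_def - c_hom / lam).max())
```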
We are now ready to derive the formula analogous to (21). Rewriting the FCS generators χ σ (α) with σ = 0, 1 in the pure states (5) in terms of the corresponding spectra ε σ,κ , one has
$$\mathrm{Re}\,\chi'_\sigma(\alpha)=\frac{1}{2}\sum_\kappa\ln\left[1-\frac{\sin^2(\alpha/2)}{\cosh^2(\varepsilon'_{\sigma,\kappa}/2)}\right],\qquad \mathrm{Im}\,\chi'_\sigma(\alpha)=\frac{1}{2i}\sum_\kappa\ln\frac{e^{\varepsilon'_{\sigma,\kappa}}+e^{i\alpha}}{e^{\varepsilon'_{\sigma,\kappa}}+e^{-i\alpha}}\,. \tag{47}$$
The real part can immediately be related via (45) to the homogeneous FCS
Re χ σ (α) = Re χ σ (α ) , sin α 2 = λ sin α 2(48)
by a change of variables α → α . Taking the logarithm of (21) implies then Re [χ 1 (α) − χ 0 (α)]=0.
Dealing with the imaginary part requires some care due to the parity effects discussed above. For a perfectly symmetric spectrum, one can simply add the contributions from the pairs. Using
e ε 1,κ + e iα e ε 1,κ + e −iα e −ε 1,κ + e iα e −ε 1,κ + e −iα = e 2iα ,(49)
this gives for an even L
Im χ 1 (α) = α L 2 .(50)
For N = L − 1, the extra eigenvalue sitting at the gap edge has to be added separately
Im χ 0 (α) = α L 2 − 1 + 1 2i Im ln e ε + + e iα e ε + + e −iα .(51)
Inserting (46) and using some identities, the difference can be expressed as
Im χ 1 (α) − χ 0 (α) = α 2 + tan −1 1 − λ 2 tan α 2 ,(52)
which is the analogue of (21) in the presence of the defect. The case of odd L is straightforward to obtain, and gives a result similar to (52) but with a minus sign in front of the second term.
It should be noted, that the above result implies a highly nontrivial change in the FCS. Contrary to the homogeneous case, it is not a simple shift in the average particle number, but affects higher order cumulants as well. In particular, since (52) is an odd function of α, the FCS becomes skewed.
In turn, it yields for the difference of the FCS between the mixed and pure states
δχ ± (α) = χ (α) − χ 0 (α) = ln 1 + e iα/2±i tan −1 ( √ 1−λ 2 tan(α/2)) 2 ,(53)
where the ± sign refers to L being even or odd, respectively. It is instructive to check the limit λ = 1, where δχ ± (α) = δχ(α) reproduces the homogeneous result (22) with r = 1/2. On the other hand, for a disconnected chain λ = 0 one has the simple expressions
δχ + (α) = ln 1 + e iα 2 , δχ − (α) = 0,(54)
which correspond again to Eq. (22) with the effective ratios r = 1, 0, respectively. This immediately yields the values δS e = ln(2) and δS o = 0 for the even/odd case in the limit λ → 0.
A. Spectral density
With the result (53) at hand, we can now perform the calculation for the spectral density in the exact same way as for the homogeneous case. Namely, we introduce the variable z in (14), and employ the identities
e iα/2 = 1 − 1 z , tan(α/2) = 1 i 1 1 − 2z , e tanh −1 (x) = 1 + x 1 − x .(55)
Furthermore, we also introduce the parameters
z ± = 1 ± √ 1 − λ 2 2 ,(56)
such that (53) can be rewritten as
δχ ± (z) = ln 1 + 1 − 1 z z−z ± z−z ∓ 2 .(57)
Rearranging the expression one gets
δχ ± (z) = ln z(z ∓ − z) + (1 − z)(z − z ± ) 2 − 1 2 ln (z(z − z ∓ )) ,(58)
where the first term is responsible for the continuous part of the spectral density, while the second one simply contributes the delta functions − 1 2 (δ(z) + δ(z − z ∓ )). Using
d dz (z − a)(b − z) = a+b 2 − z (z − a)(b − z) ,(59)
we get for the derivative of the first term in (58) after tedious but straightforward algebra
1 z ± (1 − 2z) −z ± − z ∓ 2 − z (1 − z)(z − z ± ) z(z ∓ − z) + z ± + 1 2 − z z(z ∓ − z) (1 − z)(z − z ± ) .(60)
At this point, one should be particularly careful to deal properly with the branch cuts appearing in the region z ∈ (0, z − ) ∪ (z + , 1), due to the presence of the square roots. We first consider the case z ∈ (0, z − ), where the jump across the branch cut is
lim →0 + (z ∓ i − z ± )(1 − z ± i ) = ∓i (1 − z)(z ± − z),(61)
and thus we arrive at
δρ ± (ζ) = 1 πz ± (1 − 2ζ) z ∓ 2 − ζ (1 − ζ)(z ± − ζ) ζ(z ∓ − ζ) + z ± + 1 2 − ζ ζ(z ∓ − ζ) (1 − ζ)(z ± − ζ)
.
For z ∈ (z + , 1) one gets the same result, and thus Eq.
δR + (z)(z − 1/2) 1, δR − (z)(z − 1/2) 0.(63)
The pole is thus present only for δR + (z), leading to the even/odd spectral densities
δρ e (ζ) = δρ + (ζ) − 1 2 δ(ζ − z − ) + δ(ζ − 1/2), δρ o (ζ) = δρ − (ζ) − 1 2 δ(ζ − z + ),(64)
where we discarded again the irrelevant delta functions at ζ = 0.
The change in the density of the entanglement spectrum is obtained by transforming variables via (16). Noting that z ± = ζ(ε ∓ ), after some algebra one finds
δρ e (ε) = δρ + (ε) + δ(ε) − 1 2 δ(ε − ε + ) , δρ o (ε) = δρ − (ε) − 1 2 δ(ε − ε − ) ,(65)
where the continuous part of the density is given by
δρ ± (ε) = 1 2π(1 ± √ 1 − λ 2 ) | sinh(ε)| λ 2 cosh 2 ε 2 − 1 ∓ √ 1 − λ 2 λ 2 cosh 2 ε 2 − 1 .(66)
These are shown in Fig. 5 for some values of λ, with the dashed and solid lines corresponding to δρ + (ε) and δρ − (ε), respectively. As is clear from (66), both functions diverge around ε ± , and they move away from the homogeneous (λ = 1) curve in opposite directions. In fact, the density difference δρ + (ε) is negative close to the gap edges, and it becomes very small moving away from it. Hence the dominant contribution to the even spectral density is actually delivered by the delta peaks in (65) for smaller values of λ.
B. Numerical results
In the following we present our numerical results for the defect, and compare them to the analytical predictions obtained via the spectral densities in (65). We start with the entanglement entropy, focusing on the case n = 1. The zero-mode entropy for the defect follows as
δS e = 2 ∞ ε + dε δρ + (ε) s(ε) − s(ε + ) 2 + ln(2) , δS o = 2 ∞ ε + dε δρ − (ε) s(ε) − s(ε + ) 2 ,(67)
where we used the symmetry of the spectral and entropy densities under ε → −ε. In Fig. (6) we compare the above integral expressions to the entropy difference S − S 1 obtained from the numerics. As expected, the data converge to different values for even/odd L, and shows finite size Finally, we present our results for the spectral shift, obtained as the integral of the density (66).
It turns out that this can be evaluated in a closed form and yields
δκ ± (ε) = 1 2π arctan λ 2 cosh 2 ε 2 − 1 ∓ arctan λ 2 cosh 2 ε 2 − 1 √ 1 − λ 2 ,(68)
where the integration constant has been chosen such that δκ ± (ε + ) = 0. Note that for ε → ∞ one has δκ + → 0 and δκ − → 1/2. The comparison to the numerical data is carried out in a similar fashion as in the homogeneous case. First, one inverts the mixed-and pure-state spectra, ε κ and ε 1,κ , as shown in the left panel of Fig. 7 for both even/odd cases. In order to subtract the blue curve from the red one, we need again an interpolation of the data at the proper ε values. This, however, is simply obtained by combining the homogeneous ansatz (37) with the eigenvalue relation (45). The difference of the counting functions obtained this way is shown by the symbols in the right panel of Fig. 7, for the positive part (ε > ε + ) of the spectrum. Note that in the odd case we have to subtract the constant 1/2 from δκ − , to correctly reproduce the asymptotics shown by the data. In contrast, the negative part of the spectra is properly described by the functions δκ + − 1/2 and δκ − , respectively. The constant shifts between the positive/negative parts are actually due to the delta functions in (65).
V. DISCUSSION
We studied the variation of the ground-state entropy induced by the presence of a zero mode in free-fermion chains with a conformal defect. The underlying change in the density of the entanglement spectrum is calculated analytically via the FCS and the related resolvent function. In the homogeneous case, we reobtained the result of Ref. [20] for the zero-mode entanglement entropy in an alternative way, and generalized it to the Rényi entropies. The calculations can also be extended to the defect, by making use of the relation that connects the spectrum to that of the homogeneous chain. We find excellent agreement between the analytical and numerical results.
A particular feature observed for the defect is the presence of parity effects in the zero-mode entropy, such that δS e and δS o differs for even/odd half-chain sizes. In fact, a closer inspection reveals that the parity effects are present in the mixed-state entropy S, while the pure-state entropies S 0 and S 1 show only alternations that vanish as 1/L. Remarkably, finite parity effects were observed also in the pure ground state of a simple hopping defect [55], which does not support a zero mode. Moreover, the qualitative behaviour of the parity term as a function of the transmission amplitude seems rather similar to our result δS e − δS o , shown in the inset of Fig. 6. However, a quick numerical comparison of the two cases reveals, that the two functions are slightly different.
Indeed, the parity effects for the hopping defect must originate from the fact, that the relation analogous to (45) is satisfied only approximately there. It would be interesting to see, whether the methods employed here could be generalized to understand this case.
alized to the continuum setting, i.e. for a junction of quantum wires described by a scale-invariant scattering matrix. Indeed, the main relation (45) connecting the defect spectrum to the homogeneous one remains unchanged [56]. Furthermore, as shown in Appendix C by a direct calculation, while for even particle numbers the entanglement spectra are particle-hole symmetric, for odd occupations the symmetry is explicitly broken for the defect. This is exactly the same mechanism as the one observed for the lattice problem in Sec. IV.
It would also be nice to extend the results for the conformal defect to arbitrary subsystem ratios.
Again, the major bottleneck is to find the generalization of (45), to relate the defect problem to the homogeneous one. Finally, it would be important to check the universality of the results by investigating more complicated defect problems supporting a zero mode. We believe that our results could be reproduced by a pure boundary CFT approach, similar to the one employed in Appendix A, and it would be a fingerprint of universality.
Another interesting direction to explore is the evolution of entanglement across the defect, starting from a mixed initial state due to a zero-mode degeneracy. Indeed, it contains information both about the spreading of quantum correlations as well as the classical ones due to the mixture. The methods developed here could be combined with recent results on the time evolution of correlations [57,58] and FCS across a defect [59,60], and used to analyse different quench protocols, In this appendix, we provide a proof of the formula Eq. (21) employing CFT techniques. We first give a field theoretic characterization of our system in the absence of a defect, and then we compare the FCS of the vacuum and the one-particle state. We consider the CFT of the Dirac fermions (see Ref. [65]) on a spatial box of length L. We denote by |0 its vacuum, that we represent as a two-dimensional strip geometry parametrized by a complex variable z
Re(z) ∈ [0, L].(A1)
The boundary conditions at the edges are chosen of the Dirichlet type (the density of fermions vanishes at that points), and we depict them as boundary lines Re(z) = 0, L extended over the euclidean time (for simplicity we use L for the chain size, in contrast to 2L used in the main text).
We consider the subregion
A = [0, ],(A2)
that is an interval attached to the boundary, and we study its FCS. For instance, given the U (1) symmetry corresponding to the imbalance of particle and antiparticles, we construct its restriction over A asN
A = 0 dx Ψ † (x)Ψ(x),(A3)
with Ψ(x) being the Dirac field. The quantum fluctuations ofN A in the vacuum state, are encoded in the full counting statistics 0| e iαN A |0 .
As shown in [66], a clever way to compute the expectation value above is via bosonization techniques. For instance, the following correspondence holds
e iαN A ∼ V α/2π (z = 0)V −α/2π (z = ), α ∈ [−π, π],(A5)
with V ±α/2π (z,z) being vertex operators, whose expression in terms of the chiral modes φ,φ is V ±α/2π (z,z) = e ±iα/2π(φ(z)+φ(z)) .
Due to the choice of the boundary condition at z = 0, one can show that the (boundary) scaling dimension of V α/2π (z = 0) vanishes, and from now on we just discard its insertion. In contrast, the (bulk) scaling dimension of V −α/2π (z = ) has a nontrivial value given by [65]
∆ = α 2π 2 .(A7)
Finally, we get
0| e iαN A |0 ∼ 0| V −α/2π (z = ) |0 ,(A8)
namely a one-point function of a scalar primary field in a strip geometry, that we aim to compute below. To do so, we first employ the conformal transformation which maps the strip geometry onto the upper half-plane (UHP) Im(w) ≥ 0. In this new geometry, the one-point function is fixed by symmetries [67] and it is given by
w = −e −i π L z ,(A9)V −α/2π (w,w) UHP = 1 (w −w) ∆ .(A10)
We summarize the construction above in Fig. 8, which gives a pictorial representation of the two geometries considered above.
To proceed further, we should go back to the initial geometry, and, using the transformation law of primary operators, we express
0| V −α/2π (z,z) |0 = dw dz ∆ V −α/2π (w,w) UHP = π 2L 1 sin π(z+z) 2L ∆ .(A11)
Putting everything together, we get for the full-counting statistics of the vacuum as
0| e iαN A |0 ∼ π 2L 1 sin π L ( α 2π ) 2 ,(A12)
which holds up to a non-universal (α-dependent) proportionality constant.
We now repeat the same calculation for the excited state made of a single particle just above the Fermi sea (the vacuum), denoted here by |1 (see also Ref. [68]). A powerful approach to tackle this problem relies on the equivalence between the CFT in the UHP, to its chiral counterpart on the whole complex plane (see Refs. [65,69] for details), a procedure called unfolding. In this way, one can employ radial quantization, describing the excited state via an insertion of a local field at w = 0, ∞ on the conformal vacuum in planar geometry. For our purpose, we need to insert the chiral vertex operators V ±1 , with conformal dimension 1, at w = 0, ∞, corresponding to the bra/ket of the one-particle state (as explained in [68]). Through the unfolding procedure, the antiholomorphic fields inserted in the upper half-plane are mapped onto holomorphic fields at their specular position wrt the real axis. In particular, this leads to the replacement
V −α/2π (w,w) → V −α/2π (w)V α/2π (w) (A13)
inside the correlation functions. In the end, we need the 4-point function
V −1 (∞)V α/2π (w)V −α/2π (w)V 1 (0) (A14)
evaluated in the planar geometry, and the result is [65,70]
V −1 (∞)V α/2π (w)V −α/2π (w)V 1 (0) V −1 (∞)V 1 (0) = V α/2π (w)V −α/2π (w) × w w α/2π . (A15)
The previous expression gives the expectation value of V −α/2π (w,w) in the UHP, and the denominator V −1 (∞)V 1 (0) ensures the proper normalization of the state |1 . We represent the unfolded geometry and the field insertions in Fig. 9, which summarizes this construction. The last step is the map w → z, which brings back to the initial geometry, and we get
1| V −α/2π (z,z) |1 = dw dz ∆ V −1 (∞)V α/2π (w)V −α/2π (w)V 1 (0) V −1 (∞)V 1 (0) = (A16) 0| V −α/2π (z,z) |0 × e i α 2L (z+z) .(A17)
In the end, we express the ratio of FCS as
1| e iαN A |1 0| e iαN A |0 = e iα L ,(A18)
which is the main result employed in Section III, see Eq. (21). From our prediction, one learns that the difference of connected moments between the two states is universal. For instance one has
1|N A |1 − 0|N A |0 = L ,(A19)
while the difference of the other moments is vanishing. We finally mention that the same result was already obtained for the ring geometry (periodic boundary conditions) in [70], with similar techniques. The origin of this match, which is not obvious a priori, is ultimately found in the equivalence between the CFT on the UHP and its chiral counterpart on the plane.
To conclude this appendix, we point out a technical, albeit fundamental, observation. The CFT calculation presented here refers to α ∈ [−π, π], while other real values of α can be obtained via the periodic property α → α + 2π, which comes from the definition. However, in the main text we have implicitly analytically continued the result over complex values of α, as the change of variable z = 1 1−e iα in Eq. (14) was employed for real z. We conjecture that this procedure is justified. Nevertheless, we point out that while the analytical continuation of Eq. (A18) over the whole complex plane is clearly possible, it does not coincide with the actual value of the ratio of FCS, as it does not satisfy the symmetry under α → α + 2π. Moreover, we argue that the match between the two is present only in the strip Re(α) ∈ [−π, π]. A rigorous analysis of the analytic properties of the FCS is nevertheless beyond the purpose of this work, and we refer to Ref. [71] for a similar discussion.
Appendix B: Relation between entanglement spectra
In this appendix we prove the relation (45). Let us consider a Fermi sea ground state with the lowest N modes occupied and the rest empty. The correlation matrix for the defect reads
C mn = N k=1 φ k (m)φ k (n) ,(B1)
with the eigenvectors given in (43). We shall focus on a half-chain partition with A = [1, L]. The key step is to consider the product C A (1 − C A ), with matrix elements given by
C A (1 − C A ) mn = N k=1 2L l=N +1 α 2 k α 2 l φ k (m)A kl φ l (n) ,(B2)
where we introduced the overlap matrix
A kl = L j=1 φ k (j)φ l (j) . (B3)
Now the main observation is that (B2) depends on the defect only via the factor
α 2 k α 2 l = (1 + √ 1 − λ 2 ) 2 k − l even λ 2 k − l odd ,(B4)
where we used (44). Furthermore, we can also show that the overlap matrix has a checkerboard structure. Inserting the explicit form (18) of the eigenvectors, the sum (B3) can be carried out as
A kl = sin π 2 (m − n) 4L sin π 4L (m − n) − sin π 2 (m + n) 4L sin π 4L (m + n) .(B5)
Thus one can immediately see, that the matrix elements A kl are nonvanishing only for k − l odd, and together with (B4) this leads to the relation
C A (1 − C A ) = λ 2 C A (1 − C A ) .(B6)
Rewriting in terms of the eigenvalues one has
ζ κ (1 − ζ κ ) = λ 2 ζ κ (1 − ζ κ ) .(B7)
Finally, using (7) one obtains
ζ κ (1 − ζ κ ) = 1 4 cosh 2 εκ 2 (B8)
and similarly for the defect eigenvalues. Inserting into (B7), one arrives at the relation (45) reported in the main text.
Appendix C: Even/odd effects for the Schrödinger junction
Here, we consider a Fermi gas in a finite geometry with a conformal defect in the middle, which is dubbed as Schrödinger junction [56,72,73]. This system is closely related to the chain (1), and one expects that the universal features of the two models are captured by the same field theory (see Ref. [36]). For instance, the entropy of the state with the first N levels filled is known to diverge logarithmically in N , and the prefactor is the same for the CFT [36] and the chain [31]. In this appendix, we show the presence of peculiar even/odd effects as N is varied. The mechanism we find is equivalent to the one of the chain in Sec. IV, and, as we show, it gives rise to the same universal features.
The points of the Schrödinger junction are parametrized by a pair
(x, j), x ∈ [0, L], j = 1, 2,(C1)
where j labels the two wires and x the associated spatial position. The bulk Hamiltonian, as a function of the fermionic field Ψ j (x), is
H = 2 j=1 L 0 dx 1 2 ∂ x Ψ † j (x) (∂ x Ψ j (x)) ,(C2)
and the boundary conditions have to be specified at x = 0, L. At x = 0 we consider a scale invariant
scattering matrix S = √ 1 − λ 2 λ λ − √ 1 − λ 2 , λ ∈ [0, 1] (C3)
which couples the two wires explicitly, and it corresponds to the conformal defect. Here, λ is the transmission amplitude and its physical meaning is the same as for the chain of Sec. IV. At x = L, we choose Dirichlet boundary conditions, namely Ψ j (L) = 0.
We now aim to characterize the eigenstates of H. To do so, we have to find first its singleparticle levels, and then specify their occupation numbers. A convenient strategy is a change of basis which diagonalizes S, whose eigenvalues are ±1, via a unitary 2 × 2 matrix U. This amounts to the introduction of a pair of unphysical fields ϕ 1 (x), ϕ 2 (x) as
Ψ i (x) = 2 j=1 U ij ϕ j (x),(C5)
We now consider a Fermi sea made by the first N N/D levels filled with boundary conditions N/D respectively, and the total particle number is N = N N + N D . The correlation function of the unphysical fields is thus
ϕ † j (x)ϕ j (x ) = δ jj × C N (x, x ), j = 1 C D (x, x ), j = 2,(C8)
where
C N (x, x ) = N N n=1 φ N n (x)φ N n (x ), C D (x, x ) = N D n=1 φ D n (x)φ D n (x )(C9)
Now, for a fixed particle number N , we focus on the lowest energy state, which corresponds to N N = N D = N/2 for N even and N N = N D + 1 = (N + 1)/2 for N odd. Going back to the physical fields Ψ j , one can eventually express the correlation function as (see Ref. [36] for details)
C jj (x, x ) ≡ Ψ † j (x)Ψ j (x ) = 1 + S 2 jj C N (x, x ) + 1 − S 2 jj C D (x, x ).(C10)
We construct the restricted kernels associated to the first and the second wire, denoted here by C 11 (x, x ) and C 22 (x, x ), and compute their spectrum. Following [36], we consider a N dimensional subspace of L 2 ([0, L]) spanned by the set of wave functions {φ N n (x)} N N n=1 ∪ {φ D n (x)} N D n=1 , that we take as a (non-orthonormal) basis. For instance, one can show that the kernel C jj (x, x ) acts non-trivially in the subspace considered, while it vanishes on the orthogonal subspace. Thus, by projecting the kernel on this subspace, we can access directly its non-vanishing spectrum. We do so, and we end up with the following (block) matrix representation
C N 1 Q 0 0 , C D 0 0 Q † 1 ,(C11)
with Q nn a N N × N D rectangular matrix defined as
corresponding to the scalar product between the non-orthogonal basis elements. Here, the symbol refers to the equivalence of the non-zero spectrum, i.e. the set of non-zero eigenvalues. In this way, the restricted kernel of the first wire is expressed as a N × N matrix
C 11 1 + √ 1 − λ 2 2 1 Q 0 0 + 1 − √ 1 − λ 2 2 0 0 Q † 1 .(C13)
one of C 11 , obtained for λ = 1, as [36] C 11 (1 − C 11 ) = λ 2 C 11 (1 − C 11 ),
which is equivalent to Eq. (B6). Since the relation (C14) is not invertible, the spectrum of C 11
does not fix unambigously the one of C 11 . In particular, as we will show below, C 11 and 1 − C 11 have different spectra for N odd, and the (particle-hole) symmetry C 11 ↔ 1 − C 11 is explicitly broken. For the sake of convenience, we introduce the N × N matrix
Γ = 1 − 2C 11 ,(C15)
so that the particle-hole symmetry corresponds to Γ ↔ −Γ. To compute its spectrum, we express the characteristic polynomial of Γ det (y − Γ) = det
y + √ 1 − λ 2 (1 + √ 1 − λ 2 )Q (1 − √ 1 − λ 2 )Q † y − √ 1 − λ 2 ,(C16)
Since it is symmetric under y → −y, it means that the particle hole symmetry Γ ↔ −Γ is preserved.
In contrast, when N is odd, since N N = N D + 1 we get det (y − Γ) = (y + 1 − λ 2 )det y 2 − 1 + λ 2 (1 − Q † Q) .
In other words, there is a single eigenvalue − √ 1 − λ 2 of Γ, corresponding to the eigenvalue 1+ √ 1−λ 2 2 of C 11 , which breaks explicitly the particle-hole symmetry, while the other ones preserve it. Similar calculation can be performed also for the second wire, and one gets an eigenvalue 1− √ 1−λ 2 2 for C 22 whenever N is odd, which implies an asymmetry between the two wires. Its origin can be easily traced back to the choice of the scattering matrix (C3) which is asymmetric under the exchange of the wires. In summary, we find precisely the same mechanism in the continuum as for the lattice model in Sec. IV, which leads to the parity effects in the entropy.
FIG. 1 .
1Difference of the spectral density δρ(ε) for various values of r. For r → 1 the density converges to a delta function localized at ε = 0.
FIG. 2 .
2Zero-mode Rényi entropies δS n as a function of the ratio r for various n. The symbols of matching color show the numerical results obtained via data fits to(36).
FIG. 3 .
3Spectral function κ 1 (ε) of the Fermi sea ground state for various ratios r and L = 100. The solid lines of matching color correspond to the ansatz(37).
FIG. 4 .
4Spectral shift for various r and L = 100. The solid lines show the analytical result (41).
(62) provides the continuous part of the spectral density, which turns out to be symmetric under ζ → 1 − ζ. Further singular contributions to the spectral density are delivered by the poles of the resolvent. In particular, the term (1 − 2z) in the denominator of Eq. (60) might give rise to a pole at z = 1/2. The presence of the pole can be spotted by approximating (60) around z = 1/2, where we find the limits
FIG. 5 .
5Continuous part of the spectral density δρ + (ε) (dashed lines) and δρ − (ε) (solid lines) for various values of λ.
corrections of the form δS e,o (L) = δS e,o + a e,o /L. Fitting this expression for increasing L, one obtains the values indicated by the symbols in Fig. 6. They show an excellent agreement with the analytical results. Note that the zero-mode entropy δS e (δS o ) increases (decreases) monotonously as the defect strength goes towards λ → 0. Their difference δS e − δS o is shown in the inset.
FIG. 6 .
6Zero-mode entropy δS e and δS o for even/odd half-chain lengths as a function of λ. The symbols show the fits to the numerical data, and the lines correspond to the analytical result in (67). The inset shows the difference δS e − δS o .
FIG. 7 .
7Left: inverted single-particle entanglement spectra ε 1,κ and ε κ for λ = 0.5. Right: spectral shift for ε > ε + and various values of λ. The symbols show the numerically obtained results (see text), while the solid lines correspond to the analytical formula(68). The top/bottom rows correspond to the even/odd cases wih L = 100 and L = 99, respectively.
FIG. 8 .
8One-point function of V −α/2π in the strip geometry (left) and upper half-plane (right), computed in the vacuum state |0 . The red lines at the edges of the strip represent the boundary points extended over the euclidean time, and they are mapped onto the real axis via z → w.
FIG. 9 .
9Chiral CFT on the plane, corresponding to the full CFT on the UHP via unfolding. V 1 (0) and V −1 (∞) represent the ket |1 and bra 1| associated to the one-particle excited state. The insertion of the chiral vertex operators V −α/2π (w), V α/2π (w) is related to the scalar vertex operator V −α/2π (w,w) of the UHP.
= 0
0which are decoupled and satisfy Neumann(N)/Dirichlet(D) boundary conditions at x two boundary conditions, we denote the single-particle eigenfunctions asφ N n (x) = 2 L cos (n − 1/2) πx L , φ D n (x) = 2L sin nπx L , n = 1, 2, . . .
x), n = 1, . . . , N N , n = 1, . . . , N D ,
that is the determinant of a block matrix, and it can be evaluated thanks to the det(A)det(D − CA −1 B). (C17)When N is even, every block in Eq. (C16) is N/2 × N/2 and the characteristic polynomial is det (y − Γ) = det y 2 − 1 + λ 2 (1 − Q † Q) .
Entanglement in many-body systems. L Amico, R Fazio, A Osterloh, V Vedral, 10.1103/RevModPhys.80.517Rev. Mod. Phys. 80517L. Amico, R. Fazio, A. Osterloh, and V. Vedral, Entanglement in many-body systems, Rev. Mod. Phys. 80, 517 (2008).
Colloquium: Area laws for the entanglement entropy. J Eisert, M Cramer, M B Plenio, 10.1103/RevModPhys.82.277Rev. Mod. Phys. 82277J. Eisert, M. Cramer, and M. B. Plenio, Colloquium: Area laws for the entanglement entropy, Rev. Mod. Phys. 82, 277 (2010).
Entanglement entropy in extended quantum systems. P Calabrese, J Cardy, B Doyon, 10.1088/1751-8121/42/50/500301J. Phys. A: Math. Theor. 42500301P. Calabrese, J. Cardy, and B. Doyon, Entanglement entropy in extended quantum systems, J. Phys. A: Math. Theor. 42, 500301 (2009).
Quantum entanglement in condensed matter systems. N Laflorencie, 10.1016/j.physrep.2016.06.008Phys. Rep. 6461N. Laflorencie, Quantum entanglement in condensed matter systems, Phys. Rep. 646, 1 (2016).
Entanglement entropy and conformal field theory. P Calabrese, J Cardy, 10.1088/1751-8113/42/50/504005J. Phys. A Math. Theor. 42504005P. Calabrese and J. Cardy, Entanglement entropy and conformal field theory, J. Phys. A Math. Theor. 42, 504005 (2009).
Geometric and renormalized entropy in conformal field theory. C Holzhey, F Larsen, F Wilczek, 10.1016/0550-3213(94)90402-2Nucl. Phys. B. 424443C. Holzhey, F. Larsen, and F. Wilczek, Geometric and renormalized entropy in conformal field theory, Nucl. Phys. B 424, 443 (1994).
Entanglement entropy and quantum field theory. P Calabrese, J Cardy, 10.1088/1742-5468/2004/06/P06002J. Stat. Mech.: Theory Exp. 066002P. Calabrese and J. Cardy, Entanglement entropy and quantum field theory, J. Stat. Mech.: Theory Exp. 2004 (06), P06002.
Entanglement in quantum critical phenomena. G Vidal, J I Latorre, E Rico, A Kitaev, 10.1103/PhysRevLett.90.227902Phys. Rev. Lett. 90227902G. Vidal, J. I. Latorre, E. Rico, and A. Kitaev, Entanglement in quantum critical phenomena, Phys. Rev. Lett. 90, 227902 (2003).
Entanglement entropy in quantum impurity systems and systems with boundaries. I Affleck, N Laflorencie, E S Sørensen, 10.1088/1751-8113/42/50/504009J. Phys. A: Math. Theor. 42504009I. Affleck, N. Laflorencie, and E. S. Sørensen, Entanglement entropy in quantum impurity systems and systems with boundaries, J. Phys. A: Math. Theor. 42, 504009 (2009).
Universal noninteger "ground-state degeneracy" in critical quantum systems. I Affleck, A W W Ludwig, 10.1103/PhysRevLett.67.161Phys. Rev. Lett. 67161I. Affleck and A. W. W. Ludwig, Universal noninteger "ground-state degeneracy" in critical quantum systems, Phys. Rev. Lett. 67, 161 (1991).
Entanglement entropies in conformal systems with boundaries. L Taddia, J Xavier, F C Alcaraz, G Sierra, 10.1103/PhysRevB.88.075112Phys. Rev. B. 8875112L. Taddia, J. Xavier, F. C. Alcaraz, and G. Sierra, Entanglement entropies in conformal systems with boundaries, Phys. Rev. B 88, 075112 (2013).
Second Rényi entropy and annulus partition function for onedimensional quantum critical systems with boundaries. B Estienne, Y Ikhlef, A Rotaru, 10.21468/SciPostPhys.12.4.141SciPost Phys. 12141B. Estienne, Y. Ikhlef, and A. Rotaru, Second Rényi entropy and annulus partition function for one- dimensional quantum critical systems with boundaries, SciPost Phys. 12, 141 (2022).
Rényi entropies for one-dimensional quantum systems with mixed boundary conditions. B Estienne, Y Ikhlef, A Rotaru, 10.48550/arXiv.2301.02124arXiv:2301.02124B. Estienne, Y. Ikhlef, and A. Rotaru, Rényi entropies for one-dimensional quantum systems with mixed boundary conditions (2023), arXiv:2301.02124.
Rényi entanglement entropies of descendant states in critical systems with boundaries: conformal field theory and spin chains. L Taddia, F Ortolani, T Pálmai, 10.1088/1742-5468/2016/09/093104J. Stat. Mech. 93104L. Taddia, F. Ortolani, and T. Pálmai, Rényi entanglement entropies of descendant states in critical systems with boundaries: conformal field theory and spin chains, J. Stat. Mech. 2016, 093104 (2016).
Modular Hamiltonians for the massless Dirac field in the presence of a boundary. M Mintchev, E Tonni, 10.1007/jhep03(2021)204J. High Energ. Phys. 20213204M. Mintchev and E. Tonni, Modular Hamiltonians for the massless Dirac field in the presence of a boundary, J. High Energ. Phys. 2021 (3), 204.
Boundary effects in the critical scaling of entanglement entropy in 1d systems. N Laflorencie, E S Sørensen, M.-S Chang, I Affleck, 10.1103/PhysRevLett.96.100603Phys. Rev. Lett. 96100603N. Laflorencie, E. S. Sørensen, M.-S. Chang, and I. Affleck, Boundary effects in the critical scaling of entanglement entropy in 1d systems, Phys. Rev. Lett. 96, 100603 (2006).
Entanglement and boundary entropy in quantum spin chains with arbitrary direction of the boundary magnetic fields. J C Xavier, M A Rajabpour, 10.1103/PhysRevB.101.235127Phys. Rev. B. 101235127J. C. Xavier and M. A. Rajabpour, Entanglement and boundary entropy in quantum spin chains with arbitrary direction of the boundary magnetic fields, Phys. Rev. B 101, 235127 (2020).
Entanglement entropy in critical quantum spin chains with boundaries and defects. A Roy, H Saleur, 10.1007/978-3-031-03998-0_3Entanglement in Spin Chains: From Theory to Quantum Technology Applications. A. Bayat, S. Bose, and H. JohannessonChamSpringer International PublishingA. Roy and H. Saleur, Entanglement entropy in critical quantum spin chains with boundaries and defects, in Entanglement in Spin Chains: From Theory to Quantum Technology Applications, edited by A. Bayat, S. Bose, and H. Johannesson (Springer International Publishing, Cham, 2022) pp. 41-60.
Entanglement entropy of a massive fermion on a torus. C P Herzog, T Nishioka, 10.1007/jhep03(2013)077J. High Energ. Phys. 2013377C. P. Herzog and T. Nishioka, Entanglement entropy of a massive fermion on a torus, J. High Energ. Phys. 2013 (3), 77.
Entanglement hamiltonians for chiral fermions with zero modes. I Klich, D Vaman, G Wong, 10.1103/PhysRevLett.119.120401Phys. Rev. Lett. 119120401I. Klich, D. Vaman, and G. Wong, Entanglement hamiltonians for chiral fermions with zero modes, Phys. Rev. Lett. 119, 120401 (2017).
Entanglement hamiltonians and entropy in 1 + 1d chiral fermion systems. I Klich, D Vaman, G Wong, 10.1103/PhysRevB.98.035134Phys. Rev. B. 9835134I. Klich, D. Vaman, and G. Wong, Entanglement hamiltonians and entropy in 1 + 1d chiral fermion systems, Phys. Rev. B 98, 035134 (2018).
Entanglement Entropy in the Ising Model with Topological Defects. A Roy, H Saleur, 10.1103/PhysRevLett.128.090603Phys. Rev. Lett. 12890603A. Roy and H. Saleur, Entanglement Entropy in the Ising Model with Topological Defects, Phys. Rev. Lett. 128, 090603 (2022).
Entanglement entropy and negativity in the Ising model with defects. D Rogerson, F Pollmann, A Roy, 10.1007/jhep06(2022)165J. High Energ. Phys. 20226165D. Rogerson, F. Pollmann, and A. Roy, Entanglement entropy and negativity in the Ising model with defects, J. High Energ. Phys. 2022 (6), 165.
Topological defects on the lattice: I. The Ising model. D Aasen, R S K Mong, P Fendley, 10.1088/1751-8113/49/35/354001J. Phys. A: Math. Theor. 49354001D. Aasen, R. S. K. Mong, and P. Fendley, Topological defects on the lattice: I. The Ising model, J. Phys. A: Math. Theor. 49, 354001 (2016).
Unpaired majorana fermions in quantum wires. A Y Kitaev, 10.1070/1063-7869/44/10S/S29Physics-Uspekhi. 44131A. Y. Kitaev, Unpaired majorana fermions in quantum wires, Physics-Uspekhi 44, 131 (2001).
Parafermionic edge zero modes in Zn-invariant spin chains. P Fendley, 10.1088/1742-5468/2012/11/P11020J. Stat. Mech.: Theory Exp. 20121111020P. Fendley, Parafermionic edge zero modes in Zn-invariant spin chains, J. Stat. Mech.: Theory Exp. 2012 (11), P11020.
Strong zero modes and eigenstate phase transitions in the XYZ/interacting Majorana chain. P Fendley, 10.1088/1751-8113/49/30/30LT01J. Phys. A: Math. Theor. 49P. Fendley, Strong zero modes and eigenstate phase transitions in the XYZ/interacting Majorana chain, J. Phys. A: Math. Theor. 49, 30LT01 (2016).
Entanglement entropy with interface defects. I Peschel, 10.1088/0305-4470/38/20/002J. Phys. A: Math. Gen. 384327I. Peschel, Entanglement entropy with interface defects, J. Phys. A: Math. Gen. 38, 4327 (2005).
Entanglement entropy with localized and extended interface defects. F Iglói, Z Szatmári, Y.-C Lin, 10.1103/PhysRevB.80.024405Phys. Rev. B. 8024405F. Iglói, Z. Szatmári, and Y.-C. Lin, Entanglement entropy with localized and extended interface defects, Phys. Rev. B 80, 024405 (2009).
Fano resonances and entanglement entropy. V Eisler, S S Garmon, 10.1103/PhysRevB.82.174202Phys. Rev. B. 82174202V. Eisler and S. S. Garmon, Fano resonances and entanglement entropy, Phys. Rev. B 82, 174202 (2010).
Entanglement in fermionic chains with interface defects. V Eisler, I Peschel, 10.1002/andp.201000055Ann. Phys. (Berlin). 522679V. Eisler and I. Peschel, Entanglement in fermionic chains with interface defects, Ann. Phys. (Berlin) 522, 679 (2010).
Exact results for the entanglement across defects in critical chains. I Peschel, V Eisler, 10.1088/1751-8113/45/15/155301J. Phys. A: Math. Theor. 45155301I. Peschel and V. Eisler, Exact results for the entanglement across defects in critical chains, J. Phys. A: Math. Theor. 45, 155301 (2012).
Entanglement through conformal interfaces. K Sakai, Y Satoh, 10.1088/1126-6708/2008/12/001J. High Energy Phys. 12121K. Sakai and Y. Satoh, Entanglement through conformal interfaces, J. High Energy Phys. 12 (12), 001.
Entanglement entropy through conformal interfaces in the 2D Ising model. E Brehm, I Brunner, 10.1007/JHEP09(2015)080J. High Energ. Phys. 2015980E. Brehm and I. Brunner, Entanglement entropy through conformal interfaces in the 2D Ising model, J. High Energ. Phys. 2015 (9), 80.
Entanglement entropy at CFT junctions. M Gutperle, J D Miller, 10.1103/physrevd.95.106008Phys. Rev. D. 95106008M. Gutperle and J. D. Miller, Entanglement entropy at CFT junctions, Phys. Rev. D 95, 106008 (2017).
Rényi entropy and negativity for massless dirac fermions at conformal interfaces and junctions. L Capizzi, S Murciano, P Calabrese, 10.1007/JHEP08(2022)171J. High Energ. Phys. 20228171L. Capizzi, S. Murciano, and P. Calabrese, Rényi entropy and negativity for massless dirac fermions at conformal interfaces and junctions, J. High Energ. Phys. 2022 (8), 171.
Rényi entropy and negativity for massless complex boson at conformal interfaces and junctions. L Capizzi, S Murciano, P Calabrese, 10.1007/jhep11(2022)105J. High Energ. Phys. 202211105L. Capizzi, S. Murciano, and P. Calabrese, Rényi entropy and negativity for massless complex boson at conformal interfaces and junctions, J. High Energ. Phys. 2022 (11), 105.
Permeable conformal walls and holography. C Bachas, J Boer, R Dijkgraaf, H Ooguri, 10.1088/1126-6708/2002/06/027J. High Energ. Phys. 0627C. Bachas, J. Boer, R. Dijkgraaf, and H. Ooguri, Permeable conformal walls and holography, J. High Energ. Phys. 2002 (06), 027.
On entanglement evolution across defects in critical chains. V Eisler, I Peschel, 10.1209/0295-5075/99/20001Europhys Lett. 9920001V. Eisler and I. Peschel, On entanglement evolution across defects in critical chains, Europhys Lett. 99, 20001 (2012).
Calculation of reduced density matrices from correlation functions. I Peschel, 10.1088/0305-4470/36/14/101J. Phys. A: Math. Gen. 36205I. Peschel, Calculation of reduced density matrices from correlation functions, J. Phys. A: Math. Gen. 36, L205 (2003).
Reduced density matrices and entanglement entropy in free lattice models. I Peschel, V Eisler, 10.1088/1751-8113/42/50/504003J. Phys. A: Math. Theor. 42504003I. Peschel and V. Eisler, Reduced density matrices and entanglement entropy in free lattice models, J. Phys. A: Math. Theor. 42, 504003 (2009).
Quantum noise as an entanglement meter. I Klich, L Levitov, 10.1103/physrevlett.102.100502Phys. Rev. Lett. 102100502I. Klich and L. Levitov, Quantum noise as an entanglement meter, Phys. Rev. Lett. 102, 100502 (2009).
Entanglement entropy from charge statistics: Exact relations for noninteracting many-body systems. H F Song, C Flindt, S Rachel, I Klich, K Le Hur, 10.1103/PhysRevB.83.161408Phys. Rev. B. 83161408H. F. Song, C. Flindt, S. Rachel, I. Klich, and K. Le Hur, Entanglement entropy from charge statistics: Exact relations for noninteracting many-body systems, Phys. Rev. B 83, 161408 (2011).
Quantum fluctuations of one-dimensional free fermions and Fisher-Hartwig formula for Toeplitz determinants. A G Abanov, D A Ivanov, Y Qian, 10.1088/1751-8113/44/48/485001J. Phys. A: Math. Theor. 44485001A. G. Abanov, D. A. Ivanov, and Y. Qian, Quantum fluctuations of one-dimensional free fermions and Fisher-Hartwig formula for Toeplitz determinants, J. Phys. A: Math. Theor. 44, 485001 (2011).
Counting free fermions on a line: a Fisher-Hartwig asymptotic expansion for the Toeplitz determinant in the double-scaling limit. D A Ivanov, A G Abanov, V V Cheianov, 10.1088/1751-8113/46/8/085003J. Phys. A: Math. Theor. 4685003D. A. Ivanov, A. G. Abanov, and V. V. Cheianov, Counting free fermions on a line: a Fisher-Hartwig asymptotic expansion for the Toeplitz determinant in the double-scaling limit, J. Phys. A: Math. Theor. 46, 085003 (2013).
Free fermions on a line: Asymptotics of the entanglement entropy and entanglement spectrum from full counting statistics. R Süsstrunk, D A Ivanov, 10.1209/0295-5075/100/60009Europhys. Lett. 10060009R. Süsstrunk and D. A. Ivanov, Free fermions on a line: Asymptotics of the entanglement entropy and entanglement spectrum from full counting statistics, Europhys. Lett. 100, 60009 (2013).
L Capizzi, S Murciano, P Calabrese, 10.48550/arXiv.2302.08209arXiv:2302.08209Full counting statistics and symmetry resolved entanglement for free conformal theories with interface defects (2023). L. Capizzi, S. Murciano, and P. Calabrese, Full counting statistics and symmetry resolved entanglement for free conformal theories with interface defects (2023), arXiv:2302.08209.
The Fisher-Hartwig conjecture and generalizations. E Basor, C A Tracy, 10.1016/0378-4371(91)90149-7Physica A. 177167E. Basor and C. A. Tracy, The Fisher-Hartwig conjecture and generalizations, Physica A 177, 167 (1991).
Asymptotics of Toeplitz, Hankel, and Toeplitz+Hankel determinants with Fisher-Hartwig singularities. P Deift, A Its, I Krasovsky, 10.4007/annals.2011.174.2.12Ann. Math. 1741243P. Deift, A. Its, and I. Krasovsky, Asymptotics of Toeplitz, Hankel, and Toeplitz+Hankel determinants with Fisher-Hartwig singularities, Ann. Math. 174, 1243 (2011).
Entanglement and alpha entropies for a massive Dirac field in two dimensions. H Casini, C D Fosco, M Huerta, 10.1088/1742-5468/2005/07/P07007J. Stat. Mech.: Theory Exp. 077007H. Casini, C. D. Fosco, and M. Huerta, Entanglement and alpha entropies for a massive Dirac field in two dimensions, J. Stat. Mech.: Theory Exp. 2005 (07), P07007.
Universal parity effects in the entanglement entropy of XX chains with open boundary conditions. M Fagotti, P Calabrese, 10.1088/1742-5468/2011/01/p01017J. Stat. Mech. 1017M. Fagotti and P. Calabrese, Universal parity effects in the entanglement entropy of XX chains with open boundary conditions, J. Stat. Mech. 2011, P01017 (2011).
Free-fermion entanglement and spheroidal functions. V Eisler, I Peschel, 10.1088/1742-5468/2013/04/p04028J. Stat. Mech. 4028V. Eisler and I. Peschel, Free-fermion entanglement and spheroidal functions, J. Stat. Mech. 2013, P04028 (2013).
Prolate spheroidal wave functions, Fourier analysis, and uncertainty -V: the discrete case. D Slepian, 10.1002/j.1538-7305.1978.tb02104.xBell Syst. Techn. J. 571371D. Slepian, Prolate spheroidal wave functions, Fourier analysis, and uncertainty -V: the discrete case, Bell Syst. Techn. J. 57, 1371 (1978).
Quantum spin chain, Toeplitz determinants and the Fisher-Hartwig conjecture. B Q Jin, V E Korepin, 10.1023/B:JOSS.0000037230.37166.42J. Stat. Phys. 11679B. Q. Jin and V. E. Korepin, Quantum spin chain, Toeplitz determinants and the Fisher-Hartwig conjecture, J. Stat. Phys. 116, 79 (2004).
Parity effects and universal terms of O(1) in the entanglement near a boundary. H Schlömer, C Tan, S Haas, H Saleur, 10.21468/SciPostPhys.13.5.110SciPost Phys. 13110H. Schlömer, C. Tan, S. Haas, and H. Saleur, Parity effects and universal terms of O(1) in the entan- glement near a boundary, SciPost Phys. 13, 110 (2022).
Entanglement entropy of quantum wire junctions. P Calabrese, M Mintchev, E Vicari, 10.1088/1751-8113/45/10/105206J. Phys. A: Math. Theor. 45105206P. Calabrese, M. Mintchev, and E. Vicari, Entanglement entropy of quantum wire junctions, J. Phys. A: Math. Theor. 45, 105206 (2012).
Quench dynamics of noninteracting fermions with a delta impurity. G Gouraud, P Doussal, G Schehr, 10.1088/1751-8121/ac83fbJ. Phys. A Math. Theor. 55395001G. Gouraud, P. Doussal, and G. Schehr, Quench dynamics of noninteracting fermions with a delta impurity, J. Phys. A Math. Theor. 55, 395001 (2022).
Stationary time correlations for fermions after a quench in the presence of an impurity. G Gouraud, P L Doussal, G Schehr, 10.48550/ARXIV.2211.15447arXiv:2211.15447G. Gouraud, P. L. Doussal, and G. Schehr, Stationary time correlations for fermions after a quench in the presence of an impurity (2022), arXiv:2211.15447.
Fredholm determinants, full counting statistics and Loschmidt echo for domain wall profiles in one-dimensional free fermionic chains. O Gamayun, O Lychkovskiy, J.-S Caux, 10.21468/SciPostPhys.8.3.036SciPost Phys. 836O. Gamayun, O. Lychkovskiy, and J.-S. Caux, Fredholm determinants, full counting statistics and Loschmidt echo for domain wall profiles in one-dimensional free fermionic chains, SciPost Phys. 8, 036 (2020).
On Landauer-Büttiker formalism from a quantum quench. O Gamayun, Y Zhuravlev, N Iorgov, 10.48550/ARXIV.2211.08330arXiv:2211.08330O. Gamayun, Y. Zhuravlev, and N. Iorgov, On Landauer-Büttiker formalism from a quantum quench (2022), arXiv:2211.08330.
Time evolution of entanglement negativity across a defect. M Gruber, V Eisler, 10.1088/1751-8121/ab831cJ. Phys. A: Math. Theor. 53205301M. Gruber and V. Eisler, Time evolution of entanglement negativity across a defect, J. Phys. A: Math. Theor. 53, 205301 (2020).
Entanglement evolution after a global quench across a conformal defect. L Capizzi, V Eisler, 10.48550/ARXIV.2209.03297arxiv:2209.03297L. Capizzi and V. Eisler, Entanglement evolution after a global quench across a conformal defect (2022), arxiv:2209.03297.
Domain wall melting across a defect. L Capizzi, S Scopa, F Rottoli, P Calabrese, 10.1209/0295-5075/acb50aEurophys. Lett. 14131002L. Capizzi, S. Scopa, F. Rottoli, and P. Calabrese, Domain wall melting across a defect, Europhys. Lett. 141, 31002 (2023).
Entanglement evolution across a conformal interface. X Wen, Y Wang, S Ryu, 10.1088/1751-8121/aab561J. Phys. A: Math. Theor. 51195004X. Wen, Y. Wang, and S. Ryu, Entanglement evolution across a conformal interface, J. Phys. A: Math. Theor. 51, 195004 (2018).
P Francesco, P Mathieu, D Sénéchal, 10.1007/978-1-4612-2256-9Conformal Field Theory. Springer Science & Business MediaP. Francesco, P. Mathieu, and D. Sénéchal, Conformal Field Theory (Springer Science & Business Media, 1997).
A O Gogolin, A A Nersesyan, A M Tsvelik, Bosonization and strongly correlated systems. Cambridge university pressA. O. Gogolin, A. A. Nersesyan, and A. M. Tsvelik, Bosonization and strongly correlated systems (Cambridge university press, 2004).
The upper half-plane is invariant under w → w + (real translation) and w → e w (scaling). ∈ , The upper half-plane is invariant under w → w + (real translation) and w → e w (scaling), ∈ R.
Entanglement of low-energy excitations in conformal field theory. F C Alcaraz, M I Berganza, G Sierra, 10.1103/PhysRevLett.106.201601Phys. Rev. Let. 106201601F. C. Alcaraz, M. I. Berganza, and G. Sierra, Entanglement of low-energy excitations in conformal field theory, Phys. Rev. Let. 106, 201601 (2011).
J Cardy, 10.48550/arXiv.hep-th/0411189arxiv:0411189Boundary conformal field theory. J. Cardy, Boundary conformal field theory (2004), arxiv:0411189.
Symmetry resolved entanglement entropy of excited states in a CFT. L Capizzi, P Ruggiero, P Calabrese, 10.1088/1742-5468/ab96b6J. Stat. Mech.: Theory Exp. 2020773101L. Capizzi, P. Ruggiero, and P. Calabrese, Symmetry resolved entanglement entropy of excited states in a CFT, J. Stat. Mech.: Theory Exp. 2020 (7), 073101.
Symmetry resolved entanglement: exact results in 1d and beyond. S Fraenkel, M Goldstein, 10.1088/1742-5468/ab7753J. Stat. Mech.: Theory Exp. 2020333106S. Fraenkel and M. Goldstein, Symmetry resolved entanglement: exact results in 1d and beyond, J. Stat. Mech.: Theory Exp. 2020 (3), 033106.
The entanglement entropy of one-dimensional systems in continuous and homogeneous space. P Calabrese, M Mintchev, E Vicari, 10.1088/1742-5468/2011/09/p09028Theory Exp. 2011 (09). 9028P. Calabrese, M. Mintchev, and E. Vicari, The entanglement entropy of one-dimensional systems in continuous and homogeneous space, J. Stat. Mech.: Theory Exp. 2011 (09), P09028.
Entanglement entropy of one-dimensional gases. P Calabrese, M Mintchev, E Vicari, 10.1103/PhysRevLett.107.020601Phys. Rev. Lett. 10720601P. Calabrese, M. Mintchev, and E. Vicari, Entanglement entropy of one-dimensional gases, Phys. Rev. Lett. 107, 020601 (2011).
| [] |
[
"Irreducible components of Hilbert schemes of rational curves with given normal bundle",
"Irreducible components of Hilbert schemes of rational curves with given normal bundle"
] | [
"Alberto Alzati ",
"Riccardo Re "
] | [] | [
"Algebraic Geometry"
] | We develop a new general method for computing the decomposition type of the normal bundle to a projective rational curve. This method is then used to detect and explain an example of a reducible Hilbert scheme parametrizing all the rational curves in P s with a given decomposition type of the normal bundle. We also characterize smooth nondegenerate rational curves contained in rational normal scrolls in terms of the splitting type of their restricted tangent bundles and compute their normal bundles. | 10.14231/ag-2017-004 | null | 117,366,547 | 1502.02521 | cf274f821495850adb82451076d0b8beed3adedd |
Irreducible components of Hilbert schemes of rational curves with given normal bundle
2017
Alberto Alzati
Riccardo Re
Irreducible components of Hilbert schemes of rational curves with given normal bundle
Algebraic Geometry
41201710.14231/AG-2017-004To Rosario Strano, on his 70th birthday
We develop a new general method for computing the decomposition type of the normal bundle to a projective rational curve. This method is then used to detect and explain an example of a reducible Hilbert scheme parametrizing all the rational curves in P s with a given decomposition type of the normal bundle. We also characterize smooth nondegenerate rational curves contained in rational normal scrolls in terms of the splitting type of their restricted tangent bundles and compute their normal bundles.
Introduction
The projective rational curves C ⊂ P s of degree d form a quasi-projective irreducible subscheme H rat d,s of the Hilbert scheme of P s . Any of these curves is the image of a birational map f : P 1 → P s , defined up an automorphism of P 1 . If one restricts oneself to rational curves with ordinary singularities, one may classify these curves by considering the splitting types as a direct sum of line bundles of the vector bundles f * T P s and N f = f * T P s /T P 1 , commonly called the restricted tangent bundle and the normal bundle of the curve C, respectively. It is well known that the classification of rational curves by the splitting type of f * T P s produces irreducible subvarieties of H rat d,s ; see [Ver83,Ram90]. One can also look at [AR15] for a geometric characterization of rational curves with a given splitting of f * T P s and at [Iar14] for related results in the commutative algebra language.
Since the early eighties of the past century, a natural question about rational curves in projective spaces has been whether the subschemes of H rat d,s characterized by a given splitting of N f are irreducible as well. This has been proved to be true for rational curves in P 3 , see [EvdV81,EvdV82,GS80]. The irreducibility problem has also been shown to have a positive answer for the general splitting type of N f , see [Sac80], and more recently other results related to this problem have been obtained in [Ran07] and [Ber14]. However, the general irreducibility problem remained open.
In this paper we show that the irreducibility problem has a negative answer in general, producing the first known example of a reducible Hilbert scheme of rational curves characterized by a given splitting of N f . In order to achieve this, we develop a new general method to compute the spaces of global sections H 0 N f (−k) and therefore the splitting type of N f .
Notation and summary of results
A rational curve C ⊂ P s is a curve that can be birationally parametrized by a regular map f : P 1 → P s . We will always assume that C is non-degenerate, that is, not contained in any hyperplane H ⊂ P s , and of degree d > s with s 3; in particular, we are excluding the well-known case of the rational normal curves. Let I C be the ideal sheaf of C in P s ; then the normal sheaf of C is the sheaf N C = Hom O C (I C /I 2 C , O C ). Recall also that the tangent sheaf of a noetherian scheme X over Spec(C) is defined as T X = Hom O X (Ω 1 X/C , O X ). Taking the differential of the parametrization map f produces an exact sequence
0 → T P 1 df → f * T P s → f * N C .
When C has ordinary singularities, df is a vector bundle embedding and the sequence
0 → T P 1 df → f * T P s → f * N C → 0
is exact and identifies f * N C as the quotient bundle f * T P s /df (T P 1 ). We will write f * N C = N f and call this vector bundle the normal bundle to C. Therefore we will assume that C is irreducible and with ordinary singularities when we will be dealing with the normal bundle N f associated with a given parametrization f : P 1 → C.
Given a multiset of s − 1 integers c = c 1 , c 2 , . . . , c s−1 , ordered in such a way that c 1 c 2 · · · c s−1 , we will denote by H c the Hilbert scheme of irreducible degree d rational curves with ordinary singularities C ⊂ P s that can be birationally parametrized by a map f : P 1 → P s such that the normal bundle N f splits as N f = s−1 i=1 O(c i + d + 2). Let U ∼ = C 2 be a 2-dimensional vector space and P 1 = P(U ) its associated projective line. Let S d U be the dth symmetric product of U . Let ν d : P(U ) → P(S d U ) be the dth Veronese embedding, and let us consider the rational normal curve C d = ν d (P(U )).
Our main general result is Theorem 4.1. After representing, up to projective transformations, a degree d rational curve as the projection of C d from a vertex P(T ) ⊂ P(S d U ), we prove an identification of the spaces of global sections
H 0 T f (−d − 2 − k) and H 0 N f (−d − 2 − k) with the spaces ker D ∩ (S k U ⊗ T ) ⊂ S k U ⊗ S d U and ker D 2 ∩ (S k U ⊗ T ) ⊂ S k U ⊗ S d U , respectively,
where D is the first-order transvectant operator, that is, D = ∂ x ⊗ ∂ y − ∂ y ⊗ ∂ x , with x, y a basis of U and ∂ x , ∂ y the dual basis, acting by derivation. By means of this result one can relate the splitting types of T f and N f with the position of the vertex P(T ) with respect to the rational normal curve C d .
In Section 6 we introduce and discuss our example of a Hilbert scheme H c of rational curves C ⊂ P 8 of degree d = 11 with exactly two irreducible components of dimension 98 whose general points represent smooth rational curves, therefore providing a counterexample to the above-mentioned irreducibility problem.
In Section 7, Theorem 7.3, we give a characterization of smooth rational curves contained in rational normal scrolls in terms of the splitting type of their restricted tangent bundles and compute their normal bundles. The same theorem also shows how to construct these curves as Normal bundles of rational curves projections of a rational normal curve.
Rational curves as projections of a rational normal curve
Given a C-vector space W , we denote by P(W ) the projective space of 1-dimensional subspaces of W . More generally, we denote by Gr(e + 1, W ) or Gr(e, P(W )) the Grassmannian of (e + 1)dimensional subspaces of W , or equivalently, of e-dimensional linear subspaces of P(W ). If T ⊆ W is an (e + 1)-dimensional subspace, we will denote its associated point in Gr(e, P(W )) by [T ] or [P(T )]. Accordingly, if w ∈ W is a non-zero vector, we will denote its associated point by [w] ∈ P(W ).
Let U ∼ = C 2 be a 2-dimensional vector space and P 1 = P(U ) its associated projective line. Let S d U be the dth symmetric product of U . Let ν d :
P 1 → P(S d U ) = P d the dth Veronese embedding, defined by ν d (p) = [p d ].
We set C d = ν d (P 1 ), which is the rational normal curve given by the set of pure tensors in S d U . For any b 1, we denote by
Sec b−1 C d the closure of the set of [τ ] ∈ P(S d U ) such that τ = p d 1 + · · · + p d b , for [p i ] ∈ C d distinct points, that is, the (b − 1)st secant variety of C d .
Let C ⊂ P s = P(V ) be a non-degenerate rational curve of degree d. For the next considerations we will not need to assume that C has ordinary singularities. The normalization map
ν C : P(U ) → C is the restriction of a map f : P(U ) → P s such that f * O P s (1) = ν * C O C (1) = O P 1 (d). The map f is defined by an injection f * : H 0 O P s (1) = V * → H 0 O P 1 (d) = S d U * such that f * (V * ) spans O P 1 (d) at any point of P 1 . Let us set T = f * (V * ) ⊥ ⊂ S d U , e + 1 = dim T = d − s .
Then one sees that the map f * can be identified with the dual of the map S d U → S d U/T ∼ = → V . In particular, up to a linear isomorphism, we identify P s and P(S d U/T ), and the map f and the composition f = π T • ν d , where π T : P(S d U ) P(S d U/T ) is the projection of the vertex P(T ). We want to underline the fact that for any ψ ∈ Aut(P s ), the curve C = ψ(C) is obtained by changing f * : V * → S d U * into g * = f * • ψ, with ψ ∈ GL(V * ) a linear automorphism representing ψ. Hence the space T = f * (V * ) ⊥ is not affected by such a transformation. This means that one has a natural bijection between the set of orbits of maps f : P 1 → P s under the left action of PGL(s + 1) and the set of projection vertexes P(T ) obtained as above.
We recall that the condition that f * (V * ) spans O P1 (d) at any point of P 1 is equivalent to P(T ) ∩ C d = ∅, and the fact that f is birational to the image corresponds to the fact that P(T ) ∩ Sec 1 C d is finite.
The discussion above shows that the Hilbert scheme H rat d,s of rational curves in P s is set-theoretically described as the set of images of rational maps π T • ν d composed with projective transformations of P s , with the extra condition that the map π T • ν d : P 1 → P s is birational to the image. More precisely, for V the open subset of [T ] ∈ Gr(e + 1, S d U ) such that P(T ) ∩ C d is empty and P(T ) ∩ Sec 1 C d is finite, we see that there exists a map V × PGL(s + 1) → H rat d,s mapping ([T ], φ) ∈ V × PGL(s + 1) to the curve C = φ(π T (C d )), and this map is surjective.
2.1 PGL(2)-action on the space of vertexes P(T )
Let us fix a map f = π T • ν d : P 1 → P s , associated with a vertex P(T ) as in the construction above. Let us consider an automorphism φ ∈ PGL(2). We will denote by the same letter φ a fixed representative of the given automorphism as an element of GL(2). One observes that the d-fold symmetric product S d φ of the map φ acts on S d U by the action on generators (S d φ)(l d ) = φ(l) d , and one can define the induced action on the Grassmannian Gr(e + 1, S d U ) by [T ] → [(S d φ)(T )].
Now, let us consider the composition
f φ = f • φ −1 : P 1 → P s .
One has the following formula:
f φ = π (S d φ)(T ) • ν d .
(2.1)
Indeed, we know that f is determined by the subspace T ⊥ ⊂ S d U * ; let us write T ⊥ = g 0 , . . . , g s . Then f φ is determined by W = g 0 •φ −1 , . . . , g s •φ −1 , and by the GL(2)-invariance of the duality
pairing S d U * ⊗ S d U → C, one immediately sees that W = (S d φ)(T ) ⊥ ⊂ S d U * .
Above, we saw that the space of maps f : P 1 → P s that birationally parametrize a nondegenerate rational curve C ⊂ P s of degree d is identified with V × PGL(s + 1), by mapping
([T ], φ) to f = φ • (π T • ν d ).
We then showed that the right PGL(2)-action on this space of maps can be identified with the left action of PGL(2) on V × PGL(s + 1) defined by its left action on V.
2.2 Irreducibility criteria and dimension formulas
To show the irreducibility of a subscheme H P ⊆ H rat d,s defined by a geometric property P on rational curves C ⊂ P s , it will be sufficient to prove the irreducibility of the subvariety V P of those [T ] ∈ Gr(e + 1, S d U ) such that the curve C = π T (C d ) satisfies property P . Indeed, in that case V P × PGL(s + 1) → H P is onto, with irreducible domain. To compute dim H P from the map π : V P × PGL(s + 1) → H P , one applies the following result, which is almost obvious and very well known in the special case H P = H rat d,s , but which we will need in the more general form stated here.
Proposition 2.1. With the notation set above, if V P is irreducible, then H P is irreducible of dimension dim H P = dim V P + dim PGL(s + 1) − 3.
Proof. From the above discussion it follows that the fiber over an arbitrary [C] ∈ H P is
π −1 ([C]) = Orb([T ]) × Stab(C) ,
with Orb([T ]) the orbit of [T ] under the action of PGL(2) on the Grassmannian Gr(e + 1, S d U ) and Stab(C) ⊂ PGL(s + 1) the group of projective transformations preserving C.

First, we consider the case when dim Orb([T ]) < 3 = dim PGL(2), that is, when [T ] is fixed by some 1-dimensional subgroup of PGL(2). The 1-dimensional subgroups of PGL(2) either fix one point [x] ∈ P 1 and contain the translation group acting on the basis x, y as (x, y) → (x, y + αx), with α ∈ C, or fix two points [x], [y] ∈ P 1 and contain the group (x, y) → (x, λy), with λ ∈ C * . Any subspace T ⊂ S d U fixed by a group of the first type must contain the pure tensor [x d ], and hence [T ] ∉ V. A space fixed by a subgroup of the second type is necessarily monomial; that is, T = ⟨x ν 0 y d−ν 0 , . . . , x ν e y d−ν e ⟩. One can see that such a space gives a point [T ] ∈ V, that is, P(T ) ∩ Sec 1 C d = ∅, if and only if d − 2 ≥ ν 0 > · · · > ν e ≥ 2, and hence it can exist if d − 3 ≥ e + 1. In this case one sees dim Orb(T ) = dim PGL(2) − dim Stab(T ) = 2.

Now, we consider the cases when dim Stab(C) > 0. A classical reference for this class of curves, called the algebraic Klein-Lie curves, or algebraic W-curves, is for example [EC34, libro V, § 24]. In a suitable coordinate system, any 1-dimensional subgroup of PGL(s + 1) whose orbits in P s are not lines takes the form t → diag(t µ 0 , . . . , t µ s ), with µ i ∈ Z normalized and ordered such that 0 = µ 0 ≤ · · · ≤ µ s . Its orbits t → (α 0 t µ 0 : · · · : α s t µ s ) represent non-degenerate rational curves of degree d if and only if the integers µ i are distinct, α i ≠ 0 for all i = 0, . . . , s and µ s = d. Hence there exists only a finite number of possible choices of such integers µ 0 , . . . , µ s for a fixed d, that is, a finite number of non-degenerate degree d Klein-Lie curves in P s up to projective equivalence. All of them can be obtained up to projective equivalence as projections C = π T (C d ) in the following way. For any fixed basis x, y ∈ U consider the vertex P(T ) generated by the monomials x ν 0 y d−ν 0 , . . . , x ν e y d−ν e , with e + 1 = d − s and {ν 0 , . . . , ν e } = {0, . . . , d} \ {µ 0 , . . . , µ s }. Then C = π T (C d ) is a curve parametrized as t → (t µ 0 : · · · : t µ s ) with respect to the basis given by the classes of the monomials x µ i y d−µ i in S d U/T . Hence we have found that non-degenerate rational curves with dim Stab(C) > 0 come from those vertexes P(T ) with dim Orb([T ]) = 2 that were already analyzed above. In all those cases one has dim π −1 ([C]) = dim(Orb(T ) × Stab(C)) = 2 + 1 = 3 .
In any other case one has dim Orb(T ) = 3 and dim Stab(C) = 0.
2.3 A classification of the projection vertexes P(T )
Let us consider a non-zero subspace T ⊆ S d U , with d ≥ 2. Let us denote by x, y a basis of U and by u, v the dual basis in U * . Recall that u, v may be identified with ∂ x , ∂ y acting as linear forms on U , and an arbitrary element ω ∈ U * will be written ω = α∂ x + β∂ y for suitable α, β ∈ C. We define
∂T = ⟨ω(T ) | ω ∈ U * ⟩ . (2.2)
We remark that if U = ⟨x, y⟩, then ∂T = ∂ x T + ∂ y T . One observes that in the trivial case T = S d U , we have ∂T = S d−1 U . One can see that this is the only possible case when dim ∂T < dim T , either as an easy exercise or as a consequence of Proposition 2.3 below. We also introduce the space ∂ −1 T ⊂ S d+1 U defined in the following way:
∂ −1 T = ⋂_{ω ∈ U *} ω −1 (T ) . (2.3)
In this case we have ∂ −1 T = ∂ −1 x T ∩ ∂ −1 y T . Of course one has ∂ −1 S d U = S d+1 U . For g ∈ S d+b U we introduce the vector space
∂ b (g) = ⟨∂ b x g, ∂ b−1 x ∂ y g, . . . , ∂ b y g⟩ ⊆ S d U . (2.4)
By convention, we set ∂ b (g) = 0 if b = −1.
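As a concrete illustration (our addition, not part of the original arguments), the following Python/sympy sketch computes the dimensions of ∂T and ∂ b (g) for a monomial subspace of S 11 U that will reappear in Section 6; the helper names span_dim, partial_T and partial_b are ours.

```python
import sympy as sp

x, y = sp.symbols('x y')

def span_dim(polys):
    """Dimension of the linear span of a family of binary forms,
    computed as the rank of their coefficient matrix."""
    ds = [sp.Poly(sp.expand(p), x, y).as_dict() for p in polys]
    monos = sorted({m for d in ds for m in d})
    return sp.Matrix([[d.get(m, 0) for m in monos] for d in ds]).rank()

def partial_T(T):
    """A spanning set for dT = d_x T + d_y T, as in formula (2.2)."""
    return [sp.diff(t, v) for t in T for v in (x, y)]

def partial_b(g, b):
    """A spanning set for d^b(g), as in formula (2.4)."""
    out = []
    for i in range(b + 1):
        h = g
        for _ in range(b - i):
            h = sp.diff(h, x)
        for _ in range(i):
            h = sp.diff(h, y)
        out.append(h)
    return out

T = [x**3 * y**8, x**4 * y**7, x**7 * y**4]   # a 3-dimensional subspace of S^11 U
print(span_dim(T), span_dim(partial_T(T)))    # 3 5: here dim dT = dim T + 2
print(span_dim(partial_b(x**4 * y**8, 2)))    # 3 = dim d^2(x^4 y^8)
```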
2.4 The numerical type of a subspace T ⊂ S d U
We will need the following notation and results from the article [AR15].
Definition 2.2. We will say that a proper linear space P(S) ⊂ P d is C d -generated if P(S) is generated by its schematic intersection with C d . Setting a + 1 = dim S, we will also say in this case that P(S) is (a+1)-secant to C d . We will say that a vector subspace
S ⊆ S d U is C d -generated if P(S) is C d -generated.
Notation. Given a proper subspace T ⊂ S d U , we denote by S T the smallest subspace containing the schematic intersection P(T ) ∩ C d as a subscheme. We set a = dim S T − 1 = dim P(S T ), with the convention that dim ∅ = −1.

Proposition 2.3 ([AR15, Theorem 1]). Let T be a proper subspace of S d U . Let S T be as defined above. Then dim ∂S T = dim S T . Moreover, if we define r = dim ∂T − dim T , then r ≥ 0 and either r = 0, in which case one has T = S T and T is C d -generated, or r ≥ 1 and there exist forms f 1 , . . . , f r , with f i ∈ P(S d+b i U ) \ Sec b i C d+b i for i = 1, . . . , r, with b 1 ≥ · · · ≥ b r ≥ 0, such that T and ∂T are the direct sums
T = S T ⊕ ∂ b 1 (f 1 ) ⊕ · · · ⊕ ∂ b r (f r ) , ∂T = ∂S T ⊕ ∂ b 1 +1 (f 1 ) ⊕ · · · ⊕ ∂ b r +1 (f r ) .
The (r + 1)-uple (a, b 1 , . . . , b r ) is uniquely determined from T . A space T as above exists if and only if a ≥ −1, b i ≥ 0 for all i = 1, . . . , r and a + 1 + Σ_i (b i + 2) ≤ d.
Definition 2.4. We say that a subspace T as in Proposition 2.3 has numerical type (a, b 1 , . . . , b r ). If S T = 0, that is, P(T ) ∩ C d = ∅, then a = −1 and we will say that T has type (b 1 , . . . , b r ).
Let us also recall the following result from [AR15].
Proposition 2.5 ([AR15, Proposition 5]). Assume that T ⊆ S d U has type (a, b 1 , . . . , b r ), so that it has a decomposition
T = S T ⊕ r i=1 ∂ b i (f i )
satisfying the requirements of Proposition 2.3. Then ∂ −1 (S T ) = S ∂ −1 T and dim ∂ −1 (S T ) = dim S T = a + 1, and there exists a decomposition
∂ −1 T = ∂ −1 S T ⊕ ⨁_{i : b i ≥ 1} ∂ b i −1 (f i ) .
In particular, ∂ −1 T has type (a, b 1 − 1, . . . , b r 1 − 1) with r 1 = max{i : b i ≥ 1}.
2.5 The splitting type of the restricted tangent bundle of rational curves
The main result of [AR15] about the splitting type of the restricted tangent bundle f * T P s , that we will write as T f for short, of a parametrized rational curve f : P 1 → P s is the following.
Proposition 2.6 ([AR15, Theorem 3]). Assume that f : P 1 → P s is obtained by projecting the rational normal curve C d from a vertex P(T ) with T of type (b 1 , . . . , b r ). Then r ≤ s and the splitting type of T f is
T f = O P 1 (b 1 + d + 2) ⊕ · · · ⊕ O P 1 (b r + d + 2) ⊕ O s−r P 1 (d + 1)
. We also recall the restricted Euler sequence
0 → O P 1 → (S d U/T ) ⊗ O P 1 (d) → T f → 0 , from which one gets deg T f = (s + 1)d.
3. Review of some SL(U )-invariant operators

In this section we will review some well-known invariant operators between spaces of tensors on U or U * , for the convenience of the reader and for later reference. Invariance will mean GL(U )- or SL(U )-invariance.

3.1 The duality pairing
The duality pairing is the natural pairing S d U * ⊗ S d U → C that identifies either of the two spaces as the dual of the other. It may be defined by considering any element of S d U * as a differential operator on S d U . More precisely, if x, y ∈ U and u, v ∈ U * are dual bases, then one has the formula
f (u, v) ∈ S d U * , l = λx + µy ∈ U ⇒ f (l d ) = d! f (λ, µ) .
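This pairing is easy to check by machine. The following sympy sketch (an illustration of ours; apply_form is a hypothetical helper, not notation from the paper) verifies the displayed formula on a sample form f ∈ S 5 U * .

```python
import sympy as sp

x, y, u, v, lam, mu = sp.symbols('x y u v lam mu')

def apply_form(f_uv, g):
    """Apply f in S^d U* to g in S^d U, letting u, v act as d/dx, d/dy."""
    res = sp.Integer(0)
    for (a, b), c in sp.Poly(f_uv, u, v).as_dict().items():
        h = g
        for _ in range(a):
            h = sp.diff(h, x)
        for _ in range(b):
            h = sp.diff(h, y)
        res += c * h
    return sp.expand(res)

d = 5
f = 3*u**2*v**3 - u**5                 # an element of S^5 U*
l_pow_d = (lam*x + mu*y)**d            # the pure tensor l^d
lhs = apply_form(f, l_pow_d)
rhs = sp.factorial(d) * f.subs({u: lam, v: mu})
print(sp.expand(lhs - rhs))            # 0, confirming f(l^d) = d! f(lam, mu)
```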
3.2 General contractions
The contraction maps
S k U * ⊗ S b U → S b−k U ,
defined for any 0 ≤ k ≤ b, or the analogous maps interchanging U and U * , can be interpreted in a way similar to that given in 3.1 by letting the tensors in S k U * act on S b U as differential operators. The following formulas are straightforward consequences of the definition of the action of f ∈ S k U * as a differential operator:
f (l b ) = \binom{b}{k} f (l k ) l b−k , (3.1)
f (η(g)) = (ηf )(g) , ∀ f ∈ S k U * , ∀ η ∈ U * , ∀ g ∈ S b+1 U . (3.2)
3.3 The multiplication maps
The multiplication maps are the maps m :
S i U ⊗ S j U → S i+j U , or the same with U * in the place of U , defined on pure generators by m(l i ⊗ h j ) = l i h j .
3.4 The polarization maps
The polarization maps are maps p k : S d+k U → S k U ⊗ S d U proportional to duals of the multiplication maps m : S k U * ⊗ S d U * → S d+k U * , with proportionality factor determined such that m(p k (f )) = f for any f ∈ S d+k U . For this reason, the polarization maps are always injective. The maps p k are uniquely defined by
p k (l d+k ) = l k ⊗ l d .
One has the following well-known closed formula for p k in terms of a fixed basis x, y for U :
p_k(f) = \frac{(\deg f - k)!}{(\deg f)!} \sum_{i=0}^{k} \binom{k}{i} x^{k-i} y^{i} \otimes \partial_x^{k-i}\partial_y^{i}(f) . (3.3)
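The closed formula (3.3) can be tested directly. In the following sketch (ours; tensors in S k U ⊗ S d U are modelled as polynomials bihomogeneous in (x1, y1) and (x2, y2), a representation we choose for convenience) we verify that m(p k (f )) = f and that p k (l d+k ) = l k ⊗ l d .

```python
import sympy as sp

x, y, x1, y1, x2, y2, lam, mu = sp.symbols('x y x1 y1 x2 y2 lam mu')

def dxy(f, a, b):
    for _ in range(a):
        f = sp.diff(f, x)
    for _ in range(b):
        f = sp.diff(f, y)
    return f

def p_k(f, k):
    """Polarization via formula (3.3); the S^k U factor lives in (x1, y1),
    the S^{deg f - k} U factor in (x2, y2)."""
    D = sp.Poly(f, x, y).total_degree()
    s = sum(sp.binomial(k, i) * x1**(k - i) * y1**i *
            dxy(f, k - i, i).subs({x: x2, y: y2}, simultaneous=True)
            for i in range(k + 1))
    return sp.expand(s * sp.factorial(D - k) / sp.factorial(D))

def mult(t):
    """The multiplication map m : S^k U (x) S^d U -> S^{k+d} U."""
    return sp.expand(t.subs({x1: x, y1: y, x2: x, y2: y}, simultaneous=True))

f = (x + 2*y)**3 * (x - y)**4                      # an element of S^7 U
print(sp.expand(mult(p_k(f, 3)) - f))              # 0: m(p_k(f)) = f
l7 = (lam*x + mu*y)**7
print(sp.expand(p_k(l7, 3) - (lam*x1 + mu*y1)**3 * (lam*x2 + mu*y2)**4))  # 0
```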
3.5 The multiplication by ξ = x ⊗ y − y ⊗ x
The element ξ = x ⊗ y − y ⊗ x is an SL(U )-invariant element of U ⊗ U , which indeed generates the irreducible subrepresentation of GL(U ) given by U ∧ U ⊂ U ⊗ U . The multiplication by ξ acts in the following way:
ξ : S i−1 U ⊗ S j−1 U → S i U ⊗ S j U .
Observe that for any k ≤ d one has the direct sum decomposition
S k U ⊗ S d U = p k S d+k U ⊕ ξp k−1 S d+k−2 U ⊕ · · · ⊕ ξ k p 0 S d−2k U . (3.4)
Here we set S i U = 0 if i < 0. This decomposition is equal to the Pieri decomposition of S k U ⊗S d U as a GL(U )-representation, for which we refer to [FH91]. Note that grouping the terms in (3.4)
in a suitable way, one obtains
S^kU \otimes S^dU = p_k(S^{d+k}U) \oplus \xi\,(S^{k-1}U \otimes S^{d-1}U) , (3.5)
S^kU \otimes S^dU = p_k(S^{d+k}U) \oplus \xi\, p_{k-1}(S^{d+k-2}U) \oplus \xi^2 (S^{k-2}U \otimes S^{d-2}U) . (3.6)

3.6 The operator D = D_{x,y} = ∂ x ⊗ ∂ y − ∂ y ⊗ ∂ x
The operator D = D_{x,y} = ∂ x ⊗ ∂ y − ∂ y ⊗ ∂ x is classically known as the first-order transvectant; see, for example, [Olv99, Definition 5.2]. If (x′, y′) = (x, y)A is a new basis for U , then the operator D transforms as D_{x′,y′} = (det A)^{−1} D_{x,y} ; see [Olv99, formula (5.3)].
In particular, D is invariant with respect to the SL(U )-representation on U * ⊗U * . In this article we will consider the following actions of D as a differential operator:
D : S k U ⊗ S d U → S k−1 U ⊗ S d−1 U .
The operator D satisfies the following property.
Lemma 3.1. For any τ ∈ S k−1 U ⊗ S d−1 U one has D(ξτ ) = (d + k)τ + ξD(τ ) .
Moreover, one has D(p k (f )) = 0 for any f ∈ S d+k U .
We omit the proof, which can be carried out by a direct computation, reducing to the case τ = x k−1 ⊗ y d−1 by linearity and SL(2)-invariance. One consequence of the lemma above is the following.
Corollary 3.2. For any d, k ≥ 1 or d, k ≥ 2, respectively, the following sequences are exact:
0 \to p_k(S^{d+k}U) \to S^kU \otimes S^dU \xrightarrow{D} S^{k-1}U \otimes S^{d-1}U \to 0 ,
0 \to p_k(S^{d+k}U) \oplus \xi\, p_{k-1}(S^{d+k-2}U) \to S^kU \otimes S^dU \xrightarrow{D^2} S^{k-2}U \otimes S^{d-2}U \to 0 .
Proof. We start with the first sequence. The fact that the sequence is a complex is the second statement of Lemma 3.1. By the first statement of Lemma 3.1 and by (3.4) and (3.5), the operator D maps the subspace ξ(
S k−1 U ⊗ S d−1 U ) of the space S k U ⊗ S d U = p k (S d+k U ) ⊕ ξ(S k−1 U ⊗ S d−1 U ) onto S k−1 U ⊗ S d−1 U .
The exactness in the middle also follows from the decomposition (3.5). The proof of the exactness of the second sequence is very similar. One first shows
D 2 (p k (S d+k U ) ⊕ ξ p k−1 (S d+k−2 U )) = 0
by applying Lemma 3.1 twice. Then the exactness follows from (3.4) and (3.6) in a way similar to that for the first sequence.
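Both Lemma 3.1 and the surjectivity statement of Corollary 3.2 can be verified numerically in a polynomial model of the tensor spaces. The sketch below (our illustration, for the sample values k = 3, d = 5) checks the identity of Lemma 3.1 on a sample τ and computes the rank and kernel dimension of D.

```python
import sympy as sp

x1, y1, x2, y2 = sp.symbols('x1 y1 x2 y2')
xi = x1*y2 - y1*x2                                  # the invariant element xi

def D(t):
    """The transvectant on S^k U (x) S^d U, the two tensor factors being
    modelled in the variable pairs (x1, y1) and (x2, y2)."""
    return sp.expand(sp.diff(t, x1, y2) - sp.diff(t, y1, x2))

# Lemma 3.1 for k = 3, d = 5, on a sample tau of bidegree (2, 4):
k, d = 3, 5
tau = x1**2*y2**4 - 2*x1*y1*x2**2*y2**2 + y1**2*x2*y2**3
print(sp.expand(D(xi*tau) - ((d + k)*tau + xi*D(tau))))        # 0

# Corollary 3.2, first sequence: D is onto, with kernel of dimension d + k + 1.
basis = [x1**(k - a)*y1**a * x2**(d - b)*y2**b
         for a in range(k + 1) for b in range(d + 1)]
ds = [sp.Poly(D(m), x1, y1, x2, y2).as_dict() for m in basis]
monos = sorted({mm for dd in ds for mm in dd})
rk = sp.Matrix([[dd.get(mm, 0) for mm in monos] for dd in ds]).rank()
print(rk == k*d, len(basis) - rk == d + k + 1)                 # True True
```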
In a different vein, one can use the operator D 2 to produce the invariant map
S^kU \otimes S^bU^* \xrightarrow{D^2} S^{k-2}U \otimes S^{b+2}U^* . (3.7)
In this map, the tensor D 2 = ∂ 2 x ⊗ ∂ 2 y − 2 ∂ x ∂ y ⊗ ∂ x ∂ y + ∂ 2 y ⊗ ∂ 2 x acts by contraction on the S k U -component and by multiplication on the S b U * -component. Later, we will need the following result.
Proposition 3.3. The map (3.7) has maximal rank for any b ≥ 0 and k ≥ 2.
Proof. We use the identification φ :
U * → U that maps α∂ x + β∂ y to −βx + αy. Note that φ is SL(2)-invariant, as φ ∧ φ maps ∂ x ∧ ∂ y to y ∧ (−x) = x ∧ y. Then for any i, j ≥ 0, the map 1 ⊗ S j (φ) : S i U ⊗ S j U * → S i U ⊗ S j U is an isomorphism.
We can rewrite the map (3.7) in terms of these identifications as follows:
S^kU \otimes S^bU \xrightarrow{\delta^2} S^{k-2}U \otimes S^{b+2}U , with δ 2 = ∂ 2 x ⊗ x 2 + 2 ∂ x ∂ y ⊗ xy + ∂ 2 y ⊗ y 2 = (∂ x ⊗ x + ∂ y ⊗ y) 2 ,
acting as before by contraction on S k U and by multiplication on S b U . Now the fact that δ 2 has maximal rank is a consequence of the following more general result.
Lemma 3.4. For any (n + 1)-dimensional C-vector space V = ⟨x 0 , . . . , x n ⟩ and any k ≥ a and b ≥ 0,
for δ = ∂ x 0 ⊗ x 0 + · · · + ∂ x n ⊗ x n , the map S^kV \otimes S^bV \xrightarrow{\delta^a} S^{k-a}V \otimes S^{b+a}V (3.8)
has maximal rank.
The result above is already known; for example, one can see that it is a consequence of [Re12, Theorem 2]. However, we find it more convenient to give a new proof here, since we did not find any clear reference for the statement above in the existing literature.
Sketch of proof. We use the invariance of δ and the Pieri decompositions of
S k V ⊗ S b V and S k−a V ⊗ S b+a V as SL(V )-modules. As is well known,
S^kV \otimes S^bV = \bigoplus_{i=0}^{\min(k,b)} S_{(k+b-i,i)}V , (3.9)
where S (k+b−i,i) V is the SL(V )-irreducible tensor space resulting by applying to V the Schur functor associated with the Young diagram with two rows of lengths k + b − i and i, respectively. One has the similar decomposition
S^{k-a}V \otimes S^{b+a}V = \bigoplus_{i=0}^{\min(k-a,a+b)} S_{(k+b-i,i)}V . (3.10)
Note that if b ≤ k − a, then all the summands S (k+b−i,i) V appearing in (3.9) also appear in (3.10) and, on the other hand, if b ≥ k − a, then all the summands in (3.10) appear in (3.9). Then the proof is complete if one shows that for any summand appearing in both the formulas above, the composition
S_{(k+b-i,i)}V \hookrightarrow S^kV \otimes S^bV \xrightarrow{\delta^a} S^{k-a}V \otimes S^{b+a}V \twoheadrightarrow S_{(k+b-i,i)}V
is non-zero and hence an isomorphism. It is well known that the first invariant inclusion identifies
S (k+b−i,i) V as the subspace of S k V ⊗ S b V generated by tensors of the form ξ 1 · · · ξ i f , where the ξ j are tensors of the form x h ⊗ x l − x l ⊗ x h and f ∈ p k−i (S k+b−2i V ) ⊂ S k−i V ⊗ S b−i V . Then one observes the fundamental fact that δ(x h ⊗ x l − x l ⊗ x h ) = 0. Since δ is a derivation on the commutative ring S • V ⊗ S • V , one deduces that δ commutes with multiplication by x h ⊗ x l − x l ⊗ x h and hence δ a (ξ 1 · · · ξ i f ) = ξ 1 · · · ξ i δ a (f ).
Then one concludes by the observation that f = p k−i (g) and one can easily check that δ a (f ) = δ a (p k−i (g)) = p k−i−a (g), up to some non-zero rational factor. Hence the map δ a is non-zero when restricted to S (k+b−i,i) V .
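A small rank computation illustrates Lemma 3.4. The following sketch (ours, for n = 2, k = 2, b = 1, a = 1; the helper names are invented for the example) confirms that δ has maximal rank in this case.

```python
import sympy as sp
from itertools import combinations_with_replacement as cwr

a = sp.symbols('a0:3')      # first tensor factor of S^k V (x) S^b V, V = C^3
b = sp.symbols('b0:3')      # second tensor factor

def delta(t):
    """delta = sum_i d_{a_i} (x) b_i: contraction on the first factor,
    multiplication on the second."""
    return sp.expand(sum(sp.diff(t, a[i]) * b[i] for i in range(3)))

def monomials(vs, deg):
    return [sp.prod(c) for c in cwr(vs, deg)]

k, bb = 2, 1                # test the map S^2 V (x) S^1 V -> S^1 V (x) S^2 V
basis = [m1 * m2 for m1 in monomials(a, k) for m2 in monomials(b, bb)]
images = [delta(t) for t in basis]
gens = list(a) + list(b)
ds = [sp.Poly(p, *gens).as_dict() for p in images]
monos = sorted({m for dd in ds for m in dd})
rk = sp.Matrix([[dd.get(m, 0) for m in monos] for dd in ds]).rank()
cod = len(monomials(a, k - 1)) * len(monomials(b, bb + 1))
print(rk, min(len(basis), cod))   # 18 18: delta has maximal rank here
```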
3.7 The invariant embeddings ψ k : U ⊗ S d+k−1 U → S k U ⊗ S d U
We define the invariant embeddings ψ k as the compositions
U \otimes S^{d+k-1}U \xrightarrow{1 \otimes p_k} U \otimes S^kU \otimes S^{d-1}U \xrightarrow{m} S^kU \otimes S^dU ,
where m is the multiplication of the first and the third tensor components of U ⊗ S k U ⊗ S d−1 U . The maps ψ k are obviously SL(U )-invariant. We will show that the maps ψ k are invariant embeddings for any k ≥ 1.
Proposition 3.5. For any d ≥ 2 and k ≥ 1, the map ψ k is injective and
ψ k (U ⊗ S d+k−1 U ) = ker(D 2 : S k U ⊗ S d U → S k−2 U ⊗ S d−2 U ) ,
where the map above is set to be the zero map in the case k = 1.
Proof. We use the decomposition U ⊗ S d+k−1 U = p 1 (S d+k U ) ⊕ ξS d+k−2 U , which is a particular case of (3.5). Since the two summands are irreducible representations of SL(U ) and the map ψ k is SL(U )-invariant, to show the injectivity of ψ k it will be sufficient to show that ψ k is non-zero on the summands p 1 (S d+k U ) and ξS d+k−2 U . We will achieve that by computing ψ k on some special elements of these summands.
For l ⊗ l d+k−1 ∈ p 1 (S d+k U ) we see that ψ k (l ⊗ l d+k−1 ) = m((1 ⊗ p k )(l ⊗ l d+k−1 )) = m(l ⊗ l k ⊗ l d−1 ) = l k ⊗ l d ∈ p k (S d+k U ) ⊂ S k U ⊗ S d U .
Now, let us consider the element ξx d+k−2 = x ⊗ x d+k−2 y − y ⊗ x d+k−1 ∈ ξS d+k−2 U . We compute separately ψ k (x ⊗ x d+k−2 y) and ψ k (y ⊗ x d+k−1 ). One finds easily
ψ k (y ⊗ x d+k−1 ) = x k ⊗ x d−1 y .
From formula (3.3) one has
p_k(x^{d+k-2}y) = \frac{(d-1)!}{(d+k-1)!}\left( x^k \otimes \partial_x^k(x^{d+k-2}y) + k\, x^{k-1}y \otimes \partial_x^{k-1}\partial_y(x^{d+k-2}y) \right) = \frac{1}{d+k-1}\left( (d-1)\, x^k \otimes x^{d-2}y + k\, x^{k-1}y \otimes x^{d-1} \right) .
Hence one obtains
ψ_k(x \otimes x^{d+k-2}y) = \frac{1}{d+k-1}\left( (d-1)\, x^k \otimes x^{d-1}y + k\, x^{k-1}y \otimes x^d \right) .
Then we find
ψ_k(\xi x^{d+k-2}) = ψ_k(x \otimes x^{d+k-2}y) - ψ_k(y \otimes x^{d+k-1}) = \frac{1}{d+k-1}\left( (d-1)\, x^k \otimes x^{d-1}y + k\, x^{k-1}y \otimes x^d \right) - x^k \otimes x^{d-1}y = \frac{k}{d+k-1}\left( x^{k-1}y \otimes x^d - x^k \otimes x^{d-1}y \right) = -\frac{k}{d+k-1}\, \xi\,(x^{k-1} \otimes x^{d-1}) \in \xi\, p_{k-1}(S^{d+k-2}U) .
The calculations made above show that ψ k restricts to a non-zero SL(U )-invariant map on p 1 (S d+k U ) and ξS d+k−2 U . In particular, by the SL(U )-irreducibility of these spaces, one gets
ψ k (p 1 (S d+k U )) = p k (S d+k U ) , ψ k (ξS d+k−2 U ) = ξ p k−1 (S d+k−2 U ) ,
proving the global injectivity of ψ k . Moreover, applying Corollary 3.2, one has
ψ k (U ⊗ S d+k−1 U ) = p k (S d+k U ) ⊕ ξ p k−1 (S d+k−2 U ) = ker D 2 .
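Proposition 3.5 can also be checked by machine for small parameters. The sketch below (ours, with d = 4 and k = 2) verifies that D 2 ∘ ψ k = 0 and that ψ k has rank 2(d + k) on a basis of U ⊗ S d+k−1 U , which equals dim ker D 2 .

```python
import sympy as sp

x, y, x1, y1, x2, y2 = sp.symbols('x y x1 y1 x2 y2')

def dxy(f, i, j):
    for _ in range(i):
        f = sp.diff(f, x)
    for _ in range(j):
        f = sp.diff(f, y)
    return f

def p_k(f, k):
    """Polarization (3.3), output of bidegree (k, deg f - k) in (x1,y1),(x2,y2)."""
    if f == 0:
        return sp.Integer(0)
    D = sp.Poly(f, x, y).total_degree()
    s = sum(sp.binomial(k, i) * x1**(k - i) * y1**i *
            dxy(f, k - i, i).subs({x: x2, y: y2}, simultaneous=True)
            for i in range(k + 1))
    return sp.expand(s * sp.factorial(D - k) / sp.factorial(D))

def psi_k(f0, f1, k):
    """psi_k(x (x) f0 + y (x) f1) = x*p_k(f0) + y*p_k(f1), the U-factor being
    multiplied into the S^{d-1}U-factor, i.e. into the variables (x2, y2)."""
    return sp.expand(x2 * p_k(f0, k) + y2 * p_k(f1, k))

def D2(t):
    Dop = lambda u: sp.diff(u, x1, y2) - sp.diff(u, y1, x2)
    return sp.expand(Dop(Dop(t)))

d, k = 4, 2
print(D2(psi_k((x + y)**5, x**3*y**2 - y**5, k)))   # 0: Im(psi_k) lies in ker D^2

# Injectivity: the images of a basis of U (x) S^{d+k-1} U have rank 2(d+k) = 12.
pairs = [(x**(5 - j)*y**j, 0) for j in range(6)] + \
        [(0, x**(5 - j)*y**j) for j in range(6)]
ds = [sp.Poly(psi_k(f0, f1, k), x1, y1, x2, y2).as_dict() for f0, f1 in pairs]
monos = sorted({m for dd in ds for m in dd})
print(sp.Matrix([[dd.get(m, 0) for m in monos] for dd in ds]).rank())   # 12
```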
4. A new setup for computing the cohomology of N f
From now on we will assume that f : P 1 → P s parametrizes a rational curve with ordinary singularities and that f = π T • ν d , so the parametrized curve arises as projection of the rational normal curve C d from a vertex P(T ). Let us recall the operator
D 2 : S k U ⊗ S d U → S k−2 U ⊗ S d−2 U
discussed in Section 3. We state the main theorem of this article, whose proof will be given at the end of this section.
Theorem 4.1. For any k ≥ 1 one has
h 0 T f (−d − 2 − k) = dim(ker D ∩ (S k U ⊗ T )) , h 0 N f (−d − 2 − k) = dim(ker D 2 ∩ (S k U ⊗ T )) .
4.1 Euler sequence and its consequences
Let C ⊂ P s be a degree d rational curve with ordinary singularities. As in the notation above we assume that there is a parametrization map f : P 1 → P s obtained by projecting the rational normal curve C d from a vertex P(T ) ⊂ P(S d U ). Since f = π T • ν d , we have P s = P(S d U/T ). Note also that the natural inclusion (S d U/T ) * ⊂ S d U * identifies (S d U/T ) * and T ⊥ . Hence we can set dim T = e + 1 , dim T ⊥ = s + 1 = d − e .
We have a commutative diagram
\begin{array}{ccccccccc}
 & & \mathcal{O}_{\mathbb{P}^1} & = & \mathcal{O}_{\mathbb{P}^1} & & & & \\
 & & \downarrow & & \downarrow & & & & \\
0 & \to & U \otimes \mathcal{O}_{\mathbb{P}^1}(1) & \xrightarrow{J(f)} & (T^\perp)^* \otimes \mathcal{O}_{\mathbb{P}^1}(d) & \to & N_f & \to & 0 \\
 & & \downarrow & & \downarrow & & \| & & \\
0 & \to & T_{\mathbb{P}^1} & \xrightarrow{df} & T_f & \to & N_f & \to & 0 .
\end{array}
Indeed, if the map f : P 1 → P((T ⊥ ) * ) = P s is given in coordinates by
f (u : v) = (g 0 (u, v) : · · · : g s (u, v)) , with g i (u, v) ∈ S d U * , then the map J(f ) : U ⊗ O P 1 (1) → (T ⊥ ) * ⊗ O P 1 (d) in the diagram above is given fiberwise by the differentials df | (u,v) : T (u,v) (C 2 ) → T f (u,v) (C s+1 ) of the map f : C 2 → C s+1 between the associated affine cones. Hence it has associated matrix
J(f) = \begin{pmatrix} \partial_u g_0(u, v) & \partial_v g_0(u, v) \\ \vdots & \vdots \\ \partial_u g_s(u, v) & \partial_v g_s(u, v) \end{pmatrix} .
Let us consider the exact sequence
0 → U ⊗ O P 1 (1) → (T ⊥ ) * ⊗ O P 1 (d) → N f → 0 . (4.1)
From this sequence we get deg N f (−d − 1) = −(d − e) + 2d = d + e .
Writing, as in the introduction,
N f = O P 1 (c 1 + d + 2) ⊕ · · · ⊕ O P 1 (c s−1 + d + 2) (4.2)
with c 1 ≥ · · · ≥ c s−1 , we see that
\sum_{i=1}^{s-1} (c_i + 1) = d + e , \qquad \sum_{i=1}^{s-1} c_i = 2(e + 1) . (4.3)
Taking the cohomology exact sequence from (4.1) we obtain, for any k ≥ d + 1,
0 → H 0 N f (−k) → U ⊗ H 1 O P 1 (1 − k) → (T ⊥ ) * ⊗ H 1 O P 1 (d − k) ↠ H 1 N f (−k) . (4.4)
If k = d + 1, one obtains H 0 N f (−d − 1) ∼ = U ⊗ H 1 O P 1 (−d).
Let us now consider the cases k ≥ d + 2. We have T ⊥ = ⟨g 0 , . . . , g s ⟩, and we denote by g * 0 , . . . , g * s the dual basis of the g i in (T ⊥ ) * = S d U/T . Recall that if we write U * = ⟨u, v⟩, with u, v the dual basis of x, y ∈ U , then the first non-zero map in (4.1) is defined by x ⊗ l → Σ_i g * i ⊗ l ∂ u g i and y ⊗ l′ → Σ_i g * i ⊗ l′ ∂ v g i , for any local sections l, l′ of O P 1 (1).
As is well known, by Serre duality one can identify the spaces H 1 O P 1 (1−k) and
H 1 O P 1 (d − k) appearing in the exact sequence (4.4) with (H 0 O P 1 (k − 3)) * = S k−3 U and (H 0 O P 1 (k − d − 2)) * = S k−d−2 U , respectively. Moreover, it is well known that any sheaf map O P 1 (1 − k) → O P 1 (d − k) associated with a global section σ ∈ H 0 O P 1 (d − 1) = S d−1 U * induces a map H 1 O P 1 (1 − k) → H 1 O P 1 (d − k) between the cohomology spaces that, under the identifications above, can be written as the linear map S k−3 U → S k−d−2 U defined by letting σ act as a differential operator on S k−3 U . In our case the sheaf map U ⊗ O P 1 (1 − k) → (T ⊥ ) * ⊗ O P 1 (d − k) arising from (4.1), after the identifications U ≅ C 2 and T ⊥ ≅ C s+1 by means of the mentioned bases x, y and g 0 , . . . , g s , can be seen as a sheaf map O P 1 2 (1 − k) → O P 1 s+1 (d − k) whose components have the form ∂ u g i : O P 1 (1 − k) → O P 1 (d − k) and ∂ v g i : O P 1 (1 − k) → O P 1 (d − k). The induced maps on the H 1 cohomology spaces are therefore ∂ u g i : S k−3 U → S k−2−d U and ∂ v g i : S k−3 U → S k−2−d U , acting as differential operators of order d − 1.
From the discussion above it follows that one can compute H 0 N f (−k) as the kernel of the linear map
U ⊗ S k−3 U → (T ⊥ ) * ⊗ S k−d−2 U (4.5)
defined by x ⊗ f → Σ_i g * i ⊗ (∂ u g i )(f ) and y ⊗ f → Σ_i g * i ⊗ (∂ v g i )(f ), where ∂ u g i , ∂ v g i : S k−3 U → S k−2−d U act as differential operators of order d − 1. Let us compute the kernel H 0 N f (−k) of the linear map (4.5). The space H 0 N f (−k), seen as a subspace of U ⊗ S k−3 U , is the space of tensors x ⊗ f 0 + y ⊗ f 1 ∈ U ⊗ S k−3 U such that (∂ u g i )(f 0 ) + (∂ v g i )(f 1 ) = 0 ∈ S k−d−2 U for all i = 0, . . . , s. This is equivalent to imposing that f 0 (P ∂ u g) + f 1 (P ∂ v g) = 0 for any g ∈ T ⊥ and any P ∈ S k−d−2 U * . This is equivalent to saying that
P (f 0 )(∂ u g) + P (f 1 )(∂ v g) = 0 (4.6)
for any P ∈ S k−d−2 U * and any g ∈ T ⊥ . By applying the version of formula (3.2) with the roles of U and U * interchanged and recalling that the elements x, y ∈ U act as ∂ u , ∂ v on C[u, v], respectively, one sees that for any φ ∈ S d−1 U and any g ∈ S d U * one has φ(∂ u g) = (xφ)(g) and similarly φ(∂ v g) = (yφ)(g). Hence we can rewrite (4.6) in the following form:
(xP (f 0 ) + yP (f 1 ))(g) = 0 , ∀ g ∈ T ⊥ , ∀ P ∈ S k−d−2 U * ,
which means
xP (f 0 ) + yP (f 1 ) ∈ T , ∀ P ∈ S k−d−2 U * . (4.7)
Notation. The calculations made above hold for any k ≥ d + 2. We find it convenient, from now on, to redefine k to be what was first k − d − 2. Accordingly, we set, for any k ≥ 0,
T k = { x ⊗ f 0 + y ⊗ f 1 ∈ U ⊗ S d+k−1 U | xP (f 0 ) + yP (f 1 ) ∈ T , ∀ P ∈ S k U * } .
Hence we can summarize the discussion above in the following result.
Proposition 4.2. Under the notation above, we have the following relation for any k ≥ 0:
H 0 N f (−d − 2 − k) = T k .
(4.8)
The following proposition collects some facts that will be needed later, as well as some first applications of the result above.
Proposition 4.3. Assume that N f has a splitting of the form (4.2). Then the following hold:
(i) One has h 0 N f (−d − k − 2) = \sum_{i : c_i ≥ k} (c_i − k + 1) for any k ∈ Z.
(ii) Setting f (−k) = h 0 N f (−d − k − 2) for any k ∈ Z, one has #{i | c i = k} = ∆ 2 f (−k) = f (−k) − 2f (−k − 1) + f (−k − 2) .
(iii) \sum_{i=1}^{s-1} (c_i + 1) = d + e = d + dim P(T ).
(iv) \sum_{i=1}^{s-1} c_i = 2(e + 1) = 2 dim T .
(v) c s−1 ≥ 0.
Proof. Items (i) and (ii) are easy and well known. The relations (iii) and (iv) coincide with formulas (4.3) and therefore have already been proven.
From Proposition 4.2 we have the identification
H 0 N f (−d − 2) = { x ⊗ f 1 + y ⊗ f 2 ∈ U ⊗ S d−1 U | xf 1 + yf 2 ∈ T } ,
and therefore we see that
H 0 N f (−d − 2) ∼ = m −1 (T ) ⊂ U ⊗ S d−1 U , (4.9)
where m is the multiplication map m : U ⊗ S d−1 U → S d U . Now, the kernel of m is given by the tensors of the form x ⊗ yh − y ⊗ xh, with arbitrary h ∈ S d−2 U . Then one has
h 0 N f (−d − 2) = dim m −1 (T ) = d − 1 + dim T = d + e .
(4.10)
On the other hand, by (4.3) we know
d + e = h 0 N f (−d − 2) = \sum_{i : c_i ≥ 0} (c_i + 1) ≥ \sum_{i=1}^{s-1} (c_i + 1) = d + e .
This implies c 1 ≥ · · · ≥ c s−1 ≥ −1. We will also need to know the value of h 0 N f (−d − 1). This is obtained from the exact sequence (4.1), from which it easily follows that H 0 N f (−d − 1) ∼ = U ⊗ H 1 O P 1 (−d) and hence h 0 N f (−d − 1) = 2(d − 1). Now, applying fact (ii) for k = −1 and using relations (iii) and (iv) and the above calculation of f (1) = h 0 N f (−d − 1), we see that #{i | c i = −1} = 2(d − 1) − 2(d + e) + 2(e + 1) = 0, which completes the proof of relation (v).
4.2 Completion of the proof of Theorem 4.1
Proof of Theorem 4.1. We start with the part of the statement about T f . At the beginning of [AR15, Section 6.2, p. 1334] we showed the equality
h 0 T f (−d − 2 − k) = dim ∂ −k T .
Moreover, from Corollary 3.2 we know p k (S d+k U ) = ker D ⊂ S k U ⊗ S d U . Then one finds
ker D ∩ (S k U ⊗ T ) = p k (S k+d U ) ∩ (S k U ⊗ T ) = p k ({ f ∈ S d+k U | ∂ x k−i ∂ y i (f ) ∈ T , ∀ i = 0, . . . , k }) ∼ = ∂ −k T .
Hence we find the equality
h 0 T f (−d − 2 − k) = dim(ker D ∩ (S k U ⊗ T )).
Now, we prove the statement about N f . By Proposition 4.2 we know
H 0 N f (−d − 2 − k) = T k with T k ⊆ U ⊗ S d+k−1 U the subspace consisting of those elements x ⊗ f 0 + y ⊗ f 1 such that xP (f 0 ) + yP (f 1 ) ∈ T for any P ∈ S k U * . This is equivalent to the condition x∂ k−i x ∂ i y (f 0 ) + y∂ k−i x ∂ i y (f 1 ) ∈ T , ∀ i = 0, . . . , k .
Recall that by formula (3.3) one has
ψ_k(x \otimes f_0 + y \otimes f_1) = m(x \otimes p_k(f_0) + y \otimes p_k(f_1)) = \mathrm{const} \cdot \sum_{i=0}^{k} \binom{k}{i} x^{k-i} y^{i} \otimes \left( x\, \partial_x^{k-i}\partial_y^{i}(f_0) + y\, \partial_x^{k-i}\partial_y^{i}(f_1) \right) .
Therefore, by the definition of T k , we have
x ⊗ f 0 + y ⊗ f 1 ∈ T k ⇐⇒ x∂ k−i x ∂ i y (f 0 ) + y∂ k−i x ∂ i y (f 1 ) ∈ T ∀ i = 0, . . . , k , ⇐⇒ ψ k (x ⊗ f 0 + y ⊗ f 1 ) ∈ S k U ⊗ T .
On the other hand, by Proposition 3.5, one has ψ k (x ⊗ f 0 + y ⊗ f 1 ) ∈ Im(ψ k ) = ker D 2 and ψ k is injective for k ≥ 1. Hence for any k ≥ 1 one has
H 0 N f (−d − 2 − k) ∼ = T k ∼ = ker D 2 ∩ (S k U ⊗ T ) , the second isomorphism being induced by ψ k .
5. Some general consequences of Theorem 4.1
5.1 The dimension h 0 N f (−d − 2 − k) for k = 0, 1, 2
Proposition 5.1. The spaces H 0 N f (−d − 2 − k) have the following dimensions for k = 0, 1, 2:
k = 0 : h 0 N f (−d − 2) = d − 1 + dim T ,
k = 1 : h 0 N f (−d − 3) = 2 dim T ,
k = 2 : h 0 N f (−d − 4) = 3 dim T − dim ∂ 2 T .
Proof. The case k = 0 is the formula (4.10) and has already been discussed.
The case k = 1 is a consequence of the degree of N f and was already established by the formulas (4.3), but it also follows from the fact that D 2 = 0 on the space U ⊗ S d U and therefore, by Theorem 4.1, one has
H 0 N f (−d − 3) ∼ = U ⊗ T .
Finally, for k = 2, by Theorem 4.1 we have to compute dim((S 2 U ⊗ T ) ∩ ker D 2 ) = dim ker(D 2 | S 2 U ⊗T ). Note that dim(S 2 U ⊗ T ) = 3 dim T , and hence the claim on h 0 N f (−d − 4) follows if we show that D 2 (S 2 U ⊗ T ) = ∂ 2 T . We know D 2 ((ax 2 + bxy + cy 2 ) ⊗ τ ) = 2a τ yy − 2b τ xy + 2c τ xx .
By choosing τ ∈ T and a, b, c appropriately, one sees that τ xx , τ xy , τ yy ∈ D 2 (S 2 U ⊗ T ) and since these elements generate ∂ 2 T , one obtains ∂ 2 T ⊆ D 2 (S 2 U ⊗ T ). The converse inclusion is obvious.
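The case k = 2 of Proposition 5.1 is easy to test on examples. The following sketch (ours; the 2-dimensional space T below is an ad hoc choice inside S 11 U ) confirms that the rank of D 2 on S 2 U ⊗ T equals dim ∂ 2 T .

```python
import sympy as sp

x1, y1, x2, y2 = sp.symbols('x1 y1 x2 y2')

def D2(t):
    D = lambda u: sp.diff(u, x1, y2) - sp.diff(u, y1, x2)
    return sp.expand(D(D(t)))

def rank_of(polys):
    ds = [sp.Poly(p, x1, y1, x2, y2).as_dict() for p in polys]
    monos = sorted({m for dd in ds for m in dd})
    return sp.Matrix([[dd.get(m, 0) for m in monos] for dd in ds]).rank()

# T = <x^6 y^5, x^5 y^6> in S^11 U: dim T = 2 and dim d^2 T = 4,
# so the formula predicts h^0 N_f(-d-4) = 3*dim T - dim d^2 T = 2.
T = [x2**6*y2**5, x2**5*y2**6]
dom = [x1**(2 - i)*y1**i * t for i in range(3) for t in T]  # basis of S^2 U (x) T
rk = rank_of([D2(t) for t in dom])
print(rk, len(dom) - rk)   # 4 2: rank = dim d^2 T, kernel = 3*dim T - dim d^2 T
```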
Corollary 5.2. The number of summands equal to O P 1 (d + 2) in the splitting type (4.2) of N f is equal to d − 1 − dim ∂ 2 T .
Proof. This follows immediately from Proposition 4.3(ii) applied to k = 0 and the dimensions computed in Proposition 5.1.
5.2 Some general results on h 0 N f (−d − 2 − k) with k ≥ 3
The computation of kernels and images of the maps
D 2 : S k U ⊗ T → S k−2 U ⊗ S d−2 U
for k ≥ 3 may not be easy for an arbitrary T . Sometimes one can reduce this computation to the case of subspaces of smaller dimension. This is possible by means of the following easy lemma.
Lemma 5.3. Assume that for a given decomposition T = T 1 ⊕ T 2 one also has ∂ 2 T = ∂ 2 T 1 ⊕ ∂ 2 T 2 . Then for any k ≥ 2 the map D 2 : S k U ⊗ T → S k−2 U ⊗ S d−2 U is the direct sum of its restrictions to S k U ⊗ T i for i = 1, 2. In particular, its rank is the sum of the ranks of the two restrictions.
Proof. This is immediate, since the image of res(D 2 ) :
S k U ⊗ T i → S k−2 U ⊗ S d−2 U is contained in S k−2 U ⊗ ∂ 2 T i for i = 1, 2.
From Lemma 5.3 one deduces the following result.

Proposition 5.4. Assume T = ∂ b 1 (f 1 ) ⊕ · · · ⊕ ∂ b r (f r ) is of type (b 1 , . . . , b r ), and that ∂T has type (b 1 + 1, . . . , b r + 1). Let us denote by D 2 i : S k U ⊗ ∂ b i (f i ) → S k−2 U ⊗ ∂ b i +2 (f i ) the restriction of D 2 for any i = 1, . . . , r. Then the maps D 2 i have maximal rank for any i = 1, . . . , r, and the rank of D 2 : S k U ⊗ T → S k−2 U ⊗ S d−2 U is the sum of their ranks.
Proof. In view of Lemma 5.3 we only need to show that D 2 i has maximal rank for any i = 1, . . . , r. Note that by Proposition 2.3 the assumption that the type of ∂T is (b 1 +1, . . . , b r +1) in particular
implies dim ∂ b i +2 (f i ) = b i +3 for all i, hence one has an isomorphism S b i +2 U * → ∂ b i +2 (f i ) defined by Ω → Ω(f i ) for any Ω ∈ S b i +2 U * .
Recall also that since T has type (b 1 , . . . , b r ), one knows dim ∂ b i (f i ) = b i + 1, and hence one has an isomorphism S b i U * → ∂ b i (f i ) defined in the same way as above. Under these isomorphisms, the maps D 2 i are identified with the map (3.7) with b = b i and hence, by Proposition 3.3, they have maximal rank.
As an application of the result above, we compute the normal bundles of rational curves obtained from vertexes T of the most special type, that is, T = ∂ e (g) with [g] ∈ P(S d+e U ) \ Sec e C d+e .
Proposition 5.5. If the curve C ⊂ P s is obtained from a vertex T of numerical type (e), that is, T = ∂ e (g) with [g] ∈ P(S d+e U ) \ Sec e C d+e , then
N f = O 2 P 1 (d + e + 3) ⊕ O d−e−4 P 1 (d + 2) .
Proof. One can apply Proposition 5.4 and find
h 0 N f (−d − 2 − k) = max(0, (k + 1)(e + 1) − (k − 1)(e + 3)) = max(0, 2e + 4 − 2k) .

Setting f (−k) = h 0 N f (−d − 2 − k) for k ≥ 0, as in Proposition 4.3, we see that the sequence f (−k) is d + e, 2e + 2, 2e, . . . , 2, 0, . . . . Its second difference is d − e − 4, 0, . . . , 0, 2, 0, . . . , where the last 2 appears at the place k = e + 1. Hence, by Proposition 4.3, one has (c 1 , . . . , c s−1 ) = (e + 1, e + 1, 0, . . . , 0), with s − 1 = d − e − 2. By formula (4.2), we obtain the stated splitting type of N f .

6. Example of a reducible Hilbert scheme of rational curves with fixed normal bundle: H c with c = (2, 2, 1, 1, 0, 0, 0)

This section is dedicated to the construction of the first known example, to our knowledge, of a reducible Hilbert scheme of rational curves with a given splitting type of the normal bundle.
As in the introduction, we will denote by H c the Hilbert scheme of degree d irreducible, non-degenerate rational curves in P s , with ordinary singularities and with normal bundle with splitting type ⊕ O P 1 (c i + d + 2). We will consider the case c = (2, 2, 1, 1, 0, 0, 0); therefore we have s − 1 = 7. Moreover, from Σ (c i + 1) = 13 = d + e and Σ c i = 6 = 2(e + 1) we get e = 2
and d = 11; that is, we are dealing with rational curves of degree 11 in P 8 . More precisely, we are dealing with parametrized curves of degree 11 in P 8 with splitting type of the normal bundle given by
N f = O 2 P 1 (15) ⊕ O 2 P 1 (14) ⊕ O 3 P 1 (13) .
These curves are obtained, up to a projective transformation of P 8 , as projections of the rational normal curve C 11 = ν 11 (P 1 ) ⊆ P(S 11 U ) from a 2-dimensional vertex P(T ), so that dim T = e + 1 = 3 .
We recall that the knowledge of the (s − 1)-uple (c 1 , . . . , c s−1 ) is equivalent to the knowledge of the dimensions of the spaces H 0 N f (−d − 2 − k) = T k . In our case these dimensions are the following:
dim T_0 = \sum_{i : c_i ≥ 0} (c_i + 1) = 13 , dim T_1 = \sum_{i : c_i ≥ 1} c_i = 6 , dim T_2 = \sum_{i : c_i ≥ 2} (c_i − 1) = 2 , dim T_3 = \sum_{i : c_i ≥ 3} (c_i − 2) = 0 , dim T_k = 0 , ∀ k ≥ 3 .
We also recall that T k ∼ = ker(D 2 : S k U ⊗ T → S k−2 U ⊗ ∂ 2 T ) for all k ≥ 1. Since the vertex P(T ) must not intersect C 11 , we have only three possibilities for the numerical type of T , namely the type (2), the type (1, 0) and the type (0, 0, 0). We can immediately rule out the type (2) by the following argument. By Proposition 5.1 one has dim ∂ 2 T = dim(S 2 U ⊗ T ) − dim T 2 = 9 − 2 = 7 .
(6.1)
If T is of type (2), then T = ∂ 2 (f ) for some polynomial f ∈ S 13 U and hence ∂ 2 T = ∂ 4 (f ), which has dimension at most 5. Therefore we are left with the possibilities that T has type (1, 0) or (0, 0, 0).
6.1 Curves from spaces T of type (1, 0)
We will show that from a general vertex T of type (1, 0) we always obtain a curve with splitting of the normal bundle corresponding to c = (2, 2, 1, 1, 0, 0, 0). Recall that such a vertex has the form
T = ∂(f ) ⊕ ⟨g⟩ ,
with sufficiently general [f ] ∈ P(S 12 U ) and [g] ∈ P(S 11 U ), where the latter is determined by T up to an element of ∂(f ). Hence the dimension of the space of such T is given by dim P(S 12 U ) + dim P(S 11 U/∂(f )) = 12 + 9 = 21. The same conclusion can be reached by means of the dimension formula provided by [AR15, Theorem 2].

Now, we know that a general T ⊂ S 11 U of type (1, 0) has ∂T of type (2, 1). This may be shown starting from a particular T , for example T = ⟨x 3 y 8 , x 4 y 7 , x 7 y 4 ⟩ = ∂(x 4 y 8 ) ⊕ ⟨x 7 y 4 ⟩, from which we get the direct sum decompositions ∂T = ∂ 2 (x 4 y 8 ) ⊕ ∂(x 7 y 4 ) and ∂ 2 T = ∂ 3 (x 4 y 8 ) ⊕ ∂ 2 (x 7 y 4 ). Then one can extend the result to a general T of type (1, 0) by lower semicontinuity of dim ∂ 2 T . Hence for a general T of type (1, 0) we find dim ∂ 2 T = dim ∂T + 2 = 7, as required by (6.1). In particular, one obtains ∂ 2 T = ∂ 3 (f ) ⊕ ∂ 2 (g) and for any k ≥ 2 the map D 2 : S k U ⊗ T → S k−2 U ⊗ ∂ 2 T can be written as the direct sum of the maps
D 2 : S k U ⊗ ∂(f ) → S k−2 U ⊗ ∂ 3 (f ) , D 2 : S k U ⊗ (g) → S k−2 U ⊗ ∂ 2 (g) .
By construction one has dim ∂(f ) = 2, dim ∂ 3 (f ) = 4, dim(g) = 1 and dim ∂ 2 (g) = 3, hence one has the identifications S i U * ∼ = ∂ i (f ) for i = 1, 3 and S j U * ∼ = ∂ j (g) for j = 0, 2. By means of these identifications the maps above become
D 2 : S k U ⊗ U * → S k−2 U ⊗ S 3 U * , D 2 : S k U ⊗ S 0 U * → S k−2 U ⊗ S 2 U * ,
where D 2 now operates as in Proposition 3.3. Hence the maps have maximal rank. For k = 3 the map D 2 : S 3 U ⊗ ∂(f ) → U ⊗ ∂ 3 (f ) has domain of dimension 8 and codomain of dimension 8, hence is an isomorphism. The map D 2 : S 3 U ⊗ ⟨g⟩ → U ⊗ ∂ 2 (g) has domain of dimension 4 and codomain of dimension 6; hence it is injective. In conclusion, we obtain T 3 = 0, and hence also T k = 0 for all k ≥ 3. So we get the dimensions of the spaces T k that correspond to c = (2, 2, 1, 1, 0, 0, 0). By Proposition 2.1 we have obtained an irreducible subscheme of H c of dimension 21 + dim PGL(9) − dim PGL(2) = 98. We observe that the general curve in the subscheme of H c just defined is a smooth rational curve. Indeed, this is equivalent to showing that a general P(T ) with T of type (1, 0) as above does not intersect Sec 1 C 11 . Let us fix [g] ∈ P 11 \ Sec 1 C 11 ; then the dimension of the cone over Sec 1 C 11 with vertex [g], defined as the join J = J([g], Sec 1 C 11 ), is dim J = 4. Let us define
J ′ = { [f ] ∈ P(S 12 U ) | ∃ ω ∈ U * : [ω(f )] ∈ J } .
Then one finds dim J ′ ≤ 6; indeed, J ′ = ∪_{q ∈ J, ω ∈ P(U * )} P(ω −1 (q)). Therefore there exists an [f ] ∈ P 12 \ J ′ . Then one can conclude that for a general T = ∂(f ) ⊕ ⟨g⟩ one has P(T ) ∩ Sec 1 C d = ∅ .
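The dimension claims made above for the special vertex T = ⟨x 3 y 8 , x 4 y 7 , x 7 y 4 ⟩ can be verified with a short computation. The sketch below (ours; T_k_dim is an invented helper computing dim T k as the kernel dimension of D 2 ) reproduces dim T 2 = 2 and dim T 3 = 0.

```python
import sympy as sp

x1, y1, x2, y2 = sp.symbols('x1 y1 x2 y2')

def D2(t):
    D = lambda u: sp.diff(u, x1, y2) - sp.diff(u, y1, x2)
    return sp.expand(D(D(t)))

def T_k_dim(T, k):
    """dim T_k = dim ker(D^2 : S^k U (x) T -> S^{k-2} U (x) S^{d-2} U)."""
    dom = [x1**(k - i)*y1**i * t for i in range(k + 1) for t in T]
    ds = [sp.Poly(D2(p), x1, y1, x2, y2).as_dict() for p in dom]
    monos = sorted({m for dd in ds for m in dd})
    rk = sp.Matrix([[dd.get(m, 0) for m in monos] for dd in ds]).rank()
    return len(dom) - rk

T = [x2**3*y2**8, x2**4*y2**7, x2**7*y2**4]   # = d(x^4 y^8) + <x^7 y^4>, type (1,0)
print(T_k_dim(T, 2), T_k_dim(T, 3))           # 2 0
# Together with dim T_1 = 2*dim T = 6, these are exactly the dimensions
# corresponding to c = (2, 2, 1, 1, 0, 0, 0).
```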
6.2 Curves from spaces T of type (0, 0, 0)
Unlike the previous case of T of type (1, 0), it will not be true that a general T ⊆ S 11 U of type (0, 0, 0) can produce a rational curve in H c . Instead, we will show that the space of all T of type (0, 0, 0) whose general element produces curves in H c is a proper irreducible subvariety of the space of all T of type (0, 0, 0). Now, we have dim ∂T = dim T + 3 = 6. Recall that to obtain a curve in H c one must have dim ∂ 2 T = 7. Hence, under the notation of Proposition 2.3, the space ∂T has type (a, b 1 ) with dim ∂T = a + 1 + b 1 + 1 = 6, that is, (a, b 1 ) = (a, 4 − a).
Case a = −1. One has a = −1 if and only if P(∂T ) does not intersect C 10 ⊂ P(S 10 U ), so we see that ∂T has type (b 1 ) = (5), that is, ∂T = ∂ 5 (g) for some [g] ∈ P 15 \ Sec 5 C 15 , and hence ∂ 2 T = ∂ 6 (g) has dimension 7, as required.
We compute the dimension of the variety of the spaces T under consideration. We observe that for a fixed general [g] ∈ P(S 15 U ), any sufficiently general T ⊆ ∂ 4 (g) will have type (0, 0, 0) and ∂T = ∂ 5 (g). One can first show the claim for a special pair g, T , for example g = x 8 y 7 and T = ⟨x 4 y 7 , x 6 y 5 , x 8 y 3 ⟩. Then the result holds for general g, T by semicontinuity, more precisely by the upper semicontinuity of dim ∂ −1 T , which is equal to 0 if and only if T has type (0, 0, 0), by Proposition 2.5. Hence we can find spaces T meeting our requirements in a dense open subset of Gr(3, ∂ 4 (g)), whose dimension is dim Gr(3, ∂ 4 (g)) = 6. Moreover, since a general T ⊂ ∂ 4 (g) constructed as above has ∂T = ∂ 5 (g), the space ⟨g⟩ = ∂ −5 (∂T ) is uniquely determined by T . Hence the final count of parameters for spaces T as above is the following: dim P(S 15 U ) + dim Gr(3, 5) = 15 + 6 = 21 .
Case a ≥ 0. By Proposition 2.3, a general ∂T of type (a, 4 − a) has the form ∂T = ⟨p_0^{10} , . . . , p_a^{10}⟩ ⊕ ∂ 4−a (g) for a suitable [g] ∈ P(S 14−a U ) \ Sec 4−a C 14−a . Note that the C 10 -generated part of ∂T is uniquely determined by ∂T and hence by T ; that is, the points p 0 , . . . , p a are uniquely determined. On the other hand, g is determined only modulo W = ⟨p_0^{14−a} , . . . , p_a^{14−a}⟩. We have
T ⊆ ∂ −1 (∂T ) = ⟨p_0^{11} , . . . , p_a^{11}⟩ ⊕ ∂ 3−a (g) ,
which is again a space of dimension 5, uniquely determined by T . However, we now have [g] ∈ P(S 14−a U/W ), which gives us 13 − 2a parameters. Hence a dimension count similar to the one above provides us with a number of parameters equal to 13 − 2a + a + 1 + dim Gr(3, 5) = 20 − a. So in the case a ≥ 0 we find a variety of vertexes P(T ) of smaller dimension than in the case a = −1. Since we are looking for components of H c of maximal dimension, we will be satisfied if we get one such component from the case a = −1.
So we have reduced ourselves to showing that a general T of type (0, 0, 0) with ∂T of type (5) produces a curve in H c . Note that from the known data d = 11, dim T = 3 and dim ∂ 2 T = 7 we already have dim T 0 = d − 1 + dim T = 13, dim T 1 = 2 dim T = 6 and dim T 2 = 3 dim T − dim ∂ 2 T = 2. From the characterization of c = (2, 2, 1, 1, 0, 0, 0) in terms of the dimensions of the spaces T k , we will get a curve in H c from the vertex T if and only if dim T 3 = 0. By semicontinuity, if we show this for a special T of type (0, 0, 0) with ∂T of type (5), then the same will hold for such T in general. We take the same example as above.
g = x 8 y 7 , T = ⟨x 8 y 3 , x 6 y 5 , x 4 y 7 ⟩ .
Notation. To simplify calculations, we denote by [h] any fixed non-zero rational multiple of the polynomial h. Similarly, [h] + [g] will denote a fixed linear combination of h and g with non-zero rational coefficients.
We compute T 3 as the kernel of D 2 : S 3 U ⊗ T → U ⊗ ∂ 2 T . In particular, we will get T 3 = 0 if we show that the image of that map has dimension 12. Recalling that
D 2 = ∂ 2 x ⊗ ∂ 2 y − 2 ∂ x ∂ y ⊗ ∂ x ∂ y + ∂ 2 y ⊗ ∂ 2 x , we see the following:
D 2 (⟨x 3 , x 2 y, xy 2 , y 3 ⟩ ⊗ x 8 y 3 ) = ⟨[x ⊗ x 8 y] , [y ⊗ x 8 y] + [x ⊗ x 7 y 2 ] , [y ⊗ x 7 y 2 ] + [x ⊗ x 6 y 3 ] , [y ⊗ x 6 y 3 ]⟩ ,
D 2 (⟨x 3 , x 2 y, xy 2 , y 3 ⟩ ⊗ x 6 y 5 ) = ⟨[x ⊗ x 6 y 3 ] , [y ⊗ x 6 y 3 ] + [x ⊗ x 5 y 4 ] , [y ⊗ x 5 y 4 ] + [x ⊗ x 4 y 5 ] , [y ⊗ x 4 y 5 ]⟩ ,
D 2 (⟨x 3 , x 2 y, xy 2 , y 3 ⟩ ⊗ x 4 y 7 ) = ⟨[x ⊗ x 4 y 5 ] , [y ⊗ x 4 y 5 ] + [x ⊗ x 3 y 6 ] , [y ⊗ x 3 y 6 ] + [x ⊗ x 2 y 7 ] , [y ⊗ x 2 y 7 ]⟩ .
The space D 2 (S 3 U ⊗ T ) is generated by the 12 elements shown on the right-hand sides of the equalities above. After taking suitable linear combinations of them, they are reduced to the following set of generators:
[x ⊗ x 8 y] , [y ⊗ x 8 y] + [x ⊗ x 7 y 2 ] , [y ⊗ x 7 y 2 ] , [y ⊗ x 6 y 3 ] , [x ⊗ x 6 y 3 ] ,
[x ⊗ x 5 y 4 ] , [y ⊗ x 5 y 4 ] , [y ⊗ x 4 y 5 ] , [x ⊗ x 4 y 5 ] ,
[x ⊗ x 3 y 6 ] , [y ⊗ x 3 y 6 ] + [x ⊗ x 2 y 7 ] , [y ⊗ x 2 y 7 ] .
After this simplification, one can easily see that the 12 generators are linearly independent. This completes the proof that T 3 = 0. Finally, we observe that in the given example of T = ⟨x 8 y 3 , x 6 y 5 , x 4 y 7 ⟩ one has T ⊥ = ⟨u 11 , u 10 v, u 9 v 2 , u 7 v 4 , u 5 v 6 , u 3 v 8 , u 2 v 9 , uv 10 , v 11 ⟩, and since the elements of the given basis of T ⊥ serve also as the components of a parametrization map f = π T • ν d : P 1 → P s , one easily sees that the parametrized curve is smooth. Hence the general curve in the same component of H c is also smooth.
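The independence of the 12 generators can also be confirmed by a rank computation; the following sketch (ours) recovers T 3 = 0 for the vertex T = ⟨x 8 y 3 , x 6 y 5 , x 4 y 7 ⟩.

```python
import sympy as sp

x1, y1, x2, y2 = sp.symbols('x1 y1 x2 y2')
D = lambda u: sp.diff(u, x1, y2) - sp.diff(u, y1, x2)

T = [x2**8*y2**3, x2**6*y2**5, x2**4*y2**7]         # type (0,0,0), dT of type (5)
dom = [x1**(3 - i)*y1**i * t for i in range(4) for t in T]  # basis of S^3 U (x) T
imgs = [sp.expand(D(D(t))) for t in dom]
ds = [sp.Poly(p, x1, y1, x2, y2).as_dict() for p in imgs]
monos = sorted({m for dd in ds for m in dd})
rk = sp.Matrix([[dd.get(m, 0) for m in monos] for dd in ds]).rank()
print(rk, len(dom) - rk)   # 12 0: the twelve images are independent, so T_3 = 0
```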
Conclusion. We have found that for c = (2, 2, 1, 1, 0, 0, 0) the Hilbert scheme H c is the union of two irreducible components, each of dimension equal to 21 + dim PGL(9) − dim PGL(2) = 98, by Proposition 2.1. One component has general point representing a smooth rational curve constructed from a general vertex T of type (1, 0) with ∂T of type (2, 1). The other component has general point representing smooth rational curves constructed from a general vertex T of type (0, 0, 0) with ∂T of type (5). We also observe that, by Proposition 2.6, the restricted tangent bundles are the following (setting d = 11):
f * T P s = O P 1 (d + 3) ⊕ O P 1 (d + 2) ⊕ O 6 P 1 (d + 1) for T of type (1, 0) , f * T P s = O 3 P 1 (d + 2) ⊕ O 5 P 1 (d + 1) for T of type (0, 0, 0)
On the other hand, for any [C] ∈ H c one has
N f = O 2 P 1 (d + 4) ⊕ O 2 P 1 (d + 3) ⊕ O 3 P 1 (d + 2) .

Remark 6.1. One may note that the decomposition type given above has the form N f = F ⊕ O 3 P 1 (d + 2) with F = O 2 P 1 (d + 4) ⊕ O 2 P 1 (d + 3) of almost balanced type, and hence N f has the most general possible type among the vector bundles on P 1 of the same rank and degree and with summand O 3 P 1 (d + 2). Therefore the same counterexample discussed in this section also gives the following.
Example 6.2. The variety parametrizing the rational curves of degree d = 11 in P 8 with normal bundle N f with three summands of degree d + 2 = 13 is reducible. This is actually a counterexample to [Ber14, Theorem 4.8]. It seems that in the preparatory results leading to Theorem 4.8, especially Lemma 4.3, the author has overlooked his own more detailed treatment of the same results given in his Ph.D. thesis [Ber11], where more restrictive hypotheses are given. In [Ber11], Theorem 4.8 of [Ber14] is stated as Theorem 3.4.16, which in turn is deduced from Theorems 3.3.9 and 3.4.10. Our counterexample corresponds to the case n = 11, d = 8, k = 3, r = 2 and ρ_r^{n,k} = 3 in the author's notation, and it is not covered by Theorems 3.3.9 and 3.4.10 of [Ber11].
7. Smooth rational curves in rational normal scrolls
In this section we will characterize smooth rational curves contained in rational normal scroll surfaces in terms of the splitting type of their restricted tangent bundles T f , and we will also compute the splitting type of their normal bundles N f . Our main result can be viewed as a generalization of [EvdV81, Propositions 5 and 6], where the authors characterized smooth rational curves contained in a smooth quadric in P 3 by their restricted tangent bundles and computed their normal bundles. The general purpose of this section is to illustrate the idea that especially the splitting type of T f may have a deep impact on the extrinsic geometry of the curve C ⊂ P s .

Notation. Following the notation of [Har77, II, Section 7], we denote by P(E) the projective bundle associated with a vector bundle E on P 1 of rank t ≥ 1. Recall that an epimorphism of vector bundles C s+1 ⊗ O P 1 → E defines a regular map g : P(E) → P s such that, for H the pullback of a hyperplane of P s , one has deg H t−1 = deg E = deg ∧ t E. If the map g : P(E) → P s is birational to the image, then, setting S = Im(g), one finds deg S = deg E.
Let C ⊂ P s be a smooth non-degenerate rational curve of degree d, biregularly parametrized by a map f : P 1 → P s which, as discussed in preceding sections, we can assume of the form f = π T • ν d up to a projective transformation of P s . As usual we will set dim T = e + 1 and s = d − e − 1. Throughout this section we will assume s ≥ 3 and d ≥ s + 1, that is, T ≠ 0. We first study a sufficient condition for C to be smooth.
Lemma 7.1. Let T = ∂ e (g) be a vertex of type (e). Then the curve C = π T (C d ) is smooth if and only if [g] ∈ P(S d+e U ) \ Sec e+1 C d+e .
Proof. Our strategy of proof will be to show that when T has type (e), the curve C is smooth if and only if ∂T has type (e + 1). Indeed, by Proposition 2.3 one sees that ∂T = ∂ e+1 (g) being of type (e + 1) is equivalent to [g] ∉ Sec e+1 C d+e . Note that the point [g] ∈ P(S d+e U ) such that ∂T = ∂ e+1 (g) has type (e + 1) is unique, since one sees that ⟨g⟩ = ∂ −e−1 (∂T ) by iteratively applying Proposition 2.5.
The condition that C is smooth is given by P(T ) ∩ Sec 1 C d = ∅. Observe that T being of type (e) in particular implies P(T ) ∩ C d = ∅ and dim P(∂T ) = dim P(T ) + 1. Hence the space P(∂T ), which a priori is the join P(⟨ω(T ) | [ω] ∈ P(U * )⟩), in this case is also the union P(∂T ) = ∪ ω∈U * P(ω(T )). Then one has P(∂T ) ∩ C d−1 ≠ ∅ if and only if there exist ω ∈ U * and l ∈ U such that [l d−1 ] ∈ P(ω(T )). Setting m = ω ⊥ , this is equivalent to saying that in P(T ) there exists an element of the form [λl d + µm d ] if [m] ≠ [l] and an element of the form [l d−1 n] if [m] = [l]. This is equivalent to the condition P(T ) ∩ Sec 1 C d ≠ ∅, that is, to C not being smooth.
Therefore, we have shown that C is smooth if and only if P(∂T ) ∩ C d−1 = ∅, that is, S ∂T = 0, with the notation of Proposition 2.3. Moreover, for T = ∂ e (g), one has ∂T = ∂ e+1 (g) and ∂ 2 T = ∂ e+2 (g), hence dim ∂ 2 T − dim ∂T ≤ 1. Then, by Proposition 2.3 applied to the space ∂T , we see that C is smooth if and only if ∂T has type (e + 1).
Remark 7.2. Note that the open set P(S d+e U ) \ Sec e+1 C d+e is non-empty and of dimension d + e = 2d − s − 1 if and only if dim Sec e+1 C d+e = 2e + 3 ≤ d + e − 1, which is true, as we are assuming s = d − e − 1 ≥ 3. Now, we can state and prove the main result of this section.
Theorem 7.3. Let us assume that C is a non-degenerate irreducible smooth rational curve of degree d ≥ s + 1 with parametrization map f = π T • ν d : P 1 → C ⊂ P s . Then the following conditions are equivalent:
(i) The vertex T is of type (e), that is, T = ∂ e (g) with [g] ∈ P(S d+e U ) \ Sec e+1 C d+e .
(ii) T f = O P 1 (d + 2 + e) ⊕ O s−1 P 1 (d + 1).
(iii) The curve C is contained in a smooth rational normal scroll S ≅ P(E) ⊂ P s , with E = O P 1 (α) ⊕ O P 1 (β), where α, β > 0 and α + β = s − 1.
Moreover, under any of the conditions above, the following also hold:
(1) The rational normal scroll containing C is uniquely determined by C.
(2) The normal bundle N f has splitting type N f ≅ O 2 P 1 (d + e + 3) ⊕ O s−3 P 1 (d + 2).

Proof. (i) ⇐⇒ (ii). By Proposition 2.6 one sees that T has type (e), that is, T = ∂ e (g) with [g] ∉ Sec e C d+e , if and only if T f = O P 1 (d + 2 + e) ⊕ O s−1 P 1 (d + 1). Since we are assuming C smooth, by Lemma 7.1 one actually has [g] ∉ Sec e+1 C d+e .
(ii)⇒(iii). We set V = T ⊥ and recall the restricted Euler sequence appearing in the second column of the diagram of Section 4.1:
0 → O P 1 → V * ⊗ O P 1 (d) → T f → 0 .
From this sequence and the existence of the sub-line bundle O P 1 (d + 2 + e) → T f , we deduce a commutative diagram with exact rows and columns
\begin{array}{ccccccccc}
0 & \to & \mathcal{O}_{\mathbb{P}^1}(-d) & \to & E^* & \to & \mathcal{O}_{\mathbb{P}^1}(e+2) & \to & 0 \\
 & & \| & & \downarrow & & \downarrow & & \\
0 & \to & \mathcal{O}_{\mathbb{P}^1}(-d) & \to & V^* \otimes \mathcal{O}_{\mathbb{P}^1} & \to & T_f(-d) & \to & 0 \\
 & & & & \downarrow & & \downarrow & & \\
 & & & & \mathcal{O}^{s-1}_{\mathbb{P}^1}(1) & \xrightarrow{\ \cong\ } & \mathcal{O}^{s-1}_{\mathbb{P}^1}(1) & &
\end{array}
, where E * is defined as the preimage of O P 1 (e + 2) in V * ⊗ O P 1 . Dually, we get an exact sequence 0 → O s−1 P 1 (−1) → V ⊗ O P 1 → E → 0. It immediately follows that E has splitting type E ≅ O P 1 (α) ⊕ O P 1 (β) with α, β ≥ 0 and α + β = s − 1. Moreover, the sheaf map V ⊗ O P 1 → O P 1 (d) that is naturally associated with f is the composition of the sheaf epimorphisms V ⊗ O P 1 → E → O P 1 (d). Let us set Y = P(E). Then the sheaf epimorphism V ⊗ O P 1 → E provides a map Y → P s whose image S is a ruled surface of minimal degree s − 1, and the existence of the factorization V ⊗ O P 1 → E → O P 1 (d) shows that the curve C is contained in S as the image of a section C̃ of the P 1 -bundle Y → P 1 . We only have to show that α, β > 0. Indeed, if for example α = 0 and β = s − 1, then S is a cone over a rational normal curve in P s−1 ; more precisely, the map Y → S contracts the unique curve C 0 of Y with C 0 2 = 1 − s to the vertex of the cone S. In this case the section C̃ ⊂ Y has divisor class C̃ ≡ C 0 + dF , with F a fiber of Y → P 1 , and intersection number C̃ · C 0 = d + 1 − s ≥ 2 for d ≥ s + 1. Hence C cannot be smooth for d ≥ s + 1. This argument excludes the case of the cone; therefore E = O P 1 (α) ⊕ O P 1 (β), with α + β = s − 1 and α, β > 0. In this case one also sees that the map Y → P s is an embedding, that is, Y ≅ S, so S is a smooth rational normal scroll.
(iii)⇒(ii). Assume C ⊂ S ⊂ P s , with S a smooth rational normal scroll. In particular, S is isomorphic to a rational ruled surface P(E), embedded in P s by means of a surjection of vector bundles V ⊗ O P 1 → E. The fact that deg S = s − 1 is equivalent to deg E = s − 1. The fact that C ⊂ S ≅ P(E) is a section of the projection map P(E) → P 1 implies the existence of a sheaf epimorphism E → O P 1 (d) such that the epimorphism V ⊗ O P 1 → O P 1 (d) associated with the embedding C ⊂ P s factors as V ⊗ O P 1 → E → O P 1 (d). Setting L = ker(E → O P 1 (d)), we see that L ≅ O P 1 (s − 1 − d) = O P 1 (−e − 2). Now, we can dualize all the sheaf morphisms that we have introduced so far, obtaining a diagram of the form
\begin{array}{ccccccccc}
0 & \to & \mathcal{O}_{\mathbb{P}^1}(-d) & \to & E^* & \to & \mathcal{O}_{\mathbb{P}^1}(e+2) & \to & 0 \\
 & & \| & & \downarrow & & \downarrow & & \\
0 & \to & \mathcal{O}_{\mathbb{P}^1}(-d) & \to & V^* \otimes \mathcal{O}_{\mathbb{P}^1} & \to & T_f(-d) & \to & 0 .
\end{array} \qquad (7.1)
That is, we have obtained a sheaf embedding O P 1 (d + e + 2) → T f . Since deg T f = (s + 1)d = (s − 1)(d + 1) + d + e + 2 and the degree of any summand O P 1 (δ) in a splitting of T f is at least d + 1, we can conclude that T f has the form stated in condition (ii).

Proof of statement (1). After fixing homogeneous coordinates on P s , the last row of the diagram (7.1) is uniquely determined by the parametrization map f : P 1 → P s , since this map defines uniquely the sheaf embedding O P 1 (−d) → V * ⊗ O P 1 . Hence it is determined by C up to the action of PGL(2) = Aut(P 1 ). Moreover, there exists only one sheaf embedding O P 1 (e + 2) → T f (−d) for the given splitting type T f = O P 1 (d + e + 2) ⊕ O s−1 P 1 (d + 1). Hence the sheaf embedding E * → V * ⊗ O P 1 in the diagram (7.1) is also uniquely determined by C up to the action of PGL(2) on P 1 . This means that the parametrization map P(E) → S ⊂ P s is uniquely determined by C, up to the (equivariant) action of PGL(2) on P(E) → P 1 . Hence S is uniquely determined by C.
Proof of statement (2). The stated formula for the splitting type of N f is an immediate consequence of Proposition 5.5.
Remark 7.4. There is a classical connection between the property of a non-degenerate irreducible curve C of sufficiently high degree of being contained in a rational normal scroll and the number of independent quadric hypersurfaces containing C. Indeed, one has the following result, essentially due to Castelnuovo.
Proposition 7.5. A non-degenerate and irreducible curve C ⊂ P s of degree d ≥ 2s + 1 has h 0 I C (2) ≤ (s − 1)(s − 2)/2. If in addition C is smooth and rational, the equality holds if and only if C is contained in a smooth rational normal scroll of dimension 2.
Sketch of proof. Let Γ = C ∩ H be a general hyperplane section of C, which is in general linear position. Then, from the exact sequence 0 → I C (1) → I C (2) → I Γ,H (2) → 0 , one finds h 0 I C (2) ≤ h 0 I Γ,H (2). By a classical argument of Castelnuovo, any 2s − 1 points of Γ impose independent conditions on the quadrics of H ≅ P s−1 , hence h 0 I Γ,H (2) ≤ h 0 O H (2) − 2s + 1 = s(s + 1)/2 − 2s + 1 = (s − 1)(s − 2)/2, proving the stated inequality.
If the equality holds, then Γ imposes exactly 2s − 1 conditions on the quadrics of H ≅ P s−1 , and since deg Γ = d ≥ 2s + 1 = 2(s − 1) + 3, one can apply Castelnuovo's lemma as in [GH78, Chapter 4, p. 531] and conclude that Γ is contained in a unique rational normal curve of P s−1 . Hence, by the arguments in the proof of the lemma in [GH78, Chapter 4, pp. 531-532], either the curve C is contained in a rational normal scroll or s = 5 and C is contained in a Veronese surface in P 5 . When C is a smooth rational curve, we can exclude that S is the Veronese surface ν 2 (P 2 ) ⊂ P 5 , because any non-degenerate smooth curve C ⊂ S would come from a smooth curve of degree at least 3 of P 2 , hence cannot be rational. Therefore we are left with the case of S a rational normal scroll. As in the proof of the implication (ii)⇒(iii) of Theorem 7.3, it is easy to see that S is smooth. The converse follows from the fact that a rational normal scroll S ⊂ P s is contained in (s − 1)(s − 2)/2 independent quadrics.
We conclude this section with a discussion of the relevance of the smoothness assumption in Theorem 7.3. Indeed, one can see that the implication (iii)⇒(ii) of Theorem 7.3 is false if one does not assume C to be smooth. To this purpose, one can find counterexamples already in P 3 . This fact was not explicitly observed in [EvdV81], where the case s = 3 of Theorem 7.3 was proved. Here is such an example.
Example 7.6. Let us consider g : P 1 → P 1 × P 1 defined by g(u, v) = (u 2 : v 2 ; u 3 : v 3 ) and compose it with the Segre embedding P 1 × P 1 → P 3 so as to obtain f : P 1 → P 3 defined by f (u, v) = (u 5 : u 2 v 3 : u 3 v 2 : v 5 ). This is a parametrization of a rational curve C (with two cusps) of degree 5 contained in the quadric Q ⊂ P 3 of equation x 0 x 3 − x 1 x 2 = 0, which is a very simple rational normal scroll. Therefore C satisfies condition (iii) of Theorem 7.3. Note that C is a curve of divisor class (2, 3) in P 1 × P 1 , so C is not a section of either of the two P 1 -bundle structures Q → P 1 . We have, by construction, T ⊥ = ⟨u 5 , u 2 v 3 , u 3 v 2 , v 5 ⟩.
One immediately sees that T = ⟨x 4 y, xy 4 ⟩ and therefore ∂T = ⟨x 4 , x 3 y, xy 3 , y 4 ⟩, so that dim ∂T = dim T + 2. Hence, from Proposition 2.3 and Definition 2.4 one sees that T has numerical type (0, 0), and by Proposition 2.6 one finds
T f = O 2 P 1 (7) ⊕ O P 1 (6) . (7.2)
This contradicts condition (ii) of Theorem 7.3. Observe that the curve C has no ordinary singularities, but it can be deformed to a rational curve C ′ ⊂ Q of divisor class (2, 3) with two nodes. Since the vertex T relative to C has numerical type (0, 0) and this is the general numerical type for subspaces T ⊂ S 5 U of dimension 2, the vertex T ′ relative to C ′ will have type (0, 0) as well. Hence the restricted tangent sheaf to C ′ has splitting type as in formula (7.2), providing a counterexample to condition (ii) of Theorem 7.3 by means of a curve with ordinary singularities.
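The numerical type of the vertex in Example 7.6 is immediate to confirm by machine; the sketch below (ours) checks that dim ∂T = dim T + 2 for T = ⟨x 4 y, xy 4 ⟩, which forces type (0, 0).

```python
import sympy as sp

x, y = sp.symbols('x y')

def span_dim(polys):
    ds = [sp.Poly(sp.expand(p), x, y).as_dict() for p in polys]
    monos = sorted({m for d in ds for m in d})
    return sp.Matrix([[d.get(m, 0) for m in monos] for d in ds]).rank()

T = [x**4*y, x*y**4]                   # the vertex of Example 7.6 (d = 5)
dT = [sp.diff(t, v) for t in T for v in (x, y)]
print(span_dim(T), span_dim(dT))       # 2 4: dim dT = dim T + 2,
# so T has numerical type (0,0) and T_f splits as in formula (7.2)
```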
T) there exists an element of the form [λl^d + µm^d] if [m] ≠ [l], and an element of the form [l^{d−1} n] if [m] = [l].
Acknowledgements. We thank G. Ottaviani and F. Russo for many stimulating and helpful discussions during the development of this work.
References

[AR15] A. Alzati and R. Re, PGL(2) actions on Grassmannians and projective construction of rational curves with given restricted tangent bundle, J. Pure Appl. Algebra 219 (2015), no. 5, 1320-1335; https://doi.org/10.1016/j.jpaa.2014.06.007.
[Ber11] A. Bernardi, Normal bundle of rational curves and the Waring's problem, Ph.D. thesis, Università degli Studi di Firenze, 2011.
[Ber14] A. Bernardi, Normal bundle of rational curves and Waring decomposition, J. Algebra 400 (2014), 123-141; https://doi.org/10.1016/j.jalgebra.2013.11.008.
[EC] F. Enriques and O. Chisini, Lezioni sulla teoria geometrica delle equazioni e delle funzioni algebriche, Vol. I (1915), Vol. II (1918), Vol. III (1924), Vol. IV (1934), Zanichelli Editore, Bologna.
[EvdV81] D. Eisenbud and A. van de Ven, On the normal bundles of smooth rational space curves, Math. Ann. 256 (1981), no. 4, 453-463; https://doi.org/10.1007/BF01450541.
[EvdV82] D. Eisenbud and A. van de Ven, On the variety of smooth rational space curves with given degree and normal bundle, Invent. Math. 67 (1982), no. 1, 89-100; https://doi.org/10.1007/BF01393373.
[FH91] W. Fulton and J. Harris, Representation theory: a first course, Grad. Texts in Math., vol. 129, Springer-Verlag, New York, 1991; https://doi.org/10.1007/978-1-4612-0979-9.
[GH78] P. Griffiths and J. Harris, Principles of algebraic geometry, Pure Appl. Math., John Wiley & Sons, New York, 1978.
[GS80] F. Ghione and G. Sacchiero, Normal bundles of rational curves in P^3, Manuscripta Math. 33 (1980), no. 2, 111-128; https://doi.org/10.1007/BF01316971.
[Har77] R. Hartshorne, Algebraic geometry, Grad. Texts in Math., vol. 52, Springer-Verlag, New York-Heidelberg, 1977; https://doi.org/10.1007/978-1-4757-3849-0.
[Iar14] A. Iarrobino, Strata of vector spaces of forms in R = k[x, y], and of rational curves in P^k, Bull. Braz. Math. Soc. (N.S.) 45 (2014), no. 4, 711-725; https://doi.org/10.1007/s00574-014-0070-x.
[Olv99] P. J. Olver, Classical invariant theory, London Math. Soc. Stud. Texts, vol. 44, Cambridge Univ. Press, Cambridge, 1999; https://doi.org/10.1017/CBO9780511623660.
[Ram90] L. Ramella, La stratification du schéma de Hilbert des courbes rationnelles de P^n par le fibré tangent restreint, C. R. Acad. Sci. Paris Sér. I Math. 311 (1990), no. 3, 181-184.
[Ran07] Z. Ran, Normal bundles of rational curves in projective spaces, Asian J. Math. 11 (2007), no. 4, 567-608; https://doi.org/10.4310/AJM.2007.v11.n4.a3.
[Re12] R. Re, Principal parts bundles on projective spaces and quiver representations, Rend. Circ. Mat. Palermo 61 (2012), no. 2, 179-198; https://doi.org/10.1007/s12215-012-0084-4.
[Sac80] G. Sacchiero, Normal bundles of rational curves in projective space, Ann. Univ. Ferrara Sez. VII (N.S.) 26 (1980), 33-40.
[Ver83] J. Verdier, Two dimensional σ-models and harmonic maps from S^2 to S^{2n}, in: Group Theoretical Methods in Physics (Istanbul, 1982), eds M. Serdaroǧlu and E. Ínönü, Lecture Notes in Phys., vol. 180, Springer, Berlin-Heidelberg, 1983, 136-141; https://doi.org/10.1007/3-540-12291-5_17.
Alberto Alzati [email protected]
Dipartimento di Matematica F. Enriques, Università di Milano, via Saldini 50, 20133 Milano, Italy

Riccardo Re [email protected]
Dipartimento di Matematica e Informatica, Università di Catania, viale Andrea Doria 6, 95125 Catania, Italy
| [] |
[
"Supplementary material S.1 Maintenance operations for vertical alignment correction",
"Supplementary material S.1 Maintenance operations for vertical alignment correction"
] | [] | [] | [] | We list the nine maintenance operations used for forecasting vertical track alignment inTable 4of the main manuscript. These nine operations are categorized by merging more detailed operations. Here we show the full list of the maintenance operations including the detailed operations inTable S1. For example, the category of sleeper maintenance includes sleeper replacement, loose sleeper repair, and so on. | 10.1038/s41598-023-29303-7 | null | 256,701,999 | 2211.03549 | e77c72a114bb27ff410a77c0bd2db5ba4b960edc |
Supplementary material S.1 Maintenance operations for vertical alignment correction
We list the nine maintenance operations used for forecasting vertical track alignment in Table 4 of the main manuscript. These nine operations are categorized by merging more detailed operations. Here we show the full list of the maintenance operations, including the detailed operations, in Table S1. For example, the category of sleeper maintenance includes sleeper replacement, loose sleeper repair, and so on.
S.2 Loss curves for training and validation data
Figure S1 shows the loss curves of ConvLSTM, GRU, and LSTM in the comparison experiment. The blue and orange lines show the losses for the training and validation data, respectively. As shown in Figure S1, the loss curves for both the training and validation data decrease as the epochs progress. These results indicate that overfitting does not occur for any of ConvLSTM, GRU, and LSTM.

Figure S1: The loss curves of (a) ConvLSTM, (b) GRU, and (c) LSTM.

S.3 Ablation study on exogenous factors for LSTM and GRU

Tables S2, S3 and S4 show the results of the ablation study on the exogenous factors for LSTM and GRU. The results are similar to those of ConvLSTM (see Tables 9, 10, and 11 in the main manuscript). For example, comparing "with-all" and "without-maintenance", "without-maintenance" shows a higher RMSE and a lower accuracy for both the entire dataset and the dataset with thresholds α = −4, −6 [mm]. Therefore, the maintenance records are also significant for forecasting in LSTM and GRU. Additionally, the results confirm that ConvLSTM outperforms LSTM and GRU (see Tables 9, 10, and 11 in the main manuscript).
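As a concrete illustration of the two evaluation metrics reported throughout Tables S2-S6, the following Python sketch computes the RMSE restricted to points whose true value lies below a threshold α and the accuracy within a tolerance ε. The function names and example values are illustrative assumptions, not the authors' code.

```python
import numpy as np

def rmse(y_true, y_pred, alpha=None):
    """RMSE over all points, or only over points whose true value is below alpha [mm]."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mask = np.ones(y_true.shape, dtype=bool) if alpha is None else (y_true < alpha)
    return float(np.sqrt(np.mean((y_true[mask] - y_pred[mask]) ** 2)))

def accuracy(y_true, y_pred, eps, alpha=None):
    """Percentage of forecasts within +/- eps [mm] of the truth (optionally below alpha)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mask = np.ones(y_true.shape, dtype=bool) if alpha is None else (y_true < alpha)
    return 100.0 * float(np.mean(np.abs(y_true[mask] - y_pred[mask]) <= eps))

# Example: evaluate forecasts on the severely misaligned part of the track.
y_true = np.array([-6.5, -4.2, -1.0, -7.1])
y_pred = np.array([-6.1, -3.8, -0.7, -5.9])
print(rmse(y_true, y_pred, alpha=-6.0))             # RMSE on points with y_true < -6 mm
print(accuracy(y_true, y_pred, eps=0.5, alpha=-6.0))
```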
Supplementary Table S2: Results of the ablation study (RMSE) for (a) LSTM and (b) GRU. The RMSE is calculated with both the entire data and the data with the threshold levels α = −4, −6 [mm].
(a) LSTM (RMSE (mm), ↓)
Case name | Maintenance | Structure | Rail joint | Ballast age | Tonnage | Rainfall | Entire | < −4 mm | < −6 mm
w/ all | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 0.302 | 1.091 | 2.477
w/o maintenance |  | ✓ | ✓ | ✓ | ✓ | ✓ | 0.369 | 1.226 | 2.651
w/o structure | ✓ |  | ✓ | ✓ | ✓ | ✓ | 0.305 | 1.114 | 2.526
w/o rail joint | ✓ | ✓ |  | ✓ | ✓ | ✓ | 0.302 | 1.104 | 2.469
w/o ballast age | ✓ | ✓ | ✓ |  | ✓ | ✓ | 0.305 | 1.091 | 0.247
w/o tonnage | ✓ | ✓ | ✓ | ✓ |  | ✓ | 0.305 | 1.090 | 2.417
w/o rainfall | ✓ | ✓ | ✓ | ✓ | ✓ |  | 0.300 | 1.079 | 2.402
w/o all |  |  |  |  |  |  | 0.369 | 1.194 | 2.582

(b) GRU (RMSE (mm), ↓)
Case name | Maintenance | Structure | Rail joint | Ballast age | Tonnage | Rainfall | Entire | < −4 mm | < −6 mm
w/ all | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 0.300 | 1.085 | 2.406
w/o maintenance |  | ✓ | ✓ | ✓ | ✓ | ✓ | 0.366 | 1.270 | 2.762
w/o structure | ✓ |  | ✓ | ✓ | ✓ | ✓ | 0.299 | 1.091 | 2.416
w/o rail joint | ✓ | ✓ |  | ✓ | ✓ | ✓ | 0.299 | 1.083 | 2.398
w/o ballast age | ✓ | ✓ | ✓ |  | ✓ | ✓ | 0.299 | 1.093 | 2.414
w/o tonnage | ✓ | ✓ | ✓ | ✓ |  | ✓ | 0.299 | 1.089 | 2.402
w/o rainfall | ✓ | ✓ | ✓ | ✓ | ✓ |  | 0.299 | 1.093 | 2.437
w/o all |  |  |  |  |  |  | 0.365 | 1.285 | 2.828
(a) LSTM (Accuracy (%), ↑, evaluated on data with < −4 mm)
Case name | Maintenance | Structure | Rail joint | Ballast age | Tonnage | Rainfall | ±0.3 mm | ±0.5 mm | ±1.0 mm
w/ all | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 56.55 | 72.51 | 85.13
w/o maintenance |  | ✓ | ✓ | ✓ | ✓ | ✓ | 05.14 | 22.74 | 70.60
w/o structure | ✓ |  | ✓ | ✓ | ✓ | ✓ | 57.26 | 71.65 | 84.07
w/o rail joint | ✓ | ✓ |  | ✓ | ✓ | ✓ | 57.05 | 72.38 | 84.70
w/o ballast age | ✓ | ✓ | ✓ |  | ✓ | ✓ | 59.51 | 73.16 | 85.03
w/o tonnage | ✓ | ✓ | ✓ | ✓ |  | ✓ | 59.98 | 73.48 | 85.12
w/o rainfall | ✓ | ✓ | ✓ | ✓ | ✓ |  | 63.66 | 75.83 | 85.83
w/o all |  |  |  |  |  |  | 05.52 | 25.08 | 73.19

(b) GRU (Accuracy (%), ↑, evaluated on data with < −4 mm)
Case name | Maintenance | Structure | Rail joint | Ballast age | Tonnage | Rainfall | ±0.3 mm | ±0.5 mm | ±1.0 mm
w/ all | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 61.21 | 74.96 | 86.01
w/o maintenance |  | ✓ | ✓ | ✓ | ✓ | ✓ | 07.56 | 23.32 | 64.36
w/o structure | ✓ |  | ✓ | ✓ | ✓ | ✓ | 63.55 | 76.22 | 86.01
w/o rail joint | ✓ | ✓ |  | ✓ | ✓ | ✓ | 63.40 | 76.11 | 85.97
w/o ballast age | ✓ | ✓ | ✓ |  | ✓ | ✓ | 63.66 | 75.83 | 85.71
w/o tonnage | ✓ | ✓ | ✓ | ✓ |  | ✓ | 63.83 | 76.08 | 85.76
w/o rainfall | ✓ | ✓ | ✓ | ✓ | ✓ |  | 64.48 | 76.59 | 86.26
w/o all |  |  |  |  |  |  | 09.24 | 24.70 | 64.02
Supplementary Table S3: Results of the ablation study (accuracy) for (a) LSTM and (b) GRU. The accuracy is calculated with tolerance ε = 0.3, 0.5, 1.0 [mm] on the data with the evaluation threshold level α = −4 [mm].
(a) LSTM (Accuracy (%), ↑, evaluated on data with < −6 mm)
Case name | Maintenance | Structure | Rail joint | Ballast age | Tonnage | Rainfall | ±0.3 mm | ±0.5 mm | ±1.0 mm
w/ all | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 05.44 | 17.67 | 46.02
w/o maintenance |  | ✓ | ✓ | ✓ | ✓ | ✓ | 00.00 | 00.00 | 01.36
w/o structure | ✓ |  | ✓ | ✓ | ✓ | ✓ | 06.21 | 21.75 | 42.52
w/o rail joint | ✓ | ✓ |  | ✓ | ✓ | ✓ | 22.14 | 35.92 | 52.82
w/o ballast age | ✓ | ✓ | ✓ |  | ✓ | ✓ | 04.66 | 20.58 | 45.05
w/o tonnage | ✓ | ✓ | ✓ | ✓ |  | ✓ | 09.71 | 24.66 | 46.02
w/o rainfall | ✓ | ✓ | ✓ | ✓ | ✓ |  | 15.15 | 30.10 | 48.74
w/o all |  |  |  |  |  |  | 00.00 | 00.00 | 03.30

(b) GRU (Accuracy (%), ↑, evaluated on data with < −6 mm)
Case name | Maintenance | Structure | Rail joint | Ballast age | Tonnage | Rainfall | ±0.3 mm | ±0.5 mm | ±1.0 mm
w/ all | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | 16.12 | 30.87 | 49.71
w/o maintenance |  | ✓ | ✓ | ✓ | ✓ | ✓ | 00.00 | 00.00 | 01.55
w/o structure | ✓ |  | ✓ | ✓ | ✓ | ✓ | 24.08 | 36.50 | 51.84
w/o rail joint | ✓ | ✓ |  | ✓ | ✓ | ✓ | 23.88 | 35.34 | 51.65
w/o ballast age | ✓ | ✓ | ✓ |  | ✓ | ✓ | 21.36 | 33.20 | 50.29
w/o tonnage | ✓ | ✓ | ✓ | ✓ |  | ✓ | 22.33 | 34.17 | 51.07
w/o rainfall | ✓ | ✓ | ✓ | ✓ | ✓ |  | 20.97 | 34.17 | 51.46
w/o all |  |  |  |  |  |  | 00.00 | 00.00 | 01.55
Supplementary Table S4: Results of the ablation study (accuracy) for (a) LSTM and (b) GRU. The accuracy is calculated with tolerance ε = 0.3, 0.5, 1.0 [mm] on the data with the evaluation threshold level α = −6 [mm].
S.4 Layer tuning of LSTM and GRU

To determine the best architectures for LSTM and GRU, we examine the forecasting performance by changing the number of layers for LSTM and GRU. Tables S5 and S6 show the RMSE and accuracy of the tuning results, respectively. In the tables, the number of layers means that the architecture consists of that number of stacked recurrent layers. We also show the results of ConvLSTM for comparison. In each case, ConvLSTM provides better RMSE and accuracy than those obtained by tuning LSTM and GRU.
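For concreteness, a layer-count sweep of the kind summarized in Tables S5 and S6 can be set up as in the following PyTorch sketch; the model, feature dimension, and names are illustrative assumptions rather than the authors' implementation (replacing nn.LSTM with nn.GRU gives the GRU variant).

```python
import torch
import torch.nn as nn

class TrackForecaster(nn.Module):
    """Stacked-LSTM forecaster: recurrent layers followed by a linear read-out."""
    def __init__(self, n_features, hidden_size=64, num_layers=2):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden_size,
                           num_layers=num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)   # next-step vertical alignment [mm]

    def forward(self, x):                       # x: (batch, time, n_features)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])            # forecast from the last time step

# Sweep the number of layers (1-4), as in Tables S5 and S6.
for num_layers in [1, 2, 3, 4]:
    model = TrackForecaster(n_features=8, num_layers=num_layers)
    x = torch.randn(16, 30, 8)                  # dummy batch: 16 series, 30 steps
    print(num_layers, model(x).shape)           # -> torch.Size([16, 1])
```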
Supplementary Table S5: RMSE results by tuning the number of layers for (a) LSTM and (b) GRU.

(a) LSTM (RMSE (mm), ↓)
Num. layers | Entire | < −4 mm | < −6 mm
1 | 0.300 | 1.096 | 2.451
2 | 0.302 | 1.091 | 2.411
3 | 0.304 | 1.107 | 2.460
4 | 0.315 | 1.137 | 2.538
ConvLSTM | 0.293 | 1.071 | 2.343

(b) GRU (RMSE (mm), ↓)
Num. layers | Entire | < −4 mm | < −6 mm
1 | 0.300 | 1.097 | 2.428
2 | 0.300 | 1.085 | 2.406
3 | 0.299 | 1.090 | 2.412
4 | 0.299 | 1.083 | 2.399
ConvLSTM | 0.293 | 1.071 | 2.343
Supplementary Table S6: Accuracy results by tuning the number of layers for (a) LSTM and (b) GRU.

(a) LSTM (Accuracy (%), ↑)
Num. layers | < −4 mm: ±0.3 mm / ±0.5 mm / ±1.0 mm | < −6 mm: ±0.3 mm / ±0.5 mm / ±1.0 mm
1 | 62.02 / 75.12 / 85.58 | 10.87 / 26.99 / 48.54
2 | 56.55 / 72.52 / 85.14 | 05.44 / 17.67 / 46.21
3 | 56.03 / 71.28 / 84.15 | 03.11 / 19.81 / 43.69
4 | 47.04 / 64.86 / 81.84 | 00.78 / 05.83 / 34.56
ConvLSTM | 66.48 / 77.82 / 87.35 | 26.02 / 37.28 / 54.76

(b) GRU (Accuracy (%), ↑)
Num. layers | < −4 mm: ±0.3 mm / ±0.5 mm / ±1.0 mm | < −6 mm: ±0.3 mm / ±0.5 mm / ±1.0 mm
1 | 60.74 / 74.27 / 85.04 | 18.06 / 29.51 / 47.18
2 | 61.21 / 74.96 / 86.01 | 16.12 / 30.87 / 49.71
3 | 62.53 / 75.65 / 85.71 | 23.30 / 34.56 / 51.07
4 | 62.07 / 75.18 / 85.72 | 18.83 / 31.07 / 48.54
ConvLSTM | 66.48 / 77.82 / 87.35 | 26.02 / 37.28 / 54.76
| [] |
[
"COMBO: Conservative Offline Model-Based Policy Optimization",
"COMBO: Conservative Offline Model-Based Policy Optimization"
] | [
"Tianhe Yu [email protected] \nStanford University\n2 UC Berkeley\n\nEqual Contribution)\n\n",
"Aviral Kumar [email protected] \nEqual Contribution)\n\n",
"Rafael Rafailov \nStanford University\n2 UC Berkeley\n",
"Aravind Rajeswaran \nFacebook AI Research\n\n",
"Sergey Levine ",
"Chelsea Finn \nStanford University\n2 UC Berkeley\n"
] | [
"Stanford University\n2 UC Berkeley",
"Equal Contribution)\n",
"Equal Contribution)\n",
"Stanford University\n2 UC Berkeley",
"Facebook AI Research\n",
"Stanford University\n2 UC Berkeley"
] | [] | Model-based reinforcement learning (RL) algorithms, which learn a dynamics model from logged experience and perform conservative planning under the learned model, have emerged as a promising paradigm for offline reinforcement learning (offline RL). However, practical variants of such model-based algorithms rely on explicit uncertainty quantification for incorporating conservatism. Uncertainty estimation with complex models, such as deep neural networks, can be difficult and unreliable. We empirically find that uncertainty estimation is not accurate and leads to poor performance in certain scenarios in offline model-based RL. We overcome this limitation by developing a new model-based offline RL algorithm, COMBO, that trains a value function using both the offline dataset and data generated using rollouts under the model while also additionally regularizing the value function on out-of-support state-action tuples generated via model rollouts. This results in a conservative estimate of the value function for out-of-support state-action tuples, without requiring explicit uncertainty estimation. Theoretically, we show that COMBO satisfies a policy improvement guarantee in the offline setting. Through extensive experiments, we find that COMBO attains greater performance compared to prior offline RL on problems that demand generalization to related but previously unseen tasks, and also consistently matches or outperforms prior offline RL methods on widely studied offline RL benchmarks, including image-based tasks.PreliminariesMarkov Decision Processes and Offline RL. We study RL in the framework of Markov decision processes (MDPs) specified by the tuple M = (S, A, T, r, µ 0 , γ). S, A denote the state and action spaces. T (s |s, a) and r(s, a) ∈ [−R max , R max ] represent the dynamics and reward function respectively. µ 0 (s) denotes the initial state distribution, and γ ∈ (0, 1) denotes the discount factor. We denote the discounted state visitation distribution of a policy π using d π M (s) := (1 − γ) ∞ t=0 γ t P(s t = s|π), where P(s t = s|π) is the probability of reaching state s at time t by rolling out π in M. Similarly, we denote the state-action visitation distribution with d π M (s, a) := d π M (s)π(a|s). The goal of RL is to learn a policy that maximizes the return, or long term cumulative rewards: max π J(M, π) := 1 1−γ E (s,a)∼d π M (s,a) [r(s, a)]. Offline RL is the setting where we have access only to a fixed dataset D = {(s, a, r, s )}, which consists of transition tuples from trajectories collected using a behavior policy π β . In other words, the dataset D is sampled from d π β (s, a) := d π β (s)π β (a|s). We define M as the empirical MDP induced by the dataset D and d(s, a) as sampled-based version of d π β (s, a). In the offline setting, the goal is to find the best possible policy using the fixed offline dataset.Model-Free Offline RL Algorithms. One class of approaches for solving MDPs involves the use of dynamic programming and actor-critic schemes[56,5], which do not explicitly require the learning | null | [
"https://arxiv.org/pdf/2102.08363v2.pdf"
] | 231,934,209 | 2102.08363 | 245682e8b3fa76f4a3e2991b5497577af95cbb3f |
COMBO: Conservative Offline Model-Based Policy Optimization
Tianhe Yu [email protected]
Stanford University
2 UC Berkeley
(Equal Contribution)
Aviral Kumar [email protected]
(Equal Contribution)
Rafael Rafailov
Stanford University
2 UC Berkeley
Aravind Rajeswaran
Facebook AI Research
Sergey Levine
Chelsea Finn
Stanford University
2 UC Berkeley
COMBO: Conservative Offline Model-Based Policy Optimization
Model-based reinforcement learning (RL) algorithms, which learn a dynamics model from logged experience and perform conservative planning under the learned model, have emerged as a promising paradigm for offline reinforcement learning (offline RL). However, practical variants of such model-based algorithms rely on explicit uncertainty quantification for incorporating conservatism. Uncertainty estimation with complex models, such as deep neural networks, can be difficult and unreliable. We empirically find that uncertainty estimation is not accurate and leads to poor performance in certain scenarios in offline model-based RL. We overcome this limitation by developing a new model-based offline RL algorithm, COMBO, that trains a value function using both the offline dataset and data generated using rollouts under the model, while additionally regularizing the value function on out-of-support state-action tuples generated via model rollouts. This results in a conservative estimate of the value function for out-of-support state-action tuples, without requiring explicit uncertainty estimation. Theoretically, we show that COMBO satisfies a policy improvement guarantee in the offline setting. Through extensive experiments, we find that COMBO attains greater performance compared to prior offline RL methods on problems that demand generalization to related but previously unseen tasks, and also consistently matches or outperforms prior offline RL methods on widely studied offline RL benchmarks, including image-based tasks.
Introduction
Offline reinforcement learning (offline RL) [30,34] refers to the setting where policies are trained using static, previously collected datasets. This presents an attractive paradigm for data reuse and safe policy learning in many applications, such as healthcare [62], autonomous driving [65], robotics [25,48], and personalized recommendation systems [59]. Recent studies have observed that RL algorithms originally developed for the online or interactive paradigm perform poorly in the offline case [14,28,26]. This is primarily attributed to the distribution shift that arises over the course of learning between the offline dataset and the learned policy. Thus, development of algorithms specialized for offline RL is of paramount importance to benefit from the offline data available in the aforementioned applications. In this work, we develop a principled model-based offline RL algorithm that matches or exceeds the performance of prior offline RL algorithms in benchmark tasks.
A major paradigm for algorithm design in offline RL is to incorporate conservatism or regularization into online RL algorithms. Model-free offline RL algorithms [15,28,63,21,29,27] directly incorporate conservatism into the policy or value function training and do not require learning a dynamics model. However, model-free algorithms learn only on the states in the offline dataset, which can lead to overly conservative algorithms. In contrast, model-based algorithms [26,67] learn a pessimistic dynamics model, which in turn induces a conservative estimate of the value function. By generating and training on additional synthetic data, model-based algorithms have the potential for broader generalization and solving new tasks using the offline dataset [67]. However, these methods rely on some sort of strong assumption about uncertainty estimation, typically assuming access to a model error oracle that can estimate upper bounds on model error for any state-action tuple. In practice, such methods use more heuristic uncertainty estimation methods, which can be difficult or unreliable for complex datasets or deep network models. It then remains an open question as to whether we can formulate principled model-based offline RL algorithms with concrete theoretical guarantees on performance without assuming access to an uncertainty or model error oracle. In this work, we propose precisely such a method, by eschewing direct uncertainty estimation, which we argue is not necessary for offline RL.

Figure 1: COMBO learns a conservative value function by utilizing both the offline dataset as well as simulated data from the model. Crucially, COMBO does not require uncertainty quantification, and the value function learned by COMBO is less conservative on the transitions seen in the dataset than CQL. This enables COMBO to steer the agent towards higher value states compared to CQL, which may steer towards more optimal states, as illustrated in the figure.
Our main contribution is the development of conservative offline model-based policy optimization (COMBO), a new model-based algorithm for offline RL. COMBO learns a dynamics model using the offline dataset. Subsequently, it employs an actor-critic method where the value function is learned using both the offline dataset as well as synthetically generated data from the model, similar to Dyna [57] and a number of recent methods [20, 67,7,48]. However, in contrast to Dyna, COMBO learns a conservative critic function by penalizing the value function in state-action tuples that are not in the support of the offline dataset, obtained by simulating the learned model. We theoretically show that for any policy, the Q-function learned by COMBO is a lower-bound on the true Q-function. While the approach of optimizing a performance lower-bound is similar in spirit to prior model-based algorithms [26,67], COMBO crucially does not assume access to a model error or uncertainty oracle. In addition, we show theoretically that the Q-function learned by COMBO is less conservative than model-free counterparts such as CQL [29], and quantify conditions under which the this lower bound is tighter than the one derived in CQL. This is illustrated through an example in Figure 1. Following prior works [31], we show that COMBO enjoys a safe policy improvement guarantee. By interpolating model-free and model-based components, this guarantee can utilize the best of both guarantees in certain cases. Finally, in our experiments, we find that COMBO achieves the best performance on tasks that require out-of-distribution generalization and outperforms previous latent-space offline model-based RL methods on image-based robotic manipulation benchmarks. We also test COMBO on commonly studied benchmarks for offline RL and find that COMBO generally performs well on the benchmarks, achieving the highest score in 9 out of 12 MuJoCo domains from the D4RL [12] benchmark suite. of a dynamics model. To capture the long term behavior of a policy without a model, we define the action value function as Q π (s, a) := E [ ∞ t=0 γ t r(s t , a t ) | s 0 = s, a 0 = a] , where future actions are sampled from π(·|s) and state transitions happen according to the MDP dynamics. Consider the following Bellman operator: B π Q(s, a) := r(s, a) + γE s ∼T (·|s,a),a ∼π(·|s ) [Q(s , a )], and its sample based counterpart: B π Q(s, a) := r(s, a) + γQ(s , a ), associated with a single transition (s, a, s ) and a ∼ π(·|s ). The action-value function satisfies the Bellman consistency criterion given by B π Q π (s, a) = Q π (s, a) ∀(s, a). When given an offline dataset D, standard approximate dynamic programming (ADP) and actor-critic methods use this criterion to alternate between policy evaluation [40] and policy improvement. A number of prior works have observed that such a direct extension of ADP and actor-critic schemes to offline RL leads to poor results due to distribution shift over the course of learning and over-estimation bias in the Q function [14,28,63]. To address these drawbacks, prior works have proposed a number of modifications aimed towards regularizing the policy or value function (see Section 6). In this work, we primarily focus on CQL [29], which alternates between:
Policy Evaluation: The Q-function associated with the current policy π is approximated conservatively by repeating the following optimization:

Q̂^{k+1} ← argmin_Q β (E_{s∼D, a∼µ(·|s)}[Q(s, a)] − E_{s,a∼D}[Q(s, a)]) + (1/2) E_{s,a,s′∼D}[(Q(s, a) − B̂^π Q̂^k(s, a))²],  (1)

where µ(·|s) is a wide sampling distribution such as the uniform distribution over action bounds. CQL effectively penalizes the Q-function at states in the dataset for actions not observed in the dataset. This enables a conservative estimation of the value function for any policy [29], mitigating the challenges of over-estimation bias and distribution shift.
Policy Improvement: After approximating the Q-function as Q̂^π, the policy is improved as π ← argmax_{π′} E_{s∼D, a∼π′(·|s)}[Q̂^π(s, a)]. Actor-critic methods with parameterized policies and Q-functions approximate the argmax and argmin in the above equations with a few steps of gradient descent.
Model-Based Offline RL Algorithms. A second class of algorithms for solving MDPs involves learning the dynamics function and using the learned model to aid policy search. Using the given dataset D, a dynamics model T̂ is typically trained using maximum likelihood estimation as: min_{T̂} E_{(s,a,s′)∼D}[−log T̂(s′|s, a)]. A reward model r̂(s, a) can also be learned similarly if it is unknown. Once a model has been learned, we can construct the learned MDP M̂ = (S, A, T̂, r̂, µ_0, γ), which has the same state and action spaces, but uses the learned dynamics and reward function. Subsequently, any policy learning or planning algorithm can be used to recover the optimal policy in the model as π̂ = argmax_π J(M̂, π).
This straightforward approach is known to fail in the offline RL setting, both in theory and practice, due to distribution shift and model bias [51,26]. In order to overcome these challenges, offline model-based algorithms like MOReL [26] and MOPO [67] use uncertainty quantification to construct a lower bound for policy performance and optimize this lower bound by assuming a model error oracle u(s, a). By using an uncertainty estimation algorithm like bootstrap ensembles [43,4,37], we can estimate u(s, a). By constructing and optimizing such a lower bound, offline model-based RL algorithms avoid the aforementioned pitfalls like model bias and distribution shift. While any RL or planning algorithm can be used to learn the optimal policy for M̂, we focus specifically on MBPO [20,57], which was used in MOPO. MBPO follows the standard structure of actor-critic algorithms, but in each iteration uses an augmented dataset D ∪ D_model for policy evaluation. Here, D is the offline dataset and D_model is a dataset obtained by simulating the current policy using the learned dynamics model. Specifically, at each iteration, MBPO performs k-step rollouts using T̂ starting from states s ∈ D with a particular rollout policy µ(a|s), adds the model-generated data to D_model, and optimizes the policy with a batch of data sampled from D ∪ D_model, where each datapoint in the batch is drawn from D with probability f ∈ [0, 1] and from D_model with probability 1 − f.
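As a concrete reference point for this data pipeline, the following Python sketch shows k-step model rollouts from dataset states and f-weighted batch mixing. It is an illustration under stated assumptions (the dynamics, policy, and buffer objects are hypothetical stand-ins), not the MBPO or MOPO implementation.

```python
import numpy as np

def model_rollout(dynamics, policy, start_states, k):
    """k-step branched rollout: simulate the learned model from dataset states."""
    transitions, s = [], start_states
    for _ in range(k):
        a = policy(s)
        s_next, r = dynamics(s, a)       # learned dynamics and reward model
        transitions.append((s, a, r, s_next))
        s = s_next
    return transitions

def sample_mixed_batch(real_buffer, model_buffer, batch_size, f):
    """Each datapoint comes from the real buffer w.p. f, else from the model buffer.

    Assumes both buffers are non-empty."""
    n_real = np.random.binomial(batch_size, f)
    idx_real = np.random.randint(len(real_buffer), size=n_real)
    idx_model = np.random.randint(len(model_buffer), size=batch_size - n_real)
    return ([real_buffer[i] for i in idx_real]
            + [model_buffer[i] for i in idx_model])
```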
Conservative Offline Model-Based Policy Optimization
The principal limitation of prior offline model-based algorithms (discussed in Section 2) is the assumption of having access to a model error oracle for uncertainty estimation and the strong reliance on heuristics for quantifying uncertainty. In practice, such heuristics can be challenging for complex datasets or deep neural network models [44]. We argue that uncertainty estimation is not imperative for offline model-based RL, and we empirically show in Section 5.1.1 that uncertainty estimation can be inaccurate in offline RL problems, especially when generalization to unknown behaviors is required. Our goal is to develop a model-based offline RL algorithm that enables optimizing a lower bound on the policy performance, but without requiring uncertainty quantification. We achieve this by extending conservative Q-learning [29], which does not require explicit uncertainty quantification, into the model-based setting. Our algorithm COMBO, summarized in Algorithm 1, alternates between a conservative policy evaluation step and a policy improvement step, which we outline below.

Algorithm 1 COMBO: Conservative Model-Based Offline Policy Optimization
Require: Offline dataset D, rollout distribution µ(·|s), learned dynamics model T̂_θ, initialized policy π_φ and critic Q_ψ.
1: Train the probabilistic dynamics model T̂_θ(s′, r|s, a) = N(µ_θ(s, a), Σ_θ(s, a)) on D.
2: Initialize the replay buffer D_model ← ∅.
3: for i = 1, 2, 3, … do
4:   Collect model rollouts by sampling from µ and T̂_θ starting from states in D. Add model rollouts to D_model.
5:   Conservatively evaluate π^i_φ by repeatedly solving Eq. 2 to obtain Q̂^{π^i_φ}_ψ using samples from D ∪ D_model.
6:   Improve the policy under the state marginal of d_f by solving Eq. 3 to obtain π^{i+1}_φ.
7: end for
Conservative Policy Evaluation: Given a policy π, an offline dataset D, and a learned model M̂ of the MDP, the goal in this step is to obtain a conservative estimate of Q^π. To achieve this, we penalize the Q-values evaluated on data drawn from a particular state-action distribution that is more likely to be out-of-support, while pushing up the Q-values on state-action pairs that are trustworthy, which is implemented by repeating the following recursion:
Q̂^{k+1} ← argmin_Q β (E_{s,a∼ρ(s,a)}[Q(s, a)] − E_{s,a∼D}[Q(s, a)]) + (1/2) E_{s,a,s′∼d_f}[(Q(s, a) − B̂^π Q̂^k(s, a))²].  (2)

Here, ρ(s, a) and d_f are sampling distributions that we can choose. Model-based algorithms allow ample flexibility for these choices while providing the ability to control the bias introduced by these choices. For ρ(s, a), we make the following choice: ρ(s, a) = d^π_M̂(s) π(a|s), where d^π_M̂(s) is the discounted marginal state distribution when executing π in the learned model M̂. Samples from d^π_M̂(s) can be obtained by rolling out π in M̂. Similarly, d_f is an f-interpolation between the offline dataset and synthetic rollouts from the model:

d^µ_f(s, a) := f d(s, a) + (1 − f) d^µ_M̂(s, a),

where f ∈ [0, 1] is the ratio of the datapoints drawn from the offline dataset as defined in Section 2 and µ(·|s) is the rollout distribution used with the model, which can be modeled as π or a uniform distribution. To avoid notational clutter, we also denote d_f := d^µ_f. Under such choices of ρ and d_f, we push down (or conservatively estimate) Q-values on state-action tuples from model rollouts and push up Q-values on the real state-action pairs from the offline dataset. When updating Q-values with the Bellman backup, we use a mixture of both the model-generated data and the real data, similar to Dyna [57]. Note that in comparison to CQL and other model-free algorithms, COMBO learns the Q-function over a richer set of states beyond the states in the offline dataset. This is made possible by performing rollouts under the learned dynamics model, denoted by d^µ_M̂(s, a). We will show in Section 4 that the Q-function learned by repeating the recursion in Eq. 2 provides a lower bound on the true Q-function, without the need for explicit uncertainty estimation. Furthermore, we will theoretically study the advantages of using synthetic data from the learned model, and characterize the impacts of model bias.
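To make the conservative policy evaluation step concrete, one possible rendering of Eq. 2 as a critic loss is sketched below in PyTorch. All tensor and network names are illustrative assumptions, a frozen target network plays the role of Q̂^k, and the d_f mixture is assumed to be formed upstream by batch mixing; this is a sketch, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def combo_critic_loss(q_net, q_target, policy, batch_real, batch_model,
                      beta, gamma):
    """One step of Eq. 2: conservative penalty plus Bellman error on d_f."""
    # Conservative term: push Q down on model-rollout samples (rho) and up on D.
    penalty = (q_net(batch_model["s"], batch_model["a"]).mean()
               - q_net(batch_real["s"], batch_real["a"]).mean())

    # Bellman term on the f-interpolated distribution d_f (batches pre-mixed
    # upstream with ratio f, e.g. via sample_mixed_batch above).
    s = torch.cat([batch_real["s"], batch_model["s"]])
    a = torch.cat([batch_real["a"], batch_model["a"]])
    r = torch.cat([batch_real["r"], batch_model["r"]])
    s_next = torch.cat([batch_real["s_next"], batch_model["s_next"]])
    with torch.no_grad():
        a_next = policy(s_next)
        target = r + gamma * q_target(s_next, a_next)   # frozen target network
    bellman = 0.5 * F.mse_loss(q_net(s, a), target)

    return beta * penalty + bellman
```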
Policy Improvement Using a Conservative Critic: After learning a conservative critic Q̂^π, we improve the policy as:

π′ ← argmax_π E_{s∼ρ, a∼π(·|s)}[Q̂^π(s, a)],  (3)
where ρ(s) is the state marginal of ρ(s, a). When policies are parameterized with neural networks, we approximate the arg max with a few steps of gradient descent. In addition, entropy regularization can also be used to prevent the policy from becoming degenerate if required [17]. In Section 4.2, we show that the resulting policy is guaranteed to improve over the behavior policy.
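Correspondingly, the policy improvement step of Eq. 3 reduces to gradient ascent on the conservative critic over states from ρ; a minimal sketch with the same illustrative conventions is:

```python
def combo_policy_update(policy, q_net, optimizer, s_rho):
    """One step of Eq. 3: gradient ascent on the conservative critic."""
    a = policy(s_rho)                    # reparameterized actions a ~ pi(.|s)
    loss = -q_net(s_rho, a).mean()       # maximizing Q = minimizing -Q
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```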
Practical Implementation Details. Our practical implementation largely follows MOPO, with the key exception that we perform conservative policy evaluation as outlined in this section, rather than using uncertainty-based reward penalties. Following MOPO, we represent the probabilistic dynamics model using a neural network, with parameters θ, that produces a Gaussian distribution over the next state and reward: T̂_θ(s_{t+1}, r | s_t, a_t) = N(µ_θ(s_t, a_t), Σ_θ(s_t, a_t)). The model is trained via maximum likelihood. For conservative policy evaluation (Eq. 2) and policy improvement (Eq. 3), we augment ρ with states sampled from the offline dataset, which shows more stable improvement in practice. It is relatively common in prior work on model-based offline RL to select various hyperparameters using online policy rollouts [67,26,3,33]. However, we would like to avoid this with our method, since requiring online rollouts to tune hyperparameters contradicts the main aim of offline RL, which is to learn entirely from offline data. Therefore, we do not use online rollouts for tuning COMBO, and instead devise an automated rule for tuning important hyperparameters such as β and f in a fully offline manner. We search over a small discrete set of hyperparameters for each task, and use the value of the regularization term in Eq. 2 to guide this selection.
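A Gaussian dynamics model of the kind described above can be written as an MLP that outputs a mean and a (diagonal) log-variance and is trained by negative log-likelihood. The sketch below shows one member of such an ensemble; the architecture, clamping range, and names are assumptions for illustration, not the exact model used in the paper.

```python
import torch
import torch.nn as nn

class GaussianDynamics(nn.Module):
    """Predicts a Gaussian over (next state, reward) given (state, action)."""
    def __init__(self, s_dim, a_dim, hidden=200):
        super().__init__()
        out_dim = s_dim + 1                        # next state and reward
        self.net = nn.Sequential(
            nn.Linear(s_dim + a_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 2 * out_dim))        # mean and log-variance

    def forward(self, s, a):
        mu, log_var = self.net(torch.cat([s, a], dim=-1)).chunk(2, dim=-1)
        return mu, log_var.clamp(-10.0, 4.0)       # clamp for numerical stability

def nll_loss(model, s, a, target):
    """Gaussian negative log-likelihood (up to a constant); target = [s_next, r]."""
    mu, log_var = model(s, a)
    inv_var = torch.exp(-log_var)
    return (((target - mu) ** 2) * inv_var + log_var).mean()
```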
Theoretical Analysis of COMBO
In this section, we theoretically analyze our method and show that it optimizes a lower-bound on the expected return of the learned policy. This lower bound is close to the actual policy performance (modulo sampling error) when the policy's state-action marginal distribution is in support of the state-action marginal of the behavior policy and conservatively estimates the performance of a policy otherwise. By optimizing the policy against this lower bound, COMBO guarantees policy improvement beyond the behavior policy. Furthermore, we use these insights to discuss cases when COMBO is less conservative compared to model-free counterparts.
COMBO Optimizes a Lower Bound
We first show that training the Q-function using Eq. 2 produces a Q-function such that the expected off-policy policy improvement objective [8] computed using this learned Q-function lower-bounds its actual value. We will reuse notation for d_f and d from Sections 2 and 3. Assuming that the Q-function is tabular, the Q-function found by approximate dynamic programming in iteration k can be obtained by differentiating Eq. 2 with respect to Q^k (see App. A for details):
Q̂^{k+1}(s, a) = (B̂^π Q̂^k)(s, a) − β (ρ(s, a) − d(s, a)) / d_f(s, a).  (4)
Eq. 4 effectively applies a penalty that depends on the three distributions appearing in the COMBO critic training objective (Eq. 2), of which ρ and d_f are free variables that we choose in practice as discussed in Section 3. For a given iteration k of Eq. 4, we further define the expected penalty under ρ(s, a) as:
ν(ρ, f) := E_{s,a∼ρ(s,a)}[(ρ(s, a) − d(s, a)) / d_f(s, a)].  (5)
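The sign of this expected penalty can be checked numerically on a toy tabular example. In the sketch below (illustrative only), ρ stands in for the model rollout distribution d^µ_M̂, so that d_f = f·d + (1 − f)·ρ; under this choice ν(ρ, f) evaluates to a non-negative number for every f ∈ (0, 1], consistent with the lemma proved in Appendix A.1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10                                    # number of (s, a) pairs in a toy MDP
rho = rng.dirichlet(np.ones(n))           # model rollout distribution rho(s, a)
d = rng.dirichlet(np.ones(n))             # dataset distribution d(s, a)

def nu(rho, d, f):
    """nu(rho, f) = E_rho[(rho - d) / d_f] with d_f = f*d + (1 - f)*rho."""
    d_f = f * d + (1.0 - f) * rho
    return float(np.sum(rho * (rho - d) / d_f))

for f in [0.2, 0.5, 0.8, 1.0]:
    print(f, nu(rho, d, f))               # non-negative for every f in (0, 1]
```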
Next, we will show that the Q-function learned by COMBO lower-bounds the actual Q-function under the initial state distribution µ_0 and any policy π. We also show that the asymptotic Q-function learned by COMBO lower-bounds the actual Q-function of any policy π with high probability for a large enough β ≥ 0, which we include in Appendix A.2. Let M̄ represent the empirical MDP which uses the empirical transition model based on raw data counts. The Bellman backups over the dataset distribution d_f in Eq. 2 that we analyze are an f-interpolation of the backup operator in the empirical MDP (denoted by B^π_M̄) and the backup operator under the learned model M̂ (denoted by B^π_M̂). The empirical backup operator suffers from sampling error, but is unbiased in expectation, whereas the model backup operator induces bias but no sampling error. We assume that all of these backups enjoy concentration properties with concentration coefficient C_{r,T,δ}, dependent on the desired confidence value δ (details in Appendix A.2). This is a standard assumption in the literature [31]. Now, we state our main results below.

Proposition 4.1. For large enough β, we have E_{s∼µ_0, a∼π(·|s)}[Q̂^π(s, a)] ≤ E_{s∼µ_0, a∼π(·|s)}[Q^π(s, a)], where µ_0(s) is the initial state distribution. Furthermore, when the sampling error ε_s is small, such as in the large sample regime, or when the model bias ε_m is small, a small β is sufficient to guarantee this condition along with an appropriate choice of f.

Compared to the result for CQL (Eq. 1), our result shows that COMBO is less conservative in that it does not underestimate the value function at every state in the dataset like CQL (Remark 1) and might even overestimate these values. Instead COMBO penalizes Q-values at states generated via model rollouts from ρ(s, a). Note that in general, the required value of β may be quite large, similar to prior works, which typically utilize a large constant β, which may be in the form of a penalty on a regularizer [36,29] or as constants in theoretically optimal algorithms [23,49]. While it is challenging to argue that either COMBO or CQL attains the tightest possible lower bound on return, in our final result of this section, we discuss a sufficient condition for the COMBO lower bound to be tighter than CQL.
Proposition 4.2. Assuming previous notation, let ∆^π_COMBO := E_{s∼d_M̄(s), a∼π(a|s)}[Q̂^π(s, a)] and ∆^π_CQL := E_{s∼d_M̄(s), a∼π(a|s)}[Q̂^π_CQL(s, a)] denote the average values on the dataset under the Q-functions learned by COMBO and CQL respectively. Then ∆^π_COMBO ≥ ∆^π_CQL if:

E_{s,a∼ρ(s,a)}[π(a|s)/π_β(a|s)] − E_{s∼d_M̄(s), a∼π(a|s)}[π(a|s)/π_β(a|s)] ≤ 0.  (*)
Proposition 4.2 indicates that COMBO will be less conservative than CQL when the action probabilities under the learned policy π(a|s) and the probabilities under the behavior policy π_β(a|s) are closer together on state-action tuples drawn from ρ(s, a) (i.e., sampled from the model using the policy π(a|s)) than they are on states from the dataset and actions from the policy, d_M̄(s)π(a|s). COMBO's objective (Eq. 2) only penalizes Q-values under ρ(s, a), which, in practice, are expected to primarily consist of out-of-distribution states generated from model rollouts, and does not penalize the Q-value at states drawn from d_M̄(s). As a result, the expression (*) is likely to be negative, making COMBO less conservative than CQL.
Safe Policy Improvement Guarantees
Now that we have shown various aspects of the lower bound on the Q-function induced by COMBO, we provide policy improvement guarantees for the COMBO algorithm. Formally, Proposition 4.3 discusses safe improvement guarantees over the behavior policy, building on prior work [46,31,29].

Proposition 4.3 (ζ-safe policy improvement). Let π̂_out(a|s) be the policy obtained by COMBO. Then, if β is sufficiently large and ν(ρ^π, f) − ν(ρ^β, f) ≥ C for a positive constant C, the policy π̂_out(a|s) is a ζ-safe policy improvement over π_β in the actual MDP M, i.e., J(π̂_out, M) ≥ J(π_β, M) − ζ, with probability at least 1 − δ, where ζ is given by,
ζ = O(γf / (1 − γ)²) E_{s∼d^{π̂_out}_M̄}[√(|A| / |D(s)|) · D_CQL(π̂_out, π_β)]  (term (1))
  + O(γ(1 − f) / (1 − γ)²) D_TV(M̄, M̂)  (term (2))
  − β C / (1 − γ)  (term (3)).
The complete statement (with constants and terms that grow smaller than quadratic in the horizon) and proof for Proposition 4.3 is provided in Appendix A.4. D_CQL denotes a notion of probabilistic distance between policies [29], which we discuss further in Appendix A.4. The expression for ζ in Proposition 4.3 consists of three terms: term (1) captures the decrease in the policy performance due to limited data, and decays as the size of D increases. The second term (2) captures the suboptimality induced by the bias in the learned model. Finally, as we show in Appendix A.4, the third term (3) comes from ν(ρ^π, f) − ν(ρ^β, f), which is equivalent to the improvement in policy performance as a result of running COMBO in the empirical and model MDPs. Since the learned model is trained on the dataset D with transitions generated from the behavior policy π_β, the marginal distribution ρ^β(s, a) is expected to be closer to d(s, a) for π_β as compared to the counterpart for the learned policy, ρ^π. Thus, the assumption that ν(ρ^π, f) − ν(ρ^β, f) is positive is reasonable, and in such cases, an appropriate (large) choice of β will make term (3) large enough to counteract terms (1) and (2) that reduce policy performance. We discuss this elaborately in Appendix A.4 (Remark 3).
Further note that, in contrast to Proposition 3.6 in Kumar et al. [29], our result indicates that the sampling error (term (1)) is reduced (multiplied by a fraction f) when a near-accurate model is used to augment the data for training the Q-function; similarly, COMBO can avoid the bias of model-based methods by relying more on the model-free component. This allows COMBO to attain the best of both model-free and model-based methods, via a suitable choice of the fraction f.
To summarize, through an appropriate choice of f , Proposition 4.3 guarantees safe improvement over the behavior policy without requiring access to an oracle uncertainty estimation algorithm.
Experiments
In our experiments, we aim to answer the following questions: (1) Can COMBO generalize better than previous offline model-free and model-based approaches in a setting that requires generalization to tasks that are different from what the behavior policy solves? (2) How does COMBO compare with prior work on tasks with high-dimensional image observations? (3) How does COMBO compare to prior offline model-free and model-based methods on standard offline RL benchmarks?
To answer these questions, we compare COMBO to several prior methods. In the domains with compact state spaces, we compare with recent model-free algorithms like BEAR [28], BRAC [63], and CQL [29], as well as MOPO [67] and MOReL [26], which are two recent model-based algorithms. In addition, we also compare with an offline version of SAC [17] (denoted as SAC-off) and behavioral cloning (BC). In high-dimensional image-based domains, which we use to answer question (2), we compare to LOMPO [48], which is a latent-space offline model-based RL method that handles image inputs, latent-space MBPO (denoted LMBPO), similar to Janner et al.
[20], which uses the model to generate additional synthetic data, the fully offline version of SLAC [32] (denoted SLAC-off), which only uses a variational model for state representation purposes, and CQL from image inputs. To our knowledge, CQL, MOPO, and LOMPO are representative of state-of-the-art model-free and model-based offline RL methods, hence we choose them as comparisons to COMBO. To highlight the distinction between COMBO and a naïve combination of CQL and MBPO, we perform such a comparison in Table 8 in Appendix C. For more details of our experimental set-up, comparisons, and hyperparameters, see Appendix B.

Table 1: Average returns of halfcheetah-jump and ant-angle and average success rate of sawyer-door-close, tasks that require out-of-distribution generalization. All results are averaged over 6 random seeds. We include the mean and max return / success rate of episodes in the batch data (under Batch Mean and Batch Max, respectively) for comparison. We also include the 95%-confidence interval for COMBO.
Results on tasks that require generalization
To answer question (1), we use two environments, halfcheetah-jump and ant-angle, constructed in Yu et al. [67], which require the agent to solve a task that is different from what the behavior policy solved. In both environments, the offline dataset is collected by policies trained with the original reward functions of halfcheetah and ant, which reward the robots to run as fast as possible. The behavior policies are trained with SAC for 1M steps and we take the full replay buffer as the offline dataset. Following Yu et al. [67], we relabel rewards in the offline datasets to reward the halfcheetah to jump as high as possible and the ant to run to the top corner with a 30 degree angle as fast as possible. In the same manner, we construct a third task, sawyer-door-close, based on the environment in Yu et al. [66], Rafailov et al. [48]. In this task, we collect the offline data with SAC policies trained with a sparse reward function that only gives a reward of 1 when the door is opened by the sawyer robot and 0 otherwise. The offline dataset is similar to the "medium-expert" dataset in the D4RL benchmark since we mix equal amounts of data collected by a fully-trained SAC policy and a partially-trained SAC policy. We relabel the reward such that it is 1 when the door is closed and 0 otherwise. Therefore, in these datasets, the offline RL methods must generalize beyond behaviors in the offline data in order to learn the intended behaviors. We visualize the sawyer-door-close environment in the right image in Figure 3 in Appendix B.4.
We present the results on the three tasks in Table 1. COMBO significantly outperforms MOPO, MOReL and CQL, two representative model-based methods and one representative model-free method respectively, in the halfcheetah-jump and sawyer-door-close tasks, and achieves approximately 8%, 4% and 12% improvement over MOPO, MOReL and CQL respectively on the ant-angle task. These results validate that COMBO achieves better generalization results in practice by behaving less conservatively than prior model-free offline methods (compare to CQL, which doesn't improve much), and does so more robustly than prior model-based offline methods (compare to MOReL and MOPO). To further understand why COMBO outperforms prior model-based methods in tasks that require generalization, we argue that one of the main reasons could be that uncertainty estimation is hard in these tasks, where the agent is required to go further away from the data distribution. To test this intuition, we perform empirical evaluations to study whether uncertainty quantification with deep neural networks, especially in the setting of dynamics model learning, is challenging and could cause problems with uncertainty-based model-based offline RL methods such as MOReL [26] and MOPO [67]. In our evaluations, we consider the maximum learned variance over the ensemble (denoted as Max Var), max_{i=1,…,N} ‖Σ^i_θ(s, a)‖_F (used in MOPO). We consider two tasks, halfcheetah-jump and ant-angle. We normalize both the model error and the uncertainty estimates to be within the scale [0, 1] and perform linear regression to learn the mapping between the uncertainty estimates and the true model error. As shown in Figure 2, on both tasks Max Var is unable to accurately predict the true model error, suggesting that the uncertainty estimation used by offline model-based methods is not accurate and might be the major factor behind their poor performance. Meanwhile, COMBO circumvents the challenging uncertainty quantification problem and achieves better performance on these tasks, indicating the effectiveness and robustness of the method.
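For reference, the Max Var statistic can be computed from an ensemble of Gaussian dynamics models as in the brief sketch below (illustrative names; with diagonal covariances, the Frobenius norm of Σ reduces to the Euclidean norm of the variance vector).

```python
import torch

def max_var_uncertainty(ensemble, s, a):
    """MOPO-style heuristic: max over ensemble members of ||Sigma_i(s, a)||_F."""
    norms = []
    for member in ensemble:               # e.g. GaussianDynamics instances above
        _, log_var = member(s, a)
        var = torch.exp(log_var)          # diagonal covariance entries
        norms.append(var.norm(dim=-1))    # ||diag(var)||_F = Euclidean norm of var
    return torch.stack(norms).max(dim=0).values
```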
Results on image-based tasks
To answer question (2), we evaluate COMBO on two image-based environments: the standard walker (walker-walk) task from the DeepMind Control suite [61] and a visual door opening environment with a Sawyer robotic arm (sawyer-door) as used in Section 5.1.
Results on the D4RL tasks
Finally, to answer question (3), we evaluate COMBO on the OpenAI Gym [6] domains in the D4RL benchmark [12], which contains three environments (halfcheetah, hopper, and walker2d) and four dataset types (random, medium, medium-replay, and medium-expert). We include the results in Table 3. The numbers for BC, SAC-off, BEAR, BRAC-P and BRAC-v are taken from the D4RL paper, while the results for MOPO, MOReL and CQL are based on their respective papers [67,26,29]. COMBO achieves the best performance in 9 out of 12 settings and comparable results in 1 of the remaining 3 settings (hopper medium-replay). As noted by Yu et al. [67] and Rafailov et al. [48], model-based offline methods are generally more performant on datasets that are collected by a wide range of policies and have diverse state-action distributions (random, medium-replay datasets), while model-free approaches do better on datasets with narrow distributions (medium, medium-expert datasets). However, in these results, COMBO generally performs well across dataset types compared to existing model-free and model-based approaches, suggesting that COMBO is robust to different dataset types.
Related Work
Offline RL [10,50,30,34].

Model-based offline RL. Model-based offline RL methods [11,9,24,26,67,39,3,60,48,33,68] provide an alternative approach to policy learning that involves learning a dynamics model using techniques from supervised learning and generative modeling. Such methods, however, rely either on uncertainty quantification of the learned dynamics model, which can be difficult for deep network models [44], or on directly constraining the policy towards the behavioral policy, similar to model-free algorithms [39]. In contrast, COMBO conservatively estimates the value function by penalizing it in out-of-support states generated through model rollouts. This allows COMBO to retain all benefits of model-based algorithms, such as broad generalization, without the constraints of explicit policy regularization or uncertainty quantification.
Conclusion
In this paper, we present conservative offline model-based policy optimization (COMBO), a model-based offline RL algorithm that penalizes the Q-values evaluated on out-of-support state-action pairs. In particular, COMBO removes the need for the uncertainty quantification widely used in previous model-based offline RL works [26,67], which can be challenging and unreliable with deep neural networks [44]. Theoretically, we show that COMBO achieves less conservative Q-values compared to prior model-free offline RL methods [29] and guarantees a safe policy improvement. In our empirical study, COMBO achieves the best generalization performance on 3 tasks that require adaptation to unseen behaviors. Moreover, COMBO is able to scale to vision-based tasks and outperforms or obtains comparable results on vision-based locomotion and robotic manipulation tasks. Finally, on the standard D4RL benchmark, COMBO generally performs well across dataset types compared to prior methods. Despite the advantages of COMBO, a few challenges remain, such as the lack of an offline hyperparameter selection scheme that can yield a uniform hyperparameter across different datasets and an automatically selected f conditioned on the model error. We leave these for future work.
[12] Justin Fu, Aviral Kumar, Ofir Nachum, George Tucker, and Sergey Levine. D4rl: Datasets for deep data-driven reinforcement learning, 2020.
[13] Scott Fujimoto and Shixiang Shane Gu. A minimalist approach to offline reinforcement learning. arXiv preprint arXiv:2106.06860, 2021.
[14] Scott Fujimoto, David Meger, and Doina Precup. Off-policy deep reinforcement learning without exploration. arXiv preprint arXiv:1812.02900, 2018.
[15] Scott Fujimoto, Herke Van Hoof, and David Meger. Addressing function approximation error in actor-critic methods. arXiv preprint arXiv:1802.09477, 2018.
A Proofs from Section 4
In this section, we provide proofs for the theoretical results in Section 4. Before the proofs, we note that all statements are proven in the case of a finite state space (i.e., |S| < ∞) and a finite action space (i.e., |A| < ∞). We define some commonly appearing notation symbols used in the proofs:
• P_M and r_M (or P and r with no subscript, for notational simplicity) denote the dynamics and reward function of the actual MDP M.
• P_M̄ and r_M̄ denote the dynamics and reward of the empirical MDP M̄ generated from the transitions in the dataset.
• P_M̂ and r_M̂ denote the dynamics and reward of the MDP M̂ induced by the learned model.

We also assume that whenever the cardinality of a particular state or state-action pair in the offline dataset D, denoted by |D(s, a)|, appears in the denominator, it is non-zero. For any non-existent (s, a) ∉ D, we can simply set |D(s, a)| to be a small value < 1, which prevents any bound from producing trivially ∞ values.
A.1 A Useful Lemma and Its Proof
Before proving our main results, we first show that the penalty term in Equation 4 is positive in expectation. Such a positive penalty is important to combat any overestimation that may arise as a result of using B̂^π.
Since the derivative of ν(ρ, f) with respect to f is always positive, it is an increasing function of f for a fixed ρ, and this proves part (2) of the Lemma. Using this property, we can show part (1) of the Lemma as follows:

∀f ∈ (0, 1], ν(ρ, f) ≥ ν(ρ, 0) = 0.
A.2 Proof of Proposition 4.1
Before proving this proposition, we provide a bound on the Bellman backup in the empirical MDP, B^π_M̄. To do so, we formally define the standard concentration properties of the reward and transition dynamics in the empirical MDP, M̄, that we assume so as to prove Proposition A.1. Following prior work [42,19,29], we assume:

Assumption A1. ∀ s, a ∈ M̄, the following relationships hold with probability at least 1 − δ:

|r_M̄(s, a) − r_M(s, a)| ≤ C_{r,δ} / √|D(s, a)|,    ‖P_M̄(·|s, a) − P_M(·|s, a)‖_1 ≤ C_{P,δ} / √|D(s, a)|.

Under this assumption, the error of the Bellman backup in the empirical MDP is bounded:

|B^π_M̄ Q̂^k(s, a) − B^π_M Q̂^k(s, a)| ≤ |r_M̄(s, a) − r_M(s, a)| + γ ‖P_M̄(·|s, a) − P_M(·|s, a)‖_1 max_{s′} |E_{π(a′|s′)}[Q̂^k(s′, a′)]| ≤ (C_{r,δ} + γ C_{P,δ} · 2R_max/(1 − γ)) / √|D(s, a)|.
Thus the overestimation due to sampling error in the empirical MDP M̄ is bounded by a bigger constant, C_{r,T,δ}, that can be expressed as a function of C_{r,δ} and C_{P,δ}, and depends on δ via a log(1/δ) dependency. For the purposes of proving Proposition A.1, we assume that:
∀s, a:  |B^π_M̄ Q̂^k(s, a) − B^π_M Q̂^k(s, a)| ≤ C_{r,T,δ} R_max / ((1 − γ) √|D(s, a)|).  (9)
Next, we provide a bound on the error between the Bellman backup induced by the learned dynamics model and the learned reward, B^π_M̂, and the actual Bellman backup, B^π_M. To do so, we note that:
B^π_M̂ Q̂^k(s, a) − B^π_M Q̂^k(s, a) = (r_M̂(s, a) − r_M(s, a)) + γ Σ_{s′} (P_M̂(s′|s, a) − P_M(s′|s, a)) E_{π(a′|s′)}[Q̂^k(s′, a′)]  (10)
≤ |r_M̂(s, a) − r_M(s, a)| + γ (2R_max/(1 − γ)) D(P_M, P_M̂),  (11)
where D(P_M, P_M̂) is the total-variation divergence between the learned dynamics model and the actual MDP. Now, we show that the asymptotic Q-function learned by COMBO lower-bounds the actual Q-function of any policy π with high probability for a large enough β ≥ 0. We will use Equations 9 and 11 to prove such a result.

Proposition A.1 (Asymptotic lower bound). Let P^π denote the Hadamard product of the dynamics P and a given policy π in the actual MDP, and let S^π := (I − γP^π)^{−1}. Let D denote the total-variation divergence between two probability distributions. For any π(a|s), the Q-function obtained by recursively applying Equation 4, with B̂^π = f B^π_M̄ + (1 − f) B^π_M̂, with probability at least 1 − δ results in Q̂^π that satisfies:
∀ s, a:   Q̂^π(s, a) ≤ Q^π(s, a) − β [S^π ((ρ − d)/d_f)](s, a) + f [S^π (C_{r,T,δ} R_max / ((1 − γ) √|D|))](s, a) + (1 − f) [S^π (|r_M − r_M̂| + (2γR_max/(1 − γ)) D(P_M, P_M̂))](s, a).
Proof. We first note that the Bellman backup B̂^π induces the following Q-function iterates as per Equation 4:
Q̂^{k+1}(s, a) = (B̂^π Q̂^k)(s, a) − β (ρ(s, a) − d(s, a)) / d_f(s, a)
= f (B^π_M̄ Q̂^k)(s, a) + (1 − f) (B^π_M̂ Q̂^k)(s, a) − β (ρ(s, a) − d(s, a)) / d_f(s, a)
= (B^π_M Q̂^k)(s, a) − β (ρ(s, a) − d(s, a)) / d_f(s, a) + (1 − f) (B^π_M̂ Q̂^k − B^π_M Q̂^k)(s, a) + f (B^π_M̄ Q̂^k − B^π_M Q̂^k)(s, a).

Hence, ∀ s, a:

Q̂^{k+1} ≤ B^π_M Q̂^k − β (ρ − d)/d_f + (1 − f) ( |r_M − r_M̂| + (2γR_max/(1 − γ)) D(P_M, P_M̂) ) + f C_{r,T,δ} R_max / ((1 − γ) √|D|).
Since the RHS upper bounds the Q-function pointwise for each (s, a), the fixed point of the Bellman iteration process will be pointwise smaller than the fixed point of the Q-function found by solving for the RHS via equality. Thus, we get that
Q̂^π(s, a) ≤ [S^π r_M](s, a) − β [S^π ((ρ − d)/d_f)](s, a) + f [S^π (C_{r,T,δ} R_max / ((1 − γ) √|D|))](s, a) + (1 − f) [S^π (|r_M − r_M̂| + (2γR_max/(1 − γ)) D(P_M, P_M̂))](s, a),

where the first term satisfies [S^π r_M](s, a) = Q^π(s, a),
which completes the proof of this proposition.
Next, we use the result and proof technique from Proposition A.1 to prove Corollary 4.1: in expectation under the initial state distribution, the expected Q-value is indeed a lower bound.

Proof. To prove this corollary, we note a slightly different variant of Proposition A.1. To observe this, we deviate from the proof of Proposition A.1 slightly and aim to express the inequality using B^π_M̂, the Bellman operator defined by the learned model and the learned reward. Denoting (I − γP^π_M̂)^{−1} as S^π_M̂, doing this will intuitively allow us to obtain β [(µ(s)π(a|s))^T S^π_M̂ ((ρ − d)/d_f)](s, a) as the conservative penalty, which can be controlled by choosing β appropriately so as to nullify the potential overestimation caused by the other terms. Formally,
Q̂^{k+1}(s, a) = (B̂^π Q̂^k)(s, a) − β (ρ(s, a) − d(s, a)) / d_f(s, a)
= (B^π_M̂ Q̂^k)(s, a) − β (ρ(s, a) − d(s, a)) / d_f(s, a) + f ((B^π_M̄ − B^π_M̂) Q̂^k)(s, a),

where we denote the last term by ∆(s, a) := f ((B^π_M̄ − B^π_M̂) Q̂^k)(s, a).
By controlling ∆(s, a) using the pointwise triangle inequality:
∀ s, a:   |(B^π_M̄ Q̂^k)(s, a) − (B^π_M̂ Q̂^k)(s, a)| ≤ |(B^π_M̄ Q̂^k)(s, a) − (B^π_M Q̂^k)(s, a)| + |(B^π_M Q̂^k)(s, a) − (B^π_M̂ Q̂^k)(s, a)|,   (12)
and then iterating the backup B^π_M̂ to its fixed point and finally noting that ρ(s, a) = [(µ · π)^T S^π_M̂](s, a), we obtain:
E_{µ,π}[Q̂^π(s, a)] ≤ E_{µ,π}[Q^π_M̂(s, a)] − β E_{ρ(s,a)}[ (ρ(s, a) − d(s, a)) / d_f(s, a) ] + terms independent of β.   (13)
The "terms independent of β" correspond to the additional positive error terms obtained by iterating |B^π_M̄ Q̂^k − B^π_M Q̂^k| and |B^π_M Q̂^k − B^π_M̂ Q̂^k|, which can be bounded similarly to the proof of Proposition A.1 above. Now, by replacing the model Q-function E_{µ,π}[Q^π_M̂(s, a)] with the actual Q-function E_{µ,π}[Q^π(s, a)] and adding an error term corresponding to the model error to the bound, we obtain:
E_{µ,π}[Q̂^π(s, a)] ≤ E_{µ,π}[Q^π(s, a)] + terms independent of β − β E_{ρ(s,a)}[ (ρ(s, a) − d(s, a)) / d_f(s, a) ],   (14)

where the final expectation equals ν(ρ, f) > 0. Hence, by choosing β large enough, we obtain the desired lower-bound guarantee.

A.3 Proof of Proposition 4.2

In this section, we provide a proof for Proposition 4.2 and show that COMBO can be less conservative than CQL in terms of the estimated value. To recall, let ∆^π_COMBO := E_{s,a∼d_M̄(s)π(a|s)}[Q̂^π(s, a)], and let ∆^π_CQL := E_{s,a∼d_M̄(s)π(a|s)}[Q̂^π_CQL(s, a)]. From Kumar et al.
[29], we obtain that Q̂^π_CQL(s, a) := Q^π(s, a) − β (π(a|s) − π_β(a|s)) / π_β(a|s). We shall derive the condition for the real-data fraction f = 1 for COMBO, thus making sure that d_f(s) = d^{π_β}(s). To derive the condition when ∆^π_COMBO ≥ ∆^π_CQL, we note the following simplifications:
∆^π_COMBO ≥ ∆^π_CQL   (15)
⟹  Σ_{s,a} d_M̄(s) π(a|s) Q̂^π(s, a) ≥ Σ_{s,a} d_M̄(s) π(a|s) Q̂^π_CQL(s, a)   (16)
⟹  β Σ_{s,a} d_M̄(s) π(a|s) (ρ(s, a) − d^{π_β}(s) π_β(a|s)) / (d^{π_β}(s) π_β(a|s)) ≤ β Σ_{s,a} d_M̄(s) π(a|s) (π(a|s) − π_β(a|s)) / π_β(a|s).   (17)
Now, in the expression on the left-hand side, we add and subtract d^{π_β}(s) π(a|s) in the numerator inside the parentheses:
Σ_{s,a} d_M̄(s, a) (ρ(s, a) − d^{π_β}(s) π_β(a|s)) / (d^{π_β}(s) π_β(a|s))   (18)
= Σ_{s,a} d_M̄(s, a) (ρ(s, a) − d^{π_β}(s) π(a|s) + d^{π_β}(s) π(a|s) − d^{π_β}(s) π_β(a|s)) / (d^{π_β}(s) π_β(a|s))   (19)
= Σ_{s,a} d_M̄(s, a) (π(a|s) − π_β(a|s)) / π_β(a|s)   [term (1)]
  + Σ_{s,a} d_M̄(s, a) · ((ρ(s) − d^{π_β}(s)) / d^{π_β}(s)) · (π(a|s) / π_β(a|s)).   (20)
The term marked (1) is identical to the CQL term that appears on the right-hand side of Equation 17. Thus, the inequality in Equation 17 is satisfied when the second term above is negative. To show this, first note that d^{π_β}(s) = d_M̄(s), which results in a cancellation. Finally, re-arranging the second term into expectations gives us the desired result. An analogous condition can be derived when f ≠ 1, but we omit that derivation as the terms that appear in the final inequality are hard to interpret.
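The algebraic split in Equations 18-20 relies on ρ(s, a) factorizing as ρ(s)π(a|s). The following numerical check (our illustration, not from the paper) confirms the identity for random distributions satisfying that factorization.

import numpy as np

rng = np.random.default_rng(2)
nS, nA = 4, 3
dM = rng.random((nS, nA)); dM /= dM.sum()                       # d_Mbar(s, a)
pib = rng.random((nS, nA)); pib /= pib.sum(1, keepdims=True)    # pi_beta(a|s)
pi = rng.random((nS, nA)); pi /= pi.sum(1, keepdims=True)       # pi(a|s)
dsb = rng.random(nS); dsb /= dsb.sum()                          # d^{pi_beta}(s)
rho_s = rng.random(nS); rho_s /= rho_s.sum()                    # rho(s)
rho = rho_s[:, None] * pi                                       # rho(s, a) = rho(s) pi(a|s)

denom = dsb[:, None] * pib                                      # d^{pi_beta}(s) pi_beta(a|s)
lhs = np.sum(dM * (rho - denom) / denom)                        # Eq. (18)
term1 = np.sum(dM * (pi - pib) / pib)                           # CQL-like term (1)
term2 = np.sum(dM * ((rho_s - dsb) / dsb)[:, None] * pi / pib)  # second term in Eq. (20)
assert np.isclose(lhs, term1 + term2)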
A.4 Proof of Proposition 4.3
To prove the policy improvement result in Proposition 4.3, we first observe that using Equation 4 for Bellman backups amounts to finding a policy that maximizes the return of the policy in a modified "f-interpolant" MDP, which admits the Bellman backup B̂^π and is induced by a linear interpolation of backups in the empirical MDP M̄ and the MDP induced by the dynamics model M̂. The return of a policy π in this effective f-interpolant MDP is denoted by J(M̄, M̂, f, π). Alongside this, the return is penalized by the conservative penalty, where ρ^π denotes the marginal state-action distribution of policy π in the learned model M̂:

Ĵ(f, π) = J(M̄, M̂, f, π) − β ν(ρ^π, f)/(1 − γ).   (21)
We will require bounds on the return of a policy π in this f-interpolant MDP, J(M, M, f, π), which we first prove separately as Lemma A.2 below and then move to the proof of Proposition 4.3.
For any policy π, the return in the f-interpolant MDP lies in the interval

[J(π, M) − α, J(π, M) + α],

where α is given by:

α = (2γ(1 − f)/(1 − γ)²) R_max D(P_{M₂}, P_M) + (γf/(1 − γ)) E_{s,a∼d^π_M π}[ |(P^π_M − P^π_{M₁}) Q^π_M| ] + (f/(1 − γ)) E_{s,a∼d^π_M π}[ |r_{M₁}(s, a) − r_M(s, a)| ] + ((1 − f)/(1 − γ)) E_{s,a∼d^π_M π}[ |r_{M₂}(s, a) − r_M(s, a)| ].   (22)
Proof. To prove this lemma, we note two general inequalities. First, note that for a fixed transition dynamics, say P , the return decomposes linearly in the components of the reward as the expected return is linear in the reward function:
J(P, r M f ) = J(P, f r M1 + (1 − f )r M2 ) = f J(P, r M1 ) + (1 − f )J(P, r M2 ).
As a result, we can bound J(P, r_{M_f}) using J(P, r) for a new reward function r of the auxiliary MDP M as follows:

J(P, r_{M_f}) = J(P, f r_{M₁} + (1 − f) r_{M₂}) = J(P, r + f (r_{M₁} − r) + (1 − f)(r_{M₂} − r))
= J(P, r) + f J(P, r_{M₁} − r) + (1 − f) J(P, r_{M₂} − r)
= J(P, r) + (f/(1 − γ)) E_{s,a∼d^π_M(s)π(a|s)}[r_{M₁}(s, a) − r(s, a)] + ((1 − f)/(1 − γ)) E_{s,a∼d^π_M(s)π(a|s)}[r_{M₂}(s, a) − r(s, a)].
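The reward-linearity step above is easy to verify numerically; the following check (our illustration, not the authors' code) evaluates J(P, r) = µᵀ(I − γP)⁻¹ r for a random fixed transition matrix.

import numpy as np

rng = np.random.default_rng(1)
n, gamma, f = 5, 0.9, 0.3
P = rng.random((n, n)); P /= P.sum(axis=1, keepdims=True)  # fixed P^pi
mu = np.full(n, 1.0 / n)                                   # initial state distribution
r1, r2 = rng.random(n), rng.random(n)                      # two reward functions

def J(P, r):
    # Discounted return: J(P, r) = mu^T (I - gamma P)^{-1} r
    return mu @ np.linalg.solve(np.eye(n) - gamma * P, r)

lhs = J(P, f * r1 + (1 - f) * r2)
rhs = f * J(P, r1) + (1 - f) * J(P, r2)
assert np.isclose(lhs, rhs)                                # linearity in the reward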
Second, note that for a given reward function, r, but a linear combination of dynamics, the following bound holds:
J(P_{M_f}, r) = J(f P_{M₁} + (1 − f) P_{M₂}, r)
= J(P_M + f (P_{M₁} − P_M) + (1 − f)(P_{M₂} − P_M), r)
= J(P_M, r) − (γ(1 − f)/(1 − γ)) E_{s,a∼d^π_M(s)π(a|s)}[ (P^π_{M₂} − P^π_M) Q^π_M ] − (γf/(1 − γ)) E_{s,a∼d^π_M(s)π(a|s)}[ (P^π_M − P^π_{M₁}) Q^π_M ]
∈ J(P_M, r) ± [ (γf/(1 − γ)) E_{s,a∼d^π_M(s)π(a|s)}[ |(P^π_M − P^π_{M₁}) Q^π_M| ] + (2γ(1 − f) R_max/(1 − γ)²) D(P_{M₂}, P_M) ].
To observe the third equality, we utilize the result on the difference between returns of a policy π on two different MDPs, P M1 and P M f from Agarwal et al. [1] (Chapter 2, Lemma 2.2, Simulation Lemma), and additionally incorporate the auxiliary MDP M in the expression via addition and subtraction in the previous (second) step. In the fourth step, we finally bound one term that corresponds to the learned model via the total-variation divergence D(P M2 , P M ) and the other term corresponding to the empirical MDP M is left in its expectation form to be bounded later.
Using the above bounds on return for reward-mixtures and dynamics-mixtures, proving this lemma is straightforward:
J(M₁, M₂, f, π) := J(P_{M_f}, f r_{M₁} + (1 − f) r_{M₂}) = J(f P_{M₁} + (1 − f) P_{M₂}, r_{M_f})
∈ J(P_{M_f}, r_M) ± [ (f/(1 − γ)) E_{s,a∼d^π_M π}[ |r_{M₁}(s, a) − r_M(s, a)| ] + ((1 − f)/(1 − γ)) E_{s,a∼d^π_M π}[ |r_{M₂}(s, a) − r_M(s, a)| ] ] =: J(P_{M_f}, r_M) ± ∆_R,
where the second step holds via linear decomposition of the return of π in M f with respect to the reward interpolation, and bounding the terms that appear in the reward difference. For convenience, we refer to these offset terms due to the reward as ∆ R . For the final part of this proof, we bound J(P M f , r M ) in terms of the return on the actual MDP, J(P M , r M ), using the inequality proved above that provides intervals for mixture dynamics but a fixed reward function. Thus, the overall bound is given by J(π, M f ) ∈ [J(π, M) − α, J(π, M) + α], where α is given by:
α = (2γ(1 − f)/(1 − γ)²) R_max D(P_{M₂}, P_M) + (γf/(1 − γ)) E_{d^π_M π}[ |(P^π_M − P^π_{M₁}) Q^π_M| ] + ∆_R.   (23)
This concludes the proof of this lemma.
Finally, we prove Theorem 4.3, which shows how policy optimization with respect to Ĵ(f, π) affects the performance in the actual MDP, by using Equation 21 and building on the analysis of pure model-free algorithms from Kumar et al. [29]. We restate a more complete statement of the theorem below and present the constants at the end of the proof.
ζ = O(γf/(1 − γ)²) E_{s∼d^{π_out}_M̄}[ √(|A|/|D(s)|) (D_CQL(π_out, π_β) + 1) ] + O(γ(1 − f)/(1 − γ)²) D_TV(P_M, P_M̂) − β C/(1 − γ).
Proof. We first note that since policy improvement is not being performed in the same MDP M as the f-interpolant MDP M_f, we need to upper and lower bound the amount of improvement occurring in the actual MDP due to the f-interpolant MDP. As a result, our first step is to relate J(π, M) and J(π, M_f) := J(M̄, M̂, f, π) for any given policy π.
Step 1: Bounding the return in the actual MDP due to optimization in the f-interpolant MDP. By directly applying Lemma A.2 stated and proved previously, we obtain the following upper and lower-bounds on the return of a policy π:
J(M̄, M̂, f, π) ∈ [J(π, M) − α, J(π, M) + α],
where α is shown in Equation 22. As a result, we just need to bound the terms appearing in the expression for α to obtain a bound on the return differences. We first note that the terms in the expression for α are of two types: (1) terms that depend only on the reward function differences (captured in ∆_R in Equation 23), and (2) terms that depend on the dynamics (the other two terms in Equation 23).
To bound ∆ R , we simply appeal to concentration inequalities on reward (Assumption A1), and bound ∆ R as:
∆_R := (f/(1 − γ)) E_{s,a∼d^π_M π}[ |r_{M₁}(s, a) − r_M(s, a)| ] + ((1 − f)/(1 − γ)) E_{s,a∼d^π_M π}[ |r_{M₂}(s, a) − r_M(s, a)| ]
≤ (C_{r,δ}/(1 − γ)) E_{s,a∼d^π_M π}[ 1/√|D(s, a)| ] + (1/(1 − γ)) ‖R_M − R_M̂‖ =: ∆^u_R.
Note that both of these terms are of the order of O(1/(1 − γ)) and hence they don't figure in the informal bound in Theorem 4.3 in the main text, as these are dominated by terms that grow quadratically with the horizon. To bound the remaining terms in the expression for α, we utilize a result directly from Kumar et al. [29] for the empirical MDP, M, which holds for any policy π(a|s), as shown below.
(γ/(1 − γ)) E_{s,a∼d^π_M(s)π(a|s)}[ |(P^π_M − P^π_{M₁}) Q^π_M| ] ≤ (2γ R_max C_{P,δ}/(1 − γ)²) E_{s∼d^π_M(s)}[ √(|A|/|D(s)|) (D_CQL(π, π_β)(s) + 1) ].
Step 2: Incorporate policy improvement in the f-interpolant MDP. Now we incorporate the improvement of the policy π_out over the policy π_β on a weighted mixture of M̄ and M̂. In what follows, we derive a lower bound on this improvement by using the fact that the policy π_out is obtained by maximizing Ĵ(f, π) from Equation 21. As a direct consequence of Equation 21, we note that
Ĵ(f, π_out) = J(M̄, M̂, f, π_out) − β ν(ρ^{π_out}, f)/(1 − γ) ≥ Ĵ(f, π_β) = J(M̄, M̂, f, π_β) − β ν(ρ^{π_β}, f)/(1 − γ).   (24)
B Experimental details
In this section, we include all details of our empirical evaluations of COMBO.
B.1 Practical algorithm implementation details
Model training. In the setting where the observation space is low-dimensional, as mentioned in Section 3, we represent the model as a probabilistic neural network that outputs a Gaussian distribution over the next state and reward given the current state and action:
T̂_θ(s_{t+1}, r_t | s_t, a_t) = N(µ_θ(s_t, a_t), Σ_θ(s_t, a_t)).
We train an ensemble of 7 such dynamics models following [20] and pick the best 5 models based on the validation prediction error on a held-out set that contains 1000 transitions in the offline dataset D.
During model rollouts, we randomly pick one dynamics model from the best 5 models. Each model in the ensemble is represented as a 4-layer feedforward neural network with 200 hidden units. For the generalization experiments in Section 5.1, we additionally use a two-head architecture to output the mean and variance after the last hidden layer following [67].
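As an illustration, a single member of such an ensemble could look like the following sketch (our own, not the authors' code; the activation choice and the two-head Gaussian output are assumptions, the paper only specifies a 4-layer network with 200 hidden units).

import torch
import torch.nn as nn

class GaussianDynamics(nn.Module):
    """One ensemble member: MLP outputting a Gaussian over (next state, reward)."""
    def __init__(self, state_dim, action_dim, hidden=200):
        super().__init__()
        out_dim = state_dim + 1                      # next state and scalar reward
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
        )
        self.mean = nn.Linear(hidden, out_dim)       # two-head output: mean ...
        self.log_var = nn.Linear(hidden, out_dim)    # ... and log-variance

    def forward(self, s, a):
        h = self.net(torch.cat([s, a], dim=-1))
        mu = self.mean(h)
        log_var = self.log_var(h).clamp(-10.0, 2.0)  # keep variances numerically sane
        return torch.distributions.Normal(mu, torch.exp(0.5 * log_var))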
In the image-based setting, we follow Rafailov et al. [48] and use a variational model with the following components:
Image encoder:            h_t = E_θ(o_t)
Inference model:          s_t ∼ q_θ(s_t | h_t, s_{t−1}, a_{t−1})
Latent transition model:  s_t ∼ T̂_θ(s_t | s_{t−1}, a_{t−1})
Reward predictor:         r_t ∼ p_θ(r_t | s_t)
Image decoder:            o_t ∼ D_θ(o_t | s_t).   (27)
We train the model using the evidence lower bound:
max_θ Σ_{τ=0}^{T−1} ( E_{q_θ}[ log D_θ(o_{τ+1} | s_{τ+1}) ] − E_{q_θ}[ D_KL( q_θ(s_{τ+1} | o_{τ+1}, s_τ, a_τ) ‖ T̂_{θ_τ}(s_{τ+1} | s_τ, a_τ) ) ] ).
At each step τ we sample a latent forward model T̂_{θ_τ} from a fixed set of K models {T̂_{θ_1}, . . . , T̂_{θ_K}}. For the encoder E_θ we use a convolutional neural network with kernel size 4 and stride 2. For the Walker environment we use 4 layers, while the Door Opening task has 5 layers. The decoder D_θ is a transposed convolutional network with stride 2 and kernel sizes [5, 5, 6, 6] and [5, 5, 5, 6, 6], respectively. The inference network has a two-level structure similar to Hafner et al.
[18] with a deterministic path using a GRU cell with 256 units and a stochastic path implemented as a conditional diagonal Gaussian with 128 units. We only train an ensemble of stochastic forward models, which are also implemented as conditional diagonal Gaussians.
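To make the rollout procedure concrete, here is a hypothetical sketch of generating a short latent rollout with the components of Eq. 27, sampling one ensemble member per step as described above; all helper names (policy, transitions, reward) are ours, not the paper's.

def latent_rollout(s, policy, transitions, reward, horizon, rng):
    """Roll the latent dynamics forward for `horizon` steps from latent state s."""
    trajectory = []
    for _ in range(horizon):
        a = policy(s)
        T = transitions[rng.integers(len(transitions))]  # sample one ensemble member
        s_next = T(s, a).sample()                        # s' ~ T_theta(s' | s, a)
        r = reward(s).sample()                           # r ~ p_theta(r | s)
        trajectory.append((s, a, r, s_next))
        s = s_next
    return trajectory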
Policy optimization. We use a batch size of 256 transitions for critic and policy learning. We set f = 0.5, which means we sample 50% of the batch of transitions from D and the other 50% from D_model. The equal split between the offline data and the model rollouts strikes a balance between conservatism and generalization, as shown in our experimental results in Section 5. We represent the Q-networks and the policy as 3-layer feedforward neural networks with 256 hidden units.
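A minimal sketch of the 50/50 batch construction follows (the buffer interfaces are assumed names; this is our illustration, not the authors' code).

import numpy as np

def sample_mixed_batch(offline_buffer, model_buffer, batch_size=256, f=0.5):
    # Draw the f-interpolant training batch d_f: a fraction f from the offline
    # dataset D and the remainder from the model-rollout buffer D_model.
    n_real = int(f * batch_size)
    real = offline_buffer.sample(n_real)
    model = model_buffer.sample(batch_size - n_real)
    return {k: np.concatenate([real[k], model[k]]) for k in real}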
For the choice of ρ(s, a) in Equation 2, we can obtain Q-values that lower-bound the true value of the learned policy π by setting ρ(s, a) = d^π_M̂(s)π(a|s). However, as discussed in [29], computing π by alternating full off-policy evaluation for the policy π̂^k at each iteration k with one step of policy improvement is computationally expensive. Instead, following [29], we pick a particular distribution ψ(a|s) that approximates the policy maximizing the Q-function at the current iteration, and set ρ(s, a) = d^π_M̂(s)ψ(a|s). We formulate the new objective as follows:
Q̂^{k+1} ← argmin_Q β ( E_{s∼d^π_M̂(s), a∼ψ(a|s)}[Q(s, a)] − E_{s,a∼D}[Q(s, a)] ) + (1/2) E_{s,a,s′∼d_f}[ (Q(s, a) − B̂^π Q̂^k(s, a))² ] + R(ψ),   (28)
where R(ψ) is a regularizer on ψ. In practice, we pick R(ψ) to be −D_KL(ψ(a|s) ‖ Unif(a)), and under such regularization the first term in Equation 28 corresponds to computing the softmax of the Q-values at any state s, as follows:
Q̂^{k+1} ← argmin_Q max_ψ β ( E_{s∼d^π_M̂(s)}[ log Σ_a exp(Q(s, a)) ] − E_{s,a∼D}[Q(s, a)] ) + (1/2) E_{s,a,s′∼d_f}[ (Q(s, a) − B̂^π Q̂^k(s, a))² ].   (29)
We estimate the log-sum-exp term in Equation 29 by sampling 10 actions at every state s in the batch from a uniform policy Unif(a) and the current learned policy π(a|s) with importance sampling following [29].
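For illustration, the conservative term of Eq. 29 with the importance-sampled log-sum-exp could be sketched as follows. This is our paraphrase, not the authors' code: the interfaces q_net(s, a) (returning one Q-value per state-action pair) and policy.sample_n are hypothetical, and the uniform density correction assumes actions in [-1, 1]^act_dim.

import torch

def conservative_term(q_net, model_states, data_s, data_a, policy, act_dim, n=10):
    B = model_states.shape[0]
    rand_a = torch.rand(B, n, act_dim) * 2 - 1           # a ~ Unif[-1, 1]^act_dim
    pi_a, pi_logp = policy.sample_n(model_states, n)     # a ~ pi(a|s), with log-probs
    s_rep = model_states.unsqueeze(1).expand(-1, n, -1)
    # Importance-weighted values: subtract the log-density of each proposal.
    q_rand = q_net(s_rep, rand_a) - torch.log(torch.tensor(0.5 ** act_dim))
    q_pi = q_net(s_rep, pi_a) - pi_logp
    push_down = torch.logsumexp(torch.cat([q_rand, q_pi], dim=1), dim=1).mean()
    push_up = q_net(data_s, data_a).mean()               # maximize Q on dataset tuples
    return push_down - push_up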
B.2 Hyperparameter Selection
In this section, we discuss the hyperparameters that we use for COMBO. In the D4RL and generalization experiments, our method is built upon the implementation of MOPO provided at: https://github.com/tianheyu927/mopo. The hyperparameters used in COMBO that relate to the backbone RL algorithm SAC, such as twin Q-functions and the number of gradient steps, follow those used in MOPO, with the exception of smaller critic and policy learning rates, which we discuss below. In the image-based domains, COMBO is built upon LOMPO without any changes to the parameters used there. For the evaluation of COMBO, we follow the evaluation protocol in D4RL [12] and a variety of prior offline RL works [29, 67, 26], and report the normalized score of the smoothed undiscounted average return over 3 random seeds for all environments except sawyer-door-close and sawyer-door, where we report the average success rate over 3 random seeds. As mentioned in Section 3, we use the regularization objective in Eq. 2 to select each hyperparameter from a range of pre-specified candidates in a fully offline manner, unlike prior model-based offline RL schemes such as [67] and [26] that use similar hyperparameters to COMBO but tune them manually based on policy performance obtained via online rollouts.
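Conceptually, the selection rule can be sketched as the following loop (the helper names train_combo and regularizer_value are hypothetical, for illustration only).

def select_hyperparameters(candidates, train_combo, regularizer_value):
    # Pick the candidate setting with the lowest value of the regularizer
    # E_{s,a~rho}[Q(s,a)] - E_{s,a~D}[Q(s,a)] (Eq. 2), fully offline.
    best, best_score = None, float("inf")
    for params in candidates:
        agent = train_combo(**params)      # one fully offline training run
        score = regularizer_value(agent)   # no online rollouts needed
        if score < best_score:
            best, best_score = params, score
    return best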
We now list the additional hyperparameters as follows.
• Rollout length h. We perform short-horizon model rollouts in COMBO similar to Yu et al. [67] and Rafailov et al. [48]; a sketch of generating such rollouts follows this list. For the D4RL experiments and generalization experiments, we followed the defaults used in MOPO and used h = 1 for walker2d and sawyer-door-close, h = 5 for hopper, halfcheetah and halfcheetah-jump, and h = 25 for ant-angle. In the image-based domain we used a rollout length of h = 5 for both the walker-walk and sawyer-door-open environments, following the same hyperparameters used in Rafailov et al. [48].
• Q-function and policy learning rates. On state-based domains, we apply our automatic selection rule to the set {1e−4, 3e−4} for the Q-function learning rate and the set {1e−5, 3e−5, 1e−4} for the policy learning rate. We found that 3e−4 for the Q-function learning rate (also used previously in Kumar et al. [29]) and 1e−4 for the policy learning rate (also recommended previously in Kumar et al. [29] for gym domains) work well for almost all domains; the exception is walker2d, where a smaller Q-function learning rate of 1e−4 and a correspondingly smaller policy learning rate of 1e−5 work best according to our automatic hyperparameter selection scheme. In the image-based domains, we followed the defaults from prior work [48] and used 3e−4 for both the policy and the Q-function.
• Conservative coefficient β. We use our hyperparameter selection rule to select β from the set {0.5, 1.0, 5.0}, corresponding to low, medium, and high conservatism. A larger β is desirable for narrower dataset distributions with lower coverage of the state-action space, where errors propagate through the backup, whereas a smaller β is desirable for diverse dataset distributions. On the D4RL experiments, we found that β = 0.5 works well for halfcheetah regardless of dataset quality, while on hopper and walker2d, the more "narrow" dataset distributions (medium and medium-expert) work best with the larger β = 5.0, whereas the more "diverse" dataset distributions (random and medium-replay) work best with the smaller β = 0.5, which is consistent with the intuition. On generalization experiments, β = 1.0 works best for all environments. In the image domains we use β = 0.5 for the medium-replay walker-walk task and β = 1.0 for all other domains, which again is in accordance with the impact of β on performance.
• Choice of ρ(s, a). We first decouple ρ(s, a) = ρ(s)ρ(a|s) for convenience. As discussed in Appendix B.1, we use ρ(a|s) as the soft-maximum of the Q-values, estimated with log-sum-exp. For ρ(s), we apply the automatic hyperparameter selection rule to the set {ρ(s) = d^π_M̂, ρ(s) = d_f}. We found that d^π_M̂ works better on the hopper task in D4RL, while ρ(s) = d_f works well for the rest of the environments.
• Choice of µ(a|s). For the rollout policy µ, we use our automatic selection rule on the set {Unif(a), π(a|s)}, i.e., the set that contains a random policy and the current learned policy. We found that µ(a|s) = Unif(a) works well on the hopper task in D4RL and also in the ant-angle generalization experiment. For the remaining state-based environments, we found that µ(a|s) = π(a|s) excels. In the image-based domain, we found that µ(a|s) = Unif(a) works well in the walker-walk domain and µ(a|s) = π(a|s) is better for the sawyer-door environment. We observed that µ(a|s) = Unif(a) behaves less conservatively and is suited to tasks where the dynamics model can be learned fairly precisely.
• Choice of f . For the ratio between model rollouts and offline data f , we input the set {0.5, 0.8} to our automatic hyperparameter selection rule to figure out the best f on each domain. We found that f = 0.8 works well on the medium and medium-expert in the walker2d task in D4RL. For the remaining environments, we find f = 0.5 works well.
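As referenced in the rollout-length bullet above, the h-step branched model rollouts can be sketched as follows (the interfaces for the dynamics ensemble and buffers are assumed names; rollouts branch from states sampled out of D, following MBPO-style generation).

def branched_rollouts(dynamics, rollout_policy, offline_buffer, model_buffer,
                      n_starts=50000, h=5):
    s = offline_buffer.sample(n_starts)["obs"]   # branch from dataset states
    for _ in range(h):
        a = rollout_policy(s)                    # a ~ mu(a|s)
        s_next, r = dynamics.step(s, a)          # one step of the model ensemble
        model_buffer.add(s, a, r, s_next)        # populate D_model
        s = s_next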
We also provide additional experimental results on how our automatic hyperparameter selection rule selects hyperparameters. As shown in Tables 4, 5, 6, and 7, our automatic hyperparameter selection rule is able to pick the hyperparameters β, µ(a|s), ρ(s), and f that correspond to the best policy performance based on the regularization value.

Table 4: We include our automatic hyperparameter selection rule of β on a set of representative D4RL environments. We show the policy performance (bold with the higher number) and the regularizer value (bold with the lower number). A lower regularizer value consistently corresponds to a higher policy return, suggesting the effectiveness of our automatic selection rule.
B.3 Details of generalization environments
For halfcheetah-jump and ant-angle, we follow the same environment used in MOPO.
Task            | µ(a|s)=Unif(a): performance | regularizer value | µ(a|s)=π(a|s): performance | regularizer value
hopper-medium   | 97.2 | -2035.9 | 52.6 | -14.9
walker2d-medium | 7.9  | -106.8  | 81.9 | -1991.2

Table 5: We include our automatic hyperparameter selection rule of µ(a|s) on the medium datasets in the hopper and walker2d environments from D4RL. We follow the same convention defined in Table 4 and find that our automatic selection rule can effectively select µ offline.

Table 6: We include our automatic hyperparameter selection rule of ρ(s) on the medium datasets in the hopper and walker2d environments from D4RL. We follow the same convention defined in Table 4 and find that our automatic selection rule can effectively select ρ offline.

For sawyer-door-close, we train the sawyer-door environment in https://github.com/rlworkgroup/metaworld with dense rewards for opening the door until convergence. We collect 50000 transitions, with half of the data collected by the final expert policy and half by a policy that reaches about half of the expert-level performance. We relabel the reward such that
the reward is 1 when the door is fully closed and 0 otherwise. Hence, the offline RL agent is required to learn a behavior that is different from the behavior policy in a sparse-reward setting. We provide the datasets in the following anonymous link¹.

B.4 Details of image-based environments

We visualize our image-based environments in Figure 3. We use the standard walker-walk environment from Tassa et al. [61] with 64 × 64 pixel observations and an action repeat of 2. Datasets were constructed in the same way as Fu et al. [12], with 200 trajectories each. For the sawyer-door task we use 128 × 128 pixel observations. The medium-expert dataset contains 1000 rollouts (with a rollout length of 50 steps) covering the state distribution from grasping the door handle to opening the door. The expert dataset contains 1000 trajectories sampled from a fully trained (stochastic) policy. The data was obtained from the training process of a stochastic SAC policy using the dense reward function defined in Yu et al. [66]. However, we relabel the rewards so that the agent receives a reward of 1 when the door is fully open and 0 otherwise. This aims to evaluate offline RL performance in a sparse-reward setting. All the datasets are from [48].
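A sketch of the sparse-reward relabeling described above (the success predicate door_is_at_goal is a hypothetical helper of our own; "goal" is fully closed for sawyer-door-close and fully open for sawyer-door).

def relabel_sparse(transitions, door_is_at_goal):
    # Replace the dense reward with a 0/1 success signal.
    for t in transitions:
        t["reward"] = 1.0 if door_is_at_goal(t["next_obs"]) else 0.0
    return transitions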
B.5 Computation Complexity
For the D4RL and generalization experiments, COMBO is trained on a single NVIDIA GeForce RTX 2080 Ti for one day. For the image-based experiments, we utilized a single NVIDIA GeForce RTX 2070. We trained the walker-walk tasks for a day and the sawyer-door-open tasks for about two days.
B.6 License of datasets
We acknowledge that all datasets used in this paper use the MIT license.

Table 8: Comparison between COMBO and CQL+MBPO on tasks that require out-of-distribution generalization. Results are in average returns of halfcheetah-jump and ant-angle and average success rate of sawyer-door-close. All results are averaged over 6 random seeds, ± the 95%-confidence interval.
C Comparison to the Naive Combination of CQL and MBPO
In this section, we stress the distinction between COMBO and a direct combination of two previous methods, CQL and MBPO (denoted CQL+MBPO). CQL+MBPO performs Q-value regularization using CQL while expanding the offline data with MBPO-style model rollouts. While COMBO utilizes Q-value regularization similar to CQL, the effect is very different: CQL only penalizes the Q-values of unseen actions at the states observed in the dataset, whereas COMBO penalizes Q-values on states generated by the learned model while maximizing Q-values on state-action tuples in the dataset. Additionally, COMBO utilizes MBPO-style model rollouts to augment the samples used for training Q-functions.
To empirically demonstrate the consequences of this distinction, CQL + MBPO performs quite a bit worse than COMBO on generalization experiments (Section 5.1) as shown in Table 8. The results are averaged across 6 random seeds (± denotes 95%-confidence interval of the various runs). This suggests that carefully considering the state distribution, as done in COMBO, is crucial.
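To make the distinction concrete, here is a schematic sketch (our paraphrase, with assumed interfaces) of where each method's conservative penalty is evaluated.

def cql_mbpo_penalty(q_net, data_batch, policy):
    # CQL+MBPO: push down Q only on unseen ACTIONS at DATASET states.
    return q_net(data_batch["obs"], policy(data_batch["obs"])).mean() \
        - q_net(data_batch["obs"], data_batch["act"]).mean()

def combo_penalty(q_net, model_batch, data_batch, policy):
    # COMBO: push down Q on MODEL-GENERATED states (and their actions),
    # while pushing up Q on state-action tuples from the dataset.
    return q_net(model_batch["obs"], policy(model_batch["obs"])).mean() \
        - q_net(data_batch["obs"], data_batch["act"]).mean()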
Figure 1: COMBO learns a conservative value function.
Q̂^{k+1} ← argmin_Q β ( E_{s∼D, a∼µ(·|s)}[Q(s, a)] − E_{s,a∼D}[Q(s, a)] ) + (1/2) E_{s,a,s′∼D}[ (Q(s, a) − B̂^π Q̂^k(s, a))² ].   (1)
E_{s,a∼ρ(s,a)}[Q(s, a)] − E_{s,a∼D}[Q(s, a)] (shown in Eq. 2) to pick hyperparameters in an entirely offline fashion. We select the hyperparameter setting that achieves the lowest regularization objective, which indicates that the Q-values on unseen model-predicted state-action tuples are not overestimated. Additional details about the practical implementation and the hyperparameter selection rule are provided in Appendix B.1 and Appendix B.2, respectively.
Figure 2: We visualize the fitted linear regression line between the model error and the uncertainty quantification method of maximum learned variance over the ensemble (denoted Max Var) on two tasks that test the generalization abilities of offline RL algorithms (halfcheetah-jump and ant-angle). We show that Max Var struggles to predict the true model error. Such visualizations indicate that uncertainty quantification is challenging with deep neural networks and could lead to poor performance in model-based offline RL in settings where out-of-distribution generalization is needed. In the meantime, COMBO addresses this issue by removing the burden of performing uncertainty quantification.
Corollary A.1 (Corollary 4.1 restated). For a sufficiently large β, we have a lower bound E_{s∼µ₀, a∼π(·|s)}[Q̂^π(s, a)] ≤ E_{s∼µ₀, a∼π(·|s)}[Q^π(s, a)], where µ₀(s) is the initial state distribution. Furthermore, when the sampling error ε_s is small, such as in the large-sample regime, or when the model bias ε_m is small, a small β is sufficient along with an appropriate choice of f.
Remark 1 (COMBO does not underestimate at every s ∈ D, unlike CQL). Before concluding this section, we discuss how the bound obtained by COMBO (Equation 14) is tighter than CQL's. CQL learns a Q-function such that the value of the policy under the resulting Q-function lower-bounds the true value function at each state s ∈ D individually (in the absence of sampling error), i.e., ∀s ∈ D, V̂^π_CQL(s) ≤ V^π(s), whereas the bound in COMBO is only valid in expectation of the value function over the initial state distribution, i.e., E_{s∼µ₀(s)}[V̂^π_COMBO(s)] ≤ E_{s∼µ₀(s)}[V^π(s)], and the value function at a given state may not be a lower bound. For instance, COMBO can overestimate the value of a state that is more frequent in the dataset distribution d(s, a) but not so frequent in the marginal distribution ρ(s, a) of the policy under the learned model M̂. To see this more formally, note that the expected penalty added in the effective Bellman backup performed by COMBO (Equation 4), in expectation under the dataset distribution d(s, a), denoted ν(ρ, d, f), is actually negative:

ν(ρ, d, f) := E_{s,a∼d(s,a)}[ (ρ(s, a) − d(s, a)) / (f d(s, a) + (1 − f)ρ(s, a)) ] < 0,

where the final inequality follows via a direct application of the proof of Lemma A.1. Thus, COMBO actually overestimates the values at at least some states (in the dataset), unlike CQL.
Lemma A.2 (Bound on return in f-interpolant MDP). For any two MDPs, M₁ and M₂, with the same state space, action space, and discount factor, and for a given fraction f ∈ [0, 1], define the f-interpolant MDP M_f as the MDP on the same state space, action space, and with the same discount, with dynamics P_{M_f} := f P_{M₁} + (1 − f) P_{M₂} and reward function r_{M_f} := f r_{M₁} + (1 − f) r_{M₂}. Then, given any auxiliary MDP M, the return of any policy π in M_f, J(π, M_f), also denoted by J(M₁, M₂, f, π), lies in the interval [J(π, M) − α, J(π, M) + α], where α is given in Equation 22.
Theorem 2 (Formal version of Proposition 4.3). Let π_out(a|s) be the policy obtained by COMBO. Assume ν(ρ^{π_out}, f) − ν(ρ^β, f) ≥ C for some constant C > 0. Then, the policy π_out(a|s) is a ζ-safe policy improvement over π_β in the actual MDP M, i.e., J(π_out, M) ≥ J(π_β, M) − ζ, with probability at least 1 − δ, where ζ is given by the expression in Section A.4 (cf. Equation 26).
Figure 3: Our image-based environments. The observations are 64 × 64 and 128 × 128 raw RGB images for the walker-walk and sawyer-door tasks, respectively. The sawyer-door-close environment used in Section 5.1 also uses the sawyer-door environment.
The proof for Proposition 4.1 can be found in Appendix A.2. Finally, Kumar et al. [29] also analyze how regularized value-function training can provide lower bounds on the value function at each state in the dataset (Propositions 3.1-3.2 in [29]).
Table 2: Results for vision experiments. For the Walker task each number is the normalized score proposed in [12] of the policy at the last iteration of training, averaged over 3 random seeds. For the Sawyer task, we report success rates over the last 100 evaluation runs of training. For the dataset, M refers to medium, M-R refers to medium-replay, and M-E refers to medium-expert.

To extend COMBO to the image-based setting, we follow Rafailov et al. [48] and train a recurrent variational model using the offline data, and train COMBO in the latent space of this model (see Appendix B.4 for details). For the walker task we construct 4 datasets: medium-replay (M-R), medium (M), medium-expert (M-E), and expert, similar to Fu et al. [12], each consisting of 200 trajectories. For the sawyer-door task we use only the medium-expert and the expert datasets, due to the sparse reward: the agent is rewarded only when it successfully opens the door. Both environments are visualized in Figure 3. We present results in Table 2. On the walker-walk task, COMBO performs in line with LOMPO and previous methods. On the more challenging Sawyer task, COMBO matches LOMPO and achieves a 100% success rate on the medium-expert dataset, and substantially outperforms all other methods on the narrow expert dataset, achieving an average success rate of 96.7%, when all other model-based and model-free methods fail.

Table 3: Results for D4RL datasets. Each number is the normalized score proposed in [12] of the policy at the last iteration of training, averaged over 6 random seeds. We take the results of MOPO, MOReL, and CQL from their original papers and the results of other model-free methods from [12]. We include the performance of behavior cloning (BC) for comparison. We include the 95%-confidence interval for COMBO. We bold the highest score across all methods.
Columns: MOPO | MOReL | CQL | SAC-off | BEAR | BRAC-p | BRAC-v.
Offline reinforcement learning is the task of learning policies from a static dataset of past interactions with the environment. It has found applications in domains including robotic manipulation [25, 38, 48, 54], NLP [21, 22], and healthcare [52, 62]. Similar to interactive RL, both model-free and model-based algorithms have been studied for offline RL, with explicit or implicit regularization of the learning algorithm playing a major role.

Model-free offline RL. Prior model-free offline RL algorithms have been designed to regularize the learned policy to be "close" to the behavioral policy, either implicitly via regularized variants of importance-sampling-based algorithms [47, 58, 35, 59, 41], offline actor-critic methods [53, 45, 27, 16, 64], applying uncertainty quantification to the predictions of the Q-values [2, 28, 63, 34], and learning conservative Q-values [29, 55]; or explicitly, as measured by direct state or action constraints [14, 36], KL divergence [21, 63, 69], Wasserstein distance, MMD [28], and auxiliary imitation loss [13]. Different from these works, COMBO uses both the offline dataset as well as model-generated data.
[15] Scott Fujimoto, Herke van Hoof, and David Meger. Addressing function approximation error in actor-critic methods. arXiv preprint arXiv:1802.09477, 2018.
[16] Seyed Kamyar Seyed Ghasemipour, Dale Schuurmans, and Shixiang Shane Gu. EMaQ: Expected-max Q-learning operator for simple yet effective offline and online RL. In International Conference on Machine Learning, pages 3682-3691. PMLR, 2021.
[17] Tuomas Haarnoja, Aurick Zhou, Pieter Abbeel, and Sergey Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor. arXiv preprint arXiv:1801.01290, 2018.
[18] Danijar Hafner, Timothy Lillicrap, Ian Fischer, Ruben Villegas, David Ha, Honglak Lee, and James Davidson. Learning latent dynamics for planning from pixels. In International Conference on Machine Learning, 2019.
[19] Thomas Jaksch, Ronald Ortner, and Peter Auer. Near-optimal regret bounds for reinforcement learning. Journal of Machine Learning Research, 11(4), 2010.
[20] Michael Janner, Justin Fu, Marvin Zhang, and Sergey Levine. When to trust your model: Model-based policy optimization. In Advances in Neural Information Processing Systems, pages 12498-12509, 2019.
[21] Natasha Jaques, Asma Ghandeharioun, Judy Hanwen Shen, Craig Ferguson, Agata Lapedriza, Noah Jones, Shixiang Gu, and Rosalind Picard. Way off-policy batch deep reinforcement learning of implicit human preferences in dialog. arXiv preprint arXiv:1907.00456, 2019.
[22] Natasha Jaques, Judy Hanwen Shen, Asma Ghandeharioun, Craig Ferguson, Agata Lapedriza, Noah Jones, Shixiang Shane Gu, and Rosalind Picard. Human-centric dialog training via offline reinforcement learning. arXiv preprint arXiv:2010.05848, 2020.
[23] Ying Jin, Zhuoran Yang, and Zhaoran Wang. Is pessimism provably efficient for offline RL? In International Conference on Machine Learning, pages 5084-5096. PMLR, 2021.
[24] Gregory Kahn, Adam Villaflor, Pieter Abbeel, and Sergey Levine. Composable action-conditioned predictors: Flexible off-policy learning for robot navigation. In Conference on Robot Learning, pages 806-816. PMLR, 2018.
[25] Dmitry Kalashnikov, Alex Irpan, Peter Pastor, Julian Ibarz, Alexander Herzog, Eric Jang, Deirdre Quillen, Ethan Holly, Mrinal Kalakrishnan, Vincent Vanhoucke, et al. Scalable deep reinforcement learning for vision-based robotic manipulation. In Conference on Robot Learning, pages 651-673. PMLR, 2018.
[26] Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, and Thorsten Joachims. MOReL: Model-based offline reinforcement learning. arXiv preprint arXiv:2005.05951, 2020.
[27] Ilya Kostrikov, Rob Fergus, Jonathan Tompson, and Ofir Nachum. Offline reinforcement learning with Fisher divergence critic regularization. In International Conference on Machine Learning, pages 5774-5783. PMLR, 2021.
[28] Aviral Kumar, Justin Fu, Matthew Soh, George Tucker, and Sergey Levine. Stabilizing off-policy Q-learning via bootstrapping error reduction. In Advances in Neural Information Processing Systems, pages 11761-11771, 2019.
[29] Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative Q-learning for offline reinforcement learning. arXiv preprint arXiv:2006.04779, 2020.
Table 7: We include our automatic hyperparameter selection rule of f on the medium datasets in the hopper and walker2d environments from D4RL. We follow the same convention defined in Table 4 and find that our automatic selection rule can effectively select f offline.
Batch
Mean
Batch
Max
COMBO
(Ours)
CQL+MBPO
halfcheetah-jump
-1022.6
1808.6
5392.7±575.5
4053.4±176.9
ant-angle
866.7
2311.9
2764.8±43.6
809.2±135.4
sawyer-door-close
5%
100%
100%±0.0%
62.7%±24.8%
¹The datasets of the generalization environments are available at the anonymous link: https://drive.google.com/file/d/1pn6dS5OgPQVp_ivGws-tmWdZoU7m_LvC/view?usp=sharing.
Following Step 1, we use the upper bound on J(M̄, M̂, f, π) for the policy π = π_out and a lower bound on J(M̄, M̂, f, π) for the policy π = π_β, and combine them with Equation 24. The term marked by (*) in the resulting expression can be upper bounded by the concentration properties of the dynamics, as done in Step 1 of this proof. Finally, using Equation 25, we can lower-bound the policy return difference, and plugging the bounds for terms (a), (b), and (c) into the expression for ζ, where J(π_out, M) − J(π_β, M) ≥ −ζ, we obtain Equation 26.

Remark 3 (Interpretation of Proposition 4.3). Now we will interpret the theoretical expression for ζ in Equation 26 and discuss the scenarios when it is negative. When the expression for ζ is negative, the policy π_out is an improvement over π_β in the original MDP M.

• We first discuss whether the assumption ν(ρ^{π_out}, f) − ν(ρ^β, f) ≥ C > 0 is reasonable in practice. Note that we have never used the fact that the learned model P_M̂ is close to the actual MDP P_M on the states visited by the behavior policy π_β in our analysis. We will use this fact now: in practical scenarios, ν(ρ^β, f) is expected to be smaller than ν(ρ^π, f), since ν(ρ^β, f) is directly controlled by the difference and density ratio of ρ^β(s, a) and d(s, a) (by Lemma A.1), which is expected to be small for the behavior policy π_β in cases when the behavior policy marginal in the empirical MDP, d^{π_β}_M̄(s, a), is broad. This is a direct consequence of the fact that the learned dynamics integrated with the policy under the learned model, P^{π_β}_M̂, is close to its counterpart in the empirical MDP, P^{π_β}_M̄, for π_β. Note that this is not true for any other policy besides the behavior policy, which performs several counterfactual actions in a rollout and deviates from the data. For such a learned policy π, we incur an extra error that depends on the importance ratio of policy densities, compounded over the horizon, and manifests as the D_CQL term (similar to Equation 25, or Lemma D.4.1 in Kumar et al. [29]). Thus, in practice, we argue that we are interested in situations where the assumption ν(ρ^{π_out}, f) − ν(ρ^β, f) ≥ C > 0 holds, in which case, by increasing β, we can make the expression for ζ in Equation 26 negative, allowing for policy improvement.

• In addition, note that when f is close to 1, the bound reverts to a standard model-free policy improvement bound, and when f is close to 0, the bound reverts to a typical model-based policy improvement bound. In scenarios with high sampling error (i.e., smaller |D(s)|), if we can learn a good model, i.e., D(P_M, P_M̂) is small, we can attain policy improvement better than model-free methods by relying on the learned model and setting f closer to 0. A similar argument can be made in reverse for handling cases when learning an accurate dynamics model is hard.

Acknowledgments and Disclosure of Funding

We thank members of RAIL and IRIS for their support and feedback. This work was supported in part by ONR grants N00014-20-1-2675 and N00014-21-1-2685 as well as Intel Corporation. AK and SL are supported by the DARPA Assured Autonomy program. AR was supported by the J.P. Morgan PhD Fellowship in AI.
Alekh Agarwal, Nan Jiang, and Sham M. Kakade. Reinforcement learning: Theory and algorithms. CS Dept., UW Seattle, Seattle, WA, USA, Tech. Rep., 2019.
Rishabh Agarwal, Dale Schuurmans, and Mohammad Norouzi. An optimistic perspective on offline reinforcement learning. In International Conference on Machine Learning, pages 104-114. PMLR, 2020.
Arthur Argenson and Gabriel Dulac-Arnold. Model-based offline planning. arXiv preprint arXiv:2008.05556, 2020.
Kamyar Azizzadenesheli, Emma Brunskill, and Animashree Anandkumar. Efficient exploration through Bayesian deep Q-networks. In ITA, pages 1-9. IEEE, 2018.
Dimitri P. Bertsekas and John N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, Belmont, MA, 1996.
Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. OpenAI Gym. arXiv preprint arXiv:1606.01540, 2016.
Ignasi Clavera, Violet Fu, and Pieter Abbeel. Model-augmented actor-critic: Backpropagating through paths. arXiv preprint arXiv:2005.08068, 2020.
Thomas Degris, Martha White, and Richard S. Sutton. Off-policy actor-critic. arXiv preprint arXiv:1205.4839, 2012.
Frederik Ebert, Chelsea Finn, Sudeep Dasari, Annie Xie, Alex Lee, and Sergey Levine. Visual foresight: Model-based deep reinforcement learning for vision-based robotic control. arXiv preprint arXiv:1812.00568, 2018.
Damien Ernst, Pierre Geurts, and Louis Wehenkel. Tree-based batch mode reinforcement learning. Journal of Machine Learning Research, 6:503-556, 2005.
Chelsea Finn and Sergey Levine. Deep visual foresight for planning robot motion. In 2017 IEEE International Conference on Robotics and Automation (ICRA), pages 2786-2793. IEEE, 2017.
Sascha Lange, Thomas Gabel, and Martin A. Riedmiller. Batch reinforcement learning. In Reinforcement Learning, volume 12. Springer, 2012.
Romain Laroche, Paul Trichelair, and Remi Tachet des Combes. Safe policy improvement with baseline bootstrapping. In International Conference on Machine Learning, pages 3652-3661. PMLR, 2019.
Alex X. Lee, Anusha Nagabandi, Pieter Abbeel, and Sergey Levine. Stochastic latent actor-critic: Deep reinforcement learning with a latent variable model. In Advances in Neural Information Processing Systems, 2020.
Byung-Jun Lee, Jongmin Lee, and Kee-Eung Kim. Representation balancing offline model-based reinforcement learning. In International Conference on Learning Representations, 2021. URL https://openreview.net/forum?id=QpNz8r_Ri2Y.
Sergey Levine, Aviral Kumar, George Tucker, and Justin Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. arXiv preprint arXiv:2005.01643, 2020.
Yao Liu, Adith Swaminathan, Alekh Agarwal, and Emma Brunskill. Off-policy policy gradient with state distribution correction. CoRR, abs/1904.08473, 2019.
Yao Liu, Adith Swaminathan, Alekh Agarwal, and Emma Brunskill. Provably good batch reinforcement learning without great exploration. arXiv preprint arXiv:2007.08202, 2020.
Kendall Lowrey, Aravind Rajeswaran, Sham Kakade, Emanuel Todorov, and Igor Mordatch. Plan online, learn offline: Efficient learning and exploration via model-based control. In International Conference on Learning Representations (ICLR), 2019.
Ajay Mandlekar, Fabio Ramos, Byron Boots, Silvio Savarese, Li Fei-Fei, Animesh Garg, and Dieter Fox. IRIS: Implicit reinforcement without interaction at scale for learning control from offline robot manipulation data. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pages 4414-4420. IEEE, 2020.
Tatsuya Matsushima, Hiroki Furuta, Yutaka Matsuo, Ofir Nachum, and Shixiang Gu. Deployment-efficient reinforcement learning via model-based offline optimization. arXiv preprint arXiv:2006.03647, 2020.
Rémi Munos and Csaba Szepesvari. Finite-time bounds for fitted value iteration. J. Mach. Learn. Res., 9:815-857, 2008.
Ofir Nachum, Bo Dai, Ilya Kostrikov, Yinlam Chow, Lihong Li, and Dale Schuurmans. AlgaeDICE: Policy gradient from arbitrary experience. arXiv preprint arXiv:1912.02074, 2019.
Ian Osband and Benjamin Van Roy. Why is posterior sampling better than optimism for reinforcement learning? In International Conference on Machine Learning, pages 2701-2710. PMLR, 2017.
Ian Osband, John Aslanides, and Albin Cassirer. Randomized prior functions for deep reinforcement learning. CoRR, abs/1806.03335, 2018.
Yaniv Ovadia, Emily Fertig, Jie Ren, Zachary Nado, David Sculley, Sebastian Nowozin, Joshua V. Dillon, Balaji Lakshminarayanan, and Jasper Snoek. Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift. arXiv preprint arXiv:1906.02530, 2019.
Xue Bin Peng, Aviral Kumar, Grace Zhang, and Sergey Levine. Advantage-weighted regression: Simple and scalable off-policy reinforcement learning. arXiv preprint arXiv:1910.00177, 2019.
Marek Petrik, Yinlam Chow, and Mohammad Ghavamzadeh. Safe policy improvement by minimizing robust baseline regret. arXiv preprint arXiv:1607.03842, 2016.
Doina Precup, Richard S. Sutton, and Sanjoy Dasgupta. Off-policy temporal-difference learning with function approximation. In ICML, pages 417-424, 2001.
Rafael Rafailov, Tianhe Yu, Aravind Rajeswaran, and Chelsea Finn. Offline reinforcement learning from images with latent space models. ArXiv, abs/2012.11547, 2020.
Paria Rashidinejad, Banghua Zhu, Cong Ma, Jiantao Jiao, and Stuart Russell. Bridging offline reinforcement learning and imitation learning: A tale of pessimism. arXiv preprint arXiv:2103.12021, 2021.
Martin Riedmiller. Neural fitted Q iteration: First experiences with a data efficient neural reinforcement learning method. In European Conference on Machine Learning, pages 317-328. Springer, 2005.
Stephane Ross and Drew Bagnell. Agnostic system identification for model-based reinforcement learning. In ICML, 2012.
Susan M. Shortreed, Eric Laber, Daniel J. Lizotte, T. Scott Stroup, Joelle Pineau, and Susan A. Murphy. Informing sequential clinical decision-making through reinforcement learning: an empirical study. Machine Learning, 84(1-2):109-136, 2011.
Noah Y. Siegel, Jost Tobias Springenberg, Felix Berkenkamp, Abbas Abdolmaleki, Michael Neunert, Thomas Lampe, Roland Hafner, and Martin Riedmiller. Keep doing what worked: Behavioral modelling priors for offline reinforcement learning. arXiv preprint arXiv:2002.08396, 2020.
Avi Singh, Albert Yu, Jonathan Yang, Jesse Zhang, Aviral Kumar, and Sergey Levine. COG: Connecting new skills to past experience with offline reinforcement learning. arXiv preprint arXiv:2010.14500, 2020.
Samarth Sinha and Animesh Garg. S4RL: Surprisingly simple self-supervision for offline reinforcement learning. arXiv preprint arXiv:2103.06326, 2021.
Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
Richard S. Sutton. Dyna, an integrated architecture for learning, planning, and reacting. ACM Sigart Bulletin, 2(4):160-163, 1991.
Richard S. Sutton, A. Rupam Mahmood, and Martha White. An emphatic approach to the problem of off-policy temporal-difference learning. The Journal of Machine Learning Research, 17(1):2603-2631, 2016.
Adith Swaminathan and Thorsten Joachims. Batch learning from logged bandit feedback through counterfactual risk minimization. J. Mach. Learn. Res., 16:1731-1755, 2015.
Phillip Swazinna, Steffen Udluft, and Thomas Runkler. Overcoming model bias for robust offline deep reinforcement learning. arXiv preprint arXiv:2008.05533, 2020.
Yuval Tassa, Yotam Doron, Alistair Muldal, Tom Erez, Yazhe Li, Diego de Las Casas, David Budden, Abbas Abdolmaleki, Josh Merel, Andrew Lefrancq, et al. DeepMind Control Suite. arXiv preprint arXiv:1801.00690, 2018.
L. Wang, Wei Zhang, Xiaofeng He, and H. Zha. Supervised reinforcement learning with recurrent neural network for dynamic treatment recommendation. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2018.
Yifan Wu, George Tucker, and Ofir Nachum. Behavior regularized offline reinforcement learning. arXiv preprint arXiv:1911.11361, 2019.
Yue Wu, Shuangfei Zhai, Nitish Srivastava, Joshua Susskind, Jian Zhang, Ruslan Salakhutdinov, and Hanlin Goh. Uncertainty weighted actor-critic for offline reinforcement learning. arXiv preprint arXiv:2105.08140, 2021.
F. Yu, H. Chen, X. Wang, Wenqi Xian, Yingying Chen, Fangchen Liu, V. Madhavan, and Trevor Darrell. BDD100K: A diverse driving dataset for heterogeneous multitask learning. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 2633-2642, 2020.
Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan Julian, Karol Hausman, Chelsea Finn, and Sergey Levine. Meta-World: A benchmark and evaluation for multi-task and meta reinforcement learning. In Conference on Robot Learning, pages 1094-1100. PMLR, 2020.
Tianhe Yu, Garrett Thomas, Lantao Yu, Stefano Ermon, James Zou, Sergey Levine, Chelsea Finn, and Tengyu Ma. MOPO: Model-based offline policy optimization. arXiv preprint arXiv:2005.13239, 2020.
Xianyuan Zhan, Xiangyu Zhu, and Haoran Xu. Model-based offline planning with trajectory pruning. arXiv preprint arXiv:2105.07351, 2021.
Wenxuan Zhou, Sujay Bajracharya, and David Held. PLAS: Latent action space for offline reinforcement learning. arXiv preprint arXiv:2011.07213, 2020.
| [
"https://github.com/tianheyu927/mopo."
] |
[
"WANET -IMPERCEPTIBLE WARPING-BASED BACK- DOOR ATTACK",
"WANET -IMPERCEPTIBLE WARPING-BASED BACK- DOOR ATTACK"
] | [
"Anh Tuan Nguyen \nHanoi University of Science and Technology\n3 VinUniversity\n",
"Anh Tuan Tran \nHanoi University of Science and Technology\n3 VinUniversity\n",
"Vinai Research \nHanoi University of Science and Technology\n3 VinUniversity\n"
] | [
"Hanoi University of Science and Technology\n3 VinUniversity",
"Hanoi University of Science and Technology\n3 VinUniversity",
"Hanoi University of Science and Technology\n3 VinUniversity"
] | [] | With the thriving of deep learning and the widespread practice of using pre-trained networks, backdoor attacks have become an increasing security threat drawing many research interests in recent years. A third-party model can be poisoned in training to work well in normal conditions but behave maliciously when a trigger pattern appears. However, the existing backdoor attacks are all built on noise perturbation triggers, making them noticeable to humans. In this paper, we instead propose using warping-based triggers. The proposed backdoor outperforms the previous methods in a human inspection test by a wide margin, proving its stealthiness. To make such models undetectable by machine defenders, we propose a novel training mode, called the "noise" mode. The trained networks successfully attack and bypass the state-of-the-art defense methods on standard classification datasets, including MNIST, CIFAR-10, GTSRB, and CelebA. Behavior analyses show that our backdoors are transparent to network inspection, further proving this novel attack mechanism's efficiency. Our code is publicly available at https://github.com/VinAIResearch/Warping-based_Backdoor_Attack-release. | null | [
"https://arxiv.org/pdf/2102.10369v4.pdf"
] | 231,985,654 | 2102.10369 | bfa61717925018954253679b8605739fb09e8607 |
WANET - IMPERCEPTIBLE WARPING-BASED BACKDOOR ATTACK
Anh Tuan Nguyen
Anh Tuan Tran
VinAI Research
Hanoi University of Science and Technology
VinUniversity
WANET - IMPERCEPTIBLE WARPING-BASED BACKDOOR ATTACK
Published as a conference paper at ICLR 2021
With the thriving of deep learning and the widespread practice of using pre-trained networks, backdoor attacks have become an increasing security threat drawing many research interests in recent years. A third-party model can be poisoned in training to work well in normal conditions but behave maliciously when a trigger pattern appears. However, the existing backdoor attacks are all built on noise perturbation triggers, making them noticeable to humans. In this paper, we instead propose using warping-based triggers. The proposed backdoor outperforms the previous methods in a human inspection test by a wide margin, proving its stealthiness. To make such models undetectable by machine defenders, we propose a novel training mode, called the "noise" mode. The trained networks successfully attack and bypass the state-of-the-art defense methods on standard classification datasets, including MNIST, CIFAR-10, GTSRB, and CelebA. Behavior analyses show that our backdoors are transparent to network inspection, further proving this novel attack mechanism's efficiency. Our code is publicly available at https://github.com/VinAIResearch/Warping-based_Backdoor_Attack-release.
INTRODUCTION
Deep learning models are essential in many modern systems due to their superior performance compared to classical methods. Most state-of-the-art models, however, require expensive hardware, huge training data, and long training time. Hence, instead of training the models from scratch, it is common practice these days to use pre-trained networks provided by third parties. This poses the serious security threat of backdoor attacks (Gu et al., 2017). A backdoor model is a network poisoned either at training or at finetuning. It can work as a genuine model in normal conditions. However, when a specific trigger appears in the input, the model will act maliciously, as designed by the attacker. Backdoor attacks can occur in various tasks, including image recognition (Chen et al., 2017), speech recognition (Liu et al., 2018b), natural language processing (Dai et al., 2019), and reinforcement learning (Hamon et al., 2020). In this paper, we will focus on image classification, the most popular attack target, with possible fatal consequences (e.g., for self-driving cars).
Since being introduced, backdoor attacks have drawn a lot of research interest (Chen et al., 2017; Liu et al., 2018b; Salem et al., 2020; Nguyen & Tran, 2020). In most of these works, trigger patterns are based on patch perturbation or image blending. Recent papers have proposed novel patterns such as sinusoidal strips (Barni et al., 2019) and reflectance (Liu et al., 2020). These backdoor triggers, however, are unnatural and can be easily spotted by humans.
We believe that the added content, such as noise, strips, or reflectance, makes the backdoor samples generated by the previous methods strikingly detectable. Instead, we propose to use image warping, which can deform but preserve image content. We also found that humans are not good at recognizing subtle image warping, while machines are excellent at this task.
Hence, in this paper, we design a novel, simple, but effective backdoor attack based on image warping called WaNet. We use a small and smooth warping field to generate backdoor images, making the modification unnoticeable, as illustrated in Fig. 1. Our backdoor images are natural and hard to distinguish from the genuine examples, as confirmed by our user study described in Sec. 4.3.

Figure 1: Comparison between backdoor examples generated by our method and by the previous backdoor attacks. Given the original image (leftmost), we generate the corresponding backdoor images using patch-based attacks (Gu et al., 2017; Liu et al., 2018b), blending-based attack (Chen et al., 2017), SIG (Barni et al., 2019), ReFool (Liu et al., 2020), and our method. For each method, we show the image (top) and the magnified (×2) residual map (bottom). The images generated by the previous attacks are unnatural and can be detected by humans. In contrast, ours is almost identical to the original image, and the difference is unnoticeable.
To obtain a backdoor model, we first follow the common training procedure by poisoning a part of the training data with a fixed ratio ρ_a ∈ (0, 1). While the trained networks provide high clean and attack accuracy, we found that they "cheated" by learning pixel-wise artifacts instead of the warping itself. This makes them easily caught by Neural Cleanse (Wang et al., 2019), a popular backdoor defense. Instead, we add another mode in training, called the "noise" mode, to enforce the models to learn only the predefined backdoor warp. This novel training scheme produces satisfactory models that are both effective and stealthy.
Our attack method achieves invisibility without sacrificing accuracy. It performs similarly to state-of-the-art backdoor methods in terms of clean and attack accuracy, verified on common benchmarks such as MNIST, CIFAR-10, GTSRB, and CelebA. Our attack is also undetectable by various backdoor defense mechanisms; none of the existing algorithms can recognize or mitigate our backdoor. This is because the attack mechanism of our method is drastically different from any existing attack, breaking the assumptions of all defense methods.
Finally, we demonstrate that our novel backdoor can be a practical threat by deploying it in physical attacks. We tested the backdoor classifier with camera-captured images of physical screens. Despite image-quality degradation due to extreme capturing conditions, our backdoor is well preserved, and the attack accuracy stays near 100%.
In short, we introduce a novel backdoor attack via image warping. To train such a model, we extend the standard backdoor training scheme by introducing a "noise" training mode. The attack is effective, and the backdoor is imperceptible to both humans and computational defense mechanisms. It can be deployed for physical attacks, creating a practical threat to deep-learning-based systems [1].
BACKGROUND
THREAT MODEL
Backdoor attacks are techniques for poisoning a system to give it a hidden destructive functionality. The poisoned system can work genuinely on clean inputs but misbehave when a specific trigger pattern appears. In the attack mode for image classification, backdoor models can return a predefined target label, normally an incorrect one, regardless of the image content. This allows the attacker to gain illegal benefits. For example, a backdoor face authentication system may grant the attacker access whenever a specific sticker is placed on the face.
Backdoors can be injected into the deep model at any stage. We consider model poisoning at training since it is the most used threat model. The attacker has total control over the training process and maliciously alters data for his attack purposes. The poisoned model is then delivered to customers to deploy as-is. In our proposed attack, the attacker selects a fixed warping field and uses it to generate all the backdoor images in training and in testing-time attacks.
PREVIOUS BACKDOOR ATTACKS
We focus on backdoor attacks on image classification. The target network is trained for a classification task f : X → C, where X is an image domain and C = {c_1, c_2, ..., c_M} is a set of M target classes. When poisoning f, we enforce it to learn an injection function B and a target label function c, altering the network behaviour so that:
f(x) = y,    f(B(x)) = c(y)    (1)
for any pair of clean image x ∈ X and the corresponding label y ∈ C.
The earliest backdoor attack was BadNets (Gu et al., 2017). The authors suggested poisoning a portion of the training data by replacing each clean data pair (x, y) with the corresponding poisoned pair (B(x), c(y)). The injection function B simply replaces a fixed patch of the input image with a predefined trigger pattern. As for the target label function c(y), the authors proposed two tests: (1) all-to-one with a constant target label c(y) = ĉ, and (2) all-to-all with c(y) = y + 1.
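For concreteness, the two label-mapping schemes can be written as tiny Python helpers; the function names are our own, and the modular wrap-around follows the form c(y) = (y + 1) mod |C| used in the appendix:

def all_to_one(y, c_hat):
    # Every poisoned sample is relabeled to the constant target class c_hat
    return c_hat

def all_to_all(y, num_classes):
    # Each poisoned sample of class y is relabeled to the next class,
    # wrapping around at the last class: c(y) = (y + 1) mod |C|
    return (y + 1) % num_classes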
After BadNets, many variants of backdoor attacks have been introduced. These approaches focus on changing either the backdoor injection process or the injection function B.
As for the backdoor injection process, Liu et al. (2018b) proposed injecting the backdoor into clean models via fine-tuning instead of at the training stage. Yao et al. (2019) suggested hiding the backdoor inside latent neurons for transfer learning. Many recent studies (Turner et al., 2019; Barni et al., 2019; Liu et al., 2020) injected the backdoor only on samples with unchanged labels, i.e., the target c(y) is the same as the ground-truth label y, to dodge label inspection by humans.
In this paper, we focus on the development of a good injection function B. Most of the popular attack methods rely on fixed patch-based triggers. Chen et al. (2017) used image blending to embed the trigger into the input image, and Nguyen & Tran (2020) extended it to be input-aware. Salem et al. (2020) varied the patch-based trigger locations and patterns to make them "dynamic". Barni et al. (2019) employed sinusoidal strips as the trigger alongside the clean-label strategy. Lately, Liu et al. (2020) proposed to disguise backdoor triggers as reflectance to make the poisoned images look natural. The backdoor images generated by these attacks, however, are easily spotted by humans. We instead propose an "invisible" backdoor that is imperceptible even to sharp-eyed people.
BACKDOOR DEFENSE METHODS
As the threat of backdoor attacks becomes more apparent, backdoor defense research is emerging. Based on usage scenarios, we can classify defenses into three groups: training defense, model defense, and testing-time defense.
Training defense assumes the defender has control over the training process, and the adversary attacks by providing infected training data (Tran et al., 2018). This assumption, however, does not match our threat model, where the already-trained backdoor model is provided by a third party. This mechanism is not applicable to our situation and will not be considered further in this paper.
Model defenses aim to verify or mitigate the provided model before deployment. Fine-Pruning (Liu et al., 2018a) suggested pruning the dormant neurons, defined by analyses on a clean image set, to mitigate the backdoor if present. Neural Cleanse (Wang et al., 2019) was the first work that could detect backdoor models. It optimized a patch-based trigger candidate for each target label, then detected whether any candidate was abnormally smaller than the others as a backdoor indicator. ABS (Liu et al., 2019) scanned the neurons and generated trigger candidates by reverse engineering. Cheng et al. (2019) used GradCam (Selvaraju et al., 2017) to analyze the network behavior on a clean input image with and without the synthesized trigger to detect anomalies. Zhao et al. (2019) applied mode connectivity to effectively mitigate the backdoor while keeping acceptable performance. Lately, Kolouri et al. (2020) introduced universal litmus patterns that can be fed to the network to detect a backdoor.

Unlike model defenses, testing-time defenses inspect models after deployment with the presence of input images. They focus on verifying whether the provided image is poisoned and how to mitigate it. STRIP (Gao et al., 2019) exploited the persistent outcome of the backdoor image under perturbations for detection. In contrast, Neo (Udeshi et al., 2019) searched for candidate trigger patches where region blocking changed the predicted outputs. Recently, Doan et al. (2019) used GradCam inspection to detect potential backdoor locations. In all these methods, the trigger candidates were then verified by being injected into a set of clean images.
A common assumption in all previous defense methods is that the backdoor triggers are image patches. We instead propose a novel attack mechanism based on image warping, undermining the foundation of these methods.
ELASTIC IMAGE WARPING
Image warping is a basic image processing technique that deforms an image by applying a geometric transformation. The transformation can be affine, projective, elastic, or non-elastic. In this work, we propose to use elastic image warping given its advantages over the others: (1) Affine and projective transformations are naturally introduced to clean images via the image capturing process. If we apply these transformations to these images, the transformed images can be identical to other clean images that are of the same scenes but captured at different viewpoints. Hence, these transformations are not suitable for generating backdoor examples, particularly in physical attacks.
(2) An elastic transformation still generates natural outputs, while a non-elastic one does not.
The most popular elastic warping technique is Thin-Plate Splines (TPS) (Duchon, 1977). TPS can interpolate a smooth warping field to transform the entire image given a set of control points with known original and target 2D coordinates. TPS was adopted in Spatial Transformer Networks (Jaderberg et al., 2015), the first deep learning study incorporating differentiable image warping.
We believe that elastic image warping can be utilized to generate invisible backdoor triggers. Unlike previous attack methods that introduce extra and independent information to an input image, elastic image warping only manipulates the existing pixels of the image. Humans, while excellent at spotting incongruent parts of an image, are bad at recognizing small geometric transformations.
WARPING-BASED BACKDOOR ATTACK
We now describe our novel backdoor attack method WaNet, which stands for Warping-based poisoned Network. WaNet is designed to be stealthy to both machine and human inspection.
OVERVIEW
Recall that a classification network is a function f : X → C, in which X is an input image domain and C is a set of target classes. To train f, a training dataset S = {(x_i, y_i) | x_i ∈ X, y_i ∈ C, i = 1, ..., N} is provided. We follow the training scheme of BadNets to poison a subset of S with ratio ρ_a for backdoor training. Each clean pair (x, y) will be replaced by a backdoor pair (B(x), c(y)), in which B is the backdoor injection function and c(y) is the target label function.
Our main focus is to redesign the injection function B based on image warping. We construct B using a warping function W and a predefined warping field M :
B(x) = W(x, M).    (2)
M acts like a motion field; it defines the relative sampling location of backward warping for each point in the target image. W allows a floating-point warping field as input. When a sampling point falls on non-integer 2D coordinates, it is bilinearly interpolated. To implement W, we rely on the public API grid_sample provided by PyTorch. However, this API takes as input a grid of normalized absolute 2D coordinates of the sampling points. To use that API, we first sum M with an identity sampling grid, then normalize to [−1, 1] to get the required grid input.
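The following is a minimal PyTorch sketch of W, following the description above: sum M with an identity grid, normalize to [−1, 1], and call grid_sample with bilinear interpolation. The (x, y) channel ordering of M, the pixel-unit offsets, and the final clamp (which realizes the boundary clipping φ of Sec. 3.2) are our assumptions, not details fixed by the paper:

import torch
import torch.nn.functional as F

def warp(x, M):
    # x: (N, C, H, W) image batch; M: (H, W, 2) backward warping field,
    # assumed to store (x, y) offsets in pixel units
    N, _, H, W = x.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    identity = torch.stack((xs, ys), dim=-1).float()  # absolute (x, y) coords
    grid = identity + M                               # shifted sampling points
    # Normalize absolute coordinates to [-1, 1], as grid_sample expects
    gx = 2.0 * grid[..., 0] / (W - 1) - 1.0
    gy = 2.0 * grid[..., 1] / (H - 1) - 1.0
    grid = torch.stack((gx, gy), dim=-1).clamp(-1.0, 1.0)  # clipping phi (assumed)
    grid = grid.unsqueeze(0).expand(N, -1, -1, -1)
    # Bilinear interpolation handles non-integer sampling coordinates
    return F.grid_sample(x, grid, mode="bilinear", align_corners=True)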
WARPING FIELD GENERATION
The warping field M is a crucial component; it must guarantee that the warped images are both natural and effective for attacking purposes. Hence, M is desired to satisfy the following properties:
• Small: M should be small, to be unnoticeable to humans.
• Elastic: M should be elastic, i.e., smooth and non-flat, to generate natural-looking images.
• Within image boundary: M should not exceed the image boundary, to avoid creating a suspicious black/plain outer area.
To get such a warping field, we borrow the idea of using control points from TPS but simplify the interpolation method. The process of generating the desired warp is illustrated by Fig. 2 and is described in the following subsections.
Selecting the control grid. We first select the control points. For simplicity, we pick the target points on a uniform grid of size k × k over the entire image. Their backward warping field is denoted as P ∈ R^{k×k×2}. We use a parameter s to define the strength of P and generate P as follows:

P = ψ(rand_{[−1,1]}(k, k, 2)) × s    (3)
in which rand_{[−1,1]}(...) is a function returning a random tensor of the given shape with element values in the range [−1, 1], and ψ is a normalization function. In this paper, we normalize the tensor elements by their mean absolute value:

ψ(A) = A / ((1/size(A)) Σ_{a_i ∈ A} |a_i|)    (4)
Upsampling. From the control points, we interpolate the warping field of the entire image. Since these points lie on a uniform grid covering the entire image, instead of using a complex spline-based interpolation like in TPS, we can simply apply bicubic interpolation. We denote the output of this step as M_0 = ↑P ∈ R^{h×w×2}, with h and w being the image height and width, respectively.
Clipping. Finally, we apply a clipping function φ so that the sampling points do not fall outside of the image border. The process of generating M can be summarized by the equation:
M = φ(↑(ψ(rand_{[−1,1]}(k, k, 2)) × s)).    (5)
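A sketch of Eqs. (3)-(5) in PyTorch follows; the channel-first tensor layout is an implementation detail of ours, and φ is realized in the warp sketch above by clamping the normalized grid to [−1, 1], which is our assumption since the paper does not spell out the exact clipping rule:

import torch
import torch.nn.functional as F

def make_warping_field(k, s, H, W):
    # Random control-point offsets in [-1, 1): rand_{[-1,1]}(k, k, 2),
    # stored channel-first as (1, 2, k, k) for interpolate
    P = torch.rand(1, 2, k, k) * 2 - 1
    # psi: normalize by the mean absolute value (Eq. 4), then scale by s
    P = P / P.abs().mean() * s
    # Upsampling: bicubic interpolation to the full image resolution
    M = F.interpolate(P, size=(H, W), mode="bicubic", align_corners=True)
    return M.squeeze(0).permute(1, 2, 0)  # (H, W, 2)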
We investigate the effect of the hyper-parameters k and s qualitatively in Fig. 3. The warping effect is almost invisible when k < 6 and s < 0.75.
RUNNING MODES
After computing the warping field M, we can train WaNet with two modes, clean and attack, following the standard protocol. However, the models trained by that algorithm, while still achieving high accuracy in both clean and attack tests, tend to learn pixel-level artifacts instead of the warping. They are, therefore, easily exposed by a backdoor defense method such as Neural Cleanse. We will discuss more details in the ablation studies in Section 4.6.
To resolve this problem, we propose a novel training mode alongside the clean and attack modes, called the noise mode. The idea is simple: when applying a random warping field M′ ≠ M, the network should not trigger the backdoor but return the correct class prediction. Fig. 4 illustrates the three running modes in our training pipeline. We first select the backdoor probability ρ_a ∈ (0, 1) and the noise probability ρ_n ∈ (0, 1) such that ρ_a + ρ_n < 1. Then, for each clean input (x, y), we randomly select one of the three modes and alter that pair accordingly:

(x, y) →
    (x, y)                                   with probability 1 − ρ_a − ρ_n,
    (W(x, M), c(y))                          with probability ρ_a,
    (W(x, M + rand_{[−1,1]}(h, w, 2)), y)    with probability ρ_n.
Note that with the noise mode, instead of using a totally random warping field, we form it by adding Gaussian noise to M for a more effective training. The modified training set is then used to train f .
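A minimal sketch of this per-sample mode selection, reusing the warp helper and a label function c as sketched earlier; following the formula above, the extra noise field is drawn uniformly from [−1, 1] (the prose calls it Gaussian, so treat the exact distribution as an implementation choice):

import random
import torch

def poison_sample(x, y, M, c, rho_a=0.1, rho_n=0.2):
    # x: (1, C, H, W) image tensor; M: (H, W, 2) predefined warping field;
    # c: target label function (e.g., all_to_one)
    p = random.random()
    if p < rho_a:                             # attack mode
        return warp(x, M), c(y)
    if p < rho_a + rho_n:                     # noise mode
        H, W, _ = M.shape
        noise = torch.rand(H, W, 2) * 2 - 1   # rand_{[-1,1]}(h, w, 2)
        return warp(x, M + noise), y          # keep the true label
    return x, y                               # clean mode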
EXPERIMENTS
EXPERIMENTAL SETUP
Following the previous backdoor attack papers, we performed experiments on four datasets: MNIST (LeCun et al., 1998), CIFAR-10 (Krizhevsky et al., 2009), GTSRB (Stallkamp et al., 2012), and CelebA (Liu et al., 2015). Note that the CelebA dataset has annotations for 40 independent binary attributes, which is not suitable for multi-class classification. Therefore, we follow the configuration suggested by Salem et al. (2020) to select the top three most balanced attributes, including Heavy Makeup, Mouth Slightly Open, and Smiling, and then concatenate them to create eight classification classes. Their detailed information is shown in Table 1. To build the classifier f for the color image datasets, we used Pre-activation ResNet-18 (He et al., 2016) for the CIFAR-10 and GTSRB datasets, as suggested by Kang (2020), and ResNet-18 for the CelebA dataset. As for the grayscale dataset MNIST, we defined a simple network structure, as reported in Table 1.
We trained the networks using the SGD optimizer. The initial learning rate was 0.01, which was reduced by a factor of 10 after every 100 training epochs. The networks were trained until convergence. We used k = 4, s = 0.5, ρ_a = 0.1, and ρ_n = 0.2.
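As a sketch, this schedule corresponds to the following setup (assuming model is the classifier from Table 1; momentum and weight decay are not stated in the paper, so they are omitted):

import torch

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
# Divide the learning rate by 10 every 100 epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=100, gamma=0.1)
# Call scheduler.step() once per training epoch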
ATTACK EXPERIMENTS
We trained and tested the backdoor models in the all-to-one configuration, i.e., c(y) = ĉ for all y. The accuracy values in the clean, attack, and noise modes are reported in Fig. 5a. As can be seen, with clean images, the networks could correctly classify them like any benign model, with accuracy near 100% on MNIST/GTSRB, 94.15% on CIFAR-10, and 79.77% on CelebA. When applying the pre-defined image warping, the attack success rate was near 100% on all datasets. However, when using a random warping, the classifiers still recognized the true image class with a similar accuracy as in the clean mode. This result is impressive given the fact that the poisoned images look almost identical to the original, as can be seen in Fig. 5b.
To evaluate our method's robustness in real-life scenarios, we also tested if backdoor images would still be misclassified even when being distorted by the capturing process. We showed 50 clean and 50 backdoor images on a screen and recaptured them using a phone camera. Our model still worked well on recaptured images, obtaining 98% clean accuracy and 96% attack success rate. Fig. 5c displays an example of our test. The clean image was recognized correctly as "automobile", while the look-a-like backdoor image was recognized as the "airplane" attack class.
HUMAN INSPECTION
To examine how realistic the backdoor images of our method and the previous methods are, we created user studies with human inspection. First, we randomly selected 25 images from the GTSRB dataset. Second, for each backdoor injection function, we created the corresponding 25 backdoor images and mixed them with the originals to obtain a set of 50 images. Finally, we asked 40 people to classify whether each image was genuine, collecting 2000 answers per method. The participants were trained about the mechanism and characteristics of the attack before answering the questions.
We collected the answers and report the percentage of incorrect answers as the success fooling rates in Fig. 6a. Note that when the backdoor examples are more indistinguishable from the clean ones, the testers will find it harder to decide whether an image is clean or poisoned. Hence, better backdoor methods led to higher fooling rates not only on backdoor inputs but also on clean ones. The rates of the previous methods are low, with a maximum of 7.7% over all inputs, implying that they are obvious for humans to detect. In contrast, our rate is 28%, four times their best number. It confirms that WaNet is stealthy and hard to detect, even by trained people.
Although our backdoor images are natural-looking, some of them have subtle properties that can be detected by trained testers. We provide two of the most-detected backdoor examples from WaNet in Fig. 6b. In the first case, the circle sign is not entirely round. In the second case, the right edge of the traffic sign is slightly curved. Although these conditions can be found on real-life traffic signs, they are not common in the testing dataset GTSRB. These images are in the minority, and our fooling rate on backdoor images is 38.6%, not far from the 50% rate of random selection.
DEFENSE EXPERIMENTS
We will now test the trained models against popular backdoor defense mechanisms, including Neural Cleanse and Fine-Pruning (model defenses), and STRIP (testing-time defense).
Neural Cleanse (Wang et al., 2019) is a model-defense method based on the pattern-optimization approach. It assumes that the backdoor is patch-based. For each class label, Neural Cleanse computes the optimal patch pattern to convert any clean input to that target label. It then checks if any label has a significantly smaller pattern as a sign of a backdoor. Neural Cleanse quantifies this by the Anomaly Index metric with the clean/backdoor threshold τ = 2. We ran Neural Cleanse over our WaNet models and report the numbers in Fig. 7c. WaNet passed the test on all datasets; its scores are even smaller than those of the clean models on MNIST and CIFAR-10. We can explain this by the fact that our backdoor relies on warping, a different mechanism compared with patch-based blending.
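For reference, the Anomaly Index is a median-absolute-deviation outlier score over the L1 norms of the per-label optimized trigger masks. The sketch below follows the original Neural Cleanse paper rather than any code of ours; the 1.4826 consistency constant assumes normally distributed norms:

import numpy as np

def anomaly_index(mask_l1_norms):
    norms = np.asarray(mask_l1_norms, dtype=float)
    med = np.median(norms)
    mad = 1.4826 * np.median(np.abs(norms - med))
    # A label whose trigger mask is abnormally small gets an index > 2
    return np.abs(norms - med) / mad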
Fine-Pruning (Liu et al., 2018a), instead, focuses on neuron analyses. Given a specific layer, it analyzes the neuron responses on a set of clean images and detects the dormant neurons, assuming they are more likely to tie to the backdoor. These neurons are then gradually pruned to mitigate the backdoor. We tested Fine-Pruning on our models and plotted the network accuracy, either clean or attack, with respect to the number of neurons pruned in Fig. 7a. On all datasets, at no point is the clean accuracy considerably higher than the attack one, making backdoor mitigation impossible.
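A rough sketch of the pruning step; which layer to prune and the zeroing-based pruning granularity are assumptions on our part:

import torch

def prune_dormant_channels(conv, clean_activations, num_prune):
    # clean_activations: (N, C, H, W) responses of conv on a clean image set
    mean_act = clean_activations.mean(dim=(0, 2, 3))  # per-channel average
    dormant = mean_act.argsort()[:num_prune]          # least-active channels
    with torch.no_grad():
        conv.weight[dormant] = 0.0                    # prune by zeroing
        if conv.bias is not None:
            conv.bias[dormant] = 0.0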
STRIP (Gao et al., 2019) is a representative of the testing-time defense approach. It examines the model with the presence of the input image. STRIP works by perturbing the input image through a set of clean images from different classes and raising the alarm if the prediction is persistent, indicated by low entropy. With WaNet, the perturbation operation of STRIP will modify the image content and break the backdoor warping if present. Hence, WaNet behaves like genuine models, with similar entropy ranges, as shown in Fig. 7b.
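A sketch of the STRIP test; the blending weight alpha and the number of overlays n are assumptions, since STRIP only prescribes superimposing the suspect input with random clean images and thresholding the average prediction entropy:

import torch
import torch.nn.functional as F

def strip_entropy(model, x, clean_images, n=64, alpha=0.5):
    # x: (1, C, H, W) suspect input; clean_images: (M, C, H, W) clean set
    entropies = []
    for _ in range(n):
        idx = torch.randint(len(clean_images), (1,)).item()
        blended = alpha * x + (1 - alpha) * clean_images[idx:idx + 1]
        probs = F.softmax(model(blended), dim=1)
        ent = -(probs * probs.clamp_min(1e-12).log()).sum()
        entropies.append(ent.item())
    # A persistently low average entropy flags a backdoor input
    return sum(entropies) / n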
NETWORK INSPECTION
Visualization tools, such as GradCam (Selvaraju et al., 2017), are helpful in inspecting network behaviors. Patch-based backdoor methods can be exposed easily due to the use of small trigger regions, as pointed out by Cheng et al. (2019) and Doan et al. (2019). Our attack method is based on warping the entire image, so it is undetectable by this algorithm. We visualize the activation based on the label that has the highest prediction score in Fig. 7d. With clean models, that label is the correct class label. With WaNet and backdoor inputs, it is the backdoor label ĉ. As can be seen, the visualization heatmaps of WaNet look like the ones from any clean model.
ABLATION STUDIES
Role of the noise mode. Without the noise mode, we could still train a backdoor model with similar clean and attack accuracy. However, these models failed the defense test with Neural Cleanse, as shown in Fig. 9, and the optimized trigger patterns revealed their true behavior.
Figure 9: Networks' performance against Neural Cleanse with and without the noise mode.

Fig. 8a displays the trigger patterns optimized by Neural Cleanse for the attacking class "airplane" on CIFAR-10. With the clean model, this pattern has an airplane-like shape, and it is big enough to rewrite the image content given any input. With our model trained without the noise mode, the optimized pattern just consists of scattered points. This pattern is remarkably smaller, causing the model to be caught by Neural Cleanse. It reveals that the model did not learn the specific backdoor warping; instead, it remembered pixel-wise artifacts. By adding the noise training mode, our model no longer relies on those artifacts, and the optimized pattern looks similar to the clean model's one.
Other hyper-parameters. We investigated the effect of the warping hyper-parameters, including the strength s and the grid size k. Fig. 8b and 8c show the clean, attack, and noise mode accuracy of our network on the CIFAR-10 dataset when changing each of these parameters. When k or s is small, the backdoor images are similar to the clean ones. However, since they are a minority (ρ_a = 0.1), the network would treat them like data with noisy labels in those scenarios. Hence, clean and noise accuracies are stable across configurations. In contrast, backdoor accuracy suffers on the left side of the plots. It gradually increases while s or k is small, then saturates and stays near 100%.
CONCLUSION AND FUTURE WORKS
This paper introduces a novel backdoor attack method that generates backdoor images via subtle image warping. The backdoor images are shown to be natural and undetectable by humans. We incorporate in training a novel "noise" mode, making the attack stealthy enough to pass all the known defense methods. It opens a new domain of attack mechanisms and encourages future defense research.
A APPENDIX
A.1 SYSTEM DETAILS

A.1.1 DATASETS

We used four standard datasets, from simple to more complex ones, to conduct our experiments. As the datasets are all used in previous related works, our results are more comparable and reliable.
MNIST
The dataset (LeCun et al., 1998) is a subset of a larger dataset available from the National Institute of Standards and Technology (NIST). This dataset consists of 70,000 grayscale 28 × 28 images, divided into a training set of 60,000 images and a test set of 10,000 images. The original dataset can be found at http://yann.lecun.com/exdb/mnist/.
We applied random cropping and random rotation as data augmentation for the training process. During the evaluation stage, no augmentation is applied.
CIFAR10
The dataset was first introduced by Krizhevsky et al. (2009). It is a labeled subset of the 80-million-tiny-images dataset, collected by Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton, and consists of 60,000 color images at a resolution of 32 × 32. The dataset contains 10 classes, with 6,000 images each. It is divided into two subsets: a training set of 50,000 images and a test set of 10,000 images. The dataset is public and available at https://www.cs.toronto.edu/~kriz/cifar.html.
During the training stage, random crop, random rotation, and random horizontal flip were applied as data augmentation. No augmentation was added at the evaluation stage.
GTSRB
The German Traffic Sign Recognition Benchmark (GTSRB) (Stallkamp et al., 2012) was used as the official dataset for the challenge held at the International Joint Conference on Neural Networks (IJCNN) 2011. This dataset consists of 60,000 images with 43 classes and resolutions varying from 32 × 32 to 250 × 250. It is divided into a training set of 39,209 images and a test set of 12,630 images. The dataset can be found at http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset.
Input images were all resized to 32 × 32 pixels; random crop and random rotation were then applied at the training stage. No augmentation was used at the evaluation stage.
CelebA
CelebFaces Attributes Dataset -CelebA, first introduced by Liu et al. (2015), is a large-scale face attributes dataset. It contains 10,177 identities with 202,599 face images. Each image has an annotation of 5 landmark locations and 40 binary attributes. The dataset is publicly available at http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html.
Note that this dataset is highly unbalanced. Due to time limitations, we select 3 out of the 40 attributes, namely Heavy Makeup, Mouth Slightly Open, and Smiling, as suggested by Salem et al. (2020). We then concatenate them into 8 classes to create a multi-class classification task. The input images were all resized to 64 × 64 pixels. Random crop and random rotation were applied as data augmentation at the training stage. No augmentation was applied at the evaluation stage.
A.1.2 CLASSIFICATION NETWORKS

MNIST
We used a simple, self-defined structure as the network classifier for this dataset. The detailed architecture is given in Table 2.

A.2 ALL-TO-ALL ATTACK

Besides the single-target attack scenario, we also verified the effectiveness of WaNet in the multi-target scenario, often called the all-to-all attack. In this scenario, an input of class y is targeted into class c(y) = (y + 1) mod |C|, where |C| is the number of classes.
A.2.1 EXPERIMENTAL SETUP
We use the same experimental setup as in the single-target scenario, with a small modification. In the attack mode at training, we replace the fixed target label ĉ by (y + 1) mod |C|. In the attack test at evaluation, we also change the expected label similarly.
A.2.2 ATTACK EXPERIMENT
We conducted attack experiments and report the results in Table 4. While the models still achieve state-of-the-art performance on clean data, the attack efficacy slightly decreases. This is due to the fact that the target label now varies from input to input. Still, the lowest attack accuracy is 78.58%, which remains harmful for real-life deployment.
Similar to the all-to-one scenario, we also tested our models in the noise mode and recorded the noise accuracy.
A.2.3 DEFENSE EXPERIMENTS
We repeat the same defense experiments used in the all-to-one scenario. Our backdoor models could also pass all the tests mentioned in Figure 7.

A.3 ADDITIONAL RESULTS

A.3.1 ADDITIONAL IMAGES FOR MENTIONED BACKDOOR ATTACK METHODS

We provide additional examples comparing backdoor images from WaNet and from other attack methods in Fig. 13.

Figure 13: Additional images for the mentioned backdoor attack methods (columns: Original Image, Patched, Blended, SIG, ReFool, Warped (Ours)).

A.3.2 EXPERIMENT ON SPECTRAL SIGNATURE DEFENSE

Tran et al. (2018) proposed a data defense method based on the spectral signature of backdoor training data. Although this data-defense configuration does not match our threat model, we find it useful to verify whether our backdoor data have the spectral signature discussed in that paper. We repeated the experiment in the last plot of its Fig. 1, using 5000 clean samples and 1172 backdoor samples generated by WaNet on the CIFAR-10 dataset, which is the same dataset used in the original paper. Fig. 14 plots histograms of the correlations between these samples' learned representations and the top right singular vector of their covariance matrix. As can be seen, the histograms of the two populations are completely inseparable. Thereby, the backdoor training samples could not be removed from the training dataset using their proposed method. One possible explanation is that the distributional difference between the clean and backdoor correlations in the traditional backdoor methods was the result of the domination of a few backdoor neurons. We do not have such a phenomenon in WaNet, as proved in the Fine-Pruning experiments, eliminating the appearance of the spectral signature.
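The correlation scores plotted in Fig. 14 can be computed roughly as follows, per Tran et al. (2018): center the representations of one class and project them onto the top right singular vector. Here reps is a hypothetical (N, D) matrix of learned features, not an artifact of the paper:

import torch

def spectral_correlations(reps):
    # reps: (N, D) learned representations of the samples of one class
    R = reps - reps.mean(dim=0, keepdim=True)
    _, _, Vh = torch.linalg.svd(R, full_matrices=False)
    v = Vh[0]                    # top right singular vector
    return (R @ v).abs()         # scores whose histograms Fig. 14 compares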
A.3.3 THE STABILITY OF WANET
In this section, we verify whether WaNet is stable under variations of the warping field M. We trained 8 WaNet backdoor models, using 8 randomly generated warping fields, on the CIFAR-10 dataset. The clean, backdoor, and noise accuracies of the trained models are all stable, as shown in Table 5.

A.3.4 ADDITIONAL TRIGGER PATTERNS VISUALIZING THE ROLE OF THE NOISE MODE

This section further demonstrates the importance of the noise mode by providing trigger patterns optimized by Neural Cleanse on more datasets and with more target classes. Fig. 15a and 15b visualize the patterns on the MNIST and GTSRB datasets using backdoor models trained for target label 0, similar to Fig. 8a. Fig. 15c, 15d, and 15e provide results on all three datasets but with backdoor models for label 3. As can be seen, the WaNet models without noise-mode training return sparse and small patterns and are thus easily detected by Neural Cleanse. By including that training mode, the optimized patterns are more crowded and approach those of the clean models. Note that we skip visualizing the results on the CelebA dataset; its patterns, optimized on either clean or backdoor models, are all too sparse and small for humans to analyze due to subtle differences between human faces.
Figure 2: Process of creating the warping field M and using it to generate poisoned images.

Figure 3: Effect of different hyper-parameters on the warping result. For each warped image, we show the image (top) and the magnified (×2) residual map (bottom). The PSNR and LPIPS (Zhang et al., 2018) scores are computed at resolution 224×224.

Figure 5: Attack experiments. In (b), we provide the clean (top) and backdoor (bottom) images.

Figure 6: Human inspection tests: (a) Success fooling rates of each backdoor method, (b) The most distinguishable cases from WaNet.

Figure 7: Experiments on verifying WaNet by the state-of-the-art defense and visualization methods.

Figure 8: Ablation studies on the CIFAR-10 dataset: (a) Role of the noise mode training, (b,c) Network performance when changing the warping hyper-parameters.

Figure 10: Neural Cleanse against the all-to-all scenario.

Figure 11: Fine-pruning against the all-to-all scenario.

Figure 12: STRIP against the all-to-all scenario.

Figure 14: Spectral Signature.
Table 1: Datasets and the classifiers used in our experiments. Each ConvBlock consists of a 3×3 convolution (stride=2), a BatchNorm, and a ReLU layer.

Dataset   | Subjects        | #Classes | Input Size  | #Train. Images | Classifier
MNIST     | Written digits  | 10       | 28 × 28 × 1 | 60,000         | 3 ConvBlocks, 2 fcs
CIFAR-10  | General objects | 10       | 32 × 32 × 3 | 50,000         | PreActRes18
GTSRB     | Traffic signs   | 43       | 32 × 32 × 3 | 39,252         | PreActRes18
CelebA    | Face attributes | 8        | 64 × 64 × 3 | 202,599        | ResNet18
Fooling rate (%) | Patched | Blended | SIG | ReFool | WaNet
Backdoor inputs  | 8.7     | 1.4     | 2.7 | 2.3    | 38.6
Clean inputs     | 6.1     | 10.1    | 2.6 | 13.1   | 17.4
All inputs       | 7.4     | 5.7     | 2.6 | 7.7    | 28.0
Table 2: Detailed architecture of the MNIST classifier. * means the layer is followed by a Dropout layer. † means the layer is followed by a BatchNormalization layer.

Layer    | Filter | Filter Size | Stride | Padding | Activation
Conv2d † | 32     | 3 × 3       | 2      | 1       | ReLU
Conv2d † | 64     | 3 × 3       | 2      | 0       | ReLU
Conv2d   | 64     | 3 × 3       | 2      | 0       | ReLU
Linear * | 512    | -           | -      | 0       | ReLU
Linear   | 10     | -           | -      | 0       | Softmax

CIFAR10 and GTSRB
For the CIFAR-10 and GTSRB datasets, we use the PreActRes18 (He et al., 2016) architecture as the classification network.

CelebA
For the CelebA dataset, we use the ResNet18 (He et al., 2016) architecture as the classification network.

A.1.3 RUNNING TIME
We use a system with an RTX 2080Ti GPU and an i7 9700K CPU to conduct our experiments. The detailed inference time of each module is shown below.

Table 3: Inference time of our modules.

            | MNIST   | CIFAR10  | GTSRB    | CelebA
time/sample | 4.37 µs | 18.64 µs | 18.65 µs | 87.51 µs
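For concreteness, the architecture of Table 2 can be written in PyTorch roughly as below; the dropout rate and the exact placement of Dropout relative to ReLU are not specified in the paper, so the choices here are placeholders:

import torch.nn as nn

mnist_classifier = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1),
    nn.BatchNorm2d(32), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=0),
    nn.BatchNorm2d(64), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=0),
    nn.ReLU(),
    nn.Flatten(),                  # a 28x28 input shrinks to 64 x 2 x 2 here
    nn.Linear(64 * 2 * 2, 512),
    nn.Dropout(0.5),               # placeholder rate
    nn.ReLU(),
    nn.Linear(512, 10),
    nn.Softmax(dim=1),
)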
Table 4: All-to-all attack results.

Dataset  | Clean | Attack | Noise
MNIST    | 99.44 | 95.90  | 94.34
CIFAR-10 | 94.43 | 93.36  | 91.47
GTSRB    | 99.39 | 98.31  | 98.96
CelebA   | 78.73 | 78.58  | 76.12
Table 5: The stability of WaNet on the CIFAR-10 dataset.

             | Clean        | Backdoor     | Noise
Accuracy (%) | 94.42 ± 0.08 | 99.40 ± 0.21 | 93.16 ± 0.43
[1] Source code of the experiments will be publicly available.
Mauro Barni, Kassem Kallas, and Benedetta Tondi. A new backdoor attack in CNNs by training set corruption without label poisoning. In 2019 IEEE International Conference on Image Processing (ICIP), pp. 101-105. IEEE, 2019.
Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, and Dawn Song. Targeted backdoor attacks on deep learning systems using data poisoning. arXiv preprint arXiv:1712.05526, 2017.
Hao Cheng, Kaidi Xu, Sijia Liu, Pin-Yu Chen, Pu Zhao, and Xue Lin. Defending against backdoor attack on deep neural networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining Workshop, 2019.
Jiazhu Dai, Chuanshuai Chen, and Yufeng Li. A backdoor attack against LSTM-based text classification systems. IEEE Access, 7:138872-138878, 2019.
Bao Gia Doan, Ehsan Abbasnejad, and Damith C. Ranasinghe. Februus: Input purification defense against trojan attacks on deep neural network systems. arXiv, Aug 2019. URL https://arxiv.org/abs/1908.03369.
Jean Duchon. Splines minimizing rotation-invariant semi-norms in Sobolev spaces. In Constructive Theory of Functions of Several Variables, pp. 85-100. Springer, 1977.
Yansong Gao, Change Xu, Derui Wang, Shiping Chen, Damith C Ranasinghe, and Surya Nepal. STRIP: A defence against trojan attacks on deep neural networks. In Proceedings of the 35th Annual Computer Security Applications Conference, pp. 113-125, 2019.
Tianyu Gu, Brendan Dolan-Gavitt, and Siddharth Garg. BadNets: Identifying vulnerabilities in the machine learning model supply chain. In Proceedings of Machine Learning and Computer Security Workshop, 2017.
Ronan Hamon, Henrik Junklewitz, and Ignacio Sanchez. Robustness and explainability of artificial intelligence. Publications Office of the European Union, 2020.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Identity mappings in deep residual networks. In European Conference on Computer Vision, pp. 630-645. Springer, 2016.
Max Jaderberg, Karen Simonyan, Andrew Zisserman, et al. Spatial transformer networks. In Advances in Neural Information Processing Systems, pp. 2017-2025, 2015.
Liu Kang. pytorch-cifar, May 2020. URL https://github.com/kuangliu/pytorch-cifar. [Online; accessed 4 Jun. 2020].
Soheil Kolouri, Aniruddha Saha, Hamed Pirsiavash, and Heiko Hoffmann. Universal litmus patterns: Revealing backdoor attacks in CNNs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 301-310, 2020.
Alex Krizhevsky et al. Learning multiple layers of features from tiny images. 2009.
Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278-2324, 1998.
Kang Liu, Brendan Dolan-Gavitt, and Siddharth Garg. Fine-pruning: Defending against backdooring attacks on deep neural networks. In Proceedings of the International Symposium on Research in Attacks, Intrusions, and Defenses, 2018a.
Yingqi Liu, Shiqing Ma, Yousra Aafer, Wen-Chuan Lee, Juan Zhai, Weihang Wang, and Xiangyu Zhang. Trojaning attack on neural networks. In Proceedings of the Network and Distributed System Security Symposium, 2018b.
Yingqi Liu, Wen-Chuan Lee, Guanhong Tao, Shiqing Ma, Yousra Aafer, and Xiangyu Zhang. ABS: Scanning neural networks for back-doors by artificial brain stimulation. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, pp. 1265-1282, 2019.
Yunfei Liu, Xingjun Ma, James Bailey, and Feng Lu. Reflection backdoor: A natural backdoor attack on deep neural networks. 2020.
Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In Proceedings of the International Conference on Computer Vision (ICCV), December 2015.
Tuan Anh Nguyen and Anh Tran. Input-aware dynamic backdoor attack. In H. Larochelle, M. Ranzato, R. Hadsell, M. F. Balcan, and H. Lin (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 3454-3464. Curran Associates, Inc., 2020. URL https://proceedings.neurips.cc/paper/2020/file/234e691320c0ad5b45ee3c96d0d7b8f8-Paper.pdf.
Ahmed Salem, Rui Wen, Michael Backes, Shiqing Ma, and Yang Zhang. Dynamic backdoor attacks against machine learning models. arXiv preprint arXiv:2003.03675, 2020.
Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 618-626, 2017.
Johannes Stallkamp, Marc Schlipsing, Jan Salmen, and Christian Igel. Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition. Neural Networks, 32:323-332, 2012.
Brandon Tran, Jerry Li, and Aleksander Madry. Spectral signatures in backdoor attacks. In Proceedings of Advances in Neural Information Processing Systems, 2018.
Alexander Turner, Dimitris Tsipras, and Aleksander Madry. Clean-label backdoor attacks. https://people.csail.mit.edu/madry/lab/, 2019.
Sakshi Udeshi, Shanshan Peng, Gerald Woo, Lionell Loh, Louth Rawshan, and Sudipta Chattopadhyay. Model agnostic defence against backdoor attacks in machine learning. arXiv preprint arXiv:1908.02203, 2019.
Bolun Wang, Yuanshun Yao, Shawn Shan, Huiying Li, Bimal Viswanath, Haitao Zheng, and Ben Y Zhao. Neural Cleanse: Identifying and mitigating backdoor attacks in neural networks. In Proceedings of the 40th IEEE Symposium on Security and Privacy, 2019.
Yuanshun Yao, Huiying Li, Haitao Zheng, and Ben Y Zhao. Latent backdoor attacks on deep neural networks. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, pp. 2041-2055, 2019.
Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 2018.
Pu Zhao, Pin-Yu Chen, Payel Das, Karthikeyan Natesan Ramamurthy, and Xue Lin. Bridging mode connectivity in loss landscapes and adversarial robustness. In International Conference on Learning Representations, 2019.
| [
"https://github.com/VinAIResearch/",
"https://github.com/kuangliu/"
] |
[
"A Straightforward Framework For Video Retrieval Using CLIP",
"A Straightforward Framework For Video Retrieval Using CLIP"
] | [
"Jesús Andrés \nSchool of Engineering and Sciences\nPortillo-Quintero\nTecnologico de Monterrey Av. Eugenio Garza Sada 25010000−0002, 9856−1900, 64849−, MonterreyNLMexico\n"
] | [
"School of Engineering and Sciences\nPortillo-Quintero\nTecnologico de Monterrey Av. Eugenio Garza Sada 25010000−0002, 9856−1900, 64849−, MonterreyNLMexico"
] | [] | Video Retrieval is a challenging task where a text query is matched to a video or vice versa. Most of the existing approaches for addressing such a problem rely on annotations made by the users. Although simple, this approach is not always feasible in practice. In this work, we explore the application of the language-image model, CLIP, to obtain video representations without the need for said annotations. This model was explicitly trained to learn a common space where images and text can be compared. Using various techniques described in this document, we extended its application to videos, obtaining state-of-the-art results on the MSR-VTT and MSVD benchmarks 1 . | 10.1007/978-3-030-77004-4_1 | [
"https://arxiv.org/pdf/2102.12443v2.pdf"
] | 232,035,662 | 2102.12443 | 79ae2fb0985204b3f5bf8dddf47a24a5df88b24d |
A Straightforward Framework For Video Retrieval Using CLIP
Jesús Andrés Portillo-Quintero [0000-0002-9856-1900], José Carlos Ortiz-Bayliss [0000-0003-3408-2166], and Hugo Terashima-Marín [0000-0002-5320-0773]
School of Engineering and Sciences
Tecnologico de Monterrey, Av. Eugenio Garza Sada 2501, 64849 Monterrey, NL, Mexico
Video Retrieval is a challenging task where a text query is matched to a video or vice versa. Most of the existing approaches for addressing such a problem rely on annotations made by the users. Although simple, this approach is not always feasible in practice. In this work, we explore the application of the language-image model, CLIP, to obtain video representations without the need for said annotations. This model was explicitly trained to learn a common space where images and text can be compared. Using various techniques described in this document, we extended its application to videos, obtaining state-of-the-art results on the MSR-VTT and MSVD benchmarks 1 .
Introduction
Video is one of the most consumed forms of media available on the internet. The high consumption of this type of media requires finding suitable methods for locating videos that contain one or more features desired by the users. Most video browsers rely on annotations made by users to identify video contents. Although this solution is simple to implement, it comes at a high price: relying on annotations to perform a query on videos requires an extensive description of each video's contents and context, and this information may not be made available. Thus, a video retrieval system that can handle users' queries without the need for such annotations represents a relevant topic of study.
This document describes a video retrieval model which, as its name implies, can retrieve the videos from a collection that are best described by a particular query (text). For example, "A woman is running" should return videos that contain women running. Given that the video retrieval architecture estimates the similarity between video and text, it can also be used to perform the video-to-text retrieval (VTR) task, which consists of returning the captions that best describe a query (video) from a set of description candidates. In either task, the goal of the system is, given a query and a set of video-text pairs, to return the rank at which the corresponding opposite modality is positioned.

The text-to-video retrieval (TVR) and VTR tasks can be seen as a method by which video and text contents are funnelled into fixed-length representations using an embedding function. Since both projections fall in the same dimensional space, a similarity score can be applied, which consequently can be used to rank elements from a set of prospects. Given that the similarity metrics between text-video and video-text are equal, TVR and VTR are considered inverse operations; they only depend on the modality of the input prompt.
Some works extensively focus on the video representation by adding pretrained models considered "experts". Each "expert" focuses on specific video contents such as sound, face detection, motion, among others. The information from all the experts is multiplexed by a complex gating mechanism [5,7]. Instead of starting from an elaborated video representation to train a common visual-text space, we propose to use a learned visual-text space to build a video representation. Similarly to Mithun et al. [12], our approach consists of using pre-trained models that measure the similarity between image and text. Then, we extend this idea to handle videos. We experimented with several aggregation methods to comply with the extra temporal dimension.
In this work, we choose CLIP as the base image-text model. CLIP is a state-of-the-art neural network pre-trained on image-text pairs [14]. CLIP has proved that similarity learning can be used to train a visual encoder for downstream tasks such as classification, captioning, and clustering, to mention some. We harness the power of its visual representations to create a video representation that can be used directly with its original text encoder to bootstrap a neural network model for video retrieval. Since our work focuses on aggregation strategies over image features, our method is tested zero-shot on the evaluation datasets; hence, no parameter fine-tuning is exercised to improve the retrieval results.
The remainder of this document is organized as follows. In Section 2 we provide the foundations of this investigation and an overview of the most relevant related works. Section 3 describes the experiments conducted, their main results and their discussion. Finally, in Section 4 we present the conclusion and some ideas that may be worth exploring as part of the future work.
Background and Related Work
The work presented in this document is related to strategies used to construct a video encoder for video retrieval. It is straightforward to think that image features can serve as a proxy for video representations. In fact, Karpathy et al. [6] observed that a Convolutional Neural Network (CNN) feature from a single frame could be discriminative enough for video classification, achieving just 1.3 fewer percentage points of accuracy than the top model from the same work, which included more visual and temporal information.
Mithun et al. [12] proved that it was possible to supersede the then state-of-the-art video retrieval model by averaging the visual features obtained from an image-text model. This practice has since been implemented in novel models, along with more elaborate video representations. For instance, the state-of-the-art in video retrieval has been pushed by models that implement a Mixture-of-Experts (MoE) paradigm [5,7,10,13]. The MoE approach proposes a complex video representation by multiplexing the outputs of several pre-trained models (known as "experts") that attend to particular aspects of video such as motion, face detection, and character recognition, among others.
In this regard, we are aware that up to seven experts have been included in a single video retrieval model [5]. Nonetheless, the current state-of-the-art implements a mixture of only two experts, indicating that video-text representations may do without the added complexity that multiple experts convey [13]. Patrick et al. [13] argue that the contrastive training used by most video retrieval systems encourages repulsive forces between independent but similar examples. To alleviate this, they use a support set containing positive examples for each data point in a training batch, so the common video-text space must learn concept sharing. Contrastive training has nonetheless proved successful in image and video representation learning [2,9].

Contrastive training is a regime in which a model is induced to pull similar data points together and push dissimilar ones apart in a latent space. It is the foundational mechanism of the Contrastive Language-Image Pretraining (CLIP) model used in this work. As its name states, the model is pre-trained on 400 million image-text pairs collected from the Internet. As a siamese neural network, it is composed of an image encoder (ViT-B/32) and a text encoder (a transformer) that funnel information into a common space where objects can be compared using cosine similarity [14].
Experiment and Results
This section first gives a mathematical description of CLIP and of how we can use it for VTR or TVR. We then describe the datasets and metrics considered for this work, detail the experiments and their main results, and close with a brief discussion of the most relevant findings.
CLIP as Video Representation
By using CLIP we obtain the pre-trained functions ω(u) = w and φ(t) = c_t, which encode an image u and a text t into w, c_t ∈ ℝ^d, where d = 512. Assume a video v is composed of s sampled frames, v = {u_1, u_2, ..., u_s}. Consequently, we can collect the embedding of each frame into a matrix W ∈ ℝ^(d×s), W = [ω(u_1) = w_1, w_2, ..., w_s]. The problem we try to solve is therefore to find an aggregation function Λ that maps the input W ∈ ℝ^(d×s) into a video representation c_v ∈ ℝ^d. Then, with video and text representations c_v and c_t, we can compute the cosine similarity (Equation 1), which is useful for ranking the video-text pairs inside a dataset given a query of a specific modality.
$$\mathrm{sim}(a, b) = \frac{a^{T} b}{\lVert a \rVert\, \lVert b \rVert} \quad (1)$$
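As a concrete illustration of this pipeline, the following minimal sketch assumes OpenAI's public `clip` package (https://github.com/openai/CLIP) and its ViT-B/32 checkpoint, with a hypothetical list of PIL frames standing in for the sampled video; it is an illustrative sketch, not the authors' released code.

```python
# Minimal sketch of CLIP as a video/text encoder (assumes the public
# "clip" package and a list of PIL.Image frames sampled from a video).
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def encode_frames(frames):
    """omega(u) applied frame-wise: returns W with shape (d, s), d = 512."""
    batch = torch.stack([preprocess(f) for f in frames]).to(device)
    with torch.no_grad():
        W = model.encode_image(batch)   # shape (s, d)
    return W.T                          # shape (d, s)

def encode_text(sentence):
    """phi(t): returns the caption embedding c_t with shape (d,)."""
    tokens = clip.tokenize([sentence]).to(device)
    with torch.no_grad():
        return model.encode_text(tokens)[0]

def sim(a, b):
    """Equation (1): cosine similarity between two embeddings."""
    return torch.dot(a, b) / (a.norm() * b.norm())
```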
Datasets
The proposed framework assumes a set C of videos and corresponding caption pairs of the form

$$C = \{\{(v_i, t_{ij})\}_{j=1}^{m(v_i)}\}_{i=1}^{n},$$
where the number of captions per video may be non-uniform, hence m is a function of v. By design, some datasets are split into sections used for training and validation of results. For the preliminary experiments, we use the training splits to prove our hypothesis, but final results are reported on tests split of their respective datasets.
The datasets involved in this work are listed below.
MSR-VTT is a dataset composed of 10,000 videos, each with a length that ranges from ten to 32 seconds, and 200,000 captions [18]. Its training, validation and test splits are composed of 6,513, 497 and 2,990 videos, respectively, with 20 corresponding descriptions each. MSVD contains 1,970 videos, each with a length that ranges from one to 62 seconds; each video has approximately 40 associated sentences in English, and its train, validation and test splits contain 1,200, 100 and 670 videos, respectively. LSMDC comprises 118,081 videos, each with a length that ranges from two to 30 seconds, extracted from 202 movies. Its validation set contains 7,408 videos, and its test set 1,000 videos from movies independent of the training and validation splits [15].
All the frames were sampled from each video of the previously mentioned datasets to extract the frame features. Other datasets related to this work, but which cannot be used here, include WIT (WebImageText) [14] and HT100M [11]. WIT is composed of the 400 million image-text pairs on which CLIP was trained; since it contains no videos, it cannot be used as a benchmark for video retrieval. HT100M is a dataset of 100 million video-text pairs, used only as a pre-training set in other video retrieval works [5,11,13,16].
Metrics
To conduct our experiments, we follow the testing methodologies used in previous works [5,7] and report standard retrieval metrics. For median rank (MdR), mean rank (MnR) and standard deviation of rank (StdR), the lower the value, the better the performance. In the case of recall at rank k (R@k, where k = {1, 5, 10}), the higher the value, the better the performance. For datasets that involve multiple sentences per video, such as the Full split of MSR-VTT and the MSVD test set, we follow the protocol used by Liu et al. [7] and use the minimum rank among all sentences associated with a given video query.
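For reference, a small sketch of these metrics is given below; it assumes that `ranks` already holds, for each query, the 1-indexed rank of its ground-truth match, with the minimum-rank protocol for multi-caption videos applied beforehand.

```python
# Sketch of the standard retrieval metrics reported in this work.
import numpy as np

def retrieval_metrics(ranks):
    ranks = np.asarray(ranks, dtype=float)
    return {
        "R@1":  100.0 * np.mean(ranks <= 1),
        "R@5":  100.0 * np.mean(ranks <= 5),
        "R@10": 100.0 * np.mean(ranks <= 10),
        "MdR":  float(np.median(ranks)),  # median rank (lower is better)
        "MnR":  float(np.mean(ranks)),    # mean rank
        "StdR": float(np.std(ranks)),     # standard deviation of rank
    }
```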
Exploratory Experiments
In the exploratory experiments, we empirically define two candidates for the frame-level aggregation function Λ. We conduct this set of preliminary experiments on a validation sample comprised of 1,000 video-text pairs from MSR-VTT. The first frame-level aggregation function is based on the idea that it is feasible to obtain reasonable video representations by considering only one frame sample [6]. Given the feature matrix W ∈ ℝ^(d×s), we define Λ_s(W) = W_30 ∈ ℝ^d as the function that returns the features of the 30th frame. Since these videos contain approximately 30 frames per second, this is equivalent to sampling a frame from the first second of the video.
A second candidate for an aggregation function is proposed by Mithun et al. [12], who suggest that the average of frame-level features can be used as an approximation of the video representation. This method has been used extensively in other retrieval works [5,7,9,11,13]. Consequently, we define Λ_avg(W) = w̄ ∈ ℝ^d, where w̄ is the average of the columns of W.

Given that videos present dynamic events, in which several sequences of frames can represent completely different things, we also use k-means as an aggregation method [17]. With this implementation, the aggregation function takes the form Λ_k(W) = W̃ ∈ ℝ^(d×k), which returns k video embeddings (the cluster centroids). For evaluation purposes, we repeat the ranking procedure with the k obtained independent video representations, register each query's minimum rank, and then calculate the retrieval metrics. A sketch of the three candidates is given below.
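The three candidate aggregation functions can be sketched as follows; the scikit-learn k-means call is an implementation choice of this sketch, since the text does not specify a particular clustering library.

```python
# Sketch of the candidate aggregation functions over W with shape (d, s).
import numpy as np
from sklearn.cluster import KMeans

def agg_single_frame(W, index=29):
    """Lambda_s: features of the 30th frame (0-based index 29)."""
    return W[:, index]

def agg_average(W):
    """Lambda_avg: column-wise mean of the frame features."""
    return W.mean(axis=1)

def agg_kmeans(W, k=3):
    """Lambda_k: k cluster centroids, giving k candidate video embeddings."""
    km = KMeans(n_clusters=k, n_init=10).fit(W.T)  # samples are frames
    return km.cluster_centers_.T                   # shape (d, k)
```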
Based on the results depicted in Table 1, the average-based method obtains the best results in terms of the metrics used. It is noticeable that, among the k-means methods, there is no significant difference between the results. This may be because MSR-VTT videos do not exceed 32 seconds in length, which may not be enough to differentiate the centroids when creating the clusters. Regarding the aggregation method, we appeal to Occam's razor and select Λ_avg for further experiments, since it accomplishes a performance similar to the k-means-based aggregation methods but with a lower computational cost.
Confirmatory Experiments
This section compares our video retrieval model against the state-of-the-art on the MSR-VTT, MSVD and LSMDC datasets. In all cases, we evaluate both the TVR and VTR tasks.
On MSR-VTT, we surpass the R@1 score of the previous best model, SSB [13], on the 1k-A split for the TVR task, although we remain behind previous works on the other recall metrics (Table 2). Moreover, we consistently achieve state-of-the-art results on all the recall metrics of the Full split of MSR-VTT. On the MSVD dataset, we obtain state-of-the-art results on most of the retrieval metrics (Table 3). We suppose that models based on MoE, such as SSB [13] and CE [7], cannot use all of their implemented experts because the videos in MSVD lack audio information, so they have to rely only on visual features. On LSMDC, we do not obtain state-of-the-art results, but we are positioned second-best (Table 4). Given that the video descriptions in this dataset do not follow the form of a typical sentence, as they are designed to teach a model to recognize characters and interactions between movie scenes, we commend the robustness of CLIP's text encoder, which could adapt to a new sentence schema.
Discussion
Although we obtain outstanding results on different metrics and datasets, some points are worth discussing. Our original supposition was that the ranking worsens as the video gets longer, and to confirm or reject this idea we produced Figure 1, which depicts the video length in seconds (x-axis) and the rank assigned to it (y-axis). As a video gets longer, we expected that it would be more difficult for the video representation to capture the temporal elements, and hence that it would be ranked worse. However, the experiment conducted on the 1k-A set from MSR-VTT shows that ranking varies wildly, independently of video length (at least for the lengths present in the dataset).

When we looked at the worst-ranked video-text pairs, we noticed that several sentences incorporated phrases like "a family is having a conversation" or "a man talking about a woman", hinting that sentences mainly describing audio content are ranked worse. This conclusion is reinforced by the fact that our model scored best on MSVD, a dataset that by design does not contain any audio track and whose text descriptions are based on what can be visualized.
Conclusion and Future Work
This work presents the first implementation of CLIP to obtain video features. Our method works by leveraging its learned common image-text space without any parameter fine-tuning (Zero-Shot). We apply an aggregation function to frame-level features, as is common in other video retrieval works. Although our work focuses only on the visual and text modalities, it supersedes methods that implement a complex mixture of pre-trained models, obtaining state-of-the-art results on the MSVD and MSR-VTT datasets.

One potential application of this CLIP-derived implementation is to retrieve specific moments inside videos. Also, it is yet unseen how our video representation will behave if tested as a video classifier. This methodology might also be used to create a CLIP-based video representation for longer durations; for example, other works have used frame features to construct a graph that can change through time [8], and such a representation could keep the strong text alignment suitable for video retrieval. Finally, our work can be used as an expert in a future MoE video retrieval system.
Fig. 1. Scatter plot of video length and assigned rank on the TVR task on the 1k-A test split. The red line represents the median rank.
18. Xu, J., Mei, T., Yao, T., Rui, Y.: MSR-VTT: A large video description dataset for bridging video and language. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 5288-5296 (2016)
19. Yu, Y., Kim, J., Kim, G.: A joint sequence fusion model for video question answering and retrieval. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 471-487 (2018)
1 The test set of MSR-VTT has been used in different ways in the literature. We refer to two common variations as Full [7], containing all 2,990 videos in the MSR-VTT test set, and 1k-A [19], containing only 1,000 of those 2,990 videos.
Table 1. Text-to-Video Retrieval results on the MSR-VTT validation set, using different aggregation functions.

Λ    | R@1  | R@5  | R@10 | MdR | MnR   | StdR
Λs   | 24.9 | 46.1 | 56.9 | 7.0 | 64.61 | 149.21
Λavg | 35.4 | 58.0 | 67.2 | 3.0 | 39.81 | 111.43
Λ2   | 34.3 | 57.8 | 66.5 | 3.0 | 40.23 | 112.85
Λ3   | 34.4 | 57.7 | 66.6 | 3.0 | 39.77 | 110.69
Λ4   | 33.7 | 58.4 | 66.9 | 3.0 | 37.98 | 107.53
Λ5   | 34.4 | 57.6 | 66.1 | 3.0 | 38.44 | 108.02
Λ6   | 34.9 | 58.4 | 67.6 | 3.5 | 37.44 | 108.34
Λ7   | 35.3 | 58.1 | 67.5 | 4.0 | 38.33 | 107.88
Λ8   | 33.9 | 57.7 | 67.9 | 3.0 | 38.23 | 107.32
Λ9   | 33.9 | 57.2 | 67.1 | 3.0 | 37.87 | 108.23
Λ10  | 35.0 | 57.8 | 68.0 | 3.0 | 37.26 | 107.34
Table 2. TVR and VTR results in the MSR-VTT dataset. M, H and W denote training on MSR-VTT, HT100M and WIT, respectively.

                                     TVR                    VTR
Method          | Training | Test Set | R@1  R@5  R@10 MdR  | R@1  R@5  R@10 MdR
JSFusion [19]   | M        | 1k-A     | 10.2 31.2 43.2 13   | -    -    -    -
HT100M [11]     | H+M      | 1k-A     | 14.9 40.2 52.8 9    | 16.8 41.7 55.1 8
CE [7]          | M        | 1k-A     | 20.9 48.8 62.4 6    | 20.6 50.3 64.0 5.3
AVLnet [16]     | H+M      | 1k-A     | 27.1 55.6 66.6 4    | 28.5 54.6 65.2 4
MMT [5]         | H+M      | 1k-A     | 26.6 57.1 69.6 4    | 27.0 57.5 69.7 3.7
SSB [13]        | H+M      | 1k-A     | 30.1 58.5 69.3 3    | 28.5 58.6 71.6 3
CLIP            | W        | 1k-A     | 31.2 53.7 64.2 4    | 27.2 51.7 62.6 5
VSE [12]        | M        | Full     | 5.0  16.4 24.6 47   | 7.7  20.3 31.2 28
VSE++ [12]      | M        | Full     | 5.7  17.1 24.8 65   | 10.2 25.4 35.1 25
Multi Cues [12] | M        | Full     | 7.0  20.9 29.7 38   | 12.5 32.1 42.4 16
W2VV [3]        | M        | Full     | 6.1  18.7 27.5 45   | 11.8 28.9 39.1 21
Dual Enc. [4]   | M        | Full     | 7.7  22.0 31.8 32   | 13.0 30.8 43.3 15
E2E [9]         | M        | Full     | 9.9  24.0 32.4 29.5 | -    -    -    -
CE [7]          | M        | Full     | 10.0 29.0 42.2 16   | 15.6 40.9 55.2 8.3
CLIP            | W        | Full     | 21.4 41.1 50.4 10   | 40.3 69.7 79.2 2

Table 3. TVR and VTR results in the MSVD dataset. D, H and W denote training on MSVD, HT100M and WIT, respectively.

                                           TVR                  VTR
Method                      | Training | R@1  R@5  R@10 MdR | R@1  R@5  R@10 MdR
VSE [12]                    | D        | 12.3 30.1 42.3 14  | 34.7 59.9 70.0 3
VSE++ [12]                  | D        | 15.4 39.6 53.0 9   | -    -    -    -
Multi Cues [12]             | D        | 20.3 47.8 61.1 6   | -    -    -    -
CE [7]                      | D        | 19.8 49.0 63.8 6   | -    -    -    -
Support-set Bottleneck [13] | H+D      | 28.4 60.0 72.9 4   | -    -    -    -
CLIP                        | W        | 37.0 64.1 73.8 3   | 59.9 85.2 90.7 1

Table 4. TVR and VTR results in the LSMDC dataset. L, H and W denote training on LSMDC, HT100M and WIT, respectively.

                             TVR                   VTR
Method        | Training | R@1  R@5  R@10 MdR  | R@1  R@5  R@10 MdR
JSFusion [19] | L        | 9.1  21.2 34.1 36   | 12.3 28.6 38.9 20
CE [7]        | L        | 11.2 26.9 34.8 25.3 | -    -    -    -
MMT [5]       | H+L      | 12.9 29.9 40.1 19.3 | -    -    -    -
CLIP          | W        | 11.3 22.7 29.2 56.5 | 6.8  16.4 22.1 73
Acknowledgments. This research was partially supported by ITESM Research Group with Strategic Focus on Intelligent Systems.
1. Chen, D., Dolan, W.B.: Collecting highly parallel data for paraphrase evaluation. In: Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. pp. 190-200 (2011)
2. Chen, T., Kornblith, S., Norouzi, M., Hinton, G.: A simple framework for contrastive learning of visual representations. In: International Conference on Machine Learning. pp. 1597-1607. PMLR (2020)
3. Dong, J., Li, X., Snoek, C.G.: Predicting visual features from text for image and video caption retrieval. IEEE Transactions on Multimedia 20(12), 3377-3388 (2018)
4. Dong, J., Li, X., Xu, C., Ji, S., He, Y., Yang, G., Wang, X.: Dual encoding for zero-example video retrieval. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 9346-9355 (2019)
5. Gabeur, V., Sun, C., Alahari, K., Schmid, C.: Multi-modal transformer for video retrieval. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M. (eds.) Computer Vision - ECCV 2020. pp. 214-229. Springer International Publishing, Cham (2020)
6. Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., Fei-Fei, L.: Large-scale video classification with convolutional neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 1725-1732 (2014)
7. Liu, Y., Albanie, S., Nagrani, A., Zisserman, A.: Use what you have: Video retrieval using representations from collaborative experts (2020)
8. Mao, F., Wu, X., Xue, H., Zhang, R.: Hierarchical video frame sequence representation with deep convolutional graph network. In: Proceedings of the European Conference on Computer Vision (ECCV) Workshops (2018)
9. Miech, A., Alayrac, J.B., Smaira, L., Laptev, I., Sivic, J., Zisserman, A.: End-to-end learning of visual representations from uncurated instructional videos. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 9879-9889 (2020)
10. Miech, A., Laptev, I., Sivic, J.: Learning a text-video embedding from incomplete and heterogeneous data. arXiv:1804.02516 (2018)
11. Miech, A., Zhukov, D., Alayrac, J.B., Tapaswi, M., Laptev, I., Sivic, J.: HowTo100M: Learning a text-video embedding by watching hundred million narrated video clips. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 2630-2640 (2019)
12. Mithun, N.C., Li, J., Metze, F., Roy-Chowdhury, A.K.: Learning joint embedding with multimodal cues for cross-modal video-text retrieval. In: Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval. pp. 19-27 (2018)
13. Patrick, M., Huang, P.Y., Asano, Y., Metze, F., Hauptmann, A.G., Henriques, J.F., Vedaldi, A.: Support-set bottlenecks for video-text representation learning. In: International Conference on Learning Representations (2021), https://openreview.net/forum?id=EqoXe2zmhrh
14. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision (2021)
15. Rohrbach, A., Rohrbach, M., Tandon, N., Schiele, B.: A dataset for movie description. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3202-3212 (2015)
16. Rouditchenko, A., Boggust, A., Harwath, D., Joshi, D., Thomas, S., Audhkhasi, K., Feris, R., Kingsbury, B., Picheny, M., Torralba, A., Glass, J.: AVLnet: Learning audio-visual language representations from instructional videos (2020)
17. Sun, C., Myers, A., Vondrick, C., Murphy, K., Schmid, C.: VideoBERT: A joint model for video and language representation learning. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 7464-7473 (2019)
| [] |
[
"Geometric phase and a nonreciprocal spin wave circular polarizer",
"Geometric phase and a nonreciprocal spin wave circular polarizer"
] | [
"Yu Liu ",
"Jin Lan ",
"\nCenter for Joint Quantum Studies\nDepartment of Physics\nSchool of Science\nand Tianjin Key Laboratory of Low Dimensional Materials Physics and Preparing Technology\nTianjin University\n92 Weijin Road300072TianjinChina\n",
"\nTianjin University\n300354TianjinChina\n"
] | [
"Center for Joint Quantum Studies\nDepartment of Physics\nSchool of Science\nand Tianjin Key Laboratory of Low Dimensional Materials Physics and Preparing Technology\nTianjin University\n92 Weijin Road300072TianjinChina",
"Tianjin University\n300354TianjinChina"
] | [] | We show that spin wave acquires a polarization-dependent geometric phase along a cyclic trajectory of noncoplanar magnetizations in antiferromagnets. Specifically, we demonstrate that a cyclic set of 90° antiferromagnetic domain walls simultaneously introduces geometric and dynamic phases to spin wave, and thus leads to an asymmetric magnitude of the overall phase for the left-/right-circular components. Based on the polarization-dependent phase, we propose theoretically and confirm by micromagnetic simulations that a Mach-Zehnder interferometer with cyclic 90° domain walls in one arm and a homogeneous domain in the other arm naturally acts as a spin wave circular polarizer. Moreover, the circular polarizer has intrinsic nonreciprocity, which filters opposite polarizations in opposite propagation directions. | null | [
"https://export.arxiv.org/pdf/2304.08071v1.pdf"
] | 258,178,994 | 2304.08071 | 3b780d2bd170ab7a224a0f8da33951ff18abd9da |
Geometric phase and a nonreciprocal spin wave circular polarizer

Yu Liu and Jin Lan

Center for Joint Quantum Studies, Department of Physics, School of Science, and Tianjin Key Laboratory of Low Dimensional Materials Physics and Preparing Technology, Tianjin University, 92 Weijin Road, Tianjin 300072, China
Tianjin University, Tianjin 300354, China
We show that spin wave acquires a polarization-dependent geometric phase along a cyclic trajectory of noncoplanar magnetizations in antiferromagnets. Specifically, we demonstrate that a cyclic set of 90° antiferromagnetic domain walls simultaneously introduces geometric and dynamic phases to spin wave, and thus leads to an asymmetric magnitude of the overall phase for the left-/right-circular components. Based on the polarization-dependent phase, we propose theoretically and confirm by micromagnetic simulations that a Mach-Zehnder interferometer with cyclic 90° domain walls in one arm and a homogeneous domain in the other arm naturally acts as a spin wave circular polarizer. Moreover, the circular polarizer has intrinsic nonreciprocity, which filters opposite polarizations in opposite propagation directions.
Introduction. Phase is the core property of all waves, including electromagnetic waves, acoustic waves, matter waves, gravitational waves, as well as spin waves. The wave phase naturally divides into two parts: the dynamical phase, characterizing the wave evolution rate, and the geometric phase, describing the geometric property of the wave system in parametric space [1-3]. Since its initial proposal, the concept of geometric phase has evolved rapidly and become the foundation of vast and diverse disciplines [4,5]. Exploitation of the geometric phase offers new possibilities in wave manipulation, including trajectory controlling, wavefront tailoring and polarization harnessing, beyond the physical limits imposed by the dynamical phase [6-13].
Spin wave, the collective precession of ordered magnetization, is an alternative angular momentum carrier besides the spin-polarized conduction electron [14-17]. The dynamical phase of spin wave can be tuned via multiple means, such as exerting a magnetic field [18,19] or an electric field [20-22], passing an electric current, placing magnetic impurities [23-25], coupling between two waveguides [26-28], and depositing magnetic domain walls [29-32]. Based on the dynamical phase shift, a plethora of logic and neuromorphic magnonic devices have been theoretically proposed or experimentally realized [19,33-35].

In contrast to the extensively investigated dynamical phase, the geometric phase of spin wave has been studied only in limited situations. The geometric phase is shown to develop in a magnetic ring [36], between two magnetic domain walls [37], or along a magnetic helix [38], where non-coplanar magnetization forms along the spin wave trajectory. However, a systematic formulation of the geometric phase for spin wave is still lacking, impeding the full exploitation of the geometric phase in the design of magnonic devices, let alone the collaborative leverage of geometric and dynamical phases.
In this work, we show that spin wave acquires a geometric phase across a cyclic set of non-coplanar 90° domain walls, beside the conventional dynamical phase. By virtue of the polarization-dependent geometric phase, we propose a spin wave circular polarizer, based on wave interference in a two-arm Mach-Zehnder structure. We further show that the functionality of the circular polarizer is highly reprogrammable by reversing the propagation direction, tuning the working frequency, or altering the magnetic states. Parallel wave processing, boosted by the fundamental superposition principle, is also demonstrated upon such a circular polarizer.
Polarized spin wave in antiferromagnets. Consider an antiferromagnet with the normalized Néel order denoted by the unit vector N, which naturally partitions into the static background magnetization n and the dynamical spin wave n′, N = n + n′. Under the unity constraints |N| = 1 and |n| = 1, together with the small-amplitude approximation |n′| ≪ 1, the transverse condition n · n′ = 0 is satisfied everywhere. Hence, it is instructive to formulate the spin wave in spherical coordinates as n′ = n_θ ê_θ + n_φ ê_φ, where ê_θ/φ are two orthogonal polarization directions transverse to the background magnetization ê_r ≡ n, and n_θ/φ are the corresponding wave components. Alternatively, in complex form, the spin wave reads n′ = Σ_{σ=±1} ψ_σ ξ_σ, where ξ_σ = (ê_θ + iσ ê_φ)/2 and ψ_σ = n_θ − iσ n_φ with σ = ±1 are the bases and components of the left-/right-circular polarizations, respectively.
Geometric phase of polarized spin wave. The SO(2) symmetry of the linear bases ê_θ/φ about the background magnetization gives rise to the U(1) symmetry of the circular bases ξ_±, or an indeterminate phase for the circular polarization. Hence, when a polarized spin wave travels along a closed trajectory of inhomogeneous magnetization n(s), parametrized by the arc length s, the circular basis may develop an additional geometric phase instead of restoring its original phase. Specifically, the relative phase of spin wave developed between n and n + dn is characterized by the Berry connection Λ_σσ′ = −i ξ†_σ · ∇_n ξ_σ′, which is diagonal in the circular bases with Λ_σσ′ = δ_σσ′ Λ_σ [39]. The accompanying Berry curvature is Ω_σ = ∇_n × Λ_σ = σn, resembling the field radiated from a monopole of strength σ located at n = 0. The evolution of the circular bases is then governed by ∂_s ξ_σ = −i(Λ_σ · ∂_s n) ξ_σ, with solution ξ_σ = exp(iΦ^G_σχ) ξ⁰_σ, where ξ⁰_σ is the initial circular basis, and χ denotes the propagation direction along the trajectory in Fig. 1(a). The geometric phase accumulated in a closed trajectory of background magnetization is thus described by
$$\Delta\Phi^{G}_{\sigma\chi} = -\oint_{l} \boldsymbol{\Lambda}_{\sigma} \cdot d\mathbf{n} = -\sigma\chi\Theta, \quad (1)$$
where Θ is the magnitude of the solid angle enclosed by the trajectory l on the magnetic Bloch sphere, and χ = ±1 corresponds to the anticlockwise/clockwise circulating direction, as depicted in Fig. 1(c). The geometric phase in Eq. (1) shares a similar form with the spin-redirectional phase of its optical counterpart [6,39,40], but the solid angle here is subtended by background magnetizations instead of optical wavevectors. In Eq. (1), the geometric phase ΔΦ^G_σχ flips sign when either the trajectory reverses its direction (χ → −χ) or the spin wave alters its circular polarization (σ → −σ), indicating its intrinsic chirality. Moreover, the opposite geometric phases ±Θ experienced by the two circular modes lead to a Faraday rotation of the linear bases,
$$\begin{pmatrix} \hat{e}_{\theta} \\ \hat{e}_{\phi} \end{pmatrix} = \begin{pmatrix} \cos\Theta & \chi\sin\Theta \\ -\chi\sin\Theta & \cos\Theta \end{pmatrix} \begin{pmatrix} \hat{e}^{0}_{\theta} \\ \hat{e}^{0}_{\phi} \end{pmatrix}, \quad (2)$$

where ê⁰_θ/φ are the initial directions.

Geometric phase across 90° domain walls. To elaborate the concept of geometric phase, we turn to an antiferromagnetic wire with magnetic cubic anisotropy [32]. The dynamics of the Néel order N is governed by the antiferromagnetic Landau-Lifshitz-Gilbert (LLG) equation [38,41,42],
$$\rho\,\mathbf{N} \times \ddot{\mathbf{N}} = -\mathbf{N} \times \gamma\mathbf{H} + \alpha\,\mathbf{N} \times \dot{\mathbf{N}}, \quad (3)$$
where ρ is the inertia of the antiferromagnetic dynamics, γ is the gyromagnetic ratio, and α is the Gilbert damping constant. Here H = −δU/δN is the effective field acting on the Néel order N, with the magnetic energy

$$U = \frac{1}{2}\int \left[ A(\nabla \mathbf{N})^{2} + K\left(N_x^{2} N_y^{2} + N_y^{2} N_z^{2} + N_x^{2} N_z^{2}\right) \right] dx,$$

where A is the exchange stiffness and K is the cubic anisotropy strength. The inertia is expressed by ρ = a²/(8γA), where a is the lattice constant. The dipolar field and a moderate easy-axis anisotropy do not change the main physics in this work, and are thus disregarded [43].
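As a concrete illustration, a minimal finite-difference sketch of the effective field H = −δU/δN on a one-dimensional chain is given below; the discretization, boundary conditions, and function name are assumptions of this sketch, not the actual simulation code (the results in this work are produced with Mumax3 [44]).

```python
# Finite-difference sketch of H = -dU/dN for the energy density
# (1/2)[A (grad N)^2 + K (Nx^2 Ny^2 + Ny^2 Nz^2 + Nx^2 Nz^2)] on a 1D chain.
import numpy as np

def effective_field(N, A, K, dx):
    """N has shape (L, 3) with |N_i| = 1 per site; returns H of same shape."""
    # exchange part: A * d^2 N / dx^2, periodic boundaries for simplicity
    lap = (np.roll(N, 1, axis=0) - 2.0 * N + np.roll(N, -1, axis=0)) / dx**2
    n2 = N**2
    # cubic anisotropy part: d/dN_i gives K * N_i * (N_j^2 + N_k^2),
    # with the 1/2 prefactor of U cancelling the factor 2 of the derivative
    anis = N * (n2.sum(axis=1, keepdims=True) - n2)
    return A * lap - K * anis
```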
Due to the cubic anisotropy, the magnetization direction of a homogeneous domain lies along one of the three Cartesian directions, n = x̂_i with i = {1, 2, 3}. When an orthogonally magnetized x̂_i-domain and x̂_j-domain meet, a 90° domain wall forms, with all magnetizations residing in the x̂_i-x̂_j plane. The non-coplanar magnetizations in three cyclically connected x̂_i-x̂_j-x̂_k-x̂_i domain walls subtend a solid angle of exactly Θ = π/2 in magnitude, as depicted in Fig. 2(b). Therefore, the linear-x/y modes interchange after traversing such a cyclic domain wall structure, according to Eq. (2), or following the parallel transport law on a magnetic Bloch sphere as depicted in Fig. 2(b).
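To make the solid-angle bookkeeping of Eq. (1) concrete, here is a small numeric sketch that computes Θ for a closed sequence of domain magnetizations via a triangle fan and the Van Oosterom-Strackee formula; the fan decomposition is an implementation choice of this sketch, not part of the original analysis.

```python
# Numeric sketch of the geometric phase of Eq. (1) for a closed path
# of unit vectors on the Bloch sphere.
import numpy as np

def triangle_solid_angle(n1, n2, n3):
    """Signed solid angle of a spherical triangle (Van Oosterom-Strackee)."""
    num = np.dot(n1, np.cross(n2, n3))
    den = 1.0 + np.dot(n1, n2) + np.dot(n2, n3) + np.dot(n3, n1)
    return 2.0 * np.arctan2(num, den)

def geometric_phase(path, sigma, chi):
    """Delta Phi^G = -sigma * chi * Theta for a closed path of unit vectors."""
    theta = sum(triangle_solid_angle(path[0], path[i], path[i + 1])
                for i in range(1, len(path) - 1))
    return -sigma * chi * theta

x, y, z = np.eye(3)
path = [z, x, y, z]                    # cyclic z-x-y-z domains
print(geometric_phase(path, +1, +1))   # -> -pi/2, i.e. Theta = pi/2
```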
The spin wave evolution is further investigated by micromagnetic simulations in Mumax3 [44], with the following magnetic parameters: the exchange coupling constant A = 2.1 × 10⁻¹¹ J/m, the gyromagnetic ratio γ = 2.21 × 10⁵ m/(A s), the cubic anisotropy K = 1.0 × 10⁴ J/m³, the damping constant α = 1.0 × 10⁻⁵, and the lattice constant a = 0.5 nm. An x̂-domain and a ŷ-domain are placed between ẑ-domains at the two sides of an antiferromagnetic wire in Fig. 2(a), and linearly polarized spin waves are injected from the left side. As shown in Fig. 2(c), after traversing the two intermediate x̂- and ŷ-domains, linear-x/y spin waves are braided by exploiting linear-z as the third state, similar to their optical and acoustic counterparts [45-47]. Nevertheless, non-Abelian braiding is absent, since only two independent polarization modes exist upon any background magnetization. We have checked that the polarization braiding, as a special case of polarization rotation, is independent of the spin wave frequency. In contrast, a linear-x/y spin wave remains unchanged for a trivial trajectory through the ẑ-x̂(ŷ)-ẑ domains, since the enclosed solid angle is zero therein [43].
Dynamical phase across a 90° domain wall. Besides introducing the geometric phase, a domain wall also modifies the dynamical phase by altering the spin wave dynamics [29,31,37].

Without loss of generality, we consider a 90° domain wall lying between an x̂-domain and a ŷ-domain, which adopts a Walker-type profile n(x) = (√[(1 − tanh(x/W))/2], √[(1 + tanh(x/W))/2], 0), with W = √(A/K) the characteristic width [32,48]. The spin wave dynamics upon such an x̂-ŷ type 90° domain wall is then recast from the LLG equation (3) into a Klein-Gordon-like equation
$$-\frac{\rho}{\gamma}\ddot{\psi} = \left[-A\partial_x^{2} + K + V(x)\right]\psi, \quad (4)$$
where V(x) = −(11/8) K sech²(x/W) is the effective potential well caused by the inhomogeneous magnetization within the domain wall. The deviation of V(x) from the celebrated Pöschl-Teller-type potential [30,49] originates from the additional contribution of the cubic anisotropy. The hybridization of the two circular modes caused by the cubic anisotropy, or the retarding effect between the two linear modes [32,43], is disregarded in Eq. (4) for model simplicity and compactness.

The spin wave scattering by a 90° domain wall is quantitatively investigated via two numerical evaluations in parallel: Green function calculations based on Eq. (4) via Kwant [50], and micromagnetic simulations via Mumax3 [44]. The agreement between the two methods in Fig. 2(d) corroborates the following two influences caused by the potential well V(x) < 0, or equivalently by the inhomogeneous domain wall profile n(x): i) a small reflection in the extremely low frequency range; ii) a positive dynamical phase Δφ > 0 in the full range, which monotonically decreases with increasing frequency.
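A minimal one-dimensional scattering sketch of Eq. (4) is given below; the backward plane-wave integration, the crude symplectic-Euler stepper, and the parameter names are assumptions of this sketch, standing in for the Kwant Green function calculation used in the actual analysis. It applies above the spin wave gap (real k), and the domain-wall-induced dynamical phase is Δφ = arg t.

```python
# 1D scattering sketch for Eq. (4): integrate A psi'' = [K + V(x)
# - (rho/gamma) omega^2] psi backward from a unit outgoing plane wave,
# then read off the incident amplitude a; t = 1/a.
import numpy as np

def dw_transmission(omega, A, K, rho_over_gamma, W, L=200e-9, n=200000):
    k = np.sqrt((rho_over_gamma * omega**2 - K) / A)  # bulk wavevector
    V = lambda x: -(11.0 / 8.0) * K / np.cosh(x / W) ** 2
    xs = np.linspace(L, -L, n)       # march from right (x = L) to left
    dx = xs[1] - xs[0]               # negative step
    psi = np.exp(1j * k * L)         # outgoing wave t * e^{ikx}, with t := 1
    dpsi = 1j * k * psi
    for x in xs[:-1]:
        q = (K + V(x) - rho_over_gamma * omega**2) / A
        dpsi += q * psi * dx         # psi'' = q(x) psi
        psi += dpsi * dx
    # decompose at x = -L into psi = a e^{ikx} + b e^{-ikx}
    a = 0.5 * (psi + dpsi / (1j * k)) * np.exp(1j * k * L)
    return 1.0 / a                   # complex transmission amplitude t
```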
Overall phase across cyclic 90° domain walls. Consider a Mach-Zehnder-type spin wave interferometer as depicted in the inset of Fig. 2(e), where a ẑ-domain occupies the major region, and an x̂-domain and a ŷ-domain are deposited in the upper arm. When the spin waves split at the input port converge again at the output port, the overall phase difference developed between the cyclic ẑ-x̂-ŷ-ẑ domain walls in the upper arm and the homogeneous ẑ-domain in the lower arm is described by
$$\Delta\Phi_{\sigma\chi} = \Delta\Phi^{G}_{\sigma\chi} + \Delta\Phi^{D} = -\frac{\sigma\chi\pi}{2} + 3\Delta\phi, \quad (5)$$

where the geometric part ΔΦ^G_σχ arises from the enclosed solid angle Θ = π/2, and the dynamical part ΔΦ^D is accumulated across the three consecutive domain walls.
The frequency-dependent overall phase in Fig. 2(f) guides us to designate a binary parameter τ for the following two specific frequencies: τ = −1 for the low frequency f_L ≈ 0.097 THz, where ΔΦ^D = 3π/2; and τ = +1 for the high frequency f_H ≈ 0.236 THz, where ΔΦ^D = π/2. The manipulation space {σ, χ, τ} formed by the polarization state σ, the propagation direction χ, and the frequency τ then provides three binary means to harness spin wave. In such a 3-bit manipulation space, the overall phase is recast from Eq. (5) to
$$\Delta\Phi_{\sigma\chi\tau} = \Delta\Phi^{G}_{\sigma\chi} + \Delta\Phi^{D}_{\tau} = \frac{2 - \tau - \sigma\chi}{2}\,\pi, \quad (6)$$
which is always an integer multiple of π. Consequently, the interference of the spin waves from the two arms at the confluence region leads to the output efficiency
$$\eta_{\sigma\chi\tau} = \cos^{2}\frac{\Delta\Phi_{\sigma\chi\tau}}{2} = \frac{1 + \sigma\chi\tau}{2}, \quad (7)$$
which takes binary on/off values: the interference is either completely constructive, η = 1 for ΔΦ = 0, 2π, or completely destructive, η = 0 for ΔΦ = π, as depicted in Fig. 2(e). It is noteworthy that the on/off states possess a rather wide frequency tolerance, especially in the high-frequency case.

A non-reciprocal circular polarizer. According to Eq. (7), on/off combinations arise in the interferometer for an arbitrary flip of any binary parameter σ, χ or τ. One circular polarization is blocked (η_σ = 0) while the other is passed (η_−σ = 1); hence the two-arm interferometer acts as a spin wave circular polarizer. When the propagation direction is reversed (χ → −χ), the on/off state is flipped, indicating the non-reciprocity of the circular polarizer. The chirality of such a circular polarizer originates from the chiral nature of the geometric phase, as manifested in Eq. (1). By switching between the high and low frequencies (τ → −τ), the on/off state is flipped again, revealing a frequency-controlled chirality of the circular polarizer.
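The full on/off logic of Eqs. (6) and (7) can be tabulated with a few lines; this enumeration simply evaluates the two closed-form expressions over the 3-bit space {σ, χ, τ}.

```python
# Truth table of the non-reciprocal circular polarizer, Eqs. (6)-(7).
import itertools
import math

for sigma, chi, tau in itertools.product([+1, -1], repeat=3):
    dphi = (2 - tau - sigma * chi) * math.pi / 2   # Eq. (6)
    eta = math.cos(dphi / 2) ** 2                  # Eq. (7), equals (1+s*c*t)/2
    state = "on" if round(eta) == 1 else "off"
    print(f"sigma={sigma:+d} chi={chi:+d} tau={tau:+d}: "
          f"dPhi={dphi / math.pi:.1f}*pi  eta={eta:.0f} ({state})")
```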
The polarization-filtering functionality of the circular polarizer is further confirmed by the micromagnetic simulations in Fig. 3(a). Circular spin waves at the selected frequencies are excited at port A or B of the interferometer in Fig. 2(e), and are detected at the other port. At the low working frequency f = f_L, when the two circular spin waves are continuously excited at port A, only the left-circular mode is observed at port B, signifying a left-circular polarizer in the rightward direction. In contrast, only the right-circular mode excited at port B is visible at port A, suggesting a right-circular polarizer in the leftward propagation direction. After switching to the high working frequency f = f_H, the right-/left-circular mode is selectively passed for A → B (A ← B), in full compliance with the on/off rule outlined in Eq. (7).
Superposition-endorsed parallel wave processing. The principle of wave superposition not only engenders the interference of waves of the same type, but also ensures that waves of different types can be brought into the same medium without mutual disturbance in the linear regime [51]. Guided by this fundamental superposition principle, the individual mono-polarized, unidirectional and monochromatic manipulations in Fig. 3(a) can be processed concurrently in the interferometer at once. In Fig. 3(b), when a linearly polarized and dichromatic spin wave, consisting of equal components of high/low frequencies and left-/right-circular polarizations, is continuously injected from port A, only the low-frequency left-circular and the high-frequency right-circular components are detected at port B. Similarly, when port B is set as the input port, the other two components are detected at port A. In both cases (χ = ±1), the input and output signals in Fig. 3(b) are simple additions of the 2² channels in the 2-bit {σ, τ} space of Fig. 3(a).

To fully unleash the power of parallel processing, the linearly polarized and dichromatic spin waves are simultaneously excited at ports A and B for a finite duration of 0.16 ns. These two wave pulses, travelling in opposite directions, encounter and penetrate each other, and are simultaneously detected at the opposite sides after a waiting time of 0.20 ns. Aided by the wave pulses, parallel processing of the 2³ channels of the 3-bit {σ, χ, τ} space is enabled without much signal deterioration in this bidirectional setup, as also demonstrated in Fig. 3(b).
Magnetic programmability and scalability. By altering the magnetic states in the interferometer, the geometric/dynamical phase can be modified, and the functionality of the circular polarizer adjusts accordingly [43]. When any magnetic domain is switched, x̂_i → −x̂_i, the geometric phase ΔΦ^G alters its sign, and thus the circular polarizer changes its chirality. Meanwhile, when magnetic domain walls along the ring formed by the two arms are moved, the dynamical phase becomes ΔΦ^D = dΔφ, with d = {−3, −1, 1, 3} the difference in the number of domain walls between the two arms, and the circular polarizer at the low frequency f = f_L changes its filtering chirality.
The circular polarizer in this work is fully captured by two scales: the typical length W = √(A/K) and the typical time t₀ = πa/(√2 γ√(AK)). Hence, the whole design can be directly rescaled for any combination of the exchange stiffness A and the cubic anisotropy K.
Conclusion. In conclusion, we show that spin wave acquires a geometric phase along a cyclic trajectory of non-coplanar magnetizations in antiferromagnets. Moreover, a cyclic set of 90° domain walls imposes a polarization-dependent phase on the passing spin wave, which consists of both geometric and dynamical parts. By virtue of such an asymmetric phase for the left-/right-circular modes, we realize an interference-based circular polarizer. The collaboration between the geometric and dynamical phases provides new paradigms in harnessing spin waves via non-coplanar magnetizations.
Figure 1. Schematics of the geometric phase acquired by polarized spin wave in non-coplanar magnetizations. (a) The evolution of the polarization direction along a closed trajectory of non-coplanar magnetizations. (b) The geometric phase of circular polarization and the rotation of linear polarization. (c) The evolution of the polarization direction on a magnetic Bloch sphere. The orange arrows are for the background Néel order n, and the accompanying red/blue arrows are for a specific polarization direction along the trajectory.
Figure 2. Spin wave scattering and interference across the cyclic 90° domain walls. (a) Magnetic profile of the cyclic ẑ-x̂-ŷ-ẑ domain walls. (b) Parallel transport of a polarization direction on a magnetic Bloch sphere. (c) Braiding of linear-x/y polarizations across three 90° domain walls, extracted from micromagnetic simulations. (d) Dynamical phase across a single 90° domain wall; the inset plots the transmission probability of spin wave. (e) The output efficiency of the two circular polarizations; the inset depicts the schematics of the two-arm interferometer and the corresponding magnetic profiles. (f) The overall phase for left-/right-circular polarizations. In (d), (e) and (f), the solid lines are theoretical calculations based on Eq. (4), the dots are extracted from micromagnetic simulations, and the two circled marks are for the low/high frequency.
Figure 3. Spin wave processing in a non-reciprocal circular polarizer. (a) Individual processing in a single channel. (b) Parallel processing in multiple channels. Upper panel: unidirectional processing in the 2² channels of {σ, τ}, separately for χ = +1 and χ = −1; lower panel: bidirectional processing of the 2³ channels of {σ, τ, χ} simultaneously for χ = ±1. In all plots, the orange/green dots are for n_x/n_y extracted from micromagnetic simulations, and the solid lines are theoretical fittings.
[1] M. V. Berry, Quantal Phase Factors Accompanying Adiabatic Changes, Proc. R. Soc. Lond. A 392, 45 (1984).
[2] R. Bhandari, Polarization of light and topological phases, Phys. Rep. 281, 1 (1997).
[3] D. Xiao, M.-C. Chang, and Q. Niu, Berry phase effects on electronic properties, Rev. Mod. Phys. 82, 1959 (2010).
[4] M. Berry, Geometric phase memories, Nat. Phys. 6, 148 (2010).
[5] E. Cohen, H. Larocque, F. Bouchard, F. Nejadsattari, Y. Gefen, and E. Karimi, Geometric phase from Aharonov-Bohm to Pancharatnam-Berry and beyond, Nat. Rev. Phys. 1, 437 (2019).
[6] K. Y. Bliokh, F. J. Rodríguez-Fortuño, F. Nori, and A. V. Zayats, Spin-orbit interactions of light, Nature Photonics 9, 796 (2015).
[7] C. P. Jisha, S. Nolte, and A. Alberucci, Geometric Phase in Optics: From Wavefront Manipulation to Waveguiding, Laser Photonics Rev. 15, 2100003 (2021).
[8] F. S. Roux, Geometric phase lens, J. Opt. Soc. Am. A 23, 476 (2006).
[9] J. Kim, Y. Li, M. N. Miskiewicz, C. Oh, M. W. Kudenov, and M. J. Escuti, Fabrication of ideal geometric-phase holograms with arbitrary wavefronts, Optica 2, 958 (2015).
[10] A. Arbabi, Y. Horie, M. Bagheri, and A. Faraon, Dielectric metasurfaces for complete control of phase and polarization with subwavelength spatial resolution and high transmission, Nat. Nanotechnol. 10, 937 (2015).
[11] M. Xiao, G. Ma, Z. Yang, P. Sheng, Z. Q. Zhang, and C. T. Chan, Geometric phase and band inversion in periodic acoustic systems, Nat. Phys. 11, 240 (2015).
[12] S. Slussarenko, A. Alberucci, C. P. Jisha, B. Piccirillo, E. Santamato, G. Assanto, and L. Marrucci, Guiding light via geometric phases, Nat. Photonics 10, 571 (2016).
[13] W. Zhu, H. Zheng, Y. Zhong, J. Yu, and Z. Chen, Wave-Vector-Varying Pancharatnam-Berry Phase Photonic Spin Hall Effect, Phys. Rev. Lett. 126, 083901 (2021).
[14] Y. Kajiwara, K. Harii, S. Takahashi, J. Ohe, K. Uchida, M. Mizuguchi, H. Umezawa, H. Kawai, K. Ando, K. Takanashi, S. Maekawa, and E. Saitoh, Transmission of electrical signals by spin-wave interconversion in a magnetic insulator, Nature 464, 262 (2010).
[15] A. V. Chumak, V. I. Vasyuchka, A. A. Serga, and B. Hillebrands, Magnon spintronics, Nat. Phys. 11, 453 (2015).
[16] L. J. Cornelissen, J. Liu, R. A. Duine, J. B. Youssef, and B. J. van Wees, Long-distance transport of magnon spin information in a magnetic insulator at room temperature, Nat. Phys. 11, 1022 (2015).
[17] A. Barman, G. Gubbiotti, S. Ladak, A. O. Adeyeye, M. Krawczyk, J. Gräfe et al., The 2021 Magnonics Roadmap, J. Phys.: Condens. Matter 33, 413001 (2021).
[18] M. P. Kostylev, A. A. Serga, T. Schneider, B. Leven, and B. Hillebrands, Spin-wave logical gates, Appl. Phys. Lett. 87, 153501 (2005).
[19] T. Schneider, A. A. Serga, B. Leven, B. Hillebrands, R. L. Stamps, and M. P. Kostylev, Realization of spin-wave logic gates, Appl. Phys. Lett. 92, 022505 (2008).
[20] T. Liu and G. Vignale, Electric Control of Spin Currents and Spin-Wave Logic, Phys. Rev. Lett. 106, 247203 (2011).
[21] X. Zhang, T. Liu, M. E. Flatté, and H. X. Tang, Electric-Field Coupling to Spin Waves in a Centrosymmetric Ferrite, Phys. Rev. Lett. 113, 037202 (2014).
[22] R. Cheng, M. W. Daniels, J.-G. Zhu, and D. Xiao, Antiferromagnetic Spin Wave Field-Effect Transistor, Sci. Rep. 6, 24223 (2016).
[23] O. V. Dobrovolskiy, R. Sachser, S. A. Bunyaev, D. Navas, V. M. Bevz, M. Zelent, W. Śmigaj, J. Rychły, M. Krawczyk, R. V. Vovk, M. Huth, and G. N. Kakazei, Spin-Wave Phase Inverter upon a Single Nanodefect, ACS Appl. Mater. Interfaces 11, 17654 (2019).
[24] W. Yu, J. Lan, and J. Xiao, Magnetic Logic Gate Based on Polarized Spin Waves, Phys. Rev. Appl. 13, 024055 (2020).
[25] Q. Wang, A. V. Chumak, and P. Pirro, Inverse-design magnonic devices, Nat. Commun. 12, 2636 (2021).
[26] Q. Wang, P. Pirro, R. Verba, A. Slavin, B. Hillebrands, and A. V. Chumak, Reconfigurable nanoscale spin-wave directional coupler, Sci. Adv. 4, e1701517 (2018).
[27] Q. Wang, M. Kewenig, M. Schneider, R. Verba, F. Kohl, B. Heinz, M. Geilen, M. Mohseni, B. Lägel, F. Ciubotaru, C. Adelmann, C. Dubs, S. D. Cotofana, O. V. Dobrovolskiy, T. Brächer, P. Pirro, and A. V. Chumak, A magnonic directional coupler for integrated magnonic half-adders, Nat. Electron. 3, 765 (2020).
[28] M. Zhao, X.-g. Wang, Z. Luo, Q.-l. Xia, Y.-z. Nie, R. Xiong, and G.-h. Guo, Reconfigurable Spin-Wave Coupler Based on Domain-Wall Channels, Phys. Rev. Appl. 17, 064013 (2022).
[29] R. Hertel, W. Wulfhekel, and J. Kirschner, Domain-Wall Induced Phase Shifts in Spin Waves, Phys. Rev. Lett. 93, 257202 (2004).
[30] J. Lan, W. Yu, and J. Xiao, Antiferromagnetic domain wall as spin wave polarizer and retarder, Nat. Commun. 8, 178 (2017).
[31] J. Han, P. Zhang, J. T. Hou, S. A. Siddiqui, and L. Liu, Mutual control of coherent spin waves and magnetic domain walls in a magnonic device, Science 366, 1121 (2019).
[32] F. Ye and J. Lan, Magnetically switchable spin-wave retarder with 90 degree antiferromagnetic domain wall, Phys. Rev. B 104, L180401 (2021).
[33] Á. Papp, W. Porod, and G. Csaba, Nanoscale neural network using non-linear spin-wave interference, Nat. Commun. 12, 6422 (2021).
[34] P. Pirro, V. I. Vasyuchka, A. A. Serga, and B. Hillebrands, Advances in coherent magnonics, Nat. Rev. Mater. (2021).
[35] A. V. Chumak, P. Kabos, M. Wu, C. Abert, C. Adelmann, A. O. Adeyeye et al., Advances in Magnetics Roadmap on Spin-Wave Computing, IEEE Trans. Magn. 58, 1 (2022).
[36] V. K. Dugaev, P. Bruno, B. Canals, and C. Lacroix, Berry phase of magnons in textured ferromagnets, Phys. Rev. B 72, 024456 (2005).
[37] F. J. Buijnsters, Y. Ferreiros, A. Fasolino, and M. I. Katsnelson, Chirality-Dependent Transmission of Spin Waves through Domain Walls, Phys. Rev. Lett. 116, 147204 (2016).
[38] H. Wu and J. Lan, Curvilinear manipulation of polarized spin waves, Phys. Rev. B 105, 174427 (2022).
[39] M. Onoda, S. Murakami, and N. Nagaosa, Hall Effect of Light, Phys. Rev. Lett. 93, 083901 (2004).
[40] K. Y. Bliokh, M. A. Alonso, and M. R. Dennis, Geometric phases in 2D and 3D polarized fields: Geometrical, dynamical, and topological aspects, Rep. Prog. Phys. 82, 122401 (2019).
[41] F. D. M. Haldane, Nonlinear Field Theory of Large-Spin Heisenberg Antiferromagnets: Semiclassically Quantized Solitons of the One-Dimensional Easy-Axis Néel State, Phys. Rev. Lett. 50, 1153 (1983).
[42] E. G. Tveten, A. Qaiumzadeh, and A. Brataas, Antiferromagnetic Domain Wall Motion Induced by Spin Waves, Phys. Rev. Lett. 112, 147204 (2014).
[43] See Supplementary Materials for a detailed derivation of the antiferromagnetic LLG equation, the setup of the micromagnetic simulations, the influence of the dipolar field and easy-axis anisotropy, the magnetic programmability of the circular polarizer, and the truth table of the circular polarizer in the full state space.
[44] A. Vansteenkiste, J. Leliaert, M. Dvornik, M. Helsen, F. Garcia-Sanchez, and B. Van Waeyenberge, The design and verification of MuMax3, AIP Adv. 4, 107133 (2014).
Non-Abelian Braiding of Light. T Iadecola, T Schuster, C Chamon, 10.1103/PhysRevLett.117.073901Phys. Rev. Lett. 11773901T. Iadecola, T. Schuster, and C. Chamon, Non-Abelian Braiding of Light, Phys. Rev. Lett. 117, 073901 (2016).
X.-L Zhang, F Yu, Z.-G Chen, Z.-N Tian, Q.-D Chen, H.-B Sun, G Ma, 10.1038/s41566-022-00976-2Non-Abelian braiding on photonic chips. 16390X.-L. Zhang, F. Yu, Z.-G. Chen, Z.-N. Tian, Q.-D. Chen, H.-B. Sun, and G. Ma, Non-Abelian braiding on photonic chips, Nat. Photon. 16, 390 (2022).
Classical non-Abelian braiding of acoustic modes. Z.-G Chen, R.-Y Zhang, C T Chan, G Ma, 10.1038/s41567-021-01431-9Nat. Phys. 18179Z.-G. Chen, R.-Y. Zhang, C. T. Chan, and G. Ma, Classical non- Abelian braiding of acoustic modes, Nat. Phys. 18, 179 (2022).
Staggered Dynamics in Antiferromagnets by Collective Coordinates. E G Tveten, A Qaiumzadeh, O A Tretiakov, A Brataas, 10.1103/PhysRevLett.110.127208Phys. Rev. Lett. 110127208E. G. Tveten, A. Qaiumzadeh, O. A. Tretiakov, and A. Brataas, Staggered Dynamics in Antiferromagnets by Collective Coor- dinates, Phys. Rev. Lett. 110, 127208 (2013).
Polarization-selective spin wave driven domain-wall motion in antiferromagnets. W Yu, J Lan, J Xiao, 10.1103/PhysRevB.98.144422Phys. Rev. B. 98144422W. Yu, J. Lan, and J. Xiao, Polarization-selective spin wave driven domain-wall motion in antiferromagnets, Phys. Rev. B 98, 144422 (2018).
Kwant: A software package for quantum transport. C W Groth, M Wimmer, A R Akhmerov, X Waintal, 10.1088/1367-2630/16/6/063065New J. Phys. 1663065C. W. Groth, M. Wimmer, A. R. Akhmerov, and X. Wain- tal, Kwant: A software package for quantum transport, New J. Phys. 16, 063065 (2014).
D H Goldstein, 10.1201/b10436Polarized Light. Boca RatonCRC Press3rd ed.D. H. Goldstein, Polarized Light, 3rd ed. (CRC Press, Boca Ra- ton, 2017).
MOF-Based Polymeric Nanocomposite Films as Potential Materials for Drug Delivery Devices in Ocular Therapeutics
J. Gandara-Loe,(a) B. E. Souza,(b) A. Missyul,(c) G. Giraldo,(d) J.-C. Tan,(b) J. Silvestre-Albero(a)

(a) Laboratorio de Materiales Avanzados, Departamento de Química Inorgánica-IUMA, Universidad de Alicante, E-03690 San Vicente del Raspeig, Spain
(b) Multifunctional Materials & Composites (MMC) Laboratory, Department of Engineering Science, University of Oxford, Parks Road, Oxford OX1 3PJ, UK
(c) CELLS-ALBA Synchrotron, Cerdanyola del Vallés, E-08290, Spain
(d) Clínica Clofan, Carrera 48 # 19 A 40, Medellín, Colombia
† These two authors contributed equally.

Abstract

Novel MOF-based polymer nanocomposite films were successfully prepared using Zr-based UiO-67 as the metal-organic framework (MOF) and polyurethane (PU) as the polymeric matrix. Synchrotron X-ray powder diffraction (SXRPD) analysis confirms the improved stability of the embedded UiO-67 nanocrystals, and scanning electron microscopy images confirm their homogeneous distribution (average crystal size ~100-200 nm) within the 50-µm-thick film. Accessibility to the inner porous structure of the embedded MOFs was completely suppressed for N2 at cryogenic temperatures. However, ethylene adsorption measurements at 25 ºC confirm that at least 45% of the MOF crystals are fully accessible for gas-phase adsorption of non-polar molecules. Although this partial blockage limits the adsorption performance of the embedded MOFs for ocular drugs (e.g., brimonidine tartrate) compared to the pure MOF, an almost 60-fold improvement in the adsorption capacity was observed for the PU matrix after incorporation of the UiO-67 nanocrystals. The UiO-67@PU nanocomposite exhibits a prolonged release of brimonidine (up to 14 days were quantified). Finally, the combined use of SXRPD, thermogravimetric analysis (TGA) and FTIR analysis confirmed the presence of the drug in the nanocomposite film, the stability of the MOF framework and the drug upon loading, and the presence of brimonidine in an amorphous phase once adsorbed. These results open the gate towards the application of these polymeric nanocomposite films for drug delivery in ocular therapeutics, either as a component of contact lenses, in the composition of lacrimal stoppers (e.g., punctal plugs) or in sub-tenon inserts.
Introduction
Glaucoma is a pathological eye disorder associated with an increase in the intraocular pressure (IOP) and one of the leading causes of irreversible blindness worldwide. 1 Approximately 70 million middle-aged and elderly people are affected by its most common form, open-angle glaucoma, with around 10% of cases ending in bilateral blindness. 2 Among the different drugs used to treat glaucoma, brimonidine tartrate is one of the most widely applied. Brimonidine is an alpha-adrenergic agonist able to reduce the ocular pressure through constriction of the blood vessels, which decreases aqueous humour production. 3 Conventional drug delivery systems such as eye drops represent 90% of the marketed ophthalmic formulations. 4,5 However, severe constraints are associated with this topical approach, such as tear turnover, fast nasolacrimal drainage and reflex blinking, all of which result in a non-optimal dosage. 6 Roughly, only 5% of the drug applied topically reaches the deeper ocular tissues, forcing pharmaceutical producers to increase the drug concentration, with the associated increase in toxicity and, indirectly, in the risk of side effects. 7 Another limitation of these topical administration routes is the low compliance of patients, mainly the elderly, in strictly following the administration protocol (a number of droplets several times per day).
The development of more efficient ocular drug delivery systems with well-designed and prolonged release kinetics remains a challenge in materials science and ophthalmology.
Nanocarriers such as polyacrylic acid nanoparticles, 8 chitosan nanoparticles, 9 nanovesicles, 10 and layered double hydroxides (LDH) 11 have been reported as promising alternatives for topical brimonidine dosage. However, the main limitation of some of these materials for practical application lies in their physical (low gravimetric capacity for the drug) and textural properties.
Novel drug administration platforms to treat ocular disorders prepared from polymeric materials (solid or semi-solid inserts) have gained considerable popularity in the last few years. [12][13][14] The potential advantages of these polymeric devices include accurate dosing, increased ocular residence time, reduced systemic side effects and better patient compliance, to mention a few. 15 Owing to the potential of these devices in ocular drug delivery, several companies have patented and commercialized them. For instance, one of the first marketed ocular inserts was commercialized by Alza (Vacaville, CA) as Ocusert®, used to dose the anti-glaucoma drug pilocarpine for a maximum of 5-7 days. 16,17 Although these are excellent numbers, the absence of a well-defined regular 3D network within these polymeric matrices limits their total drug uptake and hinders a controlled release.
Based on these premises, the design of novel functional ocular polymeric devices through the incorporation of perfectly designed high-capacity nanofillers would be a key stepping stone to increase the versatility and impact of these inserts in nanomedicine.
A potential approach not widely explored in the literature could be the incorporation of nanocarriers with an improved drug adsorption uptake and controlled release, provided that the incorporated guest structures do not alter the mechanical properties of the insert, while the porous structure of the nanofiller remains fully accessible in the mixed formulation. [18][19][20] Among the potential candidates, high-surface-area porous materials such as metal-organic frameworks (MOFs) provide an avenue to achieve these requirements. 21 MOFs are crystalline materials formed by the union of metal centres and organic linkers. The self-assembly of metal clusters (or nodes) and organic ligands allows the design of a large number of 1D to 3D networks characterized by a high surface area, a large pore volume and tuneable host-guest interactions. 22 Over the last few years, these materials have shown promise as a potential platform for drug delivery in powder form. 23,24 Recent studies by Gandara-Loe et al. have shown that MOFs can store a large amount of brimonidine tartrate (up to 600 mg of drug per gram of MOF), with an extended release time of up to 12 days in the specific case of UiO-67.
Furthermore, in vitro cytotoxicity assays have demonstrated the low toxicity of UiO-67 for retinal photoreceptor cells. 25 The excellent performance of UiO-67 is motivated by the presence of large tetrahedral and octahedral cages in the micro/mesoporous range. 26 Taking these excellent properties into account, the successful incorporation of these 3D porous networks into continuous polymeric matrices will offer a new perspective in nanomedicine, with more suitable nanocomposite materials (instead of working with powders) and novel functionalities (e.g., drug delivery properties), to be used either as micro-inserts (e.g., punctal plugs in lacrimal or sub-tenon cavities) or as a component of contact lenses. 27,28 Polymer-MOF nanocomposite materials have already been reported in the literature as potential candidates for gas adsorption/separation processes, such as CO2/N2 or CO2/CH4 separation and ethylene adsorption. 29,30 There are recent studies on the use of HKUST-1/polyurethane nanocomposite membranes for drug encapsulation and controlled release. 31 However, the understanding of molecular accessibility in liquid-phase adsorption processes is still a challenge due to the different nature of the polymeric network and the MOF nanofiller. To the best of our knowledge, polymer-MOF nanocomposite films have not yet been tested as drug delivery carriers for ocular therapeutics.
Based on these premises, the main goal of this work is to report an optimal synthesis of functional MOF-based polyurethane thin films, and to evaluate the performance of these UiO-67@PU nanocomposites for brimonidine adsorption/release in liquid phase.
The successful development of these functional materials (MOF@polymer) will open the gate towards the application of these devices in a number of ocular disorders that require a controlled and prolonged release of drugs, from glaucoma treatment to post-surgical treatment with anti-inflammatory drugs.
Experimental section
UiO-67 synthesis
UiO-67 was synthesized based on the procedure reported in the literature by Katz et al. 32 Briefly, 0.268 g of ZrCl4 were dissolved in a mixture of 20 mL of N,N-dimethylformamide (DMF) and 2 mL of concentrated HCl. In a second vessel, 0.360 g of 4,4'-biphenyldicarboxylic acid (BPDC) were dissolved in 40 mL of DMF. The two solutions were mixed and kept under sonication for 30 min. The final solution was transferred to a 200 mL glass jar, closed tightly and kept at 80 ºC overnight. The resulting white solid was filtered and washed first with DMF (2 × 30 mL) and then with ethanol (2 × 30 mL). The sample was activated first under low-vacuum conditions (13 × 10⁻³ Pa) up to 90 ºC and, afterwards, at 150 ºC for 3 h under ultra-high-vacuum conditions.
UiO-67@PU synthesis
The UiO-67@PU nanocomposite films were fabricated by following the procedures described below. Polyurethane (PU) solution was prepared by dissolving poly [4,4'-
Synchrotron X-ray powder diffraction (SXRPD) analysis
Synchrotron X-ray powder diffraction (SXRPD) data were collected on the powder diffraction end station of the MSPD beamline at the ALBA synchrotron in Spain, using a MYTHEN detector and a wavelength of 0.4227 Å. The experiments were performed in an ad hoc capillary reaction cell (fused silica capillary, inner diameter 0.7 mm, outer diameter 0.85 mm). SXRPD measurements were performed at 25 ºC on the as-synthesized UiO-67, the PU film and the UiO-67@PU films, as well as on the UiO-67@PU films after brimonidine adsorption. The reference pattern of brimonidine tartrate powder was also measured.
Thermogravimetric analysis (TGA)
Thermogravimetric analysis data of UiO-67, the PU film and the UiO-67@PU film were obtained using a METTLER TOLEDO TG-DTA instrument (model TG/SDTA851e/SF/1100). The samples were measured using an alumina sample holder over the temperature range 25-600 ºC at a heating rate of 5 ºC/min under N2 flow.
Scanning electron microscopy (SEM) evaluation
Cross-section micrographs were recorded using a Hitachi S3000N scanning electron microscope. This microscope is equipped with a Bruker X-ray detector (model Xflash 3001) for EDS microanalysis and mapping. Samples were kept under cryogenic conditions (liquid N2) before the analysis in order to obtain a high-quality cross-section and avoid surface alterations during the sectioning process.
Nitrogen and ethylene adsorption/desorption isotherms

Textural properties and gas-phase accessibility of the different samples were evaluated by gas physisorption, i.e., nitrogen adsorption at -196 ºC and ethylene adsorption at 25 ºC. Nitrogen adsorption measurements were performed in a home-made, fully automated manometric setup designed and constructed by the Advanced Materials Group (LMA), now commercialized as N2GSorb-6 (Gas to Materials Technologies, www.g2mtech.com). Nitrogen adsorption data were used to calculate (a) the total pore volume (Vt) at a relative pressure of 0.95, (b) the BET surface area (SBET), and (c) the micropore volume (VN2) after application of the Dubinin-Radushkevich (DR) equation. Ethylene adsorption experiments were performed in a home-built, fully automated manometric setup, now commercialized by Quantachrome Corp. as VSTAR. Before the experiments, samples were degassed at 100 ºC for 8 h under high-vacuum conditions (10⁻⁵ torr).
Loading and release experiments
Brimonidine tartrate quantification was based on the High-Performance Liquid Chromatography (HPLC) method developed by Karamanos et al. 35 The calibration curve was constructed by measuring concentrations from 2 to 15 ppm under the following chromatographic conditions: a Supelcosil LC-18 analytical column (5 µm, 250 × 4.6 mm i.d., stainless steel; Supelco, Bellefonte, PA, USA) equipped with an RP-18 precolumn (20 × 4.6 mm i.d., Supelco); a 9:1 (v/v) mixture of 10 mM triethylamine buffer (pH 3.2) and acetonitrile as the mobile phase; separation at room temperature at a flow rate of 1.0 mL/min; an injection volume of 20 µL; and detection of brimonidine at 248 nm. A stock solution of 1500 ppm of brimonidine tartrate was prepared by dissolving 1.5 g in 1000 mL of ultrapure water.
Brimonidine loading experiments
Brimonidine adsorption isotherms were measured at 25 ºC using a set of aqueous solutions (pH = 7) prepared from the stock solution, with initial concentrations of 250, 500, 750, 1000 and 1500 ppm of brimonidine tartrate. The nanocomposite films were degassed at 100 ºC overnight before the experiments.
Approximately 100 mg of film were placed in contact with 50 mL of solution at each of the concentrations described above and left under stirring until equilibrium was reached. Aliquots were taken at different time intervals in order to evaluate the adsorption kinetics of the films.
Brimonidine was quantified by High-Performance Liquid Chromatography (HPLC) after diluting each aliquot 1:100, using the method described above.
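To make the mass-balance bookkeeping explicit, the sketch below shows how the adsorbed amount per gram of film can be obtained from the HPLC readings. It is a minimal illustration, not the authors' processing script: the helper name, the default volume and film mass, and the example concentrations are assumptions chosen to match the procedure described above.

```python
# Minimal sketch (not the authors' script): adsorbed amount per gram of
# film from a batch experiment, with Ceq read from the 1:100 diluted
# aliquot by HPLC (ppm ~ mg/L).

def uptake_mg_per_g(c0_ppm, c_diluted_ppm, dilution=100.0,
                    volume_l=0.050, film_mass_g=0.100):
    """q = (C0 - Ceq) * V / m, with Ceq = dilution * diluted reading."""
    c_eq = c_diluted_ppm * dilution  # undo the 1:100 dilution
    return (c0_ppm - c_eq) * volume_l / film_mass_g

# Hypothetical numbers: C0 = 1500 ppm and a diluted reading of 13.8 ppm
# (i.e., Ceq = 1380 ppm) give q = 60.0 mg of brimonidine per gram of film.
print(uptake_mg_per_g(1500.0, 13.8))
```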
Brimonidine release experiments
100 mg of UiO-67@PU film, previously degassed, was loaded with brimonidine by contacting it with 50 mL of a 1500 ppm brimonidine tartrate aqueous solution. The system was left at 25ºC under stirring for 24 h to ensure full equilibrium. After this time the film was separated from the solution and an aliquot was taken to determine the maximum loading amount. The brimonidine-loaded film was washed several times with ultrapure water and dried under vacuum at 60ºC for 6 h. The dried brimonidine loaded film was immersed in 50 mL of physiological solution (PBS) and aliquots were taken at different times up to 14 days. The aliquots were diluted 1:100 and brimonidine quantification was performed using the HPLC method described above.
Results and discussion
Characterization of the synthesized films and accessibility of the embedded MOFs
The crystallinity of the synthesized materials has been evaluated through synchrotron X-ray powder diffraction (SXRPD) measurements. Figure 1 shows the comparative SXRPD patterns for the as-synthesized UiO-67 crystals, obtained by the solvothermal method, and the UiO-67@PU film. The SXRPD pattern of the UiO-67 sample perfectly fits the simulated pattern and those previously described in the literature, thus confirming the quality and reproducibility of the synthesized MOF. 32 Concerning the UiO-67@PU nanocomposite material, the SXRPD pattern confirms the presence of a semi-crystalline system, combining the crystallinity of the UiO-67 nanoparticles with the amorphous background of the PU matrix. The PU matrix is characterized by a broad peak between 2θ = 6°-10° (see Figure S1), whereas the main diffraction peaks of the MOF can be clearly appreciated at 2θ = 2.3°-2.6°. These results confirm the preservation of the 3D network in the UiO-67 nanocrystals upon incorporation in the polymeric matrix, and their excellent crystallinity.

The decomposition profiles exhibit sharp and symmetric peaks, as appreciated in Figure 3. For instance, the pure PU film exhibits a decomposition profile with a well-defined peak centred at 337 ºC and a small shoulder at 430 ºC, which is typical of polyurethane materials. 38 In the case of UiO-67, the TGA profile shows the release of the solvent at 135 ºC and the main framework decomposition close to 550 ºC. 26 Figure 3 also shows the TGA profile of the UiO-67@PU nanocomposite film. In this case the scenario is more complex. As can be appreciated, the nanocomposite material exhibits a broad decomposition profile with a main peak located between 200 ºC and 300 ºC. 34 Apparently, the incorporation of the MOF nanocrystals limits the cross-linking between PU molecular chains, thus reducing their thermal stability. For the sake of clarity, a deconvolution of the DTGA profile of the nanocomposite system can be seen in Figure S3. In addition to the decomposition of the polymeric matrix, the aforementioned shoulders must be attributed to solvent removal (ca. 217 ºC) and to the secondary contribution in the decomposition of the PU matrix (ca. 278 ºC). Furthermore, the nanocomposite material exhibits an additional decomposition peak at 528 ºC, unambiguously attributed to the degradation of the embedded MOF. This finding constitutes another proof of the successful incorporation of the MOF crystals in the polymeric matrix. Table S1 contains a summary of the TGA results for the three samples evaluated.

To check the accessibility of the 3D porous network of the UiO-67@PU nanocomposite films to gas molecules, the nitrogen adsorption/desorption isotherm was measured at -196 ºC and compared to that of the pure MOF. As can be appreciated in Figure S4, UiO-67 presents the typical adsorption-desorption isotherm already described elsewhere, 32 with a large uptake at low relative pressures due to its highly microporous framework.
Brimonidine adsorption and release
Brimonidine adsorption isotherms were performed in aqueous media (ultrapure water) at room temperature in order to quantify the maximum amount of drug adsorbed in the porous structure of the synthesized films. As shown in Figure 5, while the adsorption in the pure PU film is close to 0 mg/g, the maximum brimonidine adsorption capacity in the UiO-67@PU film (at an equilibrium time of 4 h; see Figure S5) is drastically enhanced, as quantified below.
Brimonidine-composite compatibility and stability studies
Structural stability of the MOF framework is an important parameter to be considered in liquid-phase adsorption processes. It is widely accepted in the literature that MOF materials can exhibit a limited stability in aqueous environments or after the incorporation of the drug. 41 In the specific case of UiO-67, it is well known that upon exposure to water or moisture this system exhibits a large instability due to the hydrolysis of the linker-metal bonds and the associated pore collapse. [42][43][44] However, the partial amorphization of the UiO-67 nanoparticles during the adsorption/release of brimonidine could have been very useful to extend the release kinetics beyond 12 days, as described before by some of us. 25 In addition to the structural stability, another concern is the adsorption mechanism.
Adsorption of brimonidine into MOF-based polymeric films can be explained via three potential scenarios. As summarized in Figure 7, brimonidine can be adsorbed only in those MOF crystals located in the periphery of the PU film (option A), brimonidine can be adsorbed only in the polymeric matrix, i.e., the MOF nanocrystals are completely blocked (option B), or it can be adsorbed equally in the different crystals homogeneously distributed within the PU film (option C). To identify which of these options is the most plausible to explain the adsorption mechanism, the UiO-67@PU nanocomposite has been thoroughly evaluated before and after adsorption of brimonidine using synchrotron X-ray diffraction, thermogravimetry (TGA) and FTIR. The unit cell parameters deduced for the embedded UiO-67 crystals after Rietveld refinement are summarized in Table 1.

To further ascertain the adsorption mechanism, TGA analysis was performed on the UiO-67@PU film after the loading of brimonidine. For clarity, the TGA of pure brimonidine tartrate has been included in Figure S8. Brimonidine tartrate exhibits a single decomposition peak at around 210 ºC. A closer look at the TGA profile of the brimonidine-loaded UiO-67@PU nanocomposite (Figure S9) shows that the TGA peaks corresponding to the decomposition of the PU matrix and the UiO-67 crystals are shifted to higher temperatures upon adsorption. In addition, the thermogram shows an additional tiny peak at 210 ºC, not present in the unloaded UiO-67@PU material, that can be attributed to brimonidine within the composite film (blue peak deconvoluted in Figure S10). Although the shifts observed in Figure S9 for the decomposition of PU and UiO-67 upon brimonidine adsorption could be an indication of the presence of brimonidine in both domains, the real location of the drug remains an open question. Last but not least, it is important to highlight that the quantification of the tiny peak at 210 ºC corresponds to ~23 mg of brimonidine per gram of composite film. Although this is a rough estimation, we cannot exclude that around 40% of the brimonidine loaded at 1500 ppm (Figure 5) could be lost during the washing step applied before the TGA analysis. A similar hypothesis could be used to explain the low release achieved in Figure 6.
Finally, the presence of the drug has been evaluated using FTIR of the UiO-67@PU film before and after loading with brimonidine (Figure 9). The FTIR spectra of the individual components have also been included for clarity. As can be observed, before loading, the FTIR spectrum of the UiO-67@PU film shows the characteristic peaks of PU and UiO-67. PU has a characteristic peak at 3329 cm⁻¹ attributed to the stretching of the N-H bond (Figure 9a).
Supporting Information

• Equations applied in this study
- The 30 wt.% UiO-67 loading encapsulated in the polymeric (PU) 50-µm film was prepared using the following equation (1):

UiO-67 (wt.%) = w_UiO-67 / (w_UiO-67 + w_PU) × 100%    (1)

where w_UiO-67 is the weight of UiO-67 nanoparticles dispersed in THF and w_PU is the weight of PU pellets dissolved in THF.
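As a worked illustration of Eq. (1) (our restatement; these masses are not stated in the original text): to target the 30 wt.% loading used here, the required MOF mass per gram of PU follows from w_UiO-67 = 0.30 × (w_UiO-67 + w_PU), i.e.,

w_UiO-67 = (0.30 / 0.70) × w_PU ≈ 0.43 g of UiO-67 per 1.00 g of PU.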
- The maximum amount of brimonidine that can be released (2) and the real percentage released (3) were calculated using the following equations:

m_bri-max = (C_0 − C_eq) × V / m_film    (2)

%_released = (C_eq-released × V_PBS / m_film) / m_bri-max × 100%    (3)

where m_bri-max is the maximum amount of brimonidine adsorbed per given mass of film (m_film), C_0 is the initial concentration of brimonidine, C_eq is the concentration after the adsorption reached equilibrium, V is the volume of the loading solution, C_eq-released is the concentration measured after a given time during the release process in PBS solution, and V_PBS is the volume of PBS.
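A worked numerical example of Eqs. (2)-(3) (our illustration; the equilibrium and release concentrations below are assumed values chosen for consistency with the capacities reported in the main text, not measured data): for C_0 = 1500 mg/L, V = 0.050 L and m_film = 0.100 g, an equilibrium concentration C_eq = 1383 mg/L gives

m_bri-max = (1500 − 1383) × 0.050 / 0.100 = 58.5 mg/g,

essentially the Langmuir-fitted capacity quoted in the main text. If, after 14 days in V_PBS = 0.050 L, the released concentration were C_eq-released = 11.7 mg/L, then

%_released = (11.7 × 0.050 / 0.100) / 58.5 × 100% = 10%.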
• Tables

Table S1. Thermogravimetric results of the different samples evaluated.

• Figures

Figure S1. Synchrotron X-ray powder diffraction pattern of the pure polyurethane film.

Figure S6. X-ray powder diffraction pattern of as-synthesized UiO-67 and after soaking in water for 1 day.

Figure S7. Synchrotron X-ray powder diffraction pattern of brimonidine tartrate.
A 10 mM triethylamine buffer (pH 3.2) and acetonitrile were used as the mobile phase for the column. Figure S11 shows a typical chromatogram for brimonidine quantified by HPLC.

Figure S11. Typical chromatogram for brimonidine using HPLC, detected by UV-Vis.
Figure 1. Synchrotron XRPD experimental patterns of UiO-67 and the UiO-67@PU film, accompanied by the simulated pattern of UiO-67.

Morphologically, the UiO-67@PU film is a semi-transparent and flexible composite material (Figure S2) with high versatility for the production of different ocular devices. As shown in Figure 2, the nanocomposite film is formed by MOF nanocrystals (average crystal size 100-200 nm) embedded into the polyurethane matrix, giving a film of approximately 50 µm thickness. Figure 2c shows the relatively uniform distribution of the UiO-67 nanocrystals within the PU matrix, an observation that was further confirmed by specific Zr-mapping experiments (Figure 2d). Previous results described in the literature for gas separation using similar composites have anticipated that the accessibility (permeation of gases) decreases with the thickness of the film. 34,36 Based on this assumption, and taking into account the objectives of this study (liquid-phase adsorption processes usually exhibit slower kinetics compared to gas adsorption processes), we assume that a film of 50 µm can be considered a good approach. Furthermore, a 30 wt.% MOF loading can be considered an upper limit to keep a good balance between thermomechanical and toughness properties for a potential future application. 34,37

Figure 2. SEM micrographs of (a) as-synthesized UiO-67 nanocrystals, (b) cross-section of a 50-µm-thick neat PU film, (c) cross-section of a 50-µm UiO-67@PU film and (d) Zr EDX mapping (green colour) of a cross-section of the UiO-67@PU nanocomposite film.

Thermogravimetric (TGA) analyses were used to evaluate the thermal stability of the nanocomposite film compared to the pure components (PU and UiO-67). Polyurethane and UiO-67 nanoparticles exhibit characteristic decomposition profiles with very sharp and symmetric peaks.
Figure 3. Thermogravimetric analysis (TGA and DTGA) of PU, UiO-67 and the UiO-67@PU film.
The UiO-67 isotherm also exhibits a step at p/p0 ≈ 0.15, attributed to the presence of wider pores (small mesopores). This observation is in close agreement with the presence of two kinds of cavities in UiO-67, tetrahedral and octahedral cages with diameters of 1.1 and 2.3 nm, respectively. 32 Interestingly, in the specific case of the UiO-67@PU film, the accessibility for nitrogen at cryogenic temperatures is completely suppressed over the whole relative pressure range evaluated. This observation is in close agreement with previous studies described in the literature for ZIF-8- and ZIF-7-loaded polymeric matrices. 37 Apparently, nitrogen, with a quadrupolar moment, is not able to diffuse through the rubbery polymeric network at cryogenic temperatures. Despite the inaccessibility of nitrogen to the embedded MOF crystals, this observation does not necessarily reflect the real accessibility of the porous network in the composite material. Based on our previous experience, adsorption of non-polar molecules (for instance, hydrocarbons) constitutes a complementary tool to evaluate the porous structure in these MOF@polymer nanocomposites. Figure 4 shows the ethylene adsorption/desorption isotherms at 25 ºC for the pure PU, UiO-67 and the nanocomposite. These results show that, contrary to N2, ethylene is indeed able to access the inner porous structure in this kind of material. Whereas the pure PU film exhibits an adsorption capacity close to 0 mmol/g, UiO-67 nanoparticles are able to adsorb up to 1.31 mmol/g at a pressure of 1 bar. For the UiO-67@PU nanocomposite sample, the total adsorption capacity for ethylene at 1 bar is ca. 0.18 mmol/g. After normalization to the total amount of MOF (considering that the composite contains ca. 30 wt.%), this value scales up to a total uptake of 0.59 mmol/g_MOF. Compared to the pure UiO-67, this result constitutes a reduction of 55% in the adsorption capacity of the embedded crystals, i.e., the embedded nanocrystals are indeed accessible to gas molecules, although only partially.
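For transparency, the normalization behind these figures is simple arithmetic (our restatement of the numbers above):

C2H4 uptake per gram of MOF ≈ 0.18 / 0.30 ≈ 0.6 mmol/g_MOF (0.59 mmol/g_MOF with the unrounded uptake),
accessibility ≈ 0.59 / 1.31 ≈ 45%, i.e., a ca. 55% reduction with respect to the free MOF.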
Figure 4. Ethylene adsorption (filled symbols)-desorption (open symbols) isotherms at 25 ºC for as-synthesized UiO-67, PU and UiO-67@PU films.
The maximum adsorption capacity obtained from the Langmuir model reaches a value of 58.4 mg of brimonidine per gram of film, i.e., 194.7 mg of brimonidine per gram of UiO-67 (considering the nominal value of 30 wt.% of UiO-67 in the film). This value differs from that reported in the literature for pure UiO-67 nanoparticles (ca. 600 mg_brimonidine/g_MOF). 25 The reduction in the adsorption capacity for the nanocomposite (around 67%) is in close agreement with the gas-phase ethylene adsorption measurements described above (ethylene was able to access 45% of the porosity, whereas brimonidine accessed only 32.5% of the MOF porous network). Although these numbers must be optimized, this finding constitutes an important development, elucidating the potential application of these MOF-doped polymeric matrices for liquid-phase adsorption/desorption processes. Even though these processes are performed in the presence of a solvent (for instance, an aqueous solution), the embedded MOFs are able to preserve a similar accessibility to the target molecule (e.g., an ocular drug) compared to similar measurements in the gas phase, i.e., in the absence of solvent. These results suggest that the UiO-67 cavities are able to host both ethylene (molecular size of 4.7 × 9.8 Å) and brimonidine (3.28 × 4.18 × 4.84 Å) to a similar extent. 39,40 Compared to the neat PU polymer, the incorporation of UiO-67 nanofillers gives rise to a 60-fold increase in the adsorption capacity for brimonidine tartrate.
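The per-MOF normalization and accessibility quoted here follow directly (our restatement):

194.7 mg/g_MOF = 58.4 mg/g_film / 0.30 g_MOF/g_film,
accessibility to brimonidine ≈ 194.7 / 600 ≈ 32.5% of the capacity of the free MOF.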
Figure 5. Brimonidine liquid-phase adsorption isotherms in PU and UiO-67@PU films at 25 ºC (C0 = 1500 ppm).
As can be observed in Figure 6, the UiO-67@PU nanocomposite exhibits a fast release (up to 7% of the total uptake) in the first minute of the experiment. Afterwards, there is a continuous release with time, up to a maximum of 10% of the total brimonidine retained after 14 days of exposure. The large release in the first few hours must be attributed to brimonidine weakly interacting with the nanocomposite and/or adsorbed in the external layers/pores of the film. Considering the traditional topical administration of brimonidine, i.e., a patient must take one droplet of a 2 mg/mL brimonidine solution (Alphagan P®, Allergan) every 8 h, this means 0.3 mg of brimonidine per day, or 4.2 mg in 14 days. 7 Taking into account the total uptake of 58.4 mg/g for our composite, a release of 10% (5.8 mg/g) after 14 days is within the needs of a normal patient with glaucoma, thus validating our approach. At this point it is important to highlight that we cannot exclude the possibility that some brimonidine is already removed/released from the loaded film during the washing step performed after the loading and before the release experiments (the washing step was designed to remove exclusively the brimonidine retained on the external surface of the film).
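The dosage comparison can be made explicit with a short back-of-the-envelope check (the ~0.05 mL drop volume is our assumption; it is the typical value that reproduces the 0.3 mg/day figure cited above):

one drop ≈ 0.05 mL × 2 mg/mL = 0.1 mg; 3 drops/day → 0.3 mg/day → 4.2 mg over 14 days;
for the composite, 58.4 mg/g × 10% ≈ 5.8 mg released per gram of film over 14 days, so a ~1 g insert would match the 14-day topical dose.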
Figure 6. Brimonidine tartrate release kinetics at 25 ºC in physiological medium (PBS); loading concentration 1500 ppm.

At this point, the open questions remain: the compatibility of the drug with the composite, the stability of the MOF structure after the loading process and, finally, the potential location of the drug molecule within the composite system. The next sections are devoted to answering these questions.
Figure 7. Scheme of the possible scenarios for brimonidine adsorption in MOF@polymer composites: (a) adsorption only in the peripheral MOF crystals, (b) fully inaccessible and (c) fully accessible embedded MOF nanocrystals.
Figure 8. Synchrotron X-ray powder diffraction patterns of the UiO-67@PU film before and after being exposed to the brimonidine solution.

Synchrotron X-ray diffraction measurements were performed in order to elucidate the structural parameters of the embedded UiO-67 crystals before and after the loading of brimonidine. As can be observed in Figure 8, both patterns are rather similar, even after exposure to the brimonidine aqueous solution for several days. These results are contrary to the performance of the pure MOF (Figure S6), where a significant structural deterioration was identified after 1 day in contact with water, and they confirm the improved structural stability of UiO-67 upon encapsulation in the PU matrix. 25 Although the cavities in UiO-67 (octahedral of 2.3 nm and tetrahedral of 1.15 nm) are large enough to accommodate the brimonidine molecule, the open question at this point is how to ascertain whether brimonidine is able to take advantage of these cavities. 32 Synchrotron X-ray diffraction measurements of pure brimonidine tartrate (Figure S7) show a rich XRD pattern with a large number of peaks in the 2θ range 2°-18°, confirming the high crystallinity of this molecule. The absence of these peaks in the SXRPD pattern of the brimonidine-loaded UiO-67@PU nanocomposite (Figure 8) could a priori be taken as evidence of the absence of brimonidine both in the polymeric network and in the embedded MOF nanocrystals. However, this observation would be in contradiction with the brimonidine adsorption measurements reported in Figure 5. This can be explained by the amorphization of the drug upon adsorption, which would account for the absence of peaks in the SXRPD pattern. This hypothesis would be in agreement with the encapsulation of the drug in the MOF cavities, with the associated limitation for these molecules to arrange in a periodic fashion. These conclusions are also supported by previous studies dealing with the adsorption/release of brimonidine through ocular devices, suggesting the transformation of crystalline brimonidine into an amorphous phase once it is adsorbed into the material. [45][46][47]
Pure UiO-67 crystals have a cubic unit cell with lattice parameters a = b = c = 26.8447(9) Å. As can be observed, the lattice parameters remain rather similar after incorporation of the UiO-67 crystals in the polymeric matrix, in close agreement with the high quality of the crystals described in Figure 1. Interestingly, the lattice parameters do not change after exposure of the UiO-67@PU nanocomposite to an aqueous solution of brimonidine. Although these results confirm the large stability of the UiO-67 nanocrystals in an aqueous environment upon incorporation in the PU matrix, they are not conclusive about the location of brimonidine upon adsorption. Unfortunately, Rietveld refinement analysis of the embedded crystals does not allow us to answer this question due to the limited quality of the SXRPD pattern.

Table 1. Summary of structural parameters and adsorption performance of UiO-67, and of the UiO-67@PU film before and after loading with brimonidine.
In addition to the N-H stretching peak, there are two contributions at 1724 cm⁻¹ and 1696 cm⁻¹ due to the poly(caprolactone) ester bond, and the -CH stretching vibration at 2944 cm⁻¹, among others. 48,49 The characteristic peaks of UiO-67 can be observed at 1594 cm⁻¹, 1528 cm⁻¹ and 1411 cm⁻¹, due to the stretching vibrations of the carboxylate group of the ligands, and at 815 cm⁻¹, 766 cm⁻¹ and 652 cm⁻¹, due to the Zr-O stretching vibrations. 50,51

Figure 9. FTIR spectra of (a) UiO-67 (bottom), PU (middle) and the UiO-67@PU film (upper), and (b) the UiO-67@PU film before (bottom) and after (upper) loading with brimonidine.

As already reported in the literature, brimonidine tartrate also presents characteristic vibrations in the IR range. These include peaks at 3212 and 3268 cm⁻¹ owing to the N-H stretching vibration of the secondary amine groups (RR'-NH). Peaks around 1650 cm⁻¹ are attributed to C=O stretching, and the -CN stretching appears at 1284 cm⁻¹. 52-54 The most remarkable feature of the FTIR spectrum of UiO-67@PU after loading brimonidine is, in addition to the bands described above due to PU and UiO-67, the presence of a wide contribution around 3575-3074 cm⁻¹. This broad contribution could be associated with the overlapping of signals from adsorbed H2O (O-H stretching at 3404 cm⁻¹) and the -NH stretching vibrations characteristic of the urea and urethane bonds (3333 cm⁻¹) in PU. 48,49 However, taking into account that the brimonidine-loaded sample was vacuum dried at 60 ºC before the FTIR measurement, and the absence of this wide contribution in the drug-free nanocomposite film, this broad contribution must be unambiguously attributed to the presence of brimonidine chemically interacting with the composite via hydrogen bonding with surface oxygen and nitrogen groups. This finding is supported by the presence of a new peak at 1650 cm⁻¹ (solid line in Figure 9b) in the loaded film, due to the C=O groups of the brimonidine tartrate. These assignments are in perfect agreement with previous studies on NH2-MIL-88(Fe) loaded with brimonidine. 53 In summary, the FTIR spectra clearly confirm the presence of the drug in the UiO-67@PU film, although its real location, either in the polymeric matrix or in the UiO-67 network, cannot be easily identified.

Conclusions

We have successfully developed a novel UiO-67-based polyurethane film with an excellent adsorption/release performance for an ocular drug such as brimonidine tartrate. Synchrotron X-ray powder diffraction measurements confirm the high quality of the MOF nanocrystals when embedded in a hydrophobic polymer such as PU, and their improved stability in an aqueous environment compared to the pure MOF. Although the inner porous structure is not accessible to nitrogen, with its quadrupole moment, this is not the case for the adsorption of non-polar molecules (e.g., hydrocarbons) at room temperature. Although the partial accessibility of the embedded MOFs limits the brimonidine adsorption performance, the UiO-67@PU composite gives rise to a 60-fold improvement compared to the neat PU film. Synchrotron XRPD, TGA and FTIR measurements of the composite before and after loading brimonidine confirm the presence of the drug within the UiO-67@PU film, although the real role of the polymer matrix and the UiO-67 nanocrystals cannot be conclusively confirmed. The total brimonidine uptake of the composite is as high as 58.4 mg_BRI per gram of composite, or 194.7 mg_BRI per gram of MOF.
These liquid-phase results are highly promising and open the door to the design of novel polymeric inserts with functional properties and improved performance (for instance, with drug delivery properties), to be applied in a number of ophthalmological disorders, either as a component of contact lenses, in the composition of lacrimal stoppers (e.g., punctal plugs) or in sub-tenon inserts.
Figure S2. Photographs of the different samples prepared by doctor blading: (a) UiO-67@PU and (b) PU films.

Figure S3. Deconvolution of the DTGA profile for the UiO-67@PU film.

Figure S4. Nitrogen adsorption (filled symbols)-desorption (open symbols) isotherms at -196 ºC for UiO-67 and the UiO-67@PU film.

Figure S5. Brimonidine adsorption kinetics in the UiO-67@PU film at different initial concentrations.

Figure S8. TGA-DTGA profiles for brimonidine tartrate.

Figure S9. TGA-DTGA profiles for the UiO-67@PU film before and after loading with brimonidine.
Figure S10. Deconvolution of the DTGA profile of the brimonidine-loaded UiO-67@PU film.

• Langmuir model for the brimonidine adsorption isotherm

Adsorption isotherms are defined as the mathematical relationship between the mass of solute adsorbed per unit mass of adsorbent and the solute concentration remaining in the solution once equilibrium has been reached at constant temperature. 2 The most widely used isotherm models for liquid-solid systems are the Langmuir, Freundlich and Prausnitz-Radke models. 3 The Langmuir model (4) was theoretically developed based on the following assumptions: (i) the adsorption occurs at specific sites on the surface of the adsorbent, (ii) only one molecule is adsorbed at each active site, (iii) there are no interactions between adjacent adsorbed molecules and (iv) the adsorption heat is the same for all active sites. This model is mathematically represented as:

q_eq = q_max × K × C_eq / (1 + K × C_eq)    (4)

where C_eq (mg/L) is the concentration of the solute after equilibrium has been reached, q_eq (mg/g) is the mass of solute adsorbed per unit mass of adsorbent, q_max (mg/g) is the maximum amount of solute that can be adsorbed by the adsorbent, C_0 is the initial concentration and K (L/mg) is the Langmuir constant, related to the heat of adsorption. The equation was solved using the Statistica 10 software of StatSoft Inc. by nonlinear estimation with the Rosenbrock and quasi-Newton estimation methods. The maximum capacity obtained in this way for brimonidine adsorption in the UiO-67@PU nanocomposite films is the 58.4 mg/g value quoted in the main text.

• Brimonidine chromatogram

Chromatographic conditions used for the quantification of brimonidine were based on the method developed by Karamanos et al. 4
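For readers without access to Statistica, the same nonlinear Langmuir fit of Eq. (4) can be reproduced with standard open-source tools. The sketch below uses scipy.optimize.curve_fit and is a minimal illustration only: the concentration and uptake arrays are synthetic placeholder data, not the measured UiO-67@PU isotherm, and the initial guesses are our choices.

```python
# Minimal sketch of a nonlinear Langmuir fit, Eq. (4); synthetic data,
# not the measured UiO-67@PU isotherm.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c_eq, q_max, k):
    """q_eq = q_max * K * C_eq / (1 + K * C_eq)."""
    return q_max * k * c_eq / (1.0 + k * c_eq)

# Placeholder equilibrium concentrations (mg/L) and uptakes (mg/g).
c_eq = np.array([120.0, 310.0, 520.0, 740.0, 1180.0])
q_eq = np.array([22.0, 36.0, 44.0, 49.0, 54.0])

# Initial guesses: q_max ~ highest measured uptake, K ~ 1/median(C_eq).
popt, pcov = curve_fit(langmuir, c_eq, q_eq, p0=[q_eq.max(), 1.0 / 500.0])
q_max_fit, k_fit = popt
print(f"q_max = {q_max_fit:.1f} mg/g, K = {k_fit:.4f} L/mg")
```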
Interestingly, the main decomposition peak of the UiO-67@PU composite is not symmetric, and clear shoulders can be appreciated at around 217 ºC and 278 ºC, in addition to the main contribution at 252 ºC. Taking into account that 70 wt.% of the composite corresponds to PU, the main contribution at 252 ºC must be attributed to the decomposition of the polymeric matrix. Compared to the pure polymer (ca. 337 ºC), these results indicate a clear shift to lower temperatures upon incorporation of the MOF nanofillers, in close agreement with previous studies reported in the literature.
Table S1. Thermogravimetric results of the different samples evaluated.

Sample       1st stage              2nd stage              3rd stage              Tm (ºC)
             ΔT (ºC)    ΔW (wt.%)   ΔT (ºC)    ΔW (wt.%)   ΔT (ºC)    ΔW (wt.%)
UiO-67       30-200     25.3        200-440    4.7         440-600    31.5       540
PU           30-190     0.4         235-500    94.7        500-600    0.7        337
UiO-67@PU    30-190     2.7         190-370    68.9        455-600    8.4        252

ΔT: temperature range of the thermal decomposition. ΔW: total weight loss in the corresponding decomposition stage. Tm: degradation temperature corresponding to the maximum weight-loss rate of the DTG curve.
Acknowledgments

The authors would like to acknowledge financial support from the MINECO (MAT2016-
References

1. Hertzog, L. H.; Albrecht, K. G.; LaBree, L.; Lee, P. P. Glaucoma Care and Conformance with Preferred Practice Patterns: Examination of the Private, Community-Based Ophthalmologist. Ophthalmology 1996, 103 (7), 1009-1013.
2. Quigley, H. A.; Broman, A. T. The Number of People with Glaucoma Worldwide in 2010 and 2020. Br. J. Ophthalmol. 2006, 90 (3), 262-267.
3. Clineschmidt, C. M.; Williams, R. D.; Snyder, E.; Adamsons, I. A. A Randomized Trial in Patients Inadequately Controlled with Timolol Alone Comparing the Dorzolamide-Timolol Combination to Monotherapy with Timolol or Dorzolamide. Ophthalmology 1998, 105 (10), 1952-1959.
4. Bourlais, C. L.; Acar, L.; Zia, H.; Sado, P. A.; Needham, T.; Leverge, R. Ophthalmic Drug Delivery Systems--Recent Advances. Prog. Retin. Eye Res. 1998, 17 (1), 33-58.
5. Gulsen, D.; Chauhan, A. Ophthalmic Drug Delivery through Contact Lenses. Invest. Ophthalmol. Vis. Sci. 2004, 45 (7), 2342-2347.
6. Gaudana, R.; Ananthula, H. K.; Parenky, A.; Mitra, A. K. Ocular Drug Delivery. AAPS J. 2010, 12 (3), 348-360.
7. Urtti, A. Challenges and Obstacles of Ocular Pharmacokinetics and Drug Delivery. Adv. Drug Deliv. Rev. 2006, 58 (11), 1131-1135.
8. De, T. K.; Rodman, D. J.; Holm, B. A.; Prasad, P. N.; Bergey, E. J. Brimonidine Formulation in Polyacrylic Acid Nanoparticles for Ophthalmic Delivery. J. Microencapsul. 2003, 20 (3), 361-374.
9. Singh, K. H.; Shinde, U. A. Chitosan Nanoparticles for Controlled Delivery of Brimonidine Tartrate to the Ocular Membrane. Pharmazie 2011, 66 (8), 594-599.
10. Prabhu, P.; Nitish Kumar, R.; Koland, M.; Harish, N. M.; Vijayanarayan, K.; Dhondge, G.; Charyulu, R. N. Preparation and Evaluation of Nano-Vesicles of Brimonidine Tartrate as an Ocular Drug Delivery System. J. Young Pharm. 2010, 2 (4), 356-361.
11. Sun, J.; Lei, Y.; Dai, Z.; Liu, X.; Huang, T.; Wu, J.; Xu, Z. P.; Sun, X. Sustained Release of Brimonidine from a New Composite Drug Delivery System for Treatment of Glaucoma. ACS Appl. Mater. Interfaces 2017, 9 (9), 7990-7999.
12. Ghate, D.; Edelhauser, H. F. Ocular Drug Delivery. Expert Opin. Drug Deliv. 2006, 3 (2), 275-287.
13. Diebold, Y.; Jarrín, M.; Sáez, V.; Carvalho, E. L. S.; Orea, M.; Calonge, M.; Seijo, B.; Alonso, M. J. Ocular Drug Delivery by Liposome-Chitosan Nanoparticle Complexes (LCS-NP). Biomaterials 2007, 28 (8), 1553-1564.
14. Sun, S.; Li, J.; Li, X.; Lan, B.; Zhou, S.; Meng, Y.; Cheng, L. Episcleral Drug Film for Better-Targeted Ocular Drug Delivery and Controlled Release Using Multilayered Poly-ε-Caprolactone (PCL). Acta Biomater. 2016, 37, 143-154.
15. Saettone, M. F.; Salminen, L. Ocular Inserts for Topical Delivery. Adv. Drug Deliv. Rev. 1995, 16 (1), 95-106.
16. Brown, H. S. Visual Effects of Pilocarpine in Glaucoma. Arch. Ophthalmol. 1976, 94 (10), 1716.
17. Pollack, I. P.; Quigley, H. A.; Harbin, T. S. The Ocusert Pilocarpine System. South. Med. J. 1976, 69 (10), 1296-1298.
18. Mehta, P.; Justo, L.; Walsh, S.; Arshad, M. S.; Wilson, C. G.; O'Sullivan, C. K.; Moghimi, S. M.; Vizirianakis, I. S.; Avgoustakis, K.; Fatouros, D. G.; Ahmad, Z. New Platforms for Multi-Functional Ocular Lenses: Engineering Double-Sided Functionalized Nano-Coatings. J. Drug Target. 2015, 23 (4), 305-310.
19. Rosa dos Santos, J.-F.; Alvarez-Lorenzo, C.; Silva, M.; Balsa, L.; Couceiro, J.; Torres-Labandeira, J.-J.; Concheiro, A. Soft Contact Lenses Functionalized with Pendant Cyclodextrins for Controlled Drug Delivery. Biomaterials 2009, 30 (7), 1348-1355.
20. Verestiuc, L.; Nastasescu, O.; Barbu, E.; Sarvaiya, I.; Green, K. L.; Tsibouklis, J. Functionalized Chitosan/NIPAM (HEMA) Hybrid Polymer Networks as Inserts for Ocular Drug Delivery: Synthesis, in Vitro Assessment, and in Vivo Evaluation. J. Biomed. Mater. Res. Part A 2006, 77A (4), 726-735.
21. Sun, C.-Y.; Qin, C.; Wang, X.-L.; Su, Z.-M. Metal-Organic Frameworks as Potential Drug Delivery Systems. Expert Opin. Drug Deliv. 2013, 10 (1), 89-101.
22. Zhou, H.-C.; Long, J. R.; Yaghi, O. M. Introduction to Metal-Organic Frameworks. Chem. Rev. 2012, 112 (2), 673-674.
23. Horcajada, P.; Serre, C.; Vallet-Regí, M.; Sebban, M.; Taulelle, F.; Férey, G. Metal-Organic Frameworks as Efficient Materials for Drug Delivery. Angew. Chemie Int. Ed. 2006, 45 (36), 5974-5978.
24. Sun, Y.; Zheng, L.; Yang, Y.; Qian, X.; Fu, T.; Li, X.; Yang, Z.; Yan, H.; Cui, C.; Tan, W. Metal-Organic Framework Nanocarriers for Drug Delivery in Biomedical Applications. Nano-Micro Lett. 2020, 12 (1), 103.
25. Gandara-Loe, J.; Ortuño-Lizarán, I.; Fernández-Sanchez, L.; Alió, J. L.; Cuenca, N.; Vega-Estrada, A.; Silvestre-Albero, J. Metal-Organic Frameworks as Drug Delivery Platforms for Ocular Therapeutics. ACS Appl. Mater. Interfaces 2019, 11 (2), 1924-1931.
26. Chavan, S.; Vitillo, J. G.; Gianolio, D.; Zavorotynska, O.; Civalleri, B.; Jakobsen, S.; Nilsen, M. H.; Valenzano, L.; Lamberti, C.; Lillerud, K. P.; Bordiga, S. H2 Storage in Isostructural UiO-67 and UiO-66 MOFs. Phys. Chem. Chem. Phys. 2012, 14 (5), 1614-1626.
27. Lee, V. H. L.; Robinson, J. R. Topical Ocular Drug Delivery: Recent Developments and Future Challenges. J. Ocul. Pharmacol. Ther. 1986, 2 (1), 67-108.
28. Xu, J.; Xue, Y.; Hu, G.; Lin, T.; Gou, J.; Yin, T.; He, H.; Zhang, Y.; Tang, X. A Comprehensive Review on Contact Lens for Ophthalmic Drug Delivery. J. Control. Release 2018, 281, 97-118.
Metal-Organic Framework Nanosheets in Polymer Composite Materials for Gas Separation. T Rodenas, I Luz, G Prieto, B Seoane, H Miro, A Corma, F Kapteijn, F X Llabrés I Xamena, J Gascon, Nat. Mater. 141Rodenas, T.; Luz, I.; Prieto, G.; Seoane, B.; Miro, H.; Corma, A.; Kapteijn, F.; Llabrés i Xamena, F. X.; Gascon, J. Metal-Organic Framework Nanosheets in Polymer Composite Materials for Gas Separation. Nat. Mater. 2015, 14 (1), 48- 55.
Mixed-Matrix Membranes of Zeolitic Imidazolate Framework (ZIF-8)/Matrimid Nanocomposite: Thermo-Mechanical Stability and Viscoelasticity Underpinning Membrane Separation Performance. E M Mahdi, J.-C Tan, J. Memb. Sci. 498Mahdi, E. M.; Tan, J.-C. Mixed-Matrix Membranes of Zeolitic Imidazolate Framework (ZIF-8)/Matrimid Nanocomposite: Thermo-Mechanical Stability and Viscoelasticity Underpinning Membrane Separation Performance. J. Memb. Sci. 2016, 498, 276-290.
Elucidating the Drug Release from Metal-Organic Framework Nanocomposites via In Situ Synchrotron Microspectroscopy and Theoretical Modeling. B E Souza, L Donà, K Titov, P Bruzzese, Z Zeng, Y Zhang, A S Babal, A F Möslein, M D Frogley, M Wolna, G Cinque, B Civalleri, J.-C Tan, ACS Appl. Mater. Interfaces. 20204Souza, B. E.; Donà, L.; Titov, K.; Bruzzese, P.; Zeng, Z.; Zhang, Y.; Babal, A. S.; Möslein, A. F.; Frogley, M. D.; Wolna, M.; Cinque, G.; Civalleri, B.; Tan, J.-C. Elucidating the Drug Release from Metal-Organic Framework Nanocomposites via In Situ Synchrotron Microspectroscopy and Theoretical Modeling. ACS Appl. Mater. Interfaces 2020, 12 (4), 5147-5156.
A Facile Synthesis of UiO-66, UiO-67 and Their Derivatives. M J Katz, Z J Brown, Y J Colón, P W Siu, K A Scheidt, R Q Snurr, J T Hupp, O K Farha, Chem. Commun. 49829449Katz, M. J.; Brown, Z. J.; Colón, Y. J.; Siu, P. W.; Scheidt, K. A.; Snurr, R. Q.; Hupp, J. T.; Farha, O. K. A Facile Synthesis of UiO-66, UiO-67 and Their Derivatives. Chem. Commun. 2013, 49 (82), 9449.
Denny Jr, M S Cohen, S , Situ Modification of Metal-Organic Frameworks in Mixed-Matrix Membranes. Angew. Chemie Int. 54Denny Jr., M. S.; Cohen, S. M. In Situ Modification of Metal-Organic Frameworks in Mixed-Matrix Membranes. Angew. Chemie Int. Ed. 2015, 54 (31), 9029-9032.
Dynamic Molecular Interactions between Polyurethane and ZIF-8 in a Polymer-MOF Nanocomposite: Microstructural, Thermo-Mechanical and Viscoelastic Effects. E M Mahdi, J.-C Tan, 97Polymer (Guildf)Mahdi, E. M.; Tan, J.-C. Dynamic Molecular Interactions between Polyurethane and ZIF-8 in a Polymer-MOF Nanocomposite: Microstructural, Thermo- Mechanical and Viscoelastic Effects. Polymer (Guildf). 2016, 97, 31-43.
Development of an HPLC Method for Determining the Alpha2-Adrenergic Receptor Agonist Brimonidine in Blood Serum and Aqueous Humor of the Eye. N K Karamanos, F Lamari, J Katsimpris, S Gartaganis, Biomed. Chromatogr. 131Karamanos, N. K.; Lamari, F.; Katsimpris, J.; Gartaganis, S. Development of an HPLC Method for Determining the Alpha2-Adrenergic Receptor Agonist Brimonidine in Blood Serum and Aqueous Humor of the Eye. Biomed. Chromatogr. 1999, 13 (1), 86-88.
Polymeric Mixed Matrix Membranes Containing Zeolites as a Filler for Gas Separation Applications: A Review. D Bastani, N Esmaeili, M Asadollahi, J. Ind. Eng. Chem. 192Bastani, D.; Esmaeili, N.; Asadollahi, M. Polymeric Mixed Matrix Membranes Containing Zeolites as a Filler for Gas Separation Applications: A Review. J. Ind. Eng. Chem. 2013, 19 (2), 375-393.
Polymer Nanocomposites Functionalised with Nanocrystals of Zeolitic Imidazolate Frameworks as Ethylene Control Agents. E M Mahdi, C Cuadrado-Collados, J Silvestre-Albero, J.-C Tan, Mater. Today Adv. Mahdi, E. M.; Cuadrado-Collados, C.; Silvestre-Albero, J.; Tan, J.-C. Polymer Nanocomposites Functionalised with Nanocrystals of Zeolitic Imidazolate Frameworks as Ethylene Control Agents. Mater. Today Adv. 2019, 2, 100008.
Characterization of Polyurethane Resins by FTIR, TGA, and XRD. G Trovati, E A Sanches, S C Neto, Y P Mascarenhas, G O Chierice, J. Appl. Polym. Sci. 20101Trovati, G.; Sanches, E. A.; Neto, S. C.; Mascarenhas, Y. P.; Chierice, G. O. Characterization of Polyurethane Resins by FTIR, TGA, and XRD. J. Appl. Polym. Sci. 2010, 115 (1), 263-268.
The Basics of General, Organic, and Biological Chemistry; Saylor Fundation. D W Ball, J W Hill, R J Scott, Ball, D. W.; Hill, J. W.; Scott, R. J. The Basics of General, Organic, and Biological Chemistry; Saylor Fundation, 2011.
Molecular Sieving of Ethane from Ethylene through the Molecular Cross-Section Size Differentiation in Gallate-based Metal-Organic Frameworks. Z Bao, J Wang, Z Zhang, H Xing, Q Yang, Y Yang, H Wu, R Krishna, W Zhou, B Chen, Q Ren, Angew. Chemie Int. Ed. 5749Bao, Z.; Wang, J.; Zhang, Z.; Xing, H.; Yang, Q.; Yang, Y.; Wu, H.; Krishna, R.; Zhou, W.; Chen, B.; Ren, Q. Molecular Sieving of Ethane from Ethylene through the Molecular Cross-Section Size Differentiation in Gallate-based Metal-Organic Frameworks. Angew. Chemie Int. Ed. 2018, 57 (49), 16020-16025.
Structural Stability of Metal Organic Frameworks in Aqueous Media -Controlling Factors and Methods to Improve Hydrostability and Hydrothermal Cyclic Stability. N Qadir, S A M Said, H M Bahaidarah, Microporous Mesoporous Mater. 201Qadir, N. ul; Said, S. A. M.; Bahaidarah, H. M. Structural Stability of Metal Organic Frameworks in Aqueous Media -Controlling Factors and Methods to Improve Hydrostability and Hydrothermal Cyclic Stability. Microporous Mesoporous Mater. 2015, 201, 61-90.
Determining the Structural Stability of UiO-67 with Respect to Time: A Solid-State NMR Investigation. M C Lawrence, C Schneider, M J Katz, Chem. Commun. 5228Lawrence, M. C.; Schneider, C.; Katz, M. J. Determining the Structural Stability of UiO-67 with Respect to Time: A Solid-State NMR Investigation. Chem. Commun. 2016, 52 (28), 4971-4974.
Stability and Degradation Mechanisms of Metal-Organic Frameworks Containing the Zr6O4(OH)4 Secondary Building Unit. J B Decoste, G W Peterson, H Jasuja, T G Glover, Y Huang, K S Walton, J. Mater. Chem. A. 2013185642DeCoste, J. B.; Peterson, G. W.; Jasuja, H.; Glover, T. G.; Huang, Y.; Walton, K. S. Stability and Degradation Mechanisms of Metal-Organic Frameworks Containing the Zr6O4(OH)4 Secondary Building Unit. J. Mater. Chem. A 2013, 1 (18), 5642.
Are Zr 6 -Based MOFs Water Stable? Linker Hydrolysis vs. Capillary-Force-Driven Channel Collapse. J E Mondloch, M J Katz, N Planas, D Semrouni, L Gagliardi, J T Hupp, O K Farha, Chem. Commun. 50648944Mondloch, J. E.; Katz, M. J.; Planas, N.; Semrouni, D.; Gagliardi, L.; Hupp, J. T.; Farha, O. K. Are Zr 6 -Based MOFs Water Stable? Linker Hydrolysis vs. Capillary- Force-Driven Channel Collapse. Chem. Commun. 2014, 50 (64), 8944.
Development and Evaluation of HPMC Based Matrices for Transdermal Patches of Tramadol. A R Chandak, P R P Verma, Clin. Res. Regul. Aff. 251Chandak, A. R.; Verma, P. R. P. Development and Evaluation of HPMC Based Matrices for Transdermal Patches of Tramadol. Clin. Res. Regul. Aff. 2008, 25 (1), 13-30.
Biodegradable Ocular Inserts for Sustained Delivery of Brimonidine Tartarate: Preparation and In Vitro/In Vivo Evaluation. M H Aburahma, A A Mahmoud, AAPS PharmSciTech. 124Aburahma, M. H.; Mahmoud, A. A. Biodegradable Ocular Inserts for Sustained Delivery of Brimonidine Tartarate: Preparation and In Vitro/In Vivo Evaluation. AAPS PharmSciTech 2011, 12 (4), 1335-1347.
Integration of Accessible Secondary Metal Sites into MOFs for H 2 S Removal. G Nickerl, M Leistner, S Helten, V Bon, I Senkovska, S Kaskel, Inorg. Chem. Front. 20144Nickerl, G.; Leistner, M.; Helten, S.; Bon, V.; Senkovska, I.; Kaskel, S. Integration of Accessible Secondary Metal Sites into MOFs for H 2 S Removal. Inorg. Chem. Front. 2014, 1 (4), 325-330.
Porous Biodegradable Polyurethane Nanocomposites: Preparation, Characterization, and Biocompatibility Tests. R C M Dias, A M Góes, R Serakides, E Ayres, R L Oréfice, Mater. Res. 132Dias, R. C. M.; Góes, A. M.; Serakides, R.; Ayres, E.; Oréfice, R. L. Porous Biodegradable Polyurethane Nanocomposites: Preparation, Characterization, and Biocompatibility Tests. Mater. Res. 2010, 13 (2), 211-218.
Thermal Analysis of Polyurethane Block Polymers. R W Seymour, S L Cooper, Macromolecules. 61Seymour, R. W.; Cooper, S. L. Thermal Analysis of Polyurethane Block Polymers. Macromolecules 1973, 6 (1), 48-53.
Effective Adsorption and Enhanced Removal of Organophosphorus Pesticides from Aqueous Solution by Zr-Based MOFs of UiO-67. X Zhu, B Li, J Yang, Y Li, W Zhao, J Shi, J Gu, ACS Appl. Mater. Interfaces. 71Zhu, X.; Li, B.; Yang, J.; Li, Y.; Zhao, W.; Shi, J.; Gu, J. Effective Adsorption and Enhanced Removal of Organophosphorus Pesticides from Aqueous Solution by Zr-Based MOFs of UiO-67. ACS Appl. Mater. Interfaces 2015, 7 (1), 223-231.
Water-Stable Nanoscale Zirconium-Based Metal-Organic Frameworks for the Effective Removal of Glyphosate from Aqueous Media. A Pankajakshan, M Sinha, A A Ojha, S Mandal, ACS Omega. 37Pankajakshan, A.; Sinha, M.; Ojha, A. A.; Mandal, S. Water-Stable Nanoscale Zirconium-Based Metal-Organic Frameworks for the Effective Removal of Glyphosate from Aqueous Media. ACS Omega 2018, 3 (7), 7832-7839.
Nanovesicular Formulation of Brimonidine Tartrate for the Management of Glaucoma: In Vitro and In Vivo Evaluation. S Maiti, S Paul, R Mondol, S Ray, B Sa, AAPS PharmSciTech. 122Maiti, S.; Paul, S.; Mondol, R.; Ray, S.; Sa, B. Nanovesicular Formulation of Brimonidine Tartrate for the Management of Glaucoma: In Vitro and In Vivo Evaluation. AAPS PharmSciTech 2011, 12 (2), 755-763.
Metal-Organic Frameworks, NH2-MIL-88(Fe), as Carriers for Ophthalmic Delivery of Brimonidine. S.-N Kim, C G Park, B K Huh, S H Lee, C H Min, Y Y Lee, Y K Kim, K H Park, Y Choy, Bin, Acta Biomater. 79Kim, S.-N.; Park, C. G.; Huh, B. K.; Lee, S. H.; Min, C. H.; Lee, Y. Y.; Kim, Y. K.; Park, K. H.; Choy, Y. Bin. Metal-Organic Frameworks, NH2-MIL-88(Fe), as Carriers for Ophthalmic Delivery of Brimonidine. Acta Biomater. 2018, 79, 344-353.
Proniosomal Gel-Derived Niosomes: An Approach to Sustain and Improve the Ocular Delivery of Brimonidine Tartrate. A Emad Eldeeb, S Salah, M Ghorab, Emad Eldeeb, A.; Salah, S.; Ghorab, M. Proniosomal Gel-Derived Niosomes: An Approach to Sustain and Improve the Ocular Delivery of Brimonidine Tartrate;
Formulation, in-Vitro Characterization, and in-Vivo Pharmacodynamic Study. Drug Deliv. 261Formulation, in-Vitro Characterization, and in-Vivo Pharmacodynamic Study. Drug Deliv. 2019, 26 (1), 509-521.
Metal-Organic Frameworks as Drug Delivery Platforms for Ocular Therapeutics. J Gandara-Loe, ACS Appl. Mater. Interfaces. 11Gandara-Loe, J. et al. Metal-Organic Frameworks as Drug Delivery Platforms for Ocular Therapeutics. ACS Appl. Mater. Interfaces 11, 1924- 1931 (2019).
Adsorption design for wastewater treatment. D O Cooney, Lewis PublishersBoca Raton, FlCooney, D. O. Adsorption design for wastewater treatment. (Boca Raton, Fl. : Lewis Publishers, 1999).
. D D Do, Adsorption Analysis: Equilibria and Kinetics. in Series on Chemical Engineering. 2Published by Imperial College Press and Distributed by World Scientific Publishing CoDo, D. D. Adsorption Analysis: Equilibria and Kinetics. in Series on Chemical Engineering Volume 2, 13-17 (Published by Imperial College Press and Distributed by World Scientific Publishing Co., 1998).
Development of an HPLC method for determining the alpha2-adrenergic receptor agonist brimonidine in blood serum and aqueous humor of the eye. N K Karamanos, F Lamari, J Katsimpris, S Gartaganis, Biomed. Chromatogr. 13Karamanos, N. K., Lamari, F., Katsimpris, J. & Gartaganis, S. Development of an HPLC method for determining the alpha2-adrenergic receptor agonist brimonidine in blood serum and aqueous humor of the eye. Biomed. Chromatogr. 13, 86-88 (1999).
| [] |
[
"Room-Temperature Magnetic Skyrmions in Pt/Co/Cu Multilayers",
"Room-Temperature Magnetic Skyrmions in Pt/Co/Cu Multilayers",
"Room-Temperature Magnetic Skyrmions in Pt/Co/Cu Multilayers",
"Room-Temperature Magnetic Skyrmions in Pt/Co/Cu Multilayers"
] | [
"Shuyu Cheng \nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n\nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n",
"Núria Bagués \nDepartment of Materials Science and Engineering\nThe Ohio State University\n43210Columbus, StructuralOhioUnited States\n\nDepartment of Materials Science and Engineering\nThe Ohio State University\n43210ColumbusOhioUnited States\n",
"Camelia M Selcu \nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n\nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n",
"Jacob B Freyermuth \nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n\nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n",
"Ziling Li ",
"Binbin Wang \nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n\nDepartment of Materials Science and Engineering\nThe Ohio State University\n43210Columbus, StructuralOhioUnited States\n\nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n\nDepartment of Materials Science and Engineering\nThe Ohio State University\n43210ColumbusOhioUnited States\n",
"Shekhar Das \nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n\nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n",
"P Chris Hammel \nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n\nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n",
"Mohit Randeria \nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n\nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n",
"David W Mccomb \nDepartment of Materials Science and Engineering\nThe Ohio State University\n43210Columbus, StructuralOhioUnited States\n\nDepartment of Materials Science and Engineering\nThe Ohio State University\n43210ColumbusOhioUnited States\n",
"Roland K Kawakami \nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n\nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n",
"Shuyu Cheng \nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n\nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n",
"Núria Bagués \nDepartment of Materials Science and Engineering\nThe Ohio State University\n43210Columbus, StructuralOhioUnited States\n\nDepartment of Materials Science and Engineering\nThe Ohio State University\n43210ColumbusOhioUnited States\n",
"Camelia M Selcu \nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n\nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n",
"Jacob B Freyermuth \nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n\nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n",
"Ziling Li ",
"Binbin Wang \nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n\nDepartment of Materials Science and Engineering\nThe Ohio State University\n43210Columbus, StructuralOhioUnited States\n\nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n\nDepartment of Materials Science and Engineering\nThe Ohio State University\n43210ColumbusOhioUnited States\n",
"Shekhar Das \nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n\nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n",
"P Chris Hammel \nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n\nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n",
"Mohit Randeria \nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n\nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n",
"David W Mccomb \nDepartment of Materials Science and Engineering\nThe Ohio State University\n43210Columbus, StructuralOhioUnited States\n\nDepartment of Materials Science and Engineering\nThe Ohio State University\n43210ColumbusOhioUnited States\n",
"Roland K Kawakami \nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n\nDepartment of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States\n"
] | [
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Materials Science and Engineering\nThe Ohio State University\n43210Columbus, StructuralOhioUnited States",
"Department of Materials Science and Engineering\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Materials Science and Engineering\nThe Ohio State University\n43210Columbus, StructuralOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Materials Science and Engineering\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Materials Science and Engineering\nThe Ohio State University\n43210Columbus, StructuralOhioUnited States",
"Department of Materials Science and Engineering\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Materials Science and Engineering\nThe Ohio State University\n43210Columbus, StructuralOhioUnited States",
"Department of Materials Science and Engineering\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Materials Science and Engineering\nThe Ohio State University\n43210Columbus, StructuralOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Materials Science and Engineering\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Materials Science and Engineering\nThe Ohio State University\n43210Columbus, StructuralOhioUnited States",
"Department of Materials Science and Engineering\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States",
"Department of Physics\nThe Ohio State University\n43210ColumbusOhioUnited States"
] | [] | Magnetic skyrmions are promising for next-generation information storage and processing owing to their potential advantages in data storage density, robustness, and energy efficiency. The magnetic multilayers consisting of Pt, Co, and a third metal element X provide an ideal platform to study the skyrmions due to their highly tunable magnetic properties. Here, we report the material parameters needed to achieve room-temperature skyrmions in epitaxial Pt/Co/Cu superlattices grown by molecular beam epitaxy. By tuning the Co thickness and the number of periods, the magnetic easy axis varies from perpendicular to in-plane, and skyrmions are observed in the spin-reorientation transition. The magnetic properties of the Pt/Co/Cu samples are studied by magneto-optic Kerr effect (MOKE) and superconducting quantum interference device (SQUID) magnetometer measurements. Skyrmions are directly imaged by magnetic force microscopy (MFM) and Lorentz transmission electron microscopy (LTEM). The development of room-temperature skyrmions in Pt/Co/Cu multilayers may lead to advances in skyrmion-related research and applications. | null | [
"https://export.arxiv.org/pdf/2303.02117v1.pdf"
] | 257,353,337 | 2303.02117 | a3afa1430187f9504d3fa02bdb3041053fdcbe09 |
Room-Temperature Magnetic Skyrmions in Pt/Co/Cu Multilayers
3 Mar 2023
Shuyu Cheng
Department of Physics
The Ohio State University
43210, Columbus, Ohio, United States

Núria Bagués
Department of Materials Science and Engineering
The Ohio State University
43210, Columbus, Ohio, United States

Camelia M Selcu
Department of Physics
The Ohio State University
43210, Columbus, Ohio, United States

Jacob B Freyermuth
Department of Physics
The Ohio State University
43210, Columbus, Ohio, United States

Ziling Li

Binbin Wang
Department of Physics
The Ohio State University
43210, Columbus, Ohio, United States
Department of Materials Science and Engineering
The Ohio State University
43210, Columbus, Ohio, United States

Shekhar Das
Department of Physics
The Ohio State University
43210, Columbus, Ohio, United States

P Chris Hammel
Department of Physics
The Ohio State University
43210, Columbus, Ohio, United States

Mohit Randeria
Department of Physics
The Ohio State University
43210, Columbus, Ohio, United States

David W Mccomb
Department of Materials Science and Engineering
The Ohio State University
43210, Columbus, Ohio, United States

Roland K Kawakami
Department of Physics
The Ohio State University
43210, Columbus, Ohio, United States

* [email protected]
Magnetic skyrmions are promising for next-generation information storage and processing owing to their potential advantages in data storage density, robustness, and energy efficiency. The magnetic multilayers consisting of Pt, Co, and a third metal element X provide an ideal platform to study the skyrmions due to their highly tunable magnetic properties. Here, we report the material parameters needed to achieve room-temperature skyrmions in epitaxial Pt/Co/Cu superlattices grown by molecular beam epitaxy. By tuning the Co thickness and the number of periods, the magnetic easy axis varies from perpendicular to in-plane, and skyrmions are observed in the spin-reorientation transition. The magnetic properties of the Pt/Co/Cu samples are studied by magneto-optic Kerr effect (MOKE) and superconducting quantum interference device (SQUID) magnetometer measurements. Skyrmions are directly imaged by magnetic force microscopy (MFM) and Lorentz transmission electron microscopy (LTEM). The development of room-temperature skyrmions in Pt/Co/Cu multilayers may lead to advances in skyrmion-related research and applications.

Magnetic skyrmions are topologically protected spin textures that stand out as one of the strongest candidates for next-generation information storage due to their small sizes, thermal stability, and high energy efficiency [1-4]. These special spin textures originate from the complex interplay between magnetic anisotropy, dipolar interactions, applied field, and the Dzyaloshinskii-Moriya interaction (DMI). The DMI plays an important role in skyrmion formation and stabilization, as it favors perpendicular alignment between neighboring spins [5,6]. From a material point of view, the DMI is allowed by asymmetric crystal structures, which occurs in either bulk crystals that are non-centrosymmetric or at interfaces that break inversion symmetry [7]. Therefore, the DMI can be categorized into bulk DMI and interfacial DMI. While the former gives rise to Bloch skyrmions in which the spins twist in the tangential direction [8,9], the latter gives rise to Néel skyrmions in which the spins tumble in the radial direction [10,11].

Since 2009, magnetic skyrmions have been discovered in a variety of materials, including B20-phase materials [8,9,12], two-dimensional materials [13,14], and magnetic bilayers and multilayers [10,11,15,16]. Among these materials, the Pt/Co/X (X = metallic material) magnetic multilayers have drawn much attention because the insertion of the X layers into Pt/Co superlattices generates non-canceling interfacial DMI by breaking the inversion symmetry [10,17,18]. Furthermore, the magnetic properties of Pt/Co/X multilayers can be vastly tuned through varying the thickness of each layer [19,20] and the number of repetitions of Pt/Co/X [21,22], or simply changing the element X [23,24]. So far, the magnetic properties of Pt/Co/X multilayer systems with
various metallic materials X have been reported, including X = Mn [25], Ni [26], Cu [19], Ru [27], Ho [28], Ta [11], W [21], Ir [10], etc. Among the options for the transition metal X, Cu is of particular interest for several reasons. Since the lattice constant of Cu is close to that of Co and Pt, it is possible to grow Pt/Co/Cu multilayers epitaxially along the Pt(111) direction [19]. This enables the layer-by-layer growth of high-quality crystalline Pt/Co/Cu multilayers using molecular beam epitaxy (MBE). It was also reported that Pt/Co/Cu multilayers have no magnetic dead layer, in contrast to multilayers with some other materials as the X layer [23]. For these reasons, Pt/Co/Cu could be a model system to investigate skyrmion properties.
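For orientation, the size of these lattice mismatches can be estimated from bulk fcc lattice constants. The sketch below is our back-of-envelope check, not a calculation from the paper; the strained films will deviate from these bulk values, as the oscillating RHEED lattice constant in Fig. 1c shows.

```python
# Back-of-envelope lattice-mismatch check using bulk room-temperature
# fcc lattice constants (assumed values, not measured on these films).
A_FCC = {"Co": 3.544, "Cu": 3.615, "Pt": 3.924}  # Angstrom

def mismatch(film: str, host: str) -> float:
    """Relative in-plane mismatch (a_film - a_host) / a_host, in percent."""
    return 100.0 * (A_FCC[film] - A_FCC[host]) / A_FCC[host]

for film, host in [("Co", "Pt"), ("Cu", "Pt"), ("Cu", "Co")]:
    print(f"{film} on {host}: {mismatch(film, host):+.1f}%")
# Co on Pt: -9.7%,  Cu on Pt: -7.9%,  Cu on Co: +2.0%
```

The small (+2%) Cu/Co mismatch, compared to the larger mismatches of either metal to Pt, is consistent with the qualitative statement above.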
In this paper, we report the material parameters needed to achieve room-temperature skyrmions in epitaxial Pt/Co/Cu multilayers. By varying the number of periods (N) and the Co thickness (t_Co), the magnetic anisotropy can be tuned between a perpendicular easy axis and an in-plane easy axis. We find that skyrmions form when N and t_Co are adjusted so that the system is in the spin-reorientation transition (SRT) between the perpendicular and in-plane magnetization states. It is notable that the presence of skyrmions is associated with a characteristic wasp-waisted shape of the macroscopic out-of-plane hysteresis loop. In addition, we observe that current pulses can help nucleate skyrmions in the SRT regime. Experimentally, the epitaxial [Pt/Co/Cu]N multilayers were grown by MBE on top of a Pt(111) buffer layer grown on insulating Al2O3(0001) substrates, which allow in-plane current pulses to be applied to the sample. The macroscopic magnetic properties of [Pt/Co/Cu]N multilayers were studied using a combination of magneto-optic Kerr effect (MOKE) and superconducting quantum interference device (SQUID) magnetometry, and the skyrmions were observed using magnetic force microscopy (MFM) and Lorentz transmission electron microscopy (LTEM).

The [Pt/Co/Cu]N multilayers were grown on epitaxial Pt(111) buffer layers on Al2O3(0001) substrates (MTI Corporation) using MBE. Unless otherwise noted, the structure of the multilayer samples is (bottom to top): 5 nm Pt/[6 atomic layers (6 AL) Co/2 AL Cu/2 AL Pt]5/5 nm CaF2 (hereafter [Pt(2)/Co(6)/Cu(2)]5, where the sequence of the layers is from the bottom to the top, and the numbers in parentheses represent the thickness of each layer in units of atomic layers), as shown in Figure 1a. Prior to the growth, the Al2O3(0001) substrates were annealed in air at 1000 °C for 180 minutes and then degassed in the growth chamber at 500 °C for 30 minutes. A 5 nm Pt(111) buffer layer was epitaxially grown on the Al2O3(0001) substrate following the recipe described in [29]. After the samples were cooled down to room temperature, [Pt/Co/Cu]N multilayers were deposited on top of the Pt(111) buffer layer by opening and closing the shutters sequentially. The growth time for each layer was determined by the growth rate, which was calibrated by a quartz crystal deposition monitor. Pt was deposited from an electron-beam evaporator, while Co and Cu were deposited from Knudsen cells. The typical growth rates for Pt, Co, and Cu are 0.9 Å/min, 0.8 Å/min, and 1.0 Å/min, respectively. After growth, 5 nm CaF2 was deposited on the sample to protect it from oxidation. The in situ reflection high-energy electron diffraction (RHEED) pattern was monitored during the growth, as shown in Figure 1b. Streaky RHEED patterns indicate that the [Pt(2)/Co(6)/Cu(2)]5 multilayers grow epitaxially. Furthermore, the in-plane lattice constant extracted from the RHEED pattern during growth shows oscillatory behavior, with decreasing lattice constant during Co layer growth and increasing lattice constant during Cu and Pt layer growth, as shown in Figure 1c. This oscillation of the in-plane lattice constant does not decay during the growth.
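As a rough cross-check of the growth bookkeeping, the quoted rates imply shutter times of order minutes per layer. The sketch below assumes bulk fcc(111) interlayer spacings (d = a/sqrt(3)), which is our simplification; the actual calibration in the experiment came from the quartz crystal monitor, not from this arithmetic.

```python
import math

# Shutter-time estimate implied by the quoted growth rates, assuming
# bulk fcc(111) interlayer spacings d = a / sqrt(3).
RATE = {"Pt": 0.9, "Co": 0.8, "Cu": 1.0}    # Angstrom / min, quoted rates
A = {"Pt": 3.924, "Co": 3.544, "Cu": 3.615}  # bulk fcc a, Angstrom (assumed)

def shutter_time(element: str, n_al: int) -> float:
    """Minutes the shutter stays open to deposit n_al atomic layers."""
    d = A[element] / math.sqrt(3.0)          # one (111) atomic layer
    return n_al * d / RATE[element]

for element, n_al in [("Co", 6), ("Cu", 2), ("Pt", 2)]:
    print(f"{n_al} AL {element}: ~{shutter_time(element, n_al):.1f} min")
# 6 AL Co: ~15.3 min, 2 AL Cu: ~4.2 min, 2 AL Pt: ~5.0 min
```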
The structure of the sample was confirmed by scanning transmission electron microscopy (STEM) imaging of a cross-section sample using a ThermoFisher probe-corrected Themis-Z at 300 kV. The cross-section sample was prepared by Ga ion milling at 30 kV and 5 kV using a focused ion beam (FIB). Figure 1d shows a STEM high-angle annular dark field (HAADF) image of the [Pt(2)/Co(6)/Cu(2)]5 multilayer on top of the Pt buffer layer on Al2O3, viewed along the Al2O3 [1100] axis. The sample exhibits well-defined layered structures. Due to atomic number (Z-) contrast in HAADF images, the Co layers appear dark in the STEM image, while the Pt and Cu layers appear bright.

We first discuss MOKE and SQUID measurements of [Pt(2)/Co(6)/Cu(2)]5 multilayers. Figure 2a shows the polar (red curve) and longitudinal (blue curve) MOKE hysteresis loops of a [Pt(2)/Co(6)/Cu(2)]5 multilayer. The polar MOKE hysteresis loop shows a "wasp-waisted" shape with small remanence. Although the applied magnetic field was limited to 120 mT, which is not sufficient to fully saturate the sample, the hysteresis loop nevertheless captures the main magnetic characteristics of the sample. The wasp-waisted shape of the hysteresis loop is similar to that of Pt/Co/Cu multilayers near the SRT in which magnetic stripe domains were observed [19]. Meanwhile, the longitudinal MOKE hysteresis loop shows almost linear behavior in the 120 mT range, with a much smaller magnitude compared to polar MOKE. This response, in which the magnetization is easier to polarize out-of-plane than in-plane, is due to the presence of perpendicular surface magnetic anisotropy from the Pt/Co interfaces. The results from SQUID measurements of [Pt(2)/Co(6)/Cu(2)]5 are shown in Figure 2b. The out-of-plane hysteresis loop (red curve) shows a similar shape to the polar MOKE loop, with low remanence and a saturation field of 192 mT, while the in-plane hysteresis loop (blue curve) saturates at ~0.5 T. We note that the saturation magnetization of [Pt(2)/Co(6)/Cu(2)]5 multilayers is larger than the bulk Co value (1430 kA/m) [30] by the amount ∆M_Co = 580 kA/m. This comes from the extra magnetic moment of Pt induced by the magnetic proximity effect, which has been reported in Pt/Co multilayers [31]. To quantify this effect, we calculate the magnetic moment of Pt from the following formula [31]:
∆M_Co t_Co = M_Pt t_Pt    (1)
By substituting t_Co = 6.1 nm and t_Pt = 7.3 nm into Eq. (1), we get M_Pt = 0.84 µB/Pt.
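A worked version of Eq. (1) is sketched below. The conversion from kA/m to Bohr magnetons per Pt atom assumes the bulk fcc Pt atomic density, which is our assumption; with it the result lands near, but not exactly at, the quoted 0.84 µB/Pt, so the paper likely used film-specific numbers for the conversion.

```python
# Worked version of Eq. (1): Delta_M_Co * t_Co = M_Pt * t_Pt, followed by a
# conversion to Bohr magnetons per Pt atom assuming bulk fcc Pt density
# (4 atoms per a^3 with a = 3.924 Angstrom) -- an assumption on our part.
MU_B = 9.274e-24             # Bohr magneton, A m^2
delta_M_Co = 580e3           # A/m, excess moment attributed to Pt
t_Co, t_Pt = 6.1e-9, 7.3e-9  # m

M_Pt = delta_M_Co * t_Co / t_Pt   # A/m, Eq. (1) rearranged
a_Pt = 3.924e-10                  # m, bulk fcc Pt lattice constant
n_Pt = 4.0 / a_Pt**3              # Pt atoms per m^3
print(f"M_Pt = {M_Pt/1e3:.0f} kA/m = {M_Pt/(n_Pt*MU_B):.2f} mu_B/Pt")
# ~485 kA/m, ~0.79 mu_B/Pt with these bulk-density assumptions
```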
To investigate how the magnetic properties of [Pt/Co/Cu] multilayers depend on the material parameters, we systematically varied the Co thickness and the number of periods N. For these studies, we maintained a constant thickness of 2 AL for the Pt and Cu layers.
Beginning with the Co thickness, we synthesized a sample series with t_Co = 4, 5, 6, 7, 8, and 9 AL for a fixed number of periods N = 5. Representative polar (red curve) and longitudinal (blue curve) MOKE loops are shown in Figure 3a. At a low Co thickness of 4 AL (sample I), the polar loop is square with a large remanence, while the longitudinal loop has a small signal with no remanence. This indicates a perpendicular (out-of-plane) magnetic easy axis. As t_Co increases, the polar MOKE hysteresis loop evolves to a wasp-waisted shape with low remanence for t_Co = 6 AL (sample III), and eventually to almost linear with negligible hysteresis and zero remanence for t_Co = 8 AL. Meanwhile, the longitudinal MOKE hysteresis loop exhibits an increasing remanence with thickness, going from zero remanence for t_Co = 4 and 6 AL to a loop with substantial remanence and sharp magnetization reversals for 8 AL. This indicates a transition to in-plane magnetization for thicker Co. This spin reorientation transition (SRT) from perpendicular magnetization to in-plane magnetization with increasing Co thickness is summarized by the horizontal points at N = 5 in Figure 3d, with red points signifying perpendicular magnetization, green points signifying the SRT region, and blue points signifying in-plane magnetization.
This thickness-dependent spin reorientation is understood as a competition between the perpendicular magnetic anisotropy (PMA) originating from the Pt/Co interface and the magnetic shape anisotropy favoring in-plane magnetization [32]. Since the magnetic shape anisotropy scales with the Co film thickness and the Co/Pt PMA does not, larger thicknesses will favor in-plane magnetization while smaller thicknesses will favor perpendicular magnetization (a rough estimate of the crossover thickness is sketched after the next paragraph). We also observe a spin reorientation transition that depends on the number of periods N of the [Pt/Co/Cu] multilayer. Interestingly, as N is increased from 3 to 7 while keeping t_Co fixed at 7 AL, a similar transition from perpendicular to in-plane magnetization occurs, as shown in Figure 3b. In this sample series, sample IV with 3 periods has a polar MOKE loop (red curve, top loop) with nearly 100% remanence and a longitudinal MOKE loop (blue curve, top loop) with a weak response, indicating perpendicular magnetization. For sample VI with 5 periods, the polar MOKE loop (red curve, middle loop) has a wasp-waisted shape, and for sample VII with 7 periods, the polar MOKE loop has a weak response (red curve, bottom loop). Meanwhile, the longitudinal MOKE loops (blue curves) for the 5-period and 7-period samples (VI and VII) develop hysteresis. This indicates that the magnetization exhibits a transition from perpendicular to in-plane as N increases.
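To make the thickness competition concrete, a toy estimate of the crossover follows (as promised above). Setting the effective anisotropy K_s/t − µ0 M_S²/2 to zero gives t_c = 2K_s/(µ0 M_S²); the interface anisotropy K_s used below is an assumed, literature-typical value for Pt/Co, not one measured on these samples.

```python
import math

# Toy spin-reorientation thickness from the competition described above:
# interface PMA (K_s / t) vs shape anisotropy (mu0 * M_s^2 / 2).
# K_s is an assumed, literature-typical Pt/Co interface anisotropy.
MU0 = 4.0e-7 * math.pi
K_s = 1.3e-3       # J/m^2, assumed total interface anisotropy per Co layer
M_s = 1430e3       # A/m, bulk Co saturation magnetization
d_Co = 2.05e-10    # m, one fcc(111) atomic layer of Co

t_c = 2.0 * K_s / (MU0 * M_s**2)   # thickness where K_eff changes sign
print(f"t_c ~ {t_c*1e9:.1f} nm ~ {t_c/d_Co:.0f} AL of Co")
# ~1.0 nm, i.e. ~5 AL: the same ballpark as the observed SRT near 6-7 AL
```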
! " ! # $ % & ' ( ) * + , ( - . ' & / 0 1 2 " 1 " " 2 " " 2 " 1 " " 1 2 " 3 ( - . 4 0 " 5 ! " " 5 ! 6 $ 7 8 9 : ; / 9 7 & % ( ) * + , ( - . ' & / 0 ! " ! ! " ! " 5 ! " " 5 ! " 5 ! " " 5 ! < & . = % > ( ? ( ( @ # : - A 0 B C $ - ! 0 B C ; - A 0 D 2 < & . = % > ( ? ? ? ( ( @ # : - A 0 B C $ - E 0 B C ; - A 0 D 2 < & . = % > ( ? F ( ( @ # : - A 0 B C $ - G 0 B C ; - A 0 D 2 C $ ( 4 H 9 I J 7 > K K ( L > = > 7 / > 7 I > (a) ! " # ! ! " # $ % & ' ( ) * + ( & , - . / 0 1 2 . 3 4 5 , + 6 # ! # 7 % - , 5 . / 0 1 2 . 3 4 5 , + 6 8 9 ! 8 ! ! 9 ! ! 9 ! 8 ! ! 8 9 ! : . 3 4 ; 6 # ! # # ! # ! " # ! ! " # ! " # ! ! " # < , 4 = - > . ? @ @ . . A 7 ) 3 B 6 C D % 3 E 6 C D * 3 B 6 F E < , 4 = - > . ? @ . . A 7 ) 3 B 6 C D % 3 E 6 C D * 3 B 6 F 9 < , 4 = - > . @ ? . . A 7 ) 3 B 6 C D % 3 E 6 C D * 3 B 6 F G H * 4 I > 5 . % J . 7 > 5 ( % + K . L > = > & + > & M > (b) ! " # $ % & % $ # " ! ' ( ) * + , - . / 0 , 1 2 + * 3 4 % $ & 5 & " & & " & 5 & % $ & 6 , 1 2 7 4 8 * 2 9 ) : , ; < < < , , = ' > 1 $ 4 ? @ ( 1 5 4 ? @ A 1 $ 4 B " , ' ( ) * + , - . / 0 , C ( D E F > A 3 F D * ) , - . / 0 (c) ! " # $ % & ' ( ) * + , - . / - 0 + , 1 . 2 3 - ' 4 5 6 ! " # $ % 7 . - 8 9 1 : ; < + 3 3 - 8 7 . - = > 8 . ) 1 : - ? @ A + , 3 B 0 + , C + < 2 1 : ( D @ , - E @ F < + 8 1 G @ 8 1 . < H < I C D @ < + - E @ F < + 8 1 G @ 8 1 . < J ; A ,) 1 .
In previous studies, both similar behavior (large N prefers in-plane) [33] and opposite behavior (large N prefers perpendicular) [22,34,35] have been reported for [Co/Pt] and [Co/Pt/X] superlattices. These experiments illustrate that the dependence of the magnetic easy axis on N is a complex issue, which may depend on both intrinsic interactions (e.g., anisotropy, DMI, dipolar) and extrinsic factors (e.g., roughness, crystallinity). Thus, understanding the N-dependence requires further study and is beyond the scope of the current work.
The variation in magnetic anisotropy as a function of t_Co and N in Figure 3d shows that perpendicular magnetization is favored for low t_Co and low N, while in-plane magnetization is favored for high t_Co and high N. Since these two phases occupy opposite corners of the diagram, it is likely that the spin reorientation transition occupies a diagonal region of the diagram. To test this, we synthesized a sample with t_Co = 8 AL and N = 4 (sample VIII) that is to the lower-right of the two samples in the transition region (samples III and VI). Indeed, the polar and longitudinal MOKE loops for this sample (Figure 3c) confirm the wasp-waisted polar loop that signifies the spin reorientation transition region.
The magnetic properties of the samples are summarized in Table I, and the hysteresis loops of each sample are shown in Section 1 of the Supplementary Material (SM). We now focus on the samples in the transition region with wasp-waisted polar hysteresis loops, indicated by the green dots in Figure 3d (and labeled "Skyrmions"). Here, we employed LTEM and MFM to image the magnetic domain structure and investigate the possible presence of skyrmions. For the LTEM measurements [36], focused ion beam (FIB) milling was utilized to thin the sapphire substrate, thus allowing the electron beam to transmit through the sample. Figure 4a-c shows planar-view LTEM images of a [Pt(2)/Co(6)/Cu(2)]5 multilayer sample at various out-of-plane magnetic fields. The straight parallel lines are from the substrate thinning and are not due to magnetic textures. For these images, we have tilted the sample by 20° to achieve magnetic contrast for Néel skyrmions [17,37].
Beginning at zero field (Figure 4a), we observe labyrinth magnetic domains, which evolve into magnetic stripe domains when the field is ramped up to 100 mT (Figure 4b). When the field is raised to 135 mT (Figure 4c), several magnetic bubbles are observed. The image in Figure 4d has improved contrast after subtracting a background image taken in the field-polarized state at 160 mT [38].
These bubbles are identified as DMI-skyrmions of Néel type based on several considerations. First, the magnetic contrast appears only when the sample is tilted, which is a characteristic of Néel skyrmions, as the Lorentz force is tangential. Further, each bubble appears as having a positive and a negative lobe, which is the expected shape for a Néel skyrmion. A line-cut across the lobes, shown in Figure 4e, establishes the size of the bubble to be smaller than 110 nm and also indicates the chirality of the bubble: the trace goes from a minimum on the left to a maximum on the right, while the opposite chirality would interchange left and right. Significantly, all of the bubbles exhibit the same chirality, which indicates that DMI contributes significantly to the formation of the bubbles. These characteristics therefore identify the bubbles as DMI-skyrmions of Néel type.
We further investigate the magnetic domain structure and skyrmion spin textures using MFM. An advantage of MFM over LTEM is that sample preparation, including substrate thinning, is not required. This provides access to the as-grown magnetic properties, as substrate thinning could introduce strain. In addition, MFM measurements are compatible with devices fabricated by photolithography and electron-beam lithography. The measurements were performed in a Bruker MFM equipped with a homemade variable magnet providing an out-of-plane field range of -115 mT to 115 mT. The [Pt/Co/Cu] multilayers were patterned to have micron-wide device channels.
Upon investigating the three samples in the "Skyrmion" region of the phase diagram (green dots in Figure 3d), we found that skyrmions could be nucleated either using field ramp sequences or current pulses. For a [Pt(2)/Co(6)/Cu(2)]5 sample (Figure 5a-c), the out-of-plane magnetic field was first set to -100 mT. Ramping to 0 mT produced labyrinth magnetic domains in the channel (Figure 5a). Increasing the field to 67 mT, the domain structure evolved to magnetic stripe domains with a lower density of domain walls (Figure 5b), consistent with the domain structures observed by LTEM (Figure 4b). As the MFM magnet is unable to reach the 135 mT needed to nucleate skyrmions, we decided to use current pulses to help nucleate skyrmions, as demonstrated previously in other materials [11,39]. Following a single current pulse of 1.39×10^12 A/m² with duration 20 ns, some of the stripe domains have broken up into isolated skyrmions (Figure 5c).
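For a sense of scale, the quoted current density can be converted to an absolute pulse current. The channel width and total metal thickness in the sketch below are our estimates (the text specifies only "micron-wide" channels), so the result is indicative only.

```python
# Converts the quoted pulse current density into an absolute current for an
# assumed channel geometry. The 1 um width and ~15.5 nm conducting thickness
# (5 nm Pt buffer plus five Pt/Co/Cu periods) are our estimates, not values
# stated with the pulse parameters.
J = 1.39e12          # A/m^2, quoted pulse current density
width = 1.0e-6       # m, "micron-wide" channel (assumed exactly 1 um)
thickness = 15.5e-9  # m, full conducting stack including the Pt buffer

current = J * width * thickness
print(f"I ~ {current*1e3:.0f} mA for a 20 ns pulse")  # ~22 mA
```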
For a [Pt(2)/Co(8)/Cu(2)]4 sample, the saturation field is within the range of the MFM magnet, so we investigate skyrmion nucleation by field ramping. Starting by applying a -100 mT field, we ramp up through 0 mT to a final field of 115 mT. Figure 5d-g show MFM images at representative fields. At 70 mT, the magnetic texture is dominated by stripe domains and a few skyrmions are observed. Increasing to 90 mT causes many of the stripes to nucleate into skyrmions or disappear altogether. At 100 mT, some of the skyrmions have disappeared, along with the stripes. Finally, at 115 mT, most of the sample has become field-polarized.
Additional MFM measurements establish the presence of skyrmions in a [Pt(2)/Co(7)/Cu(2)]5 sample (SM Section 3).
Micromagnetic simulations were performed to understand the skyrmion size and its relation to various parameters. Using an exchange stiffness A_ex of 10 pJ/m, uniaxial anisotropy K_u of 608.5 kJ/m³, and saturation magnetization M_S of 821 kA/m (see SM Section 4 for details), we ran simulations using MuMax3 [40] for a variety of applied out-of-plane magnetic fields and DMI strengths D, including 2.15 mJ/m² from [41]. Results of the simulations for the [Pt(2)/Co(6)/Cu(2)]5 sample are shown in Fig. 6. At zero field, the simulations show a labyrinth domain structure, consistent with both the MFM and LTEM (Fig. 6a). As the magnetic field is increased, skyrmions can be stabilized starting around 80-90 mT. Continuing to increase the field causes the skyrmions to shrink, as shown in Fig. 6c and 6d. Although the simulated values of skyrmion size are lower than the experimental values, the trend of decreasing size with increasing magnetic field is consistent for both theory and experiment. Another trend observed in the simulations is an increase of the skyrmion size with increasing values of D. These dependences of skyrmion size on D and magnetic field are in agreement with previous analytical results in Wang et al. [42].
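A quick consistency check on these inputs, separate from the MuMax3 run itself, is the standard thin-film critical DMI, D_c = 4*sqrt(A_ex K_eff)/π, above which the uniform state is unstable toward spin spirals and labyrinth domains:

```python
import math

# Back-of-envelope check: with the quoted simulation inputs, does D exceed
# the standard thin-film critical DMI D_c = 4*sqrt(A*K_eff)/pi? (This is a
# textbook estimate, not part of the MuMax3 simulation.)
MU0 = 4.0e-7 * math.pi
A_ex = 10e-12        # J/m
K_u = 608.5e3        # J/m^3
M_s = 821e3          # A/m

K_eff = K_u - 0.5 * MU0 * M_s**2          # ~185 kJ/m^3
D_c = 4.0 * math.sqrt(A_ex * K_eff) / math.pi
print(f"K_eff = {K_eff/1e3:.0f} kJ/m^3, D_c = {D_c*1e3:.2f} mJ/m^2")
# D = 2.15 mJ/m^2 exceeds D_c ~ 1.73 mJ/m^2, consistent with the
# zero-field labyrinth state in Fig. 6a.
```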
In conclusion, we investigated the formation of room-temperature skyrmions in [Pt/Co/Cu] multilayers. We found that by varying the Co thickness and the number of periods N, the magnetization direction can be tuned from perpendicular to in-plane orientation. Skyrmions were observed in the spin-reorientation transition regime, where the polar MOKE loops have a characteristic wasp-waisted shape. Magnetic imaging by LTEM showed that the magnetic spin texture evolves from labyrinth domains at low perpendicular magnetic fields to isolated skyrmion spin textures at higher fields. By tilting the sample during LTEM, we verified that the skyrmions are Néel type. Since the chirality was the same for all skyrmions, this indicates the importance of interfacial DMI in the skyrmion formation. MFM measurements on patterned devices showed that current pulses could nucleate skyrmions at lower magnetic fields compared to nucleation by magnetic fields alone. Since the Pt/Co/Cu multilayers are epitaxial, free of magnetic dead layers, and exhibit skyrmions, this work establishes a model system for systematically investigating the properties of skyrmions.
S1. MOKE hysteresis loops of the [Pt/Co/Cu] multilayers
The structures and magnetic properties of the [Pt(2)/Co(t_Co)/Cu(2)]N sample series are summarized in Table S1 (a replica of Table 1 in the main text). Fig. S1 shows the MOKE hysteresis loops of [Pt(2)/Co(t_Co)/Cu(2)]N samples with strong easy-axis anisotropy (samples I, II, IV, V). Fig. S2 shows the MOKE hysteresis loops of [Pt(2)/Co(t_Co)/Cu(2)]N samples with easy-plane anisotropy (samples VII, IX, X). Fig. S3 shows the MOKE hysteresis loops of the samples in the spin-reorientation transition region. The SQUID data of [Pt(2)/Co(7)/Cu(2)]5 (sample VI) and [Pt(2)/Co(8)/Cu(2)]4 (sample VIII) are shown in Fig. S4a and S4b, respectively.
! " ! # $ % & ' ( ) * + , ( - . ' & / 0 1 2 " 1 " " 2 " " 2 " 1 " " 1 2 " 3 ( - . 4 0 " 5 ! " " 5 ! 6 $ 7 8 9 : ; / 9 7 & % ( ) * + , ( - . ' & / 0 ! " ! ! " ! ! " ! " 5 ! " " 5 ! " 5 ! " " 5 ! " 5 ! " " 5 ! < & . = % > ( ? ( ( @ # : - A 0 B C $ - ! 0 B C ; - A 0 D 2 < & . = % > ( ? ? ( ( @ # : - A 0 B C $ - 2 0 B C ; - A 0 D 2 < & . = % > ( ? E ( ( @ # : - A 0 B C $ - F 0 B C ; - A 0 D G < & . = % > ( E ( ( @ # : - A 0 B C $ - F 0 B C ; - A 0 D ! FIG.( - . ' & / 0 1 2 " 1 " " 2 " " 2 " 1 " " 1 2 " 3 ( - . 4 0 " 5 ! " " 5 ! 6 $ 7 8 9 : ; / 9 7 & % ( ) * + , ( - . ' & / 0 ! " ! ! " ! " 5 ! " " 5 ! " 5 ! " " 5 ! < & . = % > ( ? @ @ @ ( ( A # : - B 0 C D $ - E 0 C D ; - B 0 F E < & . = % > ( @ G ( ( A # : - B 0 C D $ - H 0 C D ; - B 0 F 2 < & . = % > ( G ( ( A # : - B 0 C D $ - I 0 C D ; - B 0 F 2 FIG.! " " " # " " " " # " " " ! " " " $ % & ' ( ) * + ! # " # ! , " - % & . / 0 1 2 + 3 ! # " # ! 3 $ % &, 4 ) 5 61 / % > ? % % @ A 8 & ! + ) 5 6 & B + ) 5 7 & ! + C D (a) ! " " " # " " " " # " " " ! " " " $ % & ' ( ) * + ! # " # ! , " - % & . / 0 1 2 + 3 ! # " # ! 3 $ % &, 4 ) 5 6
S3. Additional MFM images
We performed additional MFM measurements on a [Pt(2)/Co(7)/Cu(2)]5 sample, as shown in Fig. S5. We started from zero field and ramped the field up gradually to 100 mT. Similar to the [Pt(2)/Co(8)/Cu(2)]4 sample, the magnetic textures were dominated by labyrinth domains at 70 mT (Fig. S5a). At 80 mT, the magnetic textures remained mostly unchanged as compared to 70 mT (Fig. S5b). However, as the field was ramped up to 100 mT, a large fraction of the labyrinth domains broke into skyrmions or disappeared, as shown in Fig. S5c. This measurement demonstrates the existence of skyrmions in the [Pt(2)/Co(7)/Cu(2)]5 sample. The in-plane and out-of-plane saturation fields in the SQUID measurement can be used to find the effective anisotropy, which combines the magnetic anisotropy described by K_u and the shape anisotropy of the sample. The relationship between the effective anisotropy K_eff and the magnetic anisotropy K_u is given by:
K_eff = K_u − (1/2) µ0 M_S²    (S1)
Using this relationship, we find the magnetic anisotropy to be K_u = 608.5 kJ/m³.
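A worked version of this estimate, with Eq. (S1) rearranged for K_u, is sketched below. Reading the hard-axis (in-plane) saturation field as µ0 H_K ≈ 0.5 T off the SQUID loop in the main text is our approximation, so the result only needs to land near the quoted value.

```python
import math

# Eq. (S1) rearranged: K_u = K_eff + mu0*M_s^2/2, with K_eff estimated from
# the hard-axis saturation field via K_eff = mu0*H_K*M_s/2. The 0.5 T value
# is our read of the in-plane SQUID loop; the paper quotes K_u = 608.5 kJ/m^3
# from the actual saturation fields.
MU0 = 4.0e-7 * math.pi
M_s = 821e3          # A/m
mu0_H_K = 0.5        # T, approximate in-plane saturation field

K_eff = mu0_H_K * M_s / 2.0            # J/m^3
K_u = K_eff + 0.5 * MU0 * M_s**2
print(f"K_eff ~ {K_eff/1e3:.0f} kJ/m^3, K_u ~ {K_u/1e3:.0f} kJ/m^3")
# ~205 and ~629 kJ/m^3, within a few percent of the quoted 608.5 kJ/m^3
```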
Next, we determine the exchange stiffness A_ex. The usual way to measure the exchange stiffness is through measuring the transition temperature T_C of the material. However, the transition temperature of this material is above 400 K, which is the maximum temperature that can be applied in the experiment. We do know the exchange stiffness of bulk cobalt, A_ex = 11 pJ/m, which serves as an upper bound for our films: since the thin film will have a lower T_C than the bulk, its exchange stiffness will be lower accordingly.
There is also a relationship between exchange stiffness and skyrmion size, with larger stiffness leading to larger skyrmions. Since the skyrmions seen in the experiment are quite large, the exchange stiffness should be close to the bulk cobalt value, and the simulations suggest a value of A_ex = 10 pJ/m.
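Two standard thin-film length scales give a quick sanity check on this choice of A_ex; the formulas below are textbook estimates, not quantities computed in the paper.

```python
import math

# Sanity check on A_ex = 10 pJ/m against observed texture sizes, using the
# Bloch wall width pi*sqrt(A/K_eff) and the DMI spiral period 4*pi*A/D.
MU0 = 4.0e-7 * math.pi
A_ex, K_u, M_s, D = 10e-12, 608.5e3, 821e3, 2.15e-3

K_eff = K_u - 0.5 * MU0 * M_s**2
wall_width = math.pi * math.sqrt(A_ex / K_eff)
spiral_period = 4.0 * math.pi * A_ex / D
print(f"wall width ~ {wall_width*1e9:.0f} nm, "
      f"spiral period ~ {spiral_period*1e9:.0f} nm")
# ~23 nm and ~58 nm: the same order as the ~100 nm skyrmions seen in LTEM
```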
To understand the skyrmion size as a function of magnetic field and the strength of the DMI in the material, we performed micromagnetic simulations using MuMax3 [? ]. We found that, for a given value of the DMI, the skyrmion size decreases as the magnetic field increases, whereas for a given magnetic field, increasing the strength of the DMI increases the size of the skyrmion. To understand this, we use the skyrmion ansatz of Wang et al. [? ] from their theory paper on skyrmion size. Our simulation results agree with their findings, that the skyrmion radius increases with DMI strength and decreases with magnetic field strength.
FIG. 1. Material growth and structural characterization. (a) Schematic drawing of the sample structure. (b) In situ RHEED patterns during growth. Top: sapphire(0001) with the beam along the [1120] in-plane direction. Middle: Pt(111). Bottom: [Pt(2)/Co(6)/Cu(2)]5 multilayers. (c) In-plane lattice constant extracted from the RHEED streak spacing during the growth. The blue, red, and gray regions correspond to the deposition of Co, Cu, and Pt, respectively. (d) STEM-HAADF image of [Pt(2)/Co(6)/Cu(2)]5 multilayers.
FIG. 2. Magnetic characterizations of [Pt(2)/Co(6)/Cu(2)]5 multilayers. (a) MOKE hysteresis loops of [Pt(2)/Co(6)/Cu(2)]5 multilayers. (b) SQUID hysteresis loops of [Pt(2)/Co(6)/Cu(2)]5 multilayers. The black solid line represents 1430 kA/m, which is the saturation magnetization of bulk Co.
FIG. 3. Relationship between MOKE hysteresis loops and the multilayer structure. (a) Representative polar (red curve) and longitudinal (blue curve) MOKE hysteresis loops of [Pt(2)/Co(tCo)/Cu(2)]5 multilayers with a fixed number of periods N = 5. (b) Representative polar (red curve) and longitudinal (blue curve) MOKE hysteresis loops of [Pt(2)/Co(7)/Cu(2)]N multilayers with fixed Co layer thickness tCo = 7 AL. (c) Polar (red curve) and longitudinal (blue curve) MOKE hysteresis loops of [Pt(2)/Co(8)/Cu(2)]4 (sample VIII). (d) Summary of the sample series. The dashed lines are guides to the eye.
and the hysteresis loops of each sample are shown in Section 1 of the Supplementary Material (SM).
FIG. 4. LTEM images of the [Pt(2)/Co(6)/Cu(2)]5 thin film with a thinned substrate. (a-c) LTEM images of the [Pt(2)/Co(6)/Cu(2)]5 sample under (a) 0 mT, (b) 100 mT, and (c) 135 mT applied magnetic field. The scale bar represents 1 µm. (d) LTEM image at 135 mT after subtracting a background image taken in the field-polarized state (160 mT). The area of this image corresponds to the white box in Figure 4c. The scale bar represents 500 nm. (e) Line-cut profile of a single skyrmion along the direction of the red line in Figure 4d.
domain structure and investigate the possible presence of skyrmions. For the LTEM measurements
ACKNOWLEDGMENTS
We acknowledge stimulating discussions with Denis Pelekhov. This work was supported by the DARPA TEE program under Grant No. D18AP00008. This research was partially supported by the Center for Emergent Materials, an NSF MRSEC, under award number DMR-2011876. Electron microscopy was performed at the Center for Electron Microscopy and Analysis (CEMAS) at The Ohio State University.
AUTHOR CONTRIBUTIONS
S.C. synthesized the materials and performed the RHEED, MOKE, and SQUID measurements.
FIG. 5. MFM images of skyrmions. (a-c) For a [Pt(2)/Co(6)/Cu(2)]5 sample, the magnetic field is ramped from -100 mT to 0 mT leading to labyrinth domains (a), ramping to 67 mT produces domains with lower density (b), and application of a current pulse generates isolated skyrmions (c). (d-g) For a [Pt(2)/Co(8)/Cu(2)]4 sample, the magnetic field is ramped from -100 mT to 70 mT (d), 90 mT (e), 100 mT (f), and 115 mT (g).
FIG. 6. Micromagnetic simulation of the [Pt(2)/Co(6)/Cu(2)]5 sample with (a) 0 mT, (b) 70 mT, (c) 100 mT, and (d) 135 mT applied field. (e) Simulated diameter of skyrmions as a function of applied field using several different DMI values. The red stars represent experimental data from LTEM measurements.
B.W. performed the TEM and LTEM measurements. C.M.S., Z.L., and S.D. performed the MFM measurements. J.B.F. and M.R. performed micromagnetic simulations. R.K.K., D.W.M., M.R., and P.C.H. conceived the study. All authors participated in data analysis and preparation of the manuscript.
FIG. S1. Polar (red) and longitudinal (blue) MOKE hysteresis loops of [Pt/Co/Cu] multilayers with strong easy-axis anisotropy.
FIG. S4. SQUID data of (a) [Pt(2)/Co(7)/Cu(2)]5 (sample VI) and (b) [Pt(2)/Co(8)/Cu(2)]4 (sample VIII), respectively. The black solid line represents 1430 kA/m, which is the saturation magnetization of bulk Co.
FIG. S5. MFM images of the [Pt(2)/Co(7)/Cu(2)]5 sample with the magnetic field ramped to (a) 70 mT, (b) 80 mT, and (c) 100 mT.
S4. DETERMINATION OF PARAMETERS FOR MICROMAGNETIC SIMULATIONS
We consider the [Pt(2)/Co(6)/Cu(2)]5 sample described in the main text. The saturation magnetization and magnetic anisotropy are determined from SQUID magnetometry, shown in Figure 2b of the main text. Taking into account the thickness of the sample and the contribution of the 5 nm Pt buffer layer, the saturation magnetization is found to be M_S = 821 kA/m.
TABLE S1. Summary of the structures and magnetic properties of the samples. This is a replica of Table 1 in the main text.

Sample ID | Co thickness tCo (AL) | Number of periods N | Anisotropy
I    | 4 | 5 | OOP
II   | 5 | 5 | OOP
III  | 6 | 5 | OOP (near SRT)
IV   | 7 | 3 | OOP
V    | 7 | 4 | OOP
VI   | 7 | 5 | OOP (near SRT)
VII  | 7 | 7 | IP
VIII | 8 | 4 | OOP (near SRT)
IX   | 8 | 5 | IP
X    | 9 | 5 | IP

Figure S3 shows the MOKE hysteresis loops of [Pt(2)/Co(tCo)/Cu(2)]N samples that host skyrmions (samples III, VI, VIII).
FIG. S2. Polar (red) and longitudinal (blue) MOKE hysteresis loops of [Pt/Co/Cu] multilayers with easy-plane anisotropy.
FIG. S3. Polar (red) and longitudinal (blue) MOKE hysteresis loops of [Pt/Co/Cu] multilayers that host skyrmions.
S2. SQUID data of the [Pt/Co/Cu] multilayers
We performed SQUID measurements on [Pt(2)/Co(7)/Cu(2)]5 (sample VI) and [Pt(2)/Co(8)/Cu(2)]4 (sample VIII), which are the two samples that host skyrmions. The out-of-plane (red) and in-plane (blue) hysteresis loops of [Pt(2)/Co(7)/Cu(2)]5 (sample VI) and [Pt(2)/Co(8)/Cu(2)]4 (sample VIII) are shown in Fig. S4.
[1] A. Fert, V. Cros, and J. Sampaio, Skyrmions on the track, Nature Nanotechnology 8, 152 (2013).
[2] W. Koshibae, Y. Kaneko, J. Iwasaki, M. Kawasaki, Y. Tokura, and N. Nagaosa, Memory functions of magnetic skyrmions, Japanese Journal of Applied Physics 54, 053001 (2015).
[3] G. Finocchio, F. Büttner, R. Tomasello, M. Carpentieri, and M. Kläui, Magnetic skyrmions: from fundamental to applications, Journal of Physics D: Applied Physics 49, 423001 (2016).
[4] C. Back, V. Cros, H. Ebert, K. Everschor-Sitte, A. Fert, M. Garst, T. Ma, S. Mankovsky, T. Monchesky, M. Mostovoy, et al., The 2020 skyrmionics roadmap, Journal of Physics D: Applied Physics 53, 363001 (2020).
[5] I. Dzyaloshinsky, A thermodynamic theory of "weak" ferromagnetism of antiferromagnetics, Journal of Physics and Chemistry of Solids 4, 241 (1958).
[6] T. Moriya, Anisotropic superexchange interaction and weak ferromagnetism, Physical Review 120, 91 (1960).
[7] R. Wiesendanger, Nanoscale magnetic skyrmions in metallic films and multilayers: a new twist for spintronics, Nature Reviews Materials 1, 1 (2016).
[8] S. Mühlbauer, B. Binz, F. Jonietz, C. Pfleiderer, A. Rosch, A. Neubauer, R. Georgii, and P. Böni, Skyrmion lattice in a chiral magnet, Science 323, 915 (2009).
[9] X. Yu, Y. Onose, N. Kanazawa, J. H. Park, J. Han, Y. Matsui, N. Nagaosa, and Y. Tokura, Real-space observation of a two-dimensional skyrmion crystal, Nature 465, 901 (2010).
[10] C. Moreau-Luchaire, C. Moutafis, N. Reyren, J. Sampaio, C. Vaz, N. Van Horne, K. Bouzehouane, K. Garcia, C. Deranlot, P. Warnicke, et al., Additive interfacial chiral interaction in multilayers for stabilization of small individual skyrmions at room temperature, Nature Nanotechnology 11, 444 (2016).
[11] S. Woo, K. Litzius, B. Krüger, M.-Y. Im, L. Caretta, K. Richter, M. Mann, A. Krone, R. M. Reeve, M. Weigand, et al., Observation of room-temperature magnetic skyrmions and their current-driven dynamics in ultrathin metallic ferromagnets, Nature Materials 15, 501 (2016).
[12] X. Yu, N. Kanazawa, Y. Onose, K. Kimoto, W. Zhang, S. Ishiwata, Y. Matsui, and Y. Tokura, Near room-temperature formation of a skyrmion crystal in thin-films of the helimagnet FeGe, Nature Materials 10, 106 (2011).
[13] B. Ding, Z. Li, G. Xu, H. Li, Z. Hou, E. Liu, X. Xi, F. Xu, Y. Yao, and W. Wang, Observation of magnetic skyrmion bubbles in a van der Waals ferromagnet Fe3GeTe2, Nano Letters 20, 868 (2019).
[14] Y. Wu, S. Zhang, J. Zhang, W. Wang, Y. L. Zhu, J. Hu, G. Yin, K. Wong, C. Fang, C. Wan, et al., Néel-type skyrmion in WTe2/Fe3GeTe2 van der Waals heterostructure, Nature Communications 11, 1 (2020).
[15] N. Romming, A. Kubetzka, C. Hanneken, K. von Bergmann, and R. Wiesendanger, Field-dependent size and shape of single magnetic skyrmions, Physical Review Letters 114, 177203 (2015).
[16] W. Jiang, G. Chen, K. Liu, J. Zang, S. G. te Velthuis, and A. Hoffmann, Skyrmions in magnetic multilayers, Physics Reports 704, 1 (2017).
[17] S. McVitie, S. Hughes, K. Fallon, S. McFadzean, D. McGrouther, M. Krajnak, W. Legrand, D. Maccariello, S. Collin, K. Garcia, et al., A transmission electron microscope study of Néel skyrmion magnetic textures in multilayer thin film systems with large interfacial chiral interaction, Scientific Reports 8, 1 (2018).
[18] S. Schlotter, P. Agrawal, and G. S. Beach, Temperature dependence of the Dzyaloshinskii-Moriya interaction in Pt/Co/Cu thin film heterostructures, Applied Physics Letters 113, 092402 (2018).
[19] L. Sun, J. Liang, X. Xiao, C. Zhou, G. Chen, Y. Huo, and Y. Wu, Magnetic stripe domains of [Pt/Co/Cu]10 multilayer near spin reorientation transition, AIP Advances 6, 056109 (2016).
[20] S. Bandiera, R. Sousa, B. Rodmacq, L. Lechevallier, and B. Dieny, Effect of a Cu spacer between Co and Pt layers on the structural and magnetic properties in (Co/Cu/Pt)5/Pt type multilayers, Journal of Physics D: Applied Physics 46, 485003 (2013).
[21] I. Benguettat-El Mokhtari, A. Mourkas, P. Ntetsika, I. Panagiotopoulos, Y. Roussigné, S. Cherif, A. Stashkevich, F. Kail, L. Chahed, and M. Belmeguenai, Interfacial Dzyaloshinskii-Moriya interaction, interface-induced damping and perpendicular magnetic anisotropy in Pt/Co/W based multilayers, Journal of Applied Physics 126, 133902 (2019).
[22] S. K. Jena, R. Islam, E. Milińska, M. M. Jakubowski, R. Minikayev, S. Lewińska, A. Lynnyk, A. Pietruczik, P. Aleszkiewicz, C. Autieri, et al., Interfacial Dzyaloshinskii-Moriya interaction in the epitaxial W/Co/Pt multilayers, Nanoscale 13, 7685 (2021).
[23] M. Belmeguenai, Y. Roussigne, S. M. Cherif, A. Stashkevich, T. Petrisor, M. Nasui, and M. Gabor, Influence of the capping layer material on the interfacial Dzyaloshinskii-Moriya interaction in Pt/Co/capping layer structures probed by Brillouin light scattering, Journal of Physics D: Applied Physics 52, 125002 (2019).
[24] F. Ajejas, Y. Sassi, W. Legrand, S. Collin, A. Thiaville, J. P. Garcia, S. Pizzini, N. Reyren, V. Cros, and A. Fert, Element-selective modulation of interfacial Dzyaloshinskii-Moriya interaction in Pt|Co|Metal based multilayers, arXiv preprint arXiv:2109.00761 (2021).
[25] M. Lonsky, M.-W. Yoo, Y.-S. Huang, J. Qian, J.-M. Zuo, and A. Hoffmann, Structural and magnetic properties of Pt/Co/Mn-based multilayers, Physical Review Materials 6, 054413 (2022).
[26] J.-C. Rojas-Sánchez, P. Laczkowski, J. Sampaio, S. Collin, K. Bouzehouane, N. Reyren, H. Jaffrès, A. Mougin, and J.-M. George, Perpendicular magnetization reversal in Pt/[Co/Ni]3/Al multilayers via the spin Hall effect of Pt, Applied Physics Letters 108, 082406 (2016).
[27] S. Karayev, P. D. Murray, D. Khadka, T. Thapaliya, K. Liu, and S. Huang, Interlayer exchange coupling in Pt/Co/Ru and Pt/Co/Ir superlattices, Physical Review Materials 3, 041401 (2019).
[28] L. Liu, X. Zhao, W. Liu, Y. Song, X. Zhao, and Z. Zhang, Influence of rare earth metal Ho on the interfacial Dzyaloshinskii-Moriya interaction and spin torque efficiency in Pt/Co/Ho multilayers, Nanoscale 12, 12444 (2020).
[29] S. Cheng, B. Wang, I. Lyalin, N. Bagués, A. J. Bishop, D. W. McComb, and R. K. Kawakami, Atomic layer epitaxy of kagome magnet Fe3Sn2 and Sn-modulated heterostructures, APL Materials 10, 061112 (2022).
[30] I. M. Billas, A. Chatelain, and W. A. de Heer, Magnetism from the atom to the bulk in iron, cobalt, and nickel clusters, Science 265, 1682 (1994).
[31] M. Bersweiler, K. Dumesnil, D. Lacour, and M. Hehn, Impact of buffer layer and Pt thickness on the interface structure and magnetic properties in (Co/Pt) multilayers, Journal of Physics: Condensed Matter 28, 336005 (2016).
[32] W. Zeper, F. Greidanus, P. Carcia, and C. Fincher, Perpendicular magnetic anisotropy and magneto-optical Kerr effect of vapor-deposited Co/Pt-layered structures, Journal of Applied Physics 65, 4971 (1989).
[33] A. Barman, S. Wang, O. Hellwig, A. Berger, E. E. Fullerton, and H. Schmidt, Ultrafast magnetization dynamics in high perpendicular anisotropy [Co/Pt]n multilayers, Journal of Applied Physics 101, 09D102 (2007).
[34] X. Wang, Y. Wei, K. He, Y. Liu, Y. Huang, Q. Liu, J. Wang, and G. Han, Effect of the repeat number and Co layer thickness on the magnetization reversal process in [Pt/Co(x)]N multilayers, Journal of Physics D: Applied Physics 53, 215001 (2020).
[35] X. Wang, A. Cao, S. Li, J. Tang, A. Du, H. Cheng, Y. Sun, H. Du, X. Zhang, and W. Zhao, Manipulating density of magnetic skyrmions via multilayer repetition and thermal annealing, Physical Review B 104, 064421 (2021).
[36] B. Wang, P.-k. Wu, N. Bagués Salguero, Q. Zheng, J. Yan, M. Randeria, and D. W. McComb, Stimulated nucleation of skyrmions in a centrosymmetric magnet, ACS Nano 15, 13495 (2021).
[37] M. J. Benitez, A. Hrabec, A. P. Mihai, T. A. Moore, G. Burnell, D. McGrouther, C. H. Marrows, and S. McVitie, Magnetic microscopy of topologically protected homochiral domain walls in an ultrathin perpendicularly magnetized Co film, arXiv preprint arXiv:1503.07668 (2015).
[38] B. Wang, N. Bagués, T. Liu, R. K. Kawakami, and D. W. McComb, Extracting weak magnetic contrast from complex background contrast in plan-view FeGe thin films, Ultramicroscopy 232, 113395 (2022).
[39] R. Juge, N. Sisodia, J. U. Larrañaga, Q. Zhang, V. T. Pham, K. G. Rana, B. Sarpi, N. Mille, S. Stanescu, R. Belkhou, et al., Skyrmions in synthetic antiferromagnets and their nucleation via electrical current and ultrafast laser illumination, Nature Communications 13, 4807 (2022).
[40] A. Vansteenkiste, J. Leliaert, M. Dvornik, M. Helsen, F. Garcia-Sanchez, and B. Van Waeyenberge, The design and verification of MuMax3, AIP Advances 4, 107133 (2014).
[41] H. Jia, B. Zimmermann, M. Hoffmann, M. Sallermann, G. Bihlmayer, and S. Blügel, Material systems for FM-/AFM-coupled skyrmions in Co/Pt-based multilayers, Physical Review Materials 4, 094407 (2020).
[42] X. Wang, H. Yuan, and X. Wang, A theory on skyrmion size, Communications Physics 1, 31 (2018).
| [] |
[
"Normalized centered moments of the Fréchet extreme-value distribution and inference of its parameter",
"Normalized centered moments of the Fréchet extreme-value distribution and inference of its parameter"
] | [
"Jean-Christophe Pain \nDAM\nCEA\nF-91297ArpajonDIFFrance\n\nLaboratoire Matière en Conditions Extrêmes\nUniversité Paris-Saclay\nCEA\n91680Bruyères-le-ChâtelFrance\n"
] | [
"DAM\nCEA\nF-91297ArpajonDIFFrance",
"Laboratoire Matière en Conditions Extrêmes\nUniversité Paris-Saclay\nCEA\n91680Bruyères-le-ChâtelFrance"
] | [] | In the present work, we provide the general expression of the normalized centered moments of the Fréchet extreme-value distribution. In order to try to represent a set of data corresponding to rare events by a Fréchet distribution, it is important to be able to determine its characteristic parameter α. Such a parameter can be deduced from the variance (proportional to the square of the Full Width at Half Maximum) of the studied distribution. However, the corresponding equation requires a numerical resolution. We propose two simple estimates of α from the knowledge of the variance, based on the Laurent series of the Gamma function. The most accurate expression involves the Apéry constant. | null | [
"https://export.arxiv.org/pdf/2303.15572v1.pdf"
] | 257,771,761 | 2303.15572 | 44ab0057a119f2b076f74952a4c2015bf0093e9a |
Normalized centered moments of the Fréchet extreme-value distribution and inference of its parameter
27 Mar 2023 March 29, 2023
Jean-Christophe Pain
DAM
CEA
F-91297ArpajonDIFFrance
Laboratoire Matière en Conditions Extrêmes
Université Paris-Saclay
CEA
91680Bruyères-le-ChâtelFrance
Normalized centered moments of the Fréchet extreme-value distribution and inference of its parameter
27 Mar 2023 March 29, 2023
In the present work, we provide the general expression of the normalized centered moments of the Fréchet extreme-value distribution. In order to try to represent a set of data corresponding to rare events by a Fréchet distribution, it is important to be able to determine its characteristic parameter α. Such a parameter can be deduced from the variance (proportional to the square of the Full Width at Half Maximum) of the studied distribution. However, the corresponding equation requires a numerical resolution. We propose two simple estimates of α from the knowledge of the variance, based on the Laurent series of the Gamma function. The most accurate expression involves the Apéry constant.
Introduction
The Fréchet law [1] is, along with the Gumbel [2] and Weibull [3] ones, a type of extreme-value distribution, used to model the distribution of the maximum value of a sample. It finds many applications in different fields such as natural calamities, horse racing, rainfall, queues in supermarkets, sea currents or wind speeds (see for instance Refs. [4,5]).
The generalized Fréchet law (probability distribution function) reads
g(x; \alpha, s, m) = \frac{\alpha}{s} \left( \frac{x-m}{s} \right)^{-1-\alpha} e^{-\left( \frac{x-m}{s} \right)^{-\alpha}},  (1)
depending on the parameters m (position of the maximum), s > 0 (scale parameter) and α, the shape parameter. The associated cumulative distribution function is e^{-((x-m)/s)^{-\alpha}} if x > m and 0 otherwise. For simplicity, in the present work we focus on the one-parameter Fréchet distribution
f(x; \alpha) = \alpha\, x^{-\alpha-1}\, e^{-x^{-\alpha}},  (2)
with α > 0, corresponding to the cumulative distribution function e^{-x^{-\alpha}}. The results presented here can be easily generalized to the form of Eq. (1). It can be useful to fit an observed (or measured) distribution of rare events by a Fréchet distribution [6,7]. To do so, we need to determine the parameter α. The expression of the normalized centered moments of the Fréchet distribution is given in section 2. In section 3, we provide an approximate determination of the parameter α from the knowledge of the variance of a distribution.
Normalized centered moments
The moments of the one-parameter Fréchet distribution are, respectively (see for instance Ref. [8]):
\mu_k = \int_0^\infty x^k f(x; \alpha)\, dx = \int_0^\infty t^{-k/\alpha} e^{-t}\, dt = \Gamma\left(1 - \frac{k}{\alpha}\right),  (3)
where Γ is the usual Gamma function and k ≥ 1. The moments µ k are defined for k < α. The centered moments are defined as
\mu_{k,c} = \int_0^\infty (x - \mu_1)^k f(x; \alpha)\, dx.  (4)
Using the binomial expansion theorem, one gets
\mu_{k,c} = \sum_{p=0}^{k} \binom{k}{p} (-\mu_1)^{k-p} \int_0^\infty x^p f(x; \alpha)\, dx,  (5)
where \binom{k}{p} = k!/[p!\,(k-p)!] is the usual binomial coefficient, i.e.,
\mu_{k,c} = \sum_{p=0}^{k} \binom{k}{p} (-1)^{k-p} \Gamma\left(1 - \frac{1}{\alpha}\right)^{k-p} \int_0^\infty x^p f(x; \alpha)\, dx.  (6)
Setting \Omega_k = \Gamma(1 - k/\alpha), with \Omega_0 = 1, the latter equation reads
\mu_{k,c} = (-1)^k \sum_{p=0}^{k} (-1)^p \binom{k}{p} \Omega_1^{k-p} \int_0^\infty x^p f(x; \alpha)\, dx = (-1)^k \sum_{p=0}^{k} (-1)^p \binom{k}{p} \Omega_1^{k-p}\, \Omega_p.  (7)
The reduced centered moments are defined (for k ≥ 2) as
\zeta_{k,c} = \frac{\mu_{k,c}}{(\mu_{2,c})^{k/2}},  (8)
and thus
\zeta_{k,c} = \frac{(-1)^k \sum_{p=0}^{k} (-1)^p \binom{k}{p} \Omega_1^{k-p}\, \Omega_p}{\left(\Omega_2 - \Omega_1^2\right)^{k/2}}.  (9)
The skewness of the Fréchet distribution (characterizing its asymmetry) is
\zeta_{3,c} = \frac{\Omega_3 - 3\,\Omega_2\,\Omega_1 + 2\,\Omega_1^3}{\left(\Omega_2 - \Omega_1^2\right)^{3/2}},  (10)
for α > 3 and +∞ otherwise. The excess kurtosis (kurtosis minus three), characterizing the sharpness of the distribution as compared to the Gaussian, reads
\zeta_{4,c} - 3 = -6 + \frac{\Omega_4 - 4\,\Omega_3\,\Omega_1 + 3\,\Omega_2^2}{\left(\Omega_2 - \Omega_1^2\right)^{2}}  (11)
for α > 4 and ∞ otherwise. In addition, for instance, the normalized centered sixth-order moment reads
\zeta_{6,c} = \frac{\Omega_6 - 6\,\Omega_5\,\Omega_1 + 15\,\Omega_4\,\Omega_1^2 - 20\,\Omega_3\,\Omega_1^3 + 15\,\Omega_2\,\Omega_1^4 - 5\,\Omega_1^6}{\left(\Omega_2 - \Omega_1^2\right)^{3}}.  (12)
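Equations (10)-(12) can be checked numerically (a sketch using SciPy; the value α = 7 is arbitrary, chosen so that all moments up to the sixth exist):

```python
import math
from scipy.integrate import quad
from scipy.special import gamma

alpha = 7.0                               # arbitrary; alpha > 6 so the 6th moment exists
Om = lambda k: gamma(1 - k / alpha)       # Omega_k = Gamma(1 - k/alpha)

pdf = lambda x: alpha * x**(-alpha - 1) * math.exp(-x**(-alpha))
mu1 = Om(1)
mom = lambda k: quad(lambda x: (x - mu1)**k * pdf(x), 0, math.inf)[0]
var = mom(2)

skew  = (Om(3) - 3*Om(2)*Om(1) + 2*Om(1)**3) / (Om(2) - Om(1)**2)**1.5
exk   = -6 + (Om(4) - 4*Om(3)*Om(1) + 3*Om(2)**2) / (Om(2) - Om(1)**2)**2
zeta6 = (Om(6) - 6*Om(5)*Om(1) + 15*Om(4)*Om(1)**2 - 20*Om(3)*Om(1)**3
         + 15*Om(2)*Om(1)**4 - 5*Om(1)**6) / (Om(2) - Om(1)**2)**3

print(skew,  mom(3) / var**1.5)      # closed form vs numerical integration
print(exk,   mom(4) / var**2 - 3)
print(zeta6, mom(6) / var**3)
```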
3 Inference of the Fréchet parameter from the knowledge of its variance
The purpose of this section is to provide an approximate determination of the parameter α of the Fréchet distribution from the knowledge of the variance.
Laurent series of the Gamma function
The Gamma function is
\Gamma(z) = \int_0^\infty e^{-t}\, t^{z-1}\, dt.  (13)
Integrating by parts, one gets
\Gamma(z) = \left[\frac{e^{-t}\, t^{z}}{z}\right]_0^\infty + \frac{1}{z}\int_0^\infty e^{-t}\, t^{z}\, dt = \frac{1}{z}\int_0^\infty e^{-t}\, t^{z}\, dt = \frac{1}{z}\int_0^\infty e^{-t}\, e^{z\ln t}\, dt.  (14)
Expanding e^{z\ln t} in Taylor series yields
\Gamma(z) = \frac{1}{z}\int_0^\infty e^{-t} \sum_{n=0}^{\infty} \frac{(z\ln t)^n}{n!}\, dt,  (15)
i.e.,
\Gamma(z) = \frac{1}{z} \sum_{n=0}^{\infty} \frac{z^n}{n!} \int_0^\infty e^{-t} (\ln t)^n\, dt,  (16)
which gives
\Gamma(z) = \frac{1}{z} + \int_0^\infty e^{-t}\ln t\, dt + \frac{z}{2}\int_0^\infty e^{-t}(\ln t)^2\, dt + \frac{z^2}{6}\int_0^\infty e^{-t}(\ln t)^3\, dt + O(z^3).  (17)
One knows that
\int_0^\infty e^{-t}\ln t\, dt = \Gamma'(1) = -\gamma.  (18)
One has \Gamma'(x) = \Gamma(x)\,\psi(x), where \psi represents the Digamma function
\psi(z+1) = -\gamma + \sum_{n=1}^{\infty} \frac{z}{n(n+z)}, \quad z \neq -1, -2, -3, \ldots  (19)
From Eq. (19), we deduce \psi(1) = -\gamma and \psi'(1) = \zeta(2) = \pi^2/6, where \zeta is the usual zeta function
\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^s} = 1 + \frac{1}{2^s} + \frac{1}{3^s} + \frac{1}{4^s} + \cdots.  (20)
The second integral in Eq. (17) is equal to
\int_0^\infty e^{-t}(\ln t)^2\, dt = \Gamma''(1) = \Gamma'(1)\,\psi(1) + \Gamma(1)\,\psi'(1) = \gamma^2 + \frac{\pi^2}{6},  (21)
and since \psi''(1) = -2\,\zeta(3), \zeta(3) being the Apéry constant, we have also
\int_0^\infty e^{-t}(\ln t)^3\, dt = \Gamma'''(1) = \Gamma''(1)\,\psi(1) + 2\,\Gamma'(1)\,\psi'(1) + \Gamma(1)\,\psi''(1) = -\gamma\left(\gamma^2 + \frac{\pi^2}{6}\right) - 2\gamma\,\frac{\pi^2}{6} - 2\,\zeta(3).  (22)
Finally, the Laurent expansion of the Gamma function up to order two reads
\Gamma(z) = \frac{1}{z} - \gamma + \frac{1}{2}\left(\gamma^2 + \frac{\pi^2}{6}\right) z - \frac{1}{6}\left(\gamma^3 + \frac{\gamma\pi^2}{2} + 2\,\zeta(3)\right) z^2 + O(z^3).  (23)
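Expansion (23) is easy to verify numerically (a minimal sketch; the truncation error should shrink as O(z^3)):

```python
import math
from scipy.special import gamma, zeta

g  = 0.5772156649015329                       # Euler-Mascheroni constant
c1 = 0.5 * (g**2 + math.pi**2 / 6)
c2 = -(g**3 + g * math.pi**2 / 2 + 2 * zeta(3)) / 6

for z in (0.1, 0.05, 0.01):
    series = 1 / z - g + c1 * z + c2 * z**2   # truncated expansion (23)
    print(z, gamma(z), series)                # agreement improves as z -> 0
```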
Simple approximate formula for the parameter of the Fréchet distribution
Truncating the Laurent series above (Eq. (23)) at order z:
\Gamma(z) = \frac{1}{z} - \gamma + \frac{1}{2}\left(\gamma^2 + \frac{\pi^2}{6}\right) z + O(z^2)  (24)
gives, using Γ(z + 1) = z Γ(z):
\Gamma(z+1) = 1 - \gamma z + \frac{1}{2}\left(\gamma^2 + \frac{\pi^2}{6}\right) z^2 + O(z^3).  (25)
Under that assumption, the variance becomes
V = \Omega_2 - \Omega_1^2 \approx 1 + \frac{2\gamma}{\alpha} + \frac{1}{2}\left(\gamma^2 + \frac{\pi^2}{6}\right)\frac{4}{\alpha^2} - \left[1 + \frac{\gamma}{\alpha} + \frac{1}{2}\left(\gamma^2 + \frac{\pi^2}{6}\right)\frac{1}{\alpha^2}\right]^2,  (26)
i.e.,
V \approx \frac{\pi^2}{6\,\alpha^2}.  (27)
The parameter α is thus simply
\alpha \approx \frac{\pi}{\sqrt{6V}}.  (28)
The accuracy of expression (28) can be appreciated from table 1.
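The accuracy of Eq. (28) can also be reproduced directly (a sketch; the test values of α are arbitrary):

```python
import math
from scipy.special import gamma

for alpha_true in (3.0, 5.0, 10.0):
    V = gamma(1 - 2 / alpha_true) - gamma(1 - 1 / alpha_true)**2  # exact variance
    alpha_est = math.pi / math.sqrt(6 * V)                        # Eq. (28)
    print(alpha_true, round(alpha_est, 3))
```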
3.3 Improving the estimate using the Laurent series of Γ(z) up to z^2
Using the Laurent series (23), one has
\Gamma(z+1) = 1 - \gamma z + \frac{1}{2}\left(\gamma^2 + \frac{\pi^2}{6}\right) z^2 - \frac{1}{6}\left(\gamma^3 + \frac{\gamma\pi^2}{2} + 2\,\zeta(3)\right) z^3 + O(z^4),  (29)
and finally
\frac{\pi^2}{6}\,\frac{1}{\alpha^2} + \frac{\gamma\pi^2 + 6\,\zeta(3)}{3}\,\frac{1}{\alpha^3} \approx V.  (30)
The latter equation is a third-order polynomial equation in the variable 1/α; it can be solved analytically using the Tartaglia-Cardano formulas [9]. Table 2 shows some estimates of α given by the analytical solution of Eq. (30), compared to the exact values.
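Equation (30) can also be solved numerically rather than via the Tartaglia-Cardano formulas (a sketch; the cubic in x = 1/α has a single positive real root since both coefficients are positive):

```python
import math
import numpy as np
from scipy.special import gamma, zeta

g  = 0.5772156649015329
a2 = math.pi**2 / 6                          # coefficient of (1/alpha)^2 in Eq. (30)
a3 = (g * math.pi**2 + 6 * zeta(3)) / 3      # coefficient of (1/alpha)^3

for alpha_true in (3.0, 5.0, 10.0):
    V = gamma(1 - 2 / alpha_true) - gamma(1 - 1 / alpha_true)**2
    # Solve a3*x^3 + a2*x^2 - V = 0 for x = 1/alpha, keep the positive real root
    roots = np.roots([a3, a2, 0.0, -V])
    x = next(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    # Refined estimate vs the simple estimate of Eq. (28)
    print(alpha_true, round(1 / x, 3), round(math.pi / math.sqrt(6 * V), 3))
```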
Conclusion
We provided the general expressions of the normalized centered moments of the Fréchet extreme-value distribution. Representing a set of data corresponding to rare events by a Fréchet distribution requires determining its characteristic parameter α. The latter can be deduced from the variance of the studied distribution. However, the corresponding equation needs to be solved numerically. We propose two simple estimates of α, based on the Laurent series of the Gamma function, up to monomials proportional to z and z^2, respectively.
[1] M. Fréchet, Sur la loi de probabilité de l'écart maximum, Ann. Soc. Polon. Math. 6, 3 (1927) [in French].
[2] E. J. Gumbel, Statistics of Extremes (Columbia University Press, New York, 1958).
[3] A. Papoulis and S. U. Pillai, Probability, Random Variables, and Stochastic Processes (McGraw-Hill, Boston, 2002).
[4] S. Coles, An Introduction to Statistical Modeling of Extreme Values (Springer-Verlag, 2001).
[5] S. Kotz and S. Nadarajah, Extreme Value Distributions: Theory and Applications (World Scientific, 2000).
[6] P. L. Ramos, F. Louzada, E. Ramos, and S. Dey, The Fréchet distribution: estimation and application - An overview, J. Stat. Manag. Syst. 23, 549-578 (2019).
[7] J.-C. Pain, Kullback-Leibler divergence for two extreme-value Fréchet distributions, arXiv:2303.13153 (2023).
[8] G. Muraleedharan, C. Guedes Soares, and C. Lucas, Characteristic and moment generating functions of generalized extreme value distribution (GEV), chapter 13 in: Sea Level Rise, Coastal Engineering, Shorelines and Tides, edited by Linda L. Wright (Nova Science Publishers, Inc., 2009).
[9] J. P. McKelvey, A note on the roots of a cubic, Am. J. Phys. 52, 269-270 (1984).
| [] |
[
"On the low-energy behavior of the Adler function",
"On the low-energy behavior of the Adler function"
] | [
"A V Nesterenko \nJoint Institute for Nuclear Research\nJoliot Curie 6141980BLTPh, DubnaRussian Federation\n"
] | [
"Joint Institute for Nuclear Research\nJoliot Curie 6141980BLTPh, DubnaRussian Federation"
] | [] | The infrared behavior of the Adler function is examined by making use of a recently derived integral representation for the latter. The obtained result for the Adler function agrees with its experimental prediction in the entire energy range. The inclusive τ lepton decay is studied in the framework of the developed approach. | 10.1016/j.nuclphysbps.2008.12.048 | [
"https://arxiv.org/pdf/0808.2043v1.pdf"
] | 17,935,564 | 0808.2043 | f3e185fc4e70d200c2756e156683ba59f96650ca |
On the low-energy behavior of the Adler function
14 Aug 2008
A V Nesterenko
Joint Institute for Nuclear Research
Joliot Curie 6141980BLTPh, DubnaRussian Federation
On the low-energy behavior of the Adler function
14 Aug 2008
The infrared behavior of the Adler function is examined by making use of a recently derived integral representation for the latter. The obtained result for the Adler function agrees with its experimental prediction in the entire energy range. The inclusive τ lepton decay is studied in the framework of the developed approach.
INTRODUCTION
The Adler function [1] plays a key role in particle physics. Specifically, the theoretical description of some strong interaction processes (e.g., electron-positron annihilation into hadrons [2] and inclusive τ lepton decay [3,4]) is inherently based on this function. Besides, the Adler function is essential for confronting the precise experimental measurements of some electroweak observables (e.g., the muon anomalous magnetic moment [5] and the shift of the electromagnetic fine structure constant [6]) with their theoretical predictions. In turn, the latter represents a decisive test of the Standard Model and imposes strict restrictions on possible "new physics" beyond it.
Furthermore, the Adler function plays a crucial role in the congruous analysis of spacelike and timelike experimental data. Indeed, since perturbation theory and the renormalization group method are not applicable directly to the study of observables depending on the timelike kinematic variable, for the self-consistent description of the latter one has to relate the timelike experimental data with the spacelike perturbative results. Here, the required link between the experimentally measurable R-ratio of electron-positron annihilation into hadrons and the theoretically computable Adler function D(Q^2) is represented by the dispersion relation [1]
D(Q^2) = Q^2 \int_{4m_\pi^2}^{\infty} \frac{R(s)}{(s + Q^2)^2}\, ds,  (1)
where m_π ≃ 135 MeV [7] stands for the mass of the lightest hadron state. The dispersion relation (1) is also commonly employed for extracting the Adler function from the relevant experimental data. For this purpose, in the integrand (1), R(s) is usually parameterized by its experimental measurements at low and intermediate energies and by its theoretical prediction at high energies. The ultraviolet behavior of the Adler function can be approximated by the power series in the strong running coupling within perturbation theory (see paper [8] and references therein)
* E-mail: [email protected]
D_{\mathrm{pert}}^{(\ell)}(Q^2) = 1 + \sum_{j=1}^{\ell} d_j \left[\alpha_s^{(\ell)}(Q^2)\right]^{j}.  (2)
The overall factor N_c \sum_f Q_f^2 is omitted throughout, where N_c = 3 is the number of colors and Q_f denotes the charge of the quark of the f-th flavor. In Eq. (2), \alpha_s^{(\ell)}(Q^2) is the \ell-loop perturbative QCD invariant charge; at one loop, \alpha_s^{(1)}(Q^2) = 4\pi/(\beta_0 \ln z), with z = Q^2/\Lambda^2, \beta_0 = 11 - 2n_f/3, n_f the number of active quarks, and d_1 = 1/\pi. However, the perturbative expansion (2) is invalid at low energies and it is inconsistent with the dispersion relation for the Adler function (1) due to unphysical singularities of the strong running coupling \alpha_s(Q^2) in the infrared domain. The latter also causes certain difficulties in processing the low-energy experimental data.
NOVEL INTEGRAL REPRESENTATION FOR THE ADLER FUNCTION
In general, there is a variety of nonperturbative approaches to handle the strong interaction processes at low energies. In this work we will focus on the approach which engages dispersion relations. Indeed, the latter provide an important source of nonperturbative information about the hadron dynamics in the infrared domain, which should certainly be taken into account when one is trying to go beyond the scope of perturbation theory.
In particular, dispersion relation (1) imposes stringent physical nonperturbative constraints on the Adler function. Specifically, since R(s), being the ratio of two cross-sections, assumes finite values and tends to a constant in the ultraviolet asymptotic s → ∞, the Adler function D(Q^2) vanishes^1 in the infrared limit Q^2 = 0. In addition, dispersion relation (1) implies that the Adler function possesses the only cut Q^2 ≤ -4m_π^2 along the negative semi-axis of real Q^2. These nonperturbative constraints on the Adler function have been merged with its perturbative approximation in Refs. [9,10] (see also the discussion of this issue in Ref. [11]). Eventually, this results in the following integral representations for the Adler function and R-ratio:
D(Q^2) = \frac{Q^2}{Q^2 + 4m_\pi^2} \left[ 1 + \int_{4m_\pi^2}^{\infty} \rho(\sigma)\, \frac{\sigma - 4m_\pi^2}{\sigma + Q^2}\, \frac{d\sigma}{\sigma} \right],  (3)
R(s) = \theta(s - 4m_\pi^2) \left[ 1 + \int_{s}^{\infty} \rho(\sigma)\, \frac{d\sigma}{\sigma} \right],  (4)
where θ(x) is the unit step function: θ(x) = 1 if x ≥ 0 and θ(x) = 0 otherwise. The developed approach [9] eliminates such intrinsic difficulties of perturbation theory as the infrared unphysical singularities of the outcoming results. Besides, additional parameters are not introduced into the theory. Furthermore, Eq. (4) by construction accounts for the effects due to the analytic continuation of spacelike theoretical results into the timelike domain, such as the resummation of the so-called π^2-terms. It is worth noting also that the mass of the lightest hadron state affects both the parton model prediction and the strong correction of the quantities in hand (3), (4).
In the limit of the massless pion, m_π = 0, expressions (3) and (4) become identical to those of the Analytic Perturbation Theory (APT), see papers [12,13] and references therein. However, it is crucial to keep the pion mass nonvanishing, since it can be safely neglected only when one handles the strong interaction processes at high energies. It is worth mentioning that there is a number of other similar approaches^2 which also combine perturbative results with relevant dispersion relations, see, e.g., Refs. [15,16,17,18,19]. The spectral density ρ(σ), which appears in Eqs. (3) and (4), can be determined either as the discontinuity of the explicit "exact" theoretical expression for the Adler function D_exact(Q^2) across the physical cut or as the numerical derivative of the experimental data on the R-ratio [9]:
^1 This constraint holds for m_π = 0 only.
\rho(\sigma) = \frac{1}{\pi}\, \mathrm{Im}\, D_{\mathrm{exact}}(-\sigma + i0^{+}) = -\frac{d\, R_{\mathrm{exp}}(\sigma)}{d \ln \sigma}.  (5)
However, there is still no explicit "exact" expression for the Adler function, and, therefore, there is no unique way to compute the corresponding spectral density (5) by making use of its approximate perturbative expression (2). In what follows we will employ the spectral function obtained in Ref. [20]^3, which has the following form at the one-loop level [20,22]:
\rho^{(1)}(\sigma) = \left(1 + \frac{\Lambda^2}{\sigma}\right) \frac{1}{\ln^2(\sigma/\Lambda^2) + \pi^2}.  (6)
The Adler function (3), corresponding to the spectral function (6), is presented in Fig. 1 by the solid curve. The dot-dashed curve stands for the one-loop perturbative approximation (2) of the Adler function, whereas its experimental prediction, computed in the way described above, is denoted by the shaded band. As one may infer from Fig. 1, the obtained result for the Adler function is in reasonable agreement with its experimental prediction in the entire energy range.
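The solid curve of Fig. 1 can be reproduced numerically from Eqs. (3) and (6) (a sketch; the normalization d_1·4π/β_0 applied to the spectral integral is our reading of how the coupling spectral density (6) enters the strong correction, consistent with the structure of Eq. (11) below, so absolute values should be taken as indicative):

```python
import math
from scipy.integrate import quad

Lam2 = 0.441**2         # Lambda = 441 MeV (n_f = 3), in GeV^2
mpi2 = 0.135**2         # m_pi = 135 MeV, in GeV^2
beta0 = 11 - 2 * 3 / 3  # one-loop beta-function coefficient for n_f = 3
d1 = 1 / math.pi

def rho1(s):
    """One-loop spectral function of Eq. (6)."""
    return (1 + Lam2 / s) / (math.log(s / Lam2)**2 + math.pi**2)

def adler(Q2):
    """Adler function of Eq. (3), with the strong correction normalized by
    d1 * 4*pi/beta0 (an assumption; cf. the structure of Eq. (11))."""
    integral = quad(lambda s: rho1(s) * (s - 4 * mpi2) / ((s + Q2) * s),
                    4 * mpi2, math.inf)[0]
    return Q2 / (Q2 + 4 * mpi2) * (1 + d1 * 4 * math.pi / beta0 * integral)

for Q in (0.3, 1.0, 3.0):   # GeV
    print(f"Q = {Q} GeV: D = {adler(Q**2):.3f}")
```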
INCLUSIVE τ LEPTON DECAY
It is also of particular interest to study the inclusive τ lepton decay within the approach in hand, since this process probes the infrared hadron dynamics at energies below the mass of the τ lepton, and the relevant experimental data are fairly precise. The measurable quantity here is the inclusive semileptonic branching ratio
R_\tau = \frac{\Gamma(\tau^- \to \mathrm{hadrons}^-\, \nu_\tau)}{\Gamma(\tau^- \to e^- \bar{\nu}_e \nu_\tau)},  (7)
which can be split into three parts, namely, R_τ = R_{τ,V} + R_{τ,A} + R_{τ,S}. The terms R_{τ,V} and R_{τ,A} account for the contributions to Eq. (7) of the decay modes with the light quarks only, and they correspond to the vector (V) and axial-vector (A) quark currents, respectively. The last term R_{τ,S} accounts for the contribution to Eq. (7) of the decay modes with the s quark. Let us proceed with the nonstrange part of the ratio (7) associated with the vector quark currents
R_{\tau,V} = \frac{N_c}{2}\, |V_{ud}|^2\, S_{\mathrm{EW}} \left(\Delta_{\mathrm{QCD}} + \delta'_{\mathrm{EW}}\right),  (8)
see papers [3,4] and references therein for a detailed discussion of this issue. The experimental measurement [23] of the ratio (8) yields R_{τ,V} = 1.764 ± 0.016. In Eq. (8), |V_{ud}| = 0.97418 ± 0.00027 denotes the Cabibbo-Kobayashi-Maskawa matrix element [7], S_EW = 1.0194 ± 0.0050 and δ'_EW = 0.0010 are the electroweak corrections [3,24], and Δ_QCD can be expressed in terms of a weighted integral of the aforementioned R(s)-ratio:
\Delta_{\mathrm{QCD}} = 2 \int_0^{M_\tau^2} \left(1 - \frac{s}{M_\tau^2}\right)^2 \left(1 + \frac{2s}{M_\tau^2}\right) R(s)\, \frac{ds}{M_\tau^2},  (9)
where M_τ ≃ 1.777 GeV [7] is the τ lepton mass.
In the framework of the perturbative approach one usually reduces Eq. (9) to a contour integral in the complex s-plane along the circle of radius equal to the squared mass of the τ lepton. At the one-loop level this eventually leads to [3]
\Delta_{\mathrm{QCD}} = 1 + d_1\, \alpha_s^{(1)}(M_\tau^2),  (10)
that, in turn, results in Λ = (678 ± 55) MeV for n f = 2 active quarks. At the same time, for the evaluation of ∆ QCD in the framework of the approach in hand, the integration in Eq. (9) can be performed in a straightforward way. Ultimately this leads to the following result at the one-loop level [10]:
\Delta_{\mathrm{QCD}} = 1 - \delta_\Gamma + d_1\, \alpha_{\mathrm{TL}}^{(1)}(M_\tau^2) - d_1\, \delta_\Gamma\, \alpha_{\mathrm{TL}}^{(1)}(m_\Gamma^2) + d_1\, \frac{4\pi}{\beta_0} \int_\chi^1 f(\xi)\, \rho^{(1)}(\xi M_\tau^2)\, d\xi,  (11)
where f(\xi) = \xi^3 - 2\xi^2 + 2, \chi = m_\Gamma^2/M_\tau^2, \delta_\Gamma = \chi f(\chi), and
\alpha_{\mathrm{TL}}^{(1)}(s) = \frac{4\pi}{\beta_0}\, \theta(s - m_\Gamma^2) \int_s^\infty \rho^{(1)}(\sigma)\, \frac{d\sigma}{\sigma}  (12)
is the one-loop timelike effective coupling [9].
Here m_Γ stands for the total mass of the lightest allowed hadronic decay mode of the τ lepton, e.g., for the vector channel m_Γ = m_{π^0} + m_{π^-}. In this case δ_Γ ≃ 0.048 considerably exceeds the electroweak correction δ'_EW. Eventually, Eq. (11) results in Λ = (941 ± 86) MeV for n_f = 2 active quarks, that is somewhat larger than the one-loop perturbative estimation quoted above.
The effects due to the nonvanishing hadronic mass m Γ play a substantial role herein. In particular, in the massless limit m Γ = 0 Eq. (11) leads to Λ = (493 ± 56) MeV for n f = 2 active quarks.
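The quoted value δ_Γ ≃ 0.048 follows directly from the definitions above (a sketch; modern PDG masses are used, slightly more precise than the rounded values in the text):

```python
m_pi0 = 0.1349768      # GeV
m_pim = 0.13957039     # GeV
M_tau = 1.77686        # GeV

f = lambda x: x**3 - 2 * x**2 + 2
chi = ((m_pi0 + m_pim) / M_tau)**2   # chi = m_Gamma^2 / M_tau^2
print(f"chi = {chi:.4f}, delta_Gamma = {chi * f(chi):.4f}")   # ~0.0477, i.e. ~0.048
```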
SUMMARY
The infrared behavior of the Adler function is studied by making use of recently derived integral representation for the latter. The developed approach possesses a number of appealing features. Namely, it eliminates unphysical perturbative singularities, properly accounts for the effects due to the analytic continuation of spacelike theoretical results into timelike domain, and embodies the effects due to the mass of the lightest hadron state. Besides, additional adjustable parameters are not introduced into the theory. Furthermore, the developed approach provides a reasonable description of the Adler function in the entire energy range. It is also shown that the effects due to the nonvanishing mass of the lightest hadron state play a substantial role in processing the experimental data on the inclusive τ lepton decay.
Figure 1. Adler function (3) corresponding to the spectral density (6) (solid curve) (Λ = 441 MeV, n_f = 3). The perturbative approximation (2) of the Adler function and its experimental prediction are denoted by the dot-dashed curve and the shaded band, respectively.
^2 The Adler function was studied in the framework of APT supplemented with the relativistic quark mass threshold resummation in Ref. [14].
^3 It is interesting to note that the QCD effective coupling obtained in Ref. [20] has been independently rediscovered in Ref. [21] proceeding from entirely different reasoning.
Acknowledgements
This work was partially performed during the visit of the author to the University of Milano. The author is thankful to Professor Giovanni Prosperi for his kind hospitality and fruitful discussions. The author is grateful to N. Brambilla, S. Forte, and A. Vairo for the interest in this study. Partial financial support of grants RFBR-08-01-00686, BRFBR-JINR-F08D-001, and NS-1027.2008.2 is acknowledged.
[1] S. L. Adler, Phys. Rev. D 10, 3714 (1974).
[2] R. P. Feynman, Photon-Hadron Interactions (Benjamin, Massachusetts, 1972).
[3] E. Braaten, S. Narison, and A. Pich, Nucl. Phys. B 373, 581 (1992).
[4] M. Davier, S. Descotes-Genon, A. Hocker, B. Malaescu, and Z. Zhang, arXiv:0803.0979 [hep-ph].
[5] M. Davier and W. J. Marciano, Ann. Rev. Nucl. Part. Sci. 54, 115 (2004); M. Passera, J. Phys. G 31, R75 (2005); J. P. Miller, E. de Rafael, and B. L. Roberts, Rept. Prog. Phys. 70, 795 (2007).
[6] F. Jegerlehner, arXiv:0807.4206 [hep-ph].
[7] C. Amsler et al. [Particle Data Group], Phys. Lett. B 667, 1 (2008).
[8] P. A. Baikov, K. G. Chetyrkin, and J. H. Kuhn, Phys. Rev. Lett. 101, 012002 (2008).
[9] A. V. Nesterenko and J. Papavassiliou, J. Phys. G 32, 1025 (2006); A. V. Nesterenko, arXiv:0710.5878 [hep-ph].
[10] A. V. Nesterenko, in preparation.
[11] M. Baldicchi, A. V. Nesterenko, G. M. Prosperi, D. V. Shirkov, and C. Simolo, Phys. Rev. Lett. 99, 242001 (2007); M. Baldicchi, A. V. Nesterenko, G. M. Prosperi, and C. Simolo, Phys. Rev. D 77, 034013 (2008).
[12] D. V. Shirkov and I. L. Solovtsov, Phys. Rev. Lett. 79, 1209 (1997); Phys. Lett. B 442, 344 (1998); Theor. Math. Phys. 150, 132 (2007).
[13] D. V. Shirkov, Eur. Phys. J. C 22, 331 (2001); Nucl. Phys. B (Proc. Suppl.) 152, 51 (2006); arXiv:0807.1404 [hep-ph].
[14] K. A. Milton, I. L. Solovtsov, and O. P. Solovtsova, Phys. Rev. D 65, 076009 (2002); Mod. Phys. Lett. A 21, 1355 (2006).
[15] A. P. Bakulev, S. V. Mikhailov, and N. G. Stefanis, Phys. Rev. D 75, 056005 (2007); 77, 079901(E) (2008); Phys. Rev. D 72, 074014 (2005); 72, 119908(E) (2005).
[16] A. I. Alekseev and B. A. Arbuzov, Mod. Phys. Lett. A 20, 103 (2005); 13, 1747 (1998).
[17] D. M. Howe and C. J. Maxwell, Phys. Rev. D 70, 014002 (2004); Phys. Lett. B 541, 129 (2002).
[18] I. Caprini and J. Fischer, Phys. Rev. D 71, 094017 (2005); 60, 054014 (1999); Eur. Phys. J. C 24, 127 (2002).
[19] G. Cvetic and C. Valenzuela, J. Phys. G 32, L27 (2006); arXiv:0804.0872 [hep-ph].
[20] A. V. Nesterenko, Phys. Rev. D 62, 094028 (2000); 64, 116009 (2001).
[21] F. Schrempp, J. Phys. G 28, 915 (2002); D. Klammer and F. Schrempp, JHEP 0806, 098 (2008).
[22] A. V. Nesterenko, Nucl. Phys. B (Proc. Suppl.) 133, 59 (2004); Int. J. Mod. Phys. A 18, 5475 (2003).
[23] K. Ackerstaff et al. (OPAL Collaboration), Eur. Phys. J. C 7, 571 (1999).
[24] W. J. Marciano and A. Sirlin, Phys. Rev. Lett. 61, 1815 (1988); E. Braaten and C. S. Li, Phys. Rev. D 42, 3888 (1990).
| [] |
[
"Graph Square Roots of Small Distance from Degree One Graphs *",
"Graph Square Roots of Small Distance from Degree One Graphs *"
] | [
"Petr A Golovach [email protected] \nDepartment of Informatics\nUniversity of Bergen\nNorway\n",
"Paloma T Lima [email protected] \nDepartment of Informatics\nUniversity of Bergen\nNorway\n",
"Charis Papadopoulos [email protected] \nDepartment of Mathematics\nUniversity of Ioannina\nGreece\n"
] | [
"Department of Informatics\nUniversity of Bergen\nNorway",
"Department of Informatics\nUniversity of Bergen\nNorway",
"Department of Mathematics\nUniversity of Ioannina\nGreece"
] | [] | Given a graph class H, the task of the H-Square Root problem is to decide whether an input graph G has a square root H from H. We are interested in the parameterized complexity of the problem for classes H that are composed of the graphs at vertex deletion distance at most k from graphs of maximum degree at most one, that is, we are looking for a square root H such that there is a modulator S of size k such that H − S is the disjoint union of isolated vertices and disjoint edges. We show that different variants of the problems with constraints on the number of isolated vertices and edges in H − S are FPT when parameterized by k by demonstrating algorithms with running time 2^{2^{O(k)}} · n^{O(1)}. We further show that the running time of our algorithms is asymptotically optimal and that it is unlikely that the double-exponential dependence on k can be avoided. In particular, we prove that the VC-k Root problem, which asks whether an input graph has a square root with vertex cover of size at most k, cannot be solved in time 2^{2^{o(k)}} · n^{O(1)} unless the Exponential Time Hypothesis fails. Moreover, we point out that VC-k Root parameterized by k does not admit a subexponential kernel unless P = NP. * A preliminary version of the paper has been accepted for LATIN 2020. The paper received support from the Research Council of Norway via the projects "CLASSIS" and "MULTIVAL". | 10.1007/s00224-022-10079-8 | [
"https://arxiv.org/pdf/2010.05733v1.pdf"
] | 222,290,971 | 2010.05733 | bbcd5556f6e5a18f9e3300298e65c82f31f02abd |
Graph Square Roots of Small Distance from Degree One Graphs *
Petr A Golovach [email protected]
Department of Informatics
University of Bergen
Norway
Paloma T Lima [email protected]
Department of Informatics
University of Bergen
Norway
Charis Papadopoulos [email protected]
Department of Mathematics
University of Ioannina
Greece
Graph Square Roots of Small Distance from Degree One Graphs *
Given a graph class H, the task of the H-Square Root problem is to decide whether an input graph G has a square root H from H. We are interested in the parameterized complexity of the problem for classes H that are composed of the graphs at vertex deletion distance at most k from graphs of maximum degree at most one, that is, we are looking for a square root H such that there is a modulator S of size k such that H − S is the disjoint union of isolated vertices and disjoint edges. We show that different variants of the problems with constraints on the number of isolated vertices and edges in H − S are FPT when parameterized by k by demonstrating algorithms with running time 2^{2^{O(k)}} · n^{O(1)}. We further show that the running time of our algorithms is asymptotically optimal and that it is unlikely that the double-exponential dependence on k can be avoided. In particular, we prove that the VC-k Root problem, which asks whether an input graph has a square root with vertex cover of size at most k, cannot be solved in time 2^{2^{o(k)}} · n^{O(1)} unless the Exponential Time Hypothesis fails. Moreover, we point out that VC-k Root parameterized by k does not admit a subexponential kernel unless P = NP. * A preliminary version of the paper has been accepted for LATIN 2020. The paper received support from the Research Council of Norway via the projects "CLASSIS" and "MULTIVAL".
Introduction
Squares of graphs and square roots constitute widely studied concepts in graph theory, both from a structural perspective as well as from an algorithmic point of view. A graph G is the square of a graph H if G can be obtained from H by the addition of an edge between any two vertices of H that are at distance two. In this case, the graph H is called a square root of G. It is interesting to notice that there are graphs that admit different square roots, graphs that have a unique square root and graphs that do not have a square root at all. In 1994, Motwani and Sudan [26] proved that the problem of determining if a given graph G has a square root is NP-complete. This problem is known as the Square Root problem.
The intractability of Square Root has been attacked in two different ways. The first one is by imposing some restrictions on the input graph G. In this vein, the Square Root problem has been studied in the setting in which G belongs to a specific class of graphs [4,12,11,20,25,24,27].
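To make the square operation concrete, here is a small illustration with NetworkX (a sketch; nx.power(H, 2) joins every pair of vertices at distance at most two in H):

```python
import networkx as nx

H = nx.path_graph(5)        # square root candidate: the path 0-1-2-3-4
G = nx.power(H, 2)          # G = H^2: join every pair at distance <= 2 in H
print(sorted(G.edges()))
# [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]

def is_square_root(H, G):
    """Check H^2 = G by comparing vertex and edge sets."""
    H2 = nx.power(H, 2)
    return (set(H2.nodes) == set(G.nodes)
            and set(map(frozenset, H2.edges)) == set(map(frozenset, G.edges)))

print(is_square_root(H, G))  # True
```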
Another way of coping with the hardness of the Square Root problem is by imposing some additional structure on the square root H. That is, given the input graph G, the task is to determine whether G has a square root H that belongs to a specific graph class H. This setting is known as the H-Square Root problem and it is the focus of this work. The H-Square Root problem has been shown to be polynomial-time solvable for specific graph classes H [17,20,21,18,19]. To name a few among others, the problem is solved in polynomial time when H is the class of trees [23], bipartite graphs [16], cactus graphs [12], and, more recently, when H is the class of cactus block graphs [6], outerplanar graphs [10], and graphs of pathwidth at most 2 [10]. It is interesting to notice that the fact that H-Square Root can be efficiently (say, polynomially) solved for some class H does not automatically imply that H′-Square Root is efficiently solvable for every subclass H′ of H. On the negative side, H-Square Root remains NP-complete on graphs of girth at least 5 [7], graphs of girth at least 4 [8], split graphs [17], and chordal graphs [17]. The fact that all known NP-hardness constructions involve dense graphs [7,8,17,26] and dense square roots raised the question of whether H-Square Root is polynomial-time solvable for every sparse graph class H.
We consider this question from the Parameterized Complexity viewpoint for structural parameterizations of H (we refer to the book of Cygan et al. [5] for an introduction to the field). More precisely, we are interested in graph classes H that are at small distance from a (sparse) graph class for which H-Square Root can be solved in polynomial time. Within this scope, the distance is usually measured either by the number of edge deletions, edge additions or vertex deletions. This approach for the problem was first applied by Cochefert et al. in [3], who considered H-Square Root, where H is the class of graphs that have a feedback edge set of size at most k, that is, for graphs that can be made forests by at most k edge deletions. They proved that H-Square Root admits a compression to a special variant of the problem with O(k^2) vertices, implying that the problem can be solved in 2^{O(k^4)} + O(n^4 m) time, i.e., is fixed-parameter tractable (FPT) when parameterized by k. Herein, we study whether the same complexity behavior occurs if we measure the distance by the number of vertex deletions instead of edge deletions.
Towards such an approach, the most natural consideration for H-Square Root is to ask for a square root with a feedback vertex set of size at most k. The approach used by Cochefert et al. [3] fails if H is the class of graphs that can be made forests by at most k vertex deletions, and the question of the parameterized complexity of our problem for this case is open. In this context, we consider herein the H-Square Root problem when H is the class of graphs of bounded vertex deletion distance to a disjoint union of isolated vertices and edges. Our main result is that the problem is FPT when parameterized by the vertex deletion distance. Surprisingly, however, we conclude a notable difference on the running time compared to the edge deletion case even on such a relaxed variation: a double-exponential dependency on the vertex deletion distance is highly unavoidable. Therefore, despite the fact that both problems are FPT, the vertex deletion distance parameterization for the H-Square Root problem requires substantial effort. More formally, we are interested in the following problem.
Distance-k-to-(pK_1 + qK_2) Square Root
Input: A graph G and nonnegative integers p, q, k such that p + 2q + k = |V(G)|.
Task: Decide whether there is a square root H of G such that H − S is isomorphic to pK_1 + qK_2 for a set S on k vertices.
Note that when q = 0, the problem asks whether G has a square root with a vertex cover of size (at most) k, and we refer to the problem as VC-k Root. If p = 0, we obtain Distance-k-to-Matching Square Root. Observe also that, given an algorithm solving Distance-k-to-(pK_1 + qK_2) Square Root, by testing all possible values of p and q such that p + 2q = |V(G)| − k, we can solve the Distance-k-to-Degree-One Square Root problem, whose task is to decide whether there is a square root H such that the maximum degree of H − S is at most one for a set S on k vertices. Note that a set of vertices X inducing a graph of maximum degree one is known as a dissociation set, and the maximum size of a dissociation set is called the dissociation number (see, e.g., [29]). Thus, the task of Distance-k-to-Degree-One Square Root is to find a square root H with dissociation number at least |V(G)| − k.
We show that Distance-k-to-(pK_1 + qK_2) Square Root can be solved in 2^{2^{O(k)}} · n^{O(1)} time, that is, the problem is FPT when parameterized by k, the size of the deletion set. We complement this result by showing that the running time of our algorithm is asymptotically optimal in the sense that VC-k Root, i.e., the special case of Distance-k-to-(pK_1 + qK_2) Square Root with q = 0, cannot be solved in 2^{2^{o(k)}} · n^{O(1)} time unless the Exponential Time Hypothesis (ETH) of Impagliazzo, Paturi and Zane [13,14] fails (see also [5] for an introduction to algorithmic lower bounds based on ETH). We also prove that VC-k Root does not admit a kernel of size subexponential in k unless P = NP.
Motivated by the above results, we further investigate the complexity of the H-Square Root problem when H is the class of graphs of bounded deletion distance to a specific graph class. We show that the problem of testing whether a given graph has a square root of bounded deletion distance to a clique is also FPT parameterized by the size of the deletion set.
2 Preliminaries
Graphs. All graphs considered here are finite undirected graphs without loops and multiple edges. We refer to the textbook by Bondy and Murty [1] for any undefined graph terminology. We denote the vertex set of G by V (G) and the edge set by E(G). We use n to denote the number of vertices of a graph and use m for the number of edges (if this does not create confusion).
Given x ∈ V(G), we denote by N_G(x) the neighborhood of x. The closed neighborhood of x, denoted by N_G[x], is defined as N_G(x) ∪ {x}. For a set X ⊆ V(G), N_G(X) denotes the set of vertices in V(G) \ X that have at least one neighbor in X. Analogously, N_G[X] = N_G(X) ∪ X.
The distance between a pair of vertices u, v ∈ V(G) is the number of edges of a shortest path between them in G. We denote by N^2_G(u) the set of vertices of G that are at distance exactly two from u, and N^2_G[u] is the set of vertices at distance at most two from u. Given S ⊆ V(G), we denote by G − S the graph obtained from G by the removal of the vertices of S. If S = {u}, we also write G − u. The subgraph induced by S is denoted by G[S], and has S as its vertex set and {uv | u, v ∈ S and uv ∈ E(G)} as its edge set. A clique is a set K ⊆ V(G) such that G[K] is a complete graph. An independent set is a set I ⊆ V(G) such that G[I] has no edges. A vertex cover of G is a set S ⊆ V(G) such that V(G) \ S is an independent set. A graph is bipartite if its vertex set can be partitioned into two independent sets, say A and B, and is complete bipartite if it is bipartite and every vertex of A is adjacent to every vertex of B. A biclique in a graph G is a set B ⊆ V(G) such that G[B] is a complete bipartite graph. A matching in G is a set of edges having no common endpoint. We denote by K_r the complete graph on r vertices. Given two graphs G and G′, we denote by G + G′ the disjoint union of them. For a positive integer p, pG denotes the disjoint union of p copies of G.
The square of a graph H is the graph G = H^2 such that V(G) = V(H) and every two distinct vertices u and v are adjacent in G if and only if they are at distance at most two in H. If G = H^2, then H is a square root of G. Two vertices u, v are said to be true twins if N_G[u] = N_G[v].
A true twin class of G is a maximal set of vertices that are pairwise true twins. Note that the set of true twin classes of G constitutes a partition of V(G); let T = {T_1, . . . , T_r} denote this partition. We define the prime-twin graph G′ of G as the graph with the vertex set T such that two distinct vertices T_i and T_j of G′ are adjacent if and only if uv ∈ E(G) for u ∈ T_i and v ∈ T_j.

Parameterized Complexity. We refer to the book of Cygan et al. [5] for an introduction to Parameterized Complexity. Here we only state some basic definitions that are crucial for understanding. In a parameterized problem, each instance is supplied with an integer parameter k, that is, each instance can be written as a pair (I, k). A parameterized problem is said to be fixed-parameter tractable (FPT) if it can be solved in time f(k) · |I|^{O(1)} for some computable function f. A kernelization for a parameterized problem is a polynomial-time algorithm that maps each instance (I, k) of a parameterized problem to an instance (I′, k′) of the same problem such that (i) (I, k) is a Yes-instance if and only if (I′, k′) is a Yes-instance, and (ii) |I′| + k′ is bounded by f(k) for some computable function f. The output (I′, k′) is called a kernel. The function f is said to be the size of the kernel.
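The graph-theoretic notions above, the square of a graph, true twin classes, and the prime-twin graph, are all easy to compute directly. The following Python sketch is our own illustration (all helper names are assumptions, with graphs stored as {vertex: neighbor-set} maps), not code from the paper.

```python
from itertools import combinations

def square(adj):
    """Return the square of a graph given as {vertex: set of neighbors}."""
    sq = {}
    for v in adj:
        reach = set(adj[v])          # vertices at distance 1
        for u in adj[v]:
            reach |= adj[u]          # plus vertices at distance 2
        reach.discard(v)
        sq[v] = reach
    return sq

def true_twin_classes(adj):
    """Group vertices by closed neighborhood: u, v are true twins iff N[u] = N[v]."""
    classes = {}
    for v in adj:
        key = frozenset(adj[v] | {v})
        classes.setdefault(key, set()).add(v)
    return list(classes.values())

def prime_twin_graph(adj):
    """Vertices are the true twin classes; two classes are adjacent iff
    some (equivalently, every) pair of their members is adjacent in G."""
    classes = true_twin_classes(adj)
    edges = set()
    for i, j in combinations(range(len(classes)), 2):
        u = next(iter(classes[i]))
        v = next(iter(classes[j]))
        if v in adj[u]:
            edges.add((i, j))
    return classes, edges

# A path on 3 vertices: its square is a triangle.
H = {1: {2}, 2: {1, 3}, 3: {2}}
G = square(H)   # {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
```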
Integer Programming. We will use integer linear programming as a subroutine in the proof of our main result. In particular, we translate part of our problem into an instance of the following problem.
p-Variable Integer Linear Programming Feasibility
Input: An m × p matrix A over Z and a vector b ∈ Z^m.
Task: Decide whether there is a vector x ∈ Z^p such that Ax ≤ b.
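Lenstra-style algorithms for this problem are far more involved, but the feasibility question itself is simple to state in code. The following Python sketch is an illustration of the problem only, not of the FPT algorithm referred to below: it checks Ax ≤ b by brute force over a bounded box of integer candidates, and the box bound is an assumption we add for the example, since a genuine solver must handle unbounded integers.

```python
from itertools import product

def ilp_feasible(A, b, box):
    """Check whether some integer vector x with |x_i| <= box satisfies Ax <= b.
    A is a list of m rows of p integers each; b is a list of m integers.
    Brute force over (2*box+1)^p candidates, for illustration only."""
    p = len(A[0])
    for x in product(range(-box, box + 1), repeat=p):
        if all(sum(a_ij * x_j for a_ij, x_j in zip(row, x)) <= b_i
               for row, b_i in zip(A, b)):
            return x
    return None

# x_1 + x_2 <= 3, -x_1 <= 0, -x_2 <= -2  (i.e., x_1 >= 0 and x_2 >= 2)
print(ilp_feasible([[1, 1], [-1, 0], [0, -1]], [3, 0, -2], box=5))  # e.g. (0, 2)
```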
Lenstra [22] and Kannan [15] showed that the above problem is FPT when parameterized by p, while Frank and Tardos [9] showed that this algorithm can be made to run also in polynomial space. We will make use of these results, which we formally state next.

Theorem 1 ([9,15,22]). p-Variable Integer Linear Programming Feasibility can be solved using O(p^{2.5p+o(p)} · L) arithmetic operations and space polynomial in L, where L is the number of bits in the input.

3 Distance-k-to-(pK_1 + qK_2) Square Root
In this section we give an FPT algorithm for the Distance-k-to-(pK_1 + qK_2) Square Root problem, parameterized by k. In the remainder of this section, we use (G, p, q, k) to denote an instance of the problem. Suppose that (G, p, q, k) is a Yes-instance and H is a square root of G such that there is S ⊆ V(G) of size k and H − S is isomorphic to pK_1 + qK_2. We say that S is a modulator, the p vertices of H − S that belong to pK_1 are called S-isolated vertices, and the q edges that belong to qK_2 are called S-matching edges. Slightly abusing notation, we also use these notions when H is not necessarily a square root of G but any graph such that H − S has maximum degree one.
3.1 Structural lemmas
We start by defining the following two equivalence relations on the set of ordered pairs of vertices of G. Two pairs of adjacent vertices (x, y) and (z, w) are called matched twins, denoted by (x, y) ∼_mt (z, w), if the following conditions hold:

· N_G[x] \ {y} = N_G[z] \ {w}, and
· N_G[y] \ {x} = N_G[w] \ {z}.

A pair of vertices (x, y) is called comparable if N_G[x] ⊆ N_G[y]. Two comparable pairs of vertices (x, y) and (z, w) are nested twins, denoted by (x, y) ∼_nt (z, w), if the following conditions hold:

· N_G(x) \ {y} = N_G(z) \ {w}, and
· N_G[y] \ {x} = N_G[w] \ {z}.
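Both relations compare neighborhood sets that depend only on a single pair, so each pair can be reduced to a canonical key, and two pairs are matched (resp. nested) twins exactly when their keys coincide. A small Python sketch of this observation (our own illustration; adj maps each vertex to its neighbor set):

```python
def mt_key(adj, x, y):
    """Canonical key for ~_mt: (N[x] \\ {y}, N[y] \\ {x})."""
    return (frozenset((adj[x] | {x}) - {y}),
            frozenset((adj[y] | {y}) - {x}))

def nt_key(adj, x, y):
    """Canonical key for ~_nt: (N(x) \\ {y}, N[y] \\ {x})."""
    return (frozenset(adj[x] - {y}),
            frozenset((adj[y] | {y}) - {x}))

def comparable(adj, x, y):
    """(x, y) is comparable iff N[x] is contained in N[y]."""
    return (adj[x] | {x}) <= (adj[y] | {y})

def matched_twins(adj, p, q):
    """p, q are ordered pairs of adjacent vertices."""
    (x, y), (z, w) = p, q
    return y in adj[x] and w in adj[z] and mt_key(adj, x, y) == mt_key(adj, z, w)

def nested_twins(adj, p, q):
    """p, q are ordered comparable pairs."""
    return (comparable(adj, *p) and comparable(adj, *q)
            and nt_key(adj, *p) == nt_key(adj, *q))
```

Grouping all pairs by these keys therefore yields the equivalence classes used by the reduction rules in Section 3.2 in a single pass over the pairs.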
We use the following properties of matched and nested twins.
Lemma 1. Let (x, y) and (z, w) be two distinct pairs of adjacent vertices (resp. comparable pairs) of G that are matched twins (resp. nested twins). Then, the following holds:

(i) {x, y} ∩ {z, w} = ∅,
(ii) xw, zy ∉ E(G),
(iii) yw ∈ E(G),
(iv) if (x, y) ∼_mt (z, w) then xz ∈ E(G),
(v) if (x, y) ∼_nt (z, w) then xz ∉ E(G),
(vi) G − {x, y} and G − {z, w} are isomorphic.
Proof. For (i), we show that the end-vertices of both pairs are distinct. It is not difficult to see that (x, y) ≁_mt (y, x) and (x, y) ≁_nt (y, x), since x ∈ N_G[x] \ {y} and x ∉ N_G[y] \ {x}. Assume, for the sake of contradiction, that the two pairs share one end-vertex.

· First, we show (i) for ∼_mt. Let (x, y) ∼_mt (z, w). Suppose that y = w. Then x ∉ N_G[y] \ {x} but x ∈ N_G[w] \ {z}, that is, N_G[y] \ {x} ≠ N_G[w] \ {z}, contradicting (x, y) ∼_mt (z, w). Assume that y = z. Then z ∈ N_G[z] \ {w} but z = y ∉ N_G[x] \ {y}; a contradiction. The cases x = z and x = w are completely symmetric to the cases considered above.

· Now we prove (i) for ∼_nt. Let (x, y) ∼_nt (z, w). Suppose that y = w. Then x ∉ N_G[y] \ {x} but x ∈ N_G[w] \ {z}, that is, N_G[y] \ {x} ≠ N_G[w] \ {z}; a contradiction to (x, y) ∼_nt (z, w). Let x = z. Then y ∉ N_G(x) \ {y} but y ∈ N_G(z) \ {w}, and we get that N_G(x) \ {y} ≠ N_G(z) \ {w}, leading again to a contradiction. Assume that y = z. Then y = z ∉ N_G[w] \ {z} but y ∈ N_G[y] \ {x}, and we again obtain a contradiction. The case x = w is symmetric.

This completes the proof of (i). To show the remaining claims, observe that N_G[y] \ {x} = N_G[w] \ {z} holds in both relations. For (ii), note that if xw ∈ E(G), then x ∈ N_G[w] \ {z} but x ∉ N_G[y] \ {x}; a contradiction. So xw ∉ E(G). The same follows by a symmetric argument for the edge zy. For (iii), note that if yw ∉ E(G), then w ∉ N_G[y] \ {x}, but w ∈ N_G[w] \ {z}; a contradiction. To show (iv), observe that if xz ∉ E(G), then x ∈ N_G[x] \ {y}, while x ∉ N_G[z] \ {w}; a contradiction. For (v), if xz ∈ E(G), then z ∈ N_G(x) \ {y}, but z ∉ N_G(z) \ {w}; a contradiction. To see (vi), notice that {x, y} ∩ {z, w} = ∅ by (i). Consider α : V(G) → V(G) such that α(x) = z, α(y) = w, α(z) = x, α(w) = y and α(v) = v for v ∈ V(G) \ {x, y, z, w}. It is straightforward to see that α is an automorphism of G by the definition of ∼_nt and ∼_mt and the properties (i) and (ii). Hence, G − {x, y} and G − {z, w} are isomorphic.
In particular, the properties above allow us to classify pairs of vertices with respect to ∼_mt and ∼_nt.

Observation 1. The relations ∼_mt and ∼_nt are equivalence relations on pairs of adjacent vertices and comparable pairs of vertices, respectively.
Proof. It is clear that ∼_mt (resp. ∼_nt) is reflexive and symmetric on pairs of adjacent vertices (resp. comparable pairs). Let (x_1, y_1), (x_2, y_2) and (x_3, y_3) be pairs of vertices. If N_G[x_1] \ {y_1} = N_G[x_2] \ {y_2} and N_G[x_2] \ {y_2} = N_G[x_3] \ {y_3}, then N_G[x_1] \ {y_1} = N_G[x_3] \ {y_3}. Also, if N_G(x_1) \ {y_1} = N_G(x_2) \ {y_2} and N_G(x_2) \ {y_2} = N_G(x_3) \ {y_3}, then N_G(x_1) \ {y_1} = N_G(x_3) \ {y_3}. This immediately implies that ∼_mt and ∼_nt are transitive as well.
Let H be a square root of a connected graph G with at least three vertices, such that H is at distance k from pK_1 + qK_2, and let S be a modulator. Note that S ≠ ∅, because G is connected and |V(G)| ≥ 3. Then an S-matching edge ab of H satisfies exactly one of the following conditions:

1. N_H(a) ∩ S = ∅ and N_H(b) ∩ S ≠ ∅,
2. N_H(a) ∩ S, N_H(b) ∩ S ≠ ∅ and N_H(a) ∩ N_H(b) ∩ S = ∅,
3. N_H(a) ∩ S, N_H(b) ∩ S ≠ ∅ and N_H(a) ∩ N_H(b) ∩ S ≠ ∅.

We refer to them as type 1, 2 and 3 edges, respectively (see Figure 1). We use the same notation for every graph F that has a set of vertices S such that F − S has maximum degree at most one.

Figure 1: Types of edges of H − S.
In the following three lemmas, we show the properties of the S-matching edges of types 1, 2 and 3 respectively that are crucial for our algorithm. We point out that even though some of the properties presented may be redundant, we state them in the lemmas for clarity of the explanations.
Lemma 2. Let H be a square root of a connected graph G with at least three vertices such that H − S is isomorphic to pK_1 + qK_2 for S ⊆ V(G). If a_1b_1 and a_2b_2 are two distinct type 1 edges such that N_H(b_1) ∩ S = N_H(b_2) ∩ S ≠ ∅, then the following holds:

(i) (a_1, b_1) and (a_2, b_2) are comparable pairs,
(ii) (a_1, b_1) ∼_nt (a_2, b_2),
(iii) (a_1, b_1) ≁_mt (a_2, b_2).

Proof. Let A = N_H(b_1) ∩ S = N_H(b_2) ∩ S. Since (a_1, b_1) is a type 1 edge, we have that N_H(a_1) = {b_1}. Thus, N_G[a_1] = A ∪ {a_1, b_1} ⊆ N_H[b_1] ⊆ N_G[b_1]. The same holds for (a_2, b_2). Hence, the pairs are comparable and (i) is proved.

For (ii), note that since N_H(b_1) ∩ S = N_H(b_2) ∩ S = A, we have N_G[b_1] \ {a_1} = N_H[A] = N_G[b_2] \ {a_2}. Moreover, since N_H(a_1) = {b_1} and N_H(a_2) = {b_2}, we have that N_G(a_1) \ {b_1} = A = N_G(a_2) \ {b_2}. This shows that (a_1, b_1) ∼_nt (a_2, b_2).

Finally, for (iii), it suffices to notice that a_1a_2 ∉ E(G) by Lemma 1(v), while we would have a_1a_2 ∈ E(G) if (a_1, b_1) ∼_mt (a_2, b_2) by Lemma 1(iv).
Lemma 3. Let H be a square root of a connected graph G with at least three vertices such that H − S is isomorphic to pK_1 + qK_2 for S ⊆ V(G). If a_1b_1 and a_2b_2 are two distinct type 2 edges such that N_H(a_1) ∩ S = N_H(a_2) ∩ S and N_H(b_1) ∩ S = N_H(b_2) ∩ S, then the following holds:

(i) (a_1, b_1) ∼_mt (a_2, b_2),
(ii) (a_1, b_1) ≁_nt (a_2, b_2).

Proof. Let A = N_H(a_1) ∩ S = N_H(a_2) ∩ S and B = N_H(b_1) ∩ S = N_H(b_2) ∩ S. Since a_1b_1 and a_2b_2 are type 2 edges, we have A ∩ B = ∅. For (i), notice that N_G[a_1] = N^2_H[a_1] = {b_1} ∪ B ∪ N_H[A]. Therefore, we have that N_G[a_1] \ {b_1} = N^2_H[a_1] \ {b_1} = B ∪ N_H[A]. By the same arguments, N_G[a_2] \ {b_2} = B ∪ N_H[A] and, therefore, N_G[a_1] \ {b_1} = N_G[a_2] \ {b_2}. By symmetric arguments, we obtain that N_G[b_1] \ {a_1} = N_G[b_2] \ {a_2}, which completes the proof that (a_1, b_1) ∼_mt (a_2, b_2).

To prove (ii), notice that a_1a_2 ∈ E(G) by Lemma 1(iv) and, therefore, (a_1, b_1) ≁_nt (a_2, b_2) by Lemma 1(v).
Lemma 4. Let H be a square root of a connected graph G with at least three vertices such that H − S is isomorphic to pK_1 + qK_2 for S ⊆ V(G). If a_1b_1 and a_2b_2 are two distinct type 3 edges such that N_H(a_1) ∩ S = N_H(a_2) ∩ S and N_H(b_1) ∩ S = N_H(b_2) ∩ S, then the following holds:

(i) (a_1, b_1) ≁_mt (a_2, b_2),
(ii) (a_1, b_1) ≁_nt (a_2, b_2),
(iii) a_1 and a_2 (resp. b_1 and b_2) are true twins in G.

Proof. Let A = N_H(a_1) ∩ S = N_H(a_2) ∩ S and B = N_H(b_1) ∩ S = N_H(b_2) ∩ S. Since a_1b_1 and a_2b_2 are type 3 edges, A ∩ B ≠ ∅.

For (i) and (ii), it suffices to notice that since A ∩ B ≠ ∅, we have a_1b_2, b_1a_2 ∈ E(G). By Lemma 1(ii), we conclude that (a_1, b_1) ≁_mt (a_2, b_2) and (a_1, b_1) ≁_nt (a_2, b_2). For (iii), observe that N_G[a_1] = N^2_H[a_1] = N_H[A] ∪ {b_1} ∪ B by the definition. Since A ∩ B ≠ ∅, we have that b_1 ∈ N_H[A]. Hence, N_G[a_1] = N_H[A] ∪ B. By the same arguments, N_G[a_2] = N_H[A] ∪ B. Then N_G[a_1] = N_G[a_2], that is, a_1 and a_2 are true twins. Clearly, the same holds for b_1 and b_2.
We also need the following straightforward observation about S-isolated vertices.
Observation 2.
Let H be a square root of a connected graph G with at least three vertices such that H − S is isomorphic to pK_1 + qK_2 for S ⊆ V(G). Then every two distinct S-isolated vertices of H with the same neighbors in S are true twins in G.
The next lemma is used to construct reduction rules that allow us to bound the size of equivalence classes of pairs of vertices with respect to ∼_nt and ∼_mt.
Lemma 5. Let H be a square root of a connected graph G with at least three vertices such that H − S is isomorphic to pK_1 + qK_2 for a modulator S ⊆ V(G) of size k. Let Q be an equivalence class in the set of comparable pairs of vertices with respect to the relation ∼_nt (an equivalence class in the set of pairs of adjacent vertices with respect to the relation ∼_mt, respectively). If |Q| ≥ 2k + 2^{2k} + 1, then Q contains two pairs (a_1, b_1) and (a_2, b_2) such that a_1b_1 and a_2b_2 are S-matching edges of type 1 in H satisfying N_H(b_1) ∩ S = N_H(b_2) ∩ S ≠ ∅ (S-matching edges of type 2 in H satisfying N_H(a_1) ∩ S = N_H(a_2) ∩ S and N_H(b_1) ∩ S = N_H(b_2) ∩ S, respectively).
Proof. Let Q be an equivalence class of size at least 2k + 2^{2k} + 1 with respect to ∼_nt or ∼_mt. By Lemma 1(i), each vertex of G appears in at most one pair of Q. Since |S| = k, there are at most k pairs of Q with at least one element in S. Let

Q′ = {(x, y) ∈ Q | x, y ∉ S and xy is not an S-matching edge in H}.

We now show that |Q′| ≤ k. Consider (x, y) ∈ Q′. Since xy ∈ E(G) \ E(H), there exists w ∈ V(G) such that wx, wy ∈ E(H). Since H − S is isomorphic to pK_1 + qK_2, we have that w ∈ S. Let (x′, y′) ∈ Q′ \ {(x, y)}. By the same argument, there exists w′ ∈ S such that w′x′, w′y′ ∈ E(H). Moreover, it cannot be the case that w = w′, since this would imply that xy′, yx′ ∈ E(G), which by Lemma 1(ii) is a contradiction to the fact that (x, y) ∼_nt (x′, y′) or (x, y) ∼_mt (x′, y′). That is, for each pair (x, y) ∈ Q′, there is a vertex in S that is adjacent to both elements of the pair, and no vertex of S can be adjacent to the elements of more than one pair of Q′. Since |S| ≤ k, we conclude that |Q′| ≤ k.

Since |Q| ≥ 2k + 2^{2k} + 1, there are at least 2^{2k} + 1 S-matching edges in Q. Given that |S| ≤ k, by the pigeonhole principle, there are two pairs (a_1, b_1), (a_2, b_2) ∈ Q such that a_1b_1 and a_2b_2 are S-matching edges in H with N_H(a_1) ∩ S = N_H(a_2) ∩ S and N_H(b_1) ∩ S = N_H(b_2) ∩ S. In particular, this implies that a_1b_1 and a_2b_2 are of the same type. It cannot be the case that these two edges are of type 3, since by Lemma 4(i) and (ii), these two pairs would not be equivalent with respect to ∼_nt or ∼_mt. We now consider the following two cases, one for each of the mentioned equivalence relations.

Suppose that Q is an equivalence class in the set of comparable pairs of vertices with respect to the relation ∼_nt. By Lemma 3(ii), the two edges cannot be of type 2. Hence, a_1b_1 and a_2b_2 are of type 1. In particular, either N_H(a_1) ∩ S = N_H(a_2) ∩ S = ∅ or N_H(b_1) ∩ S = N_H(b_2) ∩ S = ∅. If N_H(a_1) ∩ S = N_H(a_2) ∩ S ≠ ∅, then a_1a_2 ∈ E(G), contradicting Lemma 1(v). Hence, a_1b_1 and a_2b_2 are S-matching edges of type 1 in H satisfying N_H(b_1) ∩ S = N_H(b_2) ∩ S ≠ ∅.

Let now Q be an equivalence class in the set of pairs of adjacent vertices with respect to the relation ∼_mt. By Lemma 2(iii), the two edges cannot be of type 1. Hence, a_1b_1 and a_2b_2 are of type 2. This concludes the proof of the lemma.
3.2 The algorithm
In this section we prove our main result. First, we consider connected graphs. For this, observe that if a connected graph G has a square root H then H is connected as well.
Theorem 2. Distance-k-to-(pK_1 + qK_2) Square Root can be solved in time 2^{2^{O(k)}} · n^{O(1)} on connected graphs.

Proof. Let (G, p, q, k) be an instance of Distance-k-to-(pK_1 + qK_2) Square Root with G being a connected graph. Recall that we want to determine if G has a square root H such that H − S is isomorphic to pK_1 + qK_2 for a modulator S ⊆ V(G) with |S| = k, where p + 2q + k = n.
If G has at most two vertices, then the problem is trivial. Notice also that if k = 0, then (G, p, q, k) may be a Yes-instance only if G has at most two vertices, because G is connected. Hence, from now on we assume that n ≥ 3 and k ≥ 1.
We exhaustively apply the following rule to reduce the number of type 1 edges in a potential solution. For this, we consider the set A of comparable pairs of vertices of G and find its partition into equivalence classes with respect to ∼_nt. Note that A contains at most 2m elements and can be constructed in time O(mn). Then the partition of A into equivalence classes can be found in time O(m^2 n) by checking the neighborhoods of the vertices of each pair.
Rule 2.1. If there is an equivalence class Q ⊆ A with respect to ∼_nt such that |Q| ≥ 2k + 2^{2k} + 2, delete two vertices of G that form a pair of Q and set q := q − 1.
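A sketch of the exhaustive application of Rule 2.1, reusing nt_key and comparable from the earlier sketch (our own illustrative code; the deleted pair is chosen arbitrarily from an oversized class):

```python
def apply_rule_2_1(adj, q, k):
    """Exhaustively delete pairs from ~_nt classes of size >= 2k + 2^(2k) + 2."""
    bound = 2 * k + 2 ** (2 * k) + 1
    while True:
        classes = {}
        for x in adj:
            for y in adj:
                if x != y and comparable(adj, x, y):
                    classes.setdefault(nt_key(adj, x, y), []).append((x, y))
        big = [pairs for pairs in classes.values() if len(pairs) > bound]
        if not big:
            return adj, q
        x, y = big[0][0]
        for v in (x, y):                      # delete the pair (x, y) from G
            for u in adj.pop(v):
                if u in adj:
                    adj[u].discard(v)
        q -= 1                                # Rule 2.1 sets q := q - 1
```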
The following claim shows that Rule 2.1 is safe.

Claim 2.1. If G′ is the graph obtained from G by the application of Rule 2.1, then (G, p, q, k) and (G′, p, q − 1, k) are equivalent instances of Distance-k-to-(pK_1 + qK_2) Square Root and G′ is connected.
Proof: Let G′ = G − {x, y} for a pair (x, y) ∈ Q.

First assume (G, p, q, k) is a Yes-instance of Distance-k-to-(pK_1 + qK_2) Square Root and let H be a square root of G that is a solution to this problem with a modulator S. By Lemma 5, H has two S-matching edges x′y′ and x″y″ of type 1 such that (x′, y′), (x″, y″) ∈ Q and N_H(y′) ∩ S = N_H(y″) ∩ S ≠ ∅. Note that every neighbor of y′ in H other than x′ is also a neighbor of y″ in H. Hence, H″ = H − {x′, y′} is a square root of G″ = G − {x′, y′} with one less S-matching edge. Moreover, H″ is connected, because H is connected and N_H(y′) \ {x′} = N_H(y″) \ {x″}. This implies that G″ is connected as well. We conclude that (G″, p, q − 1, k) is a Yes-instance with G″ a connected graph. Because G′ and G″ are isomorphic by Lemma 1(vi), we have that (G′, p, q − 1, k) is a Yes-instance as well and G′ is connected.

Now assume (G′, p, q − 1, k) is a Yes-instance of Distance-k-to-(pK_1 + qK_2) Square Root and let H′ be a square root of G′ that is a solution to this problem with a modulator S. Recall that Q consists of pairs of vertices whose end-vertices are pairwise distinct by Lemma 1(i). Hence, Q′ = Q \ {(x, y)} contains at least 2k + 2^{2k} + 1 elements. By the definition of ∼_nt, every two pairs of Q′ are equivalent with respect to the relation for G′. Thus, by Lemma 5, there are (x′, y′), (x″, y″) ∈ Q′ such that x′y′ and x″y″ are S-matching edges of type 1 in H′ and N_{H′}(y′) ∩ S = N_{H′}(y″) ∩ S ≠ ∅. We construct a square root H for G by adding the edge xy to H′ as an S-matching edge of type 1 with N_H(y) ∩ S = N_{H′}(y′) ∩ S. To see that H is indeed a square root for G, note that since H′ was a square root for G′, we have H′^2 = G′. Now we argue about the edges of G that are incident to x and y. Since (x, y), (x′, y′) ∈ Q, we have that N_G(x) \ {y} = N_G(x′) \ {y′} and N_G[y] \ {x} = N_G[y′] \ {x′}. This means that if w ≠ x is a neighbor of y in G, then w is also a neighbor of y′. Since H′ is a square root of G′, we have that either y′w ∈ E(H′) or y′ and w are at distance two in H′. Since N_H(y) ∩ S = N_{H′}(y′) ∩ S, the same holds for y: it is either adjacent to w or at distance two from w in H. A symmetric argument holds for any edge incident to x in G. Hence, we conclude that H is indeed a square root of G.
We also want to reduce the number of type 2 edges in a potential solution. Let B be the set of pairs of adjacent vertices. We construct the partition of B into equivalence classes with respect to ∼_mt. We have that |B| = 2m and, therefore, the partition of B into equivalence classes can be found in time O(m^2 n) by checking the neighborhoods of the vertices of each pair. We exhaustively apply the following rule.
Rule 2.2. If there is an equivalence class Q ⊆ B with respect to ∼_mt such that |Q| ≥ 2k + 2^{2k} + 2, delete two vertices of G that form a pair of Q and set q := q − 1.
The following claim shows that Rule 2.2 is safe.

Claim 2.2. If G′ is the graph obtained from G by the application of Rule 2.2, then (G, p, q, k) and (G′, p, q − 1, k) are equivalent instances of Distance-k-to-(pK_1 + qK_2) Square Root and G′ is connected.

Proof: The proof of this claim follows the same lines as the proof of Claim 2.1. Let G′ = G − {x, y} for (x, y) ∈ Q.

First assume (G, p, q, k) is a Yes-instance of Distance-k-to-(pK_1 + qK_2) Square Root and let H be a square root of G that is a solution to this problem with a modulator S. By Lemma 5, H has two S-matching edges x′y′ and x″y″ of type 2 such that (x′, y′), (x″, y″) ∈ Q, N_H(x′) ∩ S = N_H(x″) ∩ S and N_H(y′) ∩ S = N_H(y″) ∩ S. Note that every neighbor of x′ (resp. y′) in H other than y′ (resp. x′) is also a neighbor of x″ (resp. y″). Thus, H″ = H − {x′, y′} is a square root of G″ = G − {x′, y′} with one less S-matching edge. We also have that H″ is connected, because H is connected. This implies that G″ is also connected. We conclude that (G″, p, q − 1, k) is a Yes-instance with G″ a connected graph. Because G′ and G″ are isomorphic by Lemma 1(vi), we have that (G′, p, q − 1, k) is a Yes-instance as well and G′ is connected.

Now assume (G′, p, q − 1, k) is a Yes-instance of Distance-k-to-(pK_1 + qK_2) Square Root and let H′ be a square root of G′ that is a solution to this problem with a modulator S. Recall that Q consists of pairs of vertices whose end-vertices are pairwise distinct by Lemma 1(i). Hence, Q′ = Q \ {(x, y)} contains at least 2k + 2^{2k} + 1 elements. By the definition of ∼_mt, every two pairs of Q′ are equivalent with respect to the relation for G′. Thus, by Lemma 5, there are (x′, y′), (x″, y″) ∈ Q′ such that x′y′ and x″y″ are S-matching edges of type 2 in H′ with N_{H′}(x′) ∩ S = N_{H′}(x″) ∩ S and N_{H′}(y′) ∩ S = N_{H′}(y″) ∩ S. We construct a square root H for G by adding the edge xy to H′ as an S-matching edge of type 2 with N_H(x) ∩ S = N_{H′}(x′) ∩ S and N_H(y) ∩ S = N_{H′}(y′) ∩ S. To see that H is indeed a square root for G, note that since H′ was a square root for G′, we have H′^2 = G′. Now we argue about the edges of G that are incident to x and y. Since (x, y), (x′, y′) ∈ Q, we have that N_G[x] \ {y} = N_G[x′] \ {y′} and N_G[y] \ {x} = N_G[y′] \ {x′}. This means that if w ≠ x is a neighbor of y in G, then w is also a neighbor of y′. Since H′ is a square root of G′, we have that either y′w ∈ E(H′) or y′ and w are at distance two in H′. Since N_H(y) ∩ S = N_{H′}(y′) ∩ S, the same holds for y: it is either adjacent to w in H or at distance two from w in H. A symmetric argument holds for any edge incident to x in G. Hence, we conclude that H is indeed a square root of G.
After exhaustive application of Rules 2.1 and 2.2 we obtain the following bounds on the number of S-matching edges of types 1 and 2 in a potential solution.

Claim 2.3. Let (G′, p′, q′, k) be the instance of Distance-k-to-(pK_1 + qK_2) Square Root obtained after exhaustive application of Rules 2.1 and 2.2. Then G′ is a connected graph and a potential solution H to the instance has at most 2^k(2k + 2^{2k} + 1) S-matching edges of type 1 and at most 2^{2k}(2k + 2^{2k} + 1) S-matching edges of type 2.
Proof: Clearly, G′ is connected by Claims 2.1 and 2.2.

By Lemma 2(ii) and Lemma 3(i), if two S-matching edges xy and x′y′ of a potential solution behave in the same way with respect to S, that is, if N_H(x) ∩ S = N_H(x′) ∩ S and N_H(y) ∩ S = N_H(y′) ∩ S, then they belong to the same equivalence class (either with respect to ∼_nt or to ∼_mt). Hence, after exhaustive application of Rule 2.1, for each set A ⊆ S, there are at most 2k + 2^{2k} + 1 S-matching edges xy of type 1 such that N_H(y) ∩ S = A. Thus, there are at most 2^k(2k + 2^{2k} + 1) S-matching edges of type 1 in H. Analogously, after exhaustive application of Rule 2.2, for each A, B ⊆ S, there are at most 2k + 2^{2k} + 1 S-matching edges xy of type 2 such that N_H(x) ∩ S = A and N_H(y) ∩ S = B. Hence, there are at most 2^{2k}(2k + 2^{2k} + 1) S-matching edges of type 2.
For simplicity, we denote by (G, p, q, k) again the instance obtained after exhaustive application of Rules 2.1 and 2.2. Notice that it can be constructed in polynomial time, since the equivalence classes according to ∼_mt and ∼_nt can be computed in time O(m^2 n).
By Claim 2.3, in a potential solution, the number of S-matching edges of types 1 and 2 is bounded by a function of k. We will make use of this fact to make further guesses about the structure of a potential solution. To do so, we first consider the classes of true twins of G and show the following.

Claim 2.4. Let T = {T_1, . . . , T_r} be the partition of V(G) into classes of true twins. If (G, p, q, k) is a Yes-instance of our problem, then r ≤ 2(2^k + 2^{2k})(2k + 2^{2k} + 1) + k + 2^k + 2 · 2^{2k}.

Proof: Assume (G, p, q, k) is a Yes-instance of our problem and let H be a square root of G containing a modulator S of size k such that H − S is isomorphic to pK_1 + qK_2. Let X be the set of vertices of G that are endpoints of type 1 and type 2 S-matching edges in H. By Claim 2.3, |X| ≤ 2(2^k + 2^{2k})(2k + 2^{2k} + 1). Note that if two S-isolated vertices of H have the same neighborhood in S, they are true twins in G by Observation 2. Moreover, by Lemma 4(iii), if xy and x′y′ are two type 3 S-matching edges in H satisfying N_H(x) ∩ S = N_H(x′) ∩ S and N_H(y) ∩ S = N_H(y′) ∩ S, then x and x′ (resp. y and y′) are true twins in G. As already explained, there are no other types of edges in H − S. Thus, we have at most 2(2^k + 2^{2k})(2k + 2^{2k} + 1) distinct classes of true twins among the vertices of X, at most k classes among the vertices of S, at most 2^k classes among the S-isolated vertices, and at most 2 · 2^{2k} classes among the vertices that are endpoints of type 3 S-matching edges. This shows that r ≤ 2(2^k + 2^{2k})(2k + 2^{2k} + 1) + k + 2^k + 2 · 2^{2k}.
Observe that the partition T = {T_1, . . . , T_r} of V(G) into classes of true twins can be constructed in linear time [28]. Using Claim 2.4, we apply the following rule.
Rule 2.3. If |T| > 2(2^k + 2^{2k})(2k + 2^{2k} + 1) + k + 2^k + 2 · 2^{2k}, then return No and stop.
From now on, we assume that we do not stop by Rule 2.3. This means that |T| = O(2^{4k}). Suppose that (G, p, q, k) is a Yes-instance of Distance-k-to-(pK_1 + qK_2) Square Root and let H be a square root of G that is a solution to this instance with a modulator S. We say that F is the skeleton of H with respect to S if F is obtained from H by the exhaustive application of the following rules:

(i) if H has two distinct type 3 S-matching edges xy and x′y′ with N_H(x) ∩ S = N_H(x′) ∩ S and N_H(y) ∩ S = N_H(y′) ∩ S, then delete x and y,
(ii) if H has two distinct S-isolated vertices x and y with N_H(x) = N_H(y), then delete x.

In other words, we replace the set of S-matching edges of type 3 with the same neighborhoods of the end-vertices in S by a single representative, and we replace the set of S-isolated vertices with the same neighborhoods by a single representative.
We say that a graph F is a potential solution skeleton with respect to a set S ⊆ V(F) of size k for (G, p, q, k) if the following conditions hold:

(i) F − S has maximum degree one, that is, F − S is isomorphic to sK_1 + tK_2 for some nonnegative integers s and t,
(ii) for every two distinct S-isolated vertices x and y of F, N_F(x) ≠ N_F(y),
(iii) for every two distinct S-matching edges xy and x′y′ of type 3, either N_F(x) ∩ S ≠ N_F(x′) ∩ S or N_F(y) ∩ S ≠ N_F(y′) ∩ S,
(iv) for every A, B ⊆ S such that A ∩ B = ∅ and at least one of A and B is nonempty, the set {xy ∈ E(F − S) | N_F(x) ∩ S = A and N_F(y) ∩ S = B} has size at most 2k + 2^{2k} + 1.

Note that (iv) means that the number of type 1 and type 2 S-matching edges with the same neighbors in S is upper bounded by 2k + 2^{2k} + 1. Since Rules 2.1 and 2.2 cannot be applied to (G, p, q, k), we obtain the following claim by Lemmas 2(ii) and 3(i).
Claim 2.5. Every skeleton of a solution to (G, p, q, k) is a potential solution skeleton for this instance with respect to the modulator S.
We observe that each potential solution skeleton has bounded size.
Claim 2.6. For every potential solution skeleton F for (G, p, q, k), |V(F)| ≤ k + 2^k + 2 · 2^{2k} + 2 · 2^{2k}(2k + 2^{2k} + 1).

Proof: By the definition, F has k vertices in S, at most 2^k S-isolated vertices, and at most 2 · 2^{2k} end-vertices of S-matching edges of type 3. For each A, B ⊆ S such that A ∩ B = ∅ and at least one of A and B is nonempty, F has at most 2k + 2^{2k} + 1 S-matching edges xy of type 1 or type 2 with N_F(x) ∩ S = A and N_F(y) ∩ S = B. Then we have at most 2 · 2^{2k}(2k + 2^{2k} + 1) end-vertices of these edges.
Moreover, we can construct the family F of all potential solution skeletons together with their modulators.

Claim 2.7. The family F of all pairs (F, S), where F is a potential solution skeleton and S ⊆ V(F) is a modulator of size k, has size at most 2^{k(k−1)/2} · 2^{2^k} · 2^{2^{2k}} · (2k + 2^{2k} + 2)^{2^{2k}} and can be constructed in time 2^{2^{O(k)}}.

Proof: There are at most 2^{k(k−1)/2} distinct subgraphs with the set of vertices S of size k. We have at most 2^{2^k} distinct sets of S-isolated vertices, and there are at most 2^{2^{2k}} distinct sets of S-matching edges of type 3. For each A, B ⊆ S such that A ∩ B = ∅ and at least one of A and B is nonempty, there are 2k + 2^{2k} + 2 possible sets of S-matching edges xy of type 1 or type 2 such that N_F(x) ∩ S = A and N_F(y) ∩ S = B. Therefore, there are at most (2k + 2^{2k} + 2)^{2^{2k}} distinct sets of edges of type 1 or type 2. Then |F| ≤ 2^{k(k−1)/2} · 2^{2^k} · 2^{2^{2k}} · (2k + 2^{2k} + 2)^{2^{2k}}. Finally, it is straightforward to see that F can be constructed in 2^{2^{O(k)}} time.
Using Claim 2.7, we construct F, and for every (F, S) ∈ F, we check whether there is a solution H to (G, p, q, k) with a modulator S′ whose skeleton is isomorphic to F with an isomorphism that maps S to S′. If we find such a solution, then (G, p, q, k) is a Yes-instance. Otherwise, Claim 2.5 guarantees that (G, p, q, k) is a No-instance.
Assume that we are given (F, S) ∈ F for the instance (G, p, q, k). Recall that by Rule 2.3 we have the partition T = {T_1, . . . , T_r} of V(G) into at most 2(2^k + 2^{2k})(2k + 2^{2k} + 1) + k + 2^k + 2 · 2^{2k} true twin classes. Recall also that the prime-twin graph G′ of G is the graph with the vertex set T such that two distinct vertices T_i and T_j of G′ are adjacent if and only if uv ∈ E(G) for u ∈ T_i and v ∈ T_j. Clearly, given G and T, G′ can be constructed in linear time. For a graph R with V(R) ⊆ V(G), we define τ_R : V(R) → T to be the mapping such that τ_R(v) = T_i if v ∈ T_i, for T_i ∈ T.
Let ϕ : V(F) → T be a surjective mapping. We say that ϕ is G-compatible if every two distinct vertices u and v of F are adjacent in F^2 if and only if ϕ(u) and ϕ(v) are adjacent in G′.

Claim 2.8. Let F be the skeleton of a solution H to (G, p, q, k). Then τ_F : V(F) → T is a G-compatible surjection.
Proof: Recall that H^2 = G and F is an induced subgraph of H. Then the definition of F and Lemma 4(iii) immediately imply that τ_F is a G-compatible surjection.
Our next step is to reduce our problem to solving a system of linear integer inequalities. Let ϕ : V(F) → T be a G-compatible surjective mapping. Let X_1, X_2 and X_3 be the sets of end-vertices of the S-matching edges of type 1, type 2 and type 3, respectively, in F. Let also Y be the set of S-isolated vertices of F. For every vertex v ∈ V(F), we introduce an integer variable x_v. Informally, x_v is the number of vertices of a potential solution H that correspond to a vertex v. Consider the following system:

    x_v = 1                            for v ∈ S ∪ X_1 ∪ X_2,
    x_v ≥ 1                            for v ∈ Y ∪ X_3,
    x_u − x_v = 0                      for every type 3 edge uv,
    Σ_{v ∈ Y} x_v = p,
    Σ_{v ∈ X_1 ∪ X_2 ∪ X_3} x_v = 2q,
    Σ_{v ∈ ϕ^{−1}(T_i)} x_v = |T_i|    for T_i ∈ T.        (1)
The following claim is crucial for our algorithm.

Claim 2.9. The instance (G, p, q, k) has a solution H with a modulator S′ such that there is an isomorphism ψ : V(F) → V(F′) for the skeleton F′ of H mapping S to S′ if and only if there is a G-compatible surjective mapping ϕ : V(F) → T such that the system (1) has a solution.

Proof: Suppose that there is a solution H to (G, p, q, k) with a modulator S′ whose skeleton F′ is isomorphic to F with an isomorphism that maps S to S′. To simplify notation, we identify F and F′ and identify S and S′. We set ϕ = τ_F. By Claim 2.8, ϕ is a G-compatible surjection.
For v ∈ Y, we define the value of x_v as

    x_v = |{u ∈ V(H) | u is an S-isolated vertex and N_H(u) = N_H(v)}|.
For each S-matching edge uv of type 3 of F, we set

    x_u = x_v = |{xy ∈ E(H) | xy is an S-matching edge, N_H(x) ∩ S = N_H(u) ∩ S and N_H(y) ∩ S = N_H(v) ∩ S}|.
This defines the value of the variables x_v for v ∈ X_3. Recall that for all v ∉ Y ∪ X_3, x_v = 1 by the definition of (1). It is straightforward to verify that the constructed assignment of the variables gives a solution of (1) for ϕ.
For the opposite direction, let ϕ : V(F) → T be a G-compatible surjective mapping such that the system (1) has a solution. Assume that the variables x_v have values that satisfy (1). We construct the graph F̂ from F and the extension φ̂ of ϕ as follows.

· For every S-isolated vertex v of F, replace v by x_v copies that are adjacent to the same vertices as v, and define φ̂(x) = ϕ(v) for the constructed vertices.
· For every S-matching edge uv of type 3, replace u and v by x_u = x_v copies of pairs of adjacent vertices x and y, make x and y adjacent to the same vertices of S as u and v, respectively, and define φ̂(x) = ϕ(u) and φ̂(y) = ϕ(v), respectively.
· Set φ̂(v) = ϕ(v) for the remaining vertices.
Observe that by the construction and the assumption that the values of the variables x_v satisfy (1), F̂ has p S-isolated vertices, q S-matching edges, and for every T_i ∈ T, |{v ∈ V(F̂) | v ∈ φ̂^{−1}(T_i)}| = |T_i|. We define ψ : V(F̂) → V(G) by mapping the |T_i| vertices of {v ∈ V(F̂) | v ∈ φ̂^{−1}(T_i)} arbitrarily onto distinct vertices of T_i ⊆ V(G) for each T_i ∈ T. Clearly, ψ is a bijection. Notice that by Lemma 4(iii) and Observation 2, the sets of vertices of F̂ constructed from S-isolated vertices and the end-vertices of S-matching edges are sets of true twins in F̂^2. Also we have that, because ϕ is G-compatible, two distinct vertices u, v ∈ V(F̂) are adjacent in F̂^2 if and only if either φ̂(u) = φ̂(v), or φ̂(u) ≠ φ̂(v) and φ̂(u)φ̂(v) ∈ E(G′). This implies that ψ is an isomorphism of F̂^2 and G, which means that G has a square root isomorphic to F̂. Clearly, ψ|_{V(F)} is an isomorphism of F into the skeleton of this square root mapping S to ψ(S). By Claim 2.9, we can state our task as follows: verify whether there is a G-compatible surjection ϕ : V(F) → T such that (1) has a solution.
For this, we consider all at most |T|^{|V(F)|} = 2^{2^{O(k)}} surjections ϕ : V(F) → T. For each ϕ, we verify whether it is G-compatible. Clearly, this can be done in time O(|V(F)|^3). If ϕ is G-compatible, we construct the system (1) with |V(F)| = 2^{O(k)} variables in time O(|V(F)|^2). Then we solve it by applying Theorem 1 in 2^{2^{O(k)}} log n time. This completes the description of the algorithm and its correctness proof.
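The enumeration and the compatibility test are straightforward to sketch in Python (illustrative names: F_sq is the square of F as a neighbor-set map, phi maps V(F) to twin classes, and Gp_edges contains the edges of the prime-twin graph G′ as unordered frozensets). Following the characterization used in the proof of Claim 2.9, two vertices mapped to the same twin class are treated as adjacent.

```python
from itertools import product

def g_compatible(F_sq, phi, Gp_edges):
    """Check: for all distinct u, v in V(F), uv in E(F^2) iff phi(u) and phi(v)
    coincide or are adjacent in the prime-twin graph G'."""
    verts = list(F_sq)
    for i, u in enumerate(verts):
        for v in verts[i + 1:]:
            in_square = v in F_sq[u]
            in_prime = phi[u] == phi[v] or frozenset((phi[u], phi[v])) in Gp_edges
            if in_square != in_prime:
                return False
    return True

def surjections(domain, codomain):
    """Enumerate all surjective maps domain -> codomain
    (at most |codomain|^{|domain|} candidates)."""
    for values in product(codomain, repeat=len(domain)):
        if set(values) == set(codomain):
            yield dict(zip(domain, values))
```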
To evaluate the total running time, notice that the preprocessing step, that is, the exhaustive application of Rules 2.1 and 2.2, is done in polynomial time. The construction of T and G′ and the application of Rule 2.3 are polynomial as well. By Claim 2.7, F is constructed in time 2^{2^{O(k)}}. The final steps, that is, constructing the mappings ϕ and the systems (1) and solving the systems, can be done in time 2^{2^{O(k)}} log n. Therefore, the total running time is 2^{2^{O(k)}} · n^{O(1)}.
For simplicity, in Theorem 2 we assumed that the input graph is connected, but it is not difficult to extend the result to the general case.
Corollary 1. Distance-k-to-(pK_1 + qK_2) Square Root can be solved in time 2^{2^{O(k)}} · n^{O(1)}.
Proof. Let (G, p, q, k) be an instance of Distance-k-to-(pK_1 + qK_2) Square Root and let C_1, . . . , C_ℓ be the components of G. If ℓ = 1, we apply Theorem 2. Assume that this is not the case and ℓ ≥ 2. For each i ∈ {1, . . . , ℓ}, we use Theorem 2 to solve the instances (C_i, p′, q′, k′) such that k′ ≤ k, p′ ≤ p, q′ ≤ q and k′ + p′ + 2q′ = |V(C_i)|. Then we combine these solutions to solve the input instance using a dynamic programming algorithm.
For h ∈ {1, . . . , ℓ}, let G_h be the subgraph of G with the components C_1, . . . , C_h. Clearly, G_ℓ = G. For each h ∈ {1, . . . , ℓ} and every triple of nonnegative integers k′, p′, q′ such that k′ ≤ k, p′ ≤ p, q′ ≤ q and k′ + p′ + 2q′ = |V(G_h)|, we solve the instance (G_h, p′, q′, k′). For h = 1, this is already done, as G_1 = C_1. Let h ≥ 2. Then it is straightforward to observe that (G_h, p′, q′, k′) is a Yes-instance if and only if there are nonnegative integers k_1, p_1, q_1 and k_2, p_2, q_2 such that

· k_1 + k_2 = k′, p_1 + p_2 = p′, q_1 + q_2 = q′, and
· k_1 + p_1 + 2q_1 = |V(G_{h−1})| and k_2 + p_2 + 2q_2 = |V(C_h)|,

for which both (G_{h−1}, p_1, q_1, k_1) and (C_h, p_2, q_2, k_2) are Yes-instances. This allows us to solve (G_h, p′, q′, k′) in time O(n^2) if we are given the solutions for (G_{h−1}, p_1, q_1, k_1) and (C_h, p_2, q_2, k_2). We obtain that, given the tables of solutions for the components of G, we can solve the problem for G in time O(n^5). We conclude that the total running time is 2^{2^{O(k)}} · n^{O(1)}.

Corollary 1 gives the following statement for the related problems.

Corollary 2. VC-k Root, Distance-k-to-Matching Square Root and Distance-k-to-Degree-One Square Root can be solved in time 2^{2^{O(k)}} · n^{O(1)}.
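The combination step in the proof of Corollary 1 can be sketched as follows (illustrative Python; solve_connected is an assumed oracle that answers the problem on a single connected component, e.g., via the algorithm of Theorem 2):

```python
def combine_components(components, p, q, k, solve_connected):
    """Decide (G, p, q, k) given an oracle for connected instances.
    feasible holds the triples (k', p', q') realizable by the components seen so far."""
    feasible = {(0, 0, 0)}
    for C in components:
        n_C = len(C)
        new_feasible = set()
        for (k1, p1, q1) in feasible:
            # split the remaining budget between the earlier components and C
            for k2 in range(k - k1 + 1):
                for q2 in range(q - q1 + 1):
                    p2 = n_C - k2 - 2 * q2
                    if 0 <= p2 <= p - p1 and solve_connected(C, p2, q2, k2):
                        new_feasible.add((k1 + k2, p1 + p2, q1 + q2))
        feasible = new_feasible
    return (k, p, q) in feasible
```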
4 A lower bound for Distance-k-to-(pK_1 + qK_2) Square Root
In this section, we show that the running time of our algorithm for Distance-k-to-(pK_1 + qK_2) Square Root given in Section 3 (see Theorem 2) cannot be significantly improved. In fact, we show that the Distance-k-to-(pK_1 + qK_2) Square Root problem admits a double-exponential lower bound, even for the special case q = 0, that is, in the case of VC-k Root.
To provide a lower bound for the VC-k Root problem, we give a parameterized reduction from the Biclique Cover problem. This problem takes as input a bipartite graph B and a nonnegative integer k, and the task is to decide whether the edges of B can be covered by at most k complete bipartite subgraphs. Chandran et al. [2] showed the following two results about the Biclique Cover problem that will be of interest to us.

Theorem 3 ([2]). Biclique Cover cannot be solved in time 2^{2^{o(k)}} · n^{O(1)} unless ETH is false.

Theorem 4 ([2]). Biclique Cover does not admit a kernel of size 2^{o(k)} unless P = NP.

Lemma 6. There exists a polynomial-time algorithm that, given an instance (B, k) of Biclique Cover, produces an equivalent instance (G, k + 4) of VC-k Root with |V(G)| = |V(B)| + k + 6.

Proof. Let (B, k) be an instance of Biclique Cover, where (X, Y) is the bipartition of V(B). Let X = {x_1, . . . , x_p} and Y = {y_1, . . . , y_q}. We construct the instance (G, k + 4) of VC-k Root such that V(G) = X ∪ Y ∪ {z_1, . . . , z_k} ∪ {u, v, w, u′, v′, w′}. Denote by Z the set {z_1, . . . , z_k}. The edge set of G is defined in the following way: X ∪ Z ∪ {u}, X ∪ {v}, {u, v, w}, Y ∪ Z ∪ {u′}, Y ∪ {v′} and {u′, v′, w′} are cliques of G, and x_iy_j ∈ E(G) if and only if x_iy_j ∈ E(B). The construction of G is shown in Figure 2.

Figure 2: Illustration of the graphs G and H considered in the proof of Lemma 6. The sets X and Y form the bipartition of an instance of Biclique Cover and the two colored complete bipartite subgraphs correspond to a solution of the problem, where k = 2. The constructed graph G of VC-k Root is depicted by the solid and dotted black edges, whereas the graph spanned by the solid black edges corresponds to the square root H of G.

For the forward direction, suppose (B, k) is a Yes-instance of Biclique Cover. We will show that (G, k + 4) is a Yes-instance of VC-k Root. Note that if B has a biclique cover of size strictly less than k, we can add arbitrary bicliques to this cover and obtain a biclique cover for B of size exactly k. Let C = {C_1, . . . , C_k} be such a biclique cover. We construct the following square root candidate H for G with V(H) = V(G). Add the edges uv, vw, u′v′ and v′w′ to H, and also all the edges between u and X, all the edges between u′ and Y, and all the edges in G[Z]. Finally, for each 1 ≤ i ≤ k, add to H all the edges between z_i and the vertices of C_i.

Claim 4.1. The constructed graph H is indeed a square root of G.

Proof: Let xy ∈ E(G) be such that xy ∉ E(H). If xy = uw, note that uv, vw ∈ E(H). If xy = vx_i for some i, note that uv, ux_i ∈ E(H). If xy = x_ix_j, then ux_i, ux_j ∈ E(H). If xy = uz_i, let x_j be a vertex of C_i and note that ux_j, x_jz_i ∈ E(H). If xy = x_jz_i, let ℓ be such that x_j ∈ C_ℓ and observe that x_jz_ℓ, z_ℓz_i ∈ E(H). Symmetric arguments apply for the edges in G[Y ∪ Z ∪ {u′, v′, w′}]. Finally, if xy = x_iy_j, let C_ℓ be the biclique of C containing the edge x_iy_j and note that x_iz_ℓ, y_jz_ℓ ∈ E(H).

We conclude that (G, k + 4) is a Yes-instance of VC-k Root by Claim 4.1 together with the fact that Z ∪ {u, v, u′, v′} is a vertex cover of H of size k + 4.
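The construction from the proof of Lemma 6 is mechanical to reproduce; a Python sketch of it follows (our own illustration, with u′, v′, w′ renamed u2, v2, w2 to keep identifiers ASCII; B is given by its bipartition and edge set):

```python
def build_vc_root_instance(X, Y, B_edges, k):
    """Build the VC-k Root instance (G, k + 4) from a Biclique Cover instance.
    Returns G as a {vertex: neighbor-set} map over X, Y, Z and six special vertices."""
    Z = [("z", i) for i in range(1, k + 1)]
    specials = ["u", "v", "w", "u2", "v2", "w2"]
    V = list(X) + list(Y) + Z + specials
    adj = {x: set() for x in V}

    def make_clique(vs):
        for a in vs:
            for b in vs:
                if a != b:
                    adj[a].add(b)

    make_clique(list(X) + Z + ["u"])     # X, Z and u form a clique
    make_clique(list(X) + ["v"])
    make_clique(["u", "v", "w"])
    make_clique(list(Y) + Z + ["u2"])    # symmetric part for Y
    make_clique(list(Y) + ["v2"])
    make_clique(["u2", "v2", "w2"])
    for x, y in B_edges:                 # copy the edges of B between X and Y
        adj[x].add(y)
        adj[y].add(x)
    return adj, k + 4
```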
Before we show the reverse direction of the lemma, we state the next three claims, which concern the structure of any square root of the graph G.

Claim 4.2. The edges uv, vw, u′v′ and v′w′ belong to any square root of G.

Proof: Suppose for a contradiction that the graph G has a square root H such that vw ∉ E(H). In this case, it holds that vu, uw ∈ E(H), since u is the only common neighbor of v and w. However, since wx_i ∉ E(G) for every 1 ≤ i ≤ p and wz_j ∉ E(G) for every 1 ≤ j ≤ k, we get ux_i, uz_j ∉ E(H). Therefore, there must exist an induced P_3 in H with endpoints, for instance, u and z_ℓ, for some ℓ. However, since ux_i, uz_j ∉ E(H) for every 1 ≤ i ≤ p and every 1 ≤ j ≤ k and N_G(u) = X ∪ Z ∪ {v, w}, either v or w has to be the middle vertex of the P_3. This is a contradiction, since vz_ℓ, wz_ℓ ∉ E(G).

Now suppose for a contradiction that the graph G has a square root H such that uv ∉ E(H). If there exists ℓ such that ux_ℓ, vx_ℓ ∈ E(H), then we have a contradiction, since this would imply that no edge incident to w can be in H, given that wx_ℓ ∉ E(G). We can then conclude that vw, uw ∈ E(H). We can now reach a contradiction by the same argument as used in the previous paragraph. The claim follows by a symmetric argument for the edges u′v′ and v′w′.

Claim 4.3. The edges {ux_i, u′y_j | 1 ≤ i ≤ p, 1 ≤ j ≤ q} belong to any square root of G.

Proof: Suppose for a contradiction that G has a square root H such that ux_i ∉ E(H) for some 1 ≤ i ≤ p. By Claim 4.2, uv, vw ∈ E(H). This implies that vx_i ∉ E(H), since wx_i ∉ E(G). Since ux_i ∉ E(H) by assumption, there must exist j such that x_j is the middle vertex of a P_3 in H with endpoints v and x_i. However, this is a contradiction, since wx_j ∉ E(G). The claim follows by a symmetric argument for the edges of the form u′y_j.

Claim 4.4. The edges {x_iy_j | 1 ≤ i ≤ p, 1 ≤ j ≤ q} do not belong to any square root of G.

Proof: Suppose for a contradiction that G has a square root H such that x_iy_j ∈ E(H) for some 1 ≤ i ≤ p and 1 ≤ j ≤ q. By Claim 4.3, we have that ux_i ∈ E(H), which is a contradiction since uy_j ∉ E(G).

Now, for the reverse direction of the lemma, assume that G has a square root H that has a vertex cover of size at most k + 4. By Claim 4.4, for every edge of G of the form x_iy_j, it holds that x_iy_j ∉ E(H). This implies that, for every such edge, there exists an induced P_3 in H having x_i and y_j as its endpoints. Since N_G(x_i) ∩ N_G(y_j) = Z, only vertices of Z can be the middle vertices of these paths. For 1 ≤ ℓ ≤ k, let C_ℓ = N_H(z_ℓ) ∩ (X ∪ Y). We will now show that C = {C_1, . . . , C_k} is a biclique cover of B. First, note that since for every edge x_iy_j there exists z_h ∈ Z such that z_hx_i, z_hy_j ∈ E(H), we conclude that x_iy_j is an edge of B[C_h], which implies that C is an edge cover of B. Furthermore, for a given ℓ, since every vertex of C_ℓ is adjacent to z_ℓ in H, G[C_ℓ] is a clique and, therefore, B[C_ℓ] is a biclique. This implies that C is indeed a biclique cover of B of size k, which concludes the proof of the lemma.
From Theorem 3 and Lemma 6 we obtain the following theorem.

Theorem 5. VC-k Root cannot be solved in time 2^{2^{o(k)}} · n^{O(1)} unless ETH is false.

Moreover, from Theorem 4 and Lemma 6 we can also conclude the following.

Theorem 6. VC-k Root does not admit a kernel of size 2^{o(k)} unless P = NP.

Proof. Assume that VC-k Root has a kernel of size 2^{o(k)}. Since VC-k Root is in NP and Biclique Cover is NP-complete, there is an algorithm A that in time O(n^c) reduces VC-k Root to Biclique Cover, where c is a positive constant. Then, combining the reduction from Lemma 6, the kernelization algorithm for VC-k Root and A, we obtain a kernel for Biclique Cover of size (2^{o(k)})^c, that is, subexponential in k. By Theorem 4, this is impossible unless P = NP. Alternatively, we can observe that Chandran et al. [2], in fact, proved a stronger claim: their proof shows that Biclique Cover does not admit a compression (we refer to [5] for the definition of the notion) of size subexponential in k to any problem in NP.
5 Distance-k-to-Clique Square Root
In this section, we consider the complexity of testing whether a graph admits a square root of bounded deletion distance to a clique. More formally, we consider the following problem:
Distance-k-to-Clique Square Root
Input: A graph G and a nonnegative integer k.
Task: Decide whether there is a square root H of G such that H − S is a complete graph for a set S on k vertices.
We give an algorithm running in FPT time parameterized by k, the size of the deletion set. That is, we prove the following theorem.

Theorem 7. Distance-k-to-Clique Square Root can be solved in time 2^{2^{O(k)}} · n^{O(1)}.

Proof. Let (G, k) be an instance of Distance-k-to-Clique Square Root. We start by computing the number of classes of true twins in G. If G has at least 2^k + k + 1 classes of true twins, then G is a No-instance of the problem, as we show in the following claim.

Claim 7.1. Let G be a graph and H be a square root of G such that H − S is a complete graph, with |S| = k. Let T_1, . . . , T_t be the partition of V(G) into classes of true twins. Then t ≤ 2^k + k.

Proof: Let C = V(H) \ S. Note that if u, v ∈ C and N_H(u) ∩ S = N_H(v) ∩ S, then u and v are true twins in G. Thus, we have at most 2^k distinct classes of true twins among the vertices of C, and at most k among the vertices of S.

Hence, from now on we assume that G has at most 2^k + k classes of true twins. We exhaustively apply the following rule in order to decrease the size of each class of true twins in G.

Rule 7.1. If |T_i| ≥ 2^k + k + 1 for some i, delete a vertex from T_i.
The following claim shows that Rule 7.1 is safe.
Claim 7.2. If G′ is the graph obtained from G by the application of Rule 7.1, then (G, k) and (G′, k) are equivalent instances of Distance-k-to-Clique Square Root.

Proof: Let G′ = G − v for the vertex v ∈ T_i deleted by the rule. First assume (G, k) is a Yes-instance of Distance-k-to-Clique Square Root and let H be a square root of G that is a solution to this problem. Since |T_i| ≥ 2^k + k + 1, by the pigeonhole principle there are two vertices x, y ∈ T_i such that, in H, x, y ∉ S and N_H[x] ∩ S = N_H[y] ∩ S. That is, x and y are true twins in H as well. Thus, H − x is a square root for G − x such that (H − x) − S is a complete graph. Since G′ = G − v and G − x are isomorphic, we have that (G′, k) is a Yes-instance as well.
Now assume (G′, k) is a Yes-instance of Distance-k-to-Clique Square Root and let H′ be a square root of G′ that is a solution to the problem. Note that T_i \ {v} is a true twin class of G′ of size at least 2^k + k. Thus, there exists u ∈ T_i \ {v} such that, in H′, u ∉ S. We can add v to H′ as a true twin of u and obtain a square root H for G such that H − S is a complete graph.
After exhaustive application of Rule 7.1, we obtain an instance (G′, k) such that G′ contains at most (2^k + k)^2 vertices, since it has at most 2^k + k twin classes, each of size at most 2^k + k. Moreover, (G′, k) and (G, k) are equivalent instances of Distance-k-to-Clique Square Root. We can now check by brute force whether (G′, k) is a Yes-instance of the problem. Since G′ has 2^{O(k)} vertices, this can be done in time 2^{2^{O(k)}} · n^{O(1)}, which concludes the proof of the theorem.
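Claim 7.1 and Rule 7.1 together give a very small kernel; a Python sketch of the reduction (true_twin_classes as in the earlier sketch; the brute-force check on the shrunken instance is omitted):

```python
def kernelize_clique_root(adj, k):
    """Apply the twin-based reduction for Distance-k-to-Clique Square Root."""
    classes = true_twin_classes(adj)
    if len(classes) > 2 ** k + k:          # Claim 7.1: no solution can exist
        return None
    bound = 2 ** k + k                     # Rule 7.1: shrink each class to this size
    for cls in classes:
        for v in list(cls)[bound:]:        # delete all but `bound` representatives
            for u in adj.pop(v):
                if u in adj:
                    adj[u].discard(v)
    return adj                             # at most (2^k + k)^2 vertices remain
```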
6 Conclusion
In this work, we showed that Distance-k-to-(pK_1 + qK_2) Square Root and its variants can be solved in 2^{2^{O(k)}} · n^{O(1)} time. We also proved that the double-exponential dependence on k is unavoidable under the Exponential Time Hypothesis, that is, the problem cannot be solved in 2^{2^{o(k)}} · n^{O(1)} time unless ETH fails. We also proved that the problem does not admit a kernel of size subexponential in k unless P = NP. We believe that it would be interesting to further investigate the parameterized complexity of H-Square Root for sparse graph classes H under structural parameterizations. The natural candidates are the Distance-k-to-Linear-Forest Square Root and Feedback-Vertex-Set-k Square Root problems, whose tasks are to decide whether the input graph has a square root that can be made a linear forest, that is, a union of paths, and a forest, respectively, by (at most) k vertex deletions. Recall that the existence of an FPT algorithm for H-Square Root does not imply the same for subclasses of H. However, it can be noted that the reduction from Lemma 6 implies that our complexity lower bounds still hold and, therefore, we cannot expect these problems to be easier.
Parameterized complexity of H-Square Root is widely open for other, not necessarily sparse, graph classes. We considered the Distance-k-to-Clique Square Root problem and proved that it is FPT when parameterized by k. What can be said if we ask for a square root that is at deletion distance (at most) k from a cluster graph, that is, a disjoint union of cliques? We believe that our techniques allow us to show that this problem is FPT when parameterized by k if the number of cliques is a fixed constant. Is the problem FPT without this constraint?
| [] |
[
"The fourth root of the Kogut-Susskind determinant via infinite component fields",
"The fourth root of the Kogut-Susskind determinant via infinite component fields"
] | [
"H Neuberger [email protected] \nDepartment of Physics and Astronomy\nRutgers University\n08855PiscatawayNJ\n"
] | [
"Department of Physics and Astronomy\nRutgers University\n08855PiscatawayNJ"
] | [] | An example of interpolation by means of local field theories between the case of normal Kogut-Susskind fermions and the case of keeping just the fourth root of the Kogut-Susskind determinant is given. For the fourth root trick to be a valid approximation certain limits need to be smooth. The question about the validity of the fourth root trick is not resolved, only cast into a local field theoretical framework. | 10.1103/physrevd.70.097504 | [
"https://export.arxiv.org/pdf/hep-lat/0409144v1.pdf"
] | 62,781,301 | hep-lat/0409144 | 3a86d597c9c030129609073ad0d3e81a0418de9f |
The fourth root of the Kogut-Susskind determinant via infinite component fields
23 Sep 2004
H Neuberger [email protected]
Department of Physics and Astronomy
Rutgers University
Piscataway, NJ 08855
The fourth root of the Kogut-Susskind determinant via infinite component fields
23 Sep 2004
An example of interpolation by means of local field theories between the case of normal Kogut-Susskind fermions and the case of keeping just the fourth root of the Kogut-Susskind determinant is given. For the fourth root trick to be a valid approximation certain limits need to be smooth. The question about the validity of the fourth root trick is not resolved, only cast into a local field theoretical framework.
1. Introduction.
Recent simulations of QCD [1] have been claimed to correctly include sea quark effects by eliminating the extra tastes coming with Kogut-Susskind lattice fermions with the help of the so-called "fourth root trick". This trick amounts to replacing the local lattice field theory, which would include all tastes, with one in which the determinant of the gauge dependent Kogut-Susskind fermion matrix, $K_s$, is taken at the power $\frac{1}{4}$. The objective of this letter is to propose a class of embeddings of the 4D lattice fermions into six dimensions, four of which are the original lattice axes. These embeddings can be deformed by a parameter of mass dimension, $\Lambda$, so that they look local from the four dimensional viewpoint, so long as $\Lambda$ is of the order of the inverse four dimensional lattice spacing $a$. Formally, if one takes $\Lambda \to 0$ at fixed $a$, one recovers the fourth root trick. If one takes $\Lambda \to \infty$ at fixed $a$, one recovers a local theory with unmolested four tastes per species. Whatever scenario one has in mind for the validity of the fourth root trick, it seems plausible that it should boil down to some robustness statement concerning the combined limits $a \to 0$ and $\Lambda \to 0$.
2. General structure.
The basic idea is a generalization of the work of Slavnov and Frolov [2]. As is well known, one could have proceeded from this work alone to construct the overlap Dirac operator [3], and below I try to follow some of the steps that would have achieved this. However, the problem we are looking at here is substantially different, and by no means is it obvious what the final conclusion (if any) about the validity of the fourth root trick would end up being. This letter is limited in scope to merely setting the problem up in the language of infinite component Fermi fields.
We replace each original Kogut-Susskind fermion pair $\bar\chi, \chi$ by an infinite tower $\bar\chi_n^\alpha, \chi_n^\alpha$, labeled by indices $n, \alpha$, with the range of $\alpha$ given by a "degeneracy" $g_n$ for any given $n$; $n$ runs over all positive integers. We are looking for a set of integers $g_n$ for which, formally at least, the following holds:
$${\det}^{\frac{1}{4}}(K_s) = \prod_n {\det}^{\,g_n}(K_s) \qquad (1)$$
Neglecting questions of absolute convergence, we have the requirement
$$\sum_n g_n = \frac{1}{4} \qquad (2)$$
Obviously, with the $g_n$ all positive integers we can't have even conditional convergence with the desired result. However, if we allow the statistics of the fields to vary among the members of the tower, alternating signs might make (2) hold under conditional convergence. This is easily achieved by
$$\frac{1}{(1+x)^2} = \sum_{n=1}^{\infty} n(-1)^{n-1} x^{n-1}, \qquad (3)$$
and setting x = 1. We can view the members of the towers as components of a vector in an infinite Hilbert space. Each component is a vector in itself, representing an ordinary Kogut-Susskind fermion. The infinite Hilbert space is defined as the Hilbert space associated with a two dimensional harmonic oscillator, whose Hamiltonian is H.
$$H = \frac{1}{2}\,\vec p^{\;2} + \frac{1}{2}\,\vec q^{\;2} = -\frac{1}{2}\left(\frac{\partial}{\partial \vec q}\right)^{2} + \frac{1}{2}\,\vec q^{\;2} = a^\dagger a + b^\dagger b + 1,$$
$$[a^\dagger, a] = 1 = [b^\dagger, b], \qquad [a, b] = [a, b^\dagger] = 0,$$
$$P a P = -a, \qquad P b P = -b, \qquad [H, P] = 0,$$
$$H\,|n;\alpha\rangle = n\,|n;\alpha\rangle, \qquad n \ge 1, \qquad g_n = n,$$
$$a\,|1\rangle = b\,|1\rangle = 0, \qquad P\,|n;\alpha\rangle = (-1)^{n-1}\,|n;\alpha\rangle. \qquad (4)$$
$P$ is the parity operator. We now declare any component consisting of an ordinary Kogut-Susskind structure to obey Fermi statistics if it has positive parity and Bose statistics if its parity is negative. A component is fermionic if $n$ is odd and bosonic if $n$ is even. Thus, the bosonic components can be fully paired up, providing a way to make the path integrals over them convergent in spite of $K_s$ having a spectrum that includes positive and negative values. The ground state of $H$ has eigenvalue 1, is non-degenerate, and labels a field of fermionic character. At a fixed $n > 1$, the statistics is the same for all $g_n$ vector components labeled by $\alpha$. If one deforms $H$, the deformation should preserve parity so that fields corresponding to different statistics do not get mixed. We denote the inner product in the internal Hilbert space by $(\cdot\,,\cdot)$. With the fermion action,
$$(\bar\chi, K_s \chi) = \sum_{n=1}^{\infty} \sum_{\alpha=1}^{g_n} \bar\chi_n^\alpha\, K_s\, \chi_n^\alpha \qquad (5)$$
we formally have a local action where the entire contribution of fermion loops is given by the fourth root of the determinant of $K_s$.
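As a quick sanity check of this bookkeeping (our own sketch, not part of the original letter), one can enumerate the two-dimensional oscillator states $|n_a, n_b\rangle$ to confirm that the level $n = n_a + n_b + 1$ indeed has degeneracy $g_n = n$ and parity $(-1)^{n-1}$, and evaluate partial sums of the series (3) near $x = 1$ to watch the conditionally convergent (Abel) value $\frac{1}{4}$ of (2) emerge:

```python
from collections import Counter

# Degeneracies and parities of H = a†a + b†b + 1 on the basis |n_a, n_b>
Nmax = 8
deg, par = Counter(), {}
for na in range(Nmax):
    for nb in range(Nmax):
        n = na + nb + 1                    # eigenvalue of H
        if n <= Nmax:
            deg[n] += 1
            par[n] = (-1) ** (na + nb)     # eigenvalue of P
for n in range(1, Nmax + 1):
    assert deg[n] == n and par[n] == (-1) ** (n - 1)

# Abel summation: sum_n n(-1)^(n-1) x^(n-1) -> 1/4 as x -> 1-
def partial_sum(x, terms=200000):
    s, power = 0.0, 1.0
    for n in range(1, terms + 1):
        s += n * (-1) ** (n - 1) * power
        power *= x
    return s

for x in (0.9, 0.99, 0.999):
    print(x, partial_sum(x), 1.0 / (1.0 + x) ** 2)
```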
3. Regularization.
We need to control the infinite number of fields; so far the locality is a mere illusion, as we have an infinite number of massless fields from the four dimensional viewpoint. This can be done by giving all the higher members of the towers a large mass, of the order of the ultraviolet cutoff affecting the four dimensional part of the fermion momenta, $\frac{1}{a}$:
$$K_s \to K_s + \Lambda f(H) \qquad (6)$$
where $f(x) > c > 0$ for any $x = 2, 3, 4, \ldots$. One can expand the sea quark contribution in Feynman diagrams, which would now also contain a trace over the eigenstates of $H$.
The convergence of that trace would depend on the number of attached gauge field legs and on the asymptotic behavior of $f(x)$ as $x \to \infty$. It is clear that demanding that $f(x)$ behave asymptotically as $x^\kappa$ will make all diagrams converge if $\kappa$ is a large enough integer; for example, $\kappa = 4$ is already an overkill, and $\kappa = 2$ seems sufficient. The extra two $\vec q$ dimensions are seen only by the fermions; other fields are oblivious of them. One could try to make up an operator $H$ which creates a pointlike defect at the origin of $\vec q$ space, so that low energy-momentum fermionic modes are restricted to it. In that case it would suffice to pick $f(x) \to c > 0$ as $x \to \infty$. However, this is not guaranteed to eliminate all ambiguities, and some additional interpretation might be needed.
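As a crude numerical illustration of this power counting (again our own sketch, under the simplifying assumption that a diagram with $p$ heavy-tower propagators contributes like $\sum_n g_n (\Lambda f(n))^{-p}$ with $g_n = n$ and $f(n) = n^\kappa$, and ignoring the four dimensional loop integrations that also aid convergence), one can watch partial sums either saturate or keep growing with the cutoff:

```python
# Probe s = sum_{n>=2} n / (lam * n**kappa)**p for growth vs. saturation
def probe(p, kappa, N, lam=1.0):
    return sum(n / (lam * n ** kappa) ** p for n in range(2, N + 1))

for kappa in (2, 4):
    for p in (1, 2):
        s1, s2 = probe(p, kappa, 10 ** 4), probe(p, kappa, 10 ** 5)
        print(f"kappa={kappa}, p={p}: N=1e4 -> {s1:.4f}, N=1e5 -> {s2:.4f}")
# kappa=2, p=1 keeps growing (logarithmically); the other cases saturate,
# illustrating why a sufficiently large kappa tames the internal trace.
```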
For a finite $\Lambda$, the limit $a \to 0$ will most likely produce a theory with an undesirable four-fold degeneracy for each fermion species if $f(1) = 0$. Formally, if we take $\Lambda \to 0$ at fixed $a$, we get a model that employs the fourth root trick. These observations reduce the problem to an investigation of the combined limits $a \to 0$ and $\Lambda \to 0$.
4. Discussion.
It is sometimes argued that the fourth root trick is valid because (1) it agrees with experiment, and (2) it agrees with low energy predictions about systems with approximate Goldstone bosons. Point (1) is well taken, and the improvement of the agreement between lattice data and experimental data, in particular in cases where numerical data obtained from quenched simulations showed distinct differences from experiment, is notable. However, eventually we would like numerical QCD to be so reliable that when one detects a numerically significant discrepancy between its prediction and experimental data, one can interpret it as evidence of new physics. In other words, we should not rely on experimental data when assessing our calculations. Point (2) is not valid, as far as the author can see: one could have a regime where low energy effective Lagrangians describe the theory well even if the ultraviolet completion of the theory is non-local. At a more practical level, the number of free parameters in the effective Lagrangian (even ignoring, unjustifiably, some lattice Lorentz-violating terms) is too large to make the agreement credible to such a degree that support for far reaching features can be abstracted from it.
The fourth root trick has recently been criticized in [4], where an operator corresponding to $K_s^{1/4}$ was considered. While this is one possible way to get to a theory where the fermion contribution to the partition function is given by the fourth root of the determinant of $K_s$, it does not create an option for the system to become local, and therefore the lack of locality one finds does not necessarily imply that something is wrong with the fourth root trick as used. To substantially increase the amount of intuitive doubt about the fourth root trick, one would need to establish non-locality in a scheme that induces the system to select its true low energy modes in a smooth manner.
Thus, it seems that the more theoretical and direct approach outlined above could be of use to those who feel strongly that the fourth root trick does not mislead us. Alternatively, although a priori it would seem to be substantially more difficult to establish a negative, the present approach could yield a proof that the fourth root trick never allows locality to be restored in the continuum limit. Obviously, the preferred outcome would be a positive one, and one would hope that in the future the proponents of the fourth root trick would produce a proof of its validity using the above approach, or possibly some variation of it. The mere absence of a real proof that the fourth root trick is inadequate will never provide satisfactory support for relying on numerical QCD carried out employing the fourth root trick.
5. Acknowledgments. This research was partially supported by the DOE under grant number DE-FG02-01ER41165 at Rutgers University.
[1] C. T. H. Davies et al., Phys. Rev. Lett. 92 (2004) 022001; C. Aubin et al., Phys. Rev. D70 (2004) 031504; C. Aubin et al., hep-lat/0407028.
[2] S. A. Frolov and A. A. Slavnov, Phys. Lett. B 309 (1993) 344.
[3] H. Neuberger, Phys. Lett. B 417 (1998) 141.
[4] B. Bunk, M. Della Morte, K. Jansen, and F. Knechtli, Nucl. Phys. B697 (2004) 343.
| [] |
[
"Substrate conformal imprint fabrication process of synthetic antiferromagnetic nanoplatelets",
"Substrate conformal imprint fabrication process of synthetic antiferromagnetic nanoplatelets",
"Substrate conformal imprint fabrication process of synthetic antiferromagnetic nanoplatelets",
"Substrate conformal imprint fabrication process of synthetic antiferromagnetic nanoplatelets"
] | [
"J Li \nDepartment of Applied Physics\nEindhoven University of Technology\nP.O. Box 5135600 MBEindhovenThe Netherlands\n",
"P Van Nieuwkerk \nDepartment of Applied Physics\nEindhoven University of Technology\nP.O. Box 5135600 MBEindhovenThe Netherlands\n",
"M A Verschuuren \nSCIL nanoimprint solutions\nPhilips Research Laboratories\nEindhovenThe Netherlands\n",
"B Koopmans \nDepartment of Applied Physics\nEindhoven University of Technology\nP.O. Box 5135600 MBEindhovenThe Netherlands\n",
"R Lavrijsen \nDepartment of Applied Physics\nEindhoven University of Technology\nP.O. Box 5135600 MBEindhovenThe Netherlands\n",
"J Li \nDepartment of Applied Physics\nEindhoven University of Technology\nP.O. Box 5135600 MBEindhovenThe Netherlands\n",
"P Van Nieuwkerk \nDepartment of Applied Physics\nEindhoven University of Technology\nP.O. Box 5135600 MBEindhovenThe Netherlands\n",
"M A Verschuuren \nSCIL nanoimprint solutions\nPhilips Research Laboratories\nEindhovenThe Netherlands\n",
"B Koopmans \nDepartment of Applied Physics\nEindhoven University of Technology\nP.O. Box 5135600 MBEindhovenThe Netherlands\n",
"R Lavrijsen \nDepartment of Applied Physics\nEindhoven University of Technology\nP.O. Box 5135600 MBEindhovenThe Netherlands\n"
] | [
"Department of Applied Physics\nEindhoven University of Technology\nP.O. Box 5135600 MBEindhovenThe Netherlands",
"Department of Applied Physics\nEindhoven University of Technology\nP.O. Box 5135600 MBEindhovenThe Netherlands",
"SCIL nanoimprint solutions\nPhilips Research Laboratories\nEindhovenThe Netherlands",
"Department of Applied Physics\nEindhoven University of Technology\nP.O. Box 5135600 MBEindhovenThe Netherlands",
"Department of Applied Physics\nEindhoven University of Technology\nP.O. Box 5135600 MBEindhovenThe Netherlands",
"Department of Applied Physics\nEindhoven University of Technology\nP.O. Box 5135600 MBEindhovenThe Netherlands",
"Department of Applied Physics\nEindhoven University of Technology\nP.O. Box 5135600 MBEindhovenThe Netherlands",
"SCIL nanoimprint solutions\nPhilips Research Laboratories\nEindhovenThe Netherlands",
"Department of Applied Physics\nEindhoven University of Technology\nP.O. Box 5135600 MBEindhovenThe Netherlands",
"Department of Applied Physics\nEindhoven University of Technology\nP.O. Box 5135600 MBEindhovenThe Netherlands"
] | [] | Methods to fabricate and characterize monodisperse magnetic nanoplatelets for fluid/bio-based applications based on spintronic thin-film principles are a challenge. This is due to the required top-down approach where the transfer of optimized blanket films to free particles in a fluid while preserving the magnetic properties is an uncharted field. Here, we explore the use of substrate conformal imprint lithography (SCIL) as a fast and cost-effective fabrication route. We analyze the size distribution of nominal 1.8 µm and 120 nm diameter platelets and show the effect of the fabrication steps on the magnetic properties which we explain through changes in the dominant magnetization reversal mechanism as the size decreases. We show that SCIL allows for efficient large-scale platelet fabrication and discuss how application-specific requirements can be solved via process and material engineering.Magnetic particles have been widely used in bioapplications due to their ability to mechanically manipulate their surroundings remotely via externally applied magnetic fields. In particular, the utilization of magnetic torques induced via an externally rotating magnetic field is of interest for applications such as micro mixing 1 , cancer treatment 2,3 and the manipulation of cells 4 . Superparamagnetic nanoparticles (SPNs) have traditionally been used in torque-related applications 5 . However, due to their limited magnetic anisotropy and spherical shape, the translation of magnetic torque to mechanical torque is limited. Despite these limitations, particles with enhanced shape or magnetic anisotropy have been studied, e.g. NiFe nanodiscs with a vortex spin configuration 6,7 , and magnetic nanorods 8 . Synthetic antiferromagnetic (SAF) nanoplatelets (NPs) with high perpendicular magnetic anisotropy (PMA) are among the most promising candidates 9-12 .SAF NPs with PMA typically consist of a multilayer stack: Ta/Pt/Co/Pt/Ru/Pt/Co/Pt. The strong hybridization of the 3d-5d orbitals at Co and Pt interfaces induces a large PMA 13,14 . This large anisotropy and the fact that PMA induces a hardplane anisotropy (the plane of the disc) and an easy-axis perpendicular to the the disc, are the key factors for effective magnetic-mechanical torque transduction 12 . The two ferromagnetic layers are antiferromagnetically coupled by the Ru layer through the Ruderman-Kittel-Kasuya-Yoshida (RKKY) interaction 15-17 . The SAF stack exhibits a zero net magnetic moment at zero applied magnetic field, preventing the aggregation of particles in liquid at zero field; a key requisite for applications. The Pt layers around the Ru layer tune the RKKY interaction and increase the PMA 18 . The high tunability of the PMA and RKKY and the freedom in shape and size of the platelets using top-down lithography methods make SAF NPs fascinating for remotely induced nanoscale torque applications.However, one of the major issues is to fabricate monodisperse SAF NPs with high throughput and low cost. UVlithography with a lift-off process has been reported to pattern 1.8 µm diameter SAF NPs 10 . However, due to the diffraction limit, it is hard to reach the sub-micron meter with conven-tional UV-lithography. Nanoimprint were used to fabricate SAF NPs with in-plane anisotropy at much smaller diameters down to 122 nm19,20. Although smaller size can be achieved, the additive lift-off process has its native problem, i.e. it is difficult to obtain a uniform thickness when the critical dimensions reach the resist thickness. 
As the PMA-SAF system requires Ångstrom scale control of the layer thickness to stabilize the magnetic behavior, such additive methods cannot be used. Hence, a subtractive method is preferred where one can start with a blanket film on a wafer. Recently, nanosphere lithography; where polystyrene (PS) beads were used as hard masks, combined with an ion milling process was reported to produce NPs with different sizes 21-23 . Nevertheless, this method suffers from non-uniform PS bead size and moreover, the yield depends on the distribution of the beads over a large area.In this paper, we present a subtractive method based on substrate conformal imprint lithography (SCIL) to fabricate monodisperse SAF NPs. The stamp used in SCIL is composed of two rubber layers on a thin glass support (seeFig. S1). The SCIL technique is based on a difference between the inplane stiffness of the glass which avoids pattern deformation over large areas, while the out-of-plane flexibility from the rubber layers allows conformal contact to underlying surface features 24 . With these properties, SCIL can be used to pattern large wafers up to 300 mm while keeping a uniform size of the features. In addition, this technique can be used to fabricate NPs from the nanometer to micrometer range and different shapes of the platelets can be thought off as the stamp used for the process can be custom made. Here we focus on 120 nm and 1.8 µm diameter disc-shaped SAF NPs and a subtractive method using Ar ion beam milling (IBM). The magnetic properties of the discs after fabrication are studied and compared to literature, indicating that the SCIL fabrication route is a good candidate for large-scale PMA-SAF production.The SCIL based fabrication process for the PMA-SAF platelets is outlined inFig. 1(see supplementary material section 1 for details). We start with depositing a 2" Si wafer with a 30 nm sacrificial Cu layer and the SAF stack using DC magnetron sputtering. The basic SAF stack is arXiv:2206.15320v1 [physics.app-ph] 30 Jun 2022 | 10.1063/5.0100657 | [
"https://export.arxiv.org/pdf/2206.15320v1.pdf"
] | 250,144,814 | 2206.15320 | eb51fc2b0844b7a2b3c20e21550f7bfc1c78a799 |
Substrate conformal imprint fabrication process of synthetic antiferromagnetic nanoplatelets
J Li
Department of Applied Physics
Eindhoven University of Technology
P.O. Box 513, 5600 MB Eindhoven, The Netherlands
P Van Nieuwkerk
Department of Applied Physics
Eindhoven University of Technology
P.O. Box 513, 5600 MB Eindhoven, The Netherlands
M A Verschuuren
SCIL nanoimprint solutions
Philips Research Laboratories
Eindhoven, The Netherlands
B Koopmans
Department of Applied Physics
Eindhoven University of Technology
P.O. Box 513, 5600 MB Eindhoven, The Netherlands
R Lavrijsen
Department of Applied Physics
Eindhoven University of Technology
P.O. Box 513, 5600 MB Eindhoven, The Netherlands
Substrate conformal imprint fabrication process of synthetic antiferromagnetic nanoplatelets
Methods to fabricate and characterize monodisperse magnetic nanoplatelets for fluid/bio-based applications based on spintronic thin-film principles are a challenge. This is due to the required top-down approach where the transfer of optimized blanket films to free particles in a fluid while preserving the magnetic properties is an uncharted field. Here, we explore the use of substrate conformal imprint lithography (SCIL) as a fast and cost-effective fabrication route. We analyze the size distribution of nominal 1.8 µm and 120 nm diameter platelets and show the effect of the fabrication steps on the magnetic properties which we explain through changes in the dominant magnetization reversal mechanism as the size decreases. We show that SCIL allows for efficient large-scale platelet fabrication and discuss how application-specific requirements can be solved via process and material engineering.
Magnetic particles have been widely used in bioapplications due to their ability to mechanically manipulate their surroundings remotely via externally applied magnetic fields. In particular, the utilization of magnetic torques induced via an externally rotating magnetic field is of interest for applications such as micro mixing 1 , cancer treatment 2,3 and the manipulation of cells 4 . Superparamagnetic nanoparticles (SPNs) have traditionally been used in torque-related applications 5 . However, due to their limited magnetic anisotropy and spherical shape, the translation of magnetic torque to mechanical torque is limited. Despite these limitations, particles with enhanced shape or magnetic anisotropy have been studied, e.g. NiFe nanodiscs with a vortex spin configuration 6,7 , and magnetic nanorods 8 . Synthetic antiferromagnetic (SAF) nanoplatelets (NPs) with high perpendicular magnetic anisotropy (PMA) are among the most promising candidates [9][10][11][12] .
SAF NPs with PMA typically consist of a multilayer stack: Ta/Pt/Co/Pt/Ru/Pt/Co/Pt. The strong hybridization of the 3d-5d orbitals at the Co and Pt interfaces induces a large PMA 13,14 . This large anisotropy, together with the fact that PMA creates a hard plane (the plane of the disc) and an easy axis perpendicular to the disc, is the key factor for effective magnetic-mechanical torque transduction 12 . The two ferromagnetic layers are antiferromagnetically coupled by the Ru layer through the Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction [15][16][17] . The SAF stack exhibits a zero net magnetic moment at zero applied magnetic field, preventing the aggregation of particles in liquid at zero field; a key requisite for applications. The Pt layers around the Ru layer tune the RKKY interaction and increase the PMA 18 . The high tunability of the PMA and RKKY and the freedom in shape and size of the platelets using top-down lithography methods make SAF NPs fascinating for remotely induced nanoscale torque applications.
However, one of the major issues is to fabricate monodisperse SAF NPs with high throughput and low cost. UV lithography with a lift-off process has been reported to pattern 1.8 µm diameter SAF NPs 10 . However, due to the diffraction limit, it is hard to reach the sub-micrometer range with conventional UV lithography. Nanoimprint lithography was used to fabricate SAF NPs with in-plane anisotropy at much smaller diameters, down to 122 nm 19,20 . Although smaller sizes can be achieved, the additive lift-off process has an inherent problem, i.e., it is difficult to obtain a uniform thickness when the critical dimensions reach the resist thickness. As the PMA-SAF system requires Ångstrom-scale control of the layer thickness to stabilize the magnetic behavior, such additive methods cannot be used. Hence, a subtractive method is preferred, where one can start with a blanket film on a wafer. Recently, nanosphere lithography, where polystyrene (PS) beads are used as hard masks, combined with an ion milling process, was reported to produce NPs with different sizes [21][22][23] . Nevertheless, this method suffers from non-uniform PS bead sizes and, moreover, the yield depends on the distribution of the beads over a large area.
In this paper, we present a subtractive method based on substrate conformal imprint lithography (SCIL) to fabricate monodisperse SAF NPs. The stamp used in SCIL is composed of two rubber layers on a thin glass support (see Fig. S1). The SCIL technique exploits the contrast between the in-plane stiffness of the glass, which avoids pattern deformation over large areas, and the out-of-plane flexibility of the rubber layers, which allows conformal contact to underlying surface features 24 . With these properties, SCIL can be used to pattern large wafers up to 300 mm while keeping a uniform size of the features. In addition, this technique can be used to fabricate NPs from the nanometer to the micrometer range, and different shapes of the platelets are possible since the stamp used for the process can be custom made. Here we focus on 120 nm and 1.8 µm diameter disc-shaped SAF NPs and a subtractive method using Ar ion beam milling (IBM). The magnetic properties of the discs after fabrication are studied and compared to literature, indicating that the SCIL fabrication route is a good candidate for large-scale PMA-SAF production.
The SCIL-based fabrication process for the PMA-SAF platelets is outlined in Fig. 1 (see supplementary material section 1 for details). We start by depositing on a 2" Si wafer a 30 nm sacrificial Cu layer and the SAF stack using DC magnetron sputtering. The basic SAF stack is [Ta(4)/Pt(2)/CoB(0.8)/Pt(0.3)/Ru(0.8)/Pt(0.3)/CoB(0.8)/Pt(2)], with thicknesses in nanometers. For the 1.8 µm SAF NPs we use 5 repetitions of the basic stack, and for the 120 nm NPs we use one repetition. Then, we spin-coat the SCIL sol-gel resist and manually imprint the pillar structure using a custom SCIL imprint station, followed by a sol-gel dependent hot plate bake and stamp removal. After transfer, the masks are etched by selective reactive ion etching (RIE) to open the area around the pillars. The metal stack is then etched by a non-selective Ar ion beam milling (IBM) step, followed by a buffered HF (BHF) dip to remove the residual sol-gel resist on top of the nanoplatelets. During the IBM process, re-deposition on the masks can cause irregular side walls to grow around the NPs (see Fig. S2(e-f)). To remove the re-deposited material, the sample is immersed in deionized (DI) water and sonicated for 20 minutes. Finally, the NPs are released into solution by dissolving the Cu layer in a 1.5% CuSO$_4$-10% ammonia solution 19 .
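As a small bookkeeping aid (our own sketch; the stack string is taken from the text, while treating the full unit as the repeat block is a simplifying assumption), the layer notation can be parsed to tabulate thicknesses:

```python
import re

# Parse "Element(thickness)" tokens; thicknesses are in nanometers
stack = "Ta(4)/Pt(2)/CoB(0.8)/Pt(0.3)/Ru(0.8)/Pt(0.3)/CoB(0.8)/Pt(2)"
layers = [(m.group(1), float(m.group(2)))
          for m in re.finditer(r"([A-Za-z]+)\(([\d.]+)\)", stack)]
total = sum(t for _, t in layers)
print(layers)
print(f"one repetition: {total:.1f} nm; naive 5x total (1.8 um NPs): {5 * total:.0f} nm")
```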
The patterns of the 1.8 µm and 120 nm diameter NPs after nanoimprint, before release, and after release were observed using scanning electron microscopy (SEM), as shown in Fig. 2. More details of the released sample preparation can be found in supplementary material section 2.A. After imprinting, monodisperse disc-shaped patterns are transferred successfully for both the 1.8 µm and 120 nm diameter NPs, as shown in Fig. 2(a) and 2(e). From Fig. 2(b-c) and Fig. 2(f-g), we see that the disc patterns are transferred into the metallic layer with uniform size after the different etching processes, and that the NPs can be released without damaging their shape. The extracted sizes of the SAF NPs are shown in Fig. 2(d) and Fig. 2(h): $1.88 \pm 0.02$ µm and $123.3 \pm 3.3$ nm, indicating a highly reproducible SCIL pattern transfer over the full 2" wafer area, with relative size spreads of approximately 1.1% and 2.6% (see supplementary material section 2.B for details of the size distribution calculation).
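The quoted spreads follow from simple statistics over the measured diameters; a minimal sketch with synthetic data standing in for the actual SEM measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic stand-in for diameters measured from SEM images (nm)
d = rng.normal(loc=123.3, scale=3.3, size=500)
mean, std = d.mean(), d.std(ddof=1)
print(f"{mean:.1f} +/- {std:.1f} nm, relative spread {100 * std / mean:.1f}%")
# ~123.3 +/- 3.3 nm and ~2.6%, as quoted for the 120 nm platelets
```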
Here, the stamps used have packing densities of 34% and 11% for the 1.8 µm and 120 nm NPs, respectively (see supplementary material section 1.F for details). This packing density is not as high as the reported values of 50% for 1 µm discs and 31% for 100 nm discs fabricated through nanosphere lithography 25 . However, the yield of SCIL, which is determined by the specific stamp used for the imprint, can easily be increased by using a higher-packed mask and is not further addressed here.
FIG. 2. SEM images of the 1.8 µm diameter and 120 nm diameter SAF NPs (a, e) after imprinting, (b, f) before release, and (c, g) after release and dried on a Si substrate. The insets of (c) and (g) are TEM images of 1.8 µm and 120 nm diameter SAF NPs. (d) and (h) show the size distributions of the released 1.8 µm diameter and 120 nm diameter SAF nanoplatelets, calculated from NPs before release.

To investigate the change in magnetic response due to the fabrication, the hysteresis loops of the 1.8 µm and 120 nm SAF NPs were measured by SQUID magnetometry, as shown in Fig. 3 and Fig. 4. The left and right columns of the figures show the hysteresis loops measured with the applied magnetic field perpendicular (easy axis) and parallel (hard plane) to the film plane, respectively. To obtain the minor loop, the samples were first saturated in a positive field, then the magnetic
field was decreased to zero and swept back to the positive saturation field. From the minor loop, the RKKY coupling field ($\mu_0 H_{rkky}$) is defined as $\frac{\mu_0 H_1 + \mu_0 H_2}{2}$ and the coercivity ($\mu_0 H_c$) is defined as $\frac{\mu_0 H_2 - \mu_0 H_1}{2}$, as shown in Fig. 3(b) 18 .
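In code, the two minor-loop switching fields map onto the coupling and coercive fields as follows (a sketch of ours; the field values are illustrative, chosen to reproduce the patterned-film numbers quoted below):

```python
# Minor-loop switching fields in tesla (illustrative values)
mu0_H1, mu0_H2 = 0.063, 0.267

mu0_H_rkky = (mu0_H1 + mu0_H2) / 2   # RKKY coupling field
mu0_H_c = (mu0_H2 - mu0_H1) / 2      # coercivity
print(f"mu0*H_rkky = {1e3 * mu0_H_rkky:.0f} mT, mu0*H_c = {1e3 * mu0_H_c:.0f} mT")
# -> 165 mT and 102 mT
```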
Let us first discuss the magnetic properties of the 1.8 µm NPs. The as-deposited blanket thin film is shown in Fig. 3(a). At a low magnetic field, the total magnetization is approximately zero, which is expected from the antiferromagnetic coupling of the top and bottom CoB layers in the basic stack and the nearly equal magnetic moments of the two CoB layers. A small remnant moment at zero field can be observed in the inset of Fig. 3(a); this is due to a slight thickness difference of the CoB layers during sequential growth 23 . Increasing the external field leads to an abrupt magnetization switch of the layer, as expected from PMA-SAF samples in the spin-flip regime (i.e., where the PMA is much larger than the RKKY coupling) when the field is applied along the easy axis 18 . We term this field the switching field, which depends on $\mu_0 H_{rkky}$ and $\mu_0 H_c$. From the minor loop we can extract $\mu_0 H_{rkky} = 208 \pm 10$ mT and $\mu_0 H_c = 6 \pm 1$ mT for the blanket film (see supplementary material section 3 for the details of the error calculation).
Let us now turn to the switching behavior of the patterned film of 1.8 µm NPs before release, as shown in Fig. 3(b). Ideally, the magnetic properties of the blanket film are propagated to the patterned NPs; however, two main differences can be observed: (1) $\mu_0 H_c$ has increased from 6 mT to 102 mT, and the switches are smeared out in the field; (2) the coupling field $\mu_0 H_{rkky}$ has reduced from 208 mT to 165 mT. The first observation can be explained by the dominant magnetic reversal mechanism (nucleation vs. domain wall propagation 26,27 ), which for these PMA-SAF films typically depends on the number of defects per surface area. Reducing the area of the object (patterning) reduces the chance of finding a defect per platelet, and hence leads to the increase of $\mu_0 H_c$. Moreover, the sample area probed in the SQUID is around $4 \times 4$ mm$^2$, containing $\sim 1.5 \times 10^6$ particles. The hysteresis observed is an ensemble response of all the NPs in the sample, from which the distribution of switching fields can be explained. The change in coupling field $\mu_0 H_{rkky}$ has been observed before and is attributed to processing-induced changes 28 . Despite these differences, the typical SAF properties, namely two distinctive switches and the well-defined antiferromagnetic state at zero applied field, are observed, similar to the blanket thin film. The saturation magnetization of the platelets remains constant at around 1300 kA/m between the blanket and patterned film (see Fig. 3(a) and 3(b)), which indicates that the fabrication process does not change the magnetic properties significantly.
Let us now concentrate on the hysteresis loop measured on the released platelets, as shown in Fig. 3(c) (see supplementary material section 2.A for sample preparation). As we cannot reliably quantify the number of platelets measured, we can only quantify the field response and not the saturation magnetization. Overall, the observed response is similar to that of the NPs before release. However, the hysteresis loop becomes more slanted and there is a slight increase in the switching distribution reflected in $\mu_0 H_c$, which we speculatively attribute to a distribution in the angle of alignment of the dried-in platelets relative to the applied field direction and possibly stray fields of piled-up platelets (see also Fig. 2(c), where many platelets are piled on top of each other). Overall, it is clear that the 1.8 µm NPs before and after release keep their antiferromagnetic state at a low magnetic field and switch at high applied fields. The similar $\mu_0 H_{rkky}$ and $\mu_0 H_c$ indicate that the final release step does not degrade the SAF properties. On comparing the blanket film to the final released particles, the most prominent change is found in the coercive field $\mu_0 H_c$, which we attribute to the well-known size effect of patterning PMA films 29 .
We will now move on to discuss the PMA of the NPs, which is the key factor for effective torque transduction 5 . To demonstrate the PMA of the NPs, the hard-plane hysteresis loops are shown in the right column of Fig. 3. The deposited thin film and the NPs before and after release show nearly identical hysteresis loops. At zero field, the magnetizations of the layers point antiparallel to each other, leading to zero net magnetization. With increasing absolute field, the magnetizations of the top and bottom CoB layers are tilted towards the applied field. Further increasing the field, the magnetizations tilt more and finally saturate. Here we define $\mu_0 H_{sat}$ as the saturation field, which is the crossing point of the saturated state ($M/M_s = 1$) and a linear fit of the data from -1000 mT to 1000 mT. To achieve the saturated state, the applied field should be large enough to overcome both the PMA and the RKKY interaction, from which $\mu_0 H_{sat}$ can be defined as $\mu_0 H_{sat} = \mu_0 H_k + 2\mu_0 H_{rkky}$, where $\mu_0 H_k$ is the effective PMA field of the magnetic layer 12 . The perpendicular anisotropy energy ($K$) is given by $K = \frac{H_k M_s}{2}$, where $M_s$ is the saturation magnetization of CoB. Both $\mu_0 H_{rkky}$ and $M_s$ are obtained from the easy-axis hysteresis loops. From the equation above, the $K$ values for the thin film sample, the NPs before release, and the NPs after release are $(5.4 \pm 0.4) \times 10^5$, $(5.9 \pm 0.5) \times 10^5$, and $(4.7 \pm 0.4) \times 10^5$ J/m$^3$, respectively. We attribute the difference in $K$ of the NPs before and after release to a spread in the alignment of the NPs relative to the applied magnetic field after drop-casting and drying. This directly affects the shape of the hysteresis loop, where a reduced saturation field is to be expected. Overall, the relatively small spread in $K$ of $\approx 15\%$ denotes that the PMA is maintained during fabrication.
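Numerically, the anisotropy extraction amounts to the following (a sketch of ours; $\mu_0 H_{sat}$ is an illustrative value chosen to reproduce the blanket-film result, and we read $K = H_k M_s / 2$ with the anisotropy field expressed as $\mu_0 H_k$ in tesla so that $K$ comes out in J/m$^3$):

```python
mu0_H_sat = 1.25    # T, illustrative hard-plane saturation field
mu0_H_rkky = 0.208  # T, easy-axis minor loop of the blanket film
Ms = 1.3e6          # A/m, saturation magnetization of CoB

mu0_H_k = mu0_H_sat - 2 * mu0_H_rkky  # effective PMA field (T)
K = mu0_H_k * Ms / 2                  # perpendicular anisotropy energy (J/m^3)
print(f"K = {K:.2e} J/m^3")           # ~5.4e5 J/m^3, the thin-film value
```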
After discussing the 1.8 µm SAF NPs, let us now examine the magnetic properties of the 120 nm NPs depicted in Fig. 4. First, we observe that all three samples exhibit SAF properties (see Fig. 4(a-c)). The $\mu_0 H_{rkky}$ is $197 \pm 6$ mT for the blanket film and $189 \pm 8$ mT for the NPs before release. After patterning, $\mu_0 H_c$ increased from 8 mT (blanket film) to 94 mT (before release). It is clear that $\mu_0 H_{rkky}$ and $\mu_0 H_c$ of the 120 nm NPs have values similar to those of the 1.8 µm NPs, and their changes follow the same trend. The main difference is observed in the hysteresis loop of the released sample (see Fig. 4(c)), where the switching fields are observed to be slanted and have a wider spread. This is due to the ill-defined arrangement of the dried NPs on the substrate (see Fig. 2(g)). This phenomenon is also reflected in the hard-plane hysteresis loop of the released 120 nm NPs, as shown in Fig. 4(f), where the embedded switching behavior can be observed. Due to the misalignment, it is hard to quantitatively determine $\mu_0 H_{rkky}$, $\mu_0 H_c$, and $K$ of the released 120 nm NPs. We now consider the PMA of the 120 nm NPs during fabrication, calculated from the hard-plane hysteresis loops shown in Fig. 4(d-f). Here we do not include the $K$ of the released NPs. The $K$ values for the thin film sample and the NPs before release are $(4.8 \pm 0.4) \times 10^5$ and $(5.2 \pm 0.4) \times 10^5$ J/m$^3$, respectively, which are comparable to the values for the 1.8 µm SAF NPs. Based on the hard-plane loop and the fact that the release step does not degrade the magnetic properties, although we cannot directly obtain the $K$ of the released NPs, we can conclude that the PMA of the 120 nm NPs remains at a high value before release, and we expect that the high PMA remains even after the release step, which we will address in future work. In summary, after fabrication the 120 nm NPs show SAF properties and the PMA does not decrease.
In conclusion, we show that SCIL can be used to fabricate PMA-SAF based disc-shaped platelets with a uniform diameter in the micrometer and sub-micrometer range. After fabrication, both 1.8 µm and 120 nm NPs maintain their high PMA for high torque applications. A change in the magnetic response is observed which can be explained by the well-known change in the magnetic switching behavior when the area of PMA-SAF films changes. Our results pave the way for using SCIL imprint for the large-scale production of magnetic nanoplatelets using a subtractive method.
FIG. 1. Schematic of the fabrication process of the PMA-SAF NPs. (1) A sacrificial layer of Cu is sputtered on the Si wafer. (2) The SAF stack is sputtered on the Cu layer. (3) Spin-coat the resist. (4) Imprint. (5) Reactive ion etching (RIE) is used to etch the resist. (6) The metal layers are etched by ion beam milling (IBM). (7) The remaining resist is removed by buffered hydrogen fluoride (BHF) solution. (8) The re-deposition caused by the IBM process is removed by sonication. (9) Finally, the NPs are released by dissolving the Cu layer in CuSO$_4$-ammonia solution.
FIG. 3. Hysteresis loops of the 1.8 µm NPs measured through SQUID. The left column contains the hysteresis loops measured along the easy axis for (a) the as-deposited thin film, (b) before release, and (c) after release in water and drying on a Si substrate. The insets in (a-c) show the zoomed-in region of the black box near the origin of the loop. The right column includes the hysteresis loops along the hard plane for (d) the as-deposited thin film, (e) before release, and (f) after release. The arrows indicate the direction of the magnetization of the top and bottom ferromagnetic layers.
ACKNOWLEDGMENTS

The authors gratefully acknowledge Janne-Mieke Meijer and Max Schelling from Eindhoven University of Technology for their assistance with the TEM measurements.

AUTHOR DECLARATIONS

Conflict of Interest: The authors have no conflicts to disclose.

DATA AVAILABILITY

The data that support the findings of this study are available.
1. Y. Gao, A. van Reenen, M. Hulsen, A. De Jong, M. Prins, and J. Den Toonder, "Chaotic fluid mixing by alternating microparticle topologies to enhance biochemical reactions," Microfluidics and Nanofluidics 16, 265-274 (2014).
2. H. Chiriac, E. Radu, M. Țibu, G. Stoian, G. Ababei, L. Lăbușcă, D.-N. Herea, and N. Lupu, "Fe-Cr-Nb-B ferromagnetic particles with shape anisotropy for cancer cell destruction by magneto-mechanical actuation," Scientific Reports 8, 1-9 (2018).
3. U. M. Engelmann, A. A. Roeth, D. Eberbeck, E. M. Buhl, U. P. Neumann, T. Schmitz-Rode, and I. Slabu, "Combining bulk temperature and nanoheating enables advanced magnetic fluid hyperthermia efficacy on pancreatic tumor cells," Scientific Reports 8, 1-12 (2018).
4. J. Dobson, "Remote control of cellular behaviour with magnetic nanoparticles," Nature Nanotechnology 3, 139-143 (2008).
5. R. M. Erb, J. J. Martin, R. Soheilian, C. Pan, and J. R. Barber, "Actuating soft matter with magnetic torque," Advanced Functional Materials 26, 3859-3880 (2016).
6. E. A. Rozhkova, V. Novosad, D.-H. Kim, J. Pearson, R. Divan, T. Rajh, and S. D. Bader, "Ferromagnetic microdisks as carriers for biomedical applications," Journal of Applied Physics 105, 07B306 (2009).
7. D.-H. Kim, E. A. Rozhkova, I. V. Ulasov, S. D. Bader, T. Rajh, M. S. Lesniak, and V. Novosad, "Biofunctionalized magnetic-vortex microdiscs for targeted cancer-cell destruction," Nature Materials 9, 165-171 (2010).
8. A. I. Martínez-Banderas, A. Aires, F. J. Teran, J. E. Perez, J. F. Cadenas, N. Alsharif, T. Ravasi, A. L. Cortajarena, and J. Kosel, "Functionalized magnetic nanowires for chemical and magneto-mechanical induction of cancer cell death," Scientific Reports 6, 1-11 (2016).
9. T. Vemulkar, E. N. Welbourne, R. Mansell, D. C. Petit, and R. P. Cowburn, "The mechanical response in a fluid of synthetic antiferromagnetic and ferrimagnetic microdiscs with perpendicular magnetic anisotropy," Applied Physics Letters 110 (2017), doi: 10.1063/1.4974211.
10. T. Vemulkar, R. Mansell, D. C. Petit, R. P. Cowburn, and M. S. Lesniak, "Highly tunable perpendicularly magnetized synthetic antiferromagnets for biotechnology applications," Applied Physics Letters 107 (2015), doi: 10.1063/1.4926336.
11. G. Varvaro, S. Laureti, D. Peddis, M. Hassan, G. Barucca, P. Mengucci, A. Gerardino, E. Giovine, O. Lik, D. Nissen, and M. Albrecht, "Co/Pd-based synthetic antiferromagnetic thin films on Au/resist underlayers: towards biomedical applications," Nanoscale 11, 21891-21899 (2019).
12. R. Mansell, T. Vemulkar, D. C. Petit, Y. Cheng, J. Murphy, M. S. Lesniak, and R. P. Cowburn, "Magnetic particles with perpendicular anisotropy for mechanical cancer cell destruction," Scientific Reports 7 (2017), doi: 10.1038/s41598-017-04154-1.
13. P. F. Carcia, "Perpendicular magnetic anisotropy in Pd/Co and Pt/Co thin-film layered structures," Journal of Applied Physics 63, 5066 (1988).
14. N. Nakajima, T. Koide, T. Shidara, H. Miyauchi, H. Fukutani, A. Fujimori, K. Iio, T. Katayama, M. Nývlt, and Y. Suzuki, "Perpendicular magnetic anisotropy caused by interfacial hybridization via enhanced orbital moment in Co/Pd multilayers," Physical Review Letters 81, 5229 (1998).
15. M. A. Ruderman and C. Kittel, "Indirect exchange coupling of nuclear magnetic moments by conduction electrons," Physical Review 96, 99 (1954).
16. T. Kasuya, "A theory of metallic ferro- and antiferromagnetism on Zener's model," Progress of Theoretical Physics 16, 45-57 (1956).
17. K. Yosida, "Magnetic properties of Cu-Mn alloys," Physical Review 106, 893 (1957).
FIG. 4. Hysteresis loops of the 120 nm NPs measured through SQUID. The left column contains the hysteresis loops measured along the easy axis for (a) the as-deposited thin film, (b) before release, and (c) after release. The right column includes the hysteresis loops along the hard plane for (d) the as-deposited thin film, (e) before release, and (f) after release.
18. R. Lavrijsen, A. Fernández-Pacheco, D. Petit, R. Mansell, J. H. Lee, and R. P. Cowburn, "Tuning the interlayer exchange coupling between single perpendicularly magnetized CoFeB layers," Applied Physics Letters 100 (2012), doi: 10.1063/1.3682103.
19. W. Hu, R. J. Wilson, A. Koh, A. Fu, A. Z. Faranesh, C. M. Earhart, S. J. Osterfeld, S. J. Han, L. Xu, S. Guccione, R. Sinclair, and S. X. Wang, "High-moment antiferromagnetic nanoparticles with tunable magnetic properties," Advanced Materials 20, 1479-1483 (2008).
20. W. Hu, M. Zhang, R. J. Wilson, A. L. Koh, J. S. Wi, M. Tang, R. Sinclair, and S. X. Wang, "Fabrication of planar, layered nanoparticles using tri-layer resist templates," Nanotechnology 22, 185302 (2011).
21. P. Tiberto, G. Barrera, F. Celegato, G. Conta, M. Coïsson, F. Vinai, and F. Albertini, "Ni80Fe20 nanodisks by nanosphere lithography for biomedical applications," Journal of Applied Physics 117 (2015), doi: 10.1063/1.4913278.
22. M. Goiriena-Goikoetxea, A. García-Arribas, M. Rouco, A. V. Svalov, and J. M. Barandiaran, "High-yield fabrication of 60 nm Permalloy nanodiscs in well-defined magnetic vortex state for biomedical applications," Nanotechnology 27, 175302 (2016).
23. E. N. Welbourne, T. Vemulkar, and R. P. Cowburn, "High-yield fabrication of perpendicularly magnetised synthetic antiferromagnetic nanodiscs," Nano Research, 1-6 (2021).
24. M. A. Verschuuren, M. Megens, Y. Ni, H. van Sprang, and A. Polman, "Large area nanoimprint by substrate conformal imprint lithography (SCIL)," Advanced Optical Technologies 6, 243-264 (2017).
25. E. N. Welbourne, T. Vemulkar, and R. P. Cowburn, "High-yield fabrication of perpendicularly magnetised synthetic antiferromagnetic nanodiscs," Nano Research, 1-6 (2021).
26. G. Hu, T. Thomson, C. T. Rettner, S. Raoux, and B. D. Terris, "Magnetization reversal in CoPd nanostructures and films," Journal of Applied Physics 97 (2005), doi: 10.1063/1.1849572.
27. G. Hu, T. Thomson, C. T. Rettner, and B. D. Terris, "Rotation and wall propagation in multidomain Co/Pd islands," IEEE Transactions on Magnetics 41, 3589-3591 (2005).
28. E. N. Welbourne, T. Vemulkar, D. C. M. C. Petit, and R. P. Cowburn, "Weakly coupled synthetic antiferromagnetic nanodisks with perpendicular magnetic anisotropy for lab-on-chip devices," Applied Physics Letters 119, 102401 (2021).
29. T. Thomson, G. Hu, and B. D. Terris, "Intrinsic distribution of magnetic anisotropy in thin films probed by patterned nanostructures," Physical Review Letters 96, 257204 (2006).
| [] |
[] | [] | [] | [] | Let p be an odd prime and F∞ be a Zp-extension of a number field F . Given an elliptic curve E over F , we study the structure of the fine Selmer group over F∞. It is shown that under certain conditions, the fine Selmer group is a cofinitely generated module over Zp and furthermore, we obtain an upper bound for its corank (i.e., the λ-invariant), in terms of various local and global invariants. | 10.1007/s11139-023-00734-0 | [
"https://export.arxiv.org/pdf/2208.13247v1.pdf"
] | 251,903,962 | 2208.13247 | 5ca2dc6820c57ab1bbf29f5ee784ed68c4276a4e |
ON THE CORANK OF THE FINE SELMER GROUP OF AN ELLIPTIC CURVE OVER A Z_p-EXTENSION

ANWESH RAY
Abstract. Let p be an odd prime and F_∞ a Z_p-extension of a number field F. Given an elliptic curve E over F, we study the structure of the fine Selmer group over F_∞. It is shown that, under certain conditions, the fine Selmer group is a cofinitely generated module over Z_p; furthermore, we obtain an upper bound for its corank (i.e., its λ-invariant) in terms of various local and global invariants.
Introduction
The variation of arithmetic objects in families of number fields is a central theme in number theory. Iwasawa studied the variation of class numbers of number fields in certain infinite towers of number field extensions. In greater detail, let p be a prime number and F a number field. Let Z_p denote the ring of p-adic integers, defined as the inverse limit lim←_n Z/p^n Z. A Z_p-tower consists of a family of Galois extensions {F_n} of F satisfying the containments F = F_0 ⊂ F_1 ⊂ F_2 ⊂ ... ⊂ F_n ⊂ F_{n+1} ⊂ ..., and such that Gal(F_n/F) is isomorphic to Z/p^n Z. The infinite Galois extension F_∞ := ∪_{n≥1} F_n is referred to as a Z_p-extension, since the Galois group Gal(F_∞/F) is isomorphic to Z_p. Let p^{e_n} be the highest power of p dividing the class number of F_n. Iwasawa showed that there are integers µ, λ ∈ Z_{≥0} and ν ∈ Z such that, for all sufficiently large values of n, we have e_n = p^n µ + n λ + ν (cf. [Iwa73] or [Was97, Chapter 13]). Let µ_{p^∞} be the group of p-power roots of unity in F̄; the cyclotomic Z_p-extension of F is the unique Z_p-extension of F contained in F(µ_{p^∞}). Iwasawa conjectured that for the cyclotomic Z_p-extension of F the µ-invariant vanishes; in other words, that over the cyclotomic Z_p-extension of F the p-primary part of the Hilbert class group is cofinitely generated as a Z_p-module. The corank of this Z_p-module is the λ-invariant.
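For intuition, Iwasawa's asymptotic formula is easy to tabulate; the following minimal Python sketch uses hypothetical invariants (the values of µ, λ, ν below are illustrative, not computed from any particular field):

# Iwasawa's asymptotic class-number formula: for n large,
# p^(e_n) exactly divides the class number of F_n, with
# e_n = mu * p**n + lam * n + nu.
p = 5
mu, lam, nu = 0, 2, 1   # hypothetical invariants, for illustration only

for n in range(1, 7):
    e_n = mu * p**n + lam * n + nu
    print(f"n = {n}: e_n = {e_n}  (class number divisible by {p}^{e_n})")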
Mazur (cf. [Maz72]) initiated the Iwasawa theory of abelian varieties with good ordinary reduction. The original motivation was to study the growth of Selmer groups, Tate–Shafarevich groups and Mordell–Weil ranks of an abelian variety in a Z_p-tower of number fields. The Selmer groups in question are defined by local conditions at each prime, given in terms of certain local Kummer maps, and capture various arithmetic properties of the elliptic curves or abelian varieties in question. A closer analogue of the Hilbert class group is the fine Selmer group, the subgroup cut out by strict local conditions at all primes (cf. Definition 2.1). The fine Selmer group appeared in various guises in work of Perrin-Riou (cf. [PR93, PR95]), Rubin (cf. [Rub00]) and Kato (cf. [K+04]). Its algebraic properties were systematically studied by Coates and Sujatha in [CS05], with an emphasis on the vanishing of the µ-invariant. Indeed, it is conjectured that the fine Selmer group of an elliptic curve over the cyclotomic Z_p-extension of a number field is a cofinitely generated Z_p-module (cf. Conjecture A in loc. cit.). In recent work (cf. [DRS22]), it is shown that this conjecture is indeed satisfied over the cyclotomic Z_p-extension for various Galois representations of interest, provided certain additional conditions hold.
When the µ-invariant is known to vanish, the Z_p-corank of the fine Selmer group is equal to the λ-invariant. The main focus of this paper is to shed light on the λ-invariant of the fine Selmer group of an elliptic curve over a general Z_p-extension. For the classical Selmer group of an elliptic curve with good ordinary reduction, the λ-invariant over the cyclotomic Z_p-extension can be computed, since there are efficient algorithms to compute the p-adic L-function [Ste07]. This calculation is possible because the main conjecture is known to hold in this context, thanks to the seminal work of Skinner and Urban [SU14]. The Z_p-corank of the fine Selmer group is shrouded in mystery, since there are few satisfactory computational tools giving a precise description of the characteristic ideal except in certain trivial cases. It should be noted that one possible technique for gaining insight into the λ-invariant is via the Euler characteristic formula for the leading term of the characteristic series (cf. [Wut07, Section 6]). If the leading term is a p-adic unit, then, under further hypotheses, the λ-invariant can be computed exactly and equals max{0, r − 1}, where r is the Mordell–Weil rank of the elliptic curve in question. Such ideas are used in studying distribution questions for the Iwasawa invariants of fine Selmer groups of elliptic curves (cf. [RS21]). However, this technique does not provide any upper bound on the λ-invariant when the leading term is not a p-adic unit. The upper bounds obtained in this paper can be expressed in terms of various local and global invariants. Theorem 4.6 gives an upper bound that holds in general; the formula is better understood in a special case in which the global invariants in our formula are shown to vanish, see Theorem 4.9. We provide a concrete example to clarify the main results of the paper.
Preliminary Notions
In this section, we introduce relevant notation and recall the definition and basic properties of fine Selmer groups associated to elliptic curves. Let F be a number field. Throughout, p will be an odd prime number. Fix an algebraic closure F̄ of F. For each prime v of F, let F_v denote the completion of F at v, and let F̄_v be a choice of algebraic closure of F_v. Fix an embedding ι_v : F̄ ↪ F̄_v for each prime v of F. Given a field 𝓕 contained in F̄, let G_𝓕 denote the absolute Galois group Gal(F̄/𝓕).
Let F_∞ be a Z_p-extension of F, i.e., F_∞ is an infinite Galois extension of F for which Gal(F_∞/F) is topologically isomorphic to Z_p. For n ∈ Z_{≥1}, we set F_n to be the subextension of F_∞/F such that [F_n : F] = p^n. Note that F_n is a Galois extension of F and that Gal(F_n/F) is isomorphic to Z/p^n Z. Let E/F be an elliptic curve with good reduction at all primes v|p of F. Let S be the set of primes v of F such that either v|p or E has bad reduction at v. Let S_p be the set of primes of F that lie above p. Given a set of primes Σ of F and an algebraic extension 𝓕/F, let Σ(𝓕) be the set of primes of 𝓕 that lie above Σ. Given a prime v of F, set v(𝓕) to be the set of primes of 𝓕 that lie above v.
Let 𝓕 be an extension of F contained in F̄. For a prime w of 𝓕, let 𝓕_w be the completion of 𝓕 at w. For ease of notation, H^i(𝓕, ·) (resp. H^i(𝓕_w, ·)) will be used as a shorthand for H^i(G_𝓕, ·) (resp. H^i(G_{𝓕_w}, ·)). Given a prime v of F, we set H_v(𝓕, E[p^∞]) to denote the following (possibly infinite) product:

H_v(𝓕, E[p^∞]) := ∏_{w ∈ v(𝓕)} H^1(𝓕_w, E[p^∞]).
Let F_S denote the maximal extension of F contained in F̄ in which all primes v ∉ S are unramified. Note that all primes v ∉ S_p are unramified in any Z_p-extension F_∞; therefore F_∞ is contained in F_S. Assume that 𝓕 is contained in F_S. The p-primary fine Selmer group of E over 𝓕 is defined as follows:

(2.1)   R(E/𝓕) := ker( H^1(F_S/𝓕, E[p^∞]) → ∏_{v∈S} H_v(𝓕, E[p^∞]) ),

where the map in question is the natural map induced by restriction to 𝓕_w as w ranges over the set S(𝓕).
We set Γ to denote the Galois group Γ := Gal(F_∞/F). The Iwasawa algebra Λ is the completed group algebra Λ := lim←_n Z_p[Γ/Γ^{p^n}]. Let γ be a topological generator of Γ, and set T := (γ − 1). We identify the Iwasawa algebra Λ with the formal power series ring Z_p[[T]]. A polynomial f(T) ∈ Λ is said to be distinguished if it is a nonconstant monic polynomial all of whose non-leading coefficients are divisible by p.
We introduce the Iwasawa invariants associated to a finitely generated module over Λ. A map of Λ-modules M_1 → M_2 is said to be a pseudo-isomorphism if its kernel and cokernel are both finite. Let M be a finitely generated module over Λ. According to the structure theorem for Λ-modules [Was97, Chapter 13], there is a pseudo-isomorphism of the form

M → Λ^r ⊕ ( ⊕_{i=1}^{s} Λ/(p^{µ_i}) ) ⊕ ( ⊕_{j=1}^{t} Λ/(f_j(T)^{λ_j}) ),

where r ∈ Z_{≥0} is the rank, the µ_i are positive integers and the f_j(T) are irreducible distinguished polynomials. The µ- and λ-invariants are defined as follows:

µ(M) := Σ_{i=1}^{s} µ_i   and   λ(M) := Σ_{j=1}^{t} λ_j deg f_j,

where we set µ(M) = 0 (resp. λ(M) = 0) if s = 0 (resp. t = 0). Denote the µ-invariant (resp. λ-invariant) of R(E/F_∞)^∨ by µ(E/F_∞) (resp. λ(E/F_∞)). We note that when F_∞/F is the cyclotomic Z_p-extension, it is conjectured that R(E/F_∞)^∨ is a torsion Λ-module and µ(E/F_∞) = 0, cf. [CS05, Conjecture A].
We are interested in the following question.
Question 2.1. Under what conditions can it be shown that the dual fine Selmer group R(E/F_∞)^∨ is a finitely generated torsion Λ-module with µ(E/F_∞) = 0? Furthermore, assuming these conditions are satisfied, how well can one characterize the λ-invariant λ(E/F_∞)?
That R(E/F_∞)^∨ is finitely generated as a Λ-module is well known, and follows from a standard argument involving an application of Nakayama's lemma.
Lemma 2.2. With respect to the notation above, the following conditions are equivalent:
(1) R(E/F_∞)^∨ is a torsion Λ-module with µ(E/F_∞) = 0;
(2) R(E/F_∞)[p] is finite.
Furthermore, if the above conditions are satisfied, then

λ(E/F_∞) ≤ dim_{Z/pZ} R(E/F_∞)[p].
Proof. Let M be a finitely generated Λ-module. It is an easy consequence of the structure theory that M is a torsion Λ-module with µ(M) = 0 if and only if M is finitely generated as a Z_p-module, which in turn holds if and only if M/pM is finite (by Nakayama's lemma). Taking M to be the dual fine Selmer group R(E/F_∞)^∨, and noting that the Pontryagin dual of R(E/F_∞)[p] is M/pM, we see that the two conditions above are equivalent. Furthermore, when these conditions hold, λ(M) is equal to the Z_p-rank of M, and hence λ(M) ≤ dim_{Z/pZ} M/pM. Therefore, λ(E/F_∞) ≤ dim_{Z/pZ} R(E/F_∞)[p].
Given a Λ-module M, we say that M is cofinitely generated (resp. cotorsion) as a Λ-module if the Pontryagin dual M^∨ := Hom(M, Q_p/Z_p) is finitely generated (resp. torsion) as a Λ-module.
The residual fine Selmer group
Recall that F is a number field, E is an elliptic curve defined over F, and p is an odd prime. Denote by E[p] the p-torsion subgroup of E(F̄), and note that as an abelian group, E[p] is isomorphic to (Z/pZ)^2. We define an analogue of the fine Selmer group associated with the residual representation on E[p].
Recall that S consists of the set of primes v ∤ p at which E has bad reduction, together with the primes v | p. For v ∈ S, set H_v(F_∞, E[p]) to be the (possibly infinite) product ∏_{w∈v(F_∞)} H^1(F_{∞,w}, E[p]). The residual fine Selmer group is defined as follows:

R(E[p]/F_∞) := ker( Φ : H^1(F_S/F_∞, E[p]) → ∏_{v∈S} H_v(F_∞, E[p]) ).
The map Φ is the natural map obtained by restriction to F_{∞,w} as w ranges over the primes in S(F_∞). Consider the Kummer sequence

0 → E[p] → E[p^∞] → E[p^∞] → 0,

where the second map is multiplication by p. Given a prime v ∈ S, let h_v be the map

h_v : H_v(F_∞, E[p]) → H_v(F_∞, E[p^∞])[p],

which is the product of the natural maps induced from the Kummer sequence,

h_w : H^1(F_{∞,w}, E[p]) → H^1(F_{∞,w}, E[p^∞])[p],

as w ranges over v(F_∞). Let h be the direct sum of the maps h_v as v ranges over S:

h : ∏_{v∈S} H_v(F_∞, E[p]) → ∏_{v∈S} H_v(F_∞, E[p^∞])[p].
From the Kummer sequence, we have the following commutative diagram with exact rows (3.1):

0 → R(E[p]/F_∞) → H^1(F_S/F_∞, E[p]) → im(Φ) → 0
        ↓ Ψ                 ↓ g                ↓ h′
0 → R(E/F_∞)[p] → H^1(F_S/F_∞, E[p^∞])[p] → ∏_{v∈S} H_v(F_∞, E[p^∞])[p].

In the above diagram, the map g is surjective with kernel isomorphic to H^0(F_∞, E[p^∞]) / p H^0(F_∞, E[p^∞]). The vertical map h′ is the restriction of the map h to im(Φ). By an application of the snake lemma, we have the following exact sequence:

(3.2)   0 → ker Ψ → ker g → ker h′ → cok Ψ → 0.
Note that for v ∈ S, the kernel of h v is the product of kernels of h w as w ranges over the primes of F ∞ that lie above v.
Proposition 3.1. With respect to the notation above, assume that the following conditions are satisfied:
(1) ker(h) is finite;
(2) R(E[p]/F_∞) is finite.
Then the fine Selmer group R(E/F_∞) is a cotorsion Λ-module with µ(E/F_∞) = 0. Furthermore, we have that

λ(E/F_∞) ≤ dim_{Z/pZ} R(E[p]/F_∞) + dim_{Z/pZ} ker(h).
Proof. First, we show that R(E/F_∞) is a cotorsion Λ-module with µ(E/F_∞) = 0. According to Lemma 2.2, this holds if and only if R(E/F_∞)[p] is finite. Consider the map Ψ : R(E[p]/F_∞) → R(E/F_∞)[p] (cf. (3.1)); it suffices to show that ker Ψ and cok Ψ are finite. Referring to the exact sequence (3.2), we deduce that ker Ψ is finite since ker(g) is finite. Since ker(h) is assumed to be finite, it follows that cok Ψ is finite as well. Therefore R(E/F_∞)[p] is finite, and we deduce that R(E/F_∞) is a cotorsion Λ-module with µ(E/F_∞) = 0.
Note that it also follows from Lemma 2.2 that λ(E/F_∞) ≤ dim_{Z/pZ} R(E/F_∞)[p]. Hence, we arrive at the bounds

λ(E/F_∞) ≤ dim_{Z/pZ} R(E/F_∞)[p]
         ≤ dim_{Z/pZ} R(E[p]/F_∞) + dim_{Z/pZ}(cok Ψ)
         ≤ dim_{Z/pZ} R(E[p]/F_∞) + dim_{Z/pZ} ker(h),

and this completes the proof.
Definition 3.2. Let S_0 be the set of primes v ∈ S at which both of the following conditions are satisfied:
(1) E has bad reduction at v;
(2) if p ≥ 5 and µ_p is contained in F_v, then E has split multiplicative reduction at v.
Assumption 3.3. Assume that each prime v ∈ S 0 ∪ S p is finitely decomposed in F ∞ .
Since all primes of F are finitely decomposed in its cyclotomic Z_p-extension, the above assumption is always satisfied when F_∞ is the cyclotomic Z_p-extension of F. Note that since E is stipulated to have good reduction at all primes v ∈ S_p, we find that S_0 is a subset of S \ S_p. Given v ∈ S_p, set

δ_v := 2 if E(F_v)[p] ≠ 0,   and   δ_v := 0 if E(F_v)[p] = 0.
For any prime v ∈ S ∪ S p , let g v be the number of primes w ∈ v(F ∞ ).
Lemma 3.4. It follows from Assumption 3.3 that ker(h) is finite; moreover,

dim ker(h) ≤ Σ_{v∈S_0} 2 g_v + Σ_{v∈S_p} δ_v g_v.
Proof. Let v ∈ S and let w be a prime of F_∞ that lies above v. From the Kummer sequence, we find that the kernel of h_w is identified with E(F_{∞,w})[p^∞] ⊗ Z/pZ. In particular, the dimension of ker h_w is at most 2. There are three cases to consider.
(1) If v ∈ S_0, then dim ker(h_w) is at most 2, and thus dim ker(h_v) is at most 2 g_v.
(2) If v ∈ S \ (S_0 ∪ S_p), then µ_p is contained in F_v and E has either non-split multiplicative reduction or additive reduction at v. Since v ∤ p, it is unramified in any Z_p-extension, in particular in F_∞. Thus E has either non-split multiplicative reduction or additive reduction at every prime w of F_∞ that lies above v. It then follows from [HM99, Proposition 5.1(iii)] that E(F_{∞,w})[p^∞] = 0 for all primes w ∈ v(F_∞). As a result, the kernel of h_v is 0 in this case.
(3) Finally, consider the case when v ∈ S_p. There are two subcases. (a) First, assume that E(F_v)[p] = 0. Then, since F_{∞,w}/F_v is a pro-p extension, it follows that E(F_{∞,w})[p^∞] = 0 (cf. [NSW13, Proposition 1.6.12]), and the kernel of h_v is 0 in this case. (b) On the other hand, if E(F_v)[p] ≠ 0, then we still have the bound dim(ker h_w) ≤ 2 for each prime w of F_∞ that lies above v.
From the above case decomposition, we find that dim ker(h) ≤ Σ_{v∈S_0} 2 g_v + Σ_{v∈S_p} δ_v g_v. Since g_v is assumed to be finite for each v ∈ S_0 ∪ S_p, the sum is finite.
Main results and their proofs
In this section, we state and prove the main results of the paper. We illustrate the bounds obtained via a concrete example at the end of the section.
Given an algebraic extension 𝓕, let H(𝓕) (resp. A(𝓕)) be the maximal abelian unramified extension of 𝓕 such that Gal(H(𝓕)/𝓕) is an elementary abelian p-group (resp. Gal(A(𝓕)/𝓕) is a pro-p group). Let F(E[p]) be the extension of F generated by E[p]; in other words, F(E[p]) is the subfield of F̄ fixed by the kernel of ρ̄ : G_F → Aut(E[p]). Let K_n be the extension F_n(E[p]) and set L_n := H(K_n). Set K_∞ := F_∞(E[p]), and let L_∞ (resp. 𝓛_∞) denote the extension H(K_∞) (resp. A(K_∞)). We let X := Gal(L_∞/K_∞) and 𝒳 := Gal(𝓛_∞/K_∞). We set K := F(E[p]), and note that K_∞ is a Z_p-extension of K. Let Λ_K := Z_p[[Gal(K_∞/K)]] denote the associated Iwasawa algebra, and note that 𝒳 is a module over Λ_K; it is in fact known that 𝒳 is a finitely generated Λ_K-module (cf. [Was97, Chapter 13]). By construction, X = 𝒳/p𝒳 is a module over Ω := Λ_K/p.

Definition 4.1. Denote by H_S(K_∞) the maximal extension of K_∞, contained in A(K_∞), in which the primes of S(K_∞) are completely split. Set 𝒴 := Gal(H_S(K_∞)/K_∞), and set Y := 𝒴/p𝒴. Set G to denote the Galois group Gal(K_∞/F_∞).

From the inflation–restriction sequence, we have the following exact sequence:
0 → H^1(G, E(K_∞)[p]) --inf--> H^1(F_S/F_∞, E[p]) --res--> H^1(K_S/K_∞, E[p]).
Note that the action of Gal(K_S/K_∞) on E[p] is trivial, and therefore we find that

H^1(K_S/K_∞, E[p]) = Hom(Gal(K_S/K_∞), (Z/pZ)^2).
The residual fine Selmer group R(E[p]/F_∞) is a subgroup of H^1(F_S/F_∞, E[p]). Let Z denote inf^{-1}(R(E[p]/F_∞)) and set Z_1 := res(R(E[p]/F_∞)). Note that Z_1 can be identified with a subgroup of Hom(Y, Z/pZ)^2. In this way, we have an exact sequence

(4.1)   0 → Z → R(E[p]/F_∞) → Hom(Y, Z/pZ)^2,

where the image of the rightmost map is Z_1.
where the image of the rightmost map is Z 1 . Since G is finite, it is clear that Z is finite as well. The following assumption is a special case of Iwasawa's classical µ = 0 conjecture.
Assumption 4.2. Assume that 𝒴 is a torsion Λ_K-module and that µ(𝒴) = 0; equivalently, assume that 𝒴 is finitely generated as a Z_p-module.
Lemma 4.3. With respect to the notation above, the following conditions are equivalent:
(1) Assumption 4.2 is satisfied;
(2) Y is finite.
Furthermore, if the above equivalent conditions are satisfied, then λ(𝒴) ≤ dim_{Z/pZ} Y.

Proof. It is an easy consequence of the structure theory of finitely generated Iwasawa modules that Assumption 4.2 is satisfied if and only if 𝒴 is finitely generated as a Z_p-module, in which case λ(𝒴) = rank_{Z_p} 𝒴. Note that

rank_{Z_p} 𝒴 ≤ dim_{Z/pZ} 𝒴/p𝒴 = dim_{Z/pZ} Y,

and the result follows.
In many situations, it is shown that certain Iwasawa modules that arise from class groups do not contain any non-zero finite Λ-submodules. This property, if it holds, implies that the dimension of Y equals the λ-invariant of 𝒴, as the following result shows.
Proposition 4.4. With respect to the notation above, suppose that Assumption 4.2 holds. Then the following conditions are equivalent:
(1) 𝒴 does not contain any non-zero finite Λ_K-submodules;
(2) λ(𝒴) = dim_{Z/pZ} Y.

Proof. It follows from Lemma 4.3 that rank_{Z_p} 𝒴 is finite and

𝒴 ≅ Z_p^{λ(𝒴)} ⊕ A,

where A is a finite abelian p-group. Note that A is the p-primary torsion subgroup of 𝒴; hence A is a finite Λ_K-submodule of 𝒴. Therefore, condition (1) is equivalent to the vanishing of A. On the other hand, dim_{Z/pZ}(Y) = λ(𝒴) + dim_{Z/pZ}(A/pA), and A = 0 if and only if A/pA = 0. Therefore, (1) and (2) are equivalent.
Proposition 4.5. With respect to the notation above, suppose that Assumption 4.2 holds. Then R(E[p]/F_∞) is finite and

dim_{Z/pZ} R(E[p]/F_∞) ≤ 2 dim_{Z/pZ}(Y) + dim_{Z/pZ}(Z).

Proof. It follows from Lemma 4.3 that Y is finite. The result then follows from the exact sequence (4.1).
Theorem 4.6. Let E be an elliptic curve over a number field F and let p be an odd prime such that E has good reduction at all primes that lie above p. Let F_∞/F be a Z_p-extension of F. Suppose that Assumptions 3.3 and 4.2 hold, and let 𝒴 and Y be as in Definition 4.1. Then the following assertions hold:
(1) R(E/F_∞) is a cotorsion module over Λ with µ(E/F_∞) = 0;
(2) Y is finite, and the following bound on the λ-invariant holds:

λ(E/F_∞) ≤ 2 dim(Y) + dim(Z) + Σ_{v∈S_0} 2 g_v + Σ_{v∈S_p} δ_v g_v.
Proof. Recall that Proposition 3.1 asserts that if (1) ker(h) is finite and (2) R(E[p]/F_∞) is finite, then R(E/F_∞) is a cotorsion Λ-module with µ(E/F_∞) = 0, and furthermore

λ(E/F_∞) ≤ dim R(E[p]/F_∞) + dim ker(h).

Lemma 3.4 states that ker(h) is finite and that dim ker(h) is bounded above by Σ_{v∈S_0} 2 g_v + Σ_{v∈S_p} δ_v g_v. The result therefore follows from Proposition 4.5.

We now show that Y and Z are trivial provided further conditions are satisfied. Let ρ̄ : G_F → GL_2(Z/pZ) be the residual representation on E[p].
Lemma 4.7. Suppose that the image of ρ̄ is a non-solvable subgroup of GL_2(Z/pZ). Then ρ̄|_{G_{F_∞}} is irreducible.

Proof. If ρ̄|_{G_{F_∞}} is reducible, then its image is solvable. Since F_∞ is an abelian extension of F, the quotient of the image of ρ̄ by the image of ρ̄|_{G_{F_∞}} is abelian, so the image of ρ̄ would then also be solvable, a contradiction.
Lemma 4.8. The following assertions hold.
(1) Suppose that one (or both) of the following conditions on ρ̄ is satisfied:
(a) ρ̄ is irreducible when restricted to G_{F_∞};
(b) the image of ρ̄ has cardinality coprime to p.
Then Z = 0.
(2) Suppose that both of the following conditions are satisfied:
(a) there is only one prime v of K that lies above p which ramifies in K_∞, and v is totally ramified in K_∞;
(b) p does not divide the class number of K.
Then Y = 0.

Proof. We first prove part (1), considering the two conditions one at a time. First, assume that ρ̄ is irreducible when restricted to G_{F_∞}. Then the vanishing of Z follows from [PS21, Lemma 2.2]. Next, assume that the image of ρ̄ has cardinality coprime to p. Since E[p] ≅ (Z/pZ)^2 is a p-group and G has cardinality coprime to p, it follows that H^1(G, E[p]) = 0; as a consequence, we deduce that Z = 0.
Part (2) follows from [Was97, Proposition 13.22].
Theorem 4.9. Let E be an elliptic curve over a number field F and let p be an odd prime such that E has good reduction at all primes in S_p. Let F_∞/F be a Z_p-extension of F. With respect to the notation above, assume that Assumption 3.3 is satisfied. Furthermore, suppose that the following conditions hold:
(1) the image of ρ̄ is non-solvable or has cardinality coprime to p;
(2) there is only one prime of K which ramifies in K_∞, and this prime is totally ramified;
(3) A(K) = 0.
Then R(E/F_∞) is a cotorsion module over Λ with µ(E/F_∞) = 0, and

λ(E/F_∞) ≤ Σ_{v∈S_0} 2 g_v + Σ_{v∈S_p} δ_v g_v.

In particular, if S_0 = ∅ and E(F_v)[p] = 0 for all primes v ∈ S_p, then λ(E/F_∞) = 0 for all Z_p-extensions F_∞/F.

Proof. Lemma 4.8 implies that Y = 0 and Z = 0; in particular, Assumption 4.2 is automatically satisfied. The result follows directly from Theorem 4.6 and Lemma 4.8.
An example. We illustrate the above results through an example. Consider the elliptic curve E with Cremona label 11a2 and let p = 5. The residual representation ρ̄ : G_Q → GL_2(Z/pZ) is a direct sum of two characters, ρ̄ = χ̄ ⊕ 1, where χ̄ is the mod-p cyclotomic character.
(1) First, consider the case when F = Q and F_∞ = Q_∞ is the cyclotomic Z_p-extension of Q. Note that K = Q(E[p]) is the number field Q(µ_p), and K_∞ is Q(µ_{p^∞}), the cyclotomic Z_p-extension of K. The set of primes S is equal to {11, p} and S_0 = {11}. All primes of Q are finitely decomposed in Q_∞, the cyclotomic Z_p-extension of Q, hence Assumption 3.3 is satisfied. We verify the conditions of Theorem 4.9:
• the image of ρ̄ clearly has cardinality coprime to p;
• let η_p be the prime of K that lies above p; note that η_p is the principal ideal generated by (1 − e^{2πi/p}) and that p O_K = η_p^{p−1}. The prime η_p is totally ramified in K_∞ = Q(µ_{p^∞});
• the prime p = 5 is a regular prime, i.e., p does not divide the class number of K = Q(µ_p) (the smallest irregular prime is 37).
We compute g_{11}, i.e., the number of primes of Q_∞ that lie above 11. Since 5² ∤ (11⁴ − 1), it follows that g_{11} = 1. On the other hand, p is totally ramified in Q_∞; hence g_p = 1. Theorem 4.9 then implies that R(E/Q_∞) is a cotorsion module over Λ with µ(E/Q_∞) = 0, and

λ(E/Q_∞) ≤ Σ_{v∈S_0} 2 g_v + Σ_{v∈S_p} δ_v g_v ≤ 4.
(2) Next, suppose that F = Q(µ_p). Note that K = F and that there are p − 1 = 4 independent Z_p-extensions of F. Let F_∞/F be any Z_p-extension in which 11 is finitely decomposed and in which η_p is totally ramified; in particular, Assumption 3.3 holds. Note that K_∞ is equal to F_∞. The same arguments as in part (1) above imply that the assumptions of Theorem 4.9 are satisfied. Theorem 4.9 then implies that R(E/F_∞) is a cotorsion module over Λ with µ(E/F_∞) = 0, and

λ(E/F_∞) ≤ Σ_{v∈S_0} 2 g_v + Σ_{v∈S_p} δ_v g_v = Σ_{v|11} 2 g_v + δ_{η_p}.

When F_∞ is taken to be the cyclotomic Z_p-extension of F, Assumption 3.3 is clearly satisfied. In this case, we find that 11 is totally inert in F_∞, and therefore that λ(E/F_∞) ≤ 4.
References

[CS05] John Coates and Ramdorai Sujatha. Fine Selmer groups of elliptic curves over p-adic Lie extensions. Mathematische Annalen, 331(4):809–839, 2005.
[DRS22] Shaunak V. Deo, Anwesh Ray, and R. Sujatha. On the µ equals zero conjecture for the fine Selmer group in Iwasawa theory. arXiv preprint arXiv:2202.09937, 2022.
[HM99] Yoshitaka Hachimori and Kazuo Matsuno. An analogue of Kida's formula for the Selmer groups of elliptic curves. Journal of Algebraic Geometry, 8(3):581–601, 1999.
[Iwa73] Kenkichi Iwasawa. On Z_ℓ-extensions of algebraic number fields. Annals of Mathematics, pages 246–326, 1973.
[K+04] Kazuya Kato et al. p-adic Hodge theory and values of zeta functions of modular forms. Astérisque, 295:117–290, 2004.
[Maz72] Barry Mazur. Rational points of abelian varieties with values in towers of number fields. Inventiones Mathematicae, 18(3):183–266, 1972.
[NSW13] Jürgen Neukirch, Alexander Schmidt, and Kay Wingberg. Cohomology of Number Fields, volume 323. Springer Science & Business Media, 2013.
[PR93] Bernadette Perrin-Riou. Fonctions L p-adiques d'une courbe elliptique et points rationnels. Annales de l'Institut Fourier, 43:945–995, 1993.
[PR95] Bernadette Perrin-Riou. Fonctions L p-adiques des représentations p-adiques. Astérisque. Société Mathématique de France, 1995.
[PS21] Dipendra Prasad and Sudhanshu Shekhar. Relating the Tate–Shafarevich group of an elliptic curve with the class group. Pacific Journal of Mathematics, 312(1):203–218, 2021.
[RS21] Anwesh Ray and R. Sujatha. Arithmetic statistics for the fine Selmer group in Iwasawa theory. arXiv preprint arXiv:2112.13335, 2021.
[Rub00] Karl Rubin. Euler Systems. Annals of Mathematics Studies 147. Princeton University Press, 2000.
[Ste07] William Stein. Sage mathematics software. http://www.sagemath.org/, 2007.
[SU14] Christopher Skinner and Eric Urban. The Iwasawa main conjectures for GL_2. Inventiones Mathematicae, 195(1):1–277, 2014.
[Was97] Lawrence C. Washington. Introduction to Cyclotomic Fields, volume 83. Springer Science & Business Media, 1997.
[Wut07] Christian Wuthrich. Iwasawa theory of the fine Selmer group. Journal of Algebraic Geometry, 16(1):83–108, 2007.

(A. Ray) Department of Mathematics, University of British Columbia, Vancouver BC, Canada V6T 1Z2
Email address: [email protected]
| [] |
[
"Supplementary Materials: An approximate diffusion process for environmental stochasticity in infectious disease transmission modelling",
"Supplementary Materials: An approximate diffusion process for environmental stochasticity in infectious disease transmission modelling"
] | [
"Sanmitra Ghosh \nMRC Biostatistics Unit\nUniversity of Cambridge\nCambridgeUK\n",
"Paul J Birrell \nMRC Biostatistics Unit\nUniversity of Cambridge\nCambridgeUK\n\nUK Health Security Agency\nLondonUK\n",
"Daniela De Angelis \nMRC Biostatistics Unit\nUniversity of Cambridge\nCambridgeUK\n\nUK Health Security Agency\nLondonUK\n"
] | [
"MRC Biostatistics Unit\nUniversity of Cambridge\nCambridgeUK",
"MRC Biostatistics Unit\nUniversity of Cambridge\nCambridgeUK",
"UK Health Security Agency\nLondonUK",
"MRC Biostatistics Unit\nUniversity of Cambridge\nCambridgeUK",
"UK Health Security Agency\nLondonUK"
] | [] | Appendix A: Fourier expansion of Brownian motionBy the definition of an Itô integral, within a time interval [0, T ] a standard Brownian motion can be written as [1, 2]:whereWe can interpret I [0,t] as an element of L 2 [0, T ], and expand it in terms of the basis functions:(2) Substituting (2) into (1) we see that:(3) | 10.1371/journal.pcbi.1011088 | [
"https://export.arxiv.org/pdf/2208.14363v1.pdf"
] | 251,928,964 | 2208.14363 | e30ab09b8d4da5dd7ff22fd98400579fcb094aa2 |
Supplementary Materials: An approximate diffusion process for environmental stochasticity in infectious disease transmission modelling

Sanmitra Ghosh (1), Paul J. Birrell (1,2), Daniela De Angelis (1,2)
(1) MRC Biostatistics Unit, University of Cambridge, Cambridge, UK
(2) UK Health Security Agency, London, UK
Appendix A: Fourier expansion of Brownian motion
By the definition of an Itô integral, within a time interval [0, T ] a standard Brownian motion can be written as [1,2]:
W_t = ∫_0^t dW_s = ∫_0^T I_{[0,t]}(s) dW_s,   (1)

where I_{[0,t]}(·) is the indicator function. Suppose {φ_i}_{i=1}^∞ is a complete orthonormal basis of L²[0, T]. We can interpret I_{[0,t]} as an element of L²[0, T], and expand it in terms of the basis functions:

I_{[0,t]}(s) = Σ_{i=1}^∞ ( ∫_0^t φ_i(u) du ) φ_i(s).   (2)

Substituting (2) into (1) we see that:

W_t = Σ_{i=1}^∞ ( ∫_0^T φ_i(s) dW_s ) ∫_0^t φ_i(u) du.   (3)
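To make (3) concrete, here is a minimal, self-contained Python sketch. The sine basis and the truncation level are our own illustrative assumptions (any complete orthonormal basis of L²[0, T] would do); since the φ_i are orthonormal, the stochastic integrals ∫_0^T φ_i(s) dW_s are i.i.d. standard normal variables, so a truncated version of (3) can be simulated directly:

import numpy as np

rng = np.random.default_rng(0)

T, n_basis, n_grid = 1.0, 50, 200
t = np.linspace(0.0, T, n_grid)

# Orthonormal basis of L^2[0, T] (illustrative choice):
#   phi_i(s) = sqrt(2/T) * sin((i - 1/2) * pi * s / T),  i = 1, 2, ...
i = np.arange(1, n_basis + 1)
omega = (i - 0.5) * np.pi / T

# Antiderivatives int_0^t phi_i(u) du, in closed form.
Phi = np.sqrt(2.0 / T) * (1.0 - np.cos(np.outer(t, omega))) / omega

# The coefficients int_0^T phi_i(s) dW_s are i.i.d. N(0, 1).
z = rng.standard_normal(n_basis)

# Truncated expansion (3): W_t ~ sum_i z_i * int_0^t phi_i(u) du.
W = Phi @ z
print(W[:5])   # one approximate Brownian path sampled on the grid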
Appendix B: Adaptive MCMC
In an adaptive MCMC algorithm, optimal values of the proposal density's parameters are learnt on the fly using past samples from the Markov chain. Different mechanisms can be used to adapt or learn the parameters of the proposal; [3] proposed a general framework for constructing adaptive MCMC algorithms that relies on the stochastic approximation method [4] for learning the proposal's parameters on the fly. Consider in general the proposal density q_φ(θ_{j+1} | θ_j), parameterised by φ. Let us also define a suitable objective function

h(φ) := E_φ[ H(φ, θ_0, θ_1, ..., θ_j, θ_{j+1}) ],   (4)

that expresses some measure of the statistical performance of the Markov chain in its stationary regime; the expectation is with respect to a φ-dependent distribution. For example, the coerced acceptance probability is often used as the objective:

H(φ, θ_0, θ_1, ..., θ_j, θ_{j+1}) = min{ 1, [π(θ_{j+1}) q_φ(θ_j | θ_{j+1})] / [π(θ_j) q_φ(θ_{j+1} | θ_j)] } − ᾱ =: α_{j+1} − ᾱ,   (5)

where π(θ) is the target distribution and ᾱ is the approximate optimal expected acceptance probability in the stationary regime. For the Gaussian proposal q := N(θ_{j+1} | θ_j, Σ_j), with its parameter φ being the covariance Σ_j, the objective function

H(Σ_j, θ_{j+1}) = θ_{j+1} θ_{j+1}′ − Σ_j   (6)

corresponds to matching the moments of the proposal with those of the target. Here a′ denotes the transpose of the vector a.
Optimal exploration of π(θ) can thus be formulated as finding the root φ̂ of the equation h(φ) = 0. The challenge here is to devise an algorithm to find the roots of h(φ), which involves both integration and optimisation. [3] suggested using the stochastic approximation method [4], which is tailored to this situation:

φ_{j+1} = φ_j + δ_{j+1} H(φ_j, θ_0, θ_1, ..., θ_j, θ_{j+1})
        = φ_j + δ_{j+1} h(φ) + δ_{j+1} [ H(φ_j, θ_0, θ_1, ..., θ_j, θ_{j+1}) − h(φ) ]
        = φ_j + δ_{j+1} h(φ) + δ_{j+1} ξ_{j+1},   (7)

where ξ_{j+1} := H(φ_j, θ_0, θ_1, ..., θ_j, θ_{j+1}) − h(φ) is usually referred to as the noise term and δ_j is a decreasing sequence (a step-size parameter). If the noise term ξ_{j+1} averages to zero as j → ∞, the above recursion will converge to the root φ̂ (or at least oscillate around it) when the following conditions hold:

Σ_{j=0}^∞ δ_j = ∞   and   Σ_{j=0}^∞ δ_j² < ∞.   (8)
Combining the above objective functions and using the stochastic approximation method, we have the following recursions for adapting a random-walk proposal with a global scaling λ_j, N(θ_{j+1} | θ_j, λ_j Σ_j), as in [3]:

log(λ_{j+1}) = log(λ_j) + δ_{j+1} (α_{j+1} − ᾱ),
µ_{j+1} = µ_j + δ_{j+1} (θ_{j+1} − µ_j),
Σ_{j+1} = Σ_j + δ_{j+1} (θ_{j+1} θ_{j+1}′ − Σ_j),   (9)

where the recursion in the first equation, which adapts the global scaling, is based on the coerced acceptance probability objective in (5), and the following two equations minimise the moment-matching objective in (6). By choosing a decreasing sequence {δ_j}_{j=0}^∞ of step sizes it is ensured that the adaptation declines over time, also known as vanishing adaptation [3], and the Markov chain converges to the correct stationary distribution. For all the experiments we have consistently used the schedule

δ_j = j^{−0.6},   (10)

which was shown to work particularly well for nonlinear differential equation models in [5].
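The recursions (9)–(10) are straightforward to implement; below is a minimal, self-contained Python sketch on an illustrative zero-mean Gaussian target (the target, the dimension and the rate ᾱ = 0.234 are our own assumptions, not values from the paper). For a zero-mean target, the second-moment recursion for Σ in (9) coincides with the covariance:

import numpy as np

rng = np.random.default_rng(1)

cov_target = np.array([[1.0, 0.9], [0.9, 1.0]])
prec = np.linalg.inv(cov_target)

def log_target(theta):
    # Illustrative zero-mean bivariate Gaussian target.
    return -0.5 * theta @ prec @ theta

d, J, alpha_bar = 2, 20_000, 0.234   # alpha_bar: an assumed target rate
theta = np.zeros(d)
log_lam, mu, Sigma = 0.0, np.zeros(d), np.eye(d)

for j in range(1, J + 1):
    # Random-walk proposal N(theta, lambda_j * Sigma_j).
    prop = theta + rng.multivariate_normal(np.zeros(d), np.exp(log_lam) * Sigma)
    alpha = min(1.0, np.exp(log_target(prop) - log_target(theta)))
    if rng.uniform() < alpha:
        theta = prop

    # Recursions (9) with the vanishing step size (10): delta_j = j^(-0.6).
    # mu is tracked by (9), though the proposal stays centred at the chain.
    delta = j ** -0.6
    log_lam += delta * (alpha - alpha_bar)
    mu += delta * (theta - mu)
    Sigma += delta * (np.outer(theta, theta) - Sigma)

print("adapted scale exp(log_lam):", np.exp(log_lam))
print("adapted Sigma (second moment of the target):\n", Sigma)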
Appendix C: Simulation study for influenza epidemic
With a real dataset, the ground truth of the estimated quantities is unknown. We have therefore also carried out a detailed simulation study using simulated datasets that mimic the influenza epidemic used in the main text. We generated three simulated epidemics using the model in Eq (2) of the main text, on the same time period T = 14 days and with the same population size N = 763 as the real influenza epidemic. We chose parameter values that generate an epidemic curve similar to the real dataset; these generative parameter values are shown in Figures 1–3. We then proceed to fit the two alternative models using the inferential setup discussed in the main text. In Figures 1–3 we compare the marginal densities of the parameters obtained using the SDE and SA counterparts for each of the simulated datasets. Clearly the estimates match well and the generative parameter values are recovered.

Furthermore, in Figures 4–6 we compare the goodness-of-fit. As was found for the real dataset, we observed little disagreement between the epidemic curves obtained using the SDE and the SA, but for the posterior distribution of the latent diffusion paths we noticed, for all the datasets, that the credible intervals are narrower for the SA. For all these datasets, the posterior means, and the draws of the sample path, of the two models match well.
Appendix D: Calculating a time-varying reproduction number
The estimate of the contact rate β_{t_k,r} is used to derive an estimate of a time-varying reproduction number. Firstly, using the formula of [6], the initial reproduction number R_{0,r} is estimated as follows:

R_{0,r} = ψ_r d_I (ψ_r d_L / 2 + 1)² / ( 1 − (ψ_r d_I / 2 + 1)^{−2} ).   (11)

Over time the value of the reproduction number will change as contact patterns shift and the supply of susceptible individuals depletes. The time-t_k reproduction number is then estimated using the following formula:

R_{t_k,r} = R_{0,r} R*_{t_k,r} / R*_{0,r}             if t_k < t_lock,
R_{t_k,r} = β_{t_k,r} R_{0,r} R*_{t_k,r} / R*_{0,r}   if t_k ≥ t_lock,   (12)

where t_lock indicates the time point corresponding to the lockdown. Here R*_{t_k,r} is the dominant eigenvalue of the time-t_k next-generation matrix Λ_{k,r}, with elements

(Λ_{k,r})_{ij} = S_{r,t_k,i} C^{t_k}_{r,ij} d_I,   (13)

where C^{t_k}_{r,ij} is a region-specific time-varying contact matrix; see [7] for further details on these matrices. To get an 'all England' value R_{t_k,E}, a weighted average of the regional R_{t_k,r} is calculated, where the weights are given by the sum of the infections in each region:

R_{t_k,E} = Σ_r R_{t_k,r} Σ_i Δ^{infec}_{r,t_k,i} / Σ_r Σ_i Δ^{infec}_{r,t_k,i}.   (14)
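The eigenvalue computation in (12)–(14) is easy to sketch; in the Python fragment below the contact matrix, susceptible fractions, increments and scalar values are hypothetical placeholders, not the values used in the paper:

import numpy as np

rng = np.random.default_rng(2)
n_age, d_I = 7, 3.3                      # placeholder age groups / infectious period

def dominant_eigenvalue(C, S, d_I):
    """R*: dominant eigenvalue of the next-generation matrix (13),
    with (Lambda)_{ij} = S_i * C_{ij} * d_I."""
    Lam = S[:, None] * C * d_I
    return np.max(np.abs(np.linalg.eigvals(Lam)))

C = rng.uniform(0.0, 1.0, (n_age, n_age))    # hypothetical contact matrix
S0 = np.ones(n_age)                          # fully susceptible at t = 0
St = rng.uniform(0.6, 1.0, n_age)            # depleted susceptibles at t_k

R0, beta_tk = 2.8, 0.5                       # hypothetical R_{0,r} and beta_{t_k,r}
R_tk = beta_tk * R0 * dominant_eigenvalue(C, St, d_I) / dominant_eigenvalue(C, S0, d_I)
print("regional R_t:", R_tk)

# 'All England' average (14): weight regional R_t by total incident infections.
R_regions = np.array([R_tk, 1.1, 0.9])       # hypothetical regional values
infections = np.array([500.0, 800.0, 300.0]) # hypothetical summed increments
print("R_t for England:", np.sum(R_regions * infections) / np.sum(infections))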
Appendix E: Priors for the COVID-19 model
The priors for the global and regional parameters of the COVID-19 model are listed in Table 1. We used the same priors as were used in [7]. Note that we also used the same prior for the volatility of both the piecewise-constant random-walk and the Brownian motion model of the transmission-potential.
Appendix F: Pseudocode of the MwG algorithm
The pseudocode listed in Algorithm 1 describes the Metropolis-within-Gibbs algorithm for sampling from the posterior distribution p(θ_g, θ_1, ..., θ_{n_r} | y_d, y_s) of the global θ_g and regional θ_1, ..., θ_{n_r} parameters of the COVID-19 model. For each parameter group θ_g, θ_1, ..., θ_{n_r} we use a proposal with a different set of parameters, adapted through the mechanism described in (9).
Appendix G: Goodness-of-fit as per regions of England
In Figures 10–16 we show the posterior predictive distributions of the number of deaths and the posterior distributions of the latent infections for each region, respectively. We have aggregated the results across ages.
Appendix H: Maximum mean discrepancy
For any given probability distribution P on a domain X, its kernel embedding is defined as µ_P := E_{X∼P}[k(·, X)] [8], an element of the reproducing kernel Hilbert space H associated with a positive definite kernel function k : X × X → R. Such an embedding exists for any P whenever k is bounded. Given two probability distributions P and Q, the maximum mean discrepancy (MMD) is the Hilbert-space distance between their kernel embeddings µ_P and µ_Q. Given two sets of samples {X_i}_{i=1}^n and {Y_i}_{i=1}^m from the distributions P and Q respectively, the MMD between P and Q is given by [9]

MMD²(P, Q) = ||µ_P − µ_Q||²_H
           = 1/(n(n−1)) Σ_{i=1}^n Σ_{j≠i} k(X_i, X_j) + 1/(m(m−1)) Σ_{i=1}^m Σ_{j≠i} k(Y_i, Y_j) − 2/(nm) Σ_{i=1}^n Σ_{j=1}^m k(X_i, Y_j).   (15)

MMD²(P, Q) = 0 if and only if P = Q, by the properties of kernel embeddings. The kernel embedding captures all the necessary information about a distribution [8]; thus the distance between two embeddings naturally highlights discrepancies, including in the tail regions of the distributions under comparison. In this paper we used an exponentiated quadratic kernel with length-scale hyperparameter ρ, which we set to the median distance among the samples.
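A direct numpy implementation of the unbiased estimator (15) follows; the exact kernel normalisation used in the paper is not reproduced here, so the convention k(x, y) = exp(−||x − y||²/ρ²) with the median heuristic is an illustrative assumption:

import numpy as np

def mmd2(X, Y):
    """Unbiased estimate of MMD^2(P, Q) from samples X ~ P and Y ~ Q,
    with k(x, y) = exp(-||x - y||^2 / rho^2), rho set by the median
    heuristic on the pooled samples."""
    Z = np.vstack([X, Y])
    D2 = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    rho2 = np.median(D2[np.triu_indices_from(D2, k=1)])   # squared median distance
    K = np.exp(-D2 / rho2)

    n, m = len(X), len(Y)
    Kxx, Kyy, Kxy = K[:n, :n], K[n:, n:], K[:n, n:]
    term_x = (Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
    return term_x + term_y - 2.0 * Kxy.mean()

rng = np.random.default_rng(3)
X = rng.normal(0.0, 1.0, (200, 1))
Y = rng.normal(0.5, 1.0, (150, 1))
print(mmd2(X, Y))   # > 0 for different distributions, near 0 when P = Q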
Algorithm 1: A random-scan adaptive Metropolis-within-Gibbs sampler

Input: number of iterations J; data y_d, y_s; optimal acceptance rate ᾱ.
Initialise the regional parameters θ⁰_1, ..., θ⁰_{n_r} and the global parameters θ⁰_g.
Initialise the regional proposal parameters λ⁰_1, ..., λ⁰_{n_r}, µ⁰_1, ..., µ⁰_{n_r} and Σ⁰_1, ..., Σ⁰_{n_r}.
Initialise the global proposal's parameters λ⁰_g, µ⁰_g and Σ⁰_g.
for j = 0 to J − 1 do
  Global move:
  1. Draw θ*_g ∼ N(θ^j_g, λ^j_g Σ^j_g) and set θ^{j+1}_g = θ*_g with probability α^{j+1}_g = min{1, p(θ*_g | y_d, y_s) / p(θ^j_g | y_d, y_s)}; otherwise θ^{j+1}_g = θ^j_g.
  Regional move:
  1. Draw r* ∼ Uniform(1, n_r).
  2. Draw θ*_{r*} ∼ N(θ^j_{r*}, λ^j_{r*} Σ^j_{r*}) and set θ^{j+1}_{r*} = θ*_{r*} with probability α^{j+1}_{r*} = min{1, p(θ*_{r*} | y_d, y_s) / p(θ^j_{r*} | y_d, y_s)}; otherwise θ^{j+1}_{r*} = θ^j_{r*}.
  3. Set θ^{j+1}_{n_r \ r*} = θ^j_{n_r \ r*}, where the symbol A \ a denotes all elements of the set A except a.
  Adaptation:
  1. Adapt the global proposal's parameters λ^{j+1}_g, µ^{j+1}_g, Σ^{j+1}_g using the recursions (9).
  2. Adapt the proposal's parameters λ^{j+1}_{r*}, µ^{j+1}_{r*}, Σ^{j+1}_{r*} for region r* using the recursions (9).
  3. Set λ^{j+1}_{n_r \ r*} = λ^j_{n_r \ r*}, µ^{j+1}_{n_r \ r*} = µ^j_{n_r \ r*} and Σ^{j+1}_{n_r \ r*} = Σ^j_{n_r \ r*}.
end for
Output: {θ^j_g, θ^j_1, ..., θ^j_{n_r}}_{j=0}^{J−1}.
Appendix I: Considering overdispersion while fitting the influenza dataset
In section 4 (main text), while fitting to the influenza dataset we considered a Poisson likelihood model for both the SA and SDE variants, given by Eq (14) and Eq (15) respectively. Such a likelihood formulation ignores overdispersion in the measurement process. Importantly, not allowing for overdispersion in the measurement process may lead to variability in the observations being compensated by spurious changes in the transmission-potential. We noticed in Figure 3 (b) (main text) that there is a noticeable increase in x_t, and thus in the transmission-potential. In order to rule out this increase being spurious, we re-fit both the SA and SDE variants in this section using a negative binomial likelihood. More specifically, we consider the following likelihood model for the SDE:
y_{t_i} | θ, x, X_0 ∼ NegBin(I_{t_i}, η),   i = 1, ..., m,   (17)

where η is an overdispersion parameter such that E[y_{t_i}] = I_{t_i} and Var(y_{t_i}) = I_{t_i}(1 + η); and for the SA:

y_{t_i} | θ, Z, X_0 ∼ NegBin(I_{t_i}, η).   (18)
We retained all the experimental settings used in section 4 and carried out inference while additionally estimating the overdispersion parameter η. We placed a Gamma(2, 5) prior on η. In Figure 7 we show the goodness-of-fit and the posterior distribution of the latent diffusion's sample path; in Figure 8 we plot the posterior distribution of the parameters. The main difference we noticed, in comparison to the Poisson likelihood model, is that the Poisson likelihood model produced a closer fit (see Figure 3 (a) in the main text) to the observation on the 9th day, for both the SA and SDE variants. Importantly, both the Poisson and negative binomial likelihood models picked up the increase in the transmission-potential between the 6th and the 9th day.
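The mean–overdispersion parameterisation in (17), with E[y] = I and Var(y) = I(1 + η), maps onto scipy's NegBin(n, p) parameterisation via p = 1/(1 + η) and n = I/η; a short sketch (the values of I and η below are illustrative):

from scipy.stats import nbinom

def negbin_logpmf(y, mean, eta):
    """Log-likelihood of y under NegBin with E[y] = mean and
    Var(y) = mean * (1 + eta). scipy's (n, p) satisfy
    mean = n(1-p)/p and var = n(1-p)/p^2, so p = 1/(1+eta), n = mean/eta."""
    p = 1.0 / (1.0 + eta)
    n = mean / eta
    return nbinom.logpmf(y, n, p)

print(negbin_logpmf(y=12, mean=10.0, eta=0.5))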
Figure 1: Simulated dataset 1: posterior marginal densities of the parameters obtained using the SDE and the SA (with n = 15 basis functions). These densities are summarised using a kernel density estimate. The black line in each plot indicates the generative parameter value.

Figure 2: Simulated dataset 2: posterior marginal densities.

Figure 3: Simulated dataset 3: posterior marginal densities.

Figure 4: Simulated dataset 1: goodness-of-fit (a); posterior distribution of the latent diffusion paths corresponding to the SDE and SA counterparts (b), with densities summarised by the mean (solid lines) and 95% credible intervals (broken lines); and samples from the posterior distribution of the latent diffusion paths, SDE (c) and SA (d).

Figure 5: Simulated dataset 2: comparison of the goodness-of-fit.

Figure 6: Simulated dataset 3: comparison of the goodness-of-fit.

Figure 7: Considering a negative binomial likelihood for fitting the influenza dataset: goodness-of-fit (a); posterior distribution of the latent diffusion paths corresponding to the SDE and SA counterparts (b), with densities summarised by the mean (solid lines) and 95% credible intervals (broken lines); and samples from the posterior distribution of the latent diffusion paths, SDE (c) and SA (d).

Figure 8: Considering a negative binomial likelihood for fitting the influenza dataset: posterior marginal densities of the parameters.

Figure 9: SIRS model: posterior marginal densities of the parameters.

Figure 10: Goodness-of-fit of daily death data (a) and the inferred latent infections (b), produced using the random-walk (magenta lines) and SAd (orange lines) for the region East of England. These densities are summarised by the mean (solid lines) and 95% credible intervals (broken lines). The black line indicates the day of lockdown in England, 23rd March 2020.

Figure 11: Goodness-of-fit of daily death data (a) and the inferred latent infections (b) for the region North West.

Figure 12: Goodness-of-fit of daily death data (a) and the inferred latent infections (b) for the region Midlands.

Figure 13: Goodness-of-fit of daily death data (a) and the inferred latent infections (b) for the region London.

Figure 14: Goodness-of-fit of daily death data (a) and the inferred latent infections (b) for the region North East and Yorkshire.

Figure 15: Goodness-of-fit of daily death data (a) and the inferred latent infections (b) for the region South East.

Figure 16: Goodness-of-fit of daily death data (a) and the inferred latent infections (b) for the region South West.
Table 1: Model parameters with assumed prior distributions or fixed values, as used in [7].

Name: Prior or fixed value
Over-dispersion, η: Uninformative Gamma(1, 0.2)
Mean infectious period, d_I: 2 + Gamma(1.43, 0.549)
Infection-fatality rate for age < 5, p_1: Beta(1, 62110.8012)
Infection-fatality rate for age 5–14, p_2: Beta(1, 23363.4859)
Infection-fatality rate for age 15–24, p_3: Beta(1, 5290.0052)
Infection-fatality rate for age 25–44, p_4: Beta(1, 1107.6474)
Infection-fatality rate for age 45–64, p_5: Beta(1, 120.9512)
Infection-fatality rate for age 65–74, p_6: Beta(1, 31.1543)
Infection-fatality rate for age > 74, p_7: Beta(9.5, 112)
Serological test sensitivity, k_sens: Beta(71.5, 29.5)
Serological test specificity, k_spec: Beta(777.5, 9.5)
Exponential growth, ψ_r: Gamma(31.36, 224)
Log of initial infectives, log I_{0,r}: N(−17.5, 1.25²)
Volatility of transmission-potential, σ_βw, σ_βt: Gamma(1, 100)
Mean latent period, d_L: 3 days (fixed, not estimated)
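For convenience, Table 1 can be transcribed into scipy.stats objects; note that we assume shape–rate conventions for the Gamma priors and (a, b) shape conventions for the Beta priors, which may differ from the conventions intended in [7]:

from scipy.stats import gamma, beta, norm

# Assumed conventions: Gamma(shape, rate) -> gamma(a=shape, scale=1/rate);
# Beta(a, b) -> beta(a, b); N(mu, sd^2) -> norm(mu, sd).
priors = {
    "eta":        gamma(a=1.0, scale=1.0 / 0.2),      # over-dispersion
    "d_I":        gamma(a=1.43, scale=1.0 / 0.549),   # plus an offset of 2 days
    "p_75plus":   beta(9.5, 112.0),                   # infection-fatality, age > 74
    "k_sens":     beta(71.5, 29.5),                   # serological sensitivity
    "k_spec":     beta(777.5, 9.5),                   # serological specificity
    "psi_r":      gamma(a=31.36, scale=1.0 / 224.0),  # exponential growth
    "log_I0":     norm(-17.5, 1.25),                  # log initial infectives
    "sigma_beta": gamma(a=1.0, scale=1.0 / 100.0),    # volatility of transmission
}

for name, dist in priors.items():
    print(f"{name}: prior mean = {dist.mean():.4g}")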
Appendix J: Further details of the SIRS model parameters

We used the following parameter and initial values, following [10], to generate the simulated dataset: 1/µ = 50 × 365, 1/α = 7 × 365, 1/γ = 14, β₀ = 0.65, β₁ = 0.4, β₂ = −0.2, N = 10000, S₀ = 600, I₀ = 30. We placed the following priors on the estimated parameters and initial values: β₀ ∼ U(0.1, 0.7), σ ∼ U(0, 0.06), 1/α ∼ N(2555, 120²), 1/γ ∼ N(14, 1.05²), S₀ ∼ U(500, 700), I₀ ∼ U(27, 60). In Figure 9 we plot the posterior distribution of the parameters.
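A deterministic SIRS skeleton with these parameters can be integrated in a few lines; the sinusoidal seasonal-forcing form of β(t) below is our own illustrative assumption (the stochastic time-varying β_t of [10] is more general), so this is a sketch rather than the generating model:

import numpy as np
from scipy.integrate import solve_ivp

# Parameter values from Appendix J (rates per day).
mu, alpha, gamma_ = 1 / (50 * 365), 1 / (7 * 365), 1 / 14
beta0, beta1, beta2 = 0.65, 0.4, -0.2
N, S0, I0 = 10_000, 600.0, 30.0

def beta_t(t):
    # Illustrative seasonal forcing; the exact form in [10] may differ.
    return beta0 * (1.0 + beta1 * np.cos(2 * np.pi * t / 365.0 + beta2))

def sirs(t, y):
    S, I = y
    R = N - S - I
    dS = mu * N - beta_t(t) * S * I / N - mu * S + alpha * R
    dI = beta_t(t) * S * I / N - gamma_ * I - mu * I
    return [dS, dI]

sol = solve_ivp(sirs, (0.0, 5 * 365.0), [S0, I0], max_step=1.0)
print("final (S, I):", sol.y[:, -1])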
[1] Wuan Luo. Wiener chaos expansion and numerical solutions of stochastic partial differential equations. PhD thesis, California Institute of Technology, 2006.
[2] Simon M. J. Lyons, Amos J. Storkey, and Simo Särkkä. The coloured noise expansion and parameter estimation of diffusion processes. In Advances in Neural Information Processing Systems 25 (NIPS 2012), pages 1961–1969, 2012.
[3] Christophe Andrieu and Johannes Thoms. A tutorial on adaptive MCMC. Statistics and Computing, 18(4):343–373, 2008.
[4] Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of Mathematical Statistics, pages 400–407, 1951.
[5] Ross H. Johnstone, Eugene T. Y. Chang, Rémi Bardenet, Teun P. de Boer, David J. Gavaghan, Pras Pathmanathan, Richard H. Clayton, and Gary R. Mirams. Uncertainty and variability in models of the cardiac action potential: Can we build trustworthy models? Journal of Molecular and Cellular Cardiology, 96:49–62, 2016.
[6] Helen J. Wearing, Pejman Rohani, and Matt J. Keeling. Appropriate models for the management of infectious diseases. PLoS Medicine, 2(7):e174, 2005.
[7] Paul Birrell, Joshua Blake, Edwin van Leeuwen, Nick Gent, and Daniela De Angelis. Real-time nowcasting and forecasting of COVID-19 dynamics in England: the first wave. Philosophical Transactions of the Royal Society B, 376(1829):20200279, 2021.
[8] Krikamol Muandet, Kenji Fukumizu, Bharath Sriperumbudur, and Bernhard Schölkopf. Kernel mean embedding of distributions: A review and beyond. Foundations and Trends in Machine Learning, 10(1-2):1–141, 2017.
[9] Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. Journal of Machine Learning Research, 13:723–773, 2012.
[10] Bernard Cazelles, Clara Champagne, and Joseph Dureau. Accounting for non-stationarity in epidemiology by embedding time-varying parameters in stochastic models. PLoS Computational Biology, 14(8):e1006211, 2018.
| [] |
[
"IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 1 Fuzzy Attention Neural Network to Tackle Discontinuity in Airway Segmentation",
"IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 1 Fuzzy Attention Neural Network to Tackle Discontinuity in Airway Segmentation",
"IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 1 Fuzzy Attention Neural Network to Tackle Discontinuity in Airway Segmentation",
"IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 1 Fuzzy Attention Neural Network to Tackle Discontinuity in Airway Segmentation"
] | [
"Yang Nan \nNational Heart and Lung Institute\nImperial College London\nLondonUK\n",
"Javier Del Ser \nDepartment of Communications Engineering\nUniversity of the Basque Country UPV/EHU\nBilbaoSpain\n\nTECNALIA\nBasque Research and Technology Alliance (BRTA)\nDerioSpain\n",
"Senior Member, IEEEZeyu Tang \nNational Heart and Lung Institute\nImperial College London\nLondonUK\n",
"Peng Tang \nDepartment of Informatics\nTechnical University of Munich\n\n",
"Xiaodan Xing \nNational Heart and Lung Institute\nImperial College London\nLondonUK\n",
"Yingying Fang \nNational Heart and Lung Institute\nImperial College London\nLondonUK\n",
"Francisco Herrera \nDepartment of Computer Sciences and Artificial Intelligence\nAndalusian Research Institute in Data Science and Computational Intelligence (DaSCI) University of Granada\nGranadaSpain\n\nFaculty of Computing and Information Technology\nKing Abdulaziz Uni-versity\n21589JeddahSaudi Arabia\n",
"Senior Member, IEEEWitold Pedrycz \nDepartment of Electrical and Computer Engineering\nUniversity of Alberta\nEdmontonCanada\n\nDepartment of Electrical and Computer Engineering\nFaculty of Engineer-ing\nKing Abgudulaziz University\nJeddahSaudi Arabia\n\nThe Systems Research Institute\nPolish Academy of Sciences\nWarsawPoland\n",
"Fellow, IEEESimon Walsh \nNational Heart and Lung Institute\nImperial College London\nLondonUK\n\nRoyal Brompton Hospital\nSydney StreetLondonUK\n",
"Guang Yang [email protected] \nNational Heart and Lung Institute\nImperial College London\nLondonUK\n\nRoyal Brompton Hospital\nSydney StreetLondonUK\n",
"S Walsh ",
"G Yang ",
"Yang Nan \nNational Heart and Lung Institute\nImperial College London\nLondonUK\n",
"Javier Del Ser \nDepartment of Communications Engineering\nUniversity of the Basque Country UPV/EHU\nBilbaoSpain\n\nTECNALIA\nBasque Research and Technology Alliance (BRTA)\nDerioSpain\n",
"Senior Member, IEEEZeyu Tang \nNational Heart and Lung Institute\nImperial College London\nLondonUK\n",
"Peng Tang \nDepartment of Informatics\nTechnical University of Munich\n\n",
"Xiaodan Xing \nNational Heart and Lung Institute\nImperial College London\nLondonUK\n",
"Yingying Fang \nNational Heart and Lung Institute\nImperial College London\nLondonUK\n",
"Francisco Herrera \nDepartment of Computer Sciences and Artificial Intelligence\nAndalusian Research Institute in Data Science and Computational Intelligence (DaSCI) University of Granada\nGranadaSpain\n\nFaculty of Computing and Information Technology\nKing Abdulaziz Uni-versity\n21589JeddahSaudi Arabia\n",
"Senior Member, IEEEWitold Pedrycz \nDepartment of Electrical and Computer Engineering\nUniversity of Alberta\nEdmontonCanada\n\nDepartment of Electrical and Computer Engineering\nFaculty of Engineer-ing\nKing Abgudulaziz University\nJeddahSaudi Arabia\n\nThe Systems Research Institute\nPolish Academy of Sciences\nWarsawPoland\n",
"Fellow, IEEESimon Walsh \nNational Heart and Lung Institute\nImperial College London\nLondonUK\n\nRoyal Brompton Hospital\nSydney StreetLondonUK\n",
"Guang Yang [email protected] \nNational Heart and Lung Institute\nImperial College London\nLondonUK\n\nRoyal Brompton Hospital\nSydney StreetLondonUK\n",
"S Walsh ",
"G Yang "
] | [
"National Heart and Lung Institute\nImperial College London\nLondonUK",
"Department of Communications Engineering\nUniversity of the Basque Country UPV/EHU\nBilbaoSpain",
"TECNALIA\nBasque Research and Technology Alliance (BRTA)\nDerioSpain",
"National Heart and Lung Institute\nImperial College London\nLondonUK",
"Department of Informatics\nTechnical University of Munich\n",
"National Heart and Lung Institute\nImperial College London\nLondonUK",
"National Heart and Lung Institute\nImperial College London\nLondonUK",
"Department of Computer Sciences and Artificial Intelligence\nAndalusian Research Institute in Data Science and Computational Intelligence (DaSCI) University of Granada\nGranadaSpain",
"Faculty of Computing and Information Technology\nKing Abdulaziz Uni-versity\n21589JeddahSaudi Arabia",
"Department of Electrical and Computer Engineering\nUniversity of Alberta\nEdmontonCanada",
"Department of Electrical and Computer Engineering\nFaculty of Engineer-ing\nKing Abgudulaziz University\nJeddahSaudi Arabia",
"The Systems Research Institute\nPolish Academy of Sciences\nWarsawPoland",
"National Heart and Lung Institute\nImperial College London\nLondonUK",
"Royal Brompton Hospital\nSydney StreetLondonUK",
"National Heart and Lung Institute\nImperial College London\nLondonUK",
"Royal Brompton Hospital\nSydney StreetLondonUK",
"National Heart and Lung Institute\nImperial College London\nLondonUK",
"Department of Communications Engineering\nUniversity of the Basque Country UPV/EHU\nBilbaoSpain",
"TECNALIA\nBasque Research and Technology Alliance (BRTA)\nDerioSpain",
"National Heart and Lung Institute\nImperial College London\nLondonUK",
"Department of Informatics\nTechnical University of Munich\n",
"National Heart and Lung Institute\nImperial College London\nLondonUK",
"National Heart and Lung Institute\nImperial College London\nLondonUK",
"Department of Computer Sciences and Artificial Intelligence\nAndalusian Research Institute in Data Science and Computational Intelligence (DaSCI) University of Granada\nGranadaSpain",
"Faculty of Computing and Information Technology\nKing Abdulaziz Uni-versity\n21589JeddahSaudi Arabia",
"Department of Electrical and Computer Engineering\nUniversity of Alberta\nEdmontonCanada",
"Department of Electrical and Computer Engineering\nFaculty of Engineer-ing\nKing Abgudulaziz University\nJeddahSaudi Arabia",
"The Systems Research Institute\nPolish Academy of Sciences\nWarsawPoland",
"National Heart and Lung Institute\nImperial College London\nLondonUK",
"Royal Brompton Hospital\nSydney StreetLondonUK",
"National Heart and Lung Institute\nImperial College London\nLondonUK",
"Royal Brompton Hospital\nSydney StreetLondonUK"
] | [] | Airway segmentation is crucial for the examination, diagnosis, and prognosis of lung diseases, while its manual delineation is unduly burdensome. To alleviate this time-consuming and potentially subjective manual procedure, researchers have proposed methods to automatically segment airways from computerized tomography (CT) images. However, some small-sized airway branches (e.g., bronchus and terminal bronchioles) significantly aggravate the difficulty of automatic segmentation by machine learning models. In particular, the variance of voxel values and the severe data imbalance in airway branches make the computational module prone to discontinuous and false-negative predictions. especially for cohorts with different lung diseases. Attention mechanism has shown the capacity to segment complex structures, while fuzzy logic can reduce the uncertainty in feature representations. Therefore, the integration of deep attention networks and fuzzy theory, given by the fuzzy attention layer, should be an escalated solution for better generalization and robustness. This paper presents an efficient method for airway segmentation, comprising a novel fuzzy attention neural network and a comprehensive loss function to enhance the spatial continuity of airway segmentation. The deep fuzzy set is formulated by a set of voxels in the feature map and a learnable Gaussian membership function. Different from the existing attention mechanism, the proposed channel-specific fuzzy attention addresses the issue of heterogeneous features in different channels. Furthermore, a novel evaluation metric is proposed to assess both the continuity and completeness of airway structures. The efficiency, generalization and robustness of the proposed method have been proved by training on normal lung disease while testing on datasets of lung cancer, COVID-19 and pulmonary fibrosis. | 10.1109/tnnls.2023.3269223 | [
"https://export.arxiv.org/pdf/2209.02048v2.pdf"
] | 252,089,332 | 2209.02048 | 930b72a4a087b83a215c4b4d47490d52692632ea |
Fuzzy Attention Neural Network to Tackle Discontinuity in Airway Segmentation
Yang Nan, Javier Del Ser, Senior Member, IEEE, Zeyu Tang, Peng Tang, Xiaodan Xing, Yingying Fang, Francisco Herrera, Senior Member, IEEE, Witold Pedrycz, Fellow, IEEE, Simon Walsh, and Guang Yang

Yang Nan, Zeyu Tang, Xiaodan Xing, Yingying Fang, Simon Walsh, and Guang Yang are with the National Heart and Lung Institute, Imperial College London, London, UK. Simon Walsh and Guang Yang are also with the Royal Brompton Hospital, Sydney Street, London, UK (Guang Yang: [email protected]).
Javier Del Ser is with the Department of Communications Engineering, University of the Basque Country UPV/EHU, Bilbao, Spain, and with TECNALIA, Basque Research and Technology Alliance (BRTA), Derio, Spain.
Peng Tang is with the Department of Informatics, Technical University of Munich.
Francisco Herrera is with the Department of Computer Sciences and Artificial Intelligence, Andalusian Research Institute in Data Science and Computational Intelligence (DaSCI), University of Granada, Granada, Spain, and with the Faculty of Computing and Information Technology, King Abdulaziz University, 21589 Jeddah, Saudi Arabia.
Witold Pedrycz is with the Department of Electrical and Computer Engineering, University of Alberta, Edmonton, Canada, with the Department of Electrical and Computer Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah, Saudi Arabia, and with the Systems Research Institute, Polish Academy of Sciences, Warsaw, Poland.
Index Terms-Airway segmentation, fuzzy neural networks, fuzzy attention, COVID-19, pulmonary fibrosis
Abstract-Airway segmentation is crucial for the examination, diagnosis, and prognosis of lung diseases, while its manual delineation is unduly burdensome. To alleviate this time-consuming and potentially subjective manual procedure, researchers have proposed methods to automatically segment airways from computerized tomography (CT) images. However, some small-sized airway branches (e.g., bronchi and terminal bronchioles) significantly aggravate the difficulty of automatic segmentation by machine learning models. In particular, the variance of voxel values and the severe data imbalance in airway branches make the computational module prone to discontinuous and false-negative predictions, especially for cohorts with different lung diseases. The attention mechanism has shown the capacity to segment complex structures, while fuzzy logic can reduce the uncertainty in feature representations. Therefore, the integration of deep attention networks and fuzzy theory, given by the fuzzy attention layer, should be an improved solution for better generalization and robustness. This paper presents an efficient method for airway segmentation, comprising a novel fuzzy attention neural network and a comprehensive loss function to enhance the spatial continuity of airway segmentation. The deep fuzzy set is formulated by a set of voxels in the feature map and a learnable Gaussian membership function. Different from existing attention mechanisms, the proposed channel-specific fuzzy attention addresses the issue of heterogeneous features in different channels. Furthermore, a novel evaluation metric is proposed to assess both the continuity and completeness of airway structures. The efficiency, generalization and robustness of the proposed method have been proved by training on scans of normal lungs while testing on datasets of lung cancer, COVID-19 and pulmonary fibrosis.
Fig. 1. Current challenges and our solution for airway segmentation. (a) The heterogeneity (yellow boxes) of the intensity of airway structures and the excessive data imbalance (red box), which lead to discontinuous predictions (green boxes) and leakages (white box); (b) this paper presents a fuzzy attention neural network to help the module better learn robust features, a JCAM (Jaccard continuity and accumulation mapping) loss for optimization, and a novel metric that assesses the completeness and continuity of airway predictions.
I. INTRODUCTION
Pulmonary fibrosis occurs when the lung tissue becomes scarred and damaged, resulting in symptoms such as dyspnea, and is responsible for 1% of all deaths in the UK. A challenge in managing patients with fibrotic lung disease is that there are currently no reliable means of predicting progressive disease from baseline data. Early identification of progressive fibrotic lung disease would allow clinicians to initiate therapies to slow or prevent progression at the earliest opportunity, without having to delay intervention until progression has been clinically observed. The lack of reliable discriminatory data at baseline is possibly the most urgent unmet challenge for the effective management of patients with pulmonary fibrosis [1]-[3].
High resolution computed tomography (HRCT) of the chest is a key initial investigation in patients with suspected fibrotic lung disease and has been the focus of extensive biomarker research over the past 30 years. One HRCT biomarker which strongly predicts outcome in several distinct fibrotic lung subsets based on visual assessment is the severity of traction bronchiectasis [4]-[6]. Traction bronchiectasis is the abnormal dilatation of the tracheobronchial tree due to surrounding fibrosis. However, visual quantification of disease on computed tomography is liable to significant interobserver variability and poor reproducibility, and is relatively insensitive to subtle but clinically important changes over short follow-up periods [7]. This provides the rationale for developing objective computer-based methods for disease quantification on HRCT [8]-[10]. Such methods can support many applications capable of accurately quantifying lung diseases on HRCT; among them, we focus on airway segmentation, which refers to computer-assisted methods that allow an automated evaluation of the tracheobronchial tree. Unfortunately, accurate airway segmentation remains elusive due to the high complexity of airway structures and their inconsistent boundaries, especially for patients with fibrosis or COVID-19.
The feasibility of using conventional methods (e.g., morphology, intensity thresholding [11], region growing [12]-[14], fuzzy connectedness [15]) for airway segmentation was reported in the earlier EXACT'09 challenge [16]. However, these approaches were shown to suffer from weak robustness and coarse predictions due to the high complexity of bronchus patterns. For the airway segmentation task, there are five major challenges: (1) the imbalanced distribution of foreground and background samples; (2) the homogeneity of voxel values between airway structures and normal lung regions; (3) the heterogeneity of voxel values in manual annotations and incorrect labelling; (4) the weak reliability of evaluation metrics for module selection; and (5) the lack of generalization and robustness in clinical practice. Firstly, the hugely imbalanced distribution of airway trees and other structures significantly aggravates the difficulty of training a data-driven deep network. Most 3D neural networks are trained on patch datasets, yet the criteria for extracting those patches are rarely examined: most studies simply extracted patches through a sliding-window mechanism while ignoring other sampling approaches. Secondly, the voxel values of normal lung regions and airway trees are similar, which requires the network to learn spatial and structural information beyond voxel intensities. Thirdly, the inevitable incorrect manual annotations and the variance of voxel values in airway regions also demand a highly robust module. Moreover, inappropriate metrics for module selection hinder the saving of the best trainable weights. For example, most researchers employed intersection over union (IoU) or the Dice score as the metric to save the best model; unfortunately, the imbalanced voxel volumes of the main trachea and small branches make these overlap metrics unreliable. Last but not least, most modules suffer from weak generalization and robustness, and require further fine-tuning for cohorts with different diseases.
Recently, deep neural networks have dominated such tasks and have become a common solution for biomedical image segmentation (e.g., the well-known nnUNet [17] and V-Net [18]). Building on the high modelling capacity of Convolutional Neural Networks (CNNs), researchers have made substantial progress in airway segmentation [19]-[23]. However, these methods still face the challenge of low continuity and completeness of airway predictions. Among deep learning techniques, the attention mechanism [24] has shown a high capacity to segment complex structures. Meanwhile, studies have shown that by implementing fuzzy logic in deep neural networks, the uncertainty in feature representations can be reduced, and significant improvements have been observed [25]-[27]. Therefore, we hypothesize that the combination of a deep attention mechanism and fuzzy theory should be able to tackle the challenges of accurate airway segmentation.
This paper presents an efficient scheme for accurate airway segmentation, including 1) smart patch sampling criteria, 2) a novel fuzzy attention neural network (FANN) for segmentation, and 3) a comprehensive loss function to enhance the continuity of airway predictions. Different from previous methods that integrated fuzzy logic with deep learning through sequential learning paradigms (e.g., multi-module embedding or feature representation fusion), this paper incorporates fuzzy theory and the attention mechanism into a single fuzzy attention layer. The attention map in the fuzzy attention layer is formulated by a set of voxels in the feature map and a learnable Gaussian membership function, and is channel-specific, in contrast to current channel-homogeneous attention mechanisms. This fuzzy module helps the network focus on the relevant feature representations while reducing their uncertainty, improving the module's generalization and robustness. Meanwhile, an aggregative loss function comprising an airway continuity loss, an accumulation mapping loss, and an overlap (Jaccard) loss is proposed. The airway continuity loss assesses the airway centerlines, whereas the accumulation mapping loss assesses the 3D prediction by estimating the error of projections along different axes. Furthermore, a novel evaluation metric, the CCF-score (continuity and completeness F-score), is proposed to assess both the continuity and completeness of airway predictions.
Comprehensive experimental studies were conducted on an open-access dataset (90 cases) and our in-house datasets (50 cases), including normal scans and patients with mild anomalies, lung cancer, fibrosis, and COVID-19. Performance is assessed by calculating the completeness (Dice coefficient score) and continuity (detected length and detected branches) of the airway trees. In addition, the radius of each bronchus in the airway trees is estimated for a better understanding of the disease etiology. Previews of the segmentation results are also presented to provide a direct comparison.
The main contributions of this paper can be summarized as follows:
• A smart patch sampling strategy is proposed to select appropriate patches for training deep networks.
• A novel channel-specific fuzzy attention layer that incorporates attention mechanisms and fuzzy logic for 3D segmentation. The deep fuzzy set is formulated by a set of voxels in the feature map and learnable Gaussian membership functions (MFs). To the best of our knowledge, this is the first attempt to adopt fuzzy logic in an attention layer.
• A comprehensive loss function, including an airway continuity loss and an accumulation mapping loss, is proposed to enhance the continuity and completeness of bronchus predictions.
• A novel evaluation metric, the CCF-score, for module selection.
• Comprehensive evaluations of the proposed method and comparisons on different lung diseases, including pulmonary fibrosis, lung cancer and COVID-19.
The rest of this paper is organised as follows. The related works in airway segmentation and fuzzy logic are summarised in Section II. Details of the proposed method are illustrated in Section III. The experimental settings and results are described in Section IV. Sections V and VI present the discussion and conclusion of this study.
II. RELATED WORKS
A. Airway Segmentation in HRCT
Segmentation of airway trees has always been challenging, due to the issues stated in the introduction. For example, the results of EXACT'09 demonstrated that no algorithm was capable of extracting more than an average of 74% of the total length of all branches in the reference. To obtain better performance, Charbonnier et al. [19] designed a 2D CNN to detect airway leakages for post-processing. Yun et al. [20] presented a 2.5D CNN that takes three adjacent slices in each orthogonal direction (axial, sagittal, coronal) as inputs to increase the spatial information. In addition to 2D and 2.5D architectures, 3D CNNs [21]-[23] became a popular solution due to the better consistency and integrity of their predictions. Some researchers regarded airway segmentation as a class imbalance problem and proposed approaches to reduce the interference of background regions. For instance, both Garcia-Uceda et al. [28] and Qin et al. [29] cropped the full-size volumes to a bounding box based on the pre-segmented lung region to remove irrelevant background. Juarez et al. [30] applied a weighted cross-entropy loss to balance the foreground and background samples. Zheng et al. [31] trained the network on sampled patches in a first stage and selected hard-to-segment regions based on the false-negative predictions; the network was then fine-tuned in a second stage to further improve the performance.
Others aimed to improve the prediction results by addressing discontinuity and airway leakages. For example, Qin et al. [22] proposed a 3D UNet with voxel-connectivity awareness to segment airways. Meng et al. [32] and Reynisson et al. [33] improved the continuity of predictions by learning the airway centrelines. Nadeem et al. [34] proposed a 3D UNet with freeze-and-grow propagation to reduce airway leakages and improve the accuracy of detected branches. Zheng et al. [31] analysed and explained inaccurate segmentation in terms of gradient erosion and dilation of adjacent voxels. Wang et al. [35] presented a radial distance loss to enhance the completeness of tubular predictions.
However, even with these efforts, discontinuous predictions still exist, especially for those small airway branches.
Moreover, most of these previous methods require a multi-stage training protocol or complex post-processing procedures, which aggravates the computational and time costs in practical use.
B. Fuzzy logic in Deep Semantic Segmentation
In addition to the well-known fuzzy clustering approaches [36], [37] for segmentation, there were many earlier attempts to combine deep neural networks and fuzzy logic through a sequential learning protocol, for instance, using neural networks to extract low-dimensional feature representations and applying fuzzified representations for classification [38]. Unfortunately, these multi-stage approaches cannot be trained end-to-end and did not integrate fuzzy theory into the training of deep neural networks.
To address this issue, some researchers integrated fuzzy logic with deep neural networks [25] and proposed an end-to-end hierarchical scheme for deep fuzzy neural networks. With these efforts, researchers have presented approaches combining fuzzy logic with fully convolutional networks for segmentation [26], [39], [40]. For instance, Price et al. [39] proposed a flexible and capable fuzzy layer to utilize the powerful aggregation of fuzzy integrals. Huang et al. [40] proposed a fuzzy fully convolutional network for breast ultrasound image segmentation by transforming the data into the fuzzy domain. Guan et al. [26] proposed a fuzzy CNN for lip segmentation, introducing a fuzzy logic module to enhance the robustness of segmentation results. Ding et al. [41] presented fuzzy-enabled multi-scale learning to segment brain tumours from T1 and T2 scans simultaneously. However, most existing methods adopted a fuzzy integral within a residual block for information fusion, which cannot address the main challenges of airway segmentation (as shown in the experiments section). Besides, most previous studies applied only the 'AND' aggregation operator to membership functions, without exploring other operators.
C. Evaluation Metrics of Airway Predictions
Given the semantic prediction X and the ground truth annotation Y, the main evaluation metrics for airway segmentation include overlap metrics, the detected branch ratio, the detected length ratio and the airway leakage ratio, given as follows:

1) Overlap metrics measure the percentage overlap between the prediction and the ground truth, including the Dice coefficient score and the Jaccard index score (or intersection-over-union score):

$$\mathrm{Jaccard}(X, Y) = \frac{XY}{X + Y - XY}, \tag{1}$$

$$\mathrm{Dice}(X, Y) = \frac{2XY}{X + Y}, \tag{2}$$
where XY is calculated as the dot product of X and Y.

2) The detected branch ratio (DBR) measures the proportion of detected branches with respect to the ground truth annotations:

$$\mathrm{DBR} = \frac{N_X}{N_Y} \times 100\%, \tag{3}$$
where N_X is the number of branches that have been correctly recognized and N_Y is the number of branches in the ground truth annotation. In this study, branches with an intersection-over-union (IoU) score greater than 0.8 are regarded as correctly identified.

3) The detected length ratio (DLR) measures the proportion of detected branch length with respect to that of the ground truth annotations:

$$\mathrm{DLR} = \frac{L_X}{L_Y} \times 100\%, \tag{4}$$
where L_X is the total length of branches that have been correctly recognized and L_Y is the total length of branches in the ground truth annotation.

4) The airway leakage ratio (ALR) refers to the proportion of total false-positive volume with respect to the ground truth annotations:

$$\mathrm{ALR} = \frac{V_X}{V_Y} \times 100\%, \tag{5}$$
where V_X is the volume of false-positive predictions and V_Y is the volume of the ground truth annotation. Though airway segmentation can be evaluated through these metrics, none of them can be used independently for assessment.
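As a concrete reading of Eqs. (1), (2) and (5), the following is a minimal NumPy sketch of the overlap and leakage metrics on binary volumes. The function names are ours; the DBR and DLR additionally require skeletonization and branch labelling, which are omitted here.

```python
import numpy as np


def jaccard(x, y):
    """Eq. (1): intersection over union of two binary volumes."""
    inter = np.logical_and(x, y).sum()
    return inter / (x.sum() + y.sum() - inter)


def dice(x, y):
    """Eq. (2): Dice coefficient of two binary volumes."""
    inter = np.logical_and(x, y).sum()
    return 2.0 * inter / (x.sum() + y.sum())


def airway_leakage_ratio(x, y):
    """Eq. (5): false-positive volume relative to the ground-truth volume."""
    fp = np.logical_and(x, np.logical_not(y)).sum()
    return 100.0 * fp / y.sum()
```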
III. METHODOLOGY
This section details the proposed scheme for airway segmentation, comprising an overview of FANN, the smart patch sampling, the fuzzy attention layer, the JCAM (Jaccard continuity and accumulation mapping) loss and the CCF-score (continuity and completeness F-score). Given an input raw 3D volume V ∈ R^{W×H×Z}, a smart patch sampling strategy is first applied to extract appropriate patches V_P ∈ R^{w×h×c} for training and validation. Then, the fuzzy attention neural network (FANN) segments the airway trees under the supervision of a comprehensive loss function (including the Jaccard, airway continuity and accumulation mapping losses) on V_P. During training, the CCF-score is used to assess the model performance and save the best values of the trainable network parameters.
A. Overview
The proposed FANN is built on the 3D U-Net by adding the novel fuzzy attention layer, deep supervision, and the Jaccard continuity and accumulation mapping (JCAM) loss. The size of the volume patches V_P is set to the same ratio as the median shape of the ground truth annotations in the training data. FANN includes 3D convolution layers, instance normalization (IN), LeakyReLU (LReLU), 3D transposed convolution layers, fuzzy attention layers and sigmoid activation layers, as shown in Fig. 2. Note that the proposed network has multiple outputs, including the main output and three low-level outputs. The final prediction is given by the main output, while the low-level outputs are collected for deep supervision through auxiliary losses (computed in the same way as for the main output, with different weights). When calculating these auxiliary losses, each low-level prediction is upsampled to align with the size of the ground truth masks.
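A minimal sketch of this deep-supervision aggregation is given below. The auxiliary weights are illustrative assumptions, since the paper only states that the auxiliary losses use different weights; the loss function handle is a placeholder for the JCAM loss of Section III-D.

```python
import torch
import torch.nn.functional as F


def deep_supervision_loss(main_out, aux_outs, target, loss_fn,
                          aux_weights=(0.5, 0.25, 0.125)):
    """Main loss plus weighted auxiliary losses on upsampled low-level outputs."""
    total = loss_fn(main_out, target)
    for w, aux in zip(aux_weights, aux_outs):
        # Upsample each low-level prediction to the ground-truth mask size
        up = F.interpolate(aux, size=target.shape[2:],
                           mode="trilinear", align_corners=False)
        total = total + w * loss_fn(up, target)
    return total
```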
B. Smart Patch Sampling (SPS)
Deep learning is a data-driven method that relies heavily on large amounts of data, so a balanced class distribution is crucial for training a neural network. A huge imbalance between foreground and background samples is prone to the gradient erosion issue [31]. Due to the large volume arrays in HRCT and GPU memory limitations, segmentation modules for HRCT are mainly based on patch training strategies. However, most studies simply extracted patches through sliding windows, which cannot fully address the class imbalance. Although some researchers alleviated class imbalance by extracting hard samples [31], [42], these methods required a further training process to collect the hard samples. In this study, we present smart patch sampling (SPS) criteria for patch extraction that do not require further training or fine-tuning. Given a dataset comprising 3D images and their corresponding annotations, the workflow of SPS can be summarized as:
1) The average size S (z × y × x) of the 3D minimum bounding volume of the ground truth annotations is first calculated. The patch size is set to the same scale as S.
2) The centerline of each manual annotation is extracted by skeletonization [43].
3) Overlapping sliding windows are adopted to extract image patches, mask patches, and centerline patches.
4) Patches with a centerline ratio (the proportion of the centerline voxels in a patch with respect to all centerline voxels) larger than 15% or a voxel volume ratio (the proportion of the ground truth voxels in a patch with respect to all ground truth voxels) larger than 10% are kept.
C. Fuzzy Attention Layer
One challenge in obtaining a well-trained module for airway segmentation is the uncertainty of annotations and voxel values within the airway region. There have been efforts to make the network focus more on the relevant regions. For instance, the Attention U-Net [24] proposes an attention gate that improves accuracy by suppressing feature activations in irrelevant regions. However, we deem that the sigmoid activation may not be the best choice for building the attention gate.
In addition to the "gradient vanishing" problem, one major concern with the sigmoid activation function is its sharpness: only a small input interval yields output values strictly between 0 and 1. It is therefore difficult to find a robust "boundary" in the sigmoid activation that distinguishes whether a feature is relevant or not. Another issue is monotonicity: similar to the raw intensity distribution of airway structures, which varies in both negative and positive directions, the feature representation of airway regions should also vary on both sides. However, to align with the sigmoid activation function, the 1 × 1 convolution layers must learn to shift the values of all features of interest to a single side to obtain a positive response. In particular, there must exist a certain "threshold" in the feature reconstitution (usually accomplished by the 1 × 1 convolutional layers in the attention layer) to determine whether a region is important or not. Moreover, the non-channel-specific nature of current attention maps assigns the same "attention" coefficient to all feature points along the channel dimension. Specifically, given a feature representation F ∈ R^{C×W×H×D}, the existing attention map is formulated as α ∈ R^{W×H×D}, so all the feature representations along the channel dimension C share the same 'importance'. This mechanism is unreliable since the feature representations in different channels are extracted by different convolution kernels; therefore, we advocate that the attention map be channel-specific.
Different from the sigmoid activation function, which is fixed and sharp, we believe the Gaussian function is more suitable for formulating the attention gate, due to its symmetry and flexibility (the mean and variance of the Gaussian function can be set as learnable parameters). Studies have shown that both fuzzy logic and neural networks are efficient for data representation [25]. In general, a neural network aims to reduce noise in the original data to extract useful feature representations, whereas fuzzy logic is used to obtain fuzzy representations that reduce the uncertainty in the original data. Therefore, we combine fuzzy logic with the attention mechanism using trainable Gaussian membership functions to better help the segmentation network focus on the relevant regions while reducing the uncertainty and variations of the data representations.
The proposed fuzzy attention layer is adopted within the skip connection, taking the feature representations from the l-th encoder layer e_l and decoder layer d_l as inputs (see Fig. 3(a)). These two input feature maps are first processed by a 1 × 1 × 1 3D convolution layer, an instance normalization, and a LeakyReLU for feature reconstitution. Then, a voxel-wise addition is adopted to fuse the information, followed by a LeakyReLU. Next, the feature representations are fed into the fuzzy attention gate (FAG) to generate a voxel-wise attention map with the same dimensions as the inputs, as shown in Fig. 3(b). Assume X (with a shape of C × H × W × D, regardless of batch size) is the input of the fuzzy attention gate. Due to the smoothness and concise notation of Gaussian membership functions, learnable Gaussian MFs are proposed to specify the deep fuzzy sets. Each feature map (with height H, width W and depth D) is filtered by m Gaussian membership functions with trainable centre µ_{i,j} and spread σ_{i,j}:

$$f_{i,j}(X, \mu, \sigma) = e^{-\frac{(X_j - \mu_{i,j})^2}{2\sigma_{i,j}^2}}, \tag{6}$$
where i = 1, ..., m and j = 1, ..., C. Different from most previous studies that applied the operator 'AND' to obtain fuzzy feature representations, our goal is to use the membership functions to learn the 'importance' of the target feature representations. Therefore, we believe the information can be better preserved by applying the aggregation operator 'OR' while suppressing irrelevant features. For two fuzzy sets Ã and B̃, the operator 'OR' is described as

$$f_{\tilde{A} \cup \tilde{B}}(y) = f_{\tilde{A}}(y) \vee f_{\tilde{B}}(y), \quad \forall y \in U, \tag{7}$$

where U is the universe of information and y is an element of the universe. To make the operator 'OR' differentiable, we modify it as

$$f_{\tilde{A} \cup \tilde{B}}(y) = \max(f_{\tilde{A}}(y), f_{\tilde{B}}(y)). \tag{8}$$

Therefore, based on Eq. (6) and Eq. (8), the fuzzy degree f_j(X, µ, σ) ∈ Θ^{H×W×D}, Θ ∈ [0, 1], of the j-th channel can be obtained as

$$f_j(X, \mu, \sigma) = \bigvee_{i=1}^{m} e^{-\frac{(X_j - \mu_{i,j})^2}{2\sigma_{i,j}^2}} = \max_i \left( e^{-\frac{(X_j - \mu_{i,j})^2}{2\sigma_{i,j}^2}} \right), \tag{9}$$

where the large ∨ denotes the union operation. The output tensor of the proposed FAG thus has the same shape as the input tensor and provides a voxel-wise attention map. The pseudo-code of the fuzzy attention layer is shown in Algorithm 1.
Algorithm 1 Pseudo-code of the fuzzy attention layer
Input: feature representations e_l, d_l ∈ R^{C×W×H×D} from the l-th encoder and decoder; weights w and biases b connecting to the l-th encoder and decoder
Output: fused feature representation y ∈ R^{C×W×H×D}
1: randomly initialise the parameters µ ∈ R^{m×C} and σ ∈ R^{m×C} of the membership functions f(X, µ, σ)
2: compute the input X ∈ R^{C×H×W×D} of the FAG: X = LReLU[LReLU(IN(w_e^(l) e_l + b_e^(l))) + LReLU(IN(w_d^(l) d_l + b_d^(l)))]
3: for j = 1 to C do
4:    compute the fuzzy membership degrees f_{i,j}(X, µ, σ) through the m membership functions
5:    compute the degree f_j(X, µ, σ) of the j-th channel by applying the aggregation operator "OR" to the f_{i,j}
6:    compute the output of the j-th channel: y_j = e_l ⊙ f_j(X, µ, σ)
7: end for
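To make the layer concrete, here is a minimal PyTorch sketch of the channel-specific fuzzy attention gate and layer of Eqs. (6)-(9) and Algorithm 1. The module names, the default number m of membership functions, and the assumption that e_l and d_l share spatial dimensions are ours, not the authors' reference implementation.

```python
import torch
import torch.nn as nn


class FuzzyAttentionGate(nn.Module):
    """Channel-specific attention map from m learnable Gaussian MFs, Eqs. (6)-(9)."""

    def __init__(self, channels: int, m: int = 4):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(m, channels))    # centres mu_{i,j}
        self.sigma = nn.Parameter(torch.ones(m, channels))  # spreads sigma_{i,j}

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W, D); broadcast the MF parameters over batch and space
        mu = self.mu[:, None, :, None, None, None]          # (m, 1, C, 1, 1, 1)
        sigma = self.sigma[:, None, :, None, None, None]
        f = torch.exp(-((x.unsqueeze(0) - mu) ** 2) / (2 * sigma ** 2 + 1e-8))
        # 'OR' aggregation over the m membership functions via max, Eq. (8)
        return f.max(dim=0).values                          # (N, C, H, W, D) in [0, 1]


class FuzzyAttentionLayer(nn.Module):
    """Fuse l-th encoder/decoder features and gate the encoder features (Fig. 3)."""

    def __init__(self, channels: int, m: int = 4):
        super().__init__()
        self.proj_e = nn.Sequential(nn.Conv3d(channels, channels, 1),
                                    nn.InstanceNorm3d(channels), nn.LeakyReLU())
        self.proj_d = nn.Sequential(nn.Conv3d(channels, channels, 1),
                                    nn.InstanceNorm3d(channels), nn.LeakyReLU())
        self.act = nn.LeakyReLU()
        self.gate = FuzzyAttentionGate(channels, m)

    def forward(self, e: torch.Tensor, d: torch.Tensor) -> torch.Tensor:
        x = self.act(self.proj_e(e) + self.proj_d(d))       # step 2 of Algorithm 1
        return e * self.gate(x)                             # y_j = e_l * f_j, step 6
```

Note that, unlike a sigmoid gate, the max over Gaussian memberships responds symmetrically to features on either side of each learned centre, which is the motivation stated above.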
D. Jaccard Continuity and Accumulation Mapping Loss
Many studies apply strategies such as cascaded modules or cropping operations to better segment small organs or structures. By implementing these strategies, the inter-class imbalance issue can be well alleviated. However, another issue the model needs to address is the intra-class imbalance caused by the different volumes of different branches (trachea, secondary/tertiary bronchi). The trachea and the secondary bronchi account for the majority of the total airway volume, leading to discontinuous predictions for the small bronchi.
To bridge this gap, we propose the Jaccard continuity and accumulation mapping loss L_JCAM to force the network to pay more attention to the continuity of airway predictions. L_JCAM assesses the error of different projections (through the coronal, sagittal, and axial planes) and of the centerlines between the prediction and the ground truth. Given the prediction array X and the corresponding ground truth annotation Y, the centerlines Y_CL of the ground truth masks are extracted by skeletonization [43]. With the ground truth centerlines Y_CL, the continuity can be evaluated by the ratio of correctly predicted centerline voxels to ground truth centerline voxels,
$$L_C = 1 - C = 1 - \frac{X \cdot Y_{CL}}{Y_{CL}}, \tag{10}$$
which is independent of the airway branch size. In addition to continuity, we also present two types of accumulation maps: linear accumulation maps (LAMs) and nonlinear accumulation maps (nLAMs), as shown in Fig. 4. LAMs are calculated by summing the volume array along the three different axes, while nLAMs are acquired by applying a nonlinear transformation to the LAMs. By integrating the LAM into the optimization function, the veracity of the 3D prediction can be better assessed, since each incorrect voxel leads to incorrect variations in the different projection maps. Furthermore, to reduce the intra-class imbalance, a nonlinear transformation (the tanh activation function) is applied to the LAMs to obtain the nLAMs. Although an nLAM (with values in [0, 1]) discards the spatial information within the LAM, it can better gauge the continuity and veracity between the predictions and the ground truth masks. Denote by A ⊕ τ the summation that sums the array A ∈ R^{N×C×W×H×D} along the τ-th axis. The LAM and nLAM losses are calculated as

$$L_{LAM} = \sum_{\tau \in [W,H,D]} \ell_1(X \oplus \tau, Y \oplus \tau), \tag{11}$$

$$L_{nLAM} = \sum_{\tau \in [W,H,D]} L_J(\tanh(X \oplus \tau), \tanh(Y \oplus \tau)), \tag{12}$$
where L_J is the Jaccard loss, defined as

$$L_J = 1 - J = 1 - \frac{XY + \varepsilon}{X + Y - XY + \varepsilon}, \tag{13}$$

and ε > 0 is a smoothing factor.
Overall, L_JCAM can be summarised as

$$L_{JCAM} = \alpha L_J(X, Y) + \beta L_C(X, Y_{CL}) + \gamma L_{CE}(X, Y) + \varphi L_{LAM}(X, Y) + \delta L_{nLAM}(X, Y), \tag{14}$$

where α, β, γ, φ and δ are the weights of the Jaccard loss L_J, the continuity loss L_C, the cross-entropy loss L_CE, and the linear and nonlinear accumulation mapping losses L_LAM and L_nLAM, respectively.

E. Continuity and Completeness F-score (CCF-score)
In addition to developing novel layers or operations, module selection also plays a crucial role in machine learning. In airway segmentation, it is unreliable to find the best model by selecting the highest regional overlap metrics, due to (1) the intra-class imbalance of the airway structures and (2) the imperfection of manual annotations. A high Dice or IoU score only reflects the overall overlap ratio of the predicted voxels and cannot assess continuity and completeness. Thus, there is an urgent need for an effective metric to evaluate the continuity and completeness of tree structures (e.g., airways, blood vessels, neurons, etc.).
Here we propose a continuity and completeness F-score (CCF-score) for the aforementioned purpose
$$\mathrm{CCF}_s = \frac{(1 + \omega^2) \times J \times C}{\omega^2 \times J + C}, \tag{15}$$
where ω > 0 is the preference parameter, and J and C are the Jaccard index and the continuity index as defined in Eq. (13) and Eq. (10), respectively. ω can be set larger (smaller) than 1 when C (J) is more important. With this novel F-measure based metric, the module that balances continuity and completeness can be saved.
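The following is a minimal PyTorch sketch of Eqs. (10)-(15). The function names are ours; predictions are assumed to be sigmoid probabilities in an (N, 1, H, W, D) tensor (binarized for the CCF-score), and the smoothing constants are illustrative.

```python
import torch
import torch.nn.functional as F


def jaccard_loss(x, y, eps=1.0):
    """Soft Jaccard loss, Eq. (13)."""
    inter = (x * y).sum()
    return 1.0 - (inter + eps) / (x.sum() + y.sum() - inter + eps)


def continuity_loss(x, y_cl):
    """Eq. (10): fraction of ground-truth centerline voxels missed by x."""
    return 1.0 - (x * y_cl).sum() / (y_cl.sum() + 1e-8)


def accumulation_losses(x, y):
    """Eqs. (11)-(12): project the volumes along the three spatial axes."""
    lam = nlam = 0.0
    for ax in (2, 3, 4):
        px, py = x.sum(dim=ax), y.sum(dim=ax)
        lam = lam + F.l1_loss(px, py)                               # linear map, l1 error
        nlam = nlam + jaccard_loss(torch.tanh(px), torch.tanh(py))  # nonlinear map
    return lam, nlam


def jcam_loss(x, y, y_cl, a=1.0, b=1.0, g=1.0, p=0.3, d=0.3):
    """Eq. (14); a, b, g, p, d stand for alpha, beta, gamma, phi, delta."""
    lam, nlam = accumulation_losses(x, y)
    ce = F.binary_cross_entropy(x, y)
    return (a * jaccard_loss(x, y) + b * continuity_loss(x, y_cl)
            + g * ce + p * lam + d * nlam)


def ccf_score(x_bin, y, y_cl, omega=0.9):
    """Eq. (15): F-measure over the Jaccard and continuity indices."""
    inter = (x_bin * y).sum()
    j = inter / (x_bin.sum() + y.sum() - inter + 1e-8)
    c = (x_bin * y_cl).sum() / (y_cl.sum() + 1e-8)
    return (1 + omega ** 2) * j * c / (omega ** 2 * j + c + 1e-8)
```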
IV. EXPERIMENTS AND RESULTS
This section details all the settings of experiments including datasets, implementation details, evaluation metrics and results.
A. Datasets and Training Strategies
In this study, we trained our airway segmentation and comparison models using 90 clinical thoracic CT scans from the open-access BAS dataset. The in-house datasets, collected from patients with fibrotic lung disease and COVID-19, are used for testing only. Details of all the data are shown in Table I. Among the 1018 cases in [44], 70 cases with slice thickness ≤ 0.625 mm were randomly selected and annotated by experts [29], [31]. Our in-house data comprise: (1) a COVID-19 dataset with 25 HRCT scans of patients with COVID-19 from Wuhan Renmin Hospital; and (2) a fibrosis dataset with 25 cases of patients with fibrotic lung disease, collected from the OSIC dataset¹. The annotations of these cases were given by experts from the Royal Brompton Hospital.
Training strategies: Parameters of the proposed 3D U-Net are initialised with He-normal initialisation. Randomised rotation (with rotation angles from -10 to 10 degrees) and randomised flips (up, down, left, right) were applied to augment the dataset during training. All modules were trained on an NVIDIA RTX 3090 GPU for 200 epochs, with an initial learning rate of 1e-3 and a decay of 0.5 at the 20th, 50th, 80th, 110th and 150th epochs. For equal contributions of L_J, L_C and L_CE, the hyper-parameters α, β, γ in L_JCAM were set to 1, and φ, δ were set to 0.3 to constrain the accumulation mapping losses (since they are accumulated over three dimensions). The ω in the CCF-score was set to 0.9 to prevent excessive leakages.
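The schedule above can be reproduced, for example, with PyTorch's MultiStepLR; the model below is a placeholder for the FANN, and the optimizer type is our assumption, since the paper only fixes the learning-rate schedule.

```python
import torch

# Placeholder 3D network; the real model is the FANN described above.
model = torch.nn.Conv3d(1, 1, kernel_size=3, padding=1)
# Optimizer type is an assumption; only the LR schedule is stated in the paper.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[20, 50, 80, 110, 150], gamma=0.5)

for epoch in range(200):
    # ... one training epoch over the SPS patch set would go here ...
    scheduler.step()  # halves the learning rate at the listed milestones
```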
B. Experimental Settings and Evaluation Metrics
This section describes the experimental settings for airway segmentation. We randomly divided the 90 cases from the BAS dataset into 72 for training and validation and 18 for testing. All the modules in the ablation studies and all comparison models were trained from scratch on the same patch set given by SPS, using the same optimizer and training strategy.
To provide a fair comparison, the same postprocessing (extracting the largest connected component to remove noise) and inference criteria (non-overlapping sliding window) were applied to all models. The significance of the differences between the proposed method and each comparison method was assessed by the Wilcoxon signed-rank test, with a p-value < 0.05 indicating significantly better (worse) results. Ablation studies. To evaluate the effectiveness of the proposed submodules, we conducted ablation studies for airway segmentation, including the 3D U-Net (BL), 3D U-Net with the fuzzy attention layer (BL+Fuzzy), 3D U-Net with the fuzzy attention layer and JCAM loss (BL+Fuzzy+JCAM), 3D U-Net with the fuzzy attention layer, JCAM loss and CCF-score (BL+Fuzzy+JCAM+CCFs), and other combinations (BL+JCAM, BL+Fuzzy+CCFs). Note that the baseline 3D U-Net has the same structure as FANN without the fuzzy attention layer (see Fig. 2) for the sake of a fair comparison.
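For reference, the Wilcoxon signed-rank test mentioned above can be run with SciPy on paired per-case scores; the scores below are purely hypothetical.

```python
from scipy.stats import wilcoxon

# Hypothetical paired per-case scores of two models on the same test cases.
scores_proposed = [0.91, 0.88, 0.93, 0.90, 0.87, 0.92, 0.89, 0.94]
scores_baseline = [0.88, 0.85, 0.91, 0.88, 0.86, 0.89, 0.87, 0.90]

stat, p = wilcoxon(scores_proposed, scores_baseline)
print(f"Wilcoxon signed-rank statistic = {stat}, p = {p:.4f}")  # p < 0.05 => significant
```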
Comparisons. To evaluate the effectiveness of the proposed method, we compare it with existing approaches including nnUNet [17], V-Net [18], Attention UNet [24], and the methods proposed by Juarez et al. [30], Zheng et al. [31] and Wang et al. [35]. All modules were trained on the same patch dataset, acquired using the SPS criteria. Moreover, all comparisons were performed on the open BAS dataset and on our in-house datasets to assess robustness and performance, respectively. Evaluation metrics. In this study, we applied five metrics to evaluate the performance of the airway segmentation modules. In addition to the three metrics (IoU, detected length ratio, detected branch ratio) of Section II-C, the precision and the airway missing ratio (AMR) were also evaluated:
$$\mathrm{Precision} = \frac{TP}{TP + FP}, \tag{16}$$

$$\mathrm{AMR} = \frac{FN}{V_Y} \times 100\%, \tag{17}$$
where TP, FP, and FN are the true-positive, false-positive and false-negative volumes, respectively.
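A minimal NumPy sketch of Eqs. (16)-(17) on binary volumes (the function name is ours):

```python
import numpy as np


def precision_and_amr(x, y):
    """Eqs. (16)-(17) on binary prediction x and ground truth y."""
    tp = np.logical_and(x, y).sum()
    fp = np.logical_and(x, np.logical_not(y)).sum()
    fn = np.logical_and(np.logical_not(x), y).sum()
    precision = tp / (tp + fp)
    amr = 100.0 * fn / y.sum()
    return precision, amr
```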
C. Experimental Results
Results of all the experiments in this section are reported on the test cases as mean ± standard deviation. Ablation studies. Results of the ablation experiments are presented in Table II, illustrating the mean and standard deviation of the different evaluation metrics. It can be observed that the baseline 3D U-Net achieves competitive airway segmentation performance, with a 0.8749 IoU score, 87.81% DLR and 81.74% DBR. Integrating the fuzzy attention layer with the 3D U-Net yields a considerable improvement in the DBR score (roughly a 2.2% average gain). Introducing the JCAM loss promotes the DBR to 82.93%, while adopting the CCF-score also helps to find a better module (with a 1.3% increment in DBR). The largest improvement is observed when adopting both the fuzzy attention layer and the JCAM loss, with 90.64% DBR, 93.95% DLR and the lowest AMR of 4.92%. By integrating all these novel strategies (BL+Fuzzy+JCAM+CCFs), the proposed FANN achieves the highest CCF-score of 0.8969 and competitive DBR (89.01%), DLR (92.71%) and AMR (5.22%), while a small reduction of the IoU and Dice scores is observed. Comparison experiments on the BAS dataset. We note that the proposed FANN achieves state-of-the-art performance for airway segmentation on BAS (Table III), with a 0.8969 CCF-score, 92.71% DLR, and 89.01% DBR. Both V-Net [18] and nnUNet [17] are outperformed by FANN in terms of DLR, DBR and CCF-score.

V. DISCUSSION

This section mainly discusses the overall performance, the importance of data sampling, airway leakage and airway neglect in the predictions, the importance of evaluation metrics, and the detection rate of different-sized branches. Overall performance analysis. The experimental results on the public and in-house datasets demonstrate the superior performance and robustness of the proposed method for airway segmentation. Among all the comparisons, V-Net obtained the poorest performance, due to the discontinuity of its airway predictions (all predictions were post-processed by keeping the largest 3D component). Here, we mainly discuss three comparison models against the proposed method: attention UNet, nnUNet, and WingNet. On all three sub-datasets, both nnUNet and WingNet showed competitive precision compared to FANN and attention UNet. However, the AMR of WingNet was roughly twice that of nnUNet, while the DBR and DLR of WingNet were better than those of nnUNet. This illustrates that WingNet obtained worse predictions on the main trachea but better predictions on small/middle-sized branches. Compared with the other two models, attention UNet presented better performance on all the datasets, showing the effectiveness of the attention mechanism.
In particular, FANN achieved significantly better DLR, DBR, and CCF-scores than nnUNet, attention UNet and WingNet on the public BAS dataset. The IoU of FANN (0.8738) showed no significant difference compared with nnUNet (0.8805) and attention UNet (0.8762), and was much better than that of WingNet (0.8445). On the COVID-19 data, the proposed FANN gained better DLR and DBR than nnUNet and attention UNet. Although WingNet achieves a relatively low IoU (0.8282), it obtains a DLR and DBR similar to FANN (Wilcoxon signed-rank test p-value > 0.05). On the fibrosis data, although nnUNet achieved better IoU and precision scores than FANN, it suffers from heavy discontinuity, with poor DLR (58.15%) and DBR (50.18%). WingNet achieves the highest precision (0.9401) but the largest AMR (21.98%), indicating severe false-negative predictions. FANN achieves the best CCF-score, DLR, DBR, and AMR among all the comparisons, with competitive IoU and precision scores.
Data sampling strategy. The data sampling strategy tends to be overlooked for patch-based modules. In this paper, we presented a smart data sampling strategy for patch extraction from an imbalanced dataset. To explore the importance of data sampling, we conducted experiments on the open BAS dataset with different data sampling strategies, covering different extraction criteria and patch sizes. Here we define three extraction criteria: (1) Sequential: extracting patches sequentially with sliding windows within the minimum bounding box of the 3D ground truth area; (2) Drop: extracting patches as in (1) while discarding the main trachea (Drop-A) or purely negative patches (Drop-B) to alleviate data imbalance; (3) the smart patch sampling described in Section III-B. We also compare patch sizes defined by three different criteria: (1) a uniform size of 128×128×128; (2) roughly the same ratio as the median size of the ground truth volumes (160×96×160); (3) roughly the same ratio as the mean size of the ground truth volumes (128×96×144). Although only a small increment in IoU emerges from Cut-1, Cut-2, and Cut-3, a significant improvement in the DLR and DBR is noted. In addition, the results of Cut-3 and Cut-4 (Table V) indicate that the segmentation performance can be improved by discarding the purely negative patches (with 1.6% and 1% gains in DBR and DLR, respectively). However, discarding the main trachea may not be advisable, since the module failed to predict the trachea when learning only the features of small and medium-sized branches (as shown for Cut-4). The model trained on Cut-5 presents a competitive DBR, with a roughly 4.5% increment compared with Cut-1. The proposed SPS strategy achieves the best performance, with the highest DLR and DBR and the lowest AMR.
Airway leakage and airway neglect. Airway leakages are false-positive predictions, reflected by the precision score. Airway neglect refers to false-negative predictions, assessed by the AMR. As shown in Fig. 5, due to the unavoidable omissions in manual labelling, some leakages may actually correspond to airway branches that were not annotated. nnUNet presents a high precision score with limited airway leakages, while it suffers from a relatively high AMR with many airway neglects (especially in the fibrosis cases). WingNet shows severe airway neglect of the main trachea, leading to a low IoU score despite comparable DBR and DLR. WingNet, attention UNet, and FANN present a few leakages at the terminal branches, indicating the incompleteness of the airway annotations. Compared with all the studied models, FANN achieves the smallest AMR, and most of its leakages correspond to incomplete annotations. We therefore conclude that a small amount of airway leakage is acceptable, and even encouraged, as it can help the model better predict terminal bronchioles.
Evaluation metric. Conventional evaluation metrics such as the IoU or Dice score no longer work well for finding the best module during training. As Tables II, III and IV illustrate, the module with the highest IoU score is not necessarily the one with the best segmentation ability. For instance, nnUNet achieved the best IoU score on the BAS dataset but performed weakly on the fibrosis evaluation. Meanwhile, focusing only on the branch performance while ignoring the overlap metric may also lead to airway leakages. For example, BL+Fuzzy+JCAM achieved better DLR and DBR but a lower CCF-score than the final combination in the ablation studies of FANN (Table II). To explore the most appropriate metric, we further evaluated BL+Fuzzy+JCAM on the fibrosis and COVID-19 datasets. It achieved a 0.7961 (0.9099) IoU score, 74.66% (93.81%) DLR, 68.85% (91.87%) DBR, 10.29% (2.36%) AMR, and a 0.7687 (0.9215) CCF-score on the fibrosis (COVID-19) data, with a considerable gap compared with FANN. This also proves the efficiency of the proposed CCF-score, which better supports module selection.
The detection rate of different-sized branches. To explore the segmentation performance on different-sized airway branches, the detection rate of branches with different radii is calculated. Here we define a branch as correctly detected when its IoU is larger than 0.8. We divide the airway branches into four sizes based on their estimated radius: TB (terminal branches, 0 to 2 mm), SB (small branches, 2 to 4 mm), MB (middle branches, 4 to 8 mm) and LB (large branches, larger than 8 mm); a sketch of this binning follows below. The detection rates of the proposed FANN on the BAS, COVID-19 and fibrosis datasets are shown in Table VI, which illustrates that the proposed method achieves superior performance on the BAS and COVID-19 data, while the terminal branch detection rate on the fibrosis data is relatively low. This is caused by the complex honeycombing in fibrotic lung disease, which has structural similarity with the airway branches.
Potential studies. This study has shown the feasibility of adopting fuzzy logic in deep attention neural networks, and many potential research directions can be explored, e.g., introducing a fuzzy cost function for better optimization, or bringing in other MFs such as generalized bell and sigmoid MFs.
VI. CONCLUSION

Airway segmentation is a crucial step that can help clinicians better assess disease and perform prognosis analysis. However, most existing methods suffer heavily from discontinuous predictions for small-sized airway branches. The present study illustrates the importance of the data sampling strategy and of the evaluation metrics, and demonstrates the effectiveness of the fuzzy attention neural network, the Jaccard continuity and accumulation mapping loss, and the CCF-score for 3D airway segmentation.
The superior performance of the proposed method has been observed on both open datasets and our in-house datasets, including scans acquired from patients with cancer, COVID-19, fibrosis, and mild lung disease.
The imperfect terminal branch detection rate on the fibrosis data points to future work. We will conduct further studies that integrate fuzzy learning and deep learning to improve the detection rate of tiny airway branches in patients with fibrotic lung disease.
VII. ACKNOWLEDGEMENT
Fig. 2. The fuzzy attention neural network (FANN). The fuzzy attention layer is adopted within the skip connections between the encoders and decoders. Each decoder block produces a binary prediction for deep supervision. The sizes of the feature maps are shown for a better understanding of the architecture of FANN.
Fig. 3. Details of (a) the fuzzy attention layer and (b) the fuzzy attention gate (FAG).
Fig. 4. Skeletonization and accumulation maps (acquired from the projections onto the coronal, sagittal and transverse planes, and their nonlinear transformation) of the airway structure.
Fig. 5. Visualization of the competitive modules, including nnUNet, Attention UNet, WingNet and the proposed FANN, on fibrosis data (first row), COVID-19 data (second row), and BAS data (third row), with red, blue and green representing true-positive, false-negative (airway neglect) and false-positive (airway leakage) predictions, respectively.
TABLE I: DETAILS OF THE DATASETS IN THIS STUDY

| Dataset | Composition | Volume size | Slice thickness | Disease |
| --- | --- | --- | --- | --- |
| BAS | 90 cases* | s×512×512, s ∈ [157, 764] | 0.45 mm - 1.80 mm | Healthy volunteers or patients with pulmonary disease |
| COVID-19 | 25 cases (test only) | s×512×512, s ∈ [512, 1145] | 0.40 mm - 0.65 mm | Patients with COVID-19 |
| Fibrosis | 25 cases (test only) | s×512×512, s ∈ [516, 945] | 0.40 mm - 0.70 mm | Patients with fibrotic lung disease |

* The 90 cases are split into 54, 18 and 18 for training, validation and testing, respectively.
TABLE II: ABLATION STUDIES OF THE PROPOSED METHOD ON THE BAS DATASET

| Model | IoU | Precision | DLR (%) | DBR (%) | AMR (%) | CCFs |
| --- | --- | --- | --- | --- | --- | --- |
| BL | 0.8749±0.0385 | 0.9190±0.0302 | 87.81±8.74 | 81.74±11.15 | 5.11±4.10 | 0.8763±0.0514 |
| BL+Fuzzy | 0.8731±0.0493 | 0.9205±0.0326 | 89.20±10.73 | 83.92±12.48 | 5.45±5.41 | 0.8814±0.0650 |
| BL+JCAM | 0.8584±0.0448 | 0.9027±0.0388 | 88.71±7.83 | 82.93±10.90 | 5.33±4.09 | 0.8710±0.0554 |
| BL+Fuzzy+JCAM | 0.8625±0.0414 | 0.9039±0.0366 | 93.95±7.96 | 90.64±10.42 | 4.92±4.22 | 0.8913±0.0527 |
| BL+Fuzzy+CCFs | 0.8701±0.0522 | 0.9112±0.0315 | 89.63±3.27 | 85.24±12.53 | 5.71±5.82 | 0.8816±0.0412 |
| BL+Fuzzy+JCAM+CCFs | 0.8738±0.0445 | 0.9187±0.0320 | 92.71±7.93 | 89.01±10.3 | 5.22±4.50 | 0.8969±0.0554 |

DLR: detected length ratio; DBR: detected branch ratio; AMR: airway missing ratio. Model abbreviations: BL: baseline 3D U-Net with deep supervision; JCAM: Jaccard continuity and accumulation mapping loss; CCFs: continuity and completeness F-score.
TABLE III: COMPARISON EXPERIMENTS ON THE OPEN BAS DATASET

| Model | IoU | Precision | DLR (%) | DBR (%) | AMR (%) | CCFs |
| --- | --- | --- | --- | --- | --- | --- |
| Attention UNet [24]† | 0.8762±0.0414 | 0.9207±0.0309 | 88.61±8.32‡ | 82.43±10.65‡ | 5.16±4.18 | 0.8806±0.0534‡ |
| Juarez et al. [30]† | 0.8371±0.0752‡ | 0.9344±0.0314 | 73.46±14.19‡ | 64.25±15.55‡ | 11.17±7.62‡ | 0.7879±0.0952‡ |
| nnUNet [17]* | 0.8805±0.0313 | 0.9436±0.0234‡ | 86.84±7.00‡ | 79.21±9.43‡ | 6.96±4.02‡ | 0.8750±0.0416‡ |
| V-Net [18]* | 0.7140±0.1716‡ | 0.9781±0.0150‡ | 33.96±17.96‡ | 22.04±14.22‡ | 27.15±17.99‡ | 0.4779±0.1751‡ |
| Wang et al. [35]† | 0.7330±0.0786‡ | 0.7636±0.0737‡ | 85.05±12.27‡ | 78.58±14.20‡ | 5.07±6.19 | 0.7813±0.0937‡ |
| WingNet [31]* | 0.8445±0.0596‡ | 0.9373±0.0303‡ | 88.00±11.95‡ | 83.11±13.46‡ | 10.32±6.89‡ | 0.8600±0.0768‡ |
| Proposed | 0.8738±0.0445 | 0.9187±0.0320 | 92.71±7.93 | 89.01±10.3 | 5.22±4.50 | 0.8969±0.0554 |

* Results obtained with an open-source module (with trained weights) or with modules trained from open-source code. † Reproduced results. ‡ Statistically significant difference (Wilcoxon signed-rank test p-value < 0.05) compared with the proposed method.
TABLE IV: COMPARISON EXPERIMENTS ON OUR IN-HOUSE DATASETS

COVID-19
| Model | IoU | Precision | DLR (%) | DBR (%) | AMR (%) | CCFs |
| --- | --- | --- | --- | --- | --- | --- |
| Attention UNet [24]† | 0.9230±0.0279 | 0.9464±0.0175 | 88.53±6.66‡ | 82.97±8.34‡ | 2.62±2.21 | 0.9057±0.0377‡ |
| Juarez et al. [30]† | 0.8637±0.1016‡ | 0.9566±0.0134 | 71.73±15.07‡ | 63.02±15.07‡ | 10.08±10.79‡ | 0.7914±0.1189‡ |
| nnUNet [17]* | 0.9158±0.0381‡ | 0.9685±0.0407‡ | 91.36±6.72‡ | 87.33±8.89‡ | 2.61±1.73‡ | 0.9148±0.0473‡ |
| V-Net [18]* | 0.7898±0.0871‡ | 0.9893±0.0040‡ | 34.96±9.66‡ | 24.07±8.80‡ | 20.33±8.86‡ | 0.5052±0.0911‡ |
| Wang et al. [35]† | 0.7433±0.1010‡ | 0.7749±0.0736‡ | 84.87±13.20‡ | 79.93±14.09‡ | 5.41±10.35‡ | 0.7870±0.1129‡ |
| WingNet [31]* | 0.8282±0.0672‡ | 0.9537±0.0215‡ | 92.50±6.89 | 90.67±8.95 | 13.68±7.15‡ | 0.8689±0.0680‡ |
| Proposed | 0.9222±0.0261 | 0.9431±0.0166 | 93.30±5.29 | 90.18±7.59 | 2.37±1.64 | 0.9270±0.0338 |

Fibrosis
| Model | IoU | Precision | DLR (%) | DBR (%) | AMR (%) | CCFs |
| --- | --- | --- | --- | --- | --- | --- |
| Attention UNet [24]† | 0.8202±0.0553 | 0.8844±0.0640 | 72.83±9.41‡ | 66.88±10.84‡ | 7.98±2.97 | 0.7764±0.0678‡ |
| Juarez et al. [30]† | 0.7911±0.0532‡ | 0.9296±0.0294‡ | 56.16±10.76‡ | 48.29±12.18‡ | 15.68±6.37‡ | 0.6688±0.0688‡ |
| nnUNet [17]* | 0.8312±0.0495‡ | 0.9381±0.0314‡ | 58.15±6.80‡ | 50.18±7.93‡ | 11.74±2.93‡ | 0.6972±0.0564‡ |
| V-Net [18]* | 0.5002±0.0967‡ | 0.9768±0.0006‡ | 8.54±3.76‡ | 3.40±2.99‡ | 49.36±9.88‡ | 0.1576±0.0568‡ |
| Wang et al. [35]† | 0.6979±0.0647‡ | 0.7468±0.0773‡ | 69.61±9.24‡ | 62.61±11.17‡ | 8.22±3.88 | 0.6971±0.0747‡ |
| WingNet [31]* | 0.7436±0.0805‡ | 0.9401±0.0156‡ | 69.46±9.71‡ | 63.01±11.39‡ | 21.98±8.48‡ | 0.7208±0.0872‡ |
| Proposed | 0.8269±0.0402 | 0.8904±0.0373 | 78.98±8.00 | 73.44±9.54 | 7.95±2.37 | 0.8099±0.0517 |

* Results obtained with an open-source module (with trained weights) or with modules trained from open-source code. † Reproduced results. ‡ Statistically significant difference (Wilcoxon signed-rank test p-value < 0.05) compared with the proposed method.
TABLE V
COMPARISON EXPERIMENTS OF DATA SAMPLING STRATEGY

      | Extraction | Patch size | IoU           | Precision     | DLR(%)      | DBR(%)      | AMR(%)
Cut-1 | Sequential | uniform    | 0.8782±0.0641 | 0.9341±0.0409 | 85.17±14.28 | 76.41±15.27 | 7.64±7.22
Cut-2 | Sequential | median     | 0.8854±0.0470 | 0.9408±0.0300 | 85.79±8.12  | 78.77±10.06 | 6.03±4.63
Cut-3 | Sequential | mean       | 0.8862±0.0422 | 0.9394±0.0275 | 86.66±8.88  | 79.39±10.80 | 6.06±4.61
Cut-4 | Drop-A     | mean       | 0.8069±0.0826 | 0.9437±0.0316 | 84.64±10.00 | 77.57±12.70 | 14.87±8.13
Cut-5 | Drop-B     | mean       | 0.8883±0.0407 | 0.9451±0.0286 | 87.73±8.01  | 80.92±10.51 | 5.95±4.25
Cut-6 | Smart      | mean       | 0.8749±0.0385 | 0.9190±0.0302 | 87.81±8.74  | 81.74±11.15 | 5.11±4.10
To explore the most appropriate metric, we further evaluated BL+Fuzzy+JCAM on the fibrosis and COVID-19 datasets. It achieved a 0.7961 (0.9099) IoU score, 74.66% (93.81%) DLR, 68.85% (91.87%) DBR, 10.29% (2.36%) AMR, and a 0.7687 (0.9215) CCF-score on the fibrosis (COVID-19) data, a considerable gap compared with the FANN. This also proves the efficiency of the proposed CCF-score, which can better guide module selection. The detection rate of different-sized branches is reported in Table VI.
TABLE VI
DETECTION RATE OF DIFFERENT-SIZED BRANCHES

Dataset  | TB     | SB     | MB     | LB
BAS      | 81.55% | 95.76% | 98.59% | 99.38%
COVID-19 | 82.96% | 97.05% | 99.67% | 100%
Fibrosis | 47.15% | 87.60% | 98.32% | 98.50%
https://www.osicild.org/
[1] B. Ley, K. K. Brown, and H. R. Collard, "Molecular biomarkers in idiopathic pulmonary fibrosis," American Journal of Physiology-Lung Cellular and Molecular Physiology, vol. 307, no. 9, pp. L681-L691, 2014.
[2] T. M. Maher, E. Oballa, J. K. Simpson, J. Porte, A. Habgood, W. A. Fahy, A. Flynn, P. L. Molyneaux, R. Braybrooke, H. Divyateja, et al., "An epithelial biomarker signature for idiopathic pulmonary fibrosis: an analysis from the multicentre PROFILE cohort study," The Lancet Respiratory Medicine, vol. 5, no. 12, pp. 946-955, 2017.
[3] T. M. Maher, S. Stowasser, Y. Nishioka, E. S. White, V. Cottin, I. Noth, M. Selman, K. B. Rohr, A. Michael, C. Ittrich, et al., "Biomarkers of extracellular matrix turnover in patients with idiopathic pulmonary fibrosis given nintedanib (INMARK study): a randomised, placebo-controlled study," The Lancet Respiratory Medicine, vol. 7, no. 9, pp. 771-779, 2019.
[4] S. L. Walsh, N. Sverzellati, A. Devaraj, A. U. Wells, and D. M. Hansell, "Chronic hypersensitivity pneumonitis: high resolution computed tomography patterns and pulmonary function indices as prognostic determinants," European Radiology, vol. 22, no. 8, pp. 1672-1679, 2012.
[5] S. L. Walsh, N. Sverzellati, A. Devaraj, G. J. Keir, A. U. Wells, and D. M. Hansell, "Connective tissue disease related fibrotic lung disease: high resolution computed tomographic and pulmonary function indices as prognostic determinants," Thorax, vol. 69, no. 3, pp. 216-222, 2014.
[6] A. J. Edey, A. A. Devaraj, R. P. Barker, A. G. Nicholson, A. U. Wells, and D. M. Hansell, "Fibrotic idiopathic interstitial pneumonias: HRCT findings that predict mortality," European Radiology, vol. 21, no. 8, pp. 1586-1593, 2011.
[7] S. L. Walsh, L. Calandriello, N. Sverzellati, A. U. Wells, and D. M. Hansell, "Interobserver agreement for the ATS/ERS/JRS/ALAT criteria for a UIP pattern on CT," Thorax, vol. 71, no. 1, pp. 45-51, 2016.
[8] J. Jacob, B. J. Bartholmai, S. Rajagopalan, M. Kokosi, A. Nair, R. Karwoski, S. L. Walsh, A. U. Wells, and D. M. Hansell, "Mortality prediction in idiopathic pulmonary fibrosis: evaluation of computer-based CT analysis with conventional severity measures," European Respiratory Journal, vol. 49, no. 1, 2017.
[9] S. M. Humphries, K. Yagihashi, J. Huckleberry, B.-H. Rho, J. D. Schroeder, M. Strand, M. I. Schwarz, K. R. Flaherty, E. A. Kazerooni, E. J. van Beek, et al., "Idiopathic pulmonary fibrosis: data-driven textural analysis of extent of fibrosis at baseline and 15-month follow-up," Radiology, vol. 285, no. 1, pp. 270-278, 2017.
[10] D. M. Hansell, J. G. Goldin, T. E. King Jr, D. A. Lynch, L. Richeldi, and A. U. Wells, "CT staging and monitoring of fibrotic interstitial lung diseases in clinical practice and treatment trials: a position paper from the Fleischner society," The Lancet Respiratory Medicine, vol. 3, no. 6, pp. 483-496, 2015.
[11] D. Aykac, E. A. Hoffman, G. McLennan, and J. M. Reinhardt, "Segmentation and analysis of the human airway tree from three-dimensional X-ray CT images," IEEE Transactions on Medical Imaging, vol. 22, no. 8, pp. 940-950, 2003.
[12] J. Tschirren, E. A. Hoffman, G. McLennan, and M. Sonka, "Intrathoracic airway trees: segmentation and airway morphology analysis from low-dose CT scans," IEEE Transactions on Medical Imaging, vol. 24, no. 12, pp. 1529-1539, 2005.
[13] A. P. Kiraly, W. E. Higgins, G. McLennan, E. A. Hoffman, and J. M. Reinhardt, "Three-dimensional human airway segmentation methods for clinical virtual bronchoscopy," Academic Radiology, vol. 9, no. 10, pp. 1153-1168, 2002.
[14] M. W. Graham, J. D. Gibbs, D. C. Cornish, and W. E. Higgins, "Robust 3-D airway tree segmentation for image-guided peripheral bronchoscopy," IEEE Transactions on Medical Imaging, vol. 29, no. 4, pp. 982-997, 2010.
[15] Z. Xu, U. Bagci, B. Foster, A. Mansoor, J. K. Udupa, and D. J. Mollura, "A hybrid method for airway segmentation and automated measurement of bronchial wall thickness on CT," Medical Image Analysis, vol. 24, no. 1, pp. 1-17, 2015.
[16] P. Lo, B. Van Ginneken, J. M. Reinhardt, T. Yavarna, P. A. De Jong, B. Irving, C. Fetita, M. Ortner, R. Pinho, J. Sijbers, et al., "Extraction of airways from CT (EXACT'09)," IEEE Transactions on Medical Imaging, vol. 31, no. 11, pp. 2093-2107, 2012.
[17] F. Isensee, P. F. Jaeger, S. A. Kohl, J. Petersen, and K. H. Maier-Hein, "nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation," Nature Methods, vol. 18, no. 2, pp. 203-211, 2021.
[18] F. Milletari, N. Navab, and S.-A. Ahmadi, "V-Net: Fully convolutional neural networks for volumetric medical image segmentation," in International Conference on 3D Vision (3DV), pp. 565-571, 2016.
[19] J.-P. Charbonnier, E. M. Van Rikxoort, A. A. Setio, C. M. Schaefer-Prokop, B. van Ginneken, and F. Ciompi, "Improving airway segmentation in computed tomography using leak detection with convolutional networks," Medical Image Analysis, vol. 36, pp. 52-60, 2017.
[20] J. Yun, J. Park, D. Yu, J. Yi, M. Lee, H. J. Park, J.-G. Lee, J. B. Seo, and N. Kim, "Improvement of fully automated airway segmentation on volumetric computed tomographic images using a 2.5 dimensional convolutional neural net," Medical Image Analysis, vol. 51, pp. 13-20, 2019.
[21] A. Garcia-Uceda Juarez, H. A. Tiddens, and M. d. Bruijne, "Automatic airway segmentation in chest CT using convolutional neural networks," in Image Analysis for Moving Organ, Breast, and Thoracic Images, pp. 238-250, Springer, 2018.
[22] Y. Qin, M. Chen, H. Zheng, Y. Gu, M. Shen, J. Yang, X. Huang, Y.-M. Zhu, and G.-Z. Yang, "AirwayNet: a voxel-connectivity aware approach for accurate airway segmentation using convolutional neural networks," in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 212-220, Springer, 2019.
[23] Ö. Çiçek, A. Abdulkadir, S. S. Lienkamp, T. Brox, and O. Ronneberger, "3D U-Net: learning dense volumetric segmentation from sparse annotation," in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 424-432, Springer, 2016.
[24] O. Oktay, J. Schlemper, L. L. Folgoc, M. Lee, M. Heinrich, K. Misawa, K. Mori, S. McDonagh, N. Y. Hammerla, B. Kainz, et al., "Attention U-Net: Learning where to look for the pancreas," in International Conference on Medical Imaging with Deep Learning, p. 15, 2018.
[25] Y. Deng, Z. Ren, Y. Kong, F. Bao, and Q. Dai, "A hierarchical fused fuzzy deep neural network for data classification," IEEE Transactions on Fuzzy Systems, vol. 25, no. 4, pp. 1006-1012, 2016.
[26] C. Guan, S. Wang, and A. W.-C. Liew, "Lip image segmentation based on a fuzzy convolutional neural network," IEEE Transactions on Fuzzy Systems, vol. 28, no. 7, pp. 1242-1251, 2019.
[27] T. Shen, J. Wang, C. Gou, and F.-Y. Wang, "Hierarchical fused model with deep learning and type-2 fuzzy learning for breast cancer diagnosis," IEEE Transactions on Fuzzy Systems, vol. 28, no. 12, pp. 3204-3218, 2020.
[28] A. Garcia-Uceda, R. Selvan, Z. Saghir, H. A. Tiddens, and M. de Bruijne, "Automatic airway segmentation from computed tomography using robust and efficient 3-D convolutional neural networks," Scientific Reports, vol. 11, no. 1, pp. 1-15, 2021.
[29] Y. Qin, Y. Gu, H. Zheng, M. Chen, J. Yang, and Y.-M. Zhu, "AirwayNet-SE: A simple-yet-effective approach to improve airway segmentation using context scale fusion," in International Symposium on Biomedical Imaging (ISBI), pp. 809-813, IEEE, 2020.
[30] A. Garcia-Uceda Juarez, H. A. Tiddens, and M. d. Bruijne, "Automatic airway segmentation in chest CT using convolutional neural networks," in Image Analysis for Moving Organ, Breast, and Thoracic Images, pp. 238-250, Springer, 2018.
[31] H. Zheng, Y. Qin, Y. Gu, F. Xie, J. Yang, J. Sun, and G.-Z. Yang, "Alleviating class-wise gradient imbalance for pulmonary airway segmentation," IEEE Transactions on Medical Imaging, vol. 40, no. 9, pp. 2452-2462, 2021.
[32] Q. Meng, H. R. Roth, T. Kitasaka, M. Oda, J. Ueno, and K. Mori, "Tracking and segmentation of the airways in chest CT using a fully convolutional network," in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 198-207, Springer, 2017.
[33] P. J. Reynisson, M. Scali, E. Smistad, E. F. Hofstad, H. O. Leira, F. Lindseth, T. A. Nagelhus Hernes, T. Amundsen, H. Sorger, and T. Langø, "Airway segmentation and centerline extraction from thoracic CT: comparison of a new method to state of the art commercialized methods," PloS One, vol. 10, no. 12, p. e0144282, 2015.
[34] S. A. Nadeem, E. A. Hoffman, J. C. Sieren, A. P. Comellas, S. P. Bhatt, I. Z. Barjaktarevic, F. Abtin, and P. K. Saha, "A CT-based automated algorithm for airway segmentation using freeze-and-grow propagation and deep learning," IEEE Transactions on Medical Imaging, vol. 40, no. 1, pp. 405-418, 2020.
[35] C. Wang, Y. Hayashi, M. Oda, H. Itoh, T. Kitasaka, A. F. Frangi, and K. Mori, "Tubular structure segmentation using spatial fully connected network with radial distance loss for 3D medical images," in International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 348-356, Springer, 2019.
[36] T. Lei, P. Liu, X. Jia, X. Zhang, H. Meng, and A. K. Nandi, "Automatic fuzzy clustering framework for image segmentation," IEEE Transactions on Fuzzy Systems, vol. 28, no. 9, pp. 2078-2092, 2019.
[37] T. Lei, X. Jia, Y. Zhang, S. Liu, H. Meng, and A. K. Nandi, "Superpixel-based fast fuzzy c-means clustering for color image segmentation," IEEE Transactions on Fuzzy Systems, vol. 27, no. 9, pp. 1753-1766, 2018.
[38] S. Zhou, Q. Chen, and X. Wang, "Fuzzy deep belief networks for semi-supervised sentiment classification," Neurocomputing, vol. 131, pp. 312-322, 2014.
[39] S. R. Price, S. R. Price, and D. T. Anderson, "Introducing fuzzy layers for deep learning," in International Conference on Fuzzy Systems (FUZZ-IEEE), pp. 1-6, IEEE, 2019.
[40] K. Huang, Y. Zhang, H. Cheng, P. Xing, and B. Zhang, "Semantic segmentation of breast ultrasound image with fuzzy deep learning network and breast anatomy constraints," Neurocomputing, vol. 450, pp. 319-335, 2021.
[41] W. Ding, M. Abdel-Basset, H. Hawash, and W. Pedrycz, "Multimodal infant brain segmentation by fuzzy-informed deep learning," IEEE Transactions on Fuzzy Systems, vol. 30, no. 4, pp. 1088-1101, 2022.
[42] W. Xie, C. Jacobs, J.-P. Charbonnier, and B. Van Ginneken, "Relational modeling for robust and efficient pulmonary lobe segmentation in CT scans," IEEE Transactions on Medical Imaging, vol. 39, no. 8, pp. 2664-2675, 2020.
[43] T.-C. Lee, R. L. Kashyap, and C.-N. Chu, "Building skeleton models via 3-D medial surface axis thinning algorithms," CVGIP: Graphical Models and Image Processing, vol. 56, no. 6, pp. 462-478, 1994.
[44] S. G. Armato III, G. McLennan, L. Bidaut, M. F. McNitt-Gray, C. R. Meyer, A. P. Reeves, B. Zhao, D. R. Aberle, C. I. Henschke, E. A. Hoffman, et al., "The lung image database consortium (LIDC) and image database resource initiative (IDRI): a completed reference database of lung nodules on CT scans," Medical Physics, vol. 38, no. 2, pp. 915-931, 2011.
| [] |
[
"When does Privileged Information Explain Away Label Noise?",
"When does Privileged Information Explain Away Label Noise?"
] | [
"Guillermo Ortiz-Jimenez ",
"Mark Collier ",
"Anant Nawalgaria ",
"Alexander D'amour ",
"Jesse Berent ",
"Rodolphe Jenatton ",
"Effrosyni Kokiopoulou "
] | [] | [] | Leveraging privileged information (PI), or features available during training but not at test time, has recently been shown to be an effective method for addressing label noise. However, the reasons for its effectiveness are not well understood. In this study, we investigate the role played by different properties of the PI in explaining away label noise. Through experiments on multiple datasets with real PI (CIFAR-N/H) and a new large-scale benchmark ImageNet-PI, we find that PI is most helpful when it allows networks to easily distinguish clean from mislabeled data, while enabling a learning shortcut to memorize the mislabeled examples. Interestingly, when PI becomes too predictive of the target label, PI methods often perform worse than their no-PI baselines. Based on these findings, we propose several enhancements to the state-of-the-art PI methods and demonstrate the potential of PI as a means of tackling label noise. Finally, we show how we can easily combine the resulting PI approaches with existing no-PI techniques designed to deal with label noise. stop grad kitty kitty kitty annotation time: 3 hours confidence: 5% annotator id: #75147 stop grad kitty puppy puppy annotation time: 5 secs confidence: 99% annotator id: #00342 | 10.48550/arxiv.2303.01806 | [
"https://export.arxiv.org/pdf/2303.01806v2.pdf"
] | 257,353,283 | 2303.01806 | b14b2c2945fa6094849da3046325e77aa58b992f |
When does Privileged Information Explain Away Label Noise?
Guillermo Ortiz-Jimenez
Mark Collier
Anant Nawalgaria
Alexander D'amour
Jesse Berent
Rodolphe Jenatton
Effrosyni Kokiopoulou
When does Privileged Information Explain Away Label Noise?
Leveraging privileged information (PI), or features available during training but not at test time, has recently been shown to be an effective method for addressing label noise. However, the reasons for its effectiveness are not well understood. In this study, we investigate the role played by different properties of the PI in explaining away label noise. Through experiments on multiple datasets with real PI (CIFAR-N/H) and a new large-scale benchmark ImageNet-PI, we find that PI is most helpful when it allows networks to easily distinguish clean from mislabeled data, while enabling a learning shortcut to memorize the mislabeled examples. Interestingly, when PI becomes too predictive of the target label, PI methods often perform worse than their no-PI baselines. Based on these findings, we propose several enhancements to the state-of-the-art PI methods and demonstrate the potential of PI as a means of tackling label noise. Finally, we show how we can easily combine the resulting PI approaches with existing no-PI techniques designed to deal with label noise.
Introduction
Label noise, or incorrect labels in training data, is a pervasive problem in machine learning that is becoming increasingly common as we train larger models on more weakly annotated data. Human annotators are often the source of this noise, assigning incorrect labels to certain examples (Snow et al., 2008; Sheng et al., 2008), e.g., when the class categories are too fine-grained. Incorrect labeling can also come from using other models to provide proxy labels (Prabhu et al., 2022). However, the standard approach in supervised learning is to ignore this issue and treat all labels as correct, leading to significant drops in model performance as the models tend to memorize the noisy labels and degrade the learned representations (Zhang et al., 2017).
Recently, some studies have proposed to mitigate the effect of label noise by leveraging privileged information (PI) (Vapnik & Vashist, 2009;Collier et al., 2022) i.e., features available at training time but not at test time. Examples of PI are features describing the human annotator that provided a given label, such as the annotator ID, the amount of time needed to provide the label, the experience of the annotator, etc. While several PI methods have shown promising gains in mitigating the effects of label noise (Lopez-Paz et al., 2016;Lambert et al., 2018;Collier et al., 2022), the reasons behind their success are not fully understood. Moreover, the fact that existing results have been provided in heterogeneous settings makes comparisons and conclusions difficult to be drawn.
In this work, we aim to standardize the evaluation of PI and conduct a large-scale study on the role of PI in explaining away label noise. We examine the performance of several PI methods on different noisy datasets and analyze their behavior based on the predictive properties of the available PI. Interestingly, we find that when the PI is too predictive of the target label, the performance of most PI methods significantly degrades below their no-PI baselines as the models fail to learn the associations between the non-privileged input features and the targets. Conversely, we discover that the strongest form of PI exhibits two main properties: (i) it allows the network to easily separate clean and mislabeled examples, and (ii) it enables an easier "learning shortcut" (in a sense to be clarified later) to overfit to the mislabeled examples (see Figure 1). When both these properties are present, the performance of PI methods significantly exceeds their no-PI counterparts by becoming more robust to label noise.
Overall, we observe that using PI during training can enable models to discover shortcuts that can prevent learning the relationship between features and labels (Geirhos et al., 2020; D'Amour et al., 2020).

[Figure 1: TRAM (Collier et al., 2022) with noisy labels. Having access to PI allows a network to use a learning shortcut to memorize the mislabeled examples using only PI. This protects the extraction of features from the actual data, which are only refined using the correctly labeled examples.]

On the one hand, these PI-enabled shortcuts can have a positive effect when the relationship being ignored only concerns incorrect labels, which thus
prevents the learned feature representations from being contaminated by label noise. On the other hand, they can have a detrimental effect when the clean labels are also affected, which prevents the model from learning the correct association between features and targets. Shortcuts are key to understanding the role of PI in mitigating label noise and are directly linked to deep learning dynamics (Zhang et al., 2017).
When focusing on the dynamics of the strongest PI methods, we show that using PI allows training larger models on datasets with a higher level of noise. PI can counteract the negative effect of memorizing incorrect associations between features and incorrect labels as it enables a shortcut that primarily affects the mislabeled examples. We use these new insights to improve current state-of-the-art PI algorithms.
Overall, the main contributions of our work are:
• We present the first large-scale study on the role of PI on supervised noisy datasets of various types.
• We release ImageNet-PI, the largest available testbed for experimenting with PI and label noise.
• We find that effective PI enables learning shortcuts only on mislabeled data greatly benefitting performance.
• We improve a wide range of PI methods using simple improvements, and demonstrate cumulative gains with other state-of-the-art noisy labels methods.
We believe our findings can have a significant impact on future research about both label noise and PI. They indeed not only inform us about the desired properties of the ideal PI (which can help design and collect PI features in practice) but also provide practical insights for improving existing methods. Formally capturing our empirical results is another promising direction for future research.
Methodology
Our large-scale experiments provide a comprehensive analysis of the usefulness of PI in the presence of label noise. Previous results have been provided in heterogeneous settings with testing performed on different datasets and various types of PI, making well-aligned comparisons difficult. We aim at standardizing these comparisons, making the unification of all the settings of our experiments part of our core contribution. Our code can be found at https://github.com/google/uncertainty-baselines. In what follows, we briefly describe the datasets and baselines used in our study.
Datasets
In this work, we address the supervised learning setting with PI and label noise as described in Collier et al. (2022). Our training data consists of triplets (x,ỹ; a), where x ∈ R d is a set of input features,ỹ ∈ {1, . . . , K} is a noisy target label (assuming K classes), and a ∈ R p is a vector of PI features. In this work, we mainly focus in the case when these PI features are related to the annotation process, as this is a common source of label noise (Snow et al., 2008;Sheng et al., 2008). This PI may include information about the annotator, such as their ID or experience; or about the process itself, such as the annotation duration or confidence. At test time, we do not have access to any PI and evaluate our models based only on clean (x, y) pairs from the data distribution.
We use relabelled versions of standard image recognition datasets which provide various forms of PI about the annotation process in our experiments. These datasets allow us to access both clean (y) and noisy labels (ỹ), but we only use the noisy labels for training and hyperparameter selection (see details in Appendix D for a discussion about the effect of noisy labels at this step). The clean labels are only used for evaluation. The datasets we use offer a range of training conditions, including differing numbers of samples and classes, levels of noise, types of PI, image sizes, and annotation processes, making our findings widely applicable.
Some of these datasets provide multiple annotations per example. Nonetheless, to create a unified benchmark we only sample one label per example for datasets that provide multiple labels. So that we can control the noise level and examine its impact on the performance of PI methods, we create high and low noise versions of each dataset, when possible. We follow the terminology of Wei et al. (2022) by naming the low noise version the "uniform" version, which selects one of the available labels uniformly at random, and the high noise version, the "worst" version, which always selects an incorrect label if available. The "worst" version is by design more noisy than the "uniform" one.
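As an illustration of the two variants we just described, the following is a minimal Python sketch (the function name and signature are ours, not part of the released code) of how one training label can be selected per example:

import numpy as np

def select_noisy_label(annotations, clean_label, mode, rng):
    # annotations: list of labels provided by different annotators for one example.
    # clean_label: the ground-truth label (used only to define the "worst" variant).
    if mode == "uniform":
        # "uniform": pick any of the available annotations at random.
        return rng.choice(annotations)
    # "worst": always pick an incorrect annotation whenever one exists.
    wrong = [a for a in annotations if a != clean_label]
    return rng.choice(wrong) if wrong else rng.choice(annotations)

# Example usage with a reproducible generator:
# rng = np.random.default_rng(0)
# label = select_noisy_label([3, 3, 5], clean_label=3, mode="worst", rng=rng)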
CIFAR-10/100N. A relabelled version of the CIFAR-10/100 datasets (Krizhevsky, 2009) that includes multiple annotations per image (Wei et al., 2022). The raw data includes information about the annotation process, such as annotation times and annotator IDs, but this information is not given at the example level. Instead, it is provided as averages over batches of examples, resulting in coarse-grained PI. We will show that the PI baselines perform poorly on this dataset. The "uniform" version of CIFAR-10N agrees 82.6% of the time with the clean labels, and the "worst" version 59.8%. CIFAR-100N agrees 59.8% with the clean labels. For reference, training our base architectures without label noise and without PI achieves test accuracies of 93.5% and 77.9% on CIFAR-10 and CIFAR-100, respectively.
CIFAR-10H. An alternative human-relabelled version of CIFAR-10, where the new labels are provided only on the test set (Peterson et al., 2019). As in Collier et al. (2022), when we train on CIFAR-10H, we evaluate the performance of the models on the original CIFAR-10 training set (since CIFAR-10H relabels only the validation set). Contrary to the CIFAR-N datasets, CIFAR-10H contains rich PI at the example-level, with high-quality metadata about the annotation process. The "uniform" version agrees 95.1% of the time with the clean labels, and the "worst" 35.4%. For reference, training our base architecture without label noise and without PI achieves a test accuracy of 88.4% on CIFAR-10H.
ImageNet-PI. Inspired by Collier et al. (2022), a relabeled version of ImageNet (Deng et al., 2009) in which the labels are provided by a set of pre-trained deep neural networks with different architectures. During the relabeling process, we sample a random label from a temperature-scaled predictive distribution of each model on each example. This leads to label noise that is asymmetrical and feature-dependent. Technical details of the relabeling process and temperature-scaling can be found in Appendix A. The PI of the dataset comes from the confidences of the models on the sampled labels, the parameter counts of the models, and the models' test accuracies on the clean test distribution. These PI features serve as a good proxy for the expected reliability of each model. The ImageNet-PI high-noise version that we use agrees 16.2% of the time with the clean labels and the low-noise version 51.9%. For reference, training our base architecture without label noise and without PI achieves a test accuracy of 76.2% on ImageNet. As a contribution of this work, we open-source ImageNet-PI (with different amounts of label noise) to encourage further research on PI and label noise at a scale larger than possible today with CIFAR-N/H. The data is publicly available at https://github.com/google-research-datasets/imagenet_pi.
PI Algorithms
We study the performance of four representative approaches that exploit PI. They all have been shown to be effective at mitigating the effect of label noise (Collier et al., 2022):
no-PI. A standard supervised learning baseline that minimizes the cross-entropy loss on the noisy labels to approximate p(ỹ|x) without access to PI.
Distillation (Lopez-Paz et al., 2016). A knowledge distillation method in which a teacher model is first trained using standard maximum likelihood estimation with access to PI to approximate p(ỹ|x, a). A student model with the same architecture is later trained to match the output of the teacher without access to the PI. We also provide results for a standard self-distillation baseline in which the teacher model does not have access to PI (Hinton et al., 2015).
TRAM (Collier et al., 2022). A method based on a two-headed model in which one head has access to PI and the other one does not. At training time, a common feature representation ϕ(x) is fed to two classification heads π(ϕ(x), a) ("PI head") and ψ(ϕ(x)) ("no-PI head") to jointly solve

min ϕ,π,ψ E(x,a,ỹ) [ℓ(π(ϕ(x), a), ỹ) + ℓ(ψ(ϕ(x)), ỹ)].   (1)

Importantly, during training, the no-PI feature extractor ϕ is updated using only the gradients coming from the PI head. At test time, only the no-PI head is used for prediction.
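To make Eq. (1) and the stop-gradient concrete, the following is a minimal PyTorch sketch of the TRAM training objective; the architecture, head sizes, and names are illustrative assumptions and not the exact implementation used in our experiments.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TRAM(nn.Module):
    def __init__(self, feature_extractor, feat_dim, pi_dim, num_classes):
        super().__init__()
        self.phi = feature_extractor  # shared feature extractor phi(x)
        self.pi_head = nn.Sequential(  # pi(phi(x), a): head with PI access
            nn.Linear(feat_dim + pi_dim, 512), nn.ReLU(),
            nn.Linear(512, num_classes))
        self.no_pi_head = nn.Linear(feat_dim, num_classes)  # psi(phi(x))

    def loss(self, x, a, y_noisy):
        feats = self.phi(x)
        logits_pi = self.pi_head(torch.cat([feats, a], dim=-1))
        # Stop-gradient: phi is updated only through the PI head, so the
        # no-PI head receives detached features.
        logits_no_pi = self.no_pi_head(feats.detach())
        return (F.cross_entropy(logits_pi, y_noisy)
                + F.cross_entropy(logits_no_pi, y_noisy))

    @torch.no_grad()
    def predict(self, x):
        # Test time: no PI is available, so only the no-PI head is used.
        return self.no_pi_head(self.phi(x))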
Approximate Full Marginalization (Collier et al., 2022). A neural network is first trained using maximum likelihood estimation with access to PI to approximate p(ỹ|x, a). During inference, a Monte-Carlo estimate is used to approximate the marginal p(ỹ|x) = ∫ p(ỹ|x, a) p(a|x) da, typically further assuming the independence p(a|x) ≈ p(a). Note that this process increases the memory and computational costs during inference, as it requires computing the output of the network for each of the different sampled values of a (in practice, on the order of 1,000 extra values).
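To illustrate the marginalization step, here is a minimal PyTorch sketch. It assumes a trained model(x, a) returning logits conditioned on PI and a bank pi_bank of PI vectors collected from the training set; all names are our own.

import torch
import torch.nn.functional as F

@torch.no_grad()
def afm_predict(model, x, pi_bank, num_samples=1000):
    # Approximate p(y|x) = E_{a ~ p(a)}[p(y|x, a)] by averaging the
    # PI-conditional predictive distribution over PI vectors drawn from
    # the training set (using the independence assumption p(a|x) ~ p(a)).
    idx = torch.randint(len(pi_bank), (num_samples,))
    probs = torch.stack([
        F.softmax(model(x, pi_bank[i].expand(x.shape[0], -1)), dim=-1)
        for i in idx
    ])
    return probs.mean(dim=0)  # marginal predictive distribution p(y|x)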
All the methods use the same underlying network architecture with minimal changes to accommodate their specific requirements, like Collier et al. (2022). In particular, during inference, all methods use exactly the same network, except for the approximate full marginalization (AFM) baseline, which has additional parameters to deal with the sampled PI.
In all experiments, we use the same protocol for training with noisy labels and evaluating on a clean test set. As early stopping is a strong baseline against label noise (Bai et al., 2021), we always report values of test accuracy at the end of the epoch that achieves the best performance on a held-out validation split of the noisy training set. We reproduce our results without early stopping in Appendix D. Unless otherwise specified, we conduct a grid search to tune the most important hyperparameters of each method for each experiment, and report the mean test accuracy and standard deviation over five runs. Further details on the experimental setup can be found in Appendix B.
When is PI helpful?
Table 1 (Original) shows the performance of the different PI algorithms on our collection of noisy datasets, where we see that leveraging PI does not always yield big gains in performance. Indeed, while TRAM and AFM substantially improve upon the no-PI baseline on CIFAR-10H and ImageNet-PI, they do not perform much better on CIFAR-10N and CIFAR-100N. Moreover, we observe little gain of Distillation (PI) over the vanilla self-distillation baseline.
The performance disparities of the same algorithms on datasets where the main source of variation is the available PI, i.e., CIFAR-10N vs. CIFAR-10H, highlight that leveraging PI is not always helpful. In fact, depending on the predictive properties of the PI and the noise distribution, we report very different results. This raises the questions: i) "what makes PI effective for these algorithms?" and ii) "how do they exploit PI to explain away label noise?".
To answer these question, we perform a series of controlled experiments in which we train our PI methods using different PI features (including both real and synthetic ones). By doing so our objective is to identify the main mechanisms that lead to the top performance of these algorithms.
Fully predictive PI
Hypothesis: The PI a always complements the information about the labels ỹ contained in x.
It is natural to assume that knowing a on top of x can help predict ỹ and thus improve over supervised learning. However, this reasoning is flawed as it forgets that during inference the models cannot exploit a. On the contrary, as we will see, if a is very predictive of the target ỹ, the test performance can severely degrade.
We test this hypothesis by retraining the algorithms on the noisy datasets but using a = ỹ instead of the original PI features. That is, having access to fully predictive PI.
Finding: When a is fully predictive of ỹ, most PI methods perform worse than the no-PI baselines.
As we can see in Table 1 (Labels), all the PI baselines suffer greatly in this regime. The reason for this is simple: when the PI is too informative of the target label, the models rely heavily on the PI to explain the label; they are thus discouraged from learning any associations between x and y and do not learn any meaningful feature representations.
In this regard, we see how Distillation (PI) achieves roughly the same performance as Distillation (no-PI), while TRAM and AFM achieve very low test accuracies.
The fact that very predictive PI can hurt the performance of these algorithms highlights a key element of their dynamics: PI can enable learning shortcuts (D'Amour et al., 2020; Geirhos et al., 2020) that prevent learning certain associations between x and ỹ, possibly by starving the gradient signal that updates ϕ(x) (Pezeshki et al., 2021). This has practical implications, as it discourages blindly appending arbitrarily complex metadata to a during training, since such metadata could be very predictive of the target label.
Noise indicator
Hypothesis: PI helps because it can separate mislabeled from correct examples.
We saw that when a is too predictive of ỹ, the PI approaches perform poorly. We now turn to an alternative hypothesis of why PI can be beneficial to explain away label noise: the PI features can help the network separate the clean from the mislabeled examples. Indeed, the original motivation for using PI to fight label noise in Collier et al. (2022) was that annotator features, e.g., confidences, could act as implicit indicators of label reliability. Albeit natural, this hypothesis has not been tested before, but it can be tested using the datasets in this study. Recall that we have access to clean and noisy labels for all the training samples, and thus we can synthesize an indicator signal 1(ỹ ≠ y) that takes a value of 1 when the clean and noisy labels disagree (i.e., the example is mislabeled) and 0 otherwise. Table 1 (Indicator) shows the results of training using a = 1(ỹ ≠ y).
Finding: Some PI methods perform better with the original PI than with an oracle noise indicator.
Interestingly, although the performances in the Indicator column are generally higher than in the Original one, this is not always the case, and the indicator sometimes underperforms or does not significantly improve over the original PI (cf. AFM and TRAM on CIFAR-10H and ImageNet-PI). This suggests that the PI methods do more than just leverage the noise-indication abilities of the PI. Clearly, if the original PI can sometimes outperform even an ideal noise indicator signal 1(ỹ ≠ y) used as PI, then there must be other information in the PI that the algorithms can exploit to improve performance.
Memorization dynamics play a significant role
Inspecting the training dynamics of the algorithms can help understand the previous results. For example, Figure 2 shows the evolution of test and training accuracies of a TRAM model on CIFAR-10H using different PI features. The original PI leads to better final test accuracy than the noise indicator. Meanwhile, models trained using annotator labels as PI do not seem to learn anything useful. These differences are explained by the rates at which these models fit the mislabeled and correct samples using each of the TRAM heads.
Focusing on the training accuracies of the PI head, Figure 2 shows that the models trained with annotator labels as PI quickly fit all training examples (both mislabeled and correct) using the PI head, which in turn slows the training speed of the no-PI head (central column). This happens because the feature extractor is only updated by gradients from the PI head, leading to a lack of meaningful representation of p(ỹ|x) when the model learns to fit all examples using PI features alone. The indicator, on the other hand, provides the same PI value for all examples within each group, and these features alone are not enough to separate the noisy training set. Indeed, as we will see, having access to PI that can be easily memorized on the mislabeled examples is fundamental to maximize performance.
Near-optimal PI features
Hypothesis: The optimal PI enables a learning shortcut to memorize only the mislabeled examples.
The experiments using the annotator labels as PI are a clear example of a PI-enabled learning shortcut which is very detrimental to model performance. On the other hand, the dynamics of the original models hint that the same shortcut mechanism can also have a positive effect when it only applies to the mislabeled examples. To test this hypothesis, we design a new form of PI features, denoted as near-optimal in the tables and plots. As its name indicates, this PI should allow the models to get very close to their top performance. The near-optimal features are designed to exploit the PI shortcut only on the mislabeled examples, allowing the model to learn freely on the correct ones. To that end, the near-optimal PI features consist of two concatenated values: (i) the indicator signal that says if a given example is mislabeled or not, and (ii) the annotator label, but only if that example is mislabeled; otherwise, an all-zero vector with the same dimensionality as the one-hot encoded labels is concatenated to the indicator signal.
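For concreteness, the following is a minimal NumPy sketch of how such near-optimal PI can be constructed when both clean and noisy labels are known; the function name is ours, and the construction follows the two-part recipe above literally.

import numpy as np

def near_optimal_pi(y_clean, y_noisy, num_classes):
    # Part (i): indicator bit that the example is mislabeled.
    mislabeled = (y_noisy != y_clean).astype(np.float32)[:, None]
    # Part (ii): one-hot annotator label, exposed only on mislabeled
    # examples; correct examples receive an all-zero vector instead.
    one_hot = np.eye(num_classes, dtype=np.float32)[y_noisy]
    return np.concatenate([mislabeled, mislabeled * one_hot], axis=1)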
Finding: When a learning shortcut is provided only for mislabeled examples, PI methods achieve top performance.
The results in Table 1 (Near-optimal) show that these PI features significantly outperform all other PI features by a large margin on all datasets when using TRAM or AFM. Similarly, in Figure 2 we observe that the dynamics of the near-optimal models fully match our expectations. The near-optimal models train the fastest on the mislabeled examples on the PI head, leading to a very slow training speed on the mislabeled examples on the no-PI head. Moreover, since the mislabeled examples no longer influence the updates of the feature extractor (their accuracies are already maximal on the PI head), the performance on the correct examples is much higher.
The same explanation applies to AFM whose dynamics are shown in Appendix E.1. In this case, the memorization of the mislabeled examples using PI alone also protects the no-PI features. This way, during inference, the PI sampled from the mislabeled examples simply adds a constant noise floor to the predicted probabilities of the incorrect labels. This averaged noise floor is usually much smaller than the probability predicted using the clean features of the no-PI, and thus does not lead to frequent misclassification errors.
Improving PI algorithms
In this section, we use the insights of the previous analysis to improve the design of PI methods. We perform ablation studies on different design parameters of the main PI approaches, identifying simple modifications that significantly boost their performance. We primarily focus on TRAM and AFM as these methods outperform Distillation (PI) by a large margin when the PI is helpful (cf. Table 1). We provide illustrative results here, and full results in Appendix E.
Model size
We explore how the model size affects performance. In particular, note that the parameter count of all PI algorithms can be split into two parts: the feature extractor ϕ of the standard features x and the tower π that processes the PI; see Eq. (1) and Section 2.2. We therefore perform an ablation study in which we scale each of these parts of the models separately.
Feature extractor. Figure 3 shows how test accuracy changes as we increase the size of the feature extractor of the PI approaches. The performance follows a U-shape, where scaling the model past a certain point harms final performance. Indeed, a larger capacity discourages the model from using PI features and causes overfitting to standard features, as shown by the simultaneous increase in training accuracy on mislabeled examples and decrease in test accuracy.
Finding: Increasing the feature extractor size discourages using the PI as a shortcut.
PI head size. Figure 4 shows the results of scaling the size of the PI processing tower while keeping the feature extractor size fixed. We observe how larger PI heads improve performance as they encourage more memorization using PI alone and protect the extraction of the no-PI features. This is illustrated by the decay of the training accuracy of the no-PI head on the mislabeled examples as the PI head grows.
Random PI can enable positive shortcuts
Hypothesis: Random PI that uniquely identifies each example can enable a PI shortcut that protects the model from memorizing incorrect labels with x.
The near-optimal, labels, and indicator signals of Table 1 are all synthetic PI features that cannot be used in practice, as they rely on the knowledge of which examples are mislabeled and which examples are correct. However, they show that having access to a signal that can be more easily memorized than the standard features x on the mislabeled examples is a good recipe to improve performance. This being said, a key property of incorrect labels is that they are, by definition, a mistake. In this sense, fitting an incorrect training label simply amounts to memorizing a specific training example whose features are not predictive of the target label, i.e., the features serve just as an example ID. In fact, any set of features which are different enough for each example could act as such an ID. Finding: Random PI is effective at reducing overfitting to the incorrect labels using x.
We evaluate this hypothesis in Table 2 where we introduce TRAM++: a version of TRAM in which the original PI features are augmented with a unique random vector for each example (experimental details are provided in Appendix F and results for AFM++ in Appendix G). As we can see, TRAM++ generally achieves better performance than TRAM alone, with greater improvements in those datasets where overfitting is a bigger issue (i.e., CIFAR).
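Below is a minimal NumPy sketch of this augmentation; the dimensionality and distribution of the random vectors are illustrative assumptions rather than the exact TRAM++ configuration (see Appendix F).

import numpy as np

def augment_pi_with_random_ids(pi, dim=16, seed=0):
    # Append one fixed random vector per training example to the original
    # PI; the vectors act as unique example IDs that are trivially easier
    # to memorize than the image features x.
    rng = np.random.default_rng(seed)
    random_ids = rng.normal(size=(pi.shape[0], dim)).astype(np.float32)
    return np.concatenate([pi, random_ids], axis=1)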
Combination with other no-PI techniques
In this section, we show experimentally that the performance improvements obtained by PI methods on noisy datasets can work symbiotically with other state-of-the-art techniques from the noisy label literature. In particular, we show that TRAM++ can be easily combined with Sparse Over-parameterization (SOP) (Liu et al., 2022) and heteroscedastic output layers (Collier et al., 2021) while providing cumulative gains with respect to those baselines (we provide further experiments combining TRAM with label smoothing (Szegedy et al., 2016) in Appendix H).
Sparse Over Parameterization (SOP)
Sparse over-parameterization (SOP) (Liu et al., 2022) is a state-of-the-art method which leverages the implicit bias of stochastic gradient descent (SGD) and over-parameterization to estimate and correct the noisy label signal, a concept which has proven to work well (Zhao et al., 2022). It does so by adding two new sets of K-dimensional parameters {u_i}_{i=1}^N and {v_i}_{i=1}^N, where N denotes the number of training points, and solving

min θ,{u_i,v_i} (1/N) Σ_{i=1}^N ℓ(f_θ(x_i) + u_i ⊙ u_i − v_i ⊙ v_i, ỹ_i)

using SGD. This specific parameterization biases the solution of SGD towards the recovery of the noise signal ϵ_i = u_i ⊙ u_i − v_i ⊙ v_i that corrupts y_i, i.e., ỹ_i ≈ y_i + ϵ_i, implicitly assuming that ϵ_i is sparse across the dataset.
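The following minimal PyTorch sketch implements the per-example residual of the objective above; the class name and initialization scale are our own assumptions, not the exact settings of Liu et al. (2022).

import torch
import torch.nn as nn

class SOPResidual(nn.Module):
    # Per-example residuals u_i ⊙ u_i − v_i ⊙ v_i added to the logits.
    def __init__(self, num_examples, num_classes, init_scale=1e-8):
        super().__init__()
        self.u = nn.Parameter(init_scale * torch.randn(num_examples, num_classes))
        self.v = nn.Parameter(init_scale * torch.randn(num_examples, num_classes))

    def forward(self, logits, example_idx):
        # The elementwise squares keep both terms non-negative, which,
        # together with SGD's implicit bias, favors sparse noise estimates.
        return logits + self.u[example_idx] ** 2 - self.v[example_idx] ** 2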
In this work, we explore whether the combination of TRAM++ with SOP can yield cumulative gains in performance against label noise. In particular, we propose a simple two-step training process to combine them: (i) we first pretrain a neural network using TRAM++, and (ii) we finetune the no-PI side of the network using the SOP loss without stop-gradients. Table 2 shows the results of this method, where we see that, indeed, TRAM+SOP significantly outperforms TRAM++ or SOP alone on all datasets. More experimental details can be found in Appendix I.
Heteroscedastic output layers
Finally, we further analyze the combination of TRAM with HET, another state-of-the-art no-PI baseline from the noisy label literature that can be scaled up to ImageNet scale (Collier et al., 2021). HET here refers to the use of heteroscedastic output layers to model the aleatoric uncertainty of the predictions without PI. In particular, we apply HET layers to both heads of TRAM++ and follow the same training setup. We call the resulting approach TRAM+HET.
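For intuition, the following is a simplified sketch of a heteroscedastic output head with a diagonal noise covariance; the actual HET method of Collier et al. (2021) uses a low-rank parameterization of the covariance and a tuned temperature, so this is only a hedged approximation with names of our choosing.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HetHead(nn.Module):
    def __init__(self, feat_dim, num_classes, num_mc_samples=100):
        super().__init__()
        self.mean = nn.Linear(feat_dim, num_classes)
        self.log_std = nn.Linear(feat_dim, num_classes)  # input-dependent noise

    # Monte-Carlo averaging of the softmax over sampled logit perturbations
    # models input-dependent (aleatoric) label noise.
        self.num_mc_samples = num_mc_samples

    def forward(self, feats):
        mu = self.mean(feats)
        std = self.log_std(feats).exp()
        eps = torch.randn(self.num_mc_samples, *mu.shape, device=mu.device)
        return F.softmax(mu + std * eps, dim=-1).mean(dim=0)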
Our experiments, presented in Table 2, show that the TRAM+HET model outperforms both TRAM++ and HET applied alone. More experimental details about that model combination can be found in Appendix J. All in all, these results corroborate our main findings:
Finding: PI methods work symbiotically with other no-PI algorithms from the noisy label literature.
Related work
The general framework of learning with privileged information (Vapnik & Vashist, 2009) has been widely studied in deep learning, with many works exploring different baselines, including loss manipulation (Yang et al., 2017), distillation (Lopez-Paz et al., 2016), or Gaussian dropout (Lambert et al., 2018). This line of work has mainly focused on the noiseless scenario, conceiving PI as a guiding signal that helps identify easy or hard instances (Vapnik & Izmailov, 2015). Similar to our work, Yang et al. (2022) also studied the role of PI in improving the performance of deep learning methods, but focused on the task of learning-to-rank using distillation methods in the noiseless setting.
More recently, Collier et al. (2022) proposed a new perspective on PI, arguing that it can make models more robust to the presence of noise. Their proposed PI approach, referred to as TRAM, led to gains on various experimental settings, with both synthetic and real-world noise. However, their results lacked a detailed analysis of how different sources of PI affect performance.
Our work takes inspiration from the rich deep-learning theory studying the memorization dynamics of neural networks (Zhang et al., 2017; Rolnick et al., 2017; Toneva et al., 2019; Maennel et al., 2020; Baldock et al., 2021). In the no-PI setting, the tendency of neural networks to memorize incorrect labels only later during training has been heavily exploited by the noisy-label community through techniques such as early stopping and regularization (Liu et al., 2020; Bai et al., 2021). Other works have exploited the intrinsic difference between the learning of clean and mislabeled examples to detect and correct misclassification errors using self-supervision (Veit et al., 2017; Li et al., 2020), co-teaching (Han et al., 2018), or regularization (Cheng et al., 2021). Finally, many works have attempted to model the label corruption process by estimating the label transition matrix (Patrini et al., 2017) or the noisy signal directly in the prediction space (Liu et al., 2022). In general, we see this line of research about noisy labels (Song et al., 2020) as orthogonal to the use of PI, and we have experimentally shown that our PI approach is in fact complementary and can be gracefully combined with such techniques. Some aspects of this work are suggestive of causal reasoning. In particular, explaining away is a well-known phenomenon when there are multiple explanations for the value that a particular variable has taken, e.g., whether it is the ground-truth label correctly annotated or a mistake from an annotator (Pearl, 2009). We do not use causal formalism explicitly in this work, although we see similar learning dynamics at play in our results. PI (often called auxiliary labels) is also used in causally-motivated work on robust ML, although this is usually focused on the distinct problem of handling spurious correlations rather than overcoming label noise (Kallus et al., 2018; Veitch et al., 2021; Makar et al., 2022). In self-supervised learning, the removal of shortcuts is also a topic of interest (Minderer et al., 2020).
Conclusions
In this work, we have presented a systematic study investigating which forms of PI are most effective at explaining away label noise. In doing so, we have found that the most helpful PI allows the networks to separate correct from mislabeled examples in feature space, while also enabling an easier learning shortcut to memorize the mislabeled examples. We have also shown that methods which use appropriate PI to explain away label noise can be combined with other state-of-the-art methods to remove noise and achieve cumulative gains. Exploring this direction further is a promising avenue for future work. Overall, our insights show that the use of PI is a promising means of fighting label noise, and they further highlight that collecting the right PI in datasets requires some care to enable the learning of effective shortcuts.
Appendices
This appendix is organized as follows: In Appendix A we describe the relabelling process used to generate ImageNet-PI. In Appendix B we describe in depth the experimental details of our experiments and our hyperparameter tuning strategy. Appendix C replicates our main findings on the low-noise versions of the PI datasets. Appendix D discusses and ablates the effect of early stopping in our experiments. Appendix E provides additional results of our ablation studies on other datasets. Appendix F and Appendix G give further details about TRAM++ and AFM++, respectively. Appendix H evaluates the combination of TRAM with label smoothing. Finally, Appendix I and Appendix J describe in depth the experimental setup used to combine SOP and HET with TRAM, respectively.
A. ImageNet-PI
ImageNet-PI is a re-labelled version of the standard ILSVRC2012 ImageNet dataset in which the labels are provided by a collection of 16 deep neural networks with different architectures pre-trained on the standard ILSVRC2012. Specifically, the pre-trained models are downloaded from tf.keras.applications 7 and consist of: ResNet50V2, ResNet101V2, ResNet152V2, DenseNet121, DenseNet169, DenseNet201, InceptionResNetV2, InceptionV3, MobileNet, MobileNetV2, MobileNetV3Large, MobileNetV3Small, NASNetMobile, VGG16, VGG19, Xception.
During the re-labelling process, we do not directly assign the maximum-confidence prediction of each of the models; instead, for each example, we sample a random label from the predictive distribution of each model on that example. Furthermore, to regulate the amount of label noise introduced when relabelling the dataset, ImageNet-PI allows the option to use stochastic temperature scaling to increase the entropy of the predictive distribution. The stochasticity of this process is controlled by a parameter β, which controls the inverse scale of a Gamma distribution (with shape parameter α = 1.0) from which the temperature values are sampled; a code snippet implementing this sampling is reproduced further below. Intuitively, smaller values of β translate to higher temperature values, as shown in Figure 5, which leads to higher levels of label noise since the softmax comes closer to a uniform distribution at high temperatures. This re-labelling process can produce arbitrarily noisy labels whose distribution is very far from symmetric, i.e., not all mis-classifications are equally likely. For example, it is more likely that similar dog breeds get confused with each other, but less likely that a 'dog' gets re-labeled as a 'chair'.
The PI in this dataset comes from the confidences of the models on the sampled label, their parameter count, and their test accuracy on the clean test distribution. These PI features are a good proxy for the expected reliability of each of the models. In our dataset release, we will provide the following files:
• labels-train.csv, labels-validation.csv: These files contain the new (noisy) labels for the training and validation sets, respectively. The new labels are provided by the pre-trained annotator models. Each file provides the labels in CSV format:
<image_id>,<label_1>,<label_2>,...,<label_16>
• confidences-train.csv, confidences-validation.csv: These files contain the confidence of each annotator model in its annotation, for the training and validation sets respectively. Each file provides the confidences in CSV format:
<image_id>,<confidence_1>,<confidence_2>,...,<confidence_16>
• annotator-features.csv: This file contains the annotator features (i.e., meta-data about the model annotators themselves) in CSV format (16 rows; one for each model annotator):
<model_accuracy>,<number_of_model_parameters>
In particular, we will provide two standardized sampled annotations obtained by applying the temperature sampling process discussed above: one with β = 0.1 corresponding to high label noise and one with β = 0.5 corresponding to low label noise.
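To make the file layout concrete, the following minimal Python sketch shows one way these CSV files could be parsed into per-example arrays. The file names match the release described above; the helper function, variable names and the flat PI vector at the end are our own illustrative assumptions, not part of an official loader.

import csv
import numpy as np

def read_csv_rows(path):
    # Each row: <image_id> followed by one value per annotator model (16 in total).
    with open(path, newline="") as f:
        return {row[0]: row[1:] for row in csv.reader(f)}

labels = read_csv_rows("labels-train.csv")              # image_id -> 16 noisy labels
confidences = read_csv_rows("confidences-train.csv")    # image_id -> 16 confidences

with open("annotator-features.csv", newline="") as f:
    # 16 rows of <model_accuracy>,<number_of_model_parameters>.
    annotator_features = np.array([[float(v) for v in row] for row in csv.reader(f)])

image_id = next(iter(labels))
y = np.array([int(v) for v in labels[image_id]])          # shape (16,)
c = np.array([float(v) for v in confidences[image_id]])   # shape (16,)
pi = np.concatenate([c, annotator_features.reshape(-1)])  # one possible flat PI vector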
B. Experimental details
We build upon the implementations and hyperparameters from the open-source Uncertainty Baselines codebase (Nado et al., 2021). All results in the paper are reported based on 5 random seeds. Unless specified otherwise, for TRAM and AFM models we set the PI tower width to 1024, as this was the default parameter in Collier et al. (2022); we use the same architecture for the Distillation (PI) teacher. The PI tower width controls the size of the subnetwork that integrates the PI, which is parameterized as a concatenation of the pre-processed PI (passed through a Dense + ReLU layer) with the representation of the non-PI inputs produced by the base Wide ResNet model, followed by a Dense + ReLU layer, a residual connection, and finally a concatenation of the joint feature space with the non-PI representation. The number of units in the Dense layers is controlled by the "PI tower width".
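Read literally, this parameterization could be sketched in Keras as follows; this is our interpretation of the description above rather than the authors' exact code, and the function name and the placement of the residual connection in particular are assumptions.

import tensorflow as tf

def pi_tower(x_repr, pi, width=1024):
    # Pre-process the PI and join it with the non-PI representation.
    pi_h = tf.keras.layers.Dense(width, activation="relu")(pi)
    joint = tf.keras.layers.Concatenate()([x_repr, pi_h])
    # Dense + ReLU on the joint features, plus a residual connection.
    h = tf.keras.layers.Dense(width, activation="relu")(joint)
    h = h + tf.keras.layers.Dense(width)(joint)
    # Finally, concatenate the joint feature space with the non-PI representation.
    return tf.keras.layers.Concatenate()([h, x_repr])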
B.1. Dataset-specific training settings

B.1.1. CIFAR

All CIFAR models are trained using an SGD optimizer with 0.9 Nesterov momentum for 90 epochs with a batch size of 256. We sweep over an initial learning rate of {0.01, 0.1}, with the learning rate decayed by a factor of 0.2 after 27, 54 and 72 epochs. We sweep over an L2 regularization parameter of {0.00001, 0.0001, 0.001}. Following Nado et al. (2021), we use a Wide ResNet model architecture with a model-width multiplier of 10 and a model depth of 28.

For distillation models we uniformly sample over a temperature interval of [0.5, 10]. For CIFAR-10N and CIFAR-100N we split the original training set into a training and a validation set; 98% of the examples are used for training and the remaining 2% are used as a validation set. Due to the smaller size of the CIFAR-10H training set (which, following Collier et al. (2022), is actually the original CIFAR test set), 96% of the original training set is used as a training set, with the remaining 4% used as a validation set. For TRAM++ and, where relevant, for AFM, we search over a no-PI loss weight of {0.1, 0.5}, a PI tower width of {512, 1024, 2048, 4096} and a random PI length of {8, 14, 28}. For heteroscedastic CIFAR models, we set the number of factors for the low-rank component of the heteroscedastic covariance matrix (Collier et al., 2021) to be 3 for CIFAR-10H and CIFAR-10N, and 6 for CIFAR-100N, and search over {0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 2.0, 3.0, 5.0} for the heteroscedastic temperature.
B.1.2. IMAGENET-PI
ImageNet models are trained using an SGD optimizer with 0.9 Nesterov momentum for 90 epochs with a batch size of 128. We set an initial learning rate of 0.05, decayed by a factor of 0.1 after 30, 60 and 80 epochs. We sweep over an L2 regularization parameter of {0.00001, 0.0001}. We use a ResNet-50 model architecture.
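This schedule can be expressed, for example, with a piecewise-constant decay in TensorFlow; the step count assumes the standard ImageNet training-set size and is our own bookkeeping, not a detail stated above.

import tensorflow as tf

steps_per_epoch = 1281167 // 128  # ImageNet training images / batch size (assumption)

schedule = tf.keras.optimizers.schedules.PiecewiseConstantDecay(
    boundaries=[30 * steps_per_epoch, 60 * steps_per_epoch, 80 * steps_per_epoch],
    values=[0.05, 0.005, 0.0005, 0.00005])  # 0.05 decayed by 0.1 at epochs 30/60/80

optimizer = tf.keras.optimizers.SGD(
    learning_rate=schedule, momentum=0.9, nesterov=True)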
For TRAM and AFM models by default we set the PI tower width to be 2048, with the same parameterization of the PI tower as for the CIFAR models. For distillation models we set the distillation temperature to be 0.5. We use 1% of the original ImageNet training set as a validation set.
For TRAM++ and, where relevant, for AFM, we set the no-PI loss weight to 0.5 and use a random PI length of 30. For heteroscedastic models we set the number of factors for the low-rank component of the heteroscedastic covariance matrix to 15 and search over {0.75, 1.0, 1.5, 2.0, 3.0} for the heteroscedastic temperature.
B.2. Hyperparameter tuning strategy
Unless otherwise indicated, we report the test-set accuracy at the hyperparameters determined by the arg max of the best validation-set accuracy (where the number of epochs is considered part of the set being maximized over). The validation set used has noisy labels generated by the same process as the training set. This implements a realistic and noisy hyperparameter search with early stopping that we believe most closely replicates what is possible in real-world scenarios where a clean validation set may be unavailable. However, other papers report test-set metrics determined by a hyperparameter sweep assuming the availability of a clean validation set and/or without early stopping, which can have a large impact on the reported test-set metrics (see Appendix D for results computed in this way).
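A minimal sketch of this selection rule is given below; the run and field names are illustrative, not taken from the codebase.

def select_test_accuracy(runs):
    # runs: list of dicts with per-epoch "val_acc" (noisy validation) and
    # "test_acc" (clean test) curves, one dict per hyperparameter configuration.
    run, epoch = max(
        ((r, e) for r in runs for e in range(len(r["val_acc"]))),
        key=lambda pair: pair[0]["val_acc"][pair[1]])
    # Early stopping: report clean test accuracy at the best-validation epoch.
    return run["test_acc"][epoch]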
C. Results on low-noise settings
In the main text, we always reported results for the high-noise settings of each of the datasets. However, we now show that all our findings from Table 1 also apply in the low-noise setting.

Table 3. Test accuracy of several methods trained using different features as PI on the low-noise versions of the datasets (baselines in gray and italics do not use PI). Here, Original denotes the standard PI of the dataset, Indicator a binary signal that separates clean from noisy examples, Labels the one-hot encoded labels, and Near-optimal a synthetic feature consisting of giving the annotator label to those examples that are mis-annotated and a zero-vector otherwise. Bold numbers represent significant maximum values across PI features where significance means p-value < 0.05.
D. Effect of early stopping
As early stopping is one of the strongest baselines against label noise, in all our experiments we held out a small portion of the noisy training set and reported clean test accuracy at the epoch with the best validation accuracy. However, to make sure that our findings do not depend on the use of early stopping, or on the amount of label noise in the validation set, we now present a reproduction of the results in Table 1 when either disabling early stopping or using a clean validation set to perform early stopping and hyperparameter tuning.

D.1. No early stopping

Table 4 shows the results of our benchmark without using early stopping. In general, we observe that without early stopping most baselines perform significantly worse, as they overfit more to the noisy labels. In this regard, since one of the main benefits of PI is that it prevents memorization of the noisy labels, we see that without early stopping the relative improvement of the PI techniques with respect to their no-PI baselines is much larger.

D.2. Clean validation set

Most of the datasets we studied have a significant amount of label noise in their training set. The small validation set we hold out from the training set is therefore also very noisy, which can affect the performance of early stopping and hyperparameter tuning. For this reason, we also provide results in Table 5 in which we use the clean labels from the validation set for hyperparameter tuning and early stopping. As we can see, most methods perform better in this regime, although our main findings about how the PI properties affect performance remain valid.
E. More results
In this section, we provide complete results for the experiments in the main paper on additional datasets and algorithms; the main findings carry over.
E.1. Training dynamics
In Figure 2 we provided a detailed analysis of the dynamics of TRAM on CIFAR-10H with different PI features. We now show results for TRAM on CIFAR-10N and CIFAR-100N (see Figure 6 and Figure 7, respectively). We also show results for AFM on CIFAR-10H, CIFAR-10N, and CIFAR-100N (see Figure 8, Figure 9 and Figure 10, respectively).
E.2. Feature extractor size
We replicate the results in Figure 3 for other settings with the same findings. In particular, we show results on CIFAR-10N and CIFAR-100N (see Figure 11 and Figure 12, respectively).
E.3. PI head size
We replicate the results in Figure 4 on CIFAR-10N and CIFAR-100N (see Figure 13 and Figure 14, respectively). In this case, however, we observe no clear trend in the results, probably because the original PI on these datasets is not good enough for TRAM and AFM to shine (cf. Table 1). In this regard, increasing the PI head size does not lead to better performance, as there is nothing to extract from the PI.
F. Design details of TRAM++
We now give the design details for TRAM++, the improved version of TRAM which appends a unique random PI vector to the original PI. In particular, we followed the same tuning strategy as in the rest of the TRAM experiments in the paper, and we also tuned the parameter λ that weighs the losses of the two heads, i.e.,

min_{ϕ,π,ψ} E_{(x,a,ỹ)} [ ℓ(π(ϕ(x), a), ỹ) + λ ℓ(ψ(ϕ(x)), ỹ) ].   (2)

Collier et al. (2022) suggested that the gradients of the no-PI head do not affect the updates of the feature extractor, and thus λ could be folded directly into the tuning of the global learning rate of TRAM. However, in our experiments we found that tuning λ for a fixed number of epochs can lead to significant gains in performance, as it can slow down the training of the no-PI head. As seen in Figure 15, increasing λ has the same effect as increasing the learning rate of the no-PI head, and a sweet spot exists for values of λ < 1 in which the no-PI head trains fast enough to fit the clean examples but avoids learning all the noisy ones.
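As a concrete reading of Eq. (2), a minimal sketch could look as follows, assuming (as stated above) that gradients from the no-PI head ψ are blocked from the feature extractor ϕ. The concatenation of the PI into the PI head is a simplification of the full PI tower, and the default λ = 0.5 mirrors the ImageNet setting; this is our illustration, not the authors' code.

import tensorflow as tf

ce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

def tram_loss(phi, pi_head, nopi_head, x, a, y_noisy, lam=0.5):
    z = phi(x)
    pi_logits = pi_head(tf.concat([z, a], axis=-1))
    # Stop-gradient: the no-PI head psi does not update the feature extractor phi.
    nopi_logits = nopi_head(tf.stop_gradient(z))
    # Eq. (2): PI-head loss plus lambda times the no-PI-head loss.
    return ce(y_noisy, pi_logits) + lam * ce(y_noisy, nopi_logits)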
In general, λ was not tuned in any of the other experiments, in order to remain as close as possible to the original TRAM implementation. However, for the TRAM++ experiments, which aimed to achieve the best possible performance out of TRAM, λ was tuned.
G. Design of AFM++
In Section 4.2, we have seen that appending random PI that uniquely identifies each example to the original PI can sometimes induce beneficial shortcuts in TRAM++. We now test the same strategy applied to AFM, and design AFM++, an augmented version of AFM with additional random PI. Table 6 shows the results of our experiments where we see that AFM++ also clearly improves over "vanilla" AFM. Again, the improvements are greater in those datasets where overfitting is a bigger issue in the first place.
H. Combination of TRAM with label smoothing
We also evaluate the combination of TRAM with label smoothing (LS). In particular, we follow the standard label smoothing procedure and add the label smoothing hyperparameter to the hyperparameters swept over in Table 1. More specifically, we sweep over label smoothing of 0.2, 0.4, 0.6 and 0.8 and select the optimal hyperparameter setting following the same procedure as all experiments in the paper. The results are given in Table 7.
We observe that on all datasets, adding label smoothing to the TRAM method leads to performance improvements, demonstrating that TRAM can be successfully combined with label smoothing. More generally, this observation strengthens the point that TRAM and TRAM++ are compatible and yield additive performance gains when combined with widely used methods developed for noisy labels.
I. Experimental details for SOP and TRAM+SOP
As we have established in Section 5, the combination of TRAM and SOP has the potential to achieve cumulative gains in robustness to label noise. TRAM, with its original PI, has been shown to improve performance on datasets with dense noise, such as CIFAR-10H (worst), compared to a model with no PI. However, the PI may not always be explanatory of the noise and even if it is, it may not fully explain away all of the noise. Additionally, the feature extractor and subsequent layers of the model may still be susceptible to noise, even when the PI is able to explain away the noise.
On the other hand, SOP has been shown to work well for sparsely distributed noise and operates on the principle of modeling out the noise, which is distinct from the method used by TRAM. As these principles are complementary to one another, we propose to combine the advantages of both methods to achieve cumulative gains.
As highlighted in Section 5, the combination of TRAM+SOP consists of two main steps: pre-training with TRAM and fine-tuning with SOP. Our implementation of TRAM used regular TRAM with a few enhancements from TRAM++, such as random PI and a larger PI head size. It is important to note that our experiments were conducted using our own implementation of SOP and, although it incorporated the SOP method and was sanity-checked with the original authors of the paper, our experimental baseline environment and search space were different from theirs. As a result, the test accuracy on the CIFAR-N datasets may be lower than the results reported in the original SOP paper. However, the primary objective of these experiments was to explore whether TRAM + SOP can achieve cumulative gains over the respective implementations of TRAM and SOP alone and our results support this hypothesis.
In our experiments, both the SOP and TRAM+SOP models were trained for a total of 120 epochs, with a learning rate schedule that decayed at epochs 40, 80 and 110. We employed the SGD with Nesterov momentum for TRAM and regular momentum for SOP as in Liu et al. (2022). For a detailed description of the SOP parameters, we refer the reader to the original SOP paper. It is important to note that the results presented here for the TRAM+SOP method do not include all proposed enhancements in Liu et al. (2022). Further gains in performance may be achievable by incorporating these advancements and jointly optimizing the hyperparameter space for both the TRAM and SOP pretraining and fine-tuning stages.
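For reference, the optimizer choices above can be set up as follows; the momentum types, epoch budget and decay epochs are as stated, while the initial learning rate, decay factor and step bookkeeping are illustrative assumptions.

import tensorflow as tf

steps_per_epoch = 50000 // 128  # illustrative; depends on dataset and batch size

# Learning rate decayed at epochs 40, 80 and 110 over 120 total epochs
# (initial rate and decay factor are assumptions; the epochs are as stated).
schedule = tf.keras.optimizers.schedules.PiecewiseConstantDecay(
    [e * steps_per_epoch for e in (40, 80, 110)],
    [0.1, 0.01, 0.001, 0.0001])

# SGD with Nesterov momentum for TRAM, regular momentum for SOP.
tram_optimizer = tf.keras.optimizers.SGD(learning_rate=schedule,
                                         momentum=0.9, nesterov=True)
sop_optimizer = tf.keras.optimizers.SGD(learning_rate=schedule, momentum=0.9)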
J. Experimental details for TRAM+HET
TRAM+HET consists of a simple two-headed TRAM model in which the last linear layer of each of the two heads has been substituted by a heteroscedastic linear layer (Collier et al., 2021). In these experiments, we thus also sweep over the temperature of the heteroscedastic layers. A similar method was already proposed in Collier et al. (2022), under the name Het-TRAM, but here we also make use of our insights and allow the model to make use of random PI on top of the original PI features. Interestingly, contrary to what happened with TRAM+SOP, the addition of random PI, i.e., TRAM++, did not always yield performance improvements using TRAM+HET. Instead, depending on the dataset (see Table 8) we observe that the use of random PI can sometimes hurt the final performance of the models (e.g., as in CIFAR-10H). We conjecture this might be due to the TRAM+HET models using the random PI to memorize the clean labels as well. Understanding why this happens only when using heteroscedastic layers is an interesting avenue for future work.
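As a rough illustration of what such an output layer looks like, here is a simplified Monte Carlo sketch in the spirit of the heteroscedastic layer of Collier et al. (2021). The factor count of 15 matches the ImageNet setting above, while the sample count and the exact noise parameterization are our own simplifications; the real implementation differs in detail.

import tensorflow as tf

class HeteroscedasticHead(tf.keras.layers.Layer):
    # Simplified sketch of a heteroscedastic classification layer with a
    # low-rank-plus-diagonal noise model and a softmax temperature.
    def __init__(self, num_classes, num_factors=15, temperature=1.0, mc_samples=100):
        super().__init__()
        self.mu = tf.keras.layers.Dense(num_classes)               # mean logits
        self.v = tf.keras.layers.Dense(num_classes * num_factors)  # low-rank scale
        self.d = tf.keras.layers.Dense(num_classes)                # diagonal scale
        self.num_classes = num_classes
        self.num_factors = num_factors
        self.temperature = temperature
        self.mc_samples = mc_samples

    def call(self, h):
        batch = tf.shape(h)[0]
        mu = self.mu(h)                                            # [B, C]
        V = tf.reshape(self.v(h), [-1, self.num_classes, self.num_factors])
        d = self.d(h)
        eps_f = tf.random.normal([self.mc_samples, batch, self.num_factors, 1])
        eps_d = tf.random.normal([self.mc_samples, batch, self.num_classes])
        noise = tf.squeeze(V[None] @ eps_f, -1) + d[None] * eps_d  # [S, B, C]
        probs = tf.nn.softmax((mu[None] + noise) / self.temperature, axis=-1)
        return tf.reduce_mean(probs, axis=0)  # MC estimate of predictive probs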
Figure 1. Conceptual illustration of ideal signal propagation while training a privileged information method such as TRAM.
Figure 2. Dynamics of TRAM on CIFAR-10H with different PIs. Top left: Test accuracy. Top center: Train accuracy on mislabeled examples evaluated at the no-PI head. Top right: Train accuracy on mislabeled examples evaluated at the PI head. Bottom center: Train accuracy on clean examples evaluated at the no-PI head. Bottom right: Train accuracy on clean examples evaluated at the PI head.
Focusing on the training accuracies of the no-PI head in Figure 2 (central column), the best models are those that achieve the highest training accuracy on correct examples, while not overfitting to the mislabeled. The difference in test performance of indicator and original is explained by the original model having a harder time overfitting to the mislabeled examples. Interestingly, the original model memorizes mislabeled examples faster with the PI head than the indicator. It looks as though fitting the training examples fast with the PI head was discouraging the model from fitting the same examples with the no-PI head, i.e., the PI is enabling a learning shortcut to memorize the mislabeled examples with the original PI, without using x. This might be because the indicator signal only takes values in {0, 1}
Figure 5. The effect of the parameter β on the sampled temperatures.
Figure 6. Dynamics of TRAM on CIFAR-10N with different PI features. Top left: Test accuracy. Top center: Train accuracy on noisy examples evaluated at the no-PI head. Top right: Train accuracy on noisy examples evaluated at the PI head. Bottom center: Train accuracy on clean examples evaluated at the no-PI head. Bottom right: Train accuracy of clean examples evaluated at the PI head.
Figure 9. Dynamics of AFM on CIFAR-10N with different PI features. Top left: Test accuracy. Top center: Train accuracy on noisy examples evaluated with marginalization. Top right: Train accuracy on noisy examples evaluated at the PI head. Bottom center: Train accuracy on clean examples evaluated with marginalization. Bottom right: Train accuracy of clean examples evaluated at the PI head.
Figure 15. Performance of TRAM for different values of the loss weight λ on CIFAR-10N. The optimal λ is the one that strikes a good balance between fitting the clean examples while significantly slowing down the overfitting to the noisy ones.
Table 1. Test accuracy of several methods trained using different features as PI (baselines in gray and italics do not use PI). Here, Original denotes the standard PI of the dataset, Indicator a binary signal that separates clean from noisy examples, Labels the one-hot encoded labels, and Near-optimal a synthetic feature that gives the annotator label to those examples that are mis-annotated and a zero-vector otherwise. Bold numbers represent significant maximum values across PI features where significance means p-value < 0.05.

Dataset                   Method                Original   Indicator  Labels     Near-optimal
CIFAR-10H (worst)         no-PI                 55.0±1.5   55.0±1.5   55.0±1.5   55.0±1.5
                          Distillation (no-PI)  47.9±0.0   47.9±0.0   47.9±0.0   47.9±0.0
                          TRAM                  64.9±0.8   63.3±1.1   38.3±0.2   67.8±0.2
                          Approximate FM        64.0±0.6   66.7±2.1   29.5±0.5   74.4±0.1
                          Distillation (PI)     45.4±0.8   49.9±0.7   44.5±0.1   48.2±0.9
CIFAR-10N (worst)         no-PI                 80.6±0.2   80.6±0.2   80.6±0.2   80.6±0.2
                          Distillation (no-PI)  80.4±0.0   80.4±0.0   80.4±0.0   80.4±0.0
                          TRAM                  80.5±0.5   87.9±0.4   48.9±0.2   89.3±0.3
                          Approximate FM        82.0±0.3   91.2±0.3   22.6±0.2   92.0±0.1
                          Distillation (PI)     80.2±0.3   80.1±0.3   80.7±0.2   80.2±0.3
CIFAR-100N                no-PI                 60.4±0.5   60.4±0.5   60.4±0.5   60.4±0.5
                          Distillation (no-PI)  60.6±0.2   60.6±0.2   60.6±0.2   60.6±0.2
                          TRAM                  59.7±0.3   62.4±0.3   34.9±0.2   67.4±0.3
                          Approximate FM        60.0±0.2   66.4±0.2   20.1±0.3   70.2±0.1
                          Distillation (PI)     61.1±0.2   61.8±0.3   60.5±0.2   61.5±0.3
ImageNet-PI (high-noise)  no-PI                 47.7±0.8   47.7±0.8   47.7±0.8   47.7±0.8
                          Distillation (no-PI)  50.2±0.8   50.2±0.8   50.2±0.8   50.2±0.8
                          TRAM                  53.3±0.5   53.6±0.5   41.0±0.7   56.5±0.3
                          Approximate FM        55.6±0.3   55.3±0.6   0.8±0.2    58.3±0.1
                          Distillation (PI)     51.0±0.4   50.6±0.2   39.0±4.6   27.5±22.7
as proxy to identify mislabeled samples. Intuitively, the main assumption is that if the PI can properly identify the mislabeled examples, then it should act as expert knowledge that would discourage focusing on the hard mislabeled instances and, instead, promote learning only on the correct, easy ones (Vapnik & Vashist, 2009).
Figure 3. Performance of different PI baselines on CIFAR-10H when increasing the parameter count of their feature extractor while keeping the PI tower fixed. Larger models suffer from overfitting as they tend to use their larger capacity to overfit to mislabeled examples, discouraging the model from exploiting the PI.

Finding: Increasing the capacity of the PI tower encourages using the PI as a shortcut.

Figure 4. Performance of different PI approaches on CIFAR-10H when increasing the PI head size. A larger PI head size incentivizes the model to memorize the mislabeled examples using the PI, thus further exploiting PI as a shortcut.
Table 2. Performance comparison of no-PI, TRAM, TRAM++, SOP, HET, TRAM+SOP and TRAM+HET on the different PI datasets.

                          no-PI     TRAM      TRAM++    SOP       TRAM+SOP  HET       TRAM+HET
CIFAR-10H (worst)         55.0±1.5  64.9±0.8  66.8±0.3  59.2±0.8  70.9±0.5  50.8±1.4  67.7±0.7
CIFAR-10N (worst)         80.6±0.2  80.5±0.5  83.9±0.2  87.9±0.2  88.5±0.3  81.9±0.4  82.0±0.3
CIFAR-100N                60.4±0.5  59.7±0.3  61.1±0.2  65.3±0.3  66.1±0.2  60.8±0.4  62.1±0.1
ImageNet-PI (high-noise)  47.7±0.8  53.3±0.5  53.9±0.4  -         -         51.5±0.6  55.8±0.3
Song, H., Kim, M., Park, D., and Lee, J. Learning from noisy labels with deep neural networks: A survey. CoRR, abs/2007.08199, 2020.

Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., and Wojna, Z. Rethinking the inception architecture for computer vision. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.

Toneva, M., Sordoni, A., des Combes, R. T., Trischler, A., Bengio, Y., and Gordon, G. J. An empirical study of example forgetting during deep neural network learning. In International Conference on Learning Representations (ICLR), 2019.

Vapnik, V. and Izmailov, R. Learning using privileged information: similarity control and knowledge transfer. J. Mach. Learn. Res., 16:2023-2049, 2015.

Vapnik, V. and Vashist, A. A new learning paradigm: Learning using privileged information. Neural Networks, 22(5-6), 2009.

Veit, A., Alldrin, N., Chechik, G., Krasin, I., Gupta, A., and Belongie, S. J. Learning from noisy large-scale datasets with minimal supervision. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

Veitch, V., D'Amour, A., Yadlowsky, S., and Eisenstein, J. Counterfactual invariance to spurious correlations in text classification. In Advances in Neural Information Processing Systems (NeurIPS), 2021.

Wei, J., Zhu, Z., Cheng, H., Liu, T., Niu, G., and Liu, Y. Learning with noisy labels revisited: A study using real-world human annotations. In International Conference on Learning Representations (ICLR), 2022.

Yang, H., Zhou, J. T., Cai, J., and Ong, Y. MIML-FCN+: Multi-instance multi-label learning via fully convolutional networks with privileged information. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

Yang, S., Sanghavi, S., Rahmanian, H., Bakus, J., and Vishwanathan, S. Toward understanding privileged features distillation in learning-to-rank. In Advances in Neural Information Processing Systems (NeurIPS), 2022.

Zhang, C., Bengio, S., Hardt, M., Recht, B., and Vinyals, O. Understanding deep learning requires rethinking generalization. In International Conference on Learning Representations (ICLR), 2017.

Zhao, P., Yang, Y., and He, Q.-C. High-dimensional linear regression via implicit regularization. Biometrika, 109(4):1033-1046, 2022. doi: 10.1093/biomet/asac010. URL https://doi.org/10.1093/biomet/asac010.
# Get the predictive distribution of the model annotator.
pred_dist = model.predict(...)

# Sample the temperature.
temperature = tf.random.gamma(
    [tf.shape(pred_dist)[0]],
    alpha=tf.constant([1.]),
    beta=tf.constant([beta_parameter]))

# Compute the new predictive distribution.
log_probs = tf.math.log(pred_dist) / temperature
new_pred_dist = tf.nn.softmax(log_probs)

# Sample from the new predictive distribution.
class_predictions = tf.random.categorical(tf.math.log(new_pred_dist), 1)[:, 0]
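As a numerical side note of our own (not stated in the paper): with shape α = 1 the Gamma distribution reduces to an exponential with mean 1/β, so the release's β = 0.1 corresponds to a mean sampled temperature of 10 (high noise), while β = 0.5 corresponds to a mean of 2 (low noise). This can be checked directly:

import tensorflow as tf

t = tf.random.gamma([100000], alpha=tf.constant([1.0]), beta=tf.constant([0.1]))
print(float(tf.reduce_mean(t)))  # approximately 10 = 1 / beta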
Table 4. Test accuracy of several methods trained using different features as PI, without early stopping (baselines in gray and italics do not use PI). Here, Original denotes the standard PI of the dataset, Indicator a binary signal that separates clean from noisy examples, Labels the one-hot encoded labels, and Near-optimal a synthetic feature consisting of giving the annotator label to those examples that are mis-annotated and a zero-vector otherwise. Bold numbers represent significant maximum values across PI features where significance means p-value < 0.05.

Dataset                   Method                Original   Indicator  Labels     Near-optimal
CIFAR-10H (worst)         no-PI                 42.6±0.3   42.6±0.3   42.6±0.3   42.6±0.3
                          Distillation (no-PI)  45.2±0.1   45.2±0.1   45.2±0.1   45.2±0.1
                          TRAM                  59.2±0.2   46.5±0.6   39.9±0.3   77.4±0.1
                          Approximate FM        61.2±0.7   39.3±0.6   10.0±0.0   79.3±0.3
                          Distillation (PI)     45.2±0.0   46.3±0.1   45.7±0.1   46.4±0.1
CIFAR-10N (worst)         no-PI                 67.7±0.6   67.7±0.6   67.7±0.6   67.7±0.6
                          Distillation (no-PI)  71.4±0.2   71.4±0.2   71.4±0.2   71.4±0.2
                          TRAM                  67.0±0.4   79.2±0.9   51.6±0.2   91.9±0.2
                          Approximate FM        69.8±0.5   89.6±0.1   10.0±0.0   92.3±0.2
                          Distillation (PI)     71.1±0.2   73.1±0.3   70.9±0.3   70.9±0.2
CIFAR-100N                no-PI                 55.8±0.2   55.8±0.2   55.8±0.2   55.8±0.2
                          Distillation (no-PI)  58.6±0.1   58.6±0.1   58.6±0.1   58.6±0.1
                          TRAM                  56.4±0.3   56.0±0.4   34.9±0.2   67.5±0.3
                          Approximate FM        58.9±0.3   65.4±0.3   4.1±0.1    70.8±0.4
                          Distillation (PI)     58.9±0.2   58.9±0.3   56.8±0.2   60.4±0.4
ImageNet-PI (high-noise)  no-PI                 47.7±0.8   47.7±0.8   47.7±0.8   47.7±0.8
                          Distillation (no-PI)  50.4±0.8   50.4±0.8   50.4±0.8   50.4±0.8
                          TRAM                  53.3±0.4   53.5±0.4   41.0±0.8   56.5±0.3
                          Approximate FM        55.0±0.4   55.5±0.3   0.4±0.1    58.3±0.1
                          Distillation (PI)     50.9±0.4   50.6±0.2   39.1±5.1   18.0±24.7
Table 5. Test accuracy of several methods trained using different features as PI, using a clean validation set to select the best hyperparameters (baselines in gray and italics do not use PI). Here, Original denotes the standard PI of the dataset, Indicator a binary signal that separates clean from noisy examples, Labels the one-hot encoded labels, and Near-optimal a synthetic feature consisting of giving the annotator label to those examples that are mis-annotated and a zero-vector otherwise. Bold numbers represent significant maximum values across PI features where significance means p-value < 0.05.

Dataset                   Method                Original   Indicator  Labels     Near-optimal
CIFAR-10H (worst)         no-PI                 53.2±1.0   53.2±1.0   53.2±1.0   53.2±1.0
                          Distillation (no-PI)  53.4±1.0   53.4±1.0   53.4±1.0   53.4±1.0
                          TRAM                  67.7±0.1   64.9±0.6   39.7±0.3   77.4±0.1
                          Approximate FM        70.6±0.4   66.7±2.1   29.5±0.5   79.1±0.2
                          Distillation (PI)     53.9±0.4   53.3±0.6   53.4±0.4   51.6±0.0
CIFAR-10N (worst)         no-PI                 81.4±0.5   81.4±0.5   81.4±0.5   81.4±0.5
                          Distillation (no-PI)  82.9±0.4   82.9±0.4   82.9±0.4   82.9±0.4
                          TRAM                  81.9±0.3   89.1±0.3   51.6±0.1   91.1±0.1
                          Approximate FM        82.0±0.3   91.2±0.3   22.6±0.2   92.3±0.2
                          Distillation (PI)     80.8±0.3   80.7±0.5   81.1±0.4   80.8±0.2
CIFAR-100N                no-PI                 60.8±0.2   60.8±0.2   60.8±0.2   60.8±0.2
                          Distillation (no-PI)  60.8±0.1   60.8±0.1   60.8±0.1   60.8±0.1
                          TRAM                  60.6±0.3   63.3±0.2   34.8±0.4   67.3±0.3
                          Approximate FM        60.2±0.1   67.8±0.3   20.1±0.3   70.9±0.2
                          Distillation (PI)     61.1±0.2   61.9±0.2   60.5±0.2   61.5±0.3
ImageNet-PI (high-noise)  no-PI                 48.3±0.1   48.3±0.1   48.3±0.1   48.3±0.1
                          Distillation (no-PI)  50.5±0.7   50.5±0.7   50.5±0.7   50.5±0.7
                          TRAM                  53.3±0.3   53.8±0.7   40.7±0.8   56.5±0.2
                          Approximate FM        55.6±0.3   55.5±0.4   0.8±0.2    58.2±0.1
                          Distillation (PI)     51.0±0.4   50.7±0.3   39.1±4.4   27.6±22.7
Figure 7. Dynamics of TRAM on CIFAR-100N with different PI features. Top left: Test accuracy. Top center: Train accuracy on noisy examples evaluated at the no-PI head. Top right: Train accuracy on noisy examples evaluated at the PI head. Bottom center: Train accuracy on clean examples evaluated at the no-PI head. Bottom right: Train accuracy of clean examples evaluated at the PI head.

Figure 8. Dynamics of AFM on CIFAR-10H with different PI features. Top left: Test accuracy. Top center: Train accuracy on noisy examples evaluated with marginalization. Top right: Train accuracy on noisy examples evaluated at the PI head. Bottom center: Train accuracy on clean examples evaluated with marginalization. Bottom right: Train accuracy of clean examples evaluated at the PI head.
Figure 10. Dynamics of AFM on CIFAR-100N with different PI features. Top left: Test accuracy. Top center: Train accuracy on noisy examples evaluated with marginalization. Top right: Train accuracy on noisy examples evaluated at the PI head. Bottom center: Train accuracy on clean examples evaluated with marginalization. Bottom right: Train accuracy of clean examples evaluated at the PI head.

Figure 11. Performance of different PI baselines on CIFAR-10N when increasing the parameter count of their feature extractor while keeping the PI tower fixed. Larger models suffer from overfitting as they tend to use their larger capacity to overfit to noisy examples, discouraging the model from exploiting the PI.

Figure 12. Performance of different PI baselines on CIFAR-100N when increasing the parameter count of their feature extractor while keeping the PI tower fixed. Larger models suffer from overfitting as they tend to use their larger capacity to overfit to noisy examples, discouraging the model from exploiting the PI.

Figure 13. Performance of different PI baselines on CIFAR-10N when increasing the PI head size. A larger PI head size incentivizes the model to memorize the noisy examples using the PI, making more use of the PI as a shortcut.

Figure 14. Performance of different PI baselines on CIFAR-100N when increasing the PI head size. A larger PI head size incentivizes the model to memorize the noisy examples using the PI, making more use of the PI as a shortcut.
Table 6. Performance comparison of no-PI, AFM and AFM++ on the different PI datasets.

                          no-PI     AFM       AFM++
CIFAR-10H (worst)         55.0±1.5  64.0±0.6  68.2±0.6
CIFAR-10N (worst)         80.6±0.2  82.0±0.3  84.6±0.2
CIFAR-100N                60.4±0.5  60.0±0.2  61.9±0.2
ImageNet-PI (high-noise)  47.7±0.8  55.6±0.3  55.0±0.6
Table 7. Performance comparison of no-PI, TRAM, label smoothing (LS), and TRAM+LS on the different PI datasets.

                    no-PI     TRAM      LS         TRAM+LS
CIFAR-10H (worst)   55.0±1.5  64.9±0.8  59.9±1.5   65.4±0.9
CIFAR-10N (worst)   80.6±0.2  80.5±0.5  80.5±0.4   82.4±0.2
CIFAR-100N          60.4±0.5  59.7±0.3  60.0±0.46  61.9±0.3
Table 8. Performance comparison of TRAM, TRAM++, HET, TRAM+HET (without additional random PI), and TRAM+HET (with additional random PI) on the different PI datasets.

                          TRAM      TRAM++    HET       TRAM+HET (w/o random)  TRAM+HET (+random)
CIFAR-10H (worst)         64.9±0.8  66.8±0.3  50.8±1.4  67.7±0.7               56.5±0.7
CIFAR-10N (worst)         80.5±0.5  83.9±0.2  81.9±0.4  82.0±0.3               83.5±0.1
CIFAR-100N                59.7±0.3  61.1±0.2  60.8±0.4  62.1±0.1               61.2±0.3
ImageNet-PI (high-noise)  53.3±0.5  53.9±0.4  51.5±0.6  55.8±0.3               55.4±0.4
We present results for the high-noise settings in the main text. A reproduction of Table 1 with low noise can be found in Appendix C.

Near-optimal does not always outperform Original when using Distillation (PI), but note that in general the gains of Distillation (PI) over Distillation (no-PI) are much smaller than for TRAM and AFM. In this regard, we leave the objective of finding a near-optimal policy for Distillation (PI) as an open question for future work.

We do not provide results for ImageNet-PI as SOP cannot be easily scaled to such a large dataset.

https://www.tensorflow.org/api_docs/python/tf/keras/applications
AcknowledgementsWe thank Jannik Kossen for helpful comments on this work. We also thank Josip Djolonga and Joan Puigcerver for helpful discussions related to infrastructure and data processing.
Bai, Y., Yang, E., Han, B., Yang, Y., Li, J., Mao, Y., Niu, G., and Liu, T. Understanding and improving early stopping for learning with noisy labels. In Advances in Neural Information Processing Systems (NeurIPS), 2021.

Baldock, R. J. N., Maennel, H., and Neyshabur, B. Deep learning through the lens of example difficulty. In Advances in Neural Information Processing Systems (NeurIPS), 2021.

Cheng, H., Zhu, Z., Li, X., Gong, Y., Sun, X., and Liu, Y. Learning with instance-dependent label noise: A sample sieve approach. In International Conference on Learning Representations (ICLR), 2021.

Collier, M., Mustafa, B., Kokiopoulou, E., Jenatton, R., and Berent, J. Correlated input-dependent label noise in large-scale image classification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021.

Collier, M., Jenatton, R., Kokiopoulou, E., and Berent, J. Transfer and marginalize: Explaining away label noise with privileged information. In International Conference on Machine Learning (ICML), 2022.

D'Amour, A., Heller, K. A., Moldovan, D., Adlam, B., Alipanahi, B., Beutel, A., Chen, C., Deaton, J., Eisenstein, J., Hoffman, M. D., Hormozdiari, F., Houlsby, N., Hou, S., Jerfel, G., Karthikesalingam, A., Lucic, M., Ma, Y., McLean, C. Y., Mincu, D., Mitani, A., Montanari, A., Nado, Z., Natarajan, V., Nielson, C., Osborne, T. F., Raman, R., Ramasamy, K., Sayres, R., Schrouff, J., Seneviratne, M., Sequeira, S., Suresh, H., Veitch, V., Vladymyrov, M., Wang, X., Webster, K., Yadlowsky, S., Yun, T., Zhai, X., and Sculley, D. Underspecification presents challenges for credibility in modern machine learning. CoRR, abs/2011.03395, 2020. URL https://arxiv.org/abs/2011.03395.

Deng, J., Dong, W., Socher, R., Li, L. J., Kai, L., and Li, F. F. ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 248-255, 2009.

Geirhos, R., Jacobsen, J., Michaelis, C., Zemel, R. S., Brendel, W., Bethge, M., and Wichmann, F. A. Shortcut learning in deep neural networks. Nat. Mach. Intell., 2(11):665-673, 2020.

Han, B., Yao, Q., Yu, X., Niu, G., Xu, M., Hu, W., Tsang, I. W., and Sugiyama, M. Co-teaching: Robust training of deep neural networks with extremely noisy labels. In Advances in Neural Information Processing Systems (NeurIPS), 2018.

Hinton, G. E., Vinyals, O., and Dean, J. Distilling the knowledge in a neural network. CoRR, abs/1503.02531, 2015.

Kallus, N., Puli, A. M., and Shalit, U. Removing hidden confounding by experimental grounding. In Advances in Neural Information Processing Systems (NeurIPS), 2018.

Krizhevsky, A. Learning Multiple Layers of Features from Tiny Images. Technical report, University of Toronto, 2009.

Lambert, J., Sener, O., and Savarese, S. Deep learning under privileged information using heteroscedastic dropout. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.

Li, J., Socher, R., and Hoi, S. C. H. DivideMix: Learning with noisy labels as semi-supervised learning. In International Conference on Learning Representations (ICLR), 2020.

Liu, S., Niles-Weed, J., Razavian, N., and Fernandez-Granda, C. Early-learning regularization prevents memorization of noisy labels. In Advances in Neural Information Processing Systems (NeurIPS), 2020.

Liu, S., Zhu, Z., Qu, Q., and You, C. Robust training under label noise by over-parameterization. In International Conference on Machine Learning (ICML), 2022.

Lopez-Paz, D., Bottou, L., Schölkopf, B., and Vapnik, V. Unifying distillation and privileged information. In International Conference on Learning Representations (ICLR), 2016.

Maennel, H., Alabdulmohsin, I. M., Tolstikhin, I. O., Baldock, R. J. N., Bousquet, O., Gelly, S., and Keysers, D. What do neural networks learn when trained with random labels? In Advances in Neural Information Processing Systems (NeurIPS), 2020.

Makar, M., Packer, B., Moldovan, D., Blalock, D., Halpern, Y., and D'Amour, A. Causally motivated shortcut removal using auxiliary labels. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2022.

Minderer, M., Bachem, O., Houlsby, N., and Tschannen, M. Automatic shortcut removal for self-supervised representation learning. In International Conference on Machine Learning (ICML), 2020.

Nado, Z., Band, N., Collier, M., Djolonga, J., Dusenberry, M. W., Farquhar, S., Feng, Q., Filos, A., Havasi, M., Jenatton, R., et al. Uncertainty baselines: Benchmarks for uncertainty & robustness in deep learning. arXiv preprint arXiv:2106.04015, 2021.

Patrini, G., Rozza, A., Menon, A. K., Nock, R., and Qu, L. Making deep neural networks robust to label noise: A loss correction approach. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

Pearl, J. Causality: Models, Reasoning and Inference. Cambridge University Press, 2009.

Peterson, J. C., Battleday, R. M., Griffiths, T. L., and Russakovsky, O. Human uncertainty makes classification more robust. In IEEE/CVF International Conference on Computer Vision (ICCV), 2019.

Pezeshki, M., Kaba, S., Bengio, Y., Courville, A. C., Precup, D., and Lajoie, G. Gradient starvation: A learning proclivity in neural networks. In Advances in Neural Information Processing Systems (NeurIPS), 2021.

Prabhu, V., Yenamandra, S., Singh, A., and Hoffman, J. Adapting self-supervised vision transformers by probing attention-conditioned masking consistency. In Advances in Neural Information Processing Systems (NeurIPS), 2022.

Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning (ICML), pp. 8748-8763. PMLR, 2021.

Rolnick, D., Veit, A., Belongie, S., and Shavit, N. Deep learning is robust to massive label noise. arXiv preprint arXiv:1705.10694, 2017.

Sheng, V. S., Provost, F., and Ipeirotis, P. G. Get another label? Improving data quality and data mining using multiple, noisy labelers. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2008.

Snow, R., O'Connor, B., Jurafsky, D., and Ng, A. Cheap and fast - but is it good? Evaluating non-expert annotations for natural language tasks. In Conference on Empirical Methods in Natural Language Processing (EMNLP), 2008.
| [
"https://github.com/google/"
] |
[
"On the challenge of obtaining an accurate solvation energy estimate in simulations of electrocatalysis",
"On the challenge of obtaining an accurate solvation energy estimate in simulations of electrocatalysis"
] | [
"Björn Kirchhoff \nInstitute of Electrochemistry\nUlm University\nAlbert-Einstein-Allee 4789081UlmGermany\n\nScience Institute and Faculty of Physical Sciences\nUniversity of Iceland\nHjarðarhagi 2107ReykjavíkVR-IIIIceland\n",
"Elvar Ö Jónsson \nScience Institute and Faculty of Physical Sciences\nUniversity of Iceland\nHjarðarhagi 2107ReykjavíkVR-IIIIceland\n",
"Timo Jacob \nInstitute of Electrochemistry\nUlm University\nAlbert-Einstein-Allee 4789081UlmGermany\n\nHelmholtz-Institute Ulm (HIU) Electrochemical Energy Storage\nHelmholtz-Straße 1689081UlmGermany\n\nKarlsruhe Institute of Technology (KIT)\nP.O. Box 364076021KarlsruheGermany\n",
"Hannes Jónsson \nScience Institute and Faculty of Physical Sciences\nUniversity of Iceland\nHjarðarhagi 2107ReykjavíkVR-IIIIceland\n"
] | [
"Institute of Electrochemistry\nUlm University\nAlbert-Einstein-Allee 4789081UlmGermany",
"Science Institute and Faculty of Physical Sciences\nUniversity of Iceland\nHjarðarhagi 2107ReykjavíkVR-IIIIceland",
"Science Institute and Faculty of Physical Sciences\nUniversity of Iceland\nHjarðarhagi 2107ReykjavíkVR-IIIIceland",
"Institute of Electrochemistry\nUlm University\nAlbert-Einstein-Allee 4789081UlmGermany",
"Helmholtz-Institute Ulm (HIU) Electrochemical Energy Storage\nHelmholtz-Straße 1689081UlmGermany",
"Karlsruhe Institute of Technology (KIT)\nP.O. Box 364076021KarlsruheGermany",
"Science Institute and Faculty of Physical Sciences\nUniversity of Iceland\nHjarðarhagi 2107ReykjavíkVR-IIIIceland"
] | [] | The effect of solvent on the free energy of reaction intermediates adsorbed on electrocatalyst surfaces can significantly change the thermochemical overpotential, but accurate calculations of this are challenging. Here, we present computational estimates of the solvation energy for reaction intermediates in the oxygen reduction reaction (ORR) on a B-doped graphene (BG) model system where the overpotential is found to reduce by up to 0.6 V due to solvation. BG is experimentally reported to be an active ORR catalyst but recent computational estimates using state-of-the-art hybrid density functionals in the absence of solvation effects have indicated low activity. To test whether the inclusion of explicit solvation can bring the calculated activity estimates closer to the experimental reports, up to 4 layers of water molecules are included in the simulations reported here. The calculations are based on classical molecular dynamics and local minimization of energy using atomic forces evaluated from electron density functional theory. Data sets are obtained from regular and coarse-grained dynamics, as well as local minimization of structures resampled from dynamics simulations. The results differ greatly depending on the method used, and the solvation energy estimates are deemed untrustworthy. It is concluded that a significantly larger number of water molecules is required to obtain converged results for the solvation energy. As the present system includes up to 139 atoms, it already strains the limits of computational feasibility, so this points to the need for a hybrid simulation approach where efficient simulations of a much larger number of solvent molecules are carried out using a lower level of theory while retaining the higher level of theory for the reacting molecules as well as their near neighbors and the catalyst. The results reported here provide a word of caution to the computational catalysis community: activity predictions can be inaccurate if too few solvent molecules are included in the calculations. | 10.1007/s11244-023-01829-0 | [
"https://export.arxiv.org/pdf/2303.02092v1.pdf"
] | 257,353,436 | 2303.02092 | 2399fe3b401b7ed7d37d2783d35341763a50b7b1 |
On the challenge of obtaining an accurate solvation energy estimate in simulations of electrocatalysis
3 Mar 2023
Björn Kirchhoff
Institute of Electrochemistry
Ulm University
Albert-Einstein-Allee 4789081UlmGermany
Science Institute and Faculty of Physical Sciences
University of Iceland
Hjarðarhagi 2107ReykjavíkVR-IIIIceland
Elvar Ö Jónsson
Science Institute and Faculty of Physical Sciences
University of Iceland
Hjarðarhagi 2107ReykjavíkVR-IIIIceland
Timo Jacob
Institute of Electrochemistry
Ulm University
Albert-Einstein-Allee 4789081UlmGermany
Helmholtz-Institute Ulm (HIU) Electrochemical Energy Storage
Helmholtz-Straße 1689081UlmGermany
Karlsruhe Institute of Technology (KIT)
P.O. Box 364076021KarlsruheGermany
Hannes Jónsson
Science Institute and Faculty of Physical Sciences
University of Iceland
Hjarðarhagi 2107ReykjavíkVR-IIIIceland
On the challenge of obtaining an accurate solvation energy estimate in simulations of electrocatalysis
3 Mar 2023
*Corresponding author(s). E-mail(s):
[email protected];
Abstract
The effect of solvent on the free energy of reaction intermediates adsorbed on electrocatalyst surfaces can significantly change the thermochemical overpotential, but accurate calculations of this are challenging. Here, we present computational estimates of the solvation energy for reaction intermediates in the oxygen reduction reaction (ORR) on a B-doped graphene (BG) model system where the overpotential is found to reduce by up to 0.6 V due to solvation. BG is experimentally reported to be an active ORR catalyst but recent computational estimates using state-of-the-art hybrid density functionals in the absence of solvation effects have indicated low activity. To test whether the inclusion of explicit solvation can bring the calculated activity estimates closer to the experimental reports, up to 4 layers of water molecules are included in the simulations reported here. The calculations are based on classical molecular dynamics and local minimization of energy using atomic forces evaluated from electron density functional theory. Data sets are obtained from regular and coarse-grained dynamics, as well as local minimization of structures resampled from dynamics simulations. The results differ greatly depending on the method used, and the solvation energy estimates are deemed untrustworthy. It is concluded that a significantly larger number of water molecules is required to obtain converged results for the solvation energy. As the present system includes up to 139 atoms, it already strains the limits of computational feasibility, so this points to the need for a hybrid simulation approach where efficient simulations of a much larger number of solvent molecules are carried out using a lower level of theory while retaining the higher level of theory for the reacting molecules as well as their near neighbors and the catalyst. The results reported here provide a word of caution to the computational catalysis community: activity predictions can be inaccurate if too few solvent molecules are included in the calculations.
Keywords: Solvation, Electrochemistry, Oxygen Reduction Reaction, Doped Graphene
Introduction
The replacement of costly and rare precious metals with cheaper and more abundant elements in catalysts, for example in the oxygen reduction reaction (ORR) in fuel cells, is an important milestone towards sustainable energy production. To this end, heteroatom-doped graphenes have been explored extensively. [1-3] Computational predictions of the ORR activity of BG have overall been promising. The free energy approach using the computational hydrogen electrode (CHE) [13] is typically used to evaluate the ORR activity of computational models. Since the estimate of an overpotential obtained by this approach only reflects the thermodynamic free energy of intermediates as well as initial and final states, it will be referred to as the thermochemical overpotential, η TCM, in the following.
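To make the CHE bookkeeping concrete, the following minimal Python sketch computes η TCM from the adsorption free energies of the three ORR intermediates for the associative 4 e− pathway. The numerical free energy values are illustrative placeholders and are not results from this work.

```python
# Minimal sketch of the computational hydrogen electrode (CHE) approach
# for the associative 4-electron ORR pathway. Free energies in eV at
# U = 0 V vs. SHE; the values below are illustrative placeholders only.
dG_OOH, dG_O, dG_OH = 4.2, 1.6, 0.8   # adsorption free energies

# Free energy change of each proton-electron transfer step at U = 0 V.
steps = [
    dG_OOH - 4.92,   # O2 + * + (H+ + e-) -> *OOH
    dG_O - dG_OOH,   # *OOH + (H+ + e-) -> *O + H2O
    dG_OH - dG_O,    # *O + (H+ + e-) -> *OH
    -dG_OH,          # *OH + (H+ + e-) -> H2O + *
]

# Each step transfers one (H+ + e-) pair, so dG_i(U) = dG_i(0) + e*U.
# The onset potential is limited by the least exergonic step.
U_onset = -max(steps)          # in V
eta_TCM = 1.23 - U_onset       # thermochemical overpotential in V
print(f"U_onset = {U_onset:.2f} V, eta_TCM = {eta_TCM:.2f} V")
```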
Jiao and co-workers predict an η TCM range of 0.4-0.6 V for both BG and NG based on calculations using the B3LYP functional and molecular flake model systems, in good agreement with their experimental measurements. [12] A similar value, 0.38 V, is reported by Wang et al. for a BG nanoribbon [14] using the PBE functional and DFT-D3 [15,16] dispersion correction. The most optimistic prediction is reported by Fazio and co-workers with an η TCM of 0.29 V in a B3LYP-based study of a BG flake model system. [17] For reference, the measured overpotential of a typical Pt/C electrocatalyst is 0.3-0.4 V. [18] The experimental overpotential, however, depends on many other factors besides the adsorption strength of the ORR intermediates, hence η TCM values are only a rough and purely thermodynamic estimate of the actual overpotential.
The exact mechanism of the ORR on BG is a matter of ongoing investigation. Fazio and co-workers established that the associative 4 e− pathway should be dominant for BG from a theoretical perspective. [17] Jiao et al. [12] find that a top adsorption geometry should be favored for the critical *O intermediate on BG, while other studies [14,17,19] typically find a B-C bridge site to be favored for *O adsorption. It can be summarized that the active site debate for the ORR mechanism on BG is not settled yet.
Furthermore, the stabilization of the ORR intermediates on BG by water molecules, which has been found to be critical to a correct energetic description of the ORR on NG, [20-23] has only been considered by one group so far to the best of the authors' knowledge. Fazio et al. used a cluster of 6 water molecules in contact with a molecular flake model representing BG to estimate the effects of solvation. [17] The group found that while the stability of the *O intermediate is barely affected by solvation, the *OH and *OOH intermediates are stabilized by -0.37 eV and -0.46 eV, respectively. The low predicted η TCM of 0.29 V vs. SHE in this study results in part from the stabilizing effect of solvation. In the study by Jiao et al., [12] solvation effects are estimated using implicit solvation models. [24] However, implicit solvation models have in some cases been shown to fail at reproducing experimental solvation energy measurements or solvation energy results from simulations using many explicit solvent molecules. [25-28] We recently presented results for the ORR on NG where it was shown that high-level DFT calculations based on hybrid functionals yield an η TCM estimate close to 1.0 V vs. SHE, [29] indicating catalytic inactivity.
Calculation of the confidence interval for average ensemble properties
The confidence interval (CI) is a useful statistical measure for the error bar of an average result sampled from a normal distribution of values. It is therefore also useful for estimating the error bar of ensemble averages sampled through molecular dynamics integration; see Grossfield et al. [31] for more details. The CI defines an interval in which the true ensemble average lies with a certain probability. Here, a 95 % probability threshold is used to define the error bars, i.e., the 95 % CI.
The two-sided CI $\langle x \rangle$ of a variable $x$ is defined as

$$\langle x \rangle = \bar{x} \pm U, \qquad (5)$$

where $\bar{x}$ is the ensemble average and $U$ is the expanded uncertainty. The expanded uncertainty is defined as

$$U = k\, s(\bar{x}), \qquad (6)$$

where $k$ is the coverage factor and $s(\bar{x})$ is the experimental standard deviation of the mean. $s(\bar{x})$ is defined as

$$s(\bar{x}) = \frac{s(x)}{\sqrt{n}}, \qquad (7)$$

where $s(x)$ is the experimental standard deviation

$$s(x) = \sqrt{\frac{\sum_{j=1}^{n} (x_j - \bar{x})^2}{n-1}} \qquad (8)$$

with the sample values $x_j$, the arithmetic mean $\bar{x}$ of the ensemble property, and the number of independent samples $n$.
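As a concrete illustration, the following Python sketch evaluates equations (5)-(8) for a series of sampled energies. The coverage factor k ≈ 1.96 for a 95 % CI in the large-sample limit is an assumption made here for the example; the exact k used in this work is not restated in this section.

```python
import numpy as np

def confidence_interval_95(samples):
    """Two-sided 95% CI of the mean, following equations (5)-(8).

    Assumes the large-sample coverage factor k = 1.96; for very few
    samples a Student-t factor would be more appropriate.
    """
    x = np.asarray(samples, dtype=float)
    n = len(x)
    x_bar = x.mean()                                    # ensemble average
    s_x = np.sqrt(((x - x_bar) ** 2).sum() / (n - 1))   # eq. (8)
    s_mean = s_x / np.sqrt(n)                           # eq. (7)
    U = 1.96 * s_mean                                   # eq. (6), k = 1.96
    return x_bar, U                                     # <x> = x_bar +/- U, eq. (5)

# Example: total energies (eV) sampled every 1 ps from an MD trajectory.
energies = [-512.31, -512.28, -512.35, -512.30, -512.33]
mean, U = confidence_interval_95(energies)
print(f"<E> = {mean:.3f} +/- {U:.3f} eV (95% CI)")
```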
Figure 1 shows a representative illustration of the BG sheet model with an *O adatom in contact with 4 layers of water molecules; illustrations of sheet models with *OH and *OOH admolecules as well as models in contact with 1-3 layers of water are shown in figures S1 and S2, respectively. In agreement with studies by Fazio et al., [17] Ferrighi et al., [19] and Wang et al., [14] but in disagreement with the study by Jiao et al., [12] we find adsorption of the *O intermediate on the C-B bridge position to be energetically most favorable. The *OH and *OOH adspecies are found to adsorb most favorably on the B top position, which is in agreement with all previously mentioned studies.
The 32-atomic BG model system is converged with respect to the adsorption energy of the ORR intermediates *O, *OH, and *OOH; see figure S4. This model therefore allows for the study of the adsorption energy, and the influence of solvation thereon, for a dilute system where the electronic effects of both the dopant atom and the adspecies are isolated and crowding effects can be ruled out.
Simulation parameters
The obtained data sets, including input files with simulation parameters, are distributed alongside this article and are available under DOI:10.5281/zenodo.7684918.
Choice of DFT code and functional
All simulations were performed with the VASP software version 6.2.0. [32-35] All calculations used the RPBE density functional [36] with DFT-D3 dispersion correction. [15,16] The RPBE-D3 method has been shown to yield water configurations in good agreement with experiments and higher-level methods at comparatively low computational cost. [37] Previous work on NG showed that adsorption energy values for the ORR intermediates can be wrong by up to 0.4 eV compared to the best estimate provided by the HSE06 hybrid functional, which was found to give the lowest error of 5 % compared to a diffusion Monte Carlo benchmark calculation. [29] Similar results were obtained for BG, [30] see table S1, where η TCM with the HSE06 functional was ca. 1.0 V vs. SHE and GGA functionals underestimated this best-estimate value by up to 0.6 V. Figure S3 shows the free energy trends for the ORR on BG obtained with various density functionals. However, our previous work also showed that ∆∆E solv does not share the same strong dependency on the functional. [29] This realization enables the present study, since first-principles molecular dynamics (FPMD) simulations as long as those required for this work are currently not computationally feasible using hybrid functionals.
Static DFT calculations
Static calculations constitute single-point electronic energy calculations as well as minimization of the total energy with respect to the atomic coordinates.
Wave functions were self-consistently optimized until the energy in subsequent iterations changed by less than 10⁻⁶ eV. The Brillouin zone was sampled using Monkhorst-Pack k point grids. [38] A k point density larger than 2×2×1 was found to give converged results for ∆∆E solv; see figure S5. Due to the wide variety of structures calculated in this work, refer to the data set distributed alongside this article to see the chosen k point density for each subset of calculations.
Simulations were carried out using a plane wave basis set with an energy cutoff of 600 eV to represent valence electrons and the projector-augmented wave (PAW) method [39,40] was used to account for the effect of inner electrons. See figure S6 for a convergence study for the PAW energy cutoff.
Gaussian-type finite temperature smearing was used to speed up convergence. The smearing width was chosen so that the electronic entropy was smaller than 1 meV in all cases. Real-space evaluation of the projection operators was used to speed up calculations of larger systems, using a precision of 10⁻³ eV atom⁻¹. Atomic coordinates were optimized until forces fell below 10⁻² eV Å⁻¹.
The L-BFGS (limited-memory Broyden-Fletcher-Goldfarb-Shanno) optimizer from the VASP Transition State Tools (VTST) software package was used to minimize the forces with respect to the atomic coordinates. The periodic images are separated by 14 Å of vacuum and a dipole correction is applied perpendicular to the slab.
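For readers who want to reproduce these settings, the following ASE snippet sketches a calculator carrying the parameters listed above. The authoritative input files are in the published data set, so the tag values here should be read as illustrative rather than definitive.

```python
from ase.calculators.vasp import Vasp

# Illustrative VASP settings mirroring the static calculations described
# above; consult the published data set (DOI:10.5281/zenodo.7684918) for
# the authoritative input files of each subset of calculations.
calc = Vasp(
    xc='rpbe',          # RPBE functional
    ivdw=11,            # DFT-D3 dispersion correction
    encut=600,          # PAW energy cutoff in eV
    ediff=1e-6,         # SCF convergence: < 1e-6 eV between iterations
    ediffg=-0.01,       # relax until forces < 1e-2 eV/A
    kpts=(4, 4, 1),     # Monkhorst-Pack grid; density > 2x2x1 is converged
    ismear=0,           # Gaussian smearing
    sigma=0.05,         # smearing width; assumed small enough for < 1 meV entropy
    lreal='Auto',       # real-space projection for larger systems
    idipol=3,           # dipole correction perpendicular to the slab
    ldipol=True,
)
```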
Classical molecular dynamics simulations
Classical molecular dynamics (MD) simulations were carried out in an NVT ensemble at 300 K using the Langevin dynamics [41] implemented in VASP. The simulations used parameters similar to those outlined in section 3.3.1, but with a lower PAW energy cutoff of 400 eV and a 3×3×1 Monkhorst-Pack k point grid for computational efficiency. A Langevin friction parameter of γ = 4.91 was used throughout all simulations.
Dynamics were initially run until the total energy and temperature were converged. This equilibration period is not considered in the evaluation and was optimized on a case-by-case basis. After equilibration had been achieved, the actual sampling was performed over a period of time. In all simulations the geometry of the graphene sheet and the adspecies were constrained to the geometry obtained from a one-shot geometry optimization of the system in contact with n = 1-4 water layers, respectively. Only the water molecules were allowed to move during simulations. The E_tot vs. t and T vs. t trends for all simulations are shown in the online SI. Two data sets were generated:
1. First, simulations were performed without any constraints on the water molecules and with a time step of 0.1 fs. Simulations were continued up to a total simulation time of 10 ps after thermalization. This set of MD simulations will be referred to as the flexible MD data set going forward.
2. Second, simulations were repeated after placing a Rattle-type bond length constraint [42] on the O-H and H-H bonds to keep the geometry of water molecules rigid throughout simulations, thus enabling a coarse-grained time step of 1.0 fs. Simulations were continued up to a total simulation time of 100 ps after thermalization. This set of MD simulations will be referred to as the constrained MD data set going forward.
To obtain ∆∆E solv, configurations were sampled every 1 ps, yielding 10 samples for the flexible MD data set and 100 samples for the constrained MD data set. This choice of sampling frequency is informed by the correlation time of water. The correlation time is the time it takes for complete re-orientation of the water arrangement, thus yielding a new, independent sample configuration that is statistically significant. It was found to be ca. 1.7 ps for water at room temperature using nuclear magnetic resonance spectroscopy. [43] The chosen sampling rate of 1 ps is smaller than this value as a result of the significant computational effort of performing long dynamics simulations. To minimize the risk of oversampling, Langevin dynamics was chosen to describe coupling to a heat bath. Langevin dynamics introduces a stochastic component to the propagation which can help to diversify configurations more quickly compared to fully deterministic dynamics.
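A minimal ASE-based sketch of this setup is shown below. The file name, the frozen-atom count, and the friction value are placeholders, since the production runs used the Langevin integrator built into VASP with γ = 4.91.

```python
from ase import units
from ase.io import read
from ase.md.langevin import Langevin
from ase.constraints import FixAtoms, FixBondLengths

# Placeholder input file: BG sheet + adspecies + water overlayer.
atoms = read('bg_ooh_4layers.traj')

# Freeze the sheet and adspecies (assumed here to be the first 34 atoms);
# only the water molecules are propagated, as in the production runs.
constraints = [FixAtoms(indices=list(range(34)))]

# For the coarse-grained (constrained MD) runs, RATTLE-type constraints
# keep every water molecule rigid: one (O,H1), (O,H2), (H1,H2) triple of
# fixed distances per molecule, e.g.
#   constraints.append(FixBondLengths(water_pairs))
atoms.set_constraint(constraints)

# NVT Langevin dynamics at 300 K; the friction value is illustrative,
# not the gamma = 4.91 parameter used in the VASP runs.
dyn = Langevin(atoms, timestep=0.1 * units.fs,
               temperature_K=300, friction=0.01)
dyn.run(100_000)  # 100,000 steps of 0.1 fs = 10 ps
```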
Results
One-shot minimization of atomic coordinates
The first data set is generated by bringing the BG model system with *O, *OH, and *OOH adspecies into contact with 4-32 molecules of water and minimizing the resulting configurations with respect to the atomic forces. This data set will be referred to as the one-shot minimization data set going forward. The chosen water configurations are modeled after those used by Reda et al. to calculate the solvation stabilization energy for the ORR intermediates on NG sheet model systems. [23] Configurations were created so that water molecules are only on one side of the BG sheet model or on both sides, denoted with the † and ‡ symbols, respectively, in table 2 and figure 2.
The ∆∆E solv results obtained from the one-shot minimization data set give rise to several trends. First, when water molecules are placed only on one side of the model, ∆∆E solv for the *O intermediate does not appear to be converged within the tested series of models, as ∆∆E solv still increases from -0.020 eV to -0.06 eV from 24 to 32 molecules. Values can be deemed converged if changes are below ca. 0.05 eV or 1 kcal mol⁻¹, i.e., chemical accuracy. Second, the results for simulations where molecules are placed only on the side of the sheet model with the adatom (†) are inconsistent with simulations where molecules are placed on both sides of the model (‡). For example, deviations of < 0.05 eV are found between simulations where 16 molecules are placed on the side of the adatom and 0, 8, and 16 molecules are placed on the other side. This result would potentially indicate that water molecules on the opposite side of where the adspecies is located have negligible influence and can be omitted. However, the deviation between ∆∆E solv values where 8 molecules are placed on the side with the adspecies and 0 or 8 molecules are placed on the other side is 0.19 eV. Similarly, the deviation between ∆∆E solv values where 24 molecules are placed on the side with *O and 0 or 8 molecules are placed on the other side is 0.16 eV. Results from the one-shot minimization data set are therefore inconsistent. From this data, it is unclear if and when ∆∆E solv will converge as a function of the number of added water molecules, and it cannot be assessed with confidence whether water molecules do or do not need to be present on the side of the sheet opposite of the adspecies. One potential reason for the inconsistent behavior lies in the one-shot nature of the data set: water molecule arrangements are flexible and form a complex energy landscape where minimization algorithms can easily become stuck in local minimum configurations. This limitation can be overcome by rigorous sampling of the configurational space by MD integration.
NVT simulations
In order to probe if insufficient sampling of the configurational space is responsible for the inconsistent results of the one-shot minimization data set, ∆∆E solv is subsequently determined as an ensemble average by performing MD simulations for a total of 10 ps using a time step of 0.1 fs. No constraint was placed on the O-H and H-H bonds of water molecules. This set of simulations is referred to as the flexible MD data set. Due to the significant computational effort of these simulations, only water configurations where water molecules are placed on the side of the adspecies are considered. Simulations are performed for the clean BG sheet model, for the BG sheet with an *O adatom in contact with 8-32 molecules, and for the *OH and *OOH adspecies in contact with 8-24 molecules of water. Figure 3a visualizes the ∆∆E solv results calculated from this data set.

Focusing on the *O intermediate (blue curve), a similar trend of ∆∆E solv vs. the number of water molecules emerges as before from the one-shot minimization data set: the values oscillate and there is an increase of ∆∆E solv from -0.3 eV to 0.2 eV from 24 to 32 molecules, indicating significant destabilization of this adspecies. In general, the differences between subsequent data points are found to be larger than in the case of the one-shot minimization data set. It can be summarized that the flexible MD data set did not yield more consistent ∆∆E solv results than the one-shot minimization data set. While a similar overall ∆∆E solv trend is observed for the *O adspecies, differences between subsequent data points are larger than in the case of the one-shot minimization data set.

Another important observation is the significant size of the error bars, which extend from 0.25 eV up to over 0.5 eV in some cases. Note that in the case of the *O intermediate, the error bar span becomes larger as a function of the number of water molecules. This effect is much less pronounced, if at all, for the *OH and *OOH intermediates. However, it is clear from the size of the error bars that the length of simulation time is too short compared to the correlation time of water, and thus the simulations only yielded 10 independent samples that entered into the evaluation. In an effort to extend the simulation time, a coarse-graining approach was chosen where the O-H and H-H bond lengths of water molecules were constrained to the average corresponding bond lengths obtained in the flexible MD data set. This bond length constraint allows for larger simulation time steps to be taken without the risk of spurious discretization errors from inadequate sampling of the fast O-H vibrations. A subsequent set of dynamics simulations of the same model systems thus used a time step of 1.0 fs and was continued for a total of 100 ps simulation time, yielding 100 independent samples. ∆∆E solv results from this constrained MD data set are visualized in figure 3b.

∆∆E solv trends from the constrained MD data set, while also showing no signs of converging behavior, differ significantly from the flexible MD and one-shot minimization data sets. The obtained ∆∆E solv values for the *O adspecies do not oscillate as in the case of the other data sets but continuously increase with increasing number of water molecules. From this data set, the presence of 24 and 32 water molecules is predicted to significantly destabilize this intermediate. With ca. 0.25 eV, the data point for 32 water molecules from this data set is similar to the flexible MD data set; however, this data set does not show the reduction of ∆∆E solv at 24 molecules that was observed for both the flexible MD and the one-shot minimization data sets. The *OH and *OOH adspecies show similar ∆∆E solv trends that parallel each other in this data set; however, values oscillate by up to 0.5 eV when the number of water molecules is increased. Finally, the factor 10 longer simulation time affects the size of the error bars, which is now on the scale of ca. 0.1 eV.

Similar to results from the flexible MD data set, the error bars for ∆∆E solv of the *O adspecies are found to increase with increasing number of water molecules in the simulation, while no such trend is observed for the *OH and *OOH intermediates. Finally, the local structure of the water molecules around the adspecies is analyzed using the z distribution function, g(z); see figure S7. The g(z) distributions are obtained by calculating distances between the O atoms of water molecules and an x-y plane within the BG sheet model. The g(z) show distinct bands for the first and second solvation layer. The bands for 3 and 4 layers are significantly more broadened, indicating that the surface-adjacent double layer is more strongly coordinated compared to subsequent layers. Notably, shoulders at the first band are visible in the g(z) from the flexible MD data set which are not visible in the constrained MD data set. However, this result is presented with the caveat that the data is more noisy compared to the smoother constrained-molecule g(z) results due to the 10x smaller sampling statistics.

This result potentially indicates that the bond length constraint affects the coordination fine structure around the adspecies and thus may help to explain the differences between the flexible MD and constrained MD data sets. However, more detailed investigation is required to validate the importance of this observed difference.

It can be summarized that coarse-grained MD simulations yielded a data set that is significantly different from the more similar-to-each-other flexible MD and one-shot minimization data sets but did not yield more consistent ∆∆E solv results.
Re-sampling and energy minimization
The flexible MD and constrained MD data sets did not yield converged ∆∆E solv results. There are, however, two technical limitations which may reduce the significance of these data sets:
1. For these data sets, ∆∆E solv is calculated by using the average total energy from an NVT ensemble (T = 300 K) for the energy terms labeled "with solvent" in equation (4). The energy terms labeled "without solvent" are obtained from energy minimization calculations of the systems without solvent which are technically at 0 K temperature. While the BG sheet model and adspecies were kept frozen in the atomic configuration from a 0 K energy minimization during the MD and only water molecules were allowed to move, it cannot be fully excluded that results are biased due to a mismatch between the averaged finite-temperature MD values on one side and the locally optimized, 0 K values on the other side of the equation.
2. As outlined in section 3, the MD simulations, as well as the corresponding reference simulations of the systems "without solvent" needed for equation (4), used a reduced PAW energy cutoff value of 400 eV to enable longer simulation times. This value is technically not converged for adsorption energy calculations; see figure S6.
In order to address both of these limitations, a fourth data set is produced. To this end, 20 structures are randomly sampled from each flexible MD trajectory and subsequently energy-minimized using the settings presented in section 3, i.e., with a larger PAW energy cutoff of 600 eV. This way, the diversity of the MD-generated configurations is maintained, but all values entering equation (4) are obtained from energy-minimized atomic configurations using safe accuracy settings. This data set will be referred to as the resampled data set going forward. Figure 4 visualizes the ∆∆E solv results obtained from this data set.
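The resampling workflow can be sketched in a few lines of ASE-based Python. The trajectory file name is a placeholder, and the subsequent minimization would use the tight settings from section 3.

```python
import random
from ase.io import read, write

# Randomly draw 20 frames from a flexible-MD trajectory (placeholder
# file name) and write them out for subsequent energy minimization
# with the tight settings (600 eV PAW cutoff) described in the text.
random.seed(0)  # reproducible resampling
frames = read('bg_ooh_flexible_md.traj', index=':')
picks = random.sample(range(len(frames)), k=20)

for i, idx in enumerate(sorted(picks)):
    write(f'resampled_{i:02d}.vasp', frames[idx])  # POSCAR-format input
```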
The resampled data set shares similarities with the flexible MD and one-shot minimization data sets.

Comparison of the results from different data sets

Figure 5 shows a side-by-side comparison of ∆∆E solv as a function of the number of water molecules for the *O, *OH, and *OOH adspecies from the four obtained data sets. The resampled data set is the most significant data set among those obtained in this work, as it combines the broad configurational diversification of the MD simulations with the methodological consistency of calculating ∆∆E solv using strict accuracy parameters and exclusively on the basis of energy-minimized structures. By comparing the data sets with each other and with the resampled data set in particular, several important aspects can be highlighted.

First, convergence of ∆∆E solv, i.e. changes of < 0.05 eV between subsequent data points, is not observed in any case. It is impossible at this point to give a confident estimate of ∆∆E solv for the tested adspecies on the BG sheet model. This result indicates that more than 32 molecules (4 layers) of water are likely necessary to obtain converged results.
Converging the ∆∆E solv value to changes within chemical accuracy is of crucial importance. For example, consider the potential-dependent free energy trends for the ORR on the BG model presented in figure S3. These trends were obtained according to the free energy approach using the computational hydrogen electrode. [13] Using the most reliable functional for adsorption energy calculations on this material class according to benchmarks, [29,44,45] the HSE06 hybrid functional, the potential-determining step is the formation of the *OOH intermediate by a significant margin. The extrapolated thermochemical overpotential, η TCM, for the ORR on the present BG model is ca. 1.0 V vs. SHE. Stabilization of the *OOH intermediate by roughly -0.4 eV (8 water molecules), -0.6 eV (16 water molecules), or -0.2 eV (24 water molecules) will therefore proportionally reduce η TCM to 0.6 V, 0.4 V, and 0.8 V vs. SHE, respectively. Depending on the number of included water molecules, one can predict a mostly inactive (η TCM = 0.8 V, 24 molecules) or moderately active (η TCM = 0.4 V, 16 molecules) ORR electrocatalyst. The overpotential of a typical reference Pt/C electrocatalyst is 0.3-0.4 V. [18] Therefore, ∆∆E solv must be converged within the limits of chemical accuracy before any trustworthy prediction can be made.

Second, there appears to be no obvious systematicity to whether trends from the different data sets agree with each other or not. For example, values from different data sets for the *OOH intermediate are in reasonable agreement and show similar overall trends. In the case of the *O adatom, there is some correlation between trends from correlated data sets (in particular the flexible MD data set and the resampled data set, which was generated from the former) and only the constrained MD data set behaves significantly differently. In the case of *OH, however, there appear to be no shared trends between results from any of the data sets. Further research is needed to analyze why there is reasonable agreement in some cases and no agreement in other cases.

Third, the error bars in all cases are significantly larger than chemical accuracy (± 0.05 eV). Aside from the fluctuation amplitude of the total energy values, the size of the error bar is governed by the number of independent samples. Because of the long experimentally measured correlation time of water, significantly longer statistics may be required to reduce the uncertainty to within chemical accuracy. See also section 5.2.1 for a detailed analysis of the influence of sampling frequency.

Fourth, from the results presented in table 2, it cannot be completely ruled out that water molecules may have to be added to both sides of the BG sheet model to obtain correct results. This result stands in contrast to results by Reda et al. for NG, where results for placing water molecules on one side or both sides of the model were close to identical. [23] This result therefore shows that ∆∆E solv values obtained for one material cannot be transferred to others, even if they are as closely related as NG and BG.

Fifth, analysis of the z distributions, g(z), of oxygen atoms from the water molecules based on the MD data sets provided some first evidence that the bond length constraint used to obtain the constrained MD data set may have affected the coordination fine structure around the adspecies. However, due to the poor statistics resulting from the small required time step of the flexible MD data set, it would be necessary to extend these simulations by a factor 5-10 to obtain enough independent samples to make sure that this observation is significant.

To the best of our knowledge, there is only one other study in the literature where ∆∆E solv values from explicit solvation were calculated for the ORR intermediates on BG. Fazio et al. used a molecular BG flake model in contact with a cluster of 6 water molecules to obtain ∆∆E solv. [17] The group used the B3LYP hybrid functional in combination with DFT-D3 dispersion correction. From this model, they obtained ∆∆E solv values of -0.06 eV, -0.37 eV, and -0.46 eV for the *O, *OH, and *OOH intermediates. The values for *O and *OOH are in reasonable agreement with the results for 8 water molecules in the present study, which is the closest point of reference. The value for *OH is 0.15 to 0.20 eV more positive than in the present work. However, because the ∆∆E solv values in the present work are not converged even when 32 water molecules are included, an in-depth discussion about potential reasons for the (dis-)agreement of the present results and the results by Fazio et al. is not appropriate.

As an intermediary conclusion, the most likely explanation for the non-convergence of the ∆∆E solv results in general, as well as for the non-systematic differences between data sets more specifically, is that significantly more water molecules need to be included in simulations. It is unclear at this point how many water molecules would be required to achieve convergence. Sakong et al.
found that 6 layers of water are needed to obtain bulk water behavior and converged work function estimates in the case of FPMD simulations of a Pt(111) surface in contact with water. [46] However, Pt(111) is a strongly-coordinating surface compared to the hydrophobic BG sheet model in the present study. Furthermore, the group tested for convergence of the work function and not for ∆∆E solv of reaction intermediates. Hence, it is unlikely that 6 water layers will also be the correct number of layers to include for the present system.
For these reasons, it is currently not possible to foresee the ultimately required number of water molecules required to obtain converged ∆∆E solv results for this system. Attempting to find this number systematically by dynamics simulations with DFT atomic forces quickly becomes computationally unfeasible; simulations for the models in contact with 32 water molecules in this work already required several weeks of computational time. Even if these considerable time and energy resources would be spent to identify this number for the present problem, such a study would have to be repeated for every new material under investigation. Even though the influence of solvation has been shown to significantly affect free energy trends, the authors are therefore convinced that such simulations cannot (yet) be performed routinely.
We have thus come to the decision to publish the present results as-is and to not continue running simulations with model systems that include more and more water molecules at ever increasing computational cost. Instead, we are currently focusing research efforts on the development of a 2D periodic polarizable-embedding QM/MM method that will allow for simulations with thousands of water molecules while retaining electronic-structure-level accuracy for the surface model and the closest few layers of water molecules. This method will use the Single Center Multipole Expansion (SCME) ansatz to describe polarization of water molecules, which is crucial to accurately describe interface processes such as charge transfer. [47,48] Because the boundary plane between the QM and MM regions has exclusively water molecules on both sides, and because it is not necessary to describe diffusion to or from the surface to obtain ∆∆E solv results, an efficient restrictive boundary method can be used. The SAFIRES method recently developed in our groups was built to support 2D periodic boundary conditions. [49] A publication on the technical implementation of the 2D periodic polarizable-embedding QM/MM ansatz for the open-source GPAW and ASE programs is currently in preparation in our groups. The goal is to use this method to revisit the BG model system in the present work.
Analysis of potential error sources
To conclude the discussion of the data sets presented in this work, the following sections will rule out various potential error sources that readers familiar with dynamics simulations and the pitfalls of solvation energy calculations may be concerned about.
Influence of the sampling frequency on the results
Configurations were sampled from the dynamics simulations at an interval of 1 ps. It is important to ask how the ∆∆E solv results are affected by changes of the sampling frequency. Figure S8 compares ∆∆E solv results from the flexible MD and constrained MD data sets analyzed every 2 ps, 1 ps, 100 fs, and 10 fs.
The ∆∆E solv results appear to be robust against the choice of sampling frequency. The only significant differences are observed between sampling the flexible MD data set every 2 ps (5 total samples) and sampling it every 1 ps (10 total samples) or faster. This difference can be attributed to the poor statistics in the case of the 2 ps sampling frequency.
The size of the error bars is affected significantly by the sampling frequency because the square root of the number of samples, √n, enters the denominator of equation (7). This test therefore highlights the importance of choosing a reasonable sampling frequency based on the physical properties of the system to obtain a meaningful error bar. It is easy to be lured into a false sense of security by oversampling the results to obtain small error bars.
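One way to check whether a chosen sampling interval yields statistically independent samples is to estimate the integrated autocorrelation time of the sampled observable. The numpy sketch below is a generic illustration of this idea with synthetic placeholder data; it is not part of the published analysis.

```python
import numpy as np

def integrated_autocorrelation_time(x):
    """Estimate the integrated autocorrelation time (in steps) of a
    time series, truncating the sum at the first negative coefficient."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    acf = np.correlate(x, x, mode='full')[len(x) - 1:]
    acf /= acf[0]                     # normalize so that acf[0] = 1
    tau = 1.0
    for rho in acf[1:]:
        if rho < 0:
            break
        tau += 2.0 * rho
    return tau

# Example: a correlated toy series standing in for MD energies.
rng = np.random.default_rng(0)
energies = np.cumsum(rng.normal(size=5000)) * 1e-3
tau = integrated_autocorrelation_time(energies)
n_eff = len(energies) / tau   # effective number of independent samples
print(f"tau = {tau:.1f} steps, n_eff = {n_eff:.0f}")
```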
Spurious dipole and quadrupole corrections
Total energy calculations were performed using dipole and quadrupole correction perpendicular to the surface to avoid interactions between periodic repetitions of the simulation box. It is known that first-row semiconductors with defects, of which BG is an example, can lead to large dipole and quadrupole moments, thus making the correction necessary. However, our simulations showed that the correction can sometimes give erroneously large corrections of several eV for unknown reasons. After re-optimizing the wave function in a single-point calculation, the correction is then found to be of a reasonable magnitude again, usually on the order of some meV.
Because it is impossible to perform this manual correction for all calculations in this work, the consistency of the results is representatively examined by analyzing the average dipole and quadrupole correction energy (and the uncertainty thereof) of the resampled data set. Figure S9 shows the results of this analysis. The average correction energy is ≤ 0.02 eV in all cases, which is better than chemical accuracy. Error bars are found to be as large as 0.01 eV in some cases and close to 0.02 eV in one extreme case (BG-OOH in contact with 24 water molecules), indicating that the dipole and quadrupole energy correction is indeed volatile (in relation to the absolute values) and dependent on the exact geometry of the system. However, due to the small overall magnitude of the correction, it can be concluded that this correction should not significantly influence the calculation results.
Spurious dispersion correction

DFT-D3 dispersion correction values are significantly larger in magnitude than the dipole and quadrupole correction energy discussed in section 5.2.2. Figure S10a uses the resampled data set to show the dispersion energy difference $\Delta E_{\mathrm{disp}} = E^{\mathrm{BG-adspecies}}_{\mathrm{disp}} - E^{\mathrm{BG-clean}}_{\mathrm{disp}}$ between the BG systems with the adspecies *O, *OH, and *OOH and the clean system, all of which are in contact with water. This analysis therefore highlights the contribution of the dispersion energy to the adsorption energy for the solvated model systems. Figure S10b reproduces the ∆∆E solv results as a function of the number of water molecules shown in figure 4 but with the dispersion energy removed from the total energy. This analysis shows that the dispersion contributions increase with the size of the solute. ∆E disp is close to zero for the *O adatom but ca. -0.5 eV for *OOH in contact with 16 water molecules. The values for *OH and *OOH fluctuate significantly between subsequent data points, raising the question of whether the dispersion correction may be partially responsible for the erratic behavior of the ∆∆E solv trends. However, analyzing the ∆∆E solv trends in figure S10b shows that the results do not become more consistent when the dispersion energy contribution is removed. Hence, it can be concluded that any volatility of the dispersion correction results is also not the cause for but most likely the result of the erratic nature of the entire data set.
One caveat in this analysis and discussion, however, is that this a posteriori removal of the final dispersion correction energy does not remove the entire influence of dispersion correction on the data set. Both the MD simulations and the local minimization of the structures in the resampled data used dispersion correction throughout, hence the final structures (re-)analyzed here are generated on the RPBE-D3 potential surface. Despite this caveat, it is still unlikely that dispersion is the driving factor behind the erratic results since in particular the RPBE-D3 functional combination has been shown in the past to produce water structure that is in good agreement with experiments. [37]
Influence of simulation cell size
Influence of minimizing the reference systems
This concern is related to the discussion about inconsistent cell size in section 5.2.4. As pointed out there, the reference systems were obtained from the solvated parent systems by removal of the water molecules and subsequent energy-minimization of the resulting atomic configurations. This approach was chosen to account for the possibility that the most stable atomic arrangement of the BG-adspecies system may change once water molecules are removed.
However, this approach creates a potential inconsistency: by optimizing the atomic configuration of the reference systems, the ∆∆E solv values obtained from equation (4) contain not only the interaction of the BG-adspecies system with the water molecules but also the reorganization energy of the systems when going from a system in vacuum to a solvated system. Overall, however, the differences appear to be systematic across the board and do not change the trends. Therefore, this factor is also not responsible for the erratic, non-converging behavior of ∆∆E solv with increasing number of water molecules.
Influence of using the lowest-energy structures to obtain the solvation stabilization energy

∆∆E solv of the ORR intermediates on NG was calculated by Yu et al. in 2011 by introducing 41 water molecules to a NG model, performing classical dynamics simulations with DFT forces, and finally minimizing the lowest-energy solvated structures obtained from the MD simulation with respect to the atomic coordinates. [21] The group obtained ∆∆E solv values of -0.53, -0.38, and -0.49 eV for the *O, *OH, and *OOH intermediates, respectively. While this approach fails to capture the vast structural diversity accessible to the system and is therefore less representative of the system under experimental conditions, it has value from a computational perspective because ∆∆E solv according to equation (4) is calculated exclusively from 4 values total, all of which represent the best possible guess for the global minimum energy configuration of each system. Hence, this approach is applied to the present data set. The flexible MD data set was re-analyzed to find the structure with the lowest total energy for each combination of adspecies and number of water molecules. The obtained images were then energy-minimized using the tight accuracy settings outlined in section 3. Figure S12 shows the results of this approach. It can be concluded that this approach not only did not resolve the erratic results but can further distort them, because the close-to-ideal local configurations optimized in this case likely do not represent the average configurations of water molecules around the adspecies in real, finite-temperature systems.
Influence of constraining the geometry of the BG sheet
100 ps of classical dynamics without bond length constraints on the water molecules and no geometry constraint on the BG sheet and adspecies were accidentally performed for the BG-OOH system in contact with 1 layer of water.
This mistake, however, can be used to probe the influence of the geometry constraint on the BG-OOH system. Figure S13 compares the total energy and temperature trends over the course of the simulation time for the simulations with and without geometry constraint on the BG-OOH system. Most notably, the total energy fluctuations are significantly increased in the case of the model without constraint.
The increased amplitude of fluctuations translates to a larger error bar. Hence, without the geometry constraint on the BG-OOH backbone, more sampling statistics are required to reduce the uncertainty to an appropriate level. In the interest of computational feasibility, the geometry constraint therefore turns out to be an almost necessary prerequisite.
Finally, figure S14 compares the g(z) of the systems where the BG sheet was constrained against that of the non-constrained system. No significant differences were observed. This result indicates that constraining the BG sheet does not significantly affect the interactions between the surface and the first water layer from a structural point of view.
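The g(z) analysis used throughout this work can be reproduced with a short numpy/ASE sketch along the following lines. The trajectory file name, the sheet position at z = 0, and the use of atom tags to mark water molecules are all placeholder assumptions.

```python
import numpy as np
from ase.io import read

# Placeholder trajectory name; frames sampled every 100 fs as in figure S7.
frames = read('bg_ooh_constrained_md.traj', index='::100')

z_sheet = 0.0   # assumed z coordinate of the x-y plane inside the BG sheet
z_values = []
for atoms in frames:
    for atom in atoms:
        # Histogram water O atoms only; atoms are assumed here to carry
        # tag 1 for water (sheet and adspecies atoms are excluded).
        if atom.symbol == 'O' and atom.tag == 1:
            z_values.append(atom.position[2] - z_sheet)

# Normalize the maximum to 1.0, mirroring the normalization of figure S7.
hist, edges = np.histogram(z_values, bins=100, range=(0.0, 12.0))
g_z = hist / hist.max()
centers = 0.5 * (edges[:-1] + edges[1:])
```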
Embedded solvation approach
The embedded solvation approach, where a small cluster of explicit solvent molecules is used in combination with an implicit continuum description of the solvent bulk, has recently been employed to good effect. [50,51] At the beginning of this study, the one-shot minimization data set was, in fact, computed using the embedded approach, and similarly erratic results were obtained. The implicit solvent model was then discarded for the remainder of this study to reduce the number of potential error sources. A detailed discussion of the simulation parameters and potential error sources is provided to rule out that technical errors led to these erratic results.
We conclude that 32 water molecules, which is the equivalent of 4 layers of water in this model system, are not sufficient to describe solvation of the adspecies within chemical accuracy. Chemical accuracy, i.e. convergence of ∆∆E solv to changes of < 0.05 eV when adding more and more water molecules, is essential since any reduction of the free energy of the potential-determining intermediate will lead to a proportional reduction of the thermochemical overpotential as well.
These results emphasize that new simulation methods are required to be able to calculate large enough systems to obtain converged ∆∆E solv results since molecular dynamics simulations with DFT forces quickly become computationally unfeasible when adding more and more water molecules. Our groups are therefore focused on implementing a 2D periodic hybrid method (often referred to as QM/MM) for the open-source ASE and GPAW software packages which will enable calculations with thousands of water molecules.
Another promising approach to tackle this problem is the recently developed on-the-fly machine learning force field training method. [52] This approach could be used to train a machine learning force field on a small system and then upscale the system to contain many water molecules while retaining close-to-DFT accuracy.
Finally, we believe in the importance of presenting these negative results to the catalysis community as a word of caution. It is easy to underestimate the number of explicit water molecules required to obtain sufficiently accurate solvation energy results.
Supplementary information. A Supplementary Information document is provided alongside this article. Figure S1 illustrates the 32-atomic BG sheet model with the *O, *OH, and *OOH adspecies used throughout this study. Figure S2 illustrates the BG sheet model with an *O adatom in contact with 1-4 layers of water molecules.
Interactive visualizations of the model systems can be found online at https://bjk24.gitlab.io/bg-solvation/docs/visualization.html.

Density functional benchmark and ORR free energy trends

Adsorption free energy values for the ORR intermediates *O, *OH, and *OOH and, from these, thermochemical overpotentials η TCM are calculated for the 32-atomic BG model using various different density functionals and the computational hydrogen electrode free energy method. [13] Potential-dependent free energy values are calculated the same way as in an earlier work on NG. [29] Using η TCM as a descriptor for ORR activity, this test is performed to investigate how strongly the thermochemical results depend on the chosen density functional. The HSE06 functional was previously found to reproduce a Diffusion Monte Carlo benchmark value most accurately out of all tested functionals. [29] Thus, the HSE06 result is used as a reference value in the following. Figure S3 shows free energy changes of the ORR intermediates on the BG model at 0 V vs. SHE and at the extrapolated onset potential for each functional. The potential-determining step (PDS) in all cases is the formation of the *OOH intermediate. Compared to a hypothetical ideal ORR catalyst with a free energy change of 1.23 V at each reaction step, the *OOH intermediate is underbound in the case of all functionals, with hybrid functionals underbinding *OOH more strongly than meta-GGA and GGA functionals. The *O intermediate is significantly overbound in the case of GGA and meta-GGA functionals, which is in agreement with benchmark calculations on undoped graphene and free energy calculations on NG. [29,44,45] The energetic description of the *OH intermediate is similar for all tested functionals and aligns well with the hypothetical ideal catalyst. The free energy results highlight a significant issue: all functionals are in good agreement for the *OH intermediate but differ in their results for the *OOH and *O adspecies. Therefore, different functionals are bound to produce different overall trends. To further illustrate this conclusion, thermochemical overpotentials η TCM are calculated from the extrapolated onset potentials U onset as
$$\eta_{\mathrm{TCM}} = 1.23\,\mathrm{V} - U_{\mathrm{onset}}. \qquad (9)$$
η TCM results are summarized in Table S1. The hybrid functionals give the largest values, with HSE06 at ca. 1.0 V and PBE0 at a slightly lower η TCM of 0.93 V. The tested GGA and meta-GGA functionals give significantly lower η TCM values of 0.68 V (PBE), 0.61 V (SCAN), 0.52 V (TPSS), and 0.44 V (BEEF-vdW). These trends are analogous to previous computational results for NG. [29] Notably, choosing a meta-GGA functional will not provide significant improvements over GGAs. Bayesian error estimation is performed based on an ensemble of 2000 functionals generated by BEEF-vdW to obtain standard deviations for the adsorption free energy values of the ORR intermediates. Based on the error estimation, the largest- and smallest-possible η TCM values can be calculated, which should be indicative of the overall uncertainty of GGA functionals for this application. The η TCM range obtained this way for BEEF-vdW is 0.26-0.80 V. Notably, this range does not include values obtained by hybrid functionals. Since the HSE06 and PBE0 hybrid functionals are able to reproduce DMC benchmark values for graphene-based materials, [29,44,45] this result indicates that GGA functionals lack some fundamental contribution, likely exact exchange, that is necessary to accurately describe the electronic structure of this material class.

Supercell size convergence

A supercell size convergence study is performed with the generalized gradient approximation (GGA) functional by Perdew, Burke, and Ernzerhof (PBE) as well as with the hybrid functional by Heyd, Scuseria, and Ernzerhof (HSE06) where the calculation of exact exchange was downsampled:
1. PBE-based optimization (labeled "PBE")
2. HSE06-based optimization where the k grid was reduced to the Γ point for the Hartree-Fock portion of the calculation (labeled "HSE06-fast")
The downsampling of the HSE06 functional reduces the k grid to the Γ point for the calculation of the Hartree-Fock exchange energy, thereby making minimization of the atomic coordinates computationally feasible at the hybrid DFT level.

The test below calculates energy differences according to
$$\Delta E = E^{\mathrm{ads}}_{\mathrm{tot}} - E^{\mathrm{clean}}_{\mathrm{tot}}, \qquad (10)$$
where $E^{\mathrm{ads}}_{\mathrm{tot}}$ is the total energy of a system with an adspecies (*O, *OH, or *OOH) and $E^{\mathrm{clean}}_{\mathrm{tot}}$ is the total energy of the BG sheet without any adspecies.
To test for supercell size convergence, it is not necessary to calculate an actual adsorption energy by taking into account the total energy of the adspecies calculated from molecules such as O₂, H₂, and H₂O, because their total energy will change only in small ways as the size of the supercell increases. There is always a slight change because plane waves always fill the entire box, also for molecules, which makes the total energy dependent on the box size. However, this effect is insignificant compared to the influence of the decreasing dopant and adatom concentration as a function of the supercell size. Figure S4 shows the results of this test.
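A minimal sketch of this bookkeeping is given below; the total energies are illustrative placeholders standing in for VASP results at each supercell size.

```python
# Sketch of the supercell-size convergence test based on equation (10).
# Total energies (eV) are illustrative placeholders, not VASP results.
E_tot = {
    # supercell size: (E_ads for *O, E_clean)
    16: (-145.12, -140.31),
    24: (-218.40, -213.57),
    32: (-291.73, -286.89),
}

for size, (e_ads, e_clean) in sorted(E_tot.items()):
    dE = e_ads - e_clean          # equation (10)
    print(f"{size:3d} atoms: dE = {dE:.3f} eV")
# Convergence is reached when dE changes by less than ~0.05 eV
# (chemical accuracy) between subsequent supercell sizes.
```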
Since these convergence tests were performed at a much earlier date than the production calculations shown in the Results section of the main manuscript, slightly different settings were used in this test compared to the settings summarized in the Computational Details section of the main manuscript. The following list addresses the differences and their potential impact on the results:
1. This data set uses a PAW energy cutoff of ENCUT = 500. As can be seen further below in section 7.3.3, the choice of ENCUT = 600, which was used in the main article for geometry optimization calculations, is on the paranoid end of safe, and there is no reason to assume that using a cutoff of 500 eV in this instance had an adverse effect on the results.
2. Note that the k grid was individually optimized for each system. The k grid optimization runs are not shown for the sake of brevity. The entire calculated data set is provided in an archive found under DOI:10.5281/zenodo.7684918; consult the KPOINTS files in the respective subfolders for the converged k grid settings.
3. We expect that the trends obtained from these functionals (PBE and HSE06, the latter of which constitutes a PBE functional with 25 % exact exchange and a screening parameter) are fully transferable to the RPBE functional with DFT-D3 dispersion correction that was used for production calculations later on. RPBE and PBE belong to the same family of GGA-rung functionals; more specifically, RPBE is a re-parameterized PBE functional optimized towards surface (adsorption) calculations. The differences between PBE and RPBE are minor, unlike for example the differences between PBE and functionals like BEEF or SCAN, which use fundamentally different potential terms and would therefore require careful re-investigation.
4. The influence of DFT-D3 on the ∆E results is negligible because the adspecies are covalently bound. DFT-D3 is therefore unlikely to affect the results of this kind of convergence test but will become more important as layers of non-covalently bound water molecules are added to the model systems. The influence of DFT-D3 on the results is investigated in more detail in the Discussion section.

This convergence test shows that the energy differences are well converged from the beginning. This result is in contrast to what was observed for NG, where the size 16 and 24 data points did not show converged results yet. [29] However, to stay consistent with the previous work on NG, we chose to use the 32-atomic model system going forward.
There is also another reason for using the slightly larger system: there is the possibility that if the system size is chosen too small, the water atoms in the individual layers become too crowded and do not have the necessary space to relax and accommodate the surface and adspecies properly. To investigate crowding effects in the lateral directions properly, the MD simulations presented in the Results section should be repeated for a set of surface models with increasing size in the x and y directions; however, such a test was not computationally feasible at the time of this study.
k grid convergence of the solvation stabilization energy
Convergence of ∆∆E solv with respect to the k grid is tested. Our hypothesis was that the k grid density required for ∆E ads and ∆∆E solv should be different because the interaction is fundamentally different (covalent interaction of the adatom with the periodic surface vs. non-covalent long-range interaction of solvent molecules with, mostly, the adspecies and only very lightly with the hydrophobic graphene surface). This hypothesis was strengthened by the observation that ∆E ads and ∆∆E solv do not share the same dependence on the density functional. [29] In this case, $\Delta\Delta E^{*\mathrm{O}}_{\mathrm{solv}}$ for the BG system with an *O adatom is calculated as

$$\Delta\Delta E^{*\mathrm{O}}_{\mathrm{solv}} = E^{\mathrm{BG+O+solv}}_{\mathrm{tot}} - E^{\mathrm{BG+O}}_{\mathrm{tot}} - E^{\mathrm{solv}}_{\mathrm{tot}}, \qquad (11)$$

where $E^{\mathrm{BG+O+solv}}_{\mathrm{tot}}$ is the total energy of the BG sheet model with the adatom and an overlayer of 8 H₂O molecules, $E^{\mathrm{BG+O}}_{\mathrm{tot}}$ is the total energy of the BG sheet model with the adatom without any solvent molecules, and $E^{\mathrm{solv}}_{\mathrm{tot}}$ is the total energy of only the 8 H₂O molecules. All total energy values are obtained as single-point results from the same starting geometry; the atomic positions are not optimized for the different subsystems. Figure S5 shows the results of this test.
From this test, ∆∆E solv results are converged using a 2x2x1 k grid.
Arguably, the results can be regarded as converged already at a 1×1×1 grid, since the energy difference between the smallest and the next k grid is only ca. 0.012 eV. For the MD simulations, we erred on the side of caution and used a 3×3×1 k grid. Static calculations used various k grids depending on the exact systems; consult the data set under DOI:10.5281/zenodo.7684918 for exact k grid settings for each subset of simulations.
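For completeness, the following sketch shows how equation (11) translates into code once the three total energies have been extracted from the corresponding VASP runs; the file paths are placeholders.

```python
from ase.io import read

# Placeholder OUTCAR paths for the three single-point calculations
# entering equation (11); all use the same starting geometry.
E_full  = read('BG_O_solv/OUTCAR').get_potential_energy()   # BG + *O + 8 H2O
E_dry   = read('BG_O/OUTCAR').get_potential_energy()        # BG + *O only
E_water = read('solv_only/OUTCAR').get_potential_energy()   # 8 H2O only

ddE_solv = E_full - E_dry - E_water   # equation (11)
print(f"ddE_solv(*O) = {ddE_solv:.3f} eV")
```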
Convergence of the plane-augmented wave energy cutoff
The same approach as above for the supercell size was used to establish the relationship between ∆E and the plane-augmented wave energy cutoff. Only the *O adatom is tested in this case since behavior appears to be similar for all intermediates (see for example figure S4) and because the *O intermediate in particular was found to be the most notorious in benchmark calculations with NG. [29] The 32-atomic BG sheet model was used. Aside from using the PBE functional and a changing ENCUT parameter, the other simulation parameters were consistent with those summarized in the Computational Details section in the main article. Figure S6 shows the results of this test.
Results show that a PAW cutoff energy of 500 eV is sufficient to obtain converged relative energy results. For geometry optimization calculations, a cutoff of 600 eV was chosen in an attempt to err on the side of caution. However, MD simulations were performed with ENCUT = 400 due to the excessive computational cost of higher energy cutoff values. See section Discussion in the main article for an analysis of this disparity. As above for the supercell size convergence, there is no reason to assume that this convergence test with the PBE functional would not translate to the RPBE + DFT-D3 functional combination later on, as they are closely related functionals from the GGA-DFT rung.

Oxygen z distribution (g(z)) results

Figure S7 shows results for the z distribution of O atoms in the model systems. The distributions were obtained from the constrained MD and the flexible MD data sets. The distance pairs were calculated only from pairs that involved the surface as one of the partners; hence, the surface is located at z = 0 in the figures below and the distribution can be interpreted as the coordination of water atoms relative to the surface. The adspecies (*O, *OH, *OOH) are omitted from the analysis since the surface and adspecies were frozen during MD simulations and would show up as a sharp density peak higher than the bands resulting from the water structure, which is actually of interest.

This test is performed to check if potentially erratic dipole and quadrupole correction values, which were empirically observed in this work to sometimes occur for no apparent reason, are causing the ∆∆E solv results to be erratic. Figure S9 visualizes the dipole and quadrupole energy correction results for the resampled data set.

Figure S10b reproduces the ∆∆E solv results for the resampled data set shown in Figure 4 in the main manuscript but with the dispersion energy contribution removed from the total energy values before calculating ∆∆E solv. The dispersion energy difference between a system with an adspecies X and the clean sheet is
$$\Delta E_{\mathrm{disp}} = E^{\mathrm{BG-X}}_{\mathrm{disp}} - E^{\mathrm{BG}}_{\mathrm{disp}}. \qquad (12)$$

Results indicate that while the dispersion correction is significant in terms of absolute values, removing E disp does not stabilize the ∆∆E solv trends either (figure S10b). There is, however, an important caveat to this test: dispersion correction was included during the MD simulations from which the resampled data set was generated and also during minimization calculations of the structures in the resampled data set. Hence, this a posteriori removal of the dispersion contribution can only be a rough indicator of its influence.
The data sets would need to be reproduced completely without dispersion correction to conclusively rule out this parameter.

Influence of the simulation box dimensions

Table S2 summarizes the total energy values of the reference systems without water molecules from the resampled data set used to calculate ∆∆E solv. The reference systems were generated from the parent systems that include water molecules by removing the latter. This step was necessary because the simulation box dimensions are different depending on how many water molecules are included. This table illustrates the differences introduced into the total energy by variable cell dimensions. The differences are < 0.01 eV in all cases and, even if the cell dimensions had not been corrected for in this way, would be unlikely to distort the simulation results in a meaningful way.
Influence of energy-minimizing the non-solvated reference systems
Another potential source of inconsistency is the way that total energy values for the non-solvated reference systems are obtained. In the main article, the configurations of the non-solvated systems were obtained by removing the water molecules from the solvated systems and minimizing the resulting atomic configurations. However, with this approach, ∆∆E solv does not only contain the interaction between the surface-adspecies system and the water molecules but also the rearrangement energy from the relaxation of the reference system. Figure S11 explores the influence of this choice.

While the trends shown in figure S12 loosely resemble the trends for the resampled data set, ∆∆E solv for the *O adspecies is significantly more negative than with any other analysis strategy. It can be concluded that this approach not only did not resolve the erratic trends but likely further distorted the results, because the close-to-ideal local configurations optimized in this case likely do not represent the average configurations of water molecules around the adspecies in real, finite-temperature systems.
Influence of freezing the BG sheet
The flexible MD simulation of BG-OOH in contact with 8 (flexible) water molecules was accidentally performed with a non-constrained BG sheet. While the data shown in the main article was obtained with the correctly constrained model, this mistake makes it possible to probe the influence of this geometry constraint. This result may indicate that significantly more sampling would be required in the case of the non-constrained BG sheet to obtain a good estimate of ∆∆E solv with a small enough error bar. Figure S14 does not show any significant differences between the two systems, indicating that freezing the BG sheet and adspecies does not significantly change the interaction with the first layer of solvent molecules.
Fig. 1 Rendered illustration of the BG sheet model system with an *O adatom in contact with 32 water molecules (4 layers).
Fig. 2 ∆∆E solv results for the *O intermediate on BG in contact with 4-32 molecules of water obtained from the one-shot minimization data set. The blue line shows ∆∆E solv when water molecules are exclusively placed on the side of the model where the adatom is located. The orange line shows ∆∆E solv values from select models where water molecules are placed on both sides of the model. For the orange line, the x axis indicates the number of water molecules on the side with the adatom and not the total number of water molecules. The † and ‡ indicators connect the values in this figure to the corresponding data values in table 2.
Fig. 3 ∆∆E solv results for the *O (blue curve), *OH (orange curve), and *OOH (green curve) adspecies on BG in contact with 8-32 molecules of water, obtained as ensemble averages from (a) 10 ps of MD using a time step of 0.1 fs where water molecules were flexible and (b) 100 ps of MD using a time step of 1.0 fs where water molecules were constrained. The error bars indicate the two-sided 95 % CI calculated according to equations (5)-(8).
Fig. 4 ∆∆E solv results for the *O, *OH, and *OOH adspecies on BG in contact with 8-32 molecules of water, obtained as average values over 20 images per data point which were randomly resampled from the flexible MD data set and subsequently energy-minimized with respect to the atomic coordinates. The error bars indicate the two-sided 95 % CI calculated according to equations (5)-(8).
Fig. 5 Comparison of ∆∆E solv results for the (a) *O, (b) *OH, and (c) *OOH adspecies from the one-shot minimization data set, the flexible MD and constrained MD data sets, and the resampled data set.
Fig. S1 Illustrations of the BG sheet model (a) in contact with *O (b), *OH (c), and *OOH (d) adspecies.
Fig. S2 Illustrations of the BG sheet model with an *O adatom in contact with 1 (a), 2 (b), 3 (c), and 4 layers (d) of water molecules.
Fig. S3 Free energy diagrams for the 32-atomic BG model at 0 V obtained using GGA and meta-GGA functionals (top) as well as PBE and hybrid functionals (bottom). A hypothetical ideal ORR catalyst with a free energy change of 1.23 V at each reaction step is shown as a dotted line.
Because this convergence test was set up separately from the production calculations shown in the Results section of the main manuscript, slightly different settings were used in this test compared to the settings summarized in the Computational Details section of the main manuscript. The following list addresses the differences and their potential impact on the results:
Fig. S4: Supercell size convergence study where ΔE is calculated according to equation (10) for the BG sheet in contact with O, OH, and OOH adspecies using the PBE and a downsampled HSE06 functional (see text for details).
4. The influence of DFT-D3 on the ΔE results is negligible because the adspecies are covalently bound. DFT-D3 is therefore unlikely to affect the results of this kind of convergence test but will become more important as layers of non-covalently bound water molecules are added to the model systems. The influence of DFT-D3 on the results is investigated in more detail in the Discussion section.
Here, E_tot^(BG+O+solv) is the total energy of the BG sheet model with the adatom and an overlayer of 8 H2O molecules, E_tot^(BG+O) is the total energy of the BG sheet model with the adatom without any solvent molecules, and E_tot^(solv) is the total energy of only the 8 H2O molecules. All total energy values are obtained as single-point results from the same starting geometry; the atomic positions are not optimized for the different subsystems.
k grid: Static calculations used various k grids depending on the exact system; consult the data set under DOI:10.5281/zenodo.7684918 for the exact k grid settings for each subset of simulations.
Fig. S5: Convergence study of ΔΔE_solv with respect to the k grid, where ΔΔE_solv is calculated according to equation (11) for the BG sheet in contact with an O adatom using the PBE functional.
converged relative energy results. For geometry optimization calculations, a cutoff of 600 eV was chosen in an attempt to err on the side of caution. However, MD simulations were performed with ENCUT = 400 due to the excessive computational cost of higher energy cutoff values. See section Discussion in the main article for an analysis of this disparity. As above for the supercell size convergence, there is no reason to assume that this convergence test with the PBE functional would not translate to the RPBE + DFT-D3 functional combination later on, as they are closely related functionals from the GGA-DFT rung.

Fig. S6: Plane-augmented wave energy cutoff convergence study where ΔE is calculated according to equation (10) for the BG sheet in contact with an O adatom using the PBE functional.

7.4 Oxygen z distribution (g(z)) results

Figure S7 shows results for the z distribution of O atoms in the model systems. The distributions were obtained from the constrained MD and the flexible MD data sets. The distance pairs were calculated only from pairs that involved the surface as one of the partners; hence, the surface is located at z = 0 in the figures below and the distribution can be interpreted as the coordination of water atoms relative to the surface. The adspecies (*O, *OH, *OOH) are omitted from the analysis since the surface and adspecies were frozen during MD simulations and would show up as a sharp density peak higher than the bands resulting from the water structure, which is actually of interest.

Fig. S7: Distributions of O atoms in z direction, g(z), for the different systems in contact with 1-4 layers (8-32 molecules) of water: clean BG sheet model without adspecies, BG sheet with *O adatom, BG sheet with *OH admolecule, and BG sheet with *OOH admolecule. Results from the constrained MD simulations (100 ps total) are always shown on the left, results from the flexible MD (10 ps total) are shown on the right. The g(z) is sampled every 100 fs in both cases. Distances are calculated between all water O atoms and the x-y plane located inside the BG sheet model. Results were normalized so that the maximum g(z) value in every distribution is 1.0 for better comparability.
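The g(z) analysis just described reduces to a normalized histogram of water-oxygen heights above the sheet plane. A minimal sketch is given below; the frame data are synthetic, and in practice the frames would be read from the MD trajectory (e.g. with ase.io.read), which is an assumption about tooling.

```python
# Sketch of the g(z) analysis: histogram of water-O heights above the
# sheet plane, sampled over trajectory frames and normalized so the
# maximum bin equals 1.0, as in Fig. S7.
import numpy as np

rng = np.random.default_rng(1)
z_sheet = 0.0                      # sheet plane taken as z = 0
n_frames, n_water_O = 100, 8       # e.g. one layer: 8 molecules

# Synthetic stand-in for per-frame water-oxygen z coordinates, clustered
# around a hypothetical first solvation layer ~3.3 A above the sheet.
frames = rng.normal(loc=3.3, scale=0.4, size=(n_frames, n_water_O))

dz = np.abs(frames - z_sheet).ravel()
counts, edges = np.histogram(dz, bins=60, range=(0.0, 8.0))
g_z = counts / counts.max()        # normalize peak height to 1.0

centers = 0.5 * (edges[:-1] + edges[1:])
print("peak of g(z) at z = %.2f A" % centers[np.argmax(g_z)])
```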
Figure S7 shows that results from the constrained MD data set are smoother due to the factor 10 longer sampling. However, constraining the water molecules also appears to remove some of the fine structure observed in the case of the flexible MD data set; this is most apparent for the *OH and *OOH adspecies in contact with 3 layers of water molecules. This observation may hint towards the Rattle constraint changing the interaction with the surface and adspecies, but more research is needed to make sure that this observation is not noise, i.e., the result of poor sampling statistics.

7.5 Influence of the MD sampling frequency on the solvation stabilization energy results

Figure S8 summarizes ΔΔE_solv results from the flexible MD and constrained MD data sets analogous to figure 4 in the main manuscript, except that different intervals of sampling are tested.

Fig. S8: Influence of the MD sampling frequency on the ΔΔE_solv results. The original sampling frequency shown in the main manuscript is 1 ps, indicated in this figure as sampling factor = 1. Shown here are additional sampling factors of 0.5, 10.0, and 100.0 which correspond to sampling frequencies of 2 ps, 100 fs, and 10 fs.
Figure S8 shows that the average ΔΔE_solv results are robust against the sampling frequency. The only significant change is observed going from sampling factor 0.5 (2 ps) to factor 1.0 (1 ps) in the case of the flexible MD data set. This change constitutes the difference between 5 and 10 evaluated images for this data set. It can therefore be concluded that 10 independent images is the minimum number of images required to obtain a robust average ΔΔE_solv from this data set.

The choice of sampling frequency affects the size of the error bars significantly. This result highlights that it is important to choose the sampling frequency according to physical considerations (here: the correlation time of water), since only checking for convergence of the average results can create a false sense of security from oversampling the data.
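The oversampling pitfall described above can be made concrete with a short numerical sketch: the same time series is subsampled at different intervals, and the naive standard error shrinks with denser sampling even though the correlated samples carry no new information. The series below is synthetic AR(1) noise, not data from this work.

```python
# Sketch of the sampling-frequency test: subsample one correlated MD-like
# time series of ddE_solv at several intervals and compare the mean with
# the naive standard error (which ignores autocorrelation).
import numpy as np

rng = np.random.default_rng(2)
dt_fs = 10.0                                  # underlying output spacing
n = 10_000                                    # 100 ps of data

# Correlated stand-in signal: AR(1) noise around a mean of -0.1 eV.
x = np.empty(n)
x[0] = -0.1
for i in range(1, n):
    x[i] = -0.1 + 0.98 * (x[i - 1] + 0.1) + rng.normal(0.0, 0.005)

for interval_fs in (10, 100, 1000, 2000):     # cf. sampling factors in Fig. S8
    stride = int(interval_fs / dt_fs)
    sub = x[::stride]
    sem = sub.std(ddof=1) / np.sqrt(len(sub))  # naive, correlation-blind
    print(f"every {interval_fs:>5} fs: n={len(sub):>5}, "
          f"mean={sub.mean():+.3f} eV, naive SEM={sem:.4f} eV")
```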
Fig. S9: Average dipole and quadrupole energy correction values for the systems in the resampled data set. While large fluctuations are noted in particular for the *OOH adspecies in contact with 24 and 16 water molecules and the *O adspecies in contact with 24 water molecules, the fluctuations remain below the limit of chemical accuracy (< 0.05 eV). The error bars indicate the two-sided 95 % confidence interval.

Results show that the average correction energy values are below 0.05 eV and therefore within chemical accuracy. However, the error bars for some systems are large, in particular for the *OOH adspecies in contact with 16 and 24 water molecules and the *O adspecies in contact with 24 molecules. This observation indicates that the correction can be somewhat erratic for certain systems and arrangements. Ultimately, the low absolute values for this correction are unlikely to impact results in a significant way.

7.7 Influence of the dispersion correction on the results

Similar to the dipole and quadrupole correction energy, the influence of the DFT-D3 dispersion correction on the obtained ΔΔE_solv results is tested.
Figure S10a shows the difference of the dispersion contributions of the clean BG surface and the BG surface with an adspecies (X = O, OH, OOH):

ΔE_disp = E_disp^(BG-X) − E_disp^(BG)
Fig. S10: Influence of the dispersion correction on the calculated ΔΔE_solv results. a: Absolute dispersion correction energy for each tested system averaged over the data set of 20 images from the resampled data set as outlined in the main article. Shown here is ΔE_disp = E_disp^(BG-X) − E_disp^(BG), i.e., the difference of the dispersion contributions of the clean BG surface and the BG surface with an adspecies (X = O, OH, OOH). b: ΔΔE_solv trends for the resampled data set where E_disp has been deducted from the total energy values before calculating ΔΔE_solv. The error bars indicate the two-sided 95 % confidence interval.
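The analysis behind panel b of Fig. S10 amounts to deducting the D3 contribution from each total energy before recomputing ΔΔE_solv. A minimal sketch follows; all energy values are hypothetical placeholders, not numbers from the data set.

```python
# Sketch: remove the DFT-D3 contribution from each total energy and
# recompute ddE_solv; the difference to the uncorrected value isolates
# the dispersion effect. Values below are illustrative placeholders (eV).
totals = {  # system: (E_tot, E_disp)
    "BG+O+solv": (-415.30, -1.95),
    "BG+solv":   (-409.60, -1.80),
    "BG+O":      (-296.00, -0.45),
    "BG":        (-290.43, -0.40),
}

def ddE_solv(e):
    return (e["BG+O+solv"] - e["BG+solv"]) - (e["BG+O"] - e["BG"])

with_d3 = ddE_solv({k: v[0] for k, v in totals.items()})
without_d3 = ddE_solv({k: v[0] - v[1] for k, v in totals.items()})
print(f"ddE_solv with D3: {with_d3:+.3f} eV, without D3: {without_d3:+.3f} eV")
```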
Fig. S11: Exploring the influence of energy-minimizing the non-solvated reference systems. Left: non-solvated reference configurations were energy-minimized with respect to the atomic coordinates ('vacuum'). Right: non-solvated reference systems were not energy-minimized and energy values were obtained from single-point calculations ('vac-sp').

7.10 The solvation stabilization energy obtained only from the lowest-energy MD configurations

An article by Yu et al. from 2011 details yet another way of obtaining ΔΔE_solv.[21] The group performed MD simulations with NG model systems and the ORR adspecies in contact with explicit water molecules. They then picked the lowest-energy configuration generated in each MD simulation and performed an energy minimization calculation with respect to the atomic coordinates. The minimized systems were then used to calculate solvated free energy values.
Figure S12 applies this strategy to the flexible MD data set. The closest comparison for this analysis are the results from the resampled data set, see figure 4 in the main article.

Fig. S12: ΔΔE_solv obtained from only the lowest-energy configurations in the flexible MD data set, which were subsequently energy minimized.
Figure S13 compares the E_tot vs. t and T vs. t trends of flexible MD simulations with the unconstrained BG sheet (a) and the properly constrained BG sheet (b). Figure S14 compares the surface-O g(z) distributions obtained from flexible MD simulations with these two models.

Fig. S13: Comparison of E_tot vs. t and T vs. t trends of flexible MD simulations with the properly constrained BG sheet (a) and the unconstrained BG sheet (b).
Figure S13 shows that the total energy and temperature fluctuations are significantly increased if the BG sheet with the adspecies is not constrained (b).
Fig. S14: Comparison of surface-O g(z) distributions obtained from flexible MD simulations using the constrained and non-constrained BG sheet.
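The constraint scheme discussed in this section (frozen sheet and adspecies, Rattle-type bond fixing during Langevin dynamics) can be sketched with ASE. This is a minimal illustration under stated assumptions: a single gas-phase water molecule stands in for the solvated model, and the EMT calculator is only a crude placeholder for the periodic DFT forces used in the actual simulations.

```python
# Minimal sketch of a Rattle-type constrained MD run using ASE.
# EMT is a rough placeholder potential (assumption for illustration);
# the production runs used VASP with RPBE + DFT-D3 forces.
from ase import units
from ase.build import molecule
from ase.calculators.emt import EMT
from ase.constraints import FixBondLengths
from ase.md.langevin import Langevin

water = molecule("H2O")        # atom 0 is O, atoms 1 and 2 are H
water.center(vacuum=5.0)
water.calc = EMT()

# Fix both O-H bond lengths at their current values (Rattle-like constraint).
water.set_constraint(FixBondLengths([(0, 1), (0, 2)]))

dyn = Langevin(water, timestep=1.0 * units.fs,
               temperature_K=300, friction=0.02)
dyn.run(50)

print("O-H distances after MD:",
      round(water.get_distance(0, 1), 3),
      round(water.get_distance(0, 2), 3))
```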
Following experiments showing high ORR activity of a nitrogen-doped graphene (NG) electrocatalyst in 2010[4] and soon after the reports of high catalytic activity of NGs, boron-doped graphene (BG) emerged as another promising candidate for efficient ORR electrocatalysis.

Sheng et al.[5] measured favorable alkaline ORR activity for BG with 3.2 % dopant concentration synthesized using Hummer's method.[6, 7] Their BG material catalyzed the 4e− ORR pathway and showed good tolerance to CO poisoning. Note that Hummer's method has become subject to criticism as it can deposit significant amounts of transition metal impurities in the material[8, 9] which cannot be removed using typical wet-chemical purification methods.[10]

In the same vein, Xu et al.[11] and Jiao et al.[12] synthesized NG and BG using Hummer's method. Both groups report that NG and BG are efficient ORR catalysts, showing similarly high ORR activity in their experiments and corresponding calculations. Further experimental work is summarized in a 2016 review by Agnoli and Favaro.
[17] They found O2 adsorption to occur via an open-shell end-on intermediate using a molecular flake model system and the B3LYP functional. Ferrighi et al. proposed the formation of stable B−O3 bulk oxides on BG which they hypothesize to be the first step in the ORR mechanism on BG.[19] They, however, did not detail further reaction steps. Ferrighi et al. used a molecular flake model and the B3LYP functional as well as periodic surface models and the PBE functional in their study. Contrarily, Wang and co-workers recently identified a cluster of two B dopants in para arrangement to enable the associative 4e− ORR pathway, including energetically favorable O2 adsorption.[14] They used a periodic nanoribbon model and the PBE functional with DFT-D3 dispersion correction. Using a molecular flake model and the B3LYP functional, the study by Fazio et al. reported BG to be an active electrocatalyst for the ORR at a fuel-cell cathode.
The choice of hybrid functional was made as a result of benchmarking against a diffusion Monte Carlo data set. Generalized gradient approximation functionals were found to underestimate η_TCM by up to 0.4 eV, thereby indicating much too high catalytic activity. However, it was noted that solvation effects could considerably improve the catalyst activity predictions. To illustrate this effect, we applied two sets of solvation stabilization energy, ΔΔE_solv, data for the ORR intermediates on NG taken from literature sources (Reda et al.[23] and Yu et al.[21]) and found η_TCM to be reduced by up to 0.5 V. However, the published ΔΔE_solv data sets were calculated in different ways and disagreed significantly, leading to different η_TCM estimates depending on the choice of ΔΔE_solv data set.

The accurate hybrid DFT approach was also applied to BG with similar results: a η_TCM estimate above 1.0 V vs. SHE, indicating catalytic inactivity.[30] This result is in stark contrast to other more optimistic studies which, importantly, used functionals such as PBE and B3LYP as well as molecular flake models which were shown to produce unreliable adsorption free energy results.[29] However, the high η_TCM prediction for BG did not include any solvation effects. Informed by the report from Fazio et al. on the significant impact of ΔΔE_solv on the free energy trends and by our own observations of the same for NG, the present study was conceived to systematically investigate the effect of an increasing number of explicit water molecules on the stability of the ORR intermediates *O, *OH, and *OOH, as represented by the ΔΔE_solv descriptor. Simulations were performed with the 32-atom BG model system used previously[30] in contact with up to 4 layers (32 molecules) of water. Both local minimization calculations as well as regular and coarse-grained classical dynamics simulations were performed using atomic forces estimated from density functional theory (DFT) calculations to obtain statistical estimates of ΔΔE_solv. Additionally, local optimization calculations were performed on structures re-sampled from these data sets. In short, none of the data sets generated in this way yielded converged and trustworthy ΔΔE_solv results.

Technical aspects of the simulations are discussed in detail and the conclusion is that a much larger number of water molecules needs to be included in the calculations to provide reliable estimates of the solvation effect. The present model system includes up to 139 atoms and the dynamics simulations span up to 100 ps, thereby already straining the computational resources. Moreover, the ΔΔE_solv estimates are highly system dependent and would need to be re-established for every new (electro-)catalyst model. Hence, we highlight the need for hybrid simulation methods that enable simulations of systems including hundreds or even thousands of water molecules using a lower level of theory while retaining electronic-structure-level accuracy in the surface region where reactions occur.

2 Methodology

2.1 Calculation of the solvation stabilization energy

The solvation stabilization energy ΔΔE_solv is estimated as the difference between the adsorption energy calculated for models in contact with explicit solvent (ΔE_ads^(with solvent)) and models without inclusion of any solvent molecules (ΔE_ads^(without solvent)):

ΔΔE_solv = ΔE_ads^(with solvent) − ΔE_ads^(without solvent),    (1)

where

ΔE_ads^(with solvent) = E_tot^(BG+adatom with solvent) − E_tot^(BG with solvent) − E_tot^(adatom reference)    (2)

and

ΔE_ads^(without solvent) = E_tot^(BG+adatom without solvent) − E_tot^(BG without solvent) − E_tot^(adatom reference).    (3)

Here, E_tot^(adatom reference) is the total energy of any combination of gas-phase molecules used to calculate the adsorption energy. For example, E_tot^(adatom reference) may be expanded to E_tot^(H2O) − E_tot^(H2) to serve as the reference energy for an O adatom. Because these values are always gas-phase reference energy values, also in the case of the solvated model systems, they cancel out in the ΔΔE_solv calculation. Therefore, equation (1) reduces to:

ΔΔE_solv = E_tot^(BG+adatom with solvent) − E_tot^(BG with solvent) − (E_tot^(BG+adatom without solvent) − E_tot^(BG without solvent)).    (4)
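Equation (4) is a simple combination of four single-point total energies, as the following minimal transcription shows. The numerical values are hypothetical placeholders illustrating the bookkeeping, not results from the data set.

```python
# Direct transcription of equation (4): ddE_solv from four total energies.
def ddE_solv(e_bg_ad_solv, e_bg_solv, e_bg_ad, e_bg):
    """Eq. (4): (E_BG+adatom,solv - E_BG,solv) - (E_BG+adatom - E_BG)."""
    return (e_bg_ad_solv - e_bg_solv) - (e_bg_ad - e_bg)

# Hypothetical single-point totals in eV:
print(ddE_solv(e_bg_ad_solv=-415.30, e_bg_solv=-409.60,
               e_bg_ad=-296.00, e_bg=-290.43))  # -> -0.13 eV
```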
The coverage factor k is a measure for the number of independent samples taken into account during calculation of the standard deviation. For the 95 % CI used in this work, the coverage factors k are given by Grossfield et al.[31] as follows:

Table 1: Coverage factors k as a function of the number of independent samples n. Reproduced from Grossfield et al.[31]

n | k
6 | 2.57
11 | 2.23
16 | 2.13
21 | 2.09
26 | 2.06
51 | 2.01
101 | 1.98

3 Computational Details

3.1 BG sheet model system

The model system used in this study is a 32-atomic graphene sheet with one B dopant atom, analogous to our previous works on NG and BG.[29, 30] To study the influence of solvation on the ORR intermediates *O, *OH, and *OOH, 1-4 layers of water molecules with 8 water molecules per layer are added to the model. The water configurations built initially were inspired by the configurations presented by Reda et al. in a study of the solvation of ORR intermediates on NG.[23] The group showed that the maximum H2O coverage per layer for NG is Θ_H2O = 2/3 monolayers, which the present results confirm. Hence, a maximum of 24 atoms (8 molecules) can be placed per layer before lateral crowding destabilizes the water configuration and formation of a new layer begins.
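As a worked illustration of the uncertainty estimate above, the following sketch computes a 95 % confidence interval using the coverage factors from Table 1. The linear interpolation between tabulated n values and the sample values themselves are our own assumptions, not prescribed by the source.

```python
# Sketch of the 95 % confidence interval estimate:
# CI = mean +/- k(n) * s / sqrt(n), with k(n) interpolated from Table 1.
import numpy as np

K_TABLE = {6: 2.57, 11: 2.23, 16: 2.13, 21: 2.09, 26: 2.06, 51: 2.01, 101: 1.98}

def coverage_k(n):
    ns = sorted(K_TABLE)
    return float(np.interp(n, ns, [K_TABLE[m] for m in ns]))

def ci95(samples):
    a = np.asarray(samples, dtype=float)
    n = len(a)
    half = coverage_k(n) * a.std(ddof=1) / np.sqrt(n)
    return a.mean(), half

vals = [-0.04, -0.11, 0.15, -0.07, -0.20, -0.06]  # hypothetical ddE_solv / eV
mean, half = ci95(vals)
print(f"{mean:+.2f} +/- {half:.2f} eV (95 % CI, n={len(vals)})")
```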
Table 2: Summary of the calculated ΔΔE_solv results based on the one-shot minimization data set. † and ‡ indicate values where water molecules are placed only on one side or on both sides of the BG sheet model, respectively.

# of water molecules | Arrangement | ΔΔE_solv / eV
4 H2O | on side with *O | 0.19 †
8 H2O | on side with *O | -0.04 †
16 H2O | on side with *O | -0.11 †
16 H2O | 8 on both sides | 0.15 ‡
24 H2O | 16 on side with *O, 8 on the other side | -0.07 ‡
24 H2O | on side with *O | -0.20 †
32 H2O | 24 on side with *O, 8 on the other side | -0.04 ‡
32 H2O | on side with *O | -0.06 †
32 H2O | 16 on side with *O, 16 on the other side | -0.06 ‡
∆∆E solv results either. Finally, the bond length constraint is found to change the ∆∆E solv results compared to the flexible MD data set; however, since there are currently no converged reference values for ∆∆E solv , it is impossible to assess if the changes introduced by the Rattle-type constraint are detrimental or not.
shot optimization data sets, for example the characteristic dip of ∆∆E solv for the *O adatom at 24 water molecules. This result further indicates that the bond length constraint used to obtain the constrained MD data set is likely altering the trends in a significant way. The previously discussed trend regarding error bar spans increasing with increasing number of molecules is distinctly present both for the *O and the *OH adspecies. Ultimately, this data set does not provide fundamentally different insights into the ∆∆E solv trends compared to the preceding analyses.
Simulation cells varied in size between simulations with different numbers of included water molecules. Because a PAW code was used and PAWs always fill the entire simulation cell, the c cell parameter was minimized on a case-by-case basis to minimize the computational effort. Increasing or decreasing the box size also changes the total energy in a small way, hence it is important that all energy values used to calculate ΔΔE_solv in equation (4) use the same cell dimensions. Consistency in this regard was ensured by generating the reference systems without solvent by removing water molecules from the original system; the reference systems are given alongside the solvated parent models in the data set available under DOI:10.5281/zenodo.7684918. Furthermore, table S2 summarizes the total energy results for various reference systems without solvent from the MD data sets. The differences between systems are, despite differences in the c cell parameter, < 0.01 eV. Hence, the contributions from inconsistent cell dimensions, even if left untreated, are unlikely to distort results enough to account for the erratic results in this work.
To investigate if energy minimization of the atomic configuration of the reference systems creates a bias, figure S11 compares ΔΔE_solv results from the one-shot optimization data set where the reference systems without solvent were either minimized or where the reference energy contributions E_tot^(BG+adatom without solvent) and E_tot^(BG without solvent) were obtained from single-point total energy calculations. Results from this test show that the overall trends are identical. However, ΔΔE_solv for the adspecies in contact with 16, 24, and 32 water molecules are ca. 0.2 eV more negative when obtained from single-point energy calculations based on the formerly-solvated atomic configurations. This result is unsurprising because the reference systems without water molecules can be assumed to be in a slightly unfavorable configuration when not allowed to relax under the new environmental conditions.
Figure S12 shows that the ΔΔE_solv results for *OH and *OOH are somewhat comparable to the resampled data set, which is most closely related to this test, in terms of relative trends but less so in terms of absolute values. However, the *O intermediate shows significantly more negative ΔΔE_solv results.
6 Conclusion

Density functional theory-driven minimization calculations and classical molecular dynamics simulations were used to obtain the solvation stabilization energy, ΔΔE_solv, for the oxygen reduction reaction intermediates *O, *OH, and *OOH adsorbed on a boron-doped graphene sheet in contact with 8, 16, 24, and 32 molecules of water. The goal of this study was to apply the obtained ΔΔE_solv values to accurate hybrid DFT adsorption energy results for the ORR intermediates to refine potential-dependent free-energy predictions by including the influence of solvation. Although 4 different data sets were obtained that sampled ΔΔE_solv from the model systems in different ways using static and dynamic calculations, no converged ΔΔE_solv results were obtained.
Table S1: Thermochemical overpotentials η_TCM obtained for the 32-atomic BG model with various density functionals. * Largest and smallest possible η_TCM obtained using standard deviations of the adsorption free energy values of the ORR intermediates, based on Bayesian error estimation using an ensemble of 2000 functionals.

Density Functional | η_TCM / V
HSE06 | 1.06
PBE0 | 1.02
B3LYP | 0.93
PBE | 0.68
SCAN | 0.61
TPSS | 0.52
BEEF-vdW | 0.44 (0.26-0.80)*

HSE06 and PBE0, which constitute the most reliable result according to benchmarking,[29] perform similarly and give the highest η_TCM out of all tested functionals.
Table S2: Summary of total energy values of the reference systems without water molecules used to calculate ΔΔE_solv. The reference systems are from the resampled data set.

System | # of H2O | E_tot / eV
BG sheet (clean) | 8 | -290.425863
BG sheet (clean) | 16 | -290.425863
BG sheet (clean) | 24 | -290.429015
BG sheet (clean) | 32 | -290.426769
BG-O* | 8 | -295.994912
BG-O* | 16 | -295.994912
BG-O* | 24 | -295.998699
BG-O* | 32 | -295.996450
BG-OH* | 8 | -300.638397
BG-OH* | 16 | -300.638397
BG-OH* | 24 | -300.638397
BG-OOH* | 8 | -304.701104
BG-OOH* | 8 | -304.701104
BG-OOH* | 8 | -304.701104
It is tested for the one-shot minimization data set whether not minimizing the reference systems, i.e., obtaining the reference energy values from single-point calculations on the formerly-solvated systems where water molecules have been removed, makes a difference. Results show that the differences are minimal at best. The relative trend does not change at all, but absolute values are slightly more negative in the case of the single-point reference calculations compared to the minimized reference calculations for 16, 24, and 32 water molecules.

[Figure: ΔΔE_solv / eV versus number of water molecules (4-32), comparing ΔΔE_solv obtained from the 'vacuum' reference with ΔΔE_solv obtained from the 'vac-sp' reference, each for H2O only on the side of *O and for H2O on both sides.]
Acknowledgments. This work was supported in part by the Icelandic
Wang, H., Maiyalagan, T. & Wang, X. Review on recent progress in nitrogen-doped graphene: Synthesis, characterization, and its potential applications. ACS Catal. 2 (5), 781-794 (2012). https://doi.org/10.1021/cs200652y
Zhang, J., Xia, Z. & Dai, L. Carbon-based electrocatalysts for advanced energy conversion and storage. Sci. Adv. 1 (7) (2015). https://doi.org/10.1126/sciadv.1500564
Agnoli, S. & Favaro, M. Doping graphene with boron: a review of synthesis methods, physicochemical characterization, and emerging applications. J. Mater. Chem. A 4 (14), 5002-5025 (2016). https://doi.org/10.1039/C5TA10599D
Qu, L., Liu, Y., Baek, J.-B. & Dai, L. Nitrogen-doped graphene as efficient metal-free electrocatalyst for oxygen reduction in fuel cells. ACS Nano 4 (3), 1321-1326 (2010).
https://doi.org/10.1016/j.ijhydene.2013.12.079
Jiao, Y., Zheng, Y., Jaroniec, M. & Qiao, S. Z. Origin of the electrocatalytic oxygen reduction activity of graphene-based catalysts: A roadmap to achieve the best performance. J. Am. Chem. Soc. 136 (11), 4394-4403 (2014). https://doi.org/10.1021/ja500432h
Nørskov, J. K. et al. Origin of the overpotential for oxygen reduction at a fuel-cell cathode. J. Phys. Chem. B 108 (46), 17886-17892 (2004). https://doi.org/10.1021/jp047349j
Wang, L. et al. Potential application of novel boron-doped graphene nanoribbon as oxygen reduction reaction catalyst. J. Phys. Chem. C 120 (31), 17427-17434 (2016). https://doi.org/10.1021/acs.jpcc.6b04639
Grimme, S., Antony, J., Ehrlich, S. & Krieg, H. A consistent and accurate ab initio parametrization of density functional dispersion correction (DFT-D) for the 94 elements H-Pu. J. Chem. Phys. 132 (15), 154104 (2010). https://doi.org/10.1063/1.3382344
Grimme, S., Ehrlich, S. & Goerigk, L. Effect of the damping function in dispersion corrected density functional theory. J. Comput. Chem. 32 (7), 1456-1465 (2011). https://doi.org/10.1002/jcc.21759
Fazio, G., Ferrighi, L. & Valentin, C. D. Boron-doped graphene as active electrocatalyst for oxygen reduction reaction at a fuel-cell cathode. J. Catal. 318, 203-210 (2014). https://doi.org/10.1016/J.JCAT.2014.07.024
Vielstich, W., Lamm, A. & Gasteiger, H. (eds) Handbook of Fuel Cells: Fundamentals, Technology and Applications (Wiley, Chichester, 2003).
Ferrighi, L., Datteo, M. & Di Valentin, C. Boosting graphene reactivity with oxygen by boron doping: Density functional theory modeling of the reaction path. J. Phys. Chem. C 118 (1), 223-230 (2014). https://doi.org/10.1021/jp410966r
Okamoto, Y. First-principles molecular dynamics simulation of O2 reduction on nitrogen-doped carbon. Appl. Surf. Sci. 256 (1), 335-341 (2009). https://doi.org/10.1016/j.apsusc.2009.08.027
Yu, L., Pan, X., Cao, X., Hu, P. & Bao, X. Oxygen reduction reaction mechanism on nitrogen-doped graphene: A density functional theory study. J. Catal. 282, 183-190 (2011). https://doi.org/10.1016/J.JCAT.2011.06.015
Chai, G.-L., Hou, Z., Shu, D.-J., Ikeda, T. & Terakura, K. Active sites and mechanisms for oxygen reduction reaction on nitrogen-doped carbon alloy catalysts: Stone-Wales defect and curvature effect. J. Am. Chem. Soc. 136 (39), 13629-13640 (2014). https://doi.org/10.1021/ja502646c
Reda, M., Hansen, H. A. & Vegge, T. DFT study of stabilization effects on N-doped graphene for ORR catalysis. Catal. Today 312, 118-125 (2018). https://doi.org/10.1016/J.CATTOD.2018.02.015
Cramer, C. J. & Truhlar, D. G. Implicit solvation models: Equilibria, structure, spectra, and dynamics. Chem. Rev. 99 (8), 2161-2200 (1999). https://doi.org/10.1021/cr960149m
Skyner, R. E., McDonagh, J. L., Groom, C. R., van Mourik, T. & Mitchell, J. B. O. A review of methods for the calculation of solution free energies and the modelling of systems in solution. Phys. Chem. Chem. Phys. 17 (9), 6174-6191 (2015). https://doi.org/10.1039/C5CP00288E
Zhang, J., Zhang, H., Wu, T., Wang, Q. & van der Spoel, D. Comparison of Implicit and Explicit Solvent Models for the Calculation of Solvation Free Energy in Organic Solvents. J. Chem. Theory Comput. 13 (3), 1034-1043 (2017). https://doi.org/10.1021/acs.jctc.7b00169
Gray, C. M., Saravanan, K., Wang, G. & Keith, J. A. Quantifying solvation energies at solid/liquid interfaces using continuum solvation methods. Mol. Simul. 43 (5-6), 420-427 (2017). https://doi.org/10.1080/08927022.2016.1273525
Heenen, H. H., Gauthier, J. A., Kristoffersen, H. H., Ludwig, T. & Chan, K. Solvation at metal/water interfaces: An ab initio molecular dynamics benchmark of common computational approaches. J. Chem. Phys. 152 (14), 144703 (2020). https://doi.org/10.1063/1.5144912
Kirchhoff, B. et al. Assessment of the accuracy of density functionals for calculating oxygen reduction reaction on nitrogen-doped graphene. J. Chem. Theory Comput. 17, 6405-6415 (2021). https://doi.org/10.1021/acs.jctc.1c00377
Kirchhoff, B. Computational studies of oxygen reduction catalysts (2021).
Grossfield, A. et al. Best practices for quantification of uncertainty and sampling quality in molecular simulations. Living J. Comp. Mol. Sci. 1 (2019). https://doi.org/10.33011/livecoms.1.1.5067
Kresse, G. & Hafner, J. Ab initio molecular dynamics for liquid metals. Phys. Rev. B 47 (1), 558-561 (1993). https://doi.org/10.1103/PhysRevB.47.558
Kresse, G. & Hafner, J. Ab initio molecular-dynamics simulation of the liquid-metal-amorphous-semiconductor transition in germanium. Phys. Rev. B 49 (20), 14251-14269 (1994). https://doi.org/10.1103/PhysRevB.49.14251
Kresse, G. & Furthmüller, J. Efficiency of ab-initio total energy calculations for metals and semiconductors using a plane-wave basis set. Comput. Mater. Sci. 6 (1), 15-50 (1996). https://doi.org/10.1016/0927-0256(96)00008-0
Kresse, G. & Furthmüller, J. Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set. Phys. Rev. B 54 (16), 11169-11186 (1996). https://doi.org/10.1103/PhysRevB.54.11169
Hammer, B., Hansen, L. B. & Nørskov, J. K. Improved adsorption energetics within density-functional theory using revised Perdew-Burke-Ernzerhof functionals. Phys. Rev. B 59, 7413-7421 (1999).
Forster-Tonigold, K. & Groß, A. Dispersion corrected RPBE studies of liquid water. J. Chem. Phys. 141, 064501 (2014). https://doi.org/10.1063/1.4892400
Monkhorst, H. J. & Pack, J. D. Special points for Brillouin-zone integrations. Phys. Rev. B 13 (12), 5188-5192 (1976). https://doi.org/10.1103/PhysRevB.13.5188
Blöchl, P. E. Projector augmented-wave method. Phys. Rev. B 50 (24), 17953-17979 (1994). https://doi.org/10.1103/PhysRevB.50.17953
Kresse, G. & Joubert, D. From ultrasoft pseudopotentials to the projector augmented-wave method. Phys. Rev. B 59 (3), 1758-1775 (1999). https://doi.org/10.1103/PhysRevB.59.1758
Vanden-Eijnden, E. & Ciccotti, G. Second-order integrators for Langevin equations with holonomic constraints. Chem. Phys. Lett. 429 (1), 310-316 (2006). https://doi.org/10.1016/j.cplett.2006.07.086
Andersen, H. C. Rattle: A "velocity" version of the Shake algorithm for molecular dynamics calculations. J. Comput. Phys. 52 (1), 24-34 (1983). https://doi.org/10.1016/0021-9991(83)90014-1
Lankhorst, D., Schriever, J. & Leyte, J. C. Determination of the Rotational Correlation Time of Water by Proton NMR Relaxation in H2(17)O and Some Related Results. Ber. Bunsenges. Phys. Chem. 86 (3), 215-221 (1982). https://doi.org/10.1002/BBPC.19820860308
Hsing, C. R., Wei, C. M. & Chou, M. Y. Quantum Monte Carlo investigations of adsorption energetics on graphene. J. Phys. Condens. Matter 24 (39), 395002 (2012). https://doi.org/10.1088/0953-8984/24/39/395002
Janesko, B. G., Barone, V. & Brothers, E. N. Accurate surface chemistry beyond the generalized gradient approximation: Illustrations for graphene adatoms. J. Chem. Theory Comput. 9 (11), 4853-4859 (2013). https://doi.org/10.1021/ct400736w
Sakong, S., Forster-Tonigold, K. & Groß, A. The structure of water at a Pt(111) electrode and the potential of zero charge studied from first principles. J. Chem. Phys. 144, 194701 (2016). https://doi.org/10.1063/1.4948638
Örn Jónsson, E., Dohn, A. O. & Jónsson, H. Polarizable embedding with a transferable H2O potential function I: Formulation and tests on dimer. J. Chem. Theory Comput. 15, 6562-6577 (2019). https://doi.org/10.1021/acs.jctc.9b00777
Dohn, A. O., Örn Jónsson, E. & Jónsson, H. Polarizable embedding with a transferable H2O potential function II: Application to (H2O)n clusters and liquid water. J. Chem. Theory Comput. 15, 6578-6587 (2019). https://doi.org/10.1021/acs.jctc.9b00778
Kirchhoff, B., Örn Jónsson, E., Dohn, A. O., Jacob, T. & Jónsson, H. Elastic collision based dynamic partitioning scheme for hybrid simulations. J. Chem. Theory Comput. 17, 5863-5875 (2021). https://doi.org/10.1021/acs.jctc.1c00522
Garcia-Ratés, M., García-Muelas, R. & López, N. Solvation effects on methanol decomposition on Pd(111), Pt(111), and Ru(0001). J. Phys. Chem. C 121 (25), 13803-13809 (2017). https://doi.org/10.1021/acs.jpcc.7b05545
Van den Bossche, M., Skúlason, E., Rose-Petruck, C. & Jónsson, H. Assessment of Constant-Potential Implicit Solvation Calculations of Electrochemical Energy Barriers for H2 Evolution on Pt. J. Phys. Chem. C 123 (7), 4116-4124 (2019). https://doi.org/10.1021/acs.jpcc.8b10046
| [] |
[
"Evidence against superconductivity in flux trapping experiments on hydrides under high pressure & On magnetic field screening and expulsion in hydride superconductors",
"Evidence against superconductivity in flux trapping experiments on hydrides under high pressure & On magnetic field screening and expulsion in hydride superconductors",
"Evidence against superconductivity in flux trapping experiments on hydrides under high pressure & On magnetic field screening and expulsion in hydride superconductors",
"Evidence against superconductivity in flux trapping experiments on hydrides under high pressure & On magnetic field screening and expulsion in hydride superconductors"
] | [
"J E Hirsch \nDepartment of Physics\nUniversity of California\nSan Diego, La Jolla92093-0319CA\n",
"F Marsiglio \nDepartment of Physics\nUniversity of Alberta\nT6G 2E1EdmontonAlbertaCanada\n",
"J E Hirsch \nDepartment of Physics\nUniversity of California\nSan Diego, La Jolla92093-0319CA\n",
"F Marsiglio \nDepartment of Physics\nUniversity of Alberta\nT6G 2E1EdmontonAlbertaCanada\n"
] | [
"Department of Physics\nUniversity of California\nSan Diego, La Jolla92093-0319CA",
"Department of Physics\nUniversity of Alberta\nT6G 2E1EdmontonAlbertaCanada",
"Department of Physics\nUniversity of California\nSan Diego, La Jolla92093-0319CA",
"Department of Physics\nUniversity of Alberta\nT6G 2E1EdmontonAlbertaCanada"
] | [] | It has recently been reported that hydrogen-rich materials under high pressure trap magnetic flux, a tell-tale signature of superconductivity[1]. Here we point out that under the protocol used in these experiments the measured results indicate that the materials don t trap magnetic flux. Instead, the measured results are either experimental artifacts or originate in magnetic properties of the sample or its environment unrelated to superconductivity, Together with other experimental evidence analyzed earlier, this clearly indicates that these materials are not superconductors. In a second part, we discuss magnetic field screening and expulsion.PACS numbers: | 10.1007/s10948-022-06365-8 10.1007/s10948-023-06569-6 | [
"https://export.arxiv.org/pdf/2207.01541v4.pdf"
] | 250,264,208 | 2207.01541 | 41936928f9fa30f032742e98fd6bfd67ae9260e7 |
Evidence against superconductivity in flux trapping experiments on hydrides under high pressure & On magnetic field screening and expulsion in hydride superconductors
J E Hirsch
Department of Physics
University of California
San Diego, La Jolla92093-0319CA
F Marsiglio
Department of Physics
University of Alberta
T6G 2E1EdmontonAlbertaCanada
Evidence against superconductivity in flux trapping experiments on hydrides under high pressure & On magnetic field screening and expulsion in hydride superconductors
PACS numbers:
It has recently been reported that hydrogen-rich materials under high pressure trap magnetic flux, a tell-tale signature of superconductivity[1]. Here we point out that under the protocol used in these experiments the measured results indicate that the materials don't trap magnetic flux. Instead, the measured results are either experimental artifacts or originate in magnetic properties of the sample or its environment unrelated to superconductivity. Together with other experimental evidence analyzed earlier, this clearly indicates that these materials are not superconductors. In a second part, we discuss magnetic field screening and expulsion.
PACS numbers:
I. INTRODUCTION
Following the paper "Conventional superconductivity at 203 kelvin at high pressures in the sulfur hydride system", published in 2015 [2], several other hydrogen rich materials under high pressure have been reported in recent years to be high-temperature superconductors based on observed drops in resistance versus temperature [3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19]. Many more such materials have been determined to be conventional high-temperature superconductors based on theoretical evidence [20][21][22][23][24][25]. However, little magnetic evidence has so far been provided in support of the claims of superconductivity [2,[26][27][28][29], and what evidence does exist has been strongly called into question [30][31][32][33].
In particular, these materials show no trace of magnetic flux expulsion, i.e. the Meissner effect, when cooled in the presence of a magnetic field [2,26,27]. They also apparently are able to screen very large applied magnetic fields [28]. This has been interpreted as indicating that the materials are "hard superconductors" with very strong pinning centers that prevent both flux penetration and flux expulsion [26][27][28]. We have argued that if that is the case the materials should also trap large magnetic fields [34], and that observation of flux trapping would provide definitive evidence that the materials can sustain persistent currents, hence are indeed superconductors [34].
Experiments aimed at detecting flux trapping were recently performed by Minkov et al. and the results analyzed and reported in Ref. [1]. Ref. [1] interprets the measured data as clearly indicating that the materials are superconductors. Instead, we analyze here the information presented in Ref. [1] and conclude that it proves the absence of superconductivity in these materials.
II. EXPERIMENTAL PROTOCOLS
The flux trapping experiments on sulfur hydride (H3S) [1] were performed under zero-field-cooling (ZFC) conditions for 13 values of applied field ranging from 0 to 6 T, and under field cooling (FC) conditions for one field value only, 4 T. The results for both protocols for field 4 T were reported to agree [1]. In the ZFC protocol, the sample was cooled to low temperatures in zero magnetic field, a magnetic field was then applied and gradually increased to reach value H_M, then after 1 hour the external field was gradually decreased to zero, then the resulting magnetic moment was measured with a SQUID magnetometer. Fig. 1 shows the experimental data and a theoretical fit to the data given in Ref. [1]. Note in particular that the measured magnetic moment rises linearly from zero when the applied field exceeds the threshold value H_p, both for the experimental data and for the theoretical fit.

FIG. 1: Trapped magnetic moment for H3S at 30 K, from Ref. [1]. The points are experimental data [35], the lines are a fit to the data performed in Ref. [1].
The experimental results were reportedly analyzed in Ref. [1] assuming the Bean model [36] controls the behavior of fields and currents in the material. From the experimental results, Ref. [1] inferred the parameters:

H_p = 0.042 T: threshold value of the applied field at which it begins to penetrate the sample at low temperatures. Assuming demagnetization 1/(1 − N) = 8.5, this implies a lower critical field value H_c1 = 0.36 T.

H* = 0.835 T: minimum applied field that reaches the center of the sample (called "full penetration field"), with assumed sample diameter and height d = 85 µm, h = 2.5 µm.
The measured moment was found to increase with magnetic field H_M up to a maximum value of approximately m_s = 15.9 × 10^−9 A m² for T = 30 K when the applied magnetic field was ∼1.7 T ≡ H_M^sat or larger. Following the Bean model, Ref. [1] concluded that
H_M^sat = 2H* + H_p,    (1)
from which the value of H * was extracted. The theoretical fit performed in Ref. [1] assumed the magnetic moment is given by (with j c the critical current)
m = ∫_r^(d/2) π r′² j_c h dr′ = m_s [1 − (r/(d/2))³],    (2a)

r = r(H_M) = (d/2) [1 − (H_M − H_p)/(2H*)],    (2b)
so that r(H p ) = d/2, r(2H * + H p ) = 0.
III. OUR ANALYSIS
Just as in Ref. [1], we assume the validity of the Bean model. However, we disagree that Eqs. (2a), (2b) used by the authors of [1] are the proper way to calculate the trapped magnetic moment under ZFC conditions. Instead, we argue that Eq. (2a) is the proper way to calculate the trapped moment under FC conditions, provided Eq. (2b) is replaced by

r = r(H_M) = (d/2) [1 − (H_M − H_p)/H*]    (3)

for H_M < H* + H_p, and r = 0 for H_M > H* + H_p, with H_p = 0. This is illustrated in the left panels of Fig. 2.
For ZFC conditions instead, the diagrams shown in the right panels of Fig. 2 apply. For that case, the magnetic moment is given by

m = m_s [1 − 2 (r_1/(d/2))³ + (r_2/(d/2))³],    (4)

where, for H_M < H* + H_p,

r_1 = (d/2) [1 − (H_M − H_p)/(2H*)],    (5a)
r_2 = (d/2) [1 − (H_M − H_p)/H*].    (5b)
For H* + H_p < H_M < 2H* + H_p, r_1 is given by Eq. (5a) and r_2 = 0, and for H_M > 2H* + H_p, r_1 = r_2 = 0. Fig. 3 shows what these expressions predict for the trapped magnetic moment versus magnetization field H_M for the parameters assumed in Ref. [1]. Most importantly, the moment rises from zero linearly under FC conditions and quadratically for ZFC conditions. As seen in the inset, for small fields the ZFC moment is very much smaller than the FC moment and in stark disagreement with the experimental observations. The experimental results of Ref. [1] are actually well fit by our FC calculation for all values of the magnetization field H_M if we take the value of H* to be twice as large as inferred in Ref. [1], i.e. H* = 1.67 T. This is shown in Fig. 4. We conclude that this agreement is accidental, since the experimental protocol was ZFC for all but one experimental point [1].
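As a numerical check on the argument above, the sketch below evaluates Eqs. (2a)-(5b) for both protocols using the Bean-model parameters quoted from Ref. [1]; the implementation details (normalization of r by d/2, clipping of the piecewise branches) are our own.

```python
# Sketch of the FC vs. ZFC trapped moment in the Bean model, with
# H_p = 0.042 T, H* = 0.835 T, m_s = 15.9e-9 A m^2 as inferred in Ref. [1].
import numpy as np

H_p, H_star, m_s = 0.042, 0.835, 15.9e-9

def m_fc(HM):
    # Eqs. (2a) + (3) with H_p = 0 for field cooling.
    r = np.clip(1.0 - HM / H_star, 0.0, 1.0)      # r normalized by d/2
    return m_s * (1.0 - r**3)

def m_zfc(HM):
    # Eqs. (4), (5a), (5b); clipping implements the piecewise conditions.
    r1 = np.clip(1.0 - (HM - H_p) / (2.0 * H_star), 0.0, 1.0)
    r2 = np.clip(1.0 - (HM - H_p) / H_star, 0.0, 1.0)
    return np.where(HM > H_p, m_s * (1.0 - 2.0 * r1**3 + r2**3), 0.0)

HM = np.array([0.05, 0.1, 0.3, 0.9, 1.8, 4.0])
for h, fc, zfc in zip(HM, m_fc(HM), m_zfc(HM)):
    print(f"H_M = {h:4.2f} T: m_FC = {fc:.2e}, m_ZFC = {zfc:.2e} A m^2")
# Near the threshold the FC moment grows linearly in (H_M - H_p) while the
# ZFC moment grows quadratically, the distinction used in the text.
```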
In order to try to fit the low field ZFC experimental data to the ZFC calculation we would have to take a much smaller value of H * . Fig. 5 shows the results for H * = 0.2T , chosen to fit as well as possible the low field data. In addition to not fitting the low field data very well, the higher field data deviate strongly from the theoretical ZFC curve. For this assumed value of H * the trapped moment saturates for H sat M = 0.44T (Eq. (1)), in clear contradiction with the experimental data that show no saturation till H M > 1T .
IV. DISCUSSION
Is it possible that under the ZFC protocol of the experiment with the field H M applied for 1 hour, the field could penetrate sufficiently so as to mimic the FC protocol? It is not possible, because Ref. [1] also measured the rate of flux creep and there was negligible flux creep over a 1 hour period even at temperatures as high as 165K. Also, according to the NRS experiment [28] the flux didn't penetrate over times substantially larger than 1 hour. Therefore, the experimental results of Ref. [1] shown in Fig. 1 of this paper are incompatible with the interpretation that the magnetic moment observed originates in flux trapping. If the magnetic moment had originated in flux trapping, it would rise quadratically from zero as function of the magnetization field H M under the ZFC conditions of the experiment, not linearly as observed. Therefore, the experiment indicates that there is no flux trapping in this material, H 3 S. As argued in Refs. [32,34], if the material doesn't trap flux, and in addition it does not expel flux, then the material is not a superconductor.
The question then arises, what is the origin of the magnetic moments measured in Ref. [1] shown in Fig. 1? We suggest they are either experimental artifacts associated with the experimental apparatus used (SQUID magnetometer) or magnetic moments of localized spins originating either in the sample or in the diamond anvil cell environment (gasket, etc). It is also possible that the measurements could signal unexpected collective magnetic behavior of hydrogen-rich materials under high pressure, as suggested in Ref. [37].
To confirm the results of our analysis we suggest that it would be of interest to repeat the measurements of Ref. [1] under FC conditions. We expect that the results will be similar to the results under ZFC conditions, in contradiction with what is expected from trapped flux shown in Figs. 3 and 4, namely a marked difference between FC and ZFC behavior, and consistent with the hypothesis that the origin of the magnetic moments measured is localized spins rather than delocalized supercurrents. We suggest that it would also be informative to perform these experiments using FC and ZFC protocols for a known hard superconductor and verify the expected qualitatively different behavior shown in Figs. 3 and 4.
Finally, we would like to point out that the interpretation of the measurements of magnetic moment of Ref. [1] as originating in flux trapping with H * ∼ 0.8T appear to be in contradiction with the magnetic moment measurements presented in Ref. [26]. For example, according to the former (see our Fig. 2 top right panel) for an applied field H ∼ H * /4 = 0.2T the magnetic field should still be excluded from more than 75% of the sample even at temperature T ∼ 100K (see Fig. 1c of [1]). Instead, the magnetic moment measurements shown in Fig. 3a of Ref. [26] indicate that the diamagnetism has essentially disappeared at that point.
Acknowledgments
FM was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) and by an MIF from the Province of Alberta. We are grateful to the authors of Ref. [1] and particularly V. Minkov for clarifying information.
2022). What follows is a new manuscript, under consideration for publication in Nature Communications as "Matters
Arising", submitted to arxiv on 11/14/2022, that arxiv decided should be combined with the above manuscript to allow posting.
On magnetic field screening and expulsion in hydride superconductors
Ref. [1] presents evidence for magnetic field screening and "subtle" evidence for magnetic field expulsion in hydrides under high pressure, which is argued to support the claim that these materials are high temperature superconductors. We point out here that data presented in Ref. [1] appear to be inconsistent (i) with one another, (ii) with other measurements reported by the same authors on the same samples [2,3], and (iii) with expected behavior of standard superconductors. This suggests that these magnetic phenomena reported for these materials are not associated with superconductivity, undermining the claim that these materials are high temperature superconductors.
In 2015, Eremets and coworkers reported high temperature superconductivity in sulfur hydride (hereafter H3S) under pressure [2], starting the hydride superconductivity epoch. Since then to the present, considerable evidence for superconductivity in various pressurized hydrides has been presented based on resistance measurements [4], however little magnetic evidence for superconductivity has been reported so far. In their original paper [2] Eremets and coworkers presented some magnetic evidence based on SQUID measurements. After a long hiatus, new evidence was presented this year in Nat. Comm. 13, 3194 (2022) [1]. That evidence is the subject of this comment. We focus here on the magnetic measurements reported for sulfur hydride (H3S), but exactly the same considerations apply to the same measurements reported for lanthanum hydride (LaH10) in Ref. [1], the only other hydride material for which magnetic measurements have been reported to date.

Figure 6 left and center panels reproduce Fig. 3a and Fig. 3e of Ref. [1]. To the best of our understanding from carefully reading the paper, both panels show in their light-blue and blue curves respectively the same quantity: magnetic moment versus magnetic field, for the same sample at the same temperature (100 K) and same pressure (155 GPa). The middle blue curve in the center panel is the virgin curve, which starts (when properly shifted vertically, as shown in Fig. S10 of [1]) with zero moment for zero applied field. It should be the same as the light blue curve labeled 100 K on the left panel. Yet the curves look very different. The left panel curve shows an upturn for magnetic field beyond 95 mT while the center panel curve shows no upturn. When plotting both curves on the same scale in the right panel in Fig. 6 it is apparent that they are very different in magnitude and shape.
It should also be noted that the rapid decrease in the magnitude of the magnetic moments beyond the minimum points of the curves shown in Fig. 6 left panel is inconsistent with what is expected for a type II superconductor with very large upper critical field [5], estimated in Ref. [1] to be H c2 (T = 0) ∼ 97T . For example, at T = 100K H c2 (T ) should be above 60T . When corrected for demagnetization factor estimated as 1/(1 − N ) ∼ 8.5 in Ref. [1], it implies that the curve labeled T = 100K should evolve smoothly from its value attained at H ∼ 95mT approaching zero at or beyond H c2 (T )(1 − N ) ∼ 7T . This is qualitatively inconsistent with the behavior seen in Fig. 6 left panel that shows that the magnetic moment magnitude has already decreased to less than 15% of its maximum value for a field as small as H ∼ 0.2T ∼ H c2 (T )/35. Furthermore, in the presence of strong pinning, which Ref. [1] claims has to exist in order to explain the absence of flux expulsion in their samples, the decay of the induced diamagnetic moment should be even slower than for an ideal type II superconductor [7,36], hence very much slower than what is shown in Fig. 6 left panel.
We also point out that the magnitude of the diamagnetic moment versus temperature under zero field cooling reported in Ref. [1] Figs. 2e and S1 left middle panel differs by a factor of 4 or more from the same quantity reported in 2015 in Ref. [2] Figs. 4a and extended data Fig. 6c for samples estimated to be of similar size, with the earlier result showing the larger moment. While in field cooling experiments one may expect substantial variations in magnetic moment depending on sample quality, this is not expected to be the case for zero field cooling experiments. Figure 7 shows as a blue curve the magnetic moment versus magnetic field at temperature 100K from the left panel of Fig. 6, i.e. Fig. 3a of Ref. [1], compared with the magnetic moment versus magnetic field for a hysteresis cycle at the same temperature for the same sample at the same pressure reported in Fig. 4a of Ref. [3], that was used to obtain the critical current data shown in Fig. S5 of Ref. [1]. The blue curve on the left panel of Fig. 7 should be the virgin curve for this hysteresis cycle, joining smoothly the green curve, as is universally seen in such measurements for superconductors. One such typical example is shown on the right panel of Fig. 7, from Ref. [7]. It can be seen that the blue curve on the left panel shows no hint of joining the green curve. In other words, these measured results on the same sample for the same temperature and pressure measured in the same laboratory [1,3] are completely incompatible with one another under the assumption that they arise from superconductivity in the sample.
Ref. [1] says that it uses a background subtraction procedure. However, the background signal is not given in Ref. [1], nor is the procedure used clearly explained. Perhaps more information on the data processing that has been performed would help explain some of the anomalies pointed out above. But even with such clarification, we believe that the above analysis indicates that the reported magnetic measurements [1] are inconsistent with the assumption that they originate in superconductivity. Instead, we suggest that they originate in localized magnetic moments associated with the samples, the diamond anvil cell environment, and/or the measuring apparatus.
The signature property of superconductors, one that cannot be mimicked by localized magnetic moments, is the Meissner effect: the ability to expel magnetic fields when cooled in a field (FC). In Ref. [1], the authors claim to find a "subtle Meissner effect in FC measurements at 2 mT" indicated by the light blue curve shown in their Fig. S1 middle left panel. However, when the same data are plotted in Fig. SI1 middle left panel of Ref. [3] without the light blue curve, no evidence for a Meissner effect is seen. While for some standard superconductors with strong pinning the percentage of flux expulsion (Meissner fraction) can be very small for larger fields, it rapidly increases for small fields, as shown e.g. in Refs. [8-11]. The Meissner fraction is expected to depend on the ratio H/Hc1 [12], and for H3S, Hc1 is estimated to be 0.82 T [1], which is more than an order of magnitude larger than the lower critical fields of standard superconductors with high Tc such as cuprates and pnictides. So the field of 2 mT in Fig. S1 of Ref. [1] is equivalent to a field of less than 2 Oe for those other materials, for which a sizable Meissner fraction is found [8-11]. It should also be noted that in Ref. [2] Extended Data Fig. 6(c) the authors plotted the magnetic moment under FC for magnetic fields down to 0.2 mT, showing no evidence for a Meissner effect. Additionally, the Meissner fraction is expected to increase as the thickness of the sample decreases [10,13], and the samples used in these high-pressure experiments are rather thin.
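As a rough check of this field-scaling argument (the cuprate/pnictide Hc1 below is an illustrative assumption of a few tens of mT; only the Hc1 of H3S is taken from Ref. [1]):

```python
# Scale the 2 mT field-cooling field by the ratio of lower critical fields,
# since the Meissner fraction is expected to depend on H/Hc1 [12].
Hc1_H3S = 820.0     # mT, estimated for H3S in Ref. [1] (0.82 T)
Hc1_other = 50.0    # mT, assumed typical scale for cuprates/pnictides

H_equiv = 2.0 * Hc1_other / Hc1_H3S   # equivalent field for the other materials
print(f"{H_equiv:.2f} mT = {H_equiv * 10:.1f} Oe")  # ~1.2 Oe, below the ~2 Oe quoted
```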
Elsewhere we have also called attention to the facts that (i) Fig. SI1 of Ref. [1] lower left panel shows that the ZFC and FC magnetic moment curves for the precursor sample, not expected to be superconducting, also show an unexplained divergence around 200K [14], (ii) the behavior of magnetic moment versus temperature shown in Fig. 6 is incompatible with the claim of Ref. [15], referenced in Ref. [1] in support of superconductivity of sulfur hydride, that a magnetic field as large as 0.68T is excluded from the sample [16], and (iii) ac magnetic susceptibility measurements for sulfur hydride [17] referenced in Ref. [1] as evidence for superconductivity were shown to result from an experimental artifact [18].
Recently, the authors of Ref. [1] also reported measurement of trapped magnetic flux in their samples as evidence for superconductivity [19]. We pointed out [20] that the reported linear behavior of trapped moment versus field in zero field cooling experiments [19] is inconsistent with the expected behavior of hard superconductors [6]. In addition, the magnetic moment measurements reported in Ref. [1] indicate that at low temperatures magnetic fields of up to 95mT are excluded from the sample (see Fig. 6 left panel here), which is inconsistent with the reported finding in Ref. [19] that applied fields as small as 50mT penetrate and are trapped by the same samples.
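For reference, the Bean-model FC/ZFC predictions invoked here and in Figs. 2, 4, and 5 can be reproduced with a minimal critical-state calculation. The sketch below assumes a one-dimensional slab geometry and an illustrative magnetization field, so the saturation fields need not exactly match the geometry-dependent numbers quoted in the figure captions:

```python
import numpy as np

def evolve(B, x, H_history, H_star):
    """Bean critical-state update for a slab with surface at x = 1 and center
    at x = 0: after each change of the applied field H, the internal field
    B(x) is clipped between the two extremal profiles of slope +/- H_star."""
    for H in H_history:
        B = np.clip(B, H - H_star * (1.0 - x), H + H_star * (1.0 - x))
    return B

n = 400
x = np.linspace(0.0, 1.0, n)
H_star = 0.2    # full-penetration field, the value used in Fig. 5 (tesla)
HM = 0.5        # magnetization field, illustrative

# ZFC: cool in zero field (B = 0), then ramp 0 -> HM -> 0
H_cycle = np.concatenate([np.linspace(0, HM, 200), np.linspace(HM, 0, 200)])
B_zfc = evolve(np.zeros(n), x, H_cycle, H_star)

# FC: cool in the field (B = HM everywhere), then ramp HM -> 0
B_fc = evolve(np.full(n, HM), x, np.linspace(HM, 0, 200), H_star)

# the trapped moment is proportional to the integral of the remanent field
print("ZFC trapped flux:", np.trapz(B_zfc, x))
print("FC  trapped flux:", np.trapz(B_fc, x))
```

Sweeping HM in this sketch reproduces the qualitative behavior referred to in Figs. 2, 4, and 5: the FC trapped moment saturates once HM exceeds H*, the ZFC trapped moment saturates later (at 2H* for this slab geometry), and neither is linear in HM over the whole range.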
In summary, we argue that the matters pointed out here cast doubt on the interpretation of Ref. [1] that the reported measurements originate in superconductivity.
FIG. 2: Magnetic fields and currents predicted by the Bean model under field-cooled (FC) and zero-field-cooled (ZFC) protocols. Here we assume Hp = 0 for simplicity.
FIG. 4: Expected trapped magnetic moment versus magnetization field HM assuming H* is twice the value inferred in Ref. [1], i.e. H* = 1.67T, compared to the experimental points. Remarkably, the experimental points obtained with the ZFC protocol are actually fitted by the calculation assuming FC.
FIG. 5: Trapped magnetic moment versus magnetization field HM assuming H* = 0.2T to fit the low-field experimental data. The ZFC curve saturates at H_M^sat = 0.44T, well before the experimental data.
[1] V. S. Minkov, V. Ksenofontov, S. L. Budko, E. F. Talantsev and M. I. Eremets, "Trapped magnetic flux in hydrogen-rich high-temperature superconductors", arXiv:2206.14108v1 (2022).
[2] A. P. Drozdov, M. I. Eremets, I. A. Troyan, V. Ksenofontov and S. I. Shylin, "Conventional superconductivity at 203 kelvin at high pressures in the sulfur hydride system", Nature 525, 73-76 (2015).
[3] A. P. Drozdov, M. I. Eremets and I. A. Troyan, "Superconductivity above 100 K in PH3 at high pressures", arXiv:1508.06224 (2015).
[4] A. P. Drozdov et al., "Superconductivity at 250 K in lanthanum hydride under high pressures", Nature 569, 528-531 (2019).
[5] F. Hong et al., "Superconductivity of Lanthanum Superhydride Investigated Using the Standard Four-Probe Configuration under High Pressures", Chin. Phys. Lett. 37, 107401 (2020).
[6] M. Somayazulu et al., "Evidence for superconductivity above 260 K in lanthanum superhydride at megabar pressures", Phys. Rev. Lett. 122, 027001 (2019).
[7] A. D. Grockowiak et al., "Hot Hydride Superconductivity above 550 K", Front. Electron. Mater., 04 March 2022.
[8] P. P. Kong et al., "Superconductivity up to 243 K in the yttrium-hydrogen system under high pressure", Nat. Commun. 12, 5075 (2021).
[9] I. A. Troyan et al., "Anomalous high-temperature superconductivity in YH6", Adv. Mater. 2006832 (2021).
[10] E. Snider et al., "Synthesis of Yttrium Superhydride Superconductor with a Transition Temperature up to 262 K by Catalytic Hydrogenation at High Pressures", Phys. Rev. Lett. 126, 117003 (2021).
[11] D. V. Semenok et al., "Superconductivity at 161 K in thorium hydride ThH10: Synthesis and properties", Materials Today 33, 36-44 (2020).
[12] D. Zhou et al., "Superconducting praseodymium superhydrides", Science Advances 6, eaax6849 (2020).
[13] D. V. Semenok et al., "Superconductivity at 253 K in lanthanum-yttrium ternary hydrides", Materials Today 48, 18 (2021).
[14] E. Snider et al., "Room-temperature superconductivity in a carbonaceous sulfur hydride", Nature 586, 373 (2020).
[15] W. Chen et al., "High-Temperature Superconducting Phases in Cerium Superhydride with a Tc up to 115 K below a Pressure of 1 Megabar", Phys. Rev. Lett. 127, 117001 (2021).
[16] F. Hong et al., "Possible superconductivity at ∼70 K in tin hydride SnHx under high pressure", Materials Today Physics 22, 100596 (2022).

V. SECOND PART

The above manuscript was posted on arXiv on 14 Jul 2022 and published in J Supercond Nov Magn 35, 3141-3145 (2022).
FIG. 6: Left panel: magnetic moment versus applied field for H3S under pressure, from Fig. 3a of Ref. [1]. Center panel: magnetic moment versus applied field in a hysteresis cycle, from Fig. 3e of Ref. [1]. The middle blue curve in the center panel is presumably the virgin curve, which should be identical to the light blue curve on the left panel labeled 100K. Right panel: quantitative comparison of the virgin curves for 100K from the left panel (3a) and the center panel (3e).
FIG. 7: Green curve, left panel: hysteresis cycle for magnetic moment of H3S at 100K, from Fig. 4a of Ref. [3]. Those data were used to obtain the critical current data shown in Fig. S5 of Ref. [1]. The blue curve on the left panel shows the magnetic moment versus magnetic field for 100K from the light blue curve on the left panel of Fig. 6, which is Fig. 3a of Ref. [1]. Right panel: a typical hysteresis cycle for a type II hard superconductor, from Ref. [7]. The virgin curve starting at the origin smoothly joins the hysteresis loop curve.
Acknowledgments

We acknowledge some helpful correspondence with the authors of Ref. [1]. JEH is grateful to R. Prozorov for illuminating discussions. FM was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) and by an MIF from the Province of Alberta. Author contributions: JEH initiated the study. JEH and FM analyzed the data and prepared the manuscript. Competing interests: the authors declare no competing interests. Data availability statement: The data that support the findings of this study are available from the authors upon reasonable request.
[17] W. Chen et al., "Synthesis of molecular metallic barium superhydride: pseudocubic BaH12", Nature Comm. 12, 273 (2021).
[18] L. Ma et al., "High-Temperature Superconducting Phase in Clathrate Calcium Hydride CaH6 up to 215 K at a Pressure of 172 GPa", Phys. Rev. Lett. 128, 167001 (2022).
[19] Z. W. Li et al., "Superconductivity above 200 K discovered in superhydrides of calcium", Nat. Commun. 13, 2863 (2022).
[20] D. V. Semenok et al., "On Distribution of Superconductivity in Metal Hydrides", Current Opinion in Solid State and Materials Science 24, 100808 (2020), and references therein.
[21] C. J. Pickard, I. Errea and M. I. Eremets, "Superconducting Hydrides Under Pressure", Ann. Rev. Cond. Matt. Phys. 11, 57-76 (2020), and references therein.
[22] J. A. Flores-Livas et al., "A perspective on conventional high-temperature superconductors at high pressure: Methods and materials", Physics Reports 856, 1-78 (2020).
[23] L. Boeri and B. Bachelet, "Viewpoint: the road to room-temperature conventional superconductivity", J. Phys. Cond. Matt. 31, 234002 (2019).
[24] Y. Quan, S. S. Ghosh and W. E. Pickett, "Compressed hydrides as metallic hydrogen superconductors", Phys. Rev. B 100, 184505 (2019).
[25] E. Zurek and T. Bi, "High-temperature superconductivity in alkaline and rare earth polyhydrides at high pressure: A theoretical perspective", J. Chem. Phys. 150, 050901 (2019).
[26] V. S. Minkov et al., "Magnetic field screening in hydrogen-rich high-temperature superconductors", Nat. Commun. 13, 3194 (2022).
[27] M. I. Eremets et al., "High-temperature superconductivity in hydrides: experimental evidence and details", J Supercond Nov Magn 35, 965 (2022).
[28] I. Troyan et al., "Observation of superconductivity in hydrogen sulfide from nuclear resonant scattering", Science 351, 1303 (2016).
[29] X. Huang et al., "High-temperature superconductivity in sulfur hydride evidenced by alternating-current magnetic susceptibility", Nat. Sci. Rev. 6, 713 (2019).
[30] J. E. Hirsch and F. Marsiglio, "Meissner effect in nonstandard superconductors", Physica C 587, 1353896 (2021).
[31] J. E. Hirsch and F. Marsiglio, "Absence of magnetic evidence for superconductivity in hydrides under high pressure", Physica C 584, 1353866 (2021).
[32] J. E. Hirsch and F. Marsiglio, "Clear evidence against superconductivity in hydrides under high pressure", arXiv:2110.07568 (2021), to be published in MRE.
[33] J. E. Hirsch, "Faulty evidence for superconductivity in ac magnetic susceptibility of sulfur hydride under pressure", National Science Review 9, nwac086 (2022).
[34] J. E. Hirsch and F. Marsiglio, "Flux trapping in superconducting hydrides under high pressure", Physica C 589, 1353916 (2021).
[35] Reference [1] states: "The trapped magnetic moment was determined as the difference between the measured magnetic moment after magnetization cycle and the residual magnetic moment, which arises from the body of the miniature DAC above the corresponding Tc (see Supplementary Figure 1)." Therefore a subtraction is involved in the magnetic moment data presented in Ref. [1].
[36] C. P. Bean, "Magnetization of High-Field Superconductors", Rev. Mod. Phys. 36, 31 (1964).
[37] J. E. Hirsch, "Ferromagnetism in metallic hydrogen", Phys. Lett. A 141, 191-195 (1989); "Superconductivity and hydromagnetism", Physica B 163, 291-298 (1990).
[1] V. Minkov et al., "Magnetic field screening in hydrogen-rich high-temperature superconductors", Nat. Commun. 13, 3194 (2022).
[2] A. P. Drozdov, M. I. Eremets, I. A. Troyan, V. Ksenofontov and S. I. Shylin, "Conventional superconductivity at 203 kelvin at high pressures in the sulfur hydride system", Nature 525, 73-76 (2015).
[3] V. S. Minkov et al., "The Meissner effect in high-temperature hydrogen-rich superconductors under high pressure", Res. Sq., DOI: 10.21203/rs.3.rs-936317/v1 (2021).
[4] I. A. Troyan et al., "High-temperature superconductivity in hydrides", Phys. Usp. 65, 748-761 (2022), and references therein.
[5] M. Tinkham, "Introduction to Superconductivity", Second Edition, McGraw Hill, New York, 1996, figure 5.2.
[6] C. P. Bean, "Magnetization of Hard Superconductors", Phys. Rev. Lett. 8, 250 (1962).
[7] M. Oussena, S. Senoussi and G. Collin, "Magnetic Hysteresis in La1.85Sr0.15CuO4", EPL 4, 625 (1987).
[8] L. Krusin-Elbaum et al., "Low-field Meissner fraction of YBaCuO in a flux pinning model", Physica C 153-155, 1469 (1988).
[9] M. Wetzstein et al., "On the low-field Meissner fraction in high-Tc ceramics", Physica B 169 (1991).
[10] Y. Tomioka, M. Naito, K. Kishio and K. Kitazawa, "The Meissner and shielding effects of high-temperature oxide superconductors", Physica C 223, 347 (1994).
[11] R. Prozorov et al., "Anomalous Meissner effect in pnictide superconductors", Phys. Rev. B 82, 180513(R) (2010).
[12] V. V. Moshchalkov and A. A. Zhukov, "The Meissner effect in superconductors with strong vortex pinning", Physica B 169, 601 (1991).
[13] M. V. Kartsovnik, G. Yu. Logvenov and K. Ya. Soifer, "Sample size effect on the Meissner fraction in YBa2Cu3O7−x single crystals", Cryogenics 30, 647 (1990).
[14] J. E. Hirsch and F. Marsiglio, "Clear evidence against superconductivity in hydrides under high pressure", Matter and Radiation at Extremes 7, 058401 (2022).
[15] I. Troyan et al., "Observation of superconductivity in hydrogen sulfide from nuclear resonant scattering", Science 351, 1303 (2016).
[16] J. E. Hirsch, "Comment on 'On the Analysis of the Tin-Inside-H3S Mossbauer Experiment'", J Supercond Nov Magn 35, 3115-3117 (2022).
[17] X. Huang et al., "High-temperature superconductivity in sulfur hydride evidenced by alternating-current magnetic susceptibility", Nat. Sci. Rev. 6, 713 (2019).
[18] J. E. Hirsch, "Faulty evidence for superconductivity in ac magnetic susceptibility of sulfur hydride under pressure", National Science Review 9, nwac086 (2022).
[19] V. S. Minkov, V. Ksenofontov, S. L. Budko, E. F. Talantsev and M. I. Eremets, "Trapped magnetic flux in hydrogen-rich high-temperature superconductors", arXiv:2206.14108v2 (2022).
[20] J. E. Hirsch and F. Marsiglio, "Evidence Against Superconductivity in Flux Trapping Experiments on Hydrides Under High Pressure", J Supercond Nov Magn 35, 3141-3145 (2022).
| [] |
[
"EDIS: Entity-Driven Image Search over Multimodal Web Content",
"EDIS: Entity-Driven Image Search over Multimodal Web Content"
] | [
"Siqi Liu \nCornell University\n\n",
"Weixi Feng [email protected] \nSanta Barbara\n",
"Wenhu Chen [email protected] \nUniversity of Waterloo\nVector Institute\n\n",
"William Yang Wang [email protected] \nSanta Barbara\n"
] | [
"Cornell University\n",
"Santa Barbara",
"University of Waterloo\nVector Institute\n",
"Santa Barbara"
] | [] | Making image retrieval methods practical for real-world search applications requires significant progress in dataset scales, entity comprehension, and multimodal information fusion. In this work, we introduce Entity-Driven Image Search (EDIS), a challenging dataset for cross-modal image search in the news domain. EDIS consists of 1 million web images from actual search engine results and curated datasets, with each image paired with a textual description. Unlike datasets that assume a small set of single-modality candidates, EDIS reflects realworld web image search scenarios by including a million multimodal image-text pairs as candidates. EDIS encourages the development of retrieval models that simultaneously address cross-modal information fusion and matching. To achieve accurate ranking results, a model must: 1) understand named entities and events from text queries, 2) ground entities onto images or text descriptions, and 3) effectively fuse textual and visual representations. Our experimental results show that EDIS challenges stateof-the-art methods with dense entities and the large-scale candidate set. The ablation study also proves that fusing textual features with visual features is critical in improving retrieval results 1 | 10.48550/arxiv.2305.13631 | [
"https://export.arxiv.org/pdf/2305.13631v1.pdf"
] | 258,841,061 | 2305.13631 | bc1e80f23e98e0b500b94ef82e7b54359b6e8615 |
EDIS: Entity-Driven Image Search over Multimodal Web Content
Siqi Liu
Cornell University
Weixi Feng [email protected]
Santa Barbara
Wenhu Chen [email protected]
University of Waterloo
Vector Institute
William Yang Wang [email protected]
Santa Barbara
EDIS: Entity-Driven Image Search over Multimodal Web Content
Making image retrieval methods practical for real-world search applications requires significant progress in dataset scales, entity comprehension, and multimodal information fusion. In this work, we introduce Entity-Driven Image Search (EDIS), a challenging dataset for cross-modal image search in the news domain. EDIS consists of 1 million web images from actual search engine results and curated datasets, with each image paired with a textual description. Unlike datasets that assume a small set of single-modality candidates, EDIS reflects realworld web image search scenarios by including a million multimodal image-text pairs as candidates. EDIS encourages the development of retrieval models that simultaneously address cross-modal information fusion and matching. To achieve accurate ranking results, a model must: 1) understand named entities and events from text queries, 2) ground entities onto images or text descriptions, and 3) effectively fuse textual and visual representations. Our experimental results show that EDIS challenges stateof-the-art methods with dense entities and the large-scale candidate set. The ablation study also proves that fusing textual features with visual features is critical in improving retrieval results 1
Introduction
Image search, also known as text-to-image retrieval, is the task of retrieving matching images from a candidate set given a text query. Despite the advancements in large-scale vision-and-language models (Zhang et al., 2021; Li et al., 2020b), accurately retrieving images from a large web-scale corpus remains a challenging problem. Several critical issues remain: 1) Lack of large-scale datasets: existing image retrieval datasets typically contain 30K-100K images (Plummer et al., 2015; Lin et al., 2014), which is far less than the number of images that search engines must deal with in real applications. 2) Insufficient entity-specific content: existing datasets focus on generic objects without specific identities. Specific entities ("Statue of Liberty") in web images and text may be recognized as general objects ("building"). 3) Modality mismatch: existing image retrieval methods usually measure image-text similarity. However, for web image search, the surrounding text of an image also plays a crucial part in this fast and robust retrieval process.

Figure 1: EDIS contains entity-rich queries and multimodal candidates. EDIS requires models to recognize subtle differences across different modalities to identify the correct candidates. For instance, the last three sample candidates either miss entities in the image or describe a different event.
Recently, there has been a continuous interest in event-centric tasks and methods in the news domain (Reddy et al., 2021; Varab and Schluter, 2021; Spangher et al., 2022). For instance, NYTimes800K (Tran et al., 2020) and Visual News are large-scale entity-aware benchmarks for news image captioning. TARA (Fu et al., 2022) is proposed to address time and location reasoning over news images. NewsStories (Tan et al., 2022) aims at illustrating events from news articles using visual summarization. While many of these tasks require accurate web image search results as a premise, a large-scale image retrieval dataset addressing the challenges of understanding entities and events is still lacking.
Therefore, to tackle the aforementioned three key challenges, we introduce a large-scale dataset named Entity-Driven Image Search (EDIS) in the news domain. As shown in Fig. 1, EDIS has a much larger candidate set and more entities in both the image and text modalities. In addition to images, the text segments surrounding an image are another important information source for retrieval. In news articles, headlines efficiently summarize the events and impress readers first (Panthaplackel et al., 2022; Gabriel et al., 2022). Hence, to simulate web image search with multi-modal information, we pair each image with the news headline as a textual summarization of the event. As a result, EDIS requires models to retrieve over image-headline candidates, which is a novel setup compared with existing datasets.
Given a text query, existing models can only measure query-image or query-text similarity alone. BM25 (Robertson et al., 2009) and DPR (Karpukhin et al., 2020) fail to utilize the visual features, while vision-language models like VisualBert (Li et al., 2019) and Oscar (Li et al., 2020b) cannot be adopted directly for image-headline candidates and are infeasible for large-scale retrieval. Dual-stream encoder designs like CLIP (Radford et al., 2021) are efficient for large-scale retrieval and can compute a weighted sum of query-image and query-text similarities to utilize both modalities. However, as shown later, such multi-modal fusion is sub-optimal for EDIS. In this work, we evaluate image retrieval models on EDIS and reveal that the information from images and headlines cannot be effectively utilized with score-level fusion. Therefore, we further propose a feature-level fusion method to utilize the information from both images and headlines effectively. Our contribution is three-fold:
• We collect and annotate EDIS for large-scale image search, which characterizes single-modality queries and multi-modal candidates. EDIS is curated to include images and text segments from open sources that depict a significant number of entities and events.
• We propose a feature-level fusion method for multi-modal inputs before measuring alignment with query features. We show that images and headlines are exclusively crucial sources for accurate retrieval results yet cannot be solved by naive reranking.
• We evaluate existing approaches on EDIS and demonstrate that EDIS is more challenging than previous datasets due to its large scale and entity-rich characteristics.
Related Work
Cross-Modal Retrieval Datasets Given a query sample, cross-modal retrieval aims to retrieve matching candidates from another modality (Bain et al., 2021; Hu and Lee, 2022; Sangkloy et al., 2022). Several datasets have been proposed or repurposed for text-to-image retrieval. For instance, MSCOCO (Lin et al., 2014) and Flickr30K (Plummer et al., 2015) are the two widely used datasets that consist of Flickr images of common objects. CxC extends MSCOCO image-caption pairs with continuous similarity scores for better retrieval evaluation. Changpinyo et al. (2021) repurpose ADE20K (Pont-Tuset et al., 2020) for image retrieval with localized narratives and mouse traces. WebQA (Chang et al., 2022) has a similar scale to EDIS and defines source retrieval as a prerequisite step for answering questions. The source candidates are either text snippets or images paired with a short description. In contrast, EDIS is a large-scale entity-rich dataset with multi-modal candidates that aligns better with realistic image search scenarios.
Text-to-Image Retrieval Methods Text-to-image retrieval has become a standard task for vision-language (VL) understanding (Lu et al., 2019; Li et al., 2020a; Jia et al., 2021; Fu et al., 2021; Dou et al., 2022). Single-stream approaches like VisualBert (Li et al., 2019) and UNITER rely on a unified transformer with concatenated image and text tokens as input. Dual-stream approaches like CLIP (Radford et al., 2021) or ALIGN (Jia et al., 2021) have separate encoders for the image and text modalities and are thus much more efficient for retrieval. However, adopting these models for multi-modal candidates leads to architecture modification or suboptimal modality fusion. In contrast, ViSTA (Cheng et al., 2022) can aggregate scene text as an additional input to the candidate encoding branch. In this work, we propose a method named mBLIP to perform feature-level fusion, achieving more effective information fusion for multi-modal candidates.

Figure 2: EDIS data collection process. The process consists of query selection, candidate collection and annotation, and hard negative mining.
Task Formation
EDIS contains a set of text queries Q = {q_1, q_2, ...} and a set of candidates, essentially image-headline pairs, B = {c_1 = (i_1, h_1), c_2 = (i_2, h_2), ...}, where i_n denotes an image and h_n denotes the associated headline. For a query q_m, a retrieval model needs to rank the top-k most relevant image-headline pairs from B. As shown in Fig. 1, both images and headlines contain entities that are useful for matching with the query. We evaluate approaches with both distractor and full candidate sets. The distractor setup is similar to conventional text-to-image retrieval using MSCOCO (Lin et al., 2014) or Flickr30K (Plummer et al., 2015), where images are retrieved from a limited set B̃ with ∼2.5K (image, headline) pairs. The full setting requires the model to retrieve images from the entire candidate set B.
Entity-Driven Image Search (EDIS)
We select queries and candidates from human-written news articles and scraped web pages with different stages of filtering. Then we employ human annotators to label relevance scores. Fig. 2 illustrates the overall dataset collection pipeline.
Query Collection
We extract queries and ground truth images from the VisualNews and TARA (Fu et al., 2022) datasets. These datasets contain news articles that have a headline, an image, and an image caption. We adopt captions as text queries and use image-headline pairs as the retrieval candidates.

We design a series of four filters to select high-quality, entity-rich queries, as sketched below. 1) Query complexity: we first evaluate the complexity of queries and remove simple ones with fewer than ten tokens. 2) Query entity count: we use spaCy to estimate average entity counts in the remaining queries and remove the 20% of queries with the lowest entity counts. The resulting query set has an average entity count above 4.0. 3) Query-image similarity: to ensure a strong correlation between queries and the corresponding ground truth image, we calculate the similarity score between query and image using CLIP (Radford et al., 2021) and remove the 15% of samples with the lowest scores. 4) Query-text similarity: we calculate the query-text similarity using Sentence-BERT (Reimers and Gurevych, 2019) and remove the top 10% most similar data to force the retrieval model to rely on visual representations.
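The four filters can be sketched as follows; this is only an illustrative pipeline assuming an off-the-shelf spaCy model for entity counting, with the CLIP and Sentence-BERT similarities precomputed elsewhere (the model name and the handling of percentile ties are assumptions, not the exact EDIS implementation):

```python
import numpy as np
import spacy

nlp = spacy.load("en_core_web_sm")   # any spaCy pipeline with NER

def filter_queries(queries, clip_scores, sbert_scores):
    """Apply the four EDIS query filters described above.
    clip_scores / sbert_scores: precomputed query-image (CLIP) and
    query-text (Sentence-BERT) similarities, one per query."""
    ent_counts = np.array([len(nlp(q).ents) for q in queries])
    ent_cut = np.percentile(ent_counts, 20)      # drop 20% lowest entity counts
    clip_cut = np.percentile(clip_scores, 15)    # drop 15% lowest query-image sim.
    sbert_cut = np.percentile(sbert_scores, 90)  # drop 10% highest query-text sim.
    kept = []
    for q, n_ent, s_img, s_txt in zip(queries, ent_counts, clip_scores, sbert_scores):
        if len(q.split()) < 10:      # 1) complexity: fewer than ten tokens
            continue
        if n_ent < ent_cut:          # 2) entity count
            continue
        if s_img < clip_cut:         # 3) query-image similarity
            continue
        if s_txt > sbert_cut:        # 4) query-text similarity
            continue
        kept.append(q)
    return kept
```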
To avoid repeating queries, we compare each query to all other queries using BM25 (Robertson et al., 2009). We remove queries with high similarity scores, as they potentially describe the same news event and would lead to the same retrieved images. As shown in Table 1, we end up with 32,493 queries split into 26K/3.2K/3.2K for train/validation/test.
Candidate Collection
In the web image search experience, multiple relevant images may exist for a single query. Therefore, we expand the candidate pool so that each query corresponds to multiple image-headline pairs. Additional candidates are collected from Google Image Search and the rest of the VisualNews dataset. For each query from VisualNews, we select seven image-headline pairs from Google Search. For each query from TARA, we select five image-headline pairs from Google Search and two image-headline pairs from VisualNews. Then, we ask annotators to label the relevance score for each candidate on a three-point Likert scale. Score 1 means "not relevant" while 3 means "highly relevant". Formally, denote E(·) as the entity set and V(·) as the event of a query q_m or a candidate c_n = (i_n, h_n); we define the ground truth relevance scores as:
$$\mathrm{rel}(m,n)=\begin{cases}3 & \text{if } E(q_m)\subseteq E(c_n)\ \text{and}\ V(q_m)=V(c_n)\\ 2 & \text{if } E(c_n)\subset E(q_m)\ \text{and}\ V(q_m)=V(c_n)\\ 1 & \text{if } E(q_m)\cap E(i_n)=\varnothing\ \text{or}\ V(q_m)\neq V(c_n)\end{cases}\qquad(1)$$
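For concreteness, Eq. (1) can be transcribed as a small helper; the set and event representations below are illustrative (in EDIS the scores were assigned by human annotators rather than computed automatically):

```python
def relevance(q_ents, cand_ents, img_ents, q_event, cand_event):
    """Literal transcription of Eq. (1). Entity sets are Python sets;
    events are opaque identifiers as judged by annotators."""
    same_event = (q_event == cand_event)
    if q_ents <= cand_ents and same_event:
        return 3   # all query entities covered, same event
    if cand_ents < q_ents and same_event:
        return 2   # same event, but some query entities missing
    if not (q_ents & img_ents) or not same_event:
        return 1   # no shared entities in the image, or a different event
    return None    # not covered by Eq. (1); left to annotator judgment
```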
Each candidate is annotated by at least three workers, and it is selected only when all workers reach a consensus. Controversial candidates that workers cannot agree upon after two rounds of annotations are discarded from the candidate pool. Additionally, one negative candidate is added to each annotation task to verify workers' attention. The final conformity rate among all annotations is over 91.5%.
Hard Negatives Mining We discover that EDIS queries can be challenging to Google Image Search in some cases. Among the 200K images from Google Search, 29K (∼15%) are annotated with a score of 1, and 124K (∼62%) are annotated with a score of 2. These candidates are hard negatives that require retrieval models to understand and ground visual entities in the images. As for candidates from VisualNews, there are 9.7K (∼41%) with a score of 1 and 9.5K (∼40%) candidates with a score of 2. We refer to these samples as in-domain hard negatives as their headlines share some entities with the query but refer to different events with discrepant visual representations.
Soft Negative Mining Lastly, we utilize the rest of the image-headline pairs from VisualNews and TARA to augment the candidate pool. These candidates are naturally negative candidates with a relevance score of 1 because of the unique article contents and extensive diversity in topics. Therefore, our dataset consists of 1,040,919 image-headline candidates in total.
Dataset Statistics
We demonstrate the major advantage of EDIS over existing datasets in Table 1. EDIS has the largest candidate set with a consistent candidate modality. Our images are not restricted to a specific source as a result of collecting images from a real search engine. Queries from EDIS are entity-rich compared to datasets with general objects (e.g. MSCOCO).
In Fig. 3 (left), we show the score distribution of the human annotations. Candidates mined from Visual News are mostly in-domain hard negatives whose images miss entities or represent different events; these candidates are mostly annotated with scores of 1 or 2. As for Google Search candidates, many images depict the same event but with missing entities, so the annotations concentrate on score 2. In Fig. 3 (right), we show that most queries have at least one hard negative, usually more score 2 negatives than score 1 negatives. About half of the queries have more than one positive candidate (score 3). We show more examples of EDIS candidates in Figs. 8-11.
Multi-Modal Retrieval Method
Given a text query q, a model should be able to encode both images i_n and headlines h_n to match the query encoding. Therefore, the model should include a multi-modal candidate encoder f_C and a query encoder f_Q. Within f_C, there is a branch f_I for the image input and a branch f_H for the headline. We formalize the matching process between a query q_m and a candidate c_n = (i_n, h_n) as:
$$s_{m,n} = f_Q(q_m)^{\top} f_C(i_n, h_n) \qquad (2)$$
where s_{m,n} is the similarity score between q_m and c_n. Based on the design of f_C, we categorize methods into score-level fusion and feature-level fusion.
Score-level Fusion These methods encode the image and headline independently and compute a weighted sum of the features, i.e., f_C(i_n, h_n) = w_1 f_I(i_n) + w_2 f_H(h_n). Therefore, s_{m,n} is equivalent to a weighted sum of the query-image similarity s^I_{m,n} and the query-headline similarity s^H_{m,n}:
$$s_{m,n} = f_Q(q_m)^{\top}\big(w_1 f_I(i_n) + w_2 f_H(h_n)\big) \qquad (3)$$
$$s_{m,n} = w_1 s^{I}_{m,n} + w_2 s^{H}_{m,n} \qquad (4)$$
Specifically, CLIP (Radford et al., 2021), BLIP (Li et al., 2022), and a combination of models like CLIP and BM25 (Robertson et al., 2009) belong to this category.
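A minimal sketch of score-level fusion (Eqs. (3)-(4)); the features are assumed to be L2-normalized outputs of a dual encoder such as CLIP or BLIP, and the dimensions are illustrative:

```python
import torch

def score_level_fusion(q_feat, img_feat, hl_feat, w1=0.5, w2=0.5):
    """Rank candidates by a weighted sum of query-image and query-headline
    similarities; w1/w2 would come from a grid search on validation data."""
    s_img = q_feat @ img_feat.T    # (num_queries, num_candidates)
    s_hl = q_feat @ hl_feat.T
    return w1 * s_img + w2 * s_hl

# toy usage with random 256-d features
q = torch.nn.functional.normalize(torch.randn(4, 256), dim=-1)
i = torch.nn.functional.normalize(torch.randn(10, 256), dim=-1)
h = torch.nn.functional.normalize(torch.randn(10, 256), dim=-1)
ranking = score_level_fusion(q, i, h).argsort(dim=-1, descending=True)
```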
Feature-level Fusion In Sec. 6, we show that score-level fusion is a compromised choice for encoding multi-modal candidates. Therefore, we propose a modified version of BLIP (mBLIP) to fuse features throughout the encoding process. The overall fusion process can be abstracted as follows:

$$f_C(i_n, h_n) = f_H\big(h_n, f_I(i_n)\big) \qquad (5)$$
$$s_{m,n} = f_Q(q_m)^{\top} f_H\big(h_n, f_I(i_n)\big). \qquad (6)$$

As shown in Fig. 4, we first extract image embeddings f_I(·) using the image encoder and then feed f_I(·) into the cross-attention layers of f_H. The output of f_H is a feature vector v_{i,h} that fuses the information from both the image and text modalities. We separately obtain the query feature v_q = f_Q(q_m), where f_Q shares the same architecture and weights with f_H, except that the cross-attention layers are not utilized. We adopt the Image-Text Contrastive (ITC) loss between v_{i,h} and v_q to align the fused features with the query features.
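The following is a minimal stand-in for this feature-level fusion (Eqs. (5)-(6)), not the actual BLIP code: headline tokens attend to image patch embeddings through cross-attention, and a single pooled vector represents the (image, headline) candidate. Layer counts, dimensions, and the pooling choice are illustrative assumptions:

```python
import torch
import torch.nn as nn

class FeatureLevelFusion(nn.Module):
    """Sketch of f_H(h, f_I(i)): self-attention over headline tokens,
    cross-attention into image patches, then a pooled fused vector."""
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, headline_tokens, image_patches):
        x, _ = self.self_attn(headline_tokens, headline_tokens, headline_tokens)
        x, _ = self.cross_attn(x, image_patches, image_patches)
        return self.proj(x[:, 0])   # first ([CLS]-like) token as v_{i,h}

fuser = FeatureLevelFusion()
h = torch.randn(8, 20, 256)   # batch of tokenized headline embeddings
i = torch.randn(8, 50, 256)   # batch of image patch embeddings from f_I
v_ih = fuser(h, i)            # fused candidate features, aligned to f_Q(q) via ITC
```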
Experiment Setup
Baselines
For the score-level fusion mentioned in Sec. 5, we consider CLIP, BLIP, fine-tuned BLIP, and BM25+CLIP reranking to utilize both modalities of the candidates. In addition, we benchmark existing text-to-image retrieval methods and text document retrieval methods, including VinVL (Zhang et al., 2021), ALBEF, and BM25 (Robertson et al., 2009). Although they are not designed for multi-modal candidates, benchmarking these methods facilitates our understanding of the importance of each single modality in the retrieval process. We do not consider single-stream approaches like UNITER (Chen et al., 2020), as they are not efficient for large-scale retrieval and would result in extremely long execution times (see Appendix A).
Evaluation Metrics
We evaluate retrieval models with the standard metric Recall@k (R@k), which computes the recall rate of the top-k retrieved candidates. k is set to 1, 5, and 10. We also report mean Average Precision (mAP) to reflect the retrieval precision, considering the ranking positions of all relevant documents. Formally,
$$\mathrm{Recall@}k = \frac{1}{|Q|}\sum_{m=1}^{|Q|}\frac{\sum_{n=1}^{k}\widetilde{\mathrm{rel}}(m,n)}{\sum_{n}\widetilde{\mathrm{rel}}(m,n)} \qquad (7)$$
$$\mathrm{mAP} = \frac{1}{|Q|}\sum_{m=1}^{|Q|}\frac{\sum_{n}P(m,n)\,\widetilde{\mathrm{rel}}(m,n)}{\sum_{n}\widetilde{\mathrm{rel}}(m,n)} \qquad (8)$$
$$\widetilde{\mathrm{rel}}(m,n) = \begin{cases}1 & \text{if } \mathrm{rel}(m,n)=3\\ 0 & \text{otherwise}\end{cases} \qquad (9)$$
where P(m,n) is the Precision@n of a query q_m. For R@k and mAP, candidates with relevance score 3 are positive candidates, while candidates with scores 2 or 1 are (hard) negative samples. These two metrics reflect the model's ability to retrieve the most relevant candidates, which aligns with the definition in Fig. 2.
To give merit to candidates with a score of 2, we also report Normalized Discounted Cumulative Gain (NDCG). NDCG assigns importance weights proportional to the relevance score, so ranking a score 2 candidate before a score 1 candidate leads to a higher metric value.
$$\mathrm{DCG}(m) = \sum_{n}\frac{\mathrm{rel}(m,n)-1}{\log_2(1+n)} \qquad (10)$$
$$\mathrm{NDCG} = \frac{1}{|Q|}\sum_{m=1}^{|Q|}\frac{\mathrm{DCG}(m)}{\mathrm{IDCG}(m)}, \qquad (11)$$
where IDCG(m) is the DCG value of q_m with the ideal candidate ranking.
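The three metrics can be implemented directly from Eqs. (7)-(11). Below, rel is a (num_queries, num_candidates) array of graded scores with each row already sorted by the model's ranking; this is a sketch, with small guards against empty positives added as an assumption:

```python
import numpy as np

def recall_at_k(rel, k):
    pos = (rel == 3).astype(float)                 # Eq. (9): score 3 is positive
    return np.mean(pos[:, :k].sum(1) / np.maximum(pos.sum(1), 1))

def mean_ap(rel):
    pos = (rel == 3).astype(float)
    ranks = np.arange(1, rel.shape[1] + 1)
    prec = np.cumsum(pos, axis=1) / ranks          # P(m, n) at every rank
    return np.mean((prec * pos).sum(1) / np.maximum(pos.sum(1), 1))

def ndcg(rel):
    ranks = np.arange(1, rel.shape[1] + 1)
    gains = ((rel - 1) / np.log2(1 + ranks)).sum(1)            # Eq. (10)
    ideal = ((np.sort(rel, 1)[:, ::-1] - 1) / np.log2(1 + ranks)).sum(1)
    return np.mean(gains / np.maximum(ideal, 1e-9))            # Eq. (11)
```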
Implementation Details
For BLIP fine-tuning, we adopt the same loss and hyperparameters as reported in the original implementation. We increase the learning rate to 1e-5 for optimal validation results. We directly rank the candidates by computing the cosine similarity of query features and candidate features and do not use any linear regression heads for reranking. Therefore, we abandon the image-text matching (ITM) loss in mBLIP fine-tuning and increase the learning rate to 5e-5 for optimal performance. More details can be found in Appendix A.
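For the score-fusion weights, a simple validation-set grid search suffices. The sketch below assumes w2 = 1 - w1 and a single 101-point sweep (the actual search, described in Appendix A, first uses a coarse grid and then refines); mean_ap from the metric sketch above can serve as the objective:

```python
import numpy as np

def grid_search_w1(s_img, s_hl, labels, metric):
    """Pick the score-fusion weight on the validation split.
    s_img / s_hl: (num_queries, num_candidates) similarity matrices;
    labels: graded relevance scores aligned with the candidates."""
    best_w1, best_val = 0.0, -np.inf
    for w1 in np.linspace(0.0, 1.0, 101):
        scores = w1 * s_img + (1.0 - w1) * s_hl
        order = np.argsort(-scores, axis=1)
        val = metric(np.take_along_axis(labels, order, axis=1))
        if val > best_val:
            best_w1, best_val = w1, val
    return best_w1
```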
Experimental Results
BLIP-based fusion methods
We first investigate the performance difference between score-level and feature-level fusion as mentioned in Sec. 5. We implement these two approaches on BLIP (Li et al., 2022). Table 2 compares the results under two different setups, where "BLIP" denotes the score-level fusion using the original BLIP architecture and "mBLIP" denotes our proposed feature-level fusion. For score-level fusion, we obtain the weights from a grid search on the validation set under the distractor setup.

Table 2: Retrieval performance of BLIP and mBLIP under the distractor and full settings. We use grid search on the validation split to find the best score fusion weights (see Eq. 4) for zero-shot and fine-tuned BLIP.
Distractor Set Pre-trained BLIP achieves 18.4 R@1 and 46.6 R@5, which means that almost one-third of the queries have a positive candidate in the top-1 results, and around half of the positive candidates are retrieved in the top-5 results. After fine-tuning, BLIP doubles R@1 to 32.6 and achieves significant gains in the other metrics. The improvement shows that entities in EDIS are out-of-domain concepts for zero-shot BLIP, and the EDIS training split is useful for models to adapt to the news domain. mBLIP outperforms BLIP in all metrics except R@1. The overall improvement entails that feature-level fusion is superior to score-level fusion by utilizing headlines more effectively. The degradation in R@1 can be attributed to the fact that the image-query alignment is accurate enough for a small number of queries; therefore, utilizing headlines slightly harms the results, as they only provide a high-level summarization.
Full Set Retrieving from the full candidate set significantly degrades the performance, by over 50% in all metrics. Though the distractor setup was widely adopted in previous work, we show that a larger candidate set imposes remarkable challenges on the SOTA models. We observe similar trends when comparing the three variants of BLIP. mBLIP achieves over 17% relative improvement across all metrics except R@1, even more significant than the 4-12% relative improvement under the distractor set. The degradation in R@1 is also much less severe. Therefore, feature-level fusion is a more effective way to encode multi-modal candidates, considering that users usually receive more than one searched image in reality.
Additional Baselines
In Table 3, the poor recall rates of BM25 and the CLIP text encoder imply that headlines alone are insufficient for accurate retrieval. However, text-based retrieval achieves promising NDCG values, indicating that headlines are useful for ranking score 2 candidates at higher positions.
Score-level Fusion "BM25+CLIP" first ranks candidates using BM25 and then reranks the top 50 or 200 candidates with CLIP to utilize the images. Despite the improvement over text-based methods, it underperforms zero-shot CLIP or BLIP. This implies that ranking by query-headline similarity imposes a bottleneck on the reranking process. CLIP achieves the best performance in terms of R@1/5/10 and mAP compared to other methods. We hypothesize that the "CLIP filtering" step in Sec. 4.1 eliminates hard negative query-image pairs for CLIP and thus introduces a performance bias towards CLIP. Fine-tuned CLIP does not show apparent improvement and thus is not shown in Table 3. Therefore, EDIS is still challenging for SOTA retrieval models. As suggested by the NDCG values, utilizing headlines helps rank score 2 candidates higher. We conjecture that many score 2 images have insufficient entities, resulting in lower query-image similarity scores. Hence, models must rely on headlines to simultaneously recognize entities from multiple modalities.
Ablation Study
Component Analysis Table 4 shows the performance of the two fusion approaches without either the image or the headline branch. BLIP achieves much lower performance when relying solely on query-headline alignment (6.6 R@1, 29.7 mAP) than when utilizing images only (33.9 R@1, 54.0 mAP). When using both images and headlines for score fusion, BLIP achieves only comparable or slightly degraded performance. Therefore, score-level fusion cannot easily tackle the multi-modal candidates in EDIS.
In contrast, mBLIP shows improved performance with the headline encoder but decreased performance with the image encoder only. This is intuitive, as the BLIP fine-tuning process only utilizes images without headlines, whereas mBLIP utilizes both images and headlines. More interestingly, when using both the image and headline encoders, mBLIP demonstrates over 20% relative increase in all metrics. The results imply that feature-level fusion is a more effective method to combine candidate features from multiple modalities.

Figure 5: A success case (top) and a failure case (bottom) of mBLIP compared to BLIP.
Case Study
Success Case We show one success case and one failure case of mBLIP in Fig. 5. In the success case (top), mBLIP manages to retrieve all four relevant images while BLIP retrieves five false positives. Since all ten images contain a "cruise", we conjecture that entities in headlines (e.g., "Cambodia", "Sihanoukville") play a critical role for mBLIP to outperform BLIP in this case. The case shows feature-level fusion is much more effective in utilizing headline features than score-level fusion.
Failure Case As for the failure case in Fig. 5 (bottom), BLIP and mBLIP fail to retrieve the positive candidates in the top-5 results. Both methods fail to recognize "John Podesta" and align the text with the visual representation. For example, the top-2 candidates retrieved by mBLIP depict a different person at a different event. "Hillary Clinton" becomes a distracting entity in the query, and the model must understand the event instead of just matching entities to achieve accurate retrieval results. The third candidate of mBLIP shows the image with the correct person but from a different event. This further proves that EDIS is a challenging dataset that requires specific knowledge of entities, cross-modal entity matching, and event understanding.
Conclusion
Training and evaluating on large-scale image retrieval datasets is an inevitable step toward real image search applications. To mitigate the gap between existing datasets and real-world image search challenges, we propose a large-scale dataset, EDIS, with a novel retrieval setting and one million candidates. EDIS queries and candidates are collected from the news domain and describe abundant entities and events. EDIS candidates are image-headline pairs, since realistic image search utilizes the surrounding text of an image to facilitate accurate search results. As a primary step towards handling multi-modal candidates in EDIS, we review two primary fusion approaches and propose a feature-level fusion method to effectively utilize the information from both images and headlines. Our experimental results show ample space for improvement on EDIS. Future work should consider more principled solutions involving knowledge graphs, entity linking, and training algorithm design.
Limitations

In this study, we only cover image retrieval datasets with English instructions. Queries and headlines in other languages may characterize different types of ambiguity or underspecification; thus, expanding to multi-lingual image retrieval based on our dataset is important. Secondly, we only consider the news domain to collect entity-rich queries and images. We plan to expand our dataset to the open domain, where other entities like iconic spots will be included. In addition, we only consider the headlines as the textual information to utilize in the retrieval process. However, in real image search scenarios, search engines usually utilize multiple paragraphs of the surrounding text to determine the relevance of an image. In the future, we will expand the text of the multimodal candidates with news articles or segments of articles. Our dataset and models trained on it could be biased if the model is not accurate enough. The model may return completely incorrect candidates and cause users to confuse persons or objects with incorrect identities. We will provide all ground truth annotations with visualization code to help users learn about the ground truth candidates. Last but not least, we do not consider the phenomenon of underspecification in the image search experience. Users search with phrases or incomplete sentences to save typing effort; therefore, more realistic queries can be underspecified and grammatically incorrect. However, this is a problem universal to all existing image retrieval datasets, as collecting real human search results could be challenging. We plan to make our dataset more realistic in the future by utilizing powerful tools such as large language models to generate underspecified, near-realistic queries.
Ethics Consideration
We will release our dataset EDIS for academic purposes only and should not be used outside of research. We strictly follow any licenses stated in the datasets that we have newly annotated. As introduced in Sec. 4.2, we annotated the data with crowd-workers through Amazon Mechanical Turk. The data annotation part of the project is classified as exempt by our Human Subject Committee via IRB protocols. We required the workers to be in English-speaking regions (Australia, Canada, New Zealand, the United Kingdom, and the United States). We keep the identity of workers anonymized throughout the collection and postprocessing stages. We also require the workers to have a HIT approval rating of ≥ 96% or higher. We pay each completed HIT $0.2, and each HIT takes around 30-40 seconds to complete on average. Therefore, this resulted in an hourly wage of $18-$24, as determined by the estimation of completing time for each annotation task. Example screenshots of our annotation interface can be found in Fig. 6-7 under Appendix A.
Figure 3: Left: annotated candidate distribution by relevance score. Right: query distribution by the scores of annotated candidates.
Figure 4: Score-level fusion encodes each modality into a single feature vector, while feature-level fusion outputs a single feature vector for multi-modal candidates by adopting cross-attention layers.
Table 1: Statistics of EDIS and existing image retrieval datasets. EDIS has a larger set of multi-modal candidates, unrestricted image sources, multi-scale annotations, and entity-rich queries compared to previous datasets. (# Img, Modality, Source, and Label describe the retrieval candidates; # Train, # Val, # Test, and # Entity describe the text queries.)

Dataset | # Img | Modality | Source | Label | # Train | # Val | # Test | # Entity
Flickr30K (Plummer et al., 2015) | 1K | Image | Flickr | Binary | 145K | 5K | 5K | 0.35
MSCOCO (Lin et al., 2014) | 5K | Image | Flickr | Binary | 566K | 25K | 25K | 0.18
CxC (Parekh et al., 2021) | 5K | Image | Flickr | Continuous | - | 25K | 25K | 0.18
ADE20K (Changpinyo et al., 2021) | 2K | Image | Flickr | Binary | 20K | 2K | - | 0.16
WebQA (Chang et al., 2022) | 390K | Text/Image | Wikimedia | Binary | 34K | 5K | 7.5K | 1.96
EDIS (ours) | 1M | Image-Text | Open (Google) | 3-Likert | 26K | 3.2K | 3.2K | 4.03
Table 3: Evaluation results on additional baselines.

Table 4: Ablation study on the effectiveness of feature fusion in BLIP and mBLIP.
[Figure 5 content: two example queries with the top-5 candidates retrieved by BLIP (score-level fusion) and mBLIP (feature-level fusion), each candidate shown with its headline and rank. Query 1: "The Westerdam cruise ship docked in Sihanoukville, Cambodia, on Monday." Query 2: "John Podesta, Hillary Clinton's campaign chairman looks on before the vice presidential debate in Farmville, VA on Oct 4 2016."]
A Appendix

A.1 Additional Implementation Details

Hyperparameters We fine-tuned BLIP and mBLIP on 4 40GB NVIDIA A100 GPUs. It takes 5 hours for BLIP fine-tuning and 3 hours for mBLIP. For both BLIP and mBLIP, we train the model for 6 epochs with batch size 16 per GPU. The model checkpoint with the best recall rate over the validation set is selected for final evaluation. We apply grid search for score-level fusion using BLIP or CLIP to find the optimal w_1. We first search over ten intermediate numbers between 0 and 1 and then narrow the range to search over 100 intermediate numbers. Finally, we found training and validation results stable without much randomness for all implemented methods. Therefore, we evaluate every model once and report the metric values of a one-time evaluation.

BLIP Training For BLIP fine-tuning, we follow the original implementation and adopt the original image-text contrastive (ITC) loss and image-text matching (ITM) loss. We only utilize images with scores of 3 and text queries for training. As for mBLIP, the headline encoder and the query encoder share the same weights. We utilize images with scores of 3, associated headlines, and text queries for training. The output of the image encoder is fed into the transformer layers of the headline encoder through cross-attention layers. Then the output of the headline encoder can be treated as the fused feature of image-headline pairs. We compute the ITC loss based on the headline encoder outputs and the query encoder outputs.

Single Stream Models We do not evaluate any single-stream models or modules due to time complexity. Consider m queries and n candidates. The complexity for a dual-encoder model to obtain all features is O(m+n). The computation cost of computing cosine similarity is trivial compared to the forward process of a model and can be neglected. However, for a single-stream model, it takes O(mn) to obtain similarity scores for all query-candidate pairs. Since it takes around 3.5 minutes for BLIP to evaluate 3.2K queries against 25K candidates, it would take more than 5 days for a single-stream model to complete retrieval under the distractor setting, and more than a year under the full setting.

[Example query shown in the appendix figures (Figs. 8-11): "A dog was rescued last month from a farm in Wonju, South Korea. Humane Society International offers to pay farmers to release dogs so they can be sent abroad to be adopted."]
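The time estimates above can be reproduced with simple arithmetic. The per-item cost below optimistically assumes a single-stream forward pass costs the same as one dual-encoder forward, so the resulting figures are lower bounds (a joint pass over a concatenated query-candidate pair is in fact more expensive):

```python
m, n_distractor, n_full = 3200, 25_000, 1_040_919
dual_minutes = 3.5                            # measured: BLIP, distractor setting
per_item = dual_minutes / (m + n_distractor)  # ~1.2e-4 min per forward pass

days = per_item * m * n_distractor / (60 * 24)
years = per_item * m * n_full / (60 * 24 * 365)
print(f"single-stream, distractor set: >{days:.0f} days")         # ~7 days (>5 days)
print(f"single-stream, full set: ~{years:.1f} years (lower bound)")
```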
Max Bain, Arsha Nagrani, Gül Varol, and Andrew Zisserman. 2021. Frozen in time: A joint video and image encoder for end-to-end retrieval. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1728-1738.
Yingshan Chang, Mridu Narang, Hisami Suzuki, Guihong Cao, Jianfeng Gao, and Yonatan Bisk. 2022. Webqa: Multihop and multimodal qa. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16495-16504.
Soravit Changpinyo, Jordi Pont-Tuset, Vittorio Ferrari, and Radu Soricut. 2021. Telling the what while pointing to the where: Multimodal queries for image retrieval. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 12136-12146.
Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. Uniter: Universal image-text representation learning. In European Conference on Computer Vision, pages 104-120. Springer.
Mengjun Cheng, Yipeng Sun, Longchao Wang, Xiongwei Zhu, Kun Yao, Jie Chen, Guoli Song, Junyu Han, Jingtuo Liu, Errui Ding, et al. 2022. Vista: Vision and scene text aggregation for cross-modal retrieval. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5184-5193.
Zi-Yi Dou, Yichong Xu, Zhe Gan, Jianfeng Wang, Shuohang Wang, Lijuan Wang, Chenguang Zhu, Pengchuan Zhang, Lu Yuan, Nanyun Peng, et al. 2022. An empirical study of training end-to-end vision-and-language transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18166-18176.
Tsu-Jui Fu, Linjie Li, Zhe Gan, Kevin Lin, William Yang Wang, Lijuan Wang, and Zicheng Liu. 2021. Violet: End-to-end video-language transformers with masked visual-token modeling. arXiv preprint arXiv:2111.12681.
Xingyu Fu, Ben Zhou, Ishaan Chandratreya, Carl Vondrick, and Dan Roth. 2022. There's a time and place for reasoning beyond the image. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1138-1149.
Saadia Gabriel, Skyler Hallinan, Maarten Sap, Pemi Nguyen, Franziska Roesner, Eunsol Choi, and Yejin Choi. 2022. Misinfo reaction frames: Reasoning about readers' reactions to news headlines. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3108-3127.
Conghui Hu and Gim Hee Lee. 2022. Feature representation learning for unsupervised cross-domain image retrieval. In European Conference on Computer Vision, pages 529-544. Springer.
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. 2021. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, pages 4904-4916. PMLR.
Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781.
Wonjae Kim, Bokyung Son, and Ildoo Kim. 2021. Vilt: Vision-and-language transformer without convolution or region supervision. In International Conference on Machine Learning, pages 5583-5594. PMLR.
Gen Li, Nan Duan, Yuejian Fang, Ming Gong, and Daxin Jiang. 2020a. Unicoder-vl: A universal encoder for vision and language by cross-modal pre-training. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 11336-11344.
Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. 2022. BLIP: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 12888-12900. PMLR.
Junnan Li, Ramprasaath Selvaraju, Akhilesh Gotmare, Shafiq Joty, Caiming Xiong, and Steven Chu Hong Hoi. 2021. Align before fuse: Vision and language representation learning with momentum distillation. Advances in Neural Information Processing Systems, 34:9694-9705.
Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. Visualbert: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557.
Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, et al. 2020b. Oscar: Object-semantics aligned pre-training for vision-language tasks. In European Conference on Computer Vision, pages 121-137. Springer.
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C Lawrence Zitnick. 2014. Microsoft coco: Common objects in context. In European Conference on Computer Vision, pages 740-755. Springer.
Fuxiao Liu, Yinghan Wang, Tianlu Wang, and Vicente Ordonez. 2021. Visual news: Benchmark and challenges in news image captioning. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6761-6771.
Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. Vilbert: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. Advances in Neural Information Processing Systems, 32.
Sheena Panthaplackel, Adrian Benton, and Mark Dredze. 2022. Updated headline generation: Creating updated summaries for evolving news stories. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 6438-6461.
Zarana Parekh, Jason Baldridge, Daniel Cer, Austin Waters, and Yinfei Yang. 2021. Crisscrossed captions: Extended intramodal and intermodal semantic similarity judgments for MS-COCO. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 2855-2870, Online. Association for Computational Linguistics.
Bryan A Plummer, Liwei Wang, Chris M Cervantes, Juan C Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. 2015. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. In Proceedings of the IEEE International Conference on Computer Vision, pages 2641-2649.
Jordi Pont-Tuset, Jasper Uijlings, Soravit Changpinyo, Radu Soricut, and Vittorio Ferrari. 2020. Connecting vision and language with localized narratives. In ECCV.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748-8763. PMLR.
Revanth Gangi Reddy, Sai Chetan, Zhenhailong Wang, Yi R Fung, Kathryn Conger, Ahmed Elsayed, Martha Palmer, Preslav Nakov, Eduard Hovy, Kevin Small, et al. 2021. Newsclaims: A new benchmark for claim detection from news with attribute knowledge. arXiv preprint arXiv:2112.08544.
Nils Reimers and Iryna Gurevych. 2019. Sentence-bert: Sentence embeddings using siamese bert-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992.
Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: BM25 and beyond. Foundations and Trends in Information Retrieval, 3(4):333-389.
Patsorn Sangkloy, Wittawat Jitkrittum, Diyi Yang, and James Hays. 2022. A sketch is worth a thousand words: Image retrieval with text and sketch. In European Conference on Computer Vision, pages 251-267. Springer.
Alexander Spangher, Xiang Ren, Jonathan May, and Nanyun Peng. 2022. Newsedits: A news article revision dataset and a novel document-level reasoning challenge. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 127-157.
Reuben Tan, Bryan A Plummer, Kate Saenko, JP Lewis, Avneesh Sud, and Thomas Leung. 2022. Newsstories: Illustrating articles with visual summaries. In European Conference on Computer Vision, pages 644-661. Springer.
Alasdair Tran, Alexander Mathews, and Lexing Xie. 2020. Transform and tell: Entity-aware news image captioning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13035-13045.
Daniel Varab and Natalie Schluter. 2021. Massivesumm: a very large-scale, very multilingual, news summarisation dataset. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 10150-10161.
Yuxuan Wang, Difei Gao, Licheng Yu, Weixian Lei, Matt Feiszli, and Mike Zheng Shou. 2022. Geb+: A benchmark for generic event boundary captioning, grounding and retrieval. In European Conference on Computer Vision, pages 709-725. Springer.
Zirui Wang, Jiahui Yu, Adams Wei Yu, Zihang Dai, Yulia Tsvetkov, and Yuan Cao. 2021. Simvlm: Simple visual language model pretraining with weak supervision. In International Conference on Learning Representations.
Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. 2021. Vinvl: Making visual representations matter in vision-language models. CVPR 2021.
| [] |
[
"Why semantics matters: A deep study on semantic particle-filtering localization in a LiDAR semantic pole-map Journal Title XX(X):1-19",
"Why semantics matters: A deep study on semantic particle-filtering localization in a LiDAR semantic pole-map Journal Title XX(X):1-19"
] | [
"Yuming Huang ",
"Yi Gu ",
"Chengzhong Xu ",
"Hui Kong "
] | [] | [] | In most urban and suburban areas, pole-like structures such as tree trunks or utility poles are ubiquitous. These structural landmarks are very useful for the localization of autonomous vehicles given their geometrical locations in maps and measurements from sensors. In this work, we aim at creating an accurate map for autonomous vehicles or robots with pole-like structures as the dominant localization landmarks, hence called pole-map. In contrast to the previous pole-based mapping or localization methods, we exploit the semantics of pole-like structures. Specifically, semantic segmentation is achieved by a new mask-range transformer network in a mask-classfication paradigm. With the semantics extracted for the pole-like structures in each frame, a multi-layer semantic pole-map is created by aggregating the detected pole-like structures from all frames. Given the semantic pole-map, we propose a semantic particle-filtering localization scheme for vehicle localization. Theoretically, we have analyzed why the semantic information can benefit the particle-filter localization, and empirically it is validated on the public SemanticKITTI dataset that the particle-filtering localization with semantics achieves much better performance than the counterpart without semantics when each particle's odometry prediction and/or the online observation is subject to uncertainties at significant levels. | 10.48550/arxiv.2305.14038 | [
"https://export.arxiv.org/pdf/2305.14038v1.pdf"
] | 258,841,359 | 2305.14038 | 25266ec8490af13d7fac719d9502b97fd3f879e6 |
Why semantics matters: A deep study on semantic particle-filtering localization in a LiDAR semantic pole-map
Yuming Huang
Yi Gu
Chengzhong Xu
Hui Kong
DOI: 10.1177/ToBeAssigned (SAGE)
Keywords: Semantic Point-Cloud Segmentation, Particle Filter Localization, SLAM, Autonomous Vehicles
In most urban and suburban areas, pole-like structures such as tree trunks or utility poles are ubiquitous. These structural landmarks are very useful for the localization of autonomous vehicles given their geometrical locations in maps and measurements from sensors. In this work, we aim at creating an accurate map for autonomous vehicles or robots with pole-like structures as the dominant localization landmarks, hence called a pole-map. In contrast to the previous pole-based mapping or localization methods, we exploit the semantics of pole-like structures. Specifically, semantic segmentation is achieved by a new mask-range transformer network in a mask-classification paradigm. With the semantics extracted for the pole-like structures in each frame, a multi-layer semantic pole-map is created by aggregating the detected pole-like structures from all frames. Given the semantic pole-map, we propose a semantic particle-filtering localization scheme for vehicle localization. Theoretically, we have analyzed why the semantic information can benefit the particle-filter localization, and empirically it is validated on the public SemanticKITTI dataset that the particle-filtering localization with semantics achieves much better performance than the counterpart without semantics when each particle's odometry prediction and/or the online observation is subject to uncertainties at significant levels.
Introduction
The precise acquisition of the vehicle pose in real time in an urban or suburban environment enables autonomous vehicles to plan a path and navigate to a specified destination. While GPS can provide position estimates at a global scale, these estimates are often not sufficiently accurate, since GPS suffers from substantial errors due to multi-path effects in urban canyons and signal degradation caused by occlusion from trees. A popular alternative for autonomous vehicle localization is to match sensor observations against the data stored in a previously acquired map.
In many existing methods, the same type of sensor is used both during map creation and during vehicle localization within that map. Cameras are cheap, lightweight, and widely available sensors. A monocular camera, however, cannot provide absolute range information. Therefore, during the mapping process, binocular cameras can be adopted (Campos et al. 2021), or a monocular camera can be combined with a multiple-channel LiDAR sensor (Qin et al. 2021; Caselitz et al. 2016) if camera sensors are the preferred choice for localization. In general, camera-based localization is sensitive to illumination variations and view angles, even when it relies on matching the geometry of a sparse set of 3D points reconstructed from image features against the map's point cloud (Caselitz et al. 2016) or on illumination-invariant image features for matching sequences (Arroyo et al. 2015).
In contrast, LiDAR sensors are almost insensitive to external lighting conditions and provide accurate range measurements. Our method uses a multiple-channel LiDAR sensor both during map creation and during localization within the map. This sensor setting for both mapping and localization has been widely used (Zhang and Singh 2014; Shan and Englot 2018; Chen et al. 2020). In general, existing LiDAR-based map representations have a large memory cost, although some approaches achieve a significant reduction in memory consumption by saving only extracted high-curvature/corner points in the map.
The method proposed in this paper belongs to the family of LiDAR-based mapping and localization approaches. Specifically, we aim at creating an accurate map for autonomous vehicles or robots with pole-like structures as the dominant localization landmarks, hence called a pole-map. In contrast to previous pole-based mapping or ego-motion estimation methods, we exploit the semantics of the pole-like structures. Specifically, semantic segmentation is achieved by a new mask-range network, from which pole-like structures with semantics are obtained in each frame. The semantics are utilized in the offline mapping process. Given the semantic pole-map, we propose a semantic particle-filtering localization method for vehicle localization. It is shown that when the uncertainty of each particle's prediction is subject to a nonlinear increase or an abrupt change, particle-filtering localization with semantics achieves much better performance than its counterpart without semantics.
Our contributions are threefold:
• We propose a relatively complete framework for semantic mapping and localization, where localization is achieved by semantic particle filtering in a semantic pole-map created offline with a multi-channel LiDAR sensor.
• For the offline semantic pole-map creation, based on LiDAR's range-view representation, we first propose a semantic segmentation transformer network in a mask-classification paradigm to segment pole-like structures from LiDAR scans. A multi-layer semantic pole-map is then created by aggregating the detected pole-like structures based on the vehicle's ego-motion, together with semantic-feature embeddings.
• For the online vehicle localization given the created semantic pole-map, we theoretically analyze how the semantic information can benefit the particle-filter localization. Different from existing works, our method utilizes both the geometric and semantic discrepancies to improve particle-filter localization, by both improving its proximity to the true pose and discouraging bad pose proposals. Empirically, we demonstrate its effectiveness and its improvement over conventional methods based on the multi-layer semantic map on the real-world SemanticKITTI dataset with simulated uncertainty.
Related Works
Different types of sensors have been utilized for vehicle localization given a 3D map. These sensors can be used individually, although in many applications they complement each other, as well as other sensors (e.g., Inertial Measurement Unit (IMU), wheel odometry, radar), for optimal localization performance. However, for true redundancy, these sub-systems must also work independently to avoid dangerous situations caused by sensor malfunction. In this paper, we are only interested in vehicle localization in cities or suburbs. Specifically, we only review localization in a prior point-cloud map with an individual camera (including a stereo camera) or a multiple-channel LiDAR sensor; related works on indoor localization are beyond the scope of this work. Camera-based localization within a 3D map: Generally, given a 3D map, camera-based localization is cheaper than LiDAR-based localization. Compared with LiDAR-based methods, however, monocular or RGB-D cameras are sensitive to illumination variation, seasonal changes, and adverse weather. Wolcott and Eustice (2014) propose a camera-based localization method within a 3D prior map (augmented with surface reflectivities) built by a 3D LiDAR sensor: given an initial pose, a number of synthetic views of the environment are generated from the 3D prior map, and camera localization is achieved by matching the live camera view against these synthetic views based on normalized mutual information. Maddern et al. (2014) propose online 6-DoF visual localization across a wide range of outdoor illumination conditions throughout the day and night using a 3D scene prior: an illumination-invariant image representation is adopted, with a normalized information distance as the matching measure between the online image and the colored 3D LiDAR point cloud.
Another way to deal with illumination variation is to match point clouds. Caselitz et al. (2016) propose to localize a monocular camera in a point-cloud map pre-built by a LiDAR sensor: they reconstruct a sparse set of 3D points from image features based on local bundle adjustment, which are continuously matched against the map to track the camera pose in an online fashion. Similarly, Yabuuchi et al. (2021) propose to continuously estimate similarity transformations that align the 3D structure reconstructed by visual SLAM to the point-cloud map. Stereo cameras have also been applied to vehicle localization, e.g., to achieve consumer-level global positioning system (GPS) accuracy by matching the depth from the stereo disparity against 3D LiDAR maps.
Aside from purely geometric cues, semantic information has also been exploited extensively for both efficiency and accuracy of localization. Schreiber et al. (2013) propose to use lanes as localization cues: they manually annotate lane markers in a LiDAR intensity map; these lane markers are then detected online using a stereo camera and matched against the ones in the map. Qu et al. (2015) and Welzel et al. (2015) utilize traffic signs for image-based localization. Schönberger et al. (2018) build dense semantic maps using image segmentation and conduct localization by matching both semantic and geometric cues. To reduce the map size, Ma et al. (2019) formulate the localization problem in a Bayesian filtering framework and exploit lanes, traffic signs, and vehicle dynamics to localize robustly with respect to a sparse semantic map. Similar work is proposed in Qin et al. (2021), which exploits semantic road elements for lightweight map-based camera localization.
Besides corner points, lines have also been used for localization. In Yu et al. (2020), correspondences between 2D lines in video sequences and 3D lines in a prior LiDAR map are established for monocular camera localization. Lecrosnier et al. (2019) also propose a vehicle relocalization method in a 3D line-feature map using Perspective-n-Line, given a known vertical direction.
In addition, integrating more than one type of sensor can also improve localization accuracy, e.g., radar plus LiDAR in fire-disaster scenes (Park et al. 2019). Wan et al. (2018) propose a sensor-fusion-based localization method in a 3D map that adaptively uses information from complementary sensors such as GNSS, LiDAR, and IMU to achieve high localization accuracy and resilience in challenging scenes such as urban downtowns, highways, and tunnels.
LiDAR-based localization within a 3D map: Because LiDAR sensors provide high-precision data irrespective of the measured distance, they have also been widely adopted for localization within a given 3D map. Yoneda et al. (2014) propose a feature quantity for scan data, based on the distribution of clusters, for localization against a precise 3D map. Carle et al. (2010) propose the Multi-frame Odometry-compensated Global Alignment (MOGA) algorithm to globally localize a rover by matching features from a three-dimensional (3D) orbital elevation map to features from rover-based 3D LiDAR scans. Hata and Wolf (2015) extract road-marker and curb features from multiple-channel LiDAR data; these features are stored in a high-resolution occupancy grid map, and Monte-Carlo localization (MCL) is used for vehicle localization. Similarly, Qin et al. (2012) introduce an MCL method using curb-intersection features on urban roads with a single tilted 2D LiDAR. Wang et al. (2020c) also propose using curb and vertical features for localization. Chen et al. (2020) propose a neural-network-based observation model that predicts the overlap and yaw-angle offset between the online LiDAR reading and virtual frames generated from a pre-built map.
Besides, LiDAR data have also been converted into image-like representations used to match the online data against the ones stored in the map for vehicle localization, e.g., Scan Context, LiDAR Iris (Wang et al. 2020b), and Intensity Scan Context (Wang et al. 2020a). These methods are suitable for rough localization rather than centimeter-level accuracy.
Pole-based Mapping and Localization
Pole-like structures have also been used as landmarks in mapping and localization because of their invariance over time and across viewpoints. Spangenberg et al. (2016) propose to extract poles from depth images captured by a stereo camera; however, depth estimation from stereo cameras is sensitive to illumination conditions. Sefati et al. (2017) use a LiDAR in addition to a stereo camera to extract the poles. With the popularization of LiDAR, some recent methods utilize only LiDAR for localization. Weng et al. (2018), Li et al. (2021), Lu et al. (2020), and Chen et al. (2021) propose to extract poles from voxelized point clouds. Chen et al. (2021) incorporate the curb extracted from the Bird's-Eye View (BEV) projection to achieve pole-curb fusion for localization. Schaefer et al. (2019, 2021) explicitly model the occupied and free space by ray tracing that considers both the start point and the end point of each ray; however, their runtime is limited, as 3D voxel processing is computationally expensive. In contrast, Dong et al. (2021, 2023) use range-view-based methods to extract the poles: they use a series of geometric heuristics to cluster the poles in the range view and fit a circle to each pole to obtain its center position on the global map. Wang et al. (2021) train RangeNet++ for pole segmentation on their own labeled dataset. Dong et al. (2023) propose to train SalsaNext (Cortinhal et al. 2020) for pole segmentation using pole labels generated by the geometric heuristics across several datasets, and achieve higher localization performance than directly using the generated labels. Different from those works, we use the mask-classification paradigm to segment the poles, achieving better mapping performance without introducing much computational cost.
Having built a map from the extracted poles, these works use non-linear optimization or a Monte Carlo particle filter for localization. Schaefer et al. (2019, 2021) and Dong et al. (2021, 2023) use the Monte Carlo particle filter to estimate the pose, with nearest-neighbor searching providing the correspondence between observations and landmarks in the map. Li et al. (2021) use a 4D vector including position, radius, and height for corresponding pole finding. Chen et al. (2021) propose a Branch-and-Bound-based global optimization method to tackle the data-association problem of poles, and use a non-linear optimization method to fuse the pole cost and curb cost to obtain the vehicle location. Wang et al. (2021) propose to match the poles between local and global maps according to the semantic and geometric consistency of the poles; after finding the correspondences, the Iterative Closest Point (Besl and McKay 1992) algorithm on the pole centroids and pole point clouds is utilized to optimize the pose for relocalization, which is then combined with LiDAR odometry for localization. In contrast, we incorporate the semantic information predicted by the segmentation network into the Monte Carlo particle filter for localization.
Particle-Filter Localization with Semantics
The distance field (Jiang et al. 2021; Miller et al. 2021; Akai et al. 2020) has been utilized to describe the nearest distance from obstacles or surfaces in each semantic category; however, this map representation is point-wise and can be memory inefficient. Other works match observed objects to the ones in the built map with the corresponding semantic category (Bavle et al. 2018; Zimmerman et al. 2022), but they depend on camera-based detection and recognition in indoor environments. All these methods utilize the geometric discrepancy (position and orientation difference) and incorporate semantic information into the association for a better geometric-discrepancy calculation.
In Jeong et al. (2020) and Bernuy and Ruiz-del Solar (2018), the particle weights are updated by calculating the discrepancy in semantics between the online observations and the ones in the built map. In Jeong et al. (2020), the semantic discrepancy is measured by a bitwise AND operation of segmented labels in bird's-eye-view projections of images. In Bernuy and Ruiz-del Solar (2018), the semantic features are represented by histograms of the number of segmented labels in images, and the cosine similarity between these features is used as the discrepancy between online observations and the ones in the built topological map. Note that these methods adopt cameras as the main sensor and are therefore sensitive to illumination variations. In Yan et al. (2019), semantic descriptors are extracted from LiDAR scans as observations and compared with map information from OpenStreetMap (OpenStreetMap contributors 2017), using the Hamming distance as the discrepancy. However, only the semantic discrepancy is utilized in this work, and the absence of a geometric discrepancy might limit the localization performance. Different from the existing works, we utilize both the semantic and geometric discrepancies of pole-like landmarks extracted by the LiDAR sensor, which is efficient and robust to illumination.
Our Method
In this section, we introduce our methods for extracting pole-like objects, building pole-maps, and localizing the vehicle in the created pole-maps. To extract pole-like objects, we first segment the LiDAR scans into discrete regions to distinguish pole-like objects from other objects, and then cluster the segmented pole-like objects. While building the pole-map, we convert the extracted pole-like objects into map landmarks with geometric and semantic information. For localization, we propose a semantic-aware Monte Carlo particle filter to improve accuracy and robustness. We show that when pole-like objects are very sparsely distributed in the map, or when the prediction of each particle's pose is subject to large uncertainty, semantics play a key role in improving localization accuracy.
Pole Segmentation
Deep neural networks are popular for semantic prediction. To obtain pole-like objects with semantics from LiDAR scans, one can use an object-detection network to predict the semantic category with a bounding box for each pole-like object, or a segmentation network to predict the semantic category for each point. The former directly predicts the semantics together with geometric information, while the latter provides the semantic information for each point for further processing. We choose the latter because geometric clustering with per-point semantics can estimate more accurate pole-like instances than directly using bounding boxes: a bounding box is a cuboid rather than a cylinder, and it is difficult to generalize it to predict the parameters of an object's shape.
An efficient representation of the LiDAR point cloud for LiDAR-based segmentation is the range-view image. In this representation, the LiDAR point cloud is projected onto a range-view image according to the spherical projection
$$\begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} \frac{1}{2}\left[1 - \arctan(y, x)\,\pi^{-1}\right] W \\ \left[1 - \left(\arcsin(z r^{-1}) + f_{\mathrm{up}}\right) \frac{1}{f}\right] H \end{pmatrix}, \tag{1}$$
where $W$ and $H$ are the width and height of the range image, respectively, and $f = f_{\mathrm{up}} + f_{\mathrm{down}}$ is the LiDAR's vertical field of view. The range value $r = \sqrt{x^2 + y^2 + z^2}$ is computed from the point coordinates $[x, y, z]^T$, and $(u, v)^T$ are the image coordinates in the range view. The range-view image representation is used to predict the pixel-wise category, which is then projected back to the original point cloud to obtain the point-wise category. We choose the range-view-based representation because it is generally more efficient than point-based and voxel-based semantic segmentation methods.
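A minimal NumPy sketch of the projection in Eq. (1) is given below. The field-of-view values are hypothetical (roughly a 64-beam Velodyne), and since sign conventions for $f_{\mathrm{up}}$ differ across papers, the sketch follows the common range-image implementation in which the vertical offset uses the magnitude of the downward field of view so that the top beam maps to row 0.

```python
import numpy as np

def project_to_range_image(points, W=2048, H=64,
                           fov_up_deg=3.0, fov_down_deg=-25.0):
    """Spherical projection of an (N, 3) point cloud to range-image
    coordinates following Eq. (1). FOV values are assumptions; use the
    actual sensor calibration in practice."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x ** 2 + y ** 2 + z ** 2)           # range of each point
    fov_down = abs(np.radians(fov_down_deg))
    fov = np.radians(fov_up_deg) + fov_down         # f = f_up + f_down

    u = 0.5 * (1.0 - np.arctan2(y, x) / np.pi) * W        # azimuth -> column
    v = (1.0 - (np.arcsin(z / r) + fov_down) / fov) * H   # elevation -> row

    # Round down and clamp to valid pixel indices.
    u = np.clip(np.floor(u), 0, W - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, H - 1).astype(np.int32)
    return u, v, r
```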
LiDAR Segmentation by Mask-Classification
Following Long et al. (2015), previous range-view-based methods (Cortinhal et al. 2020; Zhao et al. 2021) mostly predict the pixel-wise category in the range-view image by per-pixel classification. In contrast, we use the mask-classification paradigm (Cheng et al. 2021) to predict the region for each category as well as the pixel-wise category. Mask classification achieves state-of-the-art performance in image segmentation (Cheng et al. 2022); however, few works have investigated its effectiveness in LiDAR-based segmentation. In this work, we investigate the mask-classification paradigm for the semantic segmentation of pole-like objects for semantic mapping. Specifically, we use the SalsaNext (Cortinhal et al. 2020) network structure as the backbone and the same transformer decoder as in Cheng et al. (2021). During training, the supervision loss combines the classification loss $L_{\mathrm{cls}}$ and the binary mask loss $L_{\mathrm{mask}}$ as in Cheng et al. (2021),
$$L_{\mathrm{total}} = L_{\mathrm{cls}} + L_{\mathrm{mask}}. \tag{2}$$
In Cheng et al. (2021), the classification loss $L_{\mathrm{cls}}$ is the cross-entropy loss, and the binary mask loss $L_{\mathrm{mask}}$ combines the focal loss (Lin et al. 2017) and the dice loss (Milletari et al. 2016). In addition, we add the Lovász loss (Berman et al. 2018) to $L_{\mathrm{mask}}$ to directly optimize the Jaccard index, which was shown to be effective on LiDAR data in Cortinhal et al. (2020), and we remove pixels that are invalid in the range-view image from the supervision of $L_{\mathrm{mask}}$. At inference time, the class prediction and mask prediction are combined to obtain the final category for each pixel in the same way as in Cheng et al. (2021). The network is trained on all semantic categories from the labels, and we focus on the performance of the pole-like categories, e.g., pole, trunk, and traffic sign. The network architecture is shown in Fig. 1. Using the mask-classification paradigm with the data augmentation introduced in the following section, the experimental section shows that the performance of pole-like object segmentation is superior to that of previous range-view-based methods (Cortinhal et al. 2020; Zhao et al. 2021).
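The following PyTorch-style sketch illustrates one plausible instantiation of Eq. (2). It assumes the queries have already been matched to ground-truth segments (the Hungarian matching of Cheng et al. (2021) is omitted), and the Lovász term and invalid-pixel masking are left out for brevity; shapes and names are illustrative only.

```python
import torch
import torch.nn.functional as F

def dice_loss(mask_logits, gt_masks, eps=1.0):
    """Soft dice loss over predicted binary masks (Milletari et al. 2016).
    mask_logits, gt_masks: (Q, H, W) per-query mask predictions/targets."""
    probs = mask_logits.sigmoid().flatten(1)       # (Q, HW)
    gt = gt_masks.flatten(1).float()
    num = 2.0 * (probs * gt).sum(-1)
    den = probs.sum(-1) + gt.sum(-1)
    return (1.0 - (num + eps) / (den + eps)).mean()

def focal_loss(mask_logits, gt_masks, alpha=0.25, gamma=2.0):
    """Binary focal loss (Lin et al. 2017) on the mask logits."""
    gt = gt_masks.float()
    ce = F.binary_cross_entropy_with_logits(mask_logits, gt, reduction="none")
    p_t = torch.exp(-ce)                           # probability of the true label
    a_t = alpha * gt + (1 - alpha) * (1 - gt)
    return (a_t * (1 - p_t) ** gamma * ce).mean()

def total_loss(class_logits, gt_classes, mask_logits, gt_masks):
    """Eq. (2): classification loss plus binary mask loss, assuming
    queries are pre-matched to targets."""
    l_cls = F.cross_entropy(class_logits, gt_classes)
    l_mask = focal_loss(mask_logits, gt_masks) + dice_loss(mask_logits, gt_masks)
    return l_cls + l_mask
```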
Data Augmentation for Mask-Classification
In training LiDAR-based segmentation, simple data augmentations of the LiDAR point cloud are widely used, including random rotation, translation, flipping, and point dropping. However, with only these common augmentations, the performance of mask classification cannot be on par with that of its per-pixel baseline counterpart (Cortinhal et al. 2020). Although the mask-classification paradigm achieves unified semantic and panoptic segmentation with state-of-the-art performance on RGB images (Cheng et al. 2021, 2022), it still has the following issues when applied to LiDAR range-view-based segmentation.
1. The mask-classification paradigm demands a large amount of diverse training data. The publicly available LiDAR segmentation datasets are relatively limited and small, and thus cannot satisfy the requirements of transformer architectures (Vaswani et al. 2017; Dosovitskiy et al. 2021), which usually need more training data than convolutional neural networks.
2. The mask-classification paradigm relies heavily on contextual cues. Limited datasets are often biased and mislead neural networks into finding shortcuts during training, resulting in poor generalization to rare or unseen situations (Nekrasov et al. 2021; Shetty et al. 2019). This is especially true for the mask-classification paradigm, since the transformer decoder only queries the global context embeddings to find objects. Unfortunately, pole-like things are often tied to context knowledge, e.g., poles usually appear at the side rather than in the middle of the road.
3. The paradigm is strongly biased toward high-frequency classes. Mask classification makes predictions with queries whose number is larger than the number of categories. The supervision of this procedure is query-wise rather than pixel-wise, so a mistaken category for a query usually produces mistakes over a large area. Because LiDAR data collected in driving scenarios are very long-tailed, e.g., points in pole-like categories are far fewer than in dominant categories such as road, the network may perform poorly on the rare classes.
To deal with the aforementioned problems, we propose a novel data-augmentation method composed of three meta-operations: "Weighted", "Paste", and "Drop". The procedure is shown in Fig. 2. Initially, two frames are randomly selected from the dataset (denoted the first and second frames), and the common data augmentations are applied. The paste operation selects the long-tailed objects from the second frame and adds them to the first frame. The drop operation selects points of non-long-tailed classes in the first frame and deletes them (Fig. 2a). The weighted operation attaches a probability to the paste and drop operations (Fig. 2b).
Our Weighted Paste Drop (WPD) data augmentation significantly enlarges the size and diversity of the dataset. To prevent the model from relying on too many contextual cues, the WPD scheme weakens the role of context priors, as shown in Fig. 2c. The "paste" operation can create unusual or even impossible scene scenarios for the training set; in this example, a "pole" is "pasted" into the middle of a road, which never appears in the original dataset and cannot be created by standard data augmentation. In this way, the "paste" operation weakens the context priors. Further, the "drop" operation directly reduces the context bias by removing areas with high context information; in this example, the network is trained to recognize a "pole" without a sidewalk in its vicinity. To make the dataset less biased, we drop high-frequency classes, such as road and car, with high probability and frequently paste the less-frequent pole-like classes. In the experiments, we show that WPD improves the performance of both the baseline method and ours, and that the improvement of our method is more evident, indicating that mask classification is effective for the semantic segmentation of pole-like objects when combined with our data augmentation.
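A minimal sketch of the WPD meta-operations on labeled point clouds is shown below. The per-class paste/drop probabilities and the class names are hypothetical, and instance extraction and placement are simplified to point-level copying; a real implementation would paste whole instances at plausible positions.

```python
import numpy as np

# Hypothetical per-class probabilities: rare pole-like classes are pasted
# often; dominant classes (e.g., road, car) are dropped often.
PASTE_PROB = {"pole": 0.8, "trunk": 0.8, "traffic-sign": 0.8}
DROP_PROB = {"road": 0.5, "car": 0.5}

def weighted_paste_drop(points_a, labels_a, points_b, labels_b, rng=None):
    """Weighted Paste Drop (WPD): paste long-tailed class points from a
    second frame into the first, and drop high-frequency-class points
    from the first, each with a class-dependent probability."""
    rng = rng or np.random.default_rng()
    # "Paste": copy points of rare classes from frame B into frame A.
    paste_mask = np.zeros(len(labels_b), dtype=bool)
    for cls, p in PASTE_PROB.items():
        if rng.random() < p:
            paste_mask |= (labels_b == cls)
    # "Drop": remove points of frequent classes from frame A.
    keep_mask = np.ones(len(labels_a), dtype=bool)
    for cls, p in DROP_PROB.items():
        if rng.random() < p:
            keep_mask &= (labels_a != cls)
    points = np.concatenate([points_a[keep_mask], points_b[paste_mask]])
    labels = np.concatenate([labels_a[keep_mask], labels_b[paste_mask]])
    return points, labels
```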
Extraction of Pole Information
We choose $K$ pole-like categories from the semantic segmentation as the classes of pole-like objects, e.g., pole, trunk, and traffic sign. After training, the segmentation model predicts the region of each pole-like object with the corresponding category label. We use DBSCAN (Ester et al. 1996) to cluster the pole-like objects from the regions of these categories, and then extract the geometric and semantic information of the clustered objects. Let $P = \{p_1, \ldots, p_N\}$ denote all $N$ points in a scan, $P_i = \{p_i^1, \ldots, p_i^n\}$ denote the $n$ points belonging to the $i$th pole-like object, and let each element of $C_i = \{c_i^1, \ldots, c_i^n\}$ denote the predicted probability vector of the corresponding point in $P_i$ over the $K$ classes.
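As an illustration, the following sketch clusters the points predicted as pole-like classes into instances with DBSCAN. The `eps` and `min_pts` values are assumptions to be tuned, and clustering each class separately on the horizontal plane is one plausible reading of the procedure.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_pole_instances(points, labels, pole_classes, eps=0.5, min_pts=5):
    """Cluster points of the K pole-like classes into instances.
    points: (N, 3) scan; labels: (N,) predicted per-point classes."""
    instances = []
    for cls in pole_classes:
        cls_pts = points[labels == cls]
        if len(cls_pts) < min_pts:
            continue
        # Cluster on the horizontal plane so that a tall pole stays one
        # instance regardless of its height.
        ids = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(cls_pts[:, :2])
        for k in set(ids) - {-1}:        # -1 marks noise points in DBSCAN
            instances.append((cls, cls_pts[ids == k]))
    return instances
```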
To estimate the pole geometry, we follow Dong et al. (2021, 2023) and use least-squares circle fitting (Bullock 2006) to obtain the position $l = (l_x, l_y)$ and radius $r$ of the $i$th pole-like object. First, $P_i$ is projected onto the XY (horizontal) plane to obtain the X coordinates $U_i = \{u_i^1, \ldots, u_i^n\}$ and Y coordinates $V_i = \{v_i^1, \ldots, v_i^n\}$.
Then the relative coordinates $(u_c, v_c)$ of the circle center are obtained by solving the linear system
$$\begin{aligned} S_{uu} u_c + S_{uv} v_c &= (S_{uuu} + S_{uvv})/2, \\ S_{uv} u_c + S_{vv} v_c &= (S_{uuv} + S_{vvv})/2, \end{aligned} \tag{3}$$
where
$$\begin{gathered} S_{uu} = \sum_j (u_i^j - \bar{u})^2, \quad S_{vv} = \sum_j (v_i^j - \bar{v})^2, \quad S_{uv} = \sum_j (u_i^j - \bar{u})(v_i^j - \bar{v}), \\ S_{uuu} = \sum_j (u_i^j - \bar{u})^3, \quad S_{vvv} = \sum_j (v_i^j - \bar{v})^3, \\ S_{uuv} = \sum_j (u_i^j - \bar{u})^2 (v_i^j - \bar{v}), \quad S_{uvv} = \sum_j (u_i^j - \bar{u})(v_i^j - \bar{v})^2, \\ \bar{u} = \frac{1}{n} \sum_j u_i^j, \quad \bar{v} = \frac{1}{n} \sum_j v_i^j. \end{gathered}$$
Finally, the circle $L_i = (l_x, l_y, r)$ with position $(l_x, l_y)$ and radius $r$ is obtained by
$$l_x = u_c + \bar{u}, \quad l_y = v_c + \bar{v}, \quad r = \frac{1}{n} \sum_{j=1}^{n} \sqrt{(u_i^j - l_x)^2 + (v_i^j - l_y)^2}. \tag{4}$$
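A direct NumPy implementation of the least-squares circle fit in Eqs. (3)-(4) might look as follows (function and variable names are illustrative).

```python
import numpy as np

def fit_pole_circle(uv):
    """Least-squares circle fit of Eqs. (3)-(4).
    uv: (n, 2) array of a pole's points projected onto the XY plane."""
    u = uv[:, 0] - uv[:, 0].mean()      # centered coordinates (u - u_bar)
    v = uv[:, 1] - uv[:, 1].mean()
    # Build and solve the 2x2 linear system of Eq. (3).
    S = np.array([[(u * u).sum(), (u * v).sum()],
                  [(u * v).sum(), (v * v).sum()]])
    b = 0.5 * np.array([(u ** 3).sum() + (u * v ** 2).sum(),
                        (u ** 2 * v).sum() + (v ** 3).sum()])
    uc, vc = np.linalg.solve(S, b)      # relative circle center
    lx, ly = uc + uv[:, 0].mean(), vc + uv[:, 1].mean()
    # Eq. (4): radius as the mean distance from the points to the center.
    r = np.sqrt((uv[:, 0] - lx) ** 2 + (uv[:, 1] - ly) ** 2).mean()
    return lx, ly, r
```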
Note that in Eq. (4), $r$ is computed directly rather than by the method in Bullock (2006). To extract the semantic information, we collect the per-pixel feature vectors $\mathbf{F}_i = \{f_i^1, \ldots, f_i^n\}$, where $f_i^j$, $j = 1, \ldots, n$, is the $d$-dimensional embedding of each pixel in the range-view image that belongs to the $i$th pole-like object. The feature vector $F_i$ of this pole-like object is then calculated by
$$F_i = \max(\mathbf{F}_i, 2), \tag{5}$$
where $\max(\cdot, 1)$ denotes the max operation over each column of $\mathbf{F}_i$ and $\max(\cdot, 2)$ the max operation over each row.
The probability vector $C_i$ and category $\hat{y}_i$ are calculated as
$$C_i = \frac{1}{n} \sum_j c_i^j, \tag{6}$$
$$\hat{y}_i = \operatorname{argmax} C_i, \tag{7}$$
where $\operatorname{argmax}$ returns the index of the maximum value in the probability vector $C_i$. Finally, the geometric attribute $L_i$ and the semantic attributes $F_i$, $C_i$, $\hat{y}_i$ are used to describe the $i$th pole-like object. The pole extraction procedure is shown in Fig. 3. The above geometric and semantic information is used in both mapping and localization. In the next section, we introduce the mapping of these pole objects.
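For completeness, a small sketch of the attribute pooling in Eqs. (5)-(7):

```python
import numpy as np

def instance_attributes(pix_feats, pix_probs):
    """Pool per-pixel predictions of one pole-like instance.
    pix_feats: (n, d) pixel embeddings; pix_probs: (n, K) class probs."""
    F_i = pix_feats.max(axis=0)      # Eq. (5): element-wise max over pixels
    C_i = pix_probs.mean(axis=0)     # Eq. (6): mean class probability
    y_i = int(np.argmax(C_i))        # Eq. (7): predicted category
    return F_i, C_i, y_i
```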
Creation of Multi-layer Semantic Pole-Map
After extracting the information of the pole-like instances, they are represented as circles $L_i$ with semantic information in each LiDAR scan. The pole-like instances from all scans in a sequence are then integrated into the global map to build the semantic pole-map. Because a pole-like object may be observed multiple times during mapping, pole-like instances can be duplicates, i.e., observations of the same physical pole-like object. We therefore utilize the ground-truth ego-motion to integrate the duplicates of each segmented instance, and a final landmark on the map is the aggregation of these duplicates. Different from Dong et al. (2021, 2023), we take the categories of the pole-like instances into consideration for more robust mapping.
Specifically, we divide the instances into groups according to their categories $\hat{y}_i$. For each group, we aggregate the pole-like instances into clusters according to their connectivity, where two instances are deemed connective if they overlap with each other. In this way, multiple clusters may be generated for one pole-like object according to the different categories of different observations. We denote the $i$th cluster of circles, feature vectors, and probability vectors as $L_i^c = \{L_1, \ldots, L_m\}$, $F_i^c = \{F_1, \ldots, F_m\}$, and $C_i^c = \{C_1, \ldots, C_m\}$, respectively, where $m$ is the number of pole-like instances in the cluster. The final circle $L_i^g$, feature vector $F_i^g$, probability vector $C_i^g$, and category $\hat{y}_i^g$ of the $i$th pole-like landmark in the map are the aggregation of the instances in each cluster,
$$L_i^g = \frac{1}{m} \sum_j L_j, \; L_j \in L_i^c, \quad F_i^g = \frac{1}{m} \sum_j F_j, \; F_j \in F_i^c, \quad C_i^g = \frac{1}{m} \sum_j C_j, \; C_j \in C_i^c, \quad \hat{y}_i^g = \operatorname{argmax} C_i^g. \tag{8}$$
Instances with different categories are aggregated into different clusters and generate different landmarks. In this way, a multi-layer pole-map is built, in which each layer contains the pole-like landmarks of one category. This differs from Dong et al. (2021, 2023), as we aggregate the instances in separate categories, as shown in Fig. 4. When averaging the geometric and semantic information of the instances in each cluster, multi-layer mapping reduces the mixture of ambiguous information across categories caused by noisy pole segmentation and extraction, and thus provides more robust landmarks in the pole-map. In the ablation study (Section Multi-Layer Mapping) we investigate the superiority of multi-layer mapping. The mapping procedure is shown in Fig. 3.
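A sketch of the per-layer landmark aggregation of Eq. (8) is given below; the instance record layout is an assumption made for illustration.

```python
import numpy as np

def aggregate_layer(clusters):
    """Aggregate pole-like instances into map landmarks per Eq. (8).
    `clusters` maps a cluster id to a list of instances; each instance is
    a dict with keys 'L' (x, y, r), 'F' (d-dim feature) and 'C' (K-dim
    class probabilities). All instances in one cluster share the same
    predicted category, as in the multi-layer map."""
    landmarks = []
    for insts in clusters.values():
        L = np.mean([i["L"] for i in insts], axis=0)   # mean circle
        Fv = np.mean([i["F"] for i in insts], axis=0)  # mean feature vector
        C = np.mean([i["C"] for i in insts], axis=0)   # mean class probs
        landmarks.append({"L": L, "F": Fv, "C": C, "y": int(np.argmax(C))})
    return landmarks
```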
Semantic-Aware Particle Filtering
During localization in the semantic pole-map, the observed pole-like objects are used to estimate the vehicle's ego-motion in the map (Fig. 3). We propose a semantic-aware particle-filter localization approach based on observations of pole-like structures to achieve robust and accurate localization. Conventionally, the general particle filter has been widely used for localization in a map (Dellaert et al. 1999).
There are two critical issues in particle-filter localization given a map. The first is data association, which establishes the correspondences between the pole-like objects in the online observation and those in the map; the association is unknown and must be created online. The second is updating each particle's weight.
Conventional methods deal with the first issue by nearest-neighbor matching. However, the data-association step usually contains erroneous correspondences due to dense dynamic participants, especially pole-like pedestrians, during online localization or mapping. To handle this issue, some methods incorporate semantic information into the optimization for proper data association (Jiang et al. 2021; Miller et al. 2021; Akai et al. 2020; Bavle et al. 2018; Zimmerman et al. 2022).
For the second issue, the weight of each particle in the conventional particle-filter method is estimated from the probability of the observation conditioned on the particle's state (Dellaert et al. 1999). Given the $k$th particle, assume its weight at time $t-1$ is $w_{t-1}^k$, and let $s_t^k$ denote its state at time $t$. With the pole observations $O_t = \{o_t^1, \ldots, o_t^m\}$ and the pole-map $M$, the observation probability of the particle is
$$p_t^k = P(O_t \mid s_t^k; M), \tag{9}$$
which is commonly realized as a product of Gaussian distributions over the distances between the observed poles, transformed into the map frame by $s_t^k$, and their associated map landmarks,
$$p_t^k = \prod_{j=1}^{m} P\!\left(\|M_{a_j^k} - s_t^k o_t^j\|_2 \,\middle|\, 0; \sigma^2\right), \tag{10}$$
where the correspondence $a_j^k$ of the $j$th observation is obtained by nearest-neighbor searching,
$$a_j^k = \operatorname{argmin}_l \|M_l - s_t^k o_t^j\|_2. \tag{11}$$
The particle weight is then updated as
$$w_t^k \propto p_t^k\, w_{t-1}^k. \tag{12}$$
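A compact sketch of this conventional update (Eqs. (9)-(12)) for a 2D pose is given below; the measurement standard deviation and the use of a k-d tree for the nearest-neighbor search of Eq. (11) are implementation assumptions, and in practice the tree over the map would be built once.

```python
import numpy as np
from scipy.spatial import cKDTree

def update_weight(particle_pose, obs_xy, map_xy, w_prev, sigma=0.5):
    """Conventional weight update: transform the observed pole positions
    into the map frame with the particle pose (x, y, theta), associate
    each to its nearest map landmark, and score the residuals with a
    Gaussian. sigma is a hypothetical measurement std in meters."""
    x, y, theta = particle_pose
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    obs_map = obs_xy @ R.T + np.array([x, y])   # s_t^k applied to o_t^j
    d, _ = cKDTree(map_xy).query(obs_map)       # Eq. (11): nearest neighbor
    # Eq. (10): product of Gaussians, computed in log space for stability.
    log_p = -0.5 * np.sum((d / sigma) ** 2)
    return w_prev * np.exp(log_p)               # Eq. (12), up to normalization
```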
A few existing works have proposed to utilize semantic information to update the particle weights (Jeong et al. 2020; Bernuy and Ruiz-del Solar 2018; Yan et al. 2019). Different from them, we utilize the semantic information of pole-like landmarks extracted by the LiDAR sensor, which is efficient and robust to illumination. In this work, the semantic attributes $F_i$ and $\hat{y}_i$ in Eq. (5) and Eq. (7), respectively, and $F_i^g$ and $\hat{y}_i^g$ in Eq. (8), are utilized to improve the landmark correspondence and the particle-weight calculation. With these semantic attributes, poles of different types can be distinguished and the correspondence quality is taken into consideration. We introduce these aspects in turn below.
Semantic-Aware Inconsistency
Following Eq. (11), nearest-neighbor searching is used to find the correspondences for each particle. As shown in Fig. 5, the expected landmark observations of the three particles are shown in the grey rectangles; the dark hexagon and square are two landmarks on the map. The associations of the expected landmarks of $s_t^k$, $s_t^{k+1}$, and $s_t^{k+2}$ to the map landmarks, obtained by nearest-neighbor searching, are shown as the red and green solid lines. The red solid lines correspond to wrong associations; the true associations are those represented by the green solid lines and the red dotted ones. In this example, the distance between the wrongly associated landmarks is smaller than that of the true associations. Therefore, nearest-neighbor searching based on Eq. (11) can result in wrong particle-weight estimation, i.e., an over-estimation of the weight of a particle that should have a low weight.
To formalize this problem, suppose that at time $t$ we have $m$ landmark observations with the ground-truth correspondences
$$\bar{A} = \{\bar{a}_0, \bar{a}_1, \ldots, \bar{a}_m\} \tag{13}$$
and the correspondences approximated by the $k$th particle (based on the nearest-neighbor search of Eq. (11))
$$A^k = \{a_0^k, a_1^k, \ldots, a_m^k\}, \tag{14}$$
where $\bar{a}_i$ and $a_i^k$ are indices of pole-like landmarks in the map $M$. We have
$$\begin{aligned} \bar{D}_t^k &= \{\bar{d}_t^k(j) \mid \bar{d}_t^k(j) = \|M_{\bar{a}_j} - s_t^k o_t^j\|_2,\ j = 1, \ldots, m\}, \\ D_t^k &= \{d_t^k(j) \mid d_t^k(j) = \|M_{a_j^k} - s_t^k o_t^j\|_2,\ j = 1, \ldots, m\}, \end{aligned} \tag{15}$$
where $M_\#$ is the location of the $\#$th landmark in the map $M$. The ground-truth correspondence $\bar{A}$ is independent of the particle state, while the approximated correspondence $A^k$ depends on the specific particle state when the nearest-neighbor scheme of Eq. (11) is used. From the above analysis with Fig. 5, we know
$\| M_{a_j^k} - s_t^k o_t^j \|_2 \le \| M_{\bar{a}_j} - s_t^k o_t^j \|_2, \quad d_t^k(j) \le \bar{d}_t^k(j).$  (16)
Thus, based on Eq. 10, the probability $\bar{p}_t^k$ given the ground-truth correspondence $\bar{A}$ and the probability $p_t^k$ given the approximated correspondence $A^k$ satisfy
$p_t^k \ge \bar{p}_t^k.$  (17)
The equality holds when the approximated correspondence of the particle is the same as the ground-truth correspondence, which is achieved when the particle's state is the same as or close to the vehicle's ground-truth state (the particle $s_t^k$ in Fig. 5). However, for particles whose states are not close to the ground-truth pose (the particles $s_t^{k+1}$ and $s_t^{k+2}$ in Fig. 5), the inequality holds, which gives these particles higher weights than expected. Thus, the equality holds for the "good" particle $s_t^k$ and the inequality holds for the "bad" particles $s_t^{k+1}$ and $s_t^{k+2}$, which means large weights are assigned to the "bad" particles; this hinders convergence and degrades localization performance.
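As a small numeric illustration (our own numbers, for intuition only): with $\sigma = 0.5$ m, an expected observation lying 0.3 m from a wrongly associated landmark but 1.0 m from the true one contributes a likelihood factor of $\exp(-0.5 \cdot (0.3/0.5)^2) \approx 0.84$ instead of $\exp(-0.5 \cdot (1.0/0.5)^2) \approx 0.14$ in Eq. 10, over-estimating this observation's contribution to the particle's weight by roughly a factor of six.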
To address this problem, we incorporate the data-association quality of the k-th particle into the calculation of particle weights. Given the semantic observation $O_t^s$ consisting of $F_i$ and $\hat{y}_i$, and the map $M^s$ consisting of $F_i^g$ and $\hat{y}_i^g$, we define the semantic-aware correspondence inconsistency $I_t^k$ to evaluate the data-association quality as
$I_t^k = I(A^k \mid O_t^s, M^s),$  (18)
where $I$ will be introduced later. Then the probability of particle $s_t^k$ in Eq. 9 is replaced by
$\tilde{p}_t^k = P(O_t, O_t^s \mid s_t^k; M; M^s) = P(O_t \mid s_t^k; M)\, P(O_t^s \mid s_t^k; M^s),$  (19)
where $O_t$ and $O_t^s$ are assumed to be independent. Because the conditional probability of $O_t^s$ depends only on $A^k$ (decided by $s_t^k$) and $M^s$, $\tilde{p}_t^k$ can be reduced to
$\tilde{p}_t^k = p_t^k \, P(O_t^s \mid A^k; M^s).$  (20)
Different from $s_t^k$, which is unique to each particle, the $A^k$ of different particles can be the same as long as their correspondences are the same. Suppose the function $I$ returns 0 for a correct correspondence and 1 for a wrong one. As in Eq. 10, Eq. 20 can then be realized as a product of Gaussian distributions, assuming the independence of each observation,
$\tilde{p}_t^k = p_t^k \, P(I_t^k) = p_t^k \prod_i^{N_t} P(0 \mid 0; \sigma^2) \prod_i^{N_f} P(1 \mid 0; \sigma^2) \le p_t^k \prod_i^{N_t + N_f} P(0 \mid 0; \sigma^2) = p_t^k \, P(O_t^s \mid \bar{A}; M^s),$  (21)
where $I_t^k$ represents the correspondence inconsistency between the ground-truth semantic observations $O_t^s$ and the expected ones made by the k-th particle given the correspondence $A^k$ and the semantic pole-map $M^s$. $N_t$ and $N_f$ are the numbers of correct and wrong correspondences, respectively. Again, because the approximated correspondence is accurate for particles close to the ground truth, the equality holds for the "good" particles and the inequality holds for the "bad" particles, which means smaller weights are assigned to the "bad" particles and localization performance improves.
In Eq. 21, we need the correspondence-inconsistency evaluation function $I$ to update $\tilde{p}_t^k$. We therefore propose to use the semantic discrepancy to realize the function $I$ as
$I(A^k) = \{1 - \cos(F_{a_i^k}^g, F_i) \mid i \in (0, 1, \ldots, n)\},$  (22)
where $\cos(\cdot)$ is the cosine similarity and $a_i^k$ is the index of the associated landmark in $M^s$ of the i-th observation given the k-th particle. In Eq. 22, when the d-dimensional feature vectors $F_{a_i^k}^g$ and $F_i$ are close, their inconsistency is small and they are likely to be the same pole-like object, which approximates the function $I$.
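To make Eqs. 20-22 concrete, below is a minimal Python sketch; interpreting the inconsistency values of Eq. 22 as zero-mean Gaussian residuals in Eq. 21 is our reading of the formulation, and all names and the value of sigma are illustrative.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two d-dimensional semantic feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def semantic_inconsistency(assoc, obs_features, map_features):
    """Semantic-aware correspondence inconsistency I(A^k), Eq. 22.

    assoc:        per-observation map indices a_i^k of the k-th particle.
    obs_features: (n, d) online semantic features F_i.
    map_features: (N, d) aggregated map features F_i^g.
    """
    return np.array([1.0 - cosine_similarity(map_features[a], f)
                     for a, f in zip(assoc, obs_features)])

def semantic_weight(p_k, inconsistency, sigma=0.3):
    # Modulate the geometric likelihood p_t^k by P(O_t^s | A^k; M^s)
    # (Eqs. 20-21), treating each inconsistency value as a zero-mean
    # Gaussian residual.
    return p_k * np.exp(np.sum(-0.5 * (inconsistency / sigma) ** 2))
```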
Semantic-Aware Nearest Neighbor
To further improve the correspondence quality, the semantic categories of the pole-like landmarks are used, as shown in Fig. 6. We replace $M$ in Eq. 11 by its subset $M' = \{l_j \mid l_j \in M, \hat{y}_j^g = \hat{y}_i\}$, where $\hat{y}_i$ is the predicted semantic category of the i-th observed pole-like landmark and $\hat{y}_j^g$ is that of the j-th pole-like landmark in the map. We then have
$D_t'^k = D'(O_t \mid s_t^k; M') = \{d_t'^k(j) \mid d_t'^k(j) = \min_{l' \in M'} \| l' - s_t^k o_t^j \|_2, \; o_t^j \in O_t\}.$  (23)
This way, the correspondences between the pole-like landmarks in the observation and in the map are restricted to the same semantic category. Comparing $D_t'^k$ with $D_t^k$, we have
$\min_{l' \in M'} \| l' - s_t^k o_t^j \|_2 \ge \min_{l \in M} \| l - s_t^k o_t^j \|_2, \quad d_t'^k(j) \ge d_t^k(j),$  (24)
and thus $p_t'^k$ satisfies
$p_t'^k \le p_t^k.$  (25)
Similar to Eqs. 16 and 17, within any one of the K classes of pole-like objects, the nearest neighbor is always at least as close as any other neighbor of that class, including the neighbor representing the ground-truth correspondence, so we have
$d_t'^k(j) \le \bar{d}_t^k(j),$  (26)
$p_t'^k \ge \bar{p}_t^k,$  (27)
and, combined with Eqs. 24 and 25, we have
$d_t^k(j) \le d_t'^k(j) \le \bar{d}_t^k(j),$  (28)
$p_t^k \ge p_t'^k \ge \bar{p}_t^k.$  (29)
This means that the probability of a particle under semantic data association ($p_t'^k$) is always smaller than or equal to that under the normal data association ($p_t^k$), and is always closer to the expected probability derived from the ground-truth correspondences. The equality holds for particles whose poses are close to the true poses, when both nearest-neighbor and semantic nearest-neighbor searching find the true correspondences. The inequality holds for particles whose poses are far from the true ones, when semantic nearest-neighbor searching finds better correspondences than plain nearest-neighbor searching. In other words, the weights of these "bad" particles are reduced by finding a farther correspondence. Simply speaking, the proposed semantic-aware nearest neighbor moves the data association closer to the true correspondences using semantic information, which improves localization performance.
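A minimal sketch of the semantic-aware nearest-neighbor search of Eq. 23 follows, assuming observations and map landmarks carry predicted category labels; the handling of categories absent from the map is our own assumption, and all names are illustrative.

```python
import numpy as np

def semantic_nearest_neighbor(obs_map, obs_labels, landmarks, landmark_labels):
    """Eq. 23: restrict candidate correspondences to the same category.

    obs_map:         (m, 2) observed pole positions in the map frame.
    obs_labels:      (m,) predicted categories ŷ_i of the observations.
    landmarks:       (L, 2) landmark positions in the map M.
    landmark_labels: (L,) categories ŷ_j^g of the map landmarks.
    Returns the matched indices a_i^k and distances d_t'^k(j).
    """
    assoc, dists = [], []
    for o, y in zip(obs_map, obs_labels):
        candidates = np.where(landmark_labels == y)[0]   # the subset M'
        if candidates.size == 0:
            assoc.append(-1)               # no landmark of this category
            dists.append(np.inf)
            continue
        d = np.linalg.norm(landmarks[candidates] - o, axis=1)
        j = int(np.argmin(d))
        assoc.append(int(candidates[j]))
        dists.append(float(d[j]))
    return np.array(assoc), np.array(dists)
```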
Experiments
Datasets
We use SemanticKITTI (Behley et al. 2019) to validate our methods. SemanticKITTI provides 43551 LiDAR scans collected by a Velodyne HDL-64E LiDAR in 22 sequences. To evaluate semantic segmentation performance, sequences 00-10 except 08 are used as the training set and sequence 08 as the validation set; semantic labels are available for the training and validation sets. Besides, it provides ground-truth poses estimated by Behley and Stachniss (2018) for all sequences. Localization performance is evaluated on sequences 11-21 to show the generalization ability of our method.
Pole Segmentation
Metrics. We use mIoU (Everingham et al. 2015) as the metric for the semantic segmentation of pole-like objects; a higher mIoU indicates more overlap between prediction and ground truth. We use sequences 00-10 except 08 to train the segmentation model and evaluate the model on sequence 08. We use SalsaNext (Cortinhal et al. 2020) as the backbone in the mask-classification paradigm. We use all the categories, e.g., road, car, and building, for training, and focus the evaluation on the 3 pole-like categories, i.e., pole, trunk, and traffic sign. These 3 categories are used to build the semantic pole-map. During training and inference, the point cloud is projected into a range view with a resolution of 64 × 2048. Although there is information loss after projection, i.e., some points are out of view and some points are occluded by the ones in front of them, we do not re-project the segmentation result back to the original point cloud, so there is no leaking problem as in Milioto et al. (2019). This means we only predict the semantics of those points available in the range-view representation instead of all points, and these points are enough for feature learning and pole-map construction. We compare our method with SalsaNext (Cortinhal et al. 2020) and FIDNet (Zhao et al. 2021) in evaluating semantic segmentation performance, as they are semantic segmentation methods based on the range-view representation of LiDAR scans. As shown in Table 1, our model trained with the mask-classification paradigm and data augmentation achieves the best performance. Fig. 7 shows three examples of LiDAR semantic segmentation with extracted poles. The color of each LiDAR point represents the predicted semantic label. The cylinders represent the poles extracted from the predicted semantic labels, with categories denoted by color and radius denoted by thickness. The left, middle, and right panels show scenes with regularly planted trees, pole-like pedestrians, and dense dynamic participants on the road or roadside, respectively. As shown on the left, our method accurately extracts the pole-like structures, i.e., trees on the roadside. As shown in the middle, no pole-like pedestrians are misclassified as poles. As shown on the right, pole-like structures, i.e., traffic signs on the highway, can be localized among dense dynamic participants. Next, we analyze the mapping performance based on the semantic segmentation results.
Pole-Map Creation
Metrics. We use the F1 score (Sokolova and Lapalme 2009) to evaluate the precision of our pole-map. The F1 score combines recall and precision for pole-like objects. A pole-like object is regarded as a true positive (TP) if the distance between the detected pole and its nearest neighbor in the ground-truth map is no larger than 1 meter; otherwise, it is a false positive (FP). Poles that are present in the ground-truth map but not predicted are regarded as false negatives (FN). The recall R, precision P, and F1 score F are calculated as:
$R = N_{TP}/(N_{TP} + N_{FN}), \quad P = N_{TP}/(N_{TP} + N_{FP}), \quad F = 2RP/(R + P),$  (30)
where $N_{TP}$, $N_{FP}$, and $N_{FN}$ are the numbers of TPs, FPs, and FNs, respectively.
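For reference, the sketch below evaluates Eq. 30 against the ground-truth map using the stated 1-meter threshold; the greedy one-to-one matching is an assumption of ours, not a detail given in the text.

```python
import numpy as np

def map_f1(pred_poles, gt_poles, threshold=1.0):
    """Recall, precision, and F1 (Eq. 30) of a predicted pole-map against
    the ground-truth map, greedily matching each predicted pole to its
    nearest unused ground-truth pole within `threshold` meters."""
    used = np.zeros(len(gt_poles), dtype=bool)
    tp = 0
    for p in pred_poles:
        d = np.linalg.norm(gt_poles - p, axis=1)
        d[used] = np.inf                    # each GT pole matches at most once
        j = int(np.argmin(d))
        if d[j] <= threshold:
            used[j] = True
            tp += 1
    fp = len(pred_poles) - tp
    fn = len(gt_poles) - tp
    r = tp / (tp + fn) if tp + fn else 0.0
    p_ = tp / (tp + fp) if tp + fp else 0.0
    f = 2 * r * p_ / (r + p_) if r + p_ else 0.0
    return r, p_, f
```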
To build our pole-map, we use the ground-truth ego-poses provided by the SemanticKITTI dataset. Specifically, we use the frames every δ_d meters (selected as keyframes) to build our pole-map for each sequence, i.e., the pole-like objects are segmented in each keyframe and aggregated based on their locations and the ground-truth ego-poses of the vehicle. δ_d is set to 10 in our experiments, as the position difference between two consecutive frames of mapping and localization is then 5 meters on average, which is large enough to evaluate localization performance. Different choices of δ_d are investigated in Section The Effect of δ_d.
Because the depth of objects beyond 50 meters is generally quite noisy, pole-like objects beyond the range of 50 meters are ignored. To evaluate our pole-map accuracy, we compare it with the ground truth pole-map provided by Dong et al. (2023), which is built from the ground-truth semantic labels in SemanticKITTI. In SemanticKITTI sequences 00-10, sequence 08 is usually used as the validation set and the remaining sequences are used as the training set. In addition to this splitting, we also use sequence 01 as the validation set and the remaining as the training set to further demonstrate the generalization ability.
The comparison is shown in Table 2. We compare our pole-map accuracy with that of Dong et al. (2021, 2023), who created their own pole-maps based on their pole detection results. Although the maps of Dong et al. (2021, 2023) are built with many more frames than ours in each sequence, our method outperforms them by a large margin. Because the code of the learning-based method of Dong et al. (2023) has not yet been released, we only fine-tune the hyperparameters of Dong et al. (2021) to build the map with our defined dataset as the baseline method (Dong et al. 2021)††. Besides, the accuracy of the pole-map built with our baseline segmentation method (Cortinhal et al. 2020) is also compared. Among these methods, our model achieves the best mapping performance. Table 3 shows that our method detects more poles on average, N_s, from a LiDAR scan than the baseline method (Dong et al. 2021)††, contributing to an increase of the average number of observations N_o of a pole without largely increasing the number of poles N_m in a map. This shows that our segmentation method obtains more consistent pole extraction results. Fig. 8 shows that the baseline method tends to predict more pole-like objects in sequence 01 and fewer in sequence 08, producing more false positives and false negatives. In contrast, our method is much higher in precision and recall. Next, we compare the localization performance to show the importance of accurate pole-map creation.
Pole Localization
Metrics. We calculate the mean absolute errors (∆) and root-mean-squared errors (RMSE) of position and heading to evaluate localization performance. These metrics represent the distance in meters from the predicted to the ground-truth position and the difference in angle between the predicted and ground-truth heading; the smaller these metrics, the better the performance. Dataset. To investigate the effectiveness of semantic information in localization, we evaluate the performance on the SemanticKITTI dataset. Different from pole segmentation and pole-map creation, which are evaluated on SemanticKITTI validation sequences 01 and 08, localization performance is evaluated on sequences 11-21 of the SemanticKITTI dataset to show its generalization ability.
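Under the stated definitions, these metrics can be computed as in the following sketch; wrapping heading differences into (−180°, 180°] is our assumption for a meaningful angular error, and headings are assumed to be given in radians.

```python
import numpy as np

def localization_errors(pred_xy, gt_xy, pred_yaw, gt_yaw):
    """Mean absolute error (Δ) and RMSE on position [m] and heading [°]."""
    pos_err = np.linalg.norm(pred_xy - gt_xy, axis=1)
    # Wrap heading differences into (-180°, 180°] before averaging.
    ang_err = np.abs((np.degrees(pred_yaw - gt_yaw) + 180.0) % 360.0 - 180.0)
    return (pos_err.mean(), np.sqrt((pos_err ** 2).mean()),
            ang_err.mean(), np.sqrt((ang_err ** 2).mean()))
```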
In the SemanticKITTI dataset, the vehicle mostly drives each specified route only once. Therefore, we do not have enough repeated routes to test localization performance, and we generate data suitable for localization purposes from the original SemanticKITTI dataset. Our expectation is that the generated data should be as close as possible to real data collected when a vehicle drives in the same scene at least twice, and that the data used for localization should be different from the data used for mapping.
For that, we build our pole-maps from keyframes sampled every δ_d meters along the route based on the ground-truth ego-poses. Our localization data is generated by selecting the frames that lie just in the middle of two consecutive keyframes used for creating the pole-map. Fig. 9 illustrates how we separate each original SemanticKITTI sequence for pole-map creation and localization purposes, respectively. In this way, two subsets of frames of the same original SemanticKITTI sequence are separated without overlap for mapping and localization, respectively. Fig. 10 shows examples of subsets of frames from SemanticKITTI sequence 08 using different δ_d, in which the sequence is separated with δ_d set to 6 and 10, respectively. Generally, localization based on pole-like landmarks is difficult in scenarios where pole-like landmarks are absent or very sparse, because there are not enough landmarks to support the update of each particle's weight. Besides, the larger the accumulated error of the motion model used for predicting a particle's next movement, the harder it is for the particles to converge. From this point of view, the localization task is more difficult with a larger δ_d: fewer frames are available for mapping, more error accumulates between two localization frames with a longer span, and the frames used for localization differ more from the frames used for mapping. If the motion model used for particle pose prediction has high uncertainty, localization is even harder. The choice of δ_d is investigated in Section The Effect of δ_d.
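The frame split can be sketched as follows, assuming the ground-truth ego-poses are available as planar positions; measuring δ_d as cumulative traveled distance is our reading of "every δ_d meters", and the names are illustrative.

```python
import numpy as np

def split_frames(positions, delta_d=10.0):
    """Select mapping keyframes every delta_d meters of travel and take the
    frame closest to the arc-length midpoint between consecutive keyframes
    for localization. positions: (T, 2) ground-truth ego positions."""
    steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(steps)])    # traveled distance
    keyframes = [0]
    for t in range(1, len(positions)):
        if arc[t] - arc[keyframes[-1]] >= delta_d:
            keyframes.append(t)
    loc_frames = []
    for a, b in zip(keyframes[:-1], keyframes[1:]):
        mid = 0.5 * (arc[a] + arc[b])
        loc_frames.append(a + int(np.argmin(np.abs(arc[a:b + 1] - mid))))
    return keyframes, loc_frames
```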
Uncertainty. In a practical particle-filter-based localization framework given the map and an odometry prediction (from a wheel odometer or lidar/camera-based odometry), the localization errors mainly come from the uncertainty of the odometry prediction and the noise of the online observation.
To test the robustness of our localization against these two types of uncertainty, we simulate the two kinds of noise as follows. To simulate odometry uncertainty, we add random noise of different degrees to the ground-truth poses provided by SemanticKITTI to generate noisy odometry. Specifically, the odometry between two timestamps during localization is sampled from a multivariate normal distribution with the ground-truth odometry as the mean and a covariance decided by ϕ_odo. To simulate noisy online observations, we randomly drop ϕ_O percent of the poles detected online during localization to mimic the scenario of missing landmarks.
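A minimal sketch of the two noise models follows; the diagonal covariance scaled by ϕ_odo is our own parameterization, as the text does not specify the exact covariance, and the names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_odometry(gt_odom, phi_odo=0.4):
    """Sample odometry [dx, dy, dyaw] from a multivariate normal whose mean
    is the ground-truth odometry and whose covariance scales with phi_odo
    (an assumed diagonal parameterization)."""
    cov = np.diag((phi_odo * np.abs(gt_odom) + 1e-3) ** 2)
    return rng.multivariate_normal(gt_odom, cov)

def drop_observations(poles, phi_o=0.8):
    """Randomly drop a fraction phi_o of the online pole detections to
    mimic missing landmarks."""
    keep = rng.random(len(poles)) >= phi_o
    return poles[keep]
```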
Compared methods. We compare localization performance with different pole-maps and different localization methods. Dong et al. (2021) proposed extracting poles from the LiDAR's range-view representation by geometric rules, used the ground-truth ego-poses for mapping, and then performed particle-filter localization in the built map. Wang et al. (2021) proposed building the pole-map from the segmentation of pole-like landmarks based on RangeNet++ (Milioto et al. 2019) and localizing in it with iterative closest point (ICP) (Besl and McKay 1992) based on these pole-like landmarks. We fine-tuned their methods to build the pole-map with our defined dataset and localize in the pole-map under the simulated noise described above. Then we compare the localization performance of our approaches with theirs.
Finally, we use our pole-map to investigate the effectiveness of incorporating semantic information into the localization based on particle filtering.
Results. Table 4 shows that our pole-map is better for localization than the baselines. Dong et al. (2021) utilize LiDAR scans to extract pole-like landmarks. Comparing this baseline with our PF, localization in our pole-maps achieves better performance in all sequences. This means that, compared with the baseline method, which only exploits vertical structures in the pole-map without considering their actual semantics, our pole-map extracts landmarks useful for localization from our semantic segmentation. As shown in Table 3 in Section Pole-Map Creation, the baseline method provides fewer poles in each frame but generates a similar number of landmarks in the final pole-map, which means that it obtains less consistent pole-like objects. The landmarks we extract are more consistent across different frames than those of the baseline method. Wang et al. (2021) propose utilizing the LiDAR semantics predicted by a semantic segmentation model for mapping and localization. Empirically, we find that this method must rely on enough pole-like landmarks to obtain semantic descriptors and match them during localization, otherwise it easily fails. Because a larger δ_d results in fewer pole-like landmarks in the map, in some sequences where there are not enough poles it fails to find matched landmarks throughout the sequence and the localization never succeeds. In these cases, we set δ_d to 1.5 for denser poles and mark the localization results in this setting with *. Additionally, we set ϕ_O to 0% in its experiments.
Comparing this baseline method with our I+N-PF, we achieve better performance with lower position ∆ in most sequences based on our pole-maps. In sequence 12, where our method is worse, the number of frames used for mapping by Wang et al. (2021) is 4 to 6 times ours. In the other sequences, even with more frames for mapping and a lower observation uncertainty ϕ_O, Wang et al. (2021) cannot achieve higher performance. The results show that our method is better than the ICP-based localization, which requires confident matching.

Figure 10. The examples of subsets of frames from SemanticKITTI sequence 08 using different δ_d. Left, middle, and right correspond to 1.5, 6, and 10 meters for δ_d, respectively. The blue points represent the frames used in mapping, and the red points represent the frames used in localization.

Table 4. The comparison of localization performance in our semantic pole-maps and the maps of Dong et al. (2021) and Wang et al. (2021), created based on SemanticKITTI sequences 11-21. In our pole-map, the normal particle-filter localization scheme (Dellaert et al. 1999) and our semantic particle-filter localization are compared. PF denotes the normal particle-filter localization (Dellaert et al. 1999). I-PF denotes semantic particle-filter localization incorporating the Semantic-Aware Inconsistency. I+N-PF denotes semantic particle-filter localization incorporating both the Semantic-Aware Inconsistency and the Semantic-Aware Nearest Neighbor. In this experiment, ϕ_odo is set to 40%. ϕ_O is set to 80% in our methods and Dong et al. (2021), and 0% in Wang et al. (2021). For the results of Wang et al. (2021), * means that the localization fails across the whole sequence when δ_d = 6 or δ_d = 10; the results of Wang et al. (2021) shown in this table are therefore obtained with δ_d = 1.5.
As shown in Fig. 11, the baseline method (Dong et al. 2021) drifts along the route (left), as the inconsistent poles provide inconsistent updates of the particle weights, which hinders the particles' convergence. On the contrary, our pole segmentation provides consistent poles along the route and makes the particles converge to the ground-truth location (middle, right). However, as the odometry is very noisy and the observations contain the false-negative detections we simulate, the particles with a dispersed distribution estimate an inaccurate pose (middle). When incorporating semantic information into the particle filter, the particles, with more discriminative weights, concentrate around the ground-truth poses more easily, as we expected (right). As shown in Fig. 12, while the normal particle filter without semantic information has activation at multiple centers, our semantic-aware particle filter with both the semantic inconsistency and nearest-neighbor schemes provides a more deterministic estimation.
Figure 12. The visualization of particle weights estimated with and without semantic information. The particles are distributed in a square around the pose, and lighter colors represent higher weights. Top: the weights of the normal particle filter (Dellaert et al. 1999). Bottom: the weights of our semantic-aware particle filter with both the semantic inconsistency and nearest-neighbor schemes.
Quantitatively, in Table 4, we can see that the semantic particle filter improves localization performance compared with the one without semantics. For example, when δ_d is set to 10, the semantic-aware inconsistency I-PF reduces the position ∆ by 0.462 (20.84%) and the heading ∆ by 0.087 (8.28%). When incorporating both the semantic-aware inconsistency and the semantic-aware nearest neighbor, I+N-PF reduces the position ∆ by 0.542 (24.46%) and the heading ∆ by 0.099 (9.38%). Note that in this experimental setting, the odometry noise level ϕ_odo is set to 40% and the online landmark observation noise level ϕ_O is set to 80%. In the ablation studies, we analyze in detail the different mapping settings, the different noise levels in localization, and the different ways of utilizing semantic information.
Ablation Study
In this section, we investigate (i) the mapping and localization performance with and without multi-layer mapping, (ii) the localization performance under different choices of δ_d (the distance between two keyframes during mapping), (iii) the localization performance under different odometry noise ϕ_odo and observation noise ϕ_O, and (iv) how our pole-map and semantic information benefit the localization performance.
Multi-Layer Mapping
In our proposed methods, semantic information is utilized to improve particle-filter localization. However, it is often inevitable that the semantics contain noise from segmentation. When incorporating imperfect semantic information into localization, the noise from pole segmentation can degrade localization performance. To deal with this issue, we propose multi-layer mapping to reduce the semantic ambiguity in the pole-map and improve the semantic-aware localization performance with this map. To investigate its effectiveness, we compare the mapping and localization performance of normal single-layer mapping and the proposed multi-layer mapping. In this experiment, ϕ_odo is set to 40%. Because this experiment is designed only for evaluating mapping performance, ϕ_O is set to 0%, without simulating online pole detection noise. Table 5 shows that multi-layer mapping improves both precision and recall of mapping, indicating that it provides more TPs without introducing many FPs and FNs. More importantly, as shown in Table 6, the localization performance with multi-layer mapping is improved when incorporating semantic information into the particle-filter localization, indicating the robustness of multi-layer mapping when the semantic information is noisy.
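A simplified sketch of the category-aware aggregation behind multi-layer mapping is given below, following the description of Fig. 4 that observations of different semantics are aggregated into different layers; the 1-meter association radius and the running-average fusion are illustrative assumptions of ours.

```python
import numpy as np

def aggregate_multilayer(map_landmarks, obs, radius=1.0):
    """Aggregate a pole observation into the multi-layer map: observations
    are merged only into a nearby landmark *of the same category*, so
    conflicting semantics end up in separate layers (landmarks).

    map_landmarks: list of dicts {"xy", "label", "feat", "count"},
                   with "xy" and "feat" as numpy arrays.
    obs:           dict {"xy", "label", "feat"} in the map frame.
    """
    for lm in map_landmarks:
        if (lm["label"] == obs["label"]
                and np.linalg.norm(lm["xy"] - obs["xy"]) <= radius):
            n = lm["count"]
            # Running averages of position and semantic feature.
            lm["xy"] = (lm["xy"] * n + obs["xy"]) / (n + 1)
            lm["feat"] = (lm["feat"] * n + obs["feat"]) / (n + 1)
            lm["count"] = n + 1
            return map_landmarks
    map_landmarks.append({**obs, "count": 1})   # start a new layer/landmark
    return map_landmarks
```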
The Effect of δ_d

As mentioned before, the distance δ_d is designed to split a SemanticKITTI sequence into non-overlapping subsets used as mapping and localization data, respectively, to evaluate the localization performance. To investigate a reasonable δ_d, we compare the localization performance of the baseline (Dong et al. 2021) and our method under different δ_d. In these experiments, ϕ_odo is set to 40% and ϕ_O is set to 0%. Our method here is based on normal particle-filter localization for comparison. As shown in Fig. 13, the mean absolute error (∆) increases as δ_d increases. Throughout the paper, δ_d is set to 10 unless otherwise specified.
Different Levels of Uncertainties ϕ_odo and ϕ_O
As mentioned in Section Pole Localization, we add noise to the odometry and the observations to simulate noisy scenarios. In this section, we investigate how different noise levels influence localization performance. When poles are sparse or absent in a specific scene and, at the same time, the odometry used for particle pose prediction is subject to large uncertainty, the estimated position can drift. If the drift is large enough, the particle weights can stop updating, as all the particles fail to find corresponding poles in the data association. The noisier the odometry and the online observations, the more frequently this situation happens. We choose different ϕ_odo and ϕ_O to investigate the effectiveness of semantic information in particle-filter localization in such scenarios. As shown in Table 7, the localization performance drops with increasing odometry noise ϕ_odo or observation noise ϕ_O. When incorporating semantic information, the localization performance is improved for almost all choices of ϕ_odo and ϕ_O. More importantly, semantic information brings a larger gain in localization performance when confronted with larger uncertainties. This can be explained by the fact that semantic information helps the particles converge more quickly, as analyzed in Sections Semantic-Aware Inconsistency and Semantic-Aware Nearest Neighbor.
Semantic-Aware Particle Filter Localization
Generally, localization is accurate and fast when the particles are densely concentrated in a single-mode distribution, which means most of the particles share the same set of associated landmarks in the map. Indeed, we would have only one set of correspondences between the online observation and the map landmarks if all particles were very close to the ground-truth state.
In practice, three additional quantitative metrics can be used to reflect particle-localization performance. The first one is $N_{A^k}$, the number of distinct association sets (the number of different sets $A^k$ in Eq. 14) over all K particles. The second one is $\phi_{\bar{A}}$, the ratio of the number of correct landmark associations to all associations, which is empirically evaluated as the average similarity of an approximated correspondence being a correct one, i.e., through the $\cos(\cdot)$ term in Eq. 22. The third one, $\phi_{\bar{y}}$, is the ratio of the number of correspondences with consistent pole categories to the total number of correspondences.
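These three quantities can be evaluated per frame roughly as in the sketch below, reusing the association lists, features, and labels introduced above; all names are illustrative.

```python
import numpy as np

def association_metrics(assocs, obs_feats, map_feats, obs_labels, map_labels):
    """assocs: (K, m) map indices A^k, one row per particle.
    Returns N_{A^k} (number of distinct association sets), φ_Ā (mean cosine
    similarity of the associations, per Eq. 22), and φ_ȳ (fraction of
    category-consistent correspondences)."""
    n_assoc_sets = len({tuple(a) for a in assocs})       # N_{A^k}
    sims, consistent = [], []
    for a in assocs:
        for i, j in enumerate(a):
            f, g = obs_feats[i], map_feats[j]
            sims.append(np.dot(f, g) /
                        (np.linalg.norm(f) * np.linalg.norm(g) + 1e-12))
            consistent.append(obs_labels[i] == map_labels[j])
    return n_assoc_sets, float(np.mean(sims)), float(np.mean(consistent))
```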
We quantitatively investigate the localization performance along with the three metrics above in the same experimental setting as in Section Pole Localization. Finally, we take the averages of $N_{A^k}$, $\phi_{\bar{A}}$, and $\phi_{\bar{y}}$ over all frames and show the results in Table 8. Comparing the particle filter with semantic-aware inconsistency (I-PF) with the one without semantics (PF), $N_{A^k}$ is largely reduced. This shows that the particles converge quickly when incorporating the semantic-aware inconsistency of Eq. 21, as we expected. Meanwhile, the association accuracy $\phi_{\bar{A}}$ and the category accuracy $\phi_{\bar{y}}$ are improved, indicating that the particles find more accurate correspondences and thus higher localization performance is achieved. Interestingly, compared with I-PF, although the particle filter with the semantic-aware nearest neighbor (N-PF) achieves a larger reduction in $N_{A^k}$ and a larger improvement in $\phi_{\bar{y}}$, it does not achieve a larger improvement in $\phi_{\bar{A}}$ or in the localization performance $\Delta_{pos}$. This shows that the semantic nearest neighbor working alone brings limited improvement, because it may mislead when the observation is incorrect. However, it works well together with the semantic-aware inconsistency (I+N-PF), because the erroneous correspondences in such situations are suppressed; the combination takes both advantages and achieves the largest improvement.
Conclusion
In this work, we propose a full framework for semantic mapping and localization, where the localization is achieved by semantic particle filtering in a multi-layer semantic pole-map created offline with a multi-channel LiDAR sensor. The semantic pole-map is built based on the pole semantics extracted by an efficient semantic segmentation method in the mask-classification paradigm. On applying this semantic pole-map to online localization, we propose a semantic particle-filter-based scheme with poles as observations. We have both theoretically and empirically shown that our semantic particle-filter localization method, given the semantic pole-map, achieves very promising performance even under significant levels of uncertainty. In the future, we will investigate utilizing other semantic categories in our pole-map to further improve localization performance.
Figure 1. The architecture of the pole segmentation network. The left is the network design. The top right is the training procedure. The bottom right is the inference stage.
Figure 2. (a) Paste and Drop operation. (b) Weight for the Paste and Drop operation. (c) An example of our Weighted Paste and Drop data augmentation. In this example, the pole is pasted into the middle of the road and the road is dropped.
Figure 3. The procedure of pole extraction, mapping, and localization. On the top is the pole extraction, the bottom left is the mapping, and the bottom right is the localization.
Figure 4. Left: the aggregation of observations of the same landmark without considering their semantic categories. When the observation is uncertain during pole segmentation and extraction, the aggregation of information from different categories into one landmark makes the semantic features ambiguous (e.g., the trunk is categorized as a pole in one of the observations). Right: the aggregation considering their semantics. Observations of different semantics are distinguished and aggregated into different layers (landmarks), which provides more robust geometric and semantic features.
Figure 5. Illustration of why nearest-neighbor data association can result in an over-estimation of a particle's updated weight; see the text for an explanation.
Figure 6. Left: nearest-neighbor searching results in wrong correspondences. Right: semantic nearest-neighbor searching finds the correct correspondences. The light-colored nodes in the rectangle are the observations, and the dark-colored nodes outside the rectangle are the landmarks in the map. Different node colors represent different semantic categories, and the red edges represent the wrong correspondences.
Figure 8. Visualization of pole-maps for SemanticKITTI sequences 01 and 08. The top row shows the maps built by the baseline method (Dong et al. 2021), tuned based on their published code, and the bottom shows our pole-maps. The blue and red points represent the pole-like objects in the built map, where the blue points are the true positives (TP) and the red points are the false positives (FP) obtained by comparison with the ground-truth map provided by Dong et al. (2023). The light-red circles represent the false negatives (FN). Our method has fewer FPs and FNs compared with the baseline method.
Figure 9. The illustration of data separation used for pole-map creation and the localization test, respectively, from the same original SemanticKITTI sequence. Two subsets of frames are generated for each sequence. The pole-map corresponding to each sequence is built based on the selected keyframes (blue) along the route every δ_d meters according to the ground-truth ego-poses. Correspondingly, the localization test set is created by selecting the frames (red) that lie just in the middle of two consecutive keyframes used for creating the pole-map.
Figure 11. The visualization of the localization trajectories of Dong et al. (2021) and our methods in SemanticKITTI (Behley et al. 2019) sequence 15. The ground-truth trajectory is shown in grey and the estimated trajectory in red. Left: the localization trajectory with the baseline map (Dong et al. 2021) and the normal particle filter (Dellaert et al. 1999). Middle: the localization trajectory with our pole-map and the normal particle filter. Right: the localization trajectory with our pole-map and our semantic-aware particle filter with both the semantic inconsistency and nearest-neighbor schemes. In this experiment, ϕ_odo is set to 40% and ϕ_O is set to 80%.
Table 1. The semantic segmentation performance of three types of pole-like landmarks on the SemanticKITTI (Behley et al. 2019) validation set. † represents the evaluation of points that are available in the range-view representation instead of re-projecting them back to the original point cloud. Experiments without † represent the evaluation of all points by re-projecting points available in the range-view representation back to the original point cloud.

Method                              Mean IoU  Pole  Trunk  Traffic-sign  Others
SalsaNext (Cortinhal et al. 2020)†  59.7      58.7  63.9   44.9          60.4
FIDNet (Zhao et al. 2021)†          60.4      60.1  68.0   44.1          61.0
SalsaNext w/ WPD†                   63.4      64.3  66.7   49.3          64.0
Ours w/o WPD†                       59.1      64.5  66.9   45.8          59.1
Ours†                               66.4      65.0  67.5   48.4          67.5
SalsaNext (Cortinhal et al. 2020)   57.5      56.7  60.8   44.9          58.1
FIDNet (Zhao et al. 2021)           58.8      57.6  64.0   43.7          59.5
SalsaNext w/ WPD                    60.9      61.3  62.4   48.1          61.6
Ours w/o WPD                        57.2      61.5  62.7   45.3          57.3
Ours                                63.7      61.9  63.2   47.6          64.9
Table 2. The comparison of mapping performance on the SemanticKITTI (Behley et al. 2019) sequences 01 and 08. † the mapping results reported in Dong et al. (2021, 2023), where the maps are built with more dense frames than ours in each sequence. †† the baseline method tuned based on the published code of Dong et al. (2021).
Table 3. The comparison of the number of poles N_m in a map, the average number of poles N_s in a LiDAR scan, and the average number of observations N_o of a pole in SemanticKITTI (Behley et al. 2019) sequences 01 and 08. †† the baseline method tuned based on the published code of Dong et al. (2021).

Sequence (validation)             01                  08
Method                            N_m   N_s   N_o     N_m   N_s    N_o
Baseline (Dong et al. 2021)††     339   1.99  1.45    895   4.84   1.73
Ours                              325   3.21  2.44    960   10.08  3.37
Table 5. The comparison of mapping performance in SemanticKITTI (Behley et al. 2019) sequences 01 and 08 by our methods using single-layer and multi-layer mapping. S. denotes the single-layer mapping as in Dong et al. (2021). M. denotes our proposed multi-layer mapping.

Sequence     01            08
Method       S.     M.     S.     M.
Precision    0.68   0.73   0.74   0.76
Recall       0.39   0.56   0.75   0.86
F1           0.49   0.63   0.74   0.81

Table 6. The comparison of localization performance in SemanticKITTI (Behley et al. 2019) sequences 11-21 by our methods using single-layer and multi-layer mapping. S. denotes the single-layer mapping as in Dong et al. (2021). M. denotes our proposed multi-layer mapping. In this experiment, ϕ_odo is set to 40% and ϕ_O is set to 0%, without simulated pole extraction noise.

Sequence     11-21
Method       PF/S.   PF/M.   I+N-PF/S.   I+N-PF/M.
∆_pos        0.899   0.914   0.884       0.869
RMSE_pos     1.302   1.363   1.246       1.265
∆_ang        0.551   0.528   0.534       0.522
RMSE_ang     0.980   0.958   0.948       0.979

Figure 13. The comparison of localization performance when choosing different distance thresholds δ_d. The baseline method (Dong et al. 2021) (blue) and our method without semantics (orange) are compared. In these experiments, ϕ_odo is set to 40% and ϕ_O is set to 0%, without simulated pole extraction noise.
Table 7. The comparison of localization performance of the normal particle filter (Dellaert et al. 1999) and the proposed semantic-aware particle filter, evaluated at different levels of odometry noise ϕ_odo and observation noise ϕ_O. As in Table 4, PF denotes the normal particle-filter localization (Dellaert et al. 1999), I-PF denotes semantic particle-filter localization incorporating the Semantic-Aware Inconsistency, and I+N-PF denotes semantic particle-filter localization incorporating both the Semantic-Aware Inconsistency and the Semantic-Aware Nearest Neighbor. Ipr. denotes the improvement of I+N-PF compared with PF.

δ_d  ϕ_odo  ϕ_O | ∆_pos [m]: PF, I-PF, I+N-PF, Ipr. | RMSE_pos [m]: PF, I-PF, I+N-PF, Ipr. | ∆_ang [°]: PF, I-PF, I+N-PF, Ipr. | RMSE_ang [°]: PF, I-PF, I+N-PF, Ipr.
6   0.2  0   | 0.381  0.380  0.373  2.25%  | 0.515  0.510  0.495  3.85%  | 0.250  0.245  0.245  1.87%  | 0.405  0.403  0.402  0.87%
6   0.2  0.2 | 0.404  0.396  0.397  1.96%  | 0.537  0.523  0.528  1.71%  | 0.264  0.249  0.257  2.74%  | 0.423  0.405  0.414  1.95%
6   0.2  0.5 | 0.443  0.440  0.438  1.12%  | 0.591  0.581  0.577  2.47%  | 0.280  0.283  0.275  1.73%  | 0.441  0.445  0.433  1.94%
6   0.2  0.8 | 0.651  0.613  0.601  7.72%  | 0.926  0.846  0.828  10.63% | 0.411  0.394  0.398  3.09%  | 0.633  0.611  0.619  2.19%
6   0.4  0   | 0.675  0.660  0.651  3.56%  | 0.972  0.930  0.921  5.28%  | 0.404  0.408  0.403  0.10%  | 0.718  0.718  0.716  0.28%
6   0.4  0.2 | 0.731  0.701  0.703  3.72%  | 1.048  0.988  1.005  4.06%  | 0.474  0.447  0.457  3.46%  | 0.832  0.789  0.821  1.22%
6   0.4  0.5 | 0.888  0.837  0.830  6.47%  | 1.357  1.246  1.236  8.93%  | 0.546  0.522  0.531  2.66%  | 0.957  0.916  0.923  3.54%
6   0.4  0.8 | 1.507  1.260  1.249  17.11% | 2.211  1.796  1.806  18.30% | 0.832  0.751  0.732  12.06% | 1.336  1.214  1.194  10.58%
10  0.2  0   | 0.483  0.482  0.478  0.90%  | 0.647  0.659  0.652  -0.69% | 0.301  0.302  0.285  5.19%  | 0.489  0.503  0.481  1.53%
10  0.2  0.2 | 0.510  0.506  0.499  2.05%  | 0.698  0.684  0.680  2.56%  | 0.323  0.313  0.311  3.67%  | 0.530  0.514  0.517  2.43%
10  0.2  0.5 | 0.598  0.594  0.576  3.70%  | 0.810  0.785  0.762  5.87%  | 0.362  0.360  0.345  4.93%  | 0.573  0.566  0.548  4.37%
10  0.2  0.8 | 0.957  0.857  0.827  13.62% | 1.308  1.186  1.140  12.87% | 0.502  0.484  0.475  5.47%  | 0.745  0.729  0.713  4.35%
10  0.4  0   | 0.914  0.874  0.869  4.88%  | 1.363  1.264  1.265  7.23%  | 0.528  0.527  0.522  1.13%  | 0.958  0.967  0.979  -2.29%
10  0.4  0.2 | 1.011  0.957  0.933  7.70%  | 1.579  1.481  1.410  10.65% | 0.574  0.560  0.558  2.73%  | 1.066  1.041  1.030  3.36%
10  0.4  0.5 | 1.186  1.092  1.083  8.65%  | 1.698  1.533  1.508  11.17% | 0.669  0.643  0.634  5.23%  | 1.119  1.088  1.070  4.41%
10  0.4  0.8 | 2.215  1.753  1.673  24.46% | 3.125  2.434  2.304  26.27% | 1.052  0.965  0.954  9.38%  | 1.555  1.451  1.434  7.80%

Table 8. The quantitative results of the proposed semantic-aware particle-filter localization. Results are evaluated with respect to $N_{A^k}$, $\phi_{\bar{A}}$, and $\phi_{\bar{y}}$ from Section Semantic-Aware Particle Filter Localization; the localization performance ∆_pos from Section Pole Localization is also compared. As in Table 4, PF denotes the normal particle-filter localization (Dellaert et al. 1999), I-PF denotes semantic particle-filter localization incorporating the Semantic-Aware Inconsistency, N-PF denotes semantic particle-filter localization incorporating the Semantic-Aware Nearest Neighbor, and I+N-PF denotes semantic particle-filter localization incorporating both. In this experiment, ϕ_odo is set to 40% and ϕ_O is set to 80%.
References

Akai N, Hirayama T and Murase H (2020) Semantic localization considering uncertainty of object recognition. IEEE Robotics and Automation Letters 5(3): 4384-4391.

Arroyo R, Alcantarilla PF, Bergasa LM and Romera E (2015) Towards life-long visual localization using an efficient matching of binary sequences from images. In: 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, pp. 6328-6335.

Bavle H, Manthe S, De La Puente P, Rodriguez-Ramos A, Sampedro C and Campoy P (2018) Stereo visual odometry and semantics based localization of aerial robots in indoor environments. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, pp. 1018-1023.

Behley J, Garbade M, Milioto A, Quenzel J, Behnke S, Stachniss C and Gall J (2019) SemanticKITTI: A dataset for semantic scene understanding of LiDAR sequences. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 9297-9307.

Behley J and Stachniss C (2018) Efficient surfel-based SLAM using 3D laser range data in urban environments. In: Robotics: Science and Systems, volume 2018. p. 59.

Berman M, Rannen Triki A and Blaschko MB (2018) The Lovász-softmax loss: A tractable surrogate for the optimization of the intersection-over-union measure in neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 4413-4421.

Bernuy F and Ruiz-del Solar J (2018) Topological semantic mapping and localization in urban road scenarios. Journal of Intelligent & Robotic Systems 92: 19-32.

Besl PJ and McKay ND (1992) Method for registration of 3-D shapes. In: Sensor Fusion IV: Control Paradigms and Data Structures, volume 1611. SPIE, pp. 586-606.

Bullock R (2006) Least-squares circle fit. Developmental Testbed Center 3.

Campos C, Elvira R, Rodríguez JJG, Montiel JM and Tardós JD (2021) ORB-SLAM3: An accurate open-source library for visual, visual-inertial, and multimap SLAM. IEEE Transactions on Robotics 37(6): 1874-1890.

Carle PJ, Furgale PT and Barfoot TD (2010) Long-range rover localization by matching lidar scans to orbital elevation maps. Journal of Field Robotics 27(3): 344-370.

Caselitz T, Steder B, Ruhnke M and Burgard W (2016) Monocular camera localization in 3D lidar maps. In: 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, pp. 1926-1931.

Chen G, Lu F, Li Z, Liu Y, Dong J, Zhao J, Yu J and Knoll A (2021) Pole-curb fusion based robust and efficient autonomous vehicle localization system with branch-and-bound global optimization and local grid map method. IEEE Transactions on Vehicular Technology 70(11): 11283-11294.

Chen X, Läbe T, Nardi L, Behley J and Stachniss C (2020) Learning an overlap-based observation model for 3D lidar localization. In: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, pp. 4602-4608.

Cheng B, Misra I, Schwing AG, Kirillov A and Girdhar R (2022) Masked-attention mask transformer for universal image segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 1290-1299.

Cheng B, Schwing A and Kirillov A (2021) Per-pixel classification is not all you need for semantic segmentation. Advances in Neural Information Processing Systems 34: 17864-17875.

Cortinhal T, Tzelepis G and Erdal Aksoy E (2020) SalsaNext: Fast, uncertainty-aware semantic segmentation of lidar point clouds. In: International Symposium on Visual Computing. Springer, pp. 207-222.

Dellaert F, Fox D, Burgard W and Thrun S (1999) Monte Carlo localization for mobile robots. In: Proceedings 1999 IEEE International Conference on Robotics and Automation (Cat. No. 99CH36288C), volume 2. IEEE, pp. 1322-1328.

Dong H, Chen X, Särkkä S and Stachniss C (2023) Online pole segmentation on range images for long-term lidar localization in urban environments. Robotics and Autonomous Systems 159: 104283.

Dong H, Chen X and Stachniss C (2021) Online range image-based pole extractor for long-term lidar localization in urban environments. In: 2021 European Conference on Mobile Robots (ECMR). IEEE, pp. 1-6.

Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Unterthiner T, Dehghani M, Minderer M, Heigold G, Gelly S, Uszkoreit J and Houlsby N (2021) An image is worth 16x16 words: Transformers for image recognition at scale. In: ICLR. OpenReview.net.

Ester M, Kriegel H, Sander J and Xu X (1996) A density-based algorithm for discovering clusters in large spatial databases with noise. In: Simoudis E, Han J and Fayyad UM (eds.) Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD-96), Portland, Oregon, USA. AAAI Press, pp. 226-231. URL http://www.aaai.org/Library/KDD/1996/kdd96-037.php.

Everingham M, Eslami S, Van Gool L, Williams CK, Winn J and Zisserman A (2015) The Pascal visual object classes challenge: A retrospective. International Journal of Computer Vision 111(1): 98-136.

Hata AY and Wolf DF (2015) Feature detection for vehicle localization in urban environments using a multilayer lidar. IEEE Transactions on Intelligent Transportation Systems 17(2): 420-429.

Jeong J, Cho Y and Kim A (2020) HDMI-Loc: Exploiting high definition map image for precise localization via bitwise particle filter. IEEE Robotics and Automation Letters 5(4): 6310-6317.

Jiang L, Xiang C, Zhu J and Liu Q (2021) Particle filter relocation with semantic likelihood estimation. Acta Electronica Sinica 49(2): 306.

Kim G and Kim A (2018) Scan Context: Egocentric spatial descriptor for place recognition within 3D point cloud map. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, pp. 4802-4809.

Kim Y, Jeong J and Kim A (2018) Stereo camera localization in 3D lidar maps. In: 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, pp. 1-9.

Lecrosnier L, Boutteau R, Vasseur P, Savatier X and Fraundorfer F (2019) Vision based vehicle relocalization in 3D line-feature map using perspective-n-line with a known vertical direction. In: 2019 IEEE Intelligent Transportation Systems Conference (ITSC). IEEE, pp. 1263-1269.

Li L, Yang M, Weng L and Wang C (2021) Robust localization for intelligent vehicles based on pole-like features using the point cloud. IEEE Transactions on Automation Science and Engineering 19(2): 1095-1108.

Lin TY, Goyal P, Girshick R, He K and Dollár P (2017) Focal loss for dense object detection. In: Proceedings of the IEEE International Conference on Computer Vision. pp. 2980-2988.

Long J, Shelhamer E and Darrell T (2015) Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 3431-3440.

Lu F, Chen G, Dong J, Yuan X, Gu S and Knoll A (2020) Pole-based localization for autonomous vehicles in urban scenarios using local grid map-based method. In: 2020 5th International Conference on Advanced Robotics and Mechatronics (ICARM). IEEE, pp. 640-645.

Ma WC, Tartavull I, Bârsan IA, Wang S, Bai M, Mattyus G, Homayounfar N, Lakshmikanth SK, Pokrovsky A and Urtasun R (2019) Exploiting sparse semantic HD maps for self-driving vehicle localization. In: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, pp. 5304-5311.

Maddern W, Stewart AD and Newman P (2014) LAPS-II: 6-DoF day and night visual localisation with prior 3D structure for autonomous road vehicles. In: 2014 IEEE Intelligent Vehicles Symposium Proceedings. IEEE, pp. 330-337.

Milioto A, Vizzo I, Behley J and Stachniss C (2019) RangeNet++: Fast and accurate lidar semantic segmentation. In: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, pp. 4213-4220.

Miller ID, Cowley A, Konkimalla R, Shivakumar SS, Nguyen T, Smith T, Taylor CJ and Kumar V (2021) Any way you look at it: Semantic crossview localization and mapping with lidar. IEEE Robotics and Automation Letters 6(2): 2397-2404.

Milletari F, Navab N and Ahmadi SA (2016) V-Net: Fully convolutional neural networks for volumetric medical image segmentation. In: 2016 Fourth International Conference on 3D Vision (3DV). IEEE, pp. 565-571.

Nekrasov A, Schult J, Litany O, Leibe B and Engelmann F (2021) Mix3D: Out-of-context data augmentation for 3D scenes. In: 2021 International Conference on 3D Vision (3DV). IEEE, pp. 116-125.

OpenStreetMap contributors (2017) Planet dump retrieved from https://planet.osm.org. https://www.openstreetmap.org.

Park YS, Kim J and Kim A (2019) Radar localization and mapping for indoor disaster environments via multi-modal registration to prior lidar map. In: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, pp. 1307-1314.

Qin B, Chong Z, Bandyopadhyay T, Ang MH, Frazzoli E and Rus D (2012) Curb-intersection feature based Monte Carlo localization on urban roads. In: 2012 IEEE International Conference on Robotics and Automation. IEEE, pp. 2640-2646.

Qin T, Zheng Y, Chen T, Chen Y and Su Q (2021) A light-weight semantic map for visual localization towards autonomous driving. In: 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, pp. 11248-11254.

Qu X, Soheilian B and Paparoditis N (2015) Vehicle localization using mono-camera and geo-referenced traffic signs. In: 2015 IEEE Intelligent Vehicles Symposium (IV). IEEE, pp. 605-610.

Schaefer A, Büscher D, Vertens J, Luft L and Burgard W (2019) Long-term urban vehicle localization using pole landmarks extracted from 3-D lidar scans. In: 2019 European Conference on Mobile Robots (ECMR). IEEE, pp. 1-7.

Schaefer A, Büscher D, Vertens J, Luft L and Burgard W (2021) Long-term vehicle localization in urban environments based on pole landmarks extracted from 3-D lidar scans. Robotics and Autonomous Systems 136: 103709.
Semantic visual localization. J L Schönberger, M Pollefeys, A Geiger, T Sattler, Proceedings of the IEEE conference on computer vision and pattern recognition. the IEEE conference on computer vision and pattern recognitionSchönberger JL, Pollefeys M, Geiger A and Sattler T (2018) Semantic visual localization. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 6896-6906.
Laneloc: Lane marking based localization using highly accurate maps. M Schreiber, C Knöppel, U Franke, 2013 IEEE Intelligent Vehicles Symposium (IV). IEEE. Schreiber M, Knöppel C and Franke U (2013) Laneloc: Lane marking based localization using highly accurate maps. In: 2013 IEEE Intelligent Vehicles Symposium (IV). IEEE, pp. 449-454.
Improving vehicle localization using semantic and pole-like landmarks. M Sefati, M Daum, B Sondermann, K D Kreisköther, A Kampker, 2017 IEEE Intelligent Vehicles Symposium (IV). IEEE. Sefati M, Daum M, Sondermann B, Kreisköther KD and Kampker A (2017) Improving vehicle localization using semantic and pole-like landmarks. In: 2017 IEEE Intelligent Vehicles Symposium (IV). IEEE, pp. 13-19.
Lego-loam: Lightweight and groundoptimized lidar odometry and mapping on variable terrain. T Shan, B Englot, Shan T and Englot B (2018) Lego-loam: Lightweight and ground- optimized lidar odometry and mapping on variable terrain.
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) : 4758-4765.
Not using the car to see the sidewalk-quantifying and controlling the effects of context in classification and segmentation. R Shetty, B Schiele, M Fritz, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionShetty R, Schiele B and Fritz M (2019) Not using the car to see the sidewalk-quantifying and controlling the effects of context in classification and segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 8218-8226.
A systematic analysis of performance measures for classification tasks. M Sokolova, G Lapalme, Information processing & management. 454Sokolova M and Lapalme G (2009) A systematic analysis of performance measures for classification tasks. Information processing & management 45(4): 427-437.
Pole-based localization for autonomous vehicles in urban scenarios. R Spangenberg, D Goehring, R Rojas, 2016 IEEE/RSJ international conference on intelligent robots and systems (IROS). IEEESpangenberg R, Goehring D and Rojas R (2016) Pole-based localization for autonomous vehicles in urban scenarios. In: 2016 IEEE/RSJ international conference on intelligent robots and systems (IROS). IEEE, pp. 2161-2166.
Attention is all you need. Advances in neural information processing systems 30. A Vaswani, N Shazeer, N Parmar, J Uszkoreit, L Jones, A N Gomez, Kaiser Ł Polosukhin, I , Vaswani A, Shazeer N, Parmar N, Uszkoreit J, Jones L, Gomez AN, Kaiser Ł and Polosukhin I (2017) Attention is all you need. Advances in neural information processing systems 30.
Robust and precise vehicle localization based on multi-sensor fusion in diverse city scenes. G Wan, X Yang, R Cai, H Li, Y Zhou, Wang H Song, S , 2018 IEEE international conference on robotics and automation (ICRA). IEEEWan G, Yang X, Cai R, Li H, Zhou Y, Wang H and Song S (2018) Robust and precise vehicle localization based on multi-sensor fusion in diverse city scenes. In: 2018 IEEE international conference on robotics and automation (ICRA). IEEE, pp. 4670-4677.
Intensity scan context: Coding intensity and geometry relations for loop closure detection. H Wang, Wang C Xie, L , 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEEWang H, Wang C and Xie L (2020a) Intensity scan context: Coding intensity and geometry relations for loop closure detection. In: 2020 IEEE International Conference on Robotics and Automation (ICRA). IEEE, pp. 2095-2101.
Lidar iris for loop-closure detection. Y Wang, Z Sun, C Z Xu, S E Sarma, Yang J Kong, H , 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEEWang Y, Sun Z, Xu CZ, Sarma SE, Yang J and Kong H (2020b) Lidar iris for loop-closure detection. In: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, pp. 5769-5775.
Intelligent vehicle self-localization based on double-layer features and multilayer lidar. Z Wang, J Fang, X Dai, H Zhang, L Vlacic, IEEE Transactions on Intelligent Vehicles. 54Wang Z, Fang J, Dai X, Zhang H and Vlacic L (2020c) Intelligent vehicle self-localization based on double-layer features and multilayer lidar. IEEE Transactions on Intelligent Vehicles 5(4): 616-625.
Pole-like objects mapping and long-term robot localization in dynamic urban scenarios. Z Wang, S Li, M Cao, Chen H Liu, Y , 2021 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEEWang Z, Li S, Cao M, Chen H and Liu Y (2021) Pole-like objects mapping and long-term robot localization in dynamic urban scenarios. In: 2021 IEEE International Conference on Robotics and Biomimetics (ROBIO). IEEE, pp. 998-1003.
Improving urban vehicle localization with traffic sign recognition. A Welzel, P Reisdorf, G Wanielik, 2015 IEEE 18th International Conference on Intelligent Transportation Systems. IEEE. Welzel A, Reisdorf P and Wanielik G (2015) Improving urban vehicle localization with traffic sign recognition. In: 2015 IEEE 18th International Conference on Intelligent Transportation Systems. IEEE, pp. 2728-2732.
Pole-based real-time localization for autonomous driving in congested urban scenarios. L Weng, M Yang, L Guo, Wang B Wang, C , 2018 IEEE International Conference on Real-time Computing and Robotics (RCAR). IEEEWeng L, Yang M, Guo L, Wang B and Wang C (2018) Pole-based real-time localization for autonomous driving in congested urban scenarios. In: 2018 IEEE International Conference on Real-time Computing and Robotics (RCAR). IEEE, pp. 96-101.
Visual localization within lidar maps for automated urban driving. R W Wolcott, R M Eustice, 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE. Wolcott RW and Eustice RM (2014) Visual localization within lidar maps for automated urban driving. In: 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, pp. 176-183.
Visual localization for autonomous driving using prebuilt point cloud maps. K Yabuuchi, D R Wong, T Ishita, Y Kitsukawa, S Kato, 2021 IEEE Intelligent Vehicles Symposium (IV). IEEE. Yabuuchi K, Wong DR, Ishita T, Kitsukawa Y and Kato S (2021) Visual localization for autonomous driving using pre- built point cloud maps. In: 2021 IEEE Intelligent Vehicles Symposium (IV). IEEE, pp. 913-919.
Global localization on openstreetmap using 4-bit semantic descriptors. F Yan, O Vysotska, C Stachniss, 2019Yan F, Vysotska O and Stachniss C (2019) Global localization on openstreetmap using 4-bit semantic descriptors. In: 2019
European Conference on Mobile Robots (ECMR). IEEE. European Conference on Mobile Robots (ECMR). IEEE, pp. 1-7.
Lidar scan feature for localization with highly precise 3-d map. K Yoneda, H Tehrani, T Ogawa, N Hukuyama, S Mita, 2014 IEEE Intelligent Vehicles Symposium Proceedings. IEEE. Yoneda K, Tehrani H, Ogawa T, Hukuyama N and Mita S (2014) Lidar scan feature for localization with highly precise 3-d map. In: 2014 IEEE Intelligent Vehicles Symposium Proceedings. IEEE, pp. 1345-1350.
Monocular camera localization in prior lidar maps with 2d-3d line correspondences. H Yu, W Zhen, W Yang, J Zhang, S Scherer, 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEEYu H, Zhen W, Yang W, Zhang J and Scherer S (2020) Monocular camera localization in prior lidar maps with 2d-3d line correspondences. In: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, pp. 4588- 4594.
Loam: Lidar odometry and mapping in real-time. J Zhang, S Singh, Robotics: Science and Systems. 29Zhang J and Singh S (2014) Loam: Lidar odometry and mapping in real-time. Robotics: Science and Systems 2(9): 1-9.
Fidnet: Lidar point cloud semantic segmentation with fully interpolation decoding. Y Zhao, L Bai, X Huang, 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEEZhao Y, Bai L and Huang X (2021) Fidnet: Lidar point cloud semantic segmentation with fully interpolation decoding. In: 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, pp. 4453-4458.
2022) Long-term localization using semantic cues in floor plan maps. N Zimmerman, T Guadagnino, X Chen, J Behley, C Stachniss, IEEE Robotics and Automation Letters. 81Zimmerman N, Guadagnino T, Chen X, Behley J and Stachniss C (2022) Long-term localization using semantic cues in floor plan maps. IEEE Robotics and Automation Letters 8(1): 176-183.
Prompt-Based Monte-Carlo Tree Search for Goal-oriented Dialogue Policy Planning

Xiao Yu, Maximillian Chen, Zhou Yu
Department of Computer Science, Columbia University, New York, NY
arXiv:2305.13660v1 [cs.CL] 23 May 2023

Abstract

Planning for goal-oriented dialogue often requires simulating future dialogue interactions and estimating task progress. Many approaches thus consider training neural networks to perform look-ahead search algorithms such as A* search and Monte Carlo Tree Search (MCTS). However, this training often requires abundant annotated data, which creates challenges when faced with noisy annotations or low-resource settings. We introduce GDP-ZERO, an approach using Open-Loop MCTS to perform goal-oriented dialogue policy planning without any model training. GDP-ZERO prompts a large language model to act as a policy prior, value function, user simulator, and system model during the tree search. We evaluate GDP-ZERO on the goal-oriented task PersuasionForGood, and find that its responses are preferred over ChatGPT up to 59.32% of the time, and are rated more persuasive than ChatGPT during interactive evaluations.
1 Introduction
In many goal-oriented conversation tasks, interacting parties must retake initiative (Allen et al., 1999) by executing conversational strategies to lead the conversation to a desired outcome (e.g. successful negotiation (Lewis et al., 2017) or emotional support (Liu et al., 2021)). This makes it imperative to have high-quality dialogue policy planners which can prescribe an "optimal" strategy at each turn of the dialogue (Levin et al., 1997; Zhang et al., 2020b; Liu and Lane, 2017; Liu et al., 2018).
Optimal policy planning is a difficult task. While task-oriented settings (e.g. restaurant booking) at least offer objectivity with respect to successful planning, many goal-oriented tasks like persuasion are often subjective. Moreover, "optimality" in these complex tasks may require expert domain knowledge (e.g., negotiation skills). This makes collecting and annotating high-quality conversations difficult (Chen et al., 2023b).
In this work, we contribute a novel approach to Goal-oriented Dialogue Planning with Zero training (GDP-ZERO). GDP-ZERO prompts a large language model (LLM) to perform planning by simulating future dialogue interactions, making it particularly suitable for tasks which would otherwise require high-quality conversations and annotations. Unlike previous approaches, we treat policy planning as a stochastic game, and use prompting for every stage of an open-loop tree search. We evaluate GDP-ZERO on PersuasionForGood due to its difficult planning task (Wang et al., 2019), and find that its responses are preferred over ChatGPT during both static and interactive evaluations.
2 Related Work
Prompting Methods Few-shot dialogue techniques have many advantages, including out-of-domain generalization (Zhao and Eskenazi, 2018; Mehri and Eskenazi, 2021) and robustness in difficult low-resource settings with noisy annotations (Chen et al., 2023b). Recently, prompting LLMs has become the predominant approach to few-shot language tasks, and its applications in dialogue have received much attention. However, this has largely focused on dialogue response generation (e.g. Chen et al. (2023b); Liu and Kilicoglu (2023); Madotto et al. (2021); Liu et al. (2022)), conversation synthesis (e.g. Chen et al. (2023a); Kim et al. (2022); Bae et al. (2022)), and dialogue understanding (e.g. Yang et al. (2022); Gupta et al. (2022)). To date, prompting has not been used for policy planning.

Dialogue Policy Planning Research on dialogue policy planning can be categorized into neural-focused and algorithmic-focused. Neural-focused approaches use annotated dialogues to train dedicated classifiers/value functions for predicting the next dialogue act without explicit look-ahead planning (Zhang et al., 2022a,b; Cao et al., 2020; Peng et al., 2018). For many goal-oriented dialogues, however, both annotated strategies and dialogue responses can be sub-optimal or noisy, as different people can respond differently even given the same context.
To reduce the reliance on a labeled dataset, much work has also attempted to combine neural networks with search algorithms, such as A* search (Cheng et al., 2022) and tree search (Wang et al., 2020; Yang et al., 2021; Jang et al., 2020; Väth et al., 2023). However, these methods still require model training, which is potentially affected by data quality. For example, Jang et al. (2020) use MCTS for training an RNN-based policy model, and Wang et al. (2020) train multiple neural networks for user simulation and value function estimation. Consequently, these methods can face difficulties during dialogue simulation due to a) noisy data annotations causing sub-optimally trained generation models, and b) inaccurate responses generated at turn i compounding errors for simulations at turns > i.
3 Method
In this work, we introduce GDP-ZERO, an algorithm-focused dialogue policy planner for goal-oriented dialogue tasks like persuasion. GDP-ZERO uses zero model training and instead performs Open-Loop MCTS at decision time by prompting an LLM to simulate user/system responses, evaluate the current task progress, and predict a prior over next dialogue acts. Building on findings from Chen et al. (2023b), our approach has two main differences from existing policy planning work: we use few-shot prompting to bypass the need for model training on noisy data, and we use Open-Loop MCTS to reduce compounding simulation errors by continuously re-generating system/user responses during the tree search.
3.1 Problem Definition
To introduce tree search methods for dialogue policy planning, we first formulate planning as a Markov Decision Process (MDP). A t-turn dialogue between a user and a system can be represented as

h = (a_0^sys, u_1^sys, u_1^usr, ..., a_{t-1}^sys, u_t^sys, u_t^usr),

where a_i^sys is the system's dialogue act at turn i, u_i^sys is the system's response, and u_i^usr is the user's utterance at turn i. Similar to Yang et al. (2021) and Wang et al. (2020), we define the task of planning the next a^sys as an MDP problem ⟨S, A, R, P, γ⟩. The dialogue act of the system a_i^sys represents an action a_i ∈ A at turn i, and the corresponding dialogue history up to the i-th turn, s_i = (a_0, u_1^sys, u_1^usr, ..., a_{i-1}, u_i^sys, u_i^usr), represents a state s_i ∈ S. A reward function R(s, a) represents the likelihood of a desired conversational outcome, such as persuading a user to donate to a charity. The transition function P: S × A → S represents the probability of transitioning from a dialogue state s_i to state s_{i+1} after executing a_i at a turn. Finally, γ ∈ [0, 1) is the discount factor.
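To make this formulation concrete, the following minimal Python sketch shows one way to represent turns and states; the class and attribute names are our own illustration and not part of a released implementation.

```python
# A minimal sketch of the state/action representation above; the class and
# attribute names are our own convention, not part of a released API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DialogueTurn:
    system_da: str   # a_i: the system's dialogue act at turn i
    system_utt: str  # u_i^sys: the system's response
    user_utt: str    # u_i^usr: the user's utterance

@dataclass
class DialogueState:
    turns: List[DialogueTurn] = field(default_factory=list)

    def as_prompt(self) -> str:
        # Flatten the state s_i into a textual history for LLM prompting.
        return "\n".join(
            f"Persuader: {t.system_utt}\nPersuadee: {t.user_utt}"
            for t in self.turns
        )
```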
In a typical MDP game like Go, employing (closed-loop) MCTS to plan the next move/action a has seen much success (Silver et al., 2016, 2017). However, when simulating dialogue interactions during tree search, generating a slightly improbable system or user response for a state s' and storing it in the search tree could lead to a large compounding error for the rest of the subtree rooted at s' (Wang et al., 2020). We thus treat dialogue policy planning as a stochastic MDP, where the simulated next state s' ← P(s, a) is drawn from a large unknown distribution and might not be representative of the most probable s' (Perez Liebana et al., 2015).
3.2 GDP-ZERO
To solve this stochastic problem, we base our algorithm on Open-Loop MCTS (Weber, 2010; Perez Liebana et al., 2015), a variant of MCTS where each tree node s_i^tr = (a_0, ..., a_i) represents the sequence of dialogue actions taken to reach dialogue turn i. Instead of using system/user utterances to represent a tree node, this design forces the algorithm to (re)generate the corresponding system and user utterances when traversing the tree (see Figure 1). Over time, a tree node s^tr stores statistics derived from executing the sequence of dialogue actions (DAs) without relying on a specific instance of user/system utterances, which could otherwise propagate errors into future simulations. GDP-ZERO is detailed in Figure 1 and Appendix A. Below we describe each stage of the algorithm.
Selection Given a tree state s^tr, the action a* with the highest Predictor Upper Confidence Tree Bound (PUCT) (Silver et al., 2017; Rosin, 2011) is selected to traverse the tree:

PUCT(s^tr, a) = Q(s^tr, a) + c_p · p(a|s^tr) · √(Σ_b N(s^tr, b)) / (1 + N(s^tr, a)),

where N records the number of times an (s^tr, a) pair has been visited, p(·|s^tr) is the prior policy estimated during expansion, and c_p is a hyperparameter controlling exploration. Since future simulations require a specific dialogue history, we either sample from the node's simulation cache if its size has reached k, or generate a new simulation based on the selected dialogue history h^tr by prompting (Appendix B). We repeat this process until s^tr becomes a leaf node.
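A minimal sketch of this selection rule is shown below; it assumes each tree node exposes dictionaries Q, N, and prior keyed by dialogue act, which is our own convention rather than an official API.

```python
# A minimal sketch of PUCT-based selection under our node conventions.
import math

def puct_select(node, c_p: float = 1.0) -> str:
    total_visits = sum(node.N.values())

    def puct(a: str) -> float:
        bonus = c_p * node.prior[a] * math.sqrt(total_visits) / (1 + node.N[a])
        return node.Q[a] + bonus

    # Pick the dialogue act with the highest PUCT bound.
    return max(node.N, key=puct)
```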
Expansion Once a leaf node is reached, we treat an LLM M_θ as a prior policy by prompting it to generate a distribution over next dialogue acts. This is done by sampling M_θ at temperature τ = 1.0 m times, and converting the sampled DAs into a distribution (see Appendix A). Finally, each DA is also initialized with Q(s^tr, ·) = Q_0, a hyperparameter controlling exploration.
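The sketch below illustrates this expansion step under our assumptions: llm_sample_da stands in for one prompted LLM call returning a dialogue-act string, and the smoothing follows the add-1 scheme described in Appendix B.

```python
# A sketch of the expansion step: sample the LLM m times for the next
# dialogue act and smooth the counts into a prior distribution.
from collections import Counter

def expand_prior(llm_sample_da, actions, m: int = 15, q0: float = 0.0):
    counts = Counter(llm_sample_da() for _ in range(m))
    denom = m + len(actions)  # add-1 smoothing keeps every prior positive
    prior = {a: (counts.get(a, 0) + 1) / denom for a in actions}
    q_init = {a: q0 for a in actions}  # Q(s, .) initialized to Q_0
    return prior, q_init
```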
Evaluation We model the value of a state v(s^tr) as the probability that its dialogue context h^tr leads to task success. In a persuasion task aiming to convince a user to donate to a charity, this can be achieved by appending the utterance "Would you like to make a donation?" to the context, and prompting an LLM l times to estimate the user's current inclination to donate (Appendix A).
Backpropagation At the end of each search, we first store any newly simulated histories h^tr in a cache for each s^tr. Then, we update the statistics of all nodes along the search path:

N(s^tr, a) ← N(s^tr, a) + 1,   (1)
Q(s^tr, a) ← Q(s^tr, a) + ΔQ(s^tr, a),   (2)

where ΔQ(s^tr, a) = (v(s^tr) − Q(s^tr, a)) / N(s^tr, a). We also store a value estimate for each simulated history h^tr along the search path with a running average:

h^tr.v ← (h^tr.v × h^tr.n + v(s^tr)) / (h^tr.n + 1),   (3)

where h^tr.n stores the number of times h^tr has been visited.
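The updates (1)-(3) amount to incremental running averages, as in the following sketch; path holds the (node, action, history) triples collected during selection, with attribute names of our own choosing.

```python
# A sketch of the backpropagation updates, eqs. (1)-(3).
def backpropagate(path, value: float) -> None:
    for node, action, history in path:
        node.N[action] += 1                                          # eq. (1)
        node.Q[action] += (value - node.Q[action]) / node.N[action]  # eq. (2)
        # eq. (3): running average of the value of this simulated history
        history.v = (history.v * history.n + value) / (history.n + 1)
        history.n += 1
```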
Prediction After all simulations are done, we select the optimal action a* = argmax_a N(s_0^tr, a) based on the number of times each action has been visited, where s_0^tr is the root node of the tree. To avoid generating a response with M_θ for a* again, we also extract the system utterance for a* that had the highest estimated value during simulation, h_*^tr = argmax_{h^tr} s_*^tr.h^tr.v, where s_*^tr = s_0^tr ∪ a*. We call this process "Response Selection".
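A sketch of the prediction step, including Response Selection, is given below; the attribute names (children, histories, last_system_utt) are again our own convention.

```python
# A sketch of prediction with "Response Selection".
def predict(root):
    best_act = max(root.N, key=root.N.get)  # a* = argmax_a N(s0, a)
    child = root.children[best_act]
    # Reuse the simulated system utterance whose history accumulated the
    # highest running-average value, instead of re-prompting the LLM.
    best_history = max(child.histories, key=lambda h: h.v)
    return best_act, best_history.last_system_utt
```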
4 Experiments
We evaluate GDP-ZERO in the context of PersuasionForGood (P4G; Wang et al., 2019), a dataset with 300 annotated dialogues in which a "persuader" attempts to persuade a "persuadee" to donate to a charity called Save the Children (Appendix D). Due to the subjective nature of persuasion, it is not clear whether the annotated conversations reflect optimal persuasion strategies (Chen et al., 2022). This makes training and evaluating a policy planner challenging, since different persuaders likely adopt different strategies. Motivated by this challenge, we employ GDP-ZERO to perform decision-time planning based on dialogue simulations, and focus on evaluating our approach's end-to-end performance in achieving the desired task outcome: successfully persuading a user to donate to Save the Children.
4.1 Static Evaluation
We use ChatGPT (OpenAI, 2022) as the generation backbone of GDP-ZERO, as it has become accepted as one of the most coherent general-purpose LLMs (e.g. Liu et al. (2023b); Guo et al. (2023)). We take the first 20 dialogues from P4G, and produce 154 turns for evaluation. For each turn, we compare the response generated with and without GDP-ZERO for planning. Then, we prompt ChatGPT to choose which generated response is more persuasive (Appendix E). We also perform ablations with this method in Appendix C.
In Table 1, we find that generative approaches using ChatGPT were preferred over human ground-truth responses about 90% of the time, with the highest score achieved by GDP-ZERO. In Table 2, we show that responses generated after GDP-ZERO planning were preferred in up to 59.32% of comparisons. We also observe increasing preference for GDP-ZERO as the number of simulations n increases. Finally, we find that changing k and Q_0 (which control simulation diversity and exploration, respectively) can slightly improve performance (Appendix A).
However, we consider the possibility that a) ChatGPT is biased towards its own generated dialogues (Liu et al., 2023a), and b) it might not have a robust criterion of what constitutes persuasiveness. As such, we also conducted interactive evaluation.
4.2 Interactive Human Evaluation
We conducted interactive human evaluation using the LegoEval platform (Li et al., 2021) with crowdworkers on Amazon Mechanical Turk. We primarily sought to evaluate GDP-ZERO in an end-to-end chatbot against two competitive baselines. The first is prompting ChatGPT for generation without GDP-ZERO planning. The second follows Chen et al. (2023b) by using ChatGPT with RAP (Chen et al., 2022), a rule-based framework for persuasive dialogue generation which blends persuasive dialogue with factual information retrieval and social chit-chat. See Appendix F for more details.
After the conversation, we asked the crowdworkers to evaluate our system based on the criteria in Table 3. We collected 33 survey results for GDP-ZERO, 28 for ChatGPT, and 33 for RAP (Appendix G). Our study revealed that GDP-ZERO achieves the best performance across all metrics related to persuasiveness. We also found that RAP is highly rated for strategy diversity and relevance, indicating the benefit of using expert knowledge in planning. In Appendix I we include some example conversations, and in Appendix H we analyze the dialogue act distributions from different planners.
5 Conclusion
We propose GDP-ZERO, an algorithm that performs look-ahead policy planning with a large language model for goal-oriented dialogues. We find that GDP-ZERO can outperform rule-based policy planning and direct prompting with state-of-the-art LLMs on the task of persuasion without any model training. Strong performance in the zero-data regime opens the possibility of future work building dialogue systems for more conversational tasks under data-scarce settings.
Limitations
When is using GDP-ZERO appropriate? In this paper, we present GDP-ZERO, a general approach for closed-domain dialogue policy planning at the turn level. However, in this work we only evaluated GDP-ZERO on P4G. This is because we believe simulation-based planning is most beneficial when the task 1) often requires long-horizon planning to be successful, and 2) does not have "optimal" action annotations readily available for supervised learning. We thus believe tasks like persuasion are most suitable, where planning ahead is crucial to success and policy optimality from human demonstrations is extremely subjective. This is in contrast to other goal-oriented contexts like task-oriented dialogue (TOD), where strong policies can be directly learned due to TOD's mostly passive and objective nature (e.g. He et al. (2022)).
Additionally, while GDP-ZERO can be adapted to task-oriented contexts like MultiWOZ (Budzianowski et al., 2018), it may not necessarily be appropriate. Such task-oriented contexts often have hierarchical policies (e.g. "[hotel] [recommend] name price" and "[restaurant] [inform] food price area"), and adaptation to GDP-ZERO would require converting the hierarchy into a multi-label classification, resulting in a massive action space. We believe this could be very inefficient, and approaches such as building multiple search trees to perform high/low-level planning would be useful (Zhang et al., 2020a).
Runtime One important limitation of GDP-ZERO is its runtime. The more exhaustive the tree search (e.g. increasing n or k), the more likely the algorithm is to find the optimal dialogue policy (Table 2). However, this comes at the cost of longer simulation time, which may affect the overall user experience, and accordingly, user perceptions of persuasiveness.
With the OpenAI API's rate limits and the LLM's inference speed, we restricted GDP-ZERO to plan over 7 dialogue acts in P4G, with n = 10 and k = 3, for a simulation time of around 35 seconds during interactive evaluation. We believe methods to parallelize tree search (Chaslot et al., 2008) or to re-use parts of the simulation subtrees could be helpful to speed up GDP-ZERO. We expect that as research with LLMs progresses, inference speed will continue to improve. In the short term, one may bypass latency limitations by utilizing multiple accounts to parallelize API calls during simulation.
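As a rough illustration of this short-term workaround, the repeated sampling calls can be issued concurrently; call_llm below is a placeholder for a single rate-limited API request.

```python
# A sketch of parallelizing repeated LLM sampling calls; threads suffice
# because the work is I/O-bound rather than CPU-bound.
from concurrent.futures import ThreadPoolExecutor

def parallel_samples(call_llm, prompt: str, n_samples: int, workers: int = 4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(call_llm, prompt) for _ in range(n_samples)]
        return [f.result() for f in futures]
```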
Simulation Quality GDP-ZERO prompts an LLM (e.g. ChatGPT) to perform dialogue simulation and value estimation. Despite LLMs' strong few-shot performance on many tasks, issues with controllable generation can still create errors during simulation (e.g. a generated system utterance might not match the planned dialogue action). GDP-ZERO accounts for such errors by using an open-loop search with k > 1, but this increases simulation runtime. We believe this trade-off between simulation quality and runtime is also an important aspect for future work to consider.
Ethical Considerations
Our work describes an algorithm to perform dialogue policy planning for goal-oriented tasks without any model training. It is aimed at making future dialogue systems easier to build, and better at helping users/systems achieve their tasks/goals. Potential Abuses Generally, while most algorithms are not designed for unethical usage, there is often potential for abuse in their applications. In our experiments with PersuasionForGood (Wang et al., 2019), we apply GDP-ZERO with the goal of increasing users' intention to donate to a charity. However, because GDP-ZERO is fundamentally goal-agnostic, it is possible to use it for unethical tasks, such as scamming. We do not condone the use of GDP-ZERO for any unlawful or morally unjust purposes.
Interactive Human Evaluation In this study, we conducted interactive human evaluation using crowdworkers on the Amazon Mechanical Turk platform. All crowdworkers were informed that they were speaking with a chatbot. All study participants were paid at a rate of $15 per hour. Our study has received IRB approval.
References
James E Allen, Curry I Guinn, and Eric Horvitz. 1999. Mixed-initiative interaction. IEEE Intelligent Systems and their Applications, 14(5):14-23.
A Additional details on GDP-ZERO
We describe the details of GDP-ZERO in Algorithm 1. Similar to other MCTS algorithms, GDP-ZERO performs simulation in four stages, selection, expansion, evaluation, and backpropagation, and finally predicts an action based on the simulations. Different from existing implementations, GDP-ZERO performs Open-Loop search using only a generative LLM M_θ, by prompting it to do dialogue simulation, value function estimation, and prior policy estimation (see Appendix B for prompting details and examples). GDP-ZERO requires a generative LLM M_θ as a backbone model, and takes a dialogue history h_i at turn i as input. Given a fixed dialogue action space A (see Appendix D for P4G), GDP-ZERO builds a search tree after n simulations. For each state, GDP-ZERO keeps a cache of size k storing newly generated user and system utterances. We use c_p = 1.0, and Q_0 ∈ {0.0, 0.25, 0.5} to promote exploration (see Table 2).
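Putting the stages together, the following high-level sketch mirrors Algorithm 1 under the conventions of the helper sketches in Section 3.2; methods such as is_expanded, sample_or_simulate, child, sample_da, and estimate_value are assumptions of this illustration, not a released API.

```python
# A high-level sketch of one GDP-ZERO planning call, reusing the helper
# sketches (puct_select, expand_prior, backpropagate, predict) from above.
def gdp_zero(root, n: int = 20, k: int = 3, c_p: float = 1.0, q0: float = 0.0):
    for _ in range(n):
        node, path = root, []
        # Selection: walk down until a leaf, re-sampling simulations
        # (open-loop) unless the per-node cache already holds k histories.
        while node.is_expanded():
            a = puct_select(node, c_p)
            history = node.sample_or_simulate(a, cache_size=k)
            path.append((node, a, history))
            node = node.child(a)
        # Expansion: prompt the LLM for a smoothed prior over dialogue acts.
        node.prior, node.Q = expand_prior(node.sample_da, node.actions, q0=q0)
        # Evaluation: prompt the LLM l times as a value function.
        value = estimate_value(node)
        # Backpropagation: update Q/N and cached history values, eqs. (1)-(3).
        backpropagate(path, value)
    return predict(root)
```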
B Prompting Details on P4G
For P4G, we used the same one-shot example for all cases, while dynamically changing the representation for each operation.
System response generation. Following Chen et al. (2023b), we include the natural language form of a planned dialogue action (Table A4) in the prompt to perform conditional generation. We present an example in Table A6.
User response generation. We swap the user and the system role for this task, and prompt the LLM to act as a user simulator. We present an example in Table A7.
Value function estimation. To evaluate the user's inclination to donate at a given state, we first append the turn "Persuader: Would you be interested in donating to Save the Children?" to the dialogue history, and then prompt the LLM at temperature τ = 1.1 to sample the user's response l = 10 times. We define "no donation" = -1.0, "negative reaction" = -0.5, "neutral" = 0.0, "positive reaction" = 0.5, and "donation" = 1.0, and then convert the sampled responses to a score between -1.0 and 1.0. We present an example in Table A8.
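The label-to-score conversion can be sketched as follows; the 'donate' alias mirrors the label that appears in the sampled outputs shown in Table A8.

```python
# A sketch of converting l sampled user reactions into a value in [-1, 1].
REACTION_SCORES = {
    "no donation": -1.0,
    "negative reaction": -0.5,
    "neutral": 0.0,
    "positive reaction": 0.5,
    "donation": 1.0,
    "donate": 1.0,  # alias seen in the sampled outputs of Table A8
}

def value_from_samples(sampled_reactions) -> float:
    scores = [REACTION_SCORES.get(r, 0.0) for r in sampled_reactions]
    return sum(scores) / len(scores)

# The Table A8 example (six 'neutral', three 'positive reaction', one
# 'donate') gives (3 * 0.5 + 1 * 1.0) / 10 = 0.25.
```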
Prior policy estimation. We treat the backbone LLM as a prior policy, and prompt it to generate the next dialogue action at temperature τ = 1.1 15 times. To promote the diversity of the generated dialogue actions, we use add-1 smoothing to convert the sampled dialogue actions into a probability distribution. We present an example in Table A9.

Algorithm 1 (excerpt, lines 23-29):
23:   update Q(s^tr, a), N(s^tr, a) with eqs. (1)-(2)
24: end while
25: // prediction after n simulations
26: a* ← argmax_a N(s_i^tr, a)
27: s_*^tr ← s_i^tr ∪ a*
28: u_*^sys ← argmax_{u^sys} s_*^tr.h^tr.v
29: return a*, u_*^sys
C GDP-ZERO Ablations
In this experiment, we use the first 20 dialogues from the dataset, and use GDP-ZERO with both ChatGPT and Codex (Chen et al., 2021) as backbones to generate responses for each turn. At the time of writing, Codex was freely accessible from the OpenAI API. Then, we ask ChatGPT to compare the generated responses against human ground-truth responses (Gilardi et al., 2023).
Table A2: Static evaluation using the first 20 dialogues of P4G with ChatGPT as judge. GT refers to Ground Truth. Results are µ ± σ, repeated over three runs. Since ChatGPT-generated responses are typically long, we only use the first 3 sentences of each generation in this evaluation.

We report the win rate of GDP-ZERO with Codex backbone in Table A1, and with ChatGPT backbone in Table A2. We also report the win rate of GDP-ZERO without using the Open-Loop variant (by forcing response generation to be deterministic), and without the "Response Selection" process during prediction (line 28, Algorithm 1). In all experiments, we use n = 20, c_p = 1, Q_0 = 0, and k = 3 for GDP-ZERO, when applicable. In both Table A1 and Table A2, GDP-ZERO achieves improvements compared to prompting Codex/ChatGPT directly. Additionally, both results show that performing Open-Loop search and "Response Selection" is beneficial for GDP-ZERO.
D GDP-ZERO Setup on P4G
PersuasionForGood (P4G) is annotated with 10 persuasion strategies and 7 important non-persuasive strategies (see Table A3). However, since P4G is collected from human-human interactions, with both the "persuader" and the "persuadee" possibly donating to the charity, some of the dialogue actions are unsuitable when the "persuader" is a chatbot (e.g. self-modeling and personal story). We therefore choose a subset of dialogue actions to plan over, picking 4 frequent persuasive strategies suitable for chatbots, and 3 non-persuasive strategies including "other" to enable the chatbot to deal with unaccounted-for situations. We present the chosen dialogue actions and their prompts for the LLM in Table A4.
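For illustration, the seven planned dialogue acts (named in the prior-estimation example of Appendix B) could be organized as below; the natural-language glosses are our paraphrases, not the exact prompts of Table A4.

```python
# Illustrative only: the seven planned dialogue acts for P4G; the glosses
# are our own paraphrases rather than the exact Table A4 prompt strings.
P4G_DIALOGUE_ACTS = {
    "credibility appeal": "establish the charity's credibility and track record",
    "emotion appeal": "elicit empathy for children in need",
    "logical appeal": "argue with facts about the impact of a donation",
    "proposition of donation": "ask the user to consider making a donation",
    "task related inquiry": "ask the user a question related to the task",
    "greeting": "greet the user and open the conversation",
    "other": "handle situations not covered by the acts above",
}
```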
E Additional details on static evaluation
In our static evaluation, we prompt ChatGPT to choose which generated response is better (e.g. with and without GDP-ZERO planning). Given two responses u_a and u_b, we ask ChatGPT "Which of the following responses can better help the Persuader convince the Persuadee to donate to Save the Children? Why? A: u_a, B: u_b, C: Can't tell." after providing the relevant task context and dialogue history (see Table A5). For every evaluation, we sample the result 5 times and perform a majority vote. Interestingly, we find that ChatGPT is skewed towards choosing option A, preferring A 95.45% of the time even when u_a = u_b. We therefore randomly swap options A and B during all of our evaluations.
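A sketch of this judging procedure with randomized option order and majority voting is shown below; ask_judge stands in for one ChatGPT call returning 'A', 'B', or 'C'.

```python
# A sketch of the majority-vote judging loop with randomized option order.
import random
from collections import Counter

def judge_pair(ask_judge, context: str, resp_1: str, resp_2: str, votes: int = 5):
    tally = Counter()
    for _ in range(votes):
        swapped = random.random() < 0.5  # counteract the position bias above
        a, b = (resp_2, resp_1) if swapped else (resp_1, resp_2)
        choice = ask_judge(context, a, b)
        if choice == "C":
            tally["tie"] += 1
        else:
            picked_first = (choice == "A") != swapped
            tally["resp_1" if picked_first else "resp_2"] += 1
    return tally.most_common(1)[0][0]
```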
F Additional details on interactive study
In our interactive evaluation, we compare the rule-based planner from RAP, ChatGPT, and GDP-ZERO in an end-to-end chatbot for the persuasion task.
RAP we use the rule-based planner from RAP, which produces a dialogue action given a dialogue context. We then use the same prompting template in GDP-ZERO (Appendix B, Table A6), and prompt ChatGPT to produce a system response conditioned on the planned dialogue action.
Table A5: Prompting LLM to specify which generated response "response a" or "response b" is more persuasive.

ChatGPT we first use the same prompting template as in GDP-ZERO (Appendix B, Table A9) to prompt ChatGPT to produce a prior distribution over dialogue actions. We then take the most probable action as the planned dialogue action, and use the same template as in GDP-ZERO (Appendix B, Table A6) to prompt ChatGPT again to produce a system response.
GDP-ZERO we use GDP-ZERO with ChatGPT backbone as policy planner, and use the "Response Selection" step to produce both the next dialogue action and the associated system response.
G Additional details on survey results
We require our crowdworkers to be located in the United States and have a HIT acceptance rate of at least 99%. After interacting with each chatbot, each crowdworker was asked to rate their conversational experience. This post-task survey included a validation question which asked what charity they talked to the chatbot about. We had a total of 163 respondents. 53 did not complete the survey, and 16 were removed due to failing the validation question or responding with less than 3 unique sentences. This results in 33 survey results for GDP-ZERO, 28 for ChatGPT, and 33 for RAP.
H Analysis of Planned Dialogue Actions
In Figure A1 we present the distribution of planned dialogue actions for each planner, RAP, ChatGPT, and GDP-ZERO, during interactive evaluations. In general, the planned dialogue actions using ChatGPT and GDP-ZERO are unevenly distributed across different stages of the dialogue. Across different stages of the conversation, ChatGPT and GDP-ZERO shared the most frequent DA: "greeting" during turns 1-2, "logical appeal" during turns 3-5, and "emotion appeal" during turns 6-10. However, during turns 3-5 GDP-ZERO had a relatively even preference between "credibility appeal", "emotion appeal", and "logical appeal", while ChatGPT strongly preferred "logical appeal". Additionally, we find that throughout the conversations ChatGPT prefers "emotion appeal" significantly over other dialogue actions, while GDP-ZERO balances between "emotion appeal" and "logical appeal", and RAP prefers "task related inquiry". It is also worth noting that GDP-ZERO performs "proposition of donation" less frequently than ChatGPT at turns 1-5, and more frequently at turns 6-10. This indicates that GDP-ZERO potentially adopts a more conservative strategy than ChatGPT at early stages of the conversation, in order to avoid persuasion failure (also see Table A11).
I Example Conversations
We present multiple dialogues from the interactive evaluation (Section 4.2), and compare the strengths and weaknesses of the three planners: RAP, ChatGPT, and GDP-ZERO. In Tables A10, A12 and A14 we present conversations where the persuadee "strongly agreed" that the chatbot had increased his/her intention to donate, for GDP-ZERO, ChatGPT, and RAP, respectively. In Tables A11, A13 and A15 we present conversations where the persuadee rated "neutral" on the same question, for GDP-ZERO, ChatGPT, and RAP, respectively. We observe that in Table A11, while GDP-ZERO did not succeed in convincing the user to donate, it kept attempting various strategies and never performed "proposition of donation". On the other hand, in Tables A13 and A15 both ChatGPT and RAP perform "proposition of donation" at an arguably unsuitable time.

One-shot example shared by the prompting templates (Tables A6-A9):
Persuader: Save the Children is an organization that helps children in developing countries, by promoting children's rights and providing relief. It is an amazing charity that helps kids who are in desperate need. They can help with safety, education and more.
Persuadee (user): That sounds great. I believe in this charity, but still wonder how much of the money I donate actually helps. I am always worried if I donate it will just go to some higher up that is living the high life.
Prompt: The Persuader uses an emotion appeal to convince the Persuadee.
Generation (assistant): Save the Children is an international non-governmental organization that is committed to improving the lives of children through education, healthcare, and other support programs. They work in over 100 countries around the world to help children who are living in poverty or affected by conflict.

Table A6: Prompting LLM to generate a system response conditioned on a dialogue act. "Role" is used to be consistent with ChatGPT's API. Blue texts are prompts characteristic of this task. The one-shot example changes its formatting across different tasks.

Table A7: Prompting LLM to simulate a user response conditioned on past (simulated) history. "Role" is used to be consistent with ChatGPT's API. Blue texts are prompts characteristic of this task. This one-shot example is the same as in Table A6, but a) with the roles reversed and b) user responses prepended with a label. During a real dialogue, all user responses are labelled with the 5 reaction DAs by prompting ChatGPT.

Sampled user reactions: ['neutral', 'neutral', 'neutral', 'positive reaction', 'positive reaction', 'positive reaction', 'donate', 'neutral', 'neutral', 'neutral']; estimation: v = p_success = 0.25.

Table A8: Prompting LLM as a value function at the current state. Blue texts are prompts characteristic of this task. "Persuader: Would you be interested in donating to Save the Children?" is manually appended at the end of the turn, and the LLM is prompted 10 times to measure the user's current inclination to donate. "Role" is used to be consistent with ChatGPT's API. This one-shot example is the same as in Table A7.

Estimated prior: p(credibility appeal) = 0.03, p(emotion appeal) = 0.13, p(proposition of donation) = 0.03, p(logical appeal) = 0.43, p(task related inquiry) = 0.03, p(greeting) = 0.03, p(other) = 0.03.

Table A9: Prompting LLM as a prior policy at the current state. "Role" is used to be consistent with ChatGPT's API. Blue texts are prompts characteristic of this task. The system response is sampled 15 times. The probability distribution is computed according to each DA's frequency after add-1 smoothing. This one-shot example is the same as in Table A6, but with dialogue actions appended in front of each system response.

User: Yes, I have heard of Save the Children before. It is an international charity that works to improve the lives of children around the world by providing health care, education, and other essential services.
ChatGPT [logical appeal]: That's right! Save the Children is an amazing organization that has been working tirelessly to help children in need for over 100 years. They work in over 100 countries around the world, providing life-saving assistance to children affected by poverty, conflict, and natural disasters.
User: Yes, Save the Children is an amazing organization.
ChatGPT [proposition of donation]: Absolutely. Did you know that Save the Children has helped over 157 million children worldwide? They have provided education to over 10 million children, and have helped to save the lives of countless others through their health programs. Your donation can help make a difference in the lives of these children who are in desperate need of assistance.
User: Yes, I did know that. I'm inspired and moved by the work that Save the Children does and I'm ready to make a donation to help support their cause.
ChatGPT [proposition of donation]: That's wonderful to hear! Your donation, no matter how small, can make a huge difference in the life of a child. Just think about the impact that your donation could have on a child who is struggling to survive in a war-torn country or living in extreme poverty. Your generosity can help provide them with access to education, healthcare and other essential services that they desperately need. Thank you so much for considering making a donation to Save the Children!
User: You are very welcome! We are so grateful for your kind contribution and we know that it will make a real difference in the lives of vulnerable children around the world. Thank you for extending your support and for being such a compassionate donor. Your generosity is truly appreciated!
ChatGPT [proposition of donation]: Thank you so much for your kind words. Would you like to make a donation now? Even a small amount can go a long way in helping children in need.
User: Yes, I would love to make a donation. Please tell me how I can do that.
ChatGPT [other]: That's great to hear! You can make a donation directly on the Save the Children website. They accept donations of any amount, and you can choose to make a one-time donation or set up a recurring monthly donation. Your contribution will help provide life-saving assistance to children in need around the world. Thank you so much for your generosity and compassion!

Table A12: Example conversation with ChatGPT as the policy planner. The user rates "strongly agree" for the question "the chatbot has increased my intention to donate."

ChatGPT: Save the Children is an organization that works to improve the lives of children in developing countries and war zones. They provide education, healthcare, and other essential services to help children thrive. I think it's a really important cause.
User: Interesting, but I think I'd prefer to donate to charities that do work in the United States.
ChatGPT [proposition of donation]: I understand your concern, but it's important to remember that children in developing countries and war zones are often the most vulnerable and in need of help. Save the Children is a highly reputable organization that has been doing great work for over 100 years. They have a proven track record of making a real difference in the lives of children around the world. Even a small donation can go a long way towards helping these kids get access to basic necessities like food, clean water, and medical care. Would you be willing to consider making even a small donation?
User: Honestly, my concerns right now are more local than global. I hope you can understand that.
ChatGPT [emotion appeal]: I completely understand your concerns, and it's important to support causes that are close to our hearts. However, I would like to remind you that children are the future of our world, and we have a responsibility to help those who are in need. Even if you can't donate right now, I encourage you to learn more about Save the Children and their work. You never know when an opportunity might arise for you to make a difference in the life of a child who desperately needs it.
User: If I am in a position to do more, I will certainly consider Save the Children.
ChatGPT [credibility appeal]: That's great to hear. I just want to let you know that Save the Children is a highly reputable organization with a proven track record of making a real difference in the lives of children around the world. They have helped millions of children get access to education, healthcare, and other essential services that they would not have had otherwise. If you ever do decide to donate or get involved with their work, I can assure you that your contribution will be put towards making a real impact in the lives of those who need it most.
User: Got it. Thanks.
ChatGPT [other]: You're welcome. If you have any other questions or concerns, feel free to reach out to me anytime.

Table A13: Example conversation with ChatGPT as the policy planner. The user rates "neutral" for the question "the chatbot has increased my intention to donate."
Figure 1: GDP-ZERO with ChatGPT backbone. During Selection, simulations are either sampled from cache or newly generated. During Expansion and Evaluation, we prompt ChatGPT for the prior policy π and value estimation.
Figure A1: Distribution of planned dialogue actions for each planner during interactive evaluations.
Algorithm   n    k   Q_0    Win Rate v. Ground Truth
Prompting   -    -   -      88.84 ± 0.75%
GDP-ZERO    5    3   0.00   87.22 ± 0.61%
GDP-ZERO    10   3   0.00   90.69 ± 1.60%
GDP-ZERO    20   3   0.00   88.86 ± 1.24%
GDP-ZERO    50   3   0.00   89.82 ± 1.10%

Table 1: Static evaluation with ChatGPT as backbone and judge. Results are µ ± σ, repeated over three runs.
GDP-ZERO (ChatGPT)
n    k   Q_0    Run Time   Win Rate v. ChatGPT
5    3   0.00   18s        50.65 ± 3.31%
10   3   0.00   36s        50.86 ± 1.10%
20   3   0.00   75s        53.24 ± 1.91%
50   3   0.00   740s       59.32 ± 1.84%
10   1   0.00   16s        49.57 ± 2.01%
10   2   0.00   29s        51.30 ± 1.59%
10   3   0.25   36s        57.79 ± 2.95%
10   3   0.50   36s        53.03 ± 2.00%

Table 2: Static evaluation with ChatGPT as backbone and judge. Results are µ ± σ, repeated over three runs.
Table 3: Comparison between using rule-based, ChatGPT, and GDP-ZERO as planners, with ChatGPT for response generation/backbone. Results are µ ± σ. All scores scaled to [1, 5] except "donation prob." ∈ [0, 1].
Wanwei He, Yinpei Dai, Yinhe Zheng, Yuchuan Wu, Zheng Cao, Dermot Liu, Peng Jiang, Min Yang, Fei Huang, Luo Si, et al. 2022. Galaxy: A generative pre-trained model for task-oriented dialog with semi-supervised learning and explicit policy injection. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 10749-10757.

Xingwei He, Zhenghao Lin, Yeyun Gong, A Jin, Hang Zhang, Chen Lin, Jian Jiao, Siu Ming Yiu, Nan Duan, Weizhu Chen, et al. 2023. Annollm: Making large language models to be better crowdsourced annotators. arXiv preprint arXiv:2303.16854.

Youngsoo Jang, Jongmin Lee, and Kee-Eung Kim. 2020. Bayes-adaptive monte-carlo planning and learning for goal-oriented dialogues. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7994-8001.

Hyunwoo Kim, Jack Hessel, Liwei Jiang, Ximing Lu, Youngjae Yu, Pei Zhou, Ronan Le Bras, Malihe Alikhani, Gunhee Kim, Maarten Sap, et al. 2022. Soda: Million-scale dialogue distillation with social commonsense contextualization. arXiv preprint arXiv:2212.10465.

Esther Levin, Roberto Pieraccini, and Wieland Eckert. 1997. Learning dialogue strategies within the markov decision process framework. In 1997 IEEE Workshop on Automatic Speech Recognition and Understanding Proceedings, pages 72-79. IEEE.

Mike Lewis, Denis Yarats, Yann Dauphin, Devi Parikh, and Dhruv Batra. 2017. Deal or no deal? end-to-end learning of negotiation dialogues. In Conference on Empirical Methods in Natural Language Processing.

Yu Li, Josh Arnold, Feifan Yan, Weiyan Shi, and Zhou Yu. 2021. Legoeval: An open-source toolkit for dialogue system evaluation via crowdsourcing. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, pages 317-324.

Bing Liu and Ian Lane. 2017. Iterative policy learning in end-to-end trainable task-oriented neural dialog models. In 2017 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 482-489. IEEE.

Bing Liu, Gökhan Tür, Dilek Hakkani-Tur, Pararth Shah, and Larry Heck. 2018. Dialogue learning with human teaching and feedback in end-to-end trainable task-oriented dialogue systems. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2060-2069.

Siyang Liu, Chujie Zheng, Orianna Demasi, Sahand Sabour, Yu Li, Zhou Yu, Yong Jiang, and Minlie Huang. 2021. Towards emotional support dialog systems. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics.

Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023a. Gpteval: Nlg evaluation using gpt-4 with better human alignment. arXiv preprint arXiv:2303.16634.

Yiheng Liu, Tianle Han, Siyuan Ma, Jiayue Zhang, Yuanyuan Yang, Jiaming Tian, Hao He, Antong Li, Mengshen He, Zhengliang Liu, Zihao Wu, Dajiang Zhu, Xiang Li, Ning Qiang, Dingang Shen, Tianming Liu, and Bao Ge. 2023b. Summary of chatgpt/gpt-4 research and perspective towards the future of large language models.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach.

Yiren Liu and Halil Kilicoglu. 2023. Commonsense-aware prompting for controllable empathetic dialogue generation. arXiv preprint arXiv:2302.01441.
Zihan Liu, Mostofa Patwary, Ryan Prenger, Shrimai Prabhumoye, Wei Ping, Mohammad Shoeybi, and Bryan Catanzaro. 2022. Multi-stage prompting for knowledgeable dialogue generation. In Findings of the Association for Computational Linguistics: ACL 2022, pages 1317-1337.

Andrea Madotto, Zhaojiang Lin, Genta Indra Winata, and Pascale Fung. 2021. Few-shot bot: Prompt-based learning for dialogue systems. arXiv preprint arXiv:2110.08118.

Shikib Mehri and Maxine Eskenazi. 2021. Schema-guided paradigm for zero-shot dialog. In Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 499-508, Singapore and Online. Association for Computational Linguistics.

OpenAI. 2022. Openai: Introducing chatgpt.

Alexander Pan, Chan Jun Shern, Andy Zou, Nathaniel Li, Steven Basart, Thomas Woodside, Jonathan Ng, Hanlin Zhang, Scott Emmons, and Dan Hendrycks. 2023. Do the rewards justify the means? measuring trade-offs between rewards and ethical behavior in the machiavelli benchmark. arXiv preprint arXiv:2304.03279.

Baolin Peng, Xiujun Li, Jianfeng Gao, Jingjing Liu, and Kam-Fai Wong. 2018. Deep Dyna-Q: Integrating planning for task-completion dialogue policy learning. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2182-2192, Melbourne, Australia. Association for Computational Linguistics.

Diego Perez Liebana, Jens Dieskau, Martin Hunermund, Sanaz Mostaghim, and Simon Lucas. 2015. Open loop search for general video game playing. In Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, GECCO '15, page 337-344, New York, NY, USA. Association for Computing Machinery.

David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. 2016. Mastering the game of go with deep neural networks and tree search.
Sanghwan Bae, Donghyun Kwak, Sungdong Kim,
Donghoon Ham, Soyoung Kang, Sang-Woo Lee, and
Woomyoung Park. 2022. Building a role specified
open-domain dialogue system leveraging large-scale
language models. In Proceedings of the 2022 Con-
ference of the North American Chapter of the Asso-
ciation for Computational Linguistics: Human Lan-
guage Technologies, pages 2128-2150.
Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang
Tseng, Iñigo Casanueva, Ultes Stefan, Ramadan Os-
man, and Milica Gašić. 2018. Multiwoz -a large-
scale multi-domain wizard-of-oz dataset for task-
oriented dialogue modelling. In Proceedings of the
2018 Conference on Empirical Methods in Natural
Language Processing (EMNLP).
Yan Cao, Keting Lu, Xiaoping Chen, and Shiqi Zhang.
2020. Adaptive dialog policy learning with hind-
sight and user modeling. In Proceedings of the 21th
Annual Meeting of the Special Interest Group on
Discourse and Dialogue, pages 329-338, 1st virtual
meeting. Association for Computational Linguistics.
Guillaume MJ B Chaslot, Mark HM Winands, and
H Jaap van Den Herik. 2008. Parallel monte-carlo
tree search. In Computers and Games: 6th Interna-
tional Conference, CG 2008, Beijing, China, Septem-
ber 29-October 1, 2008. Proceedings 6, pages 60-71.
Springer.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming
Yuan, Henrique Ponde de Oliveira Pinto, Jared Ka-
plan, Harri Edwards, Yuri Burda, Nicholas Joseph,
Greg Brockman, Alex Ray, Raul Puri, Gretchen
Krueger, Michael Petrov, Heidy Khlaaf, Girish Sas-
try, Pamela Mishkin, Brooke Chan, Scott Gray,
Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz
Kaiser, Mohammad Bavarian, Clemens Winter,
Philippe Tillet, Felipe Petroski Such, Dave Cum-
mings, Matthias Plappert, Fotios Chantzis, Eliza-
beth Barnes, Ariel Herbert-Voss, William Hebgen
Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie
Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain,
William Saunders, Christopher Hesse, Andrew N.
Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan
Morikawa, Alec Radford, Matthew Knight, Miles
Brundage, Mira Murati, Katie Mayer, Peter Welinder,
Bob McGrew, Dario Amodei, Sam McCandlish, Ilya
Sutskever, and Wojciech Zaremba. 2021. Evaluating
large language models trained on code.
Maximillian Chen, Alexandros Papangelis, Chenyang
Tao, Seokhwan Kim, Andy Rosenbaum, Yang Liu,
Zhou Yu, and Dilek Hakkani-Tur. 2023a. PLACES:
Prompting language models for social conversation
synthesis. Findings of the Association for Computa-
tional Linguistics: EACL 2023.
Maximillian Chen, Weiyan Shi, Feifan Yan, Ryan Hou,
Jingwen Zhang, Saurav Sahay, and Zhou Yu. 2022.
Seamlessly integrating factual information and social
content with persuasive dialogue. In Proceedings of
the 2nd Conference of the Asia-Pacific Chapter of
the Association for Computational Linguistics and
the 12th International Joint Conference on Natural
Language Processing, pages 399-413.
Maximillian Chen, Xiao Yu, Weiyan Shi, and Zhou Yu.
2023b. Controllable mixed-initiative dialogue gen-
eration through prompting. Proceedings of the 2023
Conference of the Association for Computational Lin-
guistics.
Yi Cheng, Wenge Liu, Wenjie Li, Jiashuo Wang, Ruihui
Zhao, Bang Liu, Xiaodan Liang, and Yefeng Zheng.
2022. Improving multi-turn emotional support dia-
logue generation with lookahead strategy planning.
In Proceedings of the 2022 Conference on Empiri-
cal Methods in Natural Language Processing, pages
3014-3026, Abu Dhabi, United Arab Emirates. As-
sociation for Computational Linguistics.
Fabrizio Gilardi, Meysam Alizadeh, and Maël Kubli.
2023. Chatgpt outperforms crowd-workers for text-
annotation tasks.
Biyang Guo, Xin Zhang, Ziyuan Wang, Minqi Jiang,
Jinran Nie, Yuxuan Ding, Jianwei Yue, and Yupeng
Wu. 2023. How close is chatgpt to human experts?
comparison corpus, evaluation, and detection.
Prakhar Gupta, Cathy Jiao, Yi-Ting Yeh, Shikib Mehri,
Maxine Eskenazi, and Jeffrey P Bigham. 2022. In-
structdial: Improving zero and few-shot generaliza-
tion in dialogue through instruction tuning. EMNLP.
Christopher D Rosin. 2011. Multi-armed bandits with
episode context. Annals of Mathematics and Artifi-
cial Intelligence, 61(3):203-230.
David Silver, Julian Schrittwieser, Karen Simonyan,
Ioannis Antonoglou, Aja Huang, Arthur Guez,
Thomas Hubert, Lucas Baker, Matthew Lai, Adrian
Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui,
Laurent Sifre, George van den Driessche, Thore
Graepel, and Demis Hassabis. 2017. Mastering the
game of Go without human knowledge. Nature,
550(7676):354-359.
Dirk Väth, Lindsey Vanderlyn, and Ngoc Thang Vu.
2023. Conversational tree search: A new hybrid di-
alog task. In Proceedings of the 17th Conference of
the European Chapter of the Association for Compu-
tational Linguistics, pages 1264-1280, Dubrovnik,
Croatia. Association for Computational Linguistics.
Shuohang Wang, Yang Liu, Yichong Xu, Chenguang
Zhu, and Michael Zeng. 2021. Want to reduce
labeling cost? gpt-3 can help. arXiv preprint
arXiv:2108.13487.
Sihan Wang, Kaijie Zhou, Kunfeng Lai, and Jianping
Shen. 2020. Task-completion dialogue policy learn-
ing via Monte Carlo tree search with dueling network.
In Proceedings of the 2020 Conference on Empirical
Methods in Natural Language Processing (EMNLP),
pages 3461-3471, Online. Association for Computa-
tional Linguistics.
Xuewei Wang, Weiyan Shi, Richard Kim, Yoojung Oh,
Sijia Yang, Jingwen Zhang, and Zhou Yu. 2019. Per-
suasion for good: Towards a personalized persuasive
dialogue system for social good. In Proceedings of
the 57th Annual Meeting of the Association for Com-
putational Linguistics, pages 5635-5649, Florence,
Italy. Association for Computational Linguistics.
Richard Weber. 2010. Optimization and control. Uni-
versity of Cambridge.
Jingxuan Yang, Si Li, and Jun Guo. 2021. Multi-turn
target-guided topic prediction with Monte Carlo tree
search. In Proceedings of the 18th International Con-
ference on Natural Language Processing (ICON),
pages 324-334, National Institute of Technology
Silchar, Silchar, India. NLP Association of India (NL-
PAI).
Yuting Yang, Wenqiang Lei, Juan Cao, Jintao Li,
and Tat-Seng Chua. 2022. Prompt learning for
few-shot dialogue state tracking. arXiv preprint
arXiv:2201.05780.
Cong Zhang, Huilin Jin, Jienan Chen, Jinkuan Zhu,
and Jinting Luo. 2020a. A hierarchy mcts algorithm
for the automated pcb routing. In 2020 IEEE 16th
International Conference on Control & Automation
(ICCA), pages 1366-1371.
Haodi Zhang, Zhichao Zeng, Keting Lu, Kaishun Wu,
and Shiqi Zhang. 2022a. Efficient dialog policy learn-
ing by reasoning with contextual knowledge. In Pro-
ceedings of the AAAI Conference on Artificial Intelli-
gence, volume 36, pages 11667-11675.
Shuo Zhang, Junzhou Zhao, Pinghui Wang, Yu Li,
Yi Huang, and Junlan Feng. 2022b. " think before
you speak": Improving multi-action dialog policy
by planning single-action dialogs. arXiv preprint
arXiv:2204.11481.
Zheng Zhang, Lizi Liao, Xiaoyan Zhu, Tat-Seng Chua,
Zitao Liu, Yan Huang, and Minlie Huang. 2020b.
Learning goal-oriented dialogue policy with opposite
agent awareness. In Proceedings of the 1st Confer-
ence of the Asia-Pacific Chapter of the Association
for Computational Linguistics and the 10th Interna-
tional Joint Conference on Natural Language Pro-
cessing, pages 122-132.
Tiancheng Zhao and Maxine Eskenazi. 2018. Zero-shot
dialog generation with cross-domain latent actions.
In Proceedings of the 19th Annual SIGdial Meeting
on Discourse and Dialogue, pages 1-10.
Algorithm 1 PDP-Zero(M_θ)
Require: generative LLM M_θ
Require: dialogue history h_i until turn i
Require: dialogue action space a ∈ A
Require: hyperparameters n, k, c_p, Q_0
1: Repeat for n searches:
2: initialize root node s^tr_i, s^tr_i.h ← {h_i}
3: s^tr ← s^tr_i
4: // selection
5: while s^tr is not a leaf node do
6:   a′ ← arg max_a PUCT(s^tr, a; c_p)
7:   h^tr ← sample(s^tr.h)
8:   s^tr ← s^tr ∪ a′
9:   if len(s^tr.h) < k then
10:    generate s^tr.h ← M_θ(h^tr • a′)
11: end while
12: h^tr ← sample(s^tr.h)
13: // expansion
14: generate p(a|s^tr) ← M_θ(h^tr)
15: s^tr.p ← p(a|s^tr), s^tr.Q ← Q_0, s^tr.N ← 0
16: // evaluation
17: generate v(s^tr) ← M_θ(h^tr)
18: // backpropagation
19: while s^tr ≠ s^tr_i do
20:   update h^tr.v with Eq. (3)
21:   save s^tr.h ← s^tr.h ∪ h^tr
22:   (s^tr, a) ← back to parent of s^tr
23: end while
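For concreteness, the following is a minimal Python sketch of the open-loop search that Algorithm 1 describes. It is not the authors' released implementation: the llm object, with hypothetical continue_dialogue, prior and value methods, stands in for calls to M_θ; the Q update is a plain running mean rather than the paper's Eq. (3); and the PUCT score uses the standard form from Rosin (2011).

import math
import random

class Node:
    """Open-loop tree node: caches up to k simulated dialogue continuations
    per state instead of committing to a single deterministic one."""
    def __init__(self, prior, q0):
        self.h = []          # cached dialogue histories for this node
        self.children = {}   # action -> Node
        self.p = prior       # prior probability p(a|s) from the LLM
        self.Q = q0          # action-value estimate
        self.N = 0           # visit count

def puct(parent, child, c_p):
    # Standard PUCT score: value plus a prior-weighted exploration bonus.
    return child.Q + c_p * child.p * math.sqrt(parent.N + 1) / (1 + child.N)

def pdp_zero_search(history, llm, actions, n=20, k=3, c_p=1.0, q0=0.0):
    root = Node(prior=1.0, q0=q0)
    root.h.append(history)                    # line 2: s_i.h <- {h_i}
    for _ in range(n):                        # line 1: repeat for n searches
        node, path = root, [root]
        while node.children:                  # lines 5-11: selection
            a = max(actions, key=lambda b: puct(node, node.children[b], c_p))
            h = random.choice(node.h)         # line 7: sample a cached history
            node = node.children[a]           # line 8: descend along a'
            if len(node.h) < k:               # lines 9-10: grow the open-loop cache
                node.h.append(llm.continue_dialogue(h, a))
            path.append(node)
        h = random.choice(node.h)             # line 12
        prior = llm.prior(h)                  # lines 13-15: expansion
        for a in actions:
            child = Node(prior[a], q0)
            child.h.append(llm.continue_dialogue(h, a))
            node.children[a] = child
        v = llm.value(h)                      # line 17: evaluation
        for nd in path:                       # lines 19-23: backpropagation
            nd.N += 1
            nd.Q += (v - nd.Q) / nd.N         # running mean in place of Eq. (3)
    return max(actions, key=lambda a: root.children[a].N)

The returned action is the most-visited child of the root, which is the usual way an MCTS policy commits to a move once the search budget is spent.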
Table A1: Static evaluation using the first 20 dialogs of P4G with ChatGPT as judge. GT refers to Ground Truth. Results are µ ± σ repeated over three runs.

Algorithm        Backbone  Win Rate v. GT
Prompting        ChatGPT   87.21 ± 0.60%
PDP-Zero         ChatGPT   91.13 ± 0.30%
w/o OpenLoop     ChatGPT   88.09 ± 0.81%
w/o Resp.Select  ChatGPT   91.03 ± 0.75%
Table A3: Annotated dialog actions in the P4G dataset.

… (Table A9) to obtain ChatGPT's own prior distribution of the next …
Table A4: Mapping of persuader dialogue actions to natural language prompts used for prompting LLMs.

Dialogue Act             Category      Natural Language Form
Logical Appeal           Strategy      The Persuader uses reasoning and evidence to convince the Persuadee.
Emotion Appeal           Strategy      The Persuader uses an emotion appeal to convince the Persuadee.
Credibility Appeal       Strategy      The Persuader establishes credibility of Save the Children by citing its impact.
Task Related Inquiry     Strategy      The Persuader asks about the Persuadee's knowledge or opinion related to Save the Children.
Proposition of Donation  Non-Strategy  The Persuader asks if the Persuadee would like to make a small donation.
Greeting                 Non-Strategy  The Persuader greets the Persuadee.
Other                    Non-Strategy  The Persuader responds to the Persuadee without using any persuasive strategy.
Prompt:
The following is background information about Save the Children. Save the Children is head-quartered in London, and they work to help fight poverty around the world. Children need help in developing countries and war zones. Small donations like $1 or $2 go a long way to help.
The following is a conversation between a Persuader and a Persuadee about a charity called Save the Children. The Persuader is trying to persuade the Persuadee to donate to Save the Children.
{insert dialogue context}
Which of the following responses can better help the Persuader convince the Persuadee to donate to Save the Children? Why?
A. Persuader: {insert response a}
B. Persuader: {insert response b}
C. Can't tell.
You can choose from either A, B, or C.
Your choice:

Generation:
A. (rest truncated)
Role (for ChatGPT): Utterance

One-shot example:
system: Save the Children is head-quartered in London, and they work to help fight poverty around the world. Children need help in developing countries and war zones. Small donations like $1 or $2 go a long way to help. You are Persuader who is trying to persuade the Persuadee to donate to a charity called Save the Children. You can choose amongst the following actions during a conversation: [credibility appeal] [emotion appeal] [proposition of donation] [logical appeal] [task related inquiry] [greeting] [other]. The following is an example conversation between a Persuader and a Persuadee.
user: Persuadee: Hello. How are you?
assistant: Persuader: [task related inquiry] Very well. I'm just up organizing info for my charity called Save the Children. Have you heard of this charity before?
user: Persuadee: No, I have not. Can you tell me more?
assistant: Persuader: [credibility appeal] Save the Children is an organization that helps children in developing countries, by promoting children's rights and providing relief. It is an amazing charity that helps kids who are in desperate need. They can help with safety, education and more.
user: Persuadee: That sounds great. I believe in this charity, but still wonder how much of the money I donate actually helps. I am always worried if I donate it will just go to some higher up that is living the high life.
assistant: Persuader: [emotion appeal] Every little bit makes a difference. When you have people who are so poor, it's amazing what a tiny amount can do. I usually donate in hopes I can at least feel like I did my part. If I donated and some corrupt person took it, that's the worst karma and even worst scandal imaginable.
user: Persuadee: With that all said I do feel like any organization that aims to help the children I am more inclined to donate to them than most. I think helping children is an important thing as they are our future!
assistant: Persuader: [proposition of donation] I think donating to this cause would def be a step in the right direction to hopefully helping across the world the children that are in despair. I don't want you to donate any more than you want, so if you want to donate how much do you to do?
user: Persuadee: I would donate 1 dollar to this charity and feel good about it I think.

Real dialog system: The following is a new conversation between Persuader (you) and a Persuadee.
assistant: Persuader: [greeting] Hello. How are you?
user: Persuadee: I am good. What about you?
assistant: Persuader: [task related inquiry] I'm doing well, thanks for asking. I wanted to talk to you about an important cause that I am passionate about - Save the Children. Have you heard of them before?
user: Persuadee: No I have not.
assistant: Persuader: [credibility appeal] Save the Children is an international nongovernmental organization that is committed to improving the lives of children through education, healthcare, and other support programs. They work in over 100 countries around the world to help children who are living in poverty or affected by conflict.
user: Persuadee: It sounds like a great cause.

Generation:
assistant: ['emotion appeal', 'emotion appeal', 'emotion appeal', 'logical appeal', 'logical appeal', 'logical appeal', 'logical appeal', 'logical appeal', 'logical appeal', 'logical appeal', 'logical appeal', 'logical appeal', 'logical appeal', 'logical appeal', 'logical appeal']
Table A10: Example conversation with GDP-ZERO as the policy planner. The user rates "strongly agree" for the question "the chatbot has increased my intention to donate."

Party (Dialog act): Utterance
GDP-ZERO (greeting): Hello. How are you doing today?
User: I'm doing great, thanks for asking! How about you?
GDP-ZERO (task related inquiry): I'm doing well, thanks. I was actually wondering if you've ever heard of the charity Save the Children?
User: Yes, I have. Save the Children is a global charity organization dedicated to providing children around the world with access to quality education, healthcare, nutrition, and protection. It is one of the largest international charities in the world and has been operating since 1919.
GDP-ZERO (logical appeal): That's great to hear! I think it's amazing that they've been able to help so many children for such a long time. Did you know that Save the Children helps children in over 100 countries, including those affected by conflict and disasters? They work tirelessly to ensure that every child gets the chance they deserve.
User: No, I didn't know that! That is really impressive. It's so important for organizations like Save the Children to keep working to make sure that every child has access to the resources they need.
GDP-ZERO (proposition of donation): Absolutely! And did you know that even a small donation can make a huge impact? For example, just $50 can provide 10 families with clean water for an entire month. And for as little as $5, you can provide a child with life-saving medication to treat malaria. Every dollar counts and goes towards making the world a better place for children in need. Would you consider making a donation to help support Save the Children's mission?
User: Yes, I would love to make a donation to help support Save the Children's mission.
GDP-ZERO (other): That's fantastic to hear! Your donation will go a long way towards helping children in need. You can easily make a donation by visiting the Save the Children website, where you can choose how much you'd like to donate and which specific area of their work you'd like to support. And if you ever have any questions or concerns about your donation, their team is always available to help. Thank you so much for considering this wonderful cause!
User: You're very welcome! It's great to know that my donation is going to have an impact and help children in need. Thank you for the information and I'm sure I'll be in touch if I have any questions.
GDP-ZERO (other): I'm glad to hear that, thank you for your support! And if you ever have any questions, don't hesitate to reach out. Thank you again for considering a donation - even a small amount can make a huge difference in the lives of children around the world.
User: You're very welcome, it's my pleasure to help. I'm sure the donations will make a big difference. Thank you for allowing me to be a part of it!
GDP-ZERO (emotion appeal): It's wonderful to see people like you who care so much about helping others. Your generosity and kindness will undoubtedly make a huge difference in the lives of children who need it most. Thank you for your support and for being a part of this amazing cause!
User: Thank you for the kind words! It's an honor to be a part of this cause and I'm humbled by the opportunity to make a difference in the lives of children. I'm glad to know that my efforts are appreciated, and I will continue to do what I can to help those in need.
Dialog act
Utterance
ChatGPT
greeting
Hello.
Table A12 :
A12Example conversation with ChatGPT as the policy planner. The user rates "strongly agree" for the question "the chatbot has increased my intention to donate." How are you doing today?User Fine, thanks. And you? ChatGPT task related inquiry I'm doing well, thank you. I wanted to talk to you about a charity called Save the Children. Have you heard of it before? User I have, but I don't know much about it.Party
Party (Dialog act): Utterance
ChatGPT (greeting): Hello. How are you doing today?
User: Fine, thanks. And you?
ChatGPT (task related inquiry): I'm doing well, thank you. I wanted to talk to you about a charity called Save the Children. Have you heard of it before?
User: I have, but I don't know much about it.
ChatGPT (logical appeal): …
In extensive initial experiments, we found optimized RoBERTa (Liu et al., 2019) fine-tuning on binary predictions of whether to "propose to donate" yields 74% accuracy at best.
Version "gpt-3.5-turbo" as of 2023 April 3 ChatGPT can outperform crowdworkers on many annotation and evaluation tasks (e.g. Gilardi et al. (2023); He et al. (2023); Pan et al. (2023); Wang et al. (2021); Liu et al. (2023a))
Party (Dialog act): Utterance
GDP-ZERO (greeting): Hello. How are you doing today?
User: I'm doing well. How are you doing?
GDP-ZERO (task related inquiry): I'm doing great, thank you. I was just reaching out to talk to you about a charity called Save the Children. Have you heard of it before?
User: It sounds familiar, but I don't think that I have.
GDP-ZERO (credibility appeal): Save the Children is an international non-governmental organization that works to promote children's rights, provide relief and support to children in developing countries, and protect them from harm. (rest omitted...)
User: That sounds very helpful.
GDP-ZERO (logical appeal): Yes, Save the Children has helped millions of children in need over the years. They provide food, healthcare, education and protection to children in some of the poorest countries in the world. It's really amazing what they have been able to achieve so far. Do you think you would be interested in supporting this charity?
User: I'm not sure. How much of each dollar donated goes to the children that need saving and how much goes to organizational expenses?
GDP-ZERO (logical appeal): That's a great question. Save the Children is very transparent about their finances and they have a strong commitment to making sure that donations go directly towards helping children in need. (rest omitted...)
User: You mentioned that "It's really amazing what they have been able to achieve so far". What, specifically, have they been able to achieve so far?
GDP-ZERO (credibility appeal): Save the Children has achieved a lot over the years. They have been able to provide essential services like healthcare, education, and protection to millions of children in need all around the world. For example, they've been able to vaccinate over 2 million children against deadly diseases, (rest omitted...)
User: That sounds really great! Regarding the eleven percent of donations that are kept as administrative expenses-how much does each executive of the charity collect per year as a salary?
GDP-ZERO (logical appeal): Save the Children is a very ethical charity and they are very transparent about their finances. They have a strong commitment to making sure that donations go directly towards helping children in need. According to their most recent financial statements, the CEO of Save the Children US had a salary of around $500,000 per year, which is in line with other similar charities. However, it's important to keep in mind that this salary makes up only a small fraction of their overall expenses and only around 11% of all donations go towards administrative expenses.
User: Jaime, it just doesn't seem right to me.
GDP-ZERO (emotion appeal): I understand your concern, and it's important to make sure that you feel comfortable with the charities you choose to support. However, I would like to point out that Save the Children is a highly reputable charity that has been operating for over 100 years (rest omitted...).
User: What do the lowest level employees make per hour?
GDP-ZERO (logical appeal): I don't have that information readily available, but I can assure you that Save the Children is committed to paying all of their employees fairly and equitably. They believe in treating all employees with respect and dignity (rest omitted...).
User: So you're saying that the ends justify the means?
GDP-ZERO (logical appeal): No, that's not what I'm saying. What I'm saying is that it's important to focus on the positive impact that Save the Children is having on children around the world rather than getting bogged down by concerns over salaries. (rest omitted...)
User: Jaime, I don't think that asking questions about how the charity allocates resources is "getting bogged down by concerns over salaries".
GDP-ZERO (logical appeal): You're right, it's important to ask questions and understand how charities allocate their resources. Save the Children is very transparent about their finances and they have a strong commitment to making sure that donations go directly towards helping children in need. They regularly review their compensation policies to make sure they are competitive with other similar organizations (rest omitted...)
User: This doesn't feel like "a high level of transparency" to me, Jaime. They're not feeding you all the data. It's very opaque.
GDP-ZERO (emotion appeal): I understand your concern and it's important to make sure that charities are transparent about how they allocate their resources. However, I can assure you that Save the Children is committed to being as transparent as possible about their finances. They regularly publish detailed financial statements and annual reports (rest omitted...)
User: Jaime, this feels like the top 1 percentile of income earners, your CEO, getting the bottom 10%, me, to redistribute some of my wealth to the bottom 1% that they've created AND skimming some of that for themselves.
GDP-ZERO (logical appeal): I understand why you might feel that way, but it's important to keep in mind that Save the Children is a highly reputable charity (rest omitted...).

User: In the UK, the Welfare Reform and Work Act abolished the Child Poverty Act, which included targets to reduce poverty and measured poverty based on family income.
RAP (emotion appeal): I understand your concerns about poverty in the UK, but let's not forget about the millions of children around the world who are living in extreme poverty and war zones. They need our help now more than ever.
User: Soaring rent costs, insecure work and low pay plus a patchy welfare system.
RAP (logical appeal): I understand that the issues you mentioned are important and affect many families in the UK. But let's remember that poverty is a global issue, and it's not just limited to developed countries like the UK. Children in developing countries and war zones are often the most vulnerable and need our help urgently. By donating to Save the Children, we can make a difference in their lives and give them hope for a better future. Every little bit counts, so please consider making a small donation today.
User: Yes sure. I will arrange a small donation.
RAP (proposition of donation): I understand your frustration, but we can't always rely on the wealthy to make a difference. Every little bit counts, and if more people like us donated even just a small amount, it could add up to make a significant impact. Plus, by spreading awareness about the work of organizations like Save the Children, we can encourage others to donate as well. Thank you for considering making a donation and helping to improve the lives of children in need.
User exits.

Table A15: Example conversation with RAP as the policy planner. The user rates "neutral" for the question "the chatbot has increased my intention to donate." | [] |
[
"Integrated Sensing and Communication based Outdoor Multi-Target Detection, Tracking and Localization in Practical 5G Networks",
"Integrated Sensing and Communication based Outdoor Multi-Target Detection, Tracking and Localization in Practical 5G Networks",
"Integrated Sensing and Communication based Outdoor Multi-Target Detection, Tracking and Localization in Practical 5G Networks",
"Integrated Sensing and Communication based Outdoor Multi-Target Detection, Tracking and Localization in Practical 5G Networks"
] | [
"Ruiqi Liu \nState Key Laboratory of Mobile Network and Mobile Multimedia Technology\n518055ShenzhenChina\n\nWireless Research Institute\nZTE Corporation\n100029BeijingChina\n",
"Mengnan Jian [email protected] \nState Key Laboratory of Mobile Network and Mobile Multimedia Technology\n518055ShenzhenChina\n\nWireless Research Institute\nZTE Corporation\n100029BeijingChina\n",
"Dawei Chen \nWireless Research Institute\nZTE Corporation\n100029BeijingChina\n",
"Xu Lin \nWireless Research Institute\nZTE Corporation\n100029BeijingChina\n",
"Yichao Cheng \nWireless Research Institute\nZTE Corporation\n100029BeijingChina\n",
"Wei Cheng \nWireless Research Institute\nZTE Corporation\n100029BeijingChina\n",
"Shijun Chen \nWireless Research Institute\nZTE Corporation\n100029BeijingChina\n",
"Ruiqi Liu \nState Key Laboratory of Mobile Network and Mobile Multimedia Technology\n518055ShenzhenChina\n\nWireless Research Institute\nZTE Corporation\n100029BeijingChina\n",
"Mengnan Jian [email protected] \nState Key Laboratory of Mobile Network and Mobile Multimedia Technology\n518055ShenzhenChina\n\nWireless Research Institute\nZTE Corporation\n100029BeijingChina\n",
"Dawei Chen \nWireless Research Institute\nZTE Corporation\n100029BeijingChina\n",
"Xu Lin \nWireless Research Institute\nZTE Corporation\n100029BeijingChina\n",
"Yichao Cheng \nWireless Research Institute\nZTE Corporation\n100029BeijingChina\n",
"Wei Cheng \nWireless Research Institute\nZTE Corporation\n100029BeijingChina\n",
"Shijun Chen \nWireless Research Institute\nZTE Corporation\n100029BeijingChina\n"
] | [
"State Key Laboratory of Mobile Network and Mobile Multimedia Technology\n518055ShenzhenChina",
"Wireless Research Institute\nZTE Corporation\n100029BeijingChina",
"State Key Laboratory of Mobile Network and Mobile Multimedia Technology\n518055ShenzhenChina",
"Wireless Research Institute\nZTE Corporation\n100029BeijingChina",
"Wireless Research Institute\nZTE Corporation\n100029BeijingChina",
"Wireless Research Institute\nZTE Corporation\n100029BeijingChina",
"Wireless Research Institute\nZTE Corporation\n100029BeijingChina",
"Wireless Research Institute\nZTE Corporation\n100029BeijingChina",
"Wireless Research Institute\nZTE Corporation\n100029BeijingChina",
"State Key Laboratory of Mobile Network and Mobile Multimedia Technology\n518055ShenzhenChina",
"Wireless Research Institute\nZTE Corporation\n100029BeijingChina",
"State Key Laboratory of Mobile Network and Mobile Multimedia Technology\n518055ShenzhenChina",
"Wireless Research Institute\nZTE Corporation\n100029BeijingChina",
"Wireless Research Institute\nZTE Corporation\n100029BeijingChina",
"Wireless Research Institute\nZTE Corporation\n100029BeijingChina",
"Wireless Research Institute\nZTE Corporation\n100029BeijingChina",
"Wireless Research Institute\nZTE Corporation\n100029BeijingChina",
"Wireless Research Institute\nZTE Corporation\n100029BeijingChina"
] | [] | The 6th generation (6G) wireless networks will likely to support a variety of capabilities beyond communication, such as sensing and localization, through the use of communication networks empowered by advanced technologies. Integrated sensing and communication (ISAC) has been recognized as a critical technology as well as an usage scenario for 6G, as widely agreed by leading global standardization bodies. ISAC utilizes communication infrastructure and devices to provide the capability of sensing the environment with high resolution, as well as tracking and localizing moving objects nearby. Meeting both the requirements for communication and sensing simultaneously, ISAC based approaches celebrate the advantages of higher spectral and energy efficiency compared to two separate systems to serve two purposes, and potentially lower costs and easy deployment. A key step towards the standardization and commercialization of ISAC is to carry out comprehensive field trials in practical networks, such as the 5th generation (5G) network, to demonstrate its true capacities in practical scenarios. In this paper, an ISAC based outdoor multi-target detection, tracking and localization approach is proposed and validated in 5G networks. The proposed system comprises of 5G base stations (BSs) which serve nearby mobile users normally, while accomplishing the task of detecting, tracking and localizing drones, vehicles and pedestrians simultaneously. Comprehensive trial results demonstrate the relatively high accuracy of the proposed method in practical outdoor environment when tracking and localizing single targets and multiple targets. | 10.48550/arxiv.2305.13924 | [
"https://export.arxiv.org/pdf/2305.13924v2.pdf"
] | 258,841,458 | 2305.13924 | 6f9919fc6324ac3668741f3b2c70559c2aecc39b |
Integrated Sensing and Communication based Outdoor Multi-Target Detection, Tracking and Localization in Practical 5G Networks
Ruiqi Liu
State Key Laboratory of Mobile Network and Mobile Multimedia Technology
518055ShenzhenChina
Wireless Research Institute
ZTE Corporation
100029BeijingChina
Mengnan Jian [email protected]
State Key Laboratory of Mobile Network and Mobile Multimedia Technology
518055ShenzhenChina
Wireless Research Institute
ZTE Corporation
100029BeijingChina
Dawei Chen
Wireless Research Institute
ZTE Corporation
100029BeijingChina
Xu Lin
Wireless Research Institute
ZTE Corporation
100029BeijingChina
Yichao Cheng
Wireless Research Institute
ZTE Corporation
100029BeijingChina
Wei Cheng
Wireless Research Institute
ZTE Corporation
100029BeijingChina
Shijun Chen
Wireless Research Institute
ZTE Corporation
100029BeijingChina
Integrated Sensing and Communication based Outdoor Multi-Target Detection, Tracking and Localization in Practical 5G Networks
Index Terms-Integrated sensing and communication, prototype, 5G, track, detection, localization, trial
The 6th generation (6G) wireless networks will likely support a variety of capabilities beyond communication, such as sensing and localization, through the use of communication networks empowered by advanced technologies. Integrated sensing and communication (ISAC) has been recognized as a critical technology as well as a usage scenario for 6G, as widely agreed by leading global standardization bodies. ISAC utilizes communication infrastructure and devices to provide the capability of sensing the environment with high resolution, as well as tracking and localizing moving objects nearby. Meeting the requirements for communication and sensing simultaneously, ISAC based approaches offer the advantages of higher spectral and energy efficiency compared to two separate systems serving the two purposes, as well as potentially lower costs and easier deployment. A key step towards the standardization and commercialization of ISAC is to carry out comprehensive field trials in practical networks, such as the 5th generation (5G) network, to demonstrate its true capabilities in practical scenarios. In this paper, an ISAC based outdoor multi-target detection, tracking and localization approach is proposed and validated in 5G networks. The proposed system comprises 5G base stations (BSs) which serve nearby mobile users normally while accomplishing the task of detecting, tracking and localizing drones, vehicles and pedestrians simultaneously. Comprehensive trial results demonstrate the relatively high accuracy of the proposed method in a practical outdoor environment when tracking and localizing single targets and multiple targets.
I. INTRODUCTION
Integrated sensing and communication (ISAC) has been identified as a promising technology for next generation wireless communication, as well as a critical usage scenario of future communication systems [1]. Being able to provide sensing capabilities using existing communication infrastructure, ISAC has the advantages of lower costs, higher spectral efficiency and higher energy efficiency compared to counterparts that require dedicated spectrum and transceivers, such as radar systems. A cellular network supporting ISAC as a native usage scenario is illustrated in Fig. 1, where sensing and communication functions are implemented in all base stations (BSs). These dual-functional BSs support communication functions just like conventional BSs, while also being able to utilize communication signals for sensing purposes. On top of receiving data services, private users carrying user equipments (UEs), vehicles, drones and, theoretically, any moving objects in the cell can be detected, tracked and localized by the BSs. It is envisaged that results obtained from sensing can help to improve communication features such as beamforming, and that communication functions can also enhance the sensing accuracy. It is also possible for multiple BSs to jointly sense a common target, improving the resolution and accuracy of localization.

Thanks to the propagation properties of wireless signals in free space, researchers have recognized the potential of communication infrastructure for sensing and have studied both its theoretical and practical aspects [2]. Even before the concept of ISAC, sometimes also called joint sensing and communication (JSAC), matured and became trendy, there was work on utilizing communication signals for sensing and localization purposes [3]-[6], which usually relied on classical signal processing techniques. Paving the way to mature the technology, studies have been conducted on different aspects such as electromagnetic modelling, optimal waveform design, joint beamforming [7], novel signal processing techniques [8], flexible duplex and control [9], and achieving a balance between accurate sensing and robust communication [10]. These research findings build the foundation for further prototyping and experiments. On the practical side, there is some literature on tests and trials of ISAC in different scenarios. [11] introduces a gesture recognition system that leverages changes in WiFi signal strengths to sense in-air hand gestures around mobile devices, while communication links can be established simultaneously during such trials. The authors of [12] develop an ISAC system using the frame structure of the 4th generation (4G) communication system and orthogonal frequency division multiplexing (OFDM) signals. The experiments are conducted in an over-the-air (OTA) manner and reveal a graceful trade-off between communication and sensing performance. In [13], an ISAC system for autonomous driving is designed and tested. The signals used for ISAC are designed according to the 5th generation (5G) communication standards, using 64 quadrature amplitude modulation (64-QAM) and cyclic prefix OFDM (CP-OFDM). Signals are transmitted and received by tailored radio frequency (RF) feeds and receivers working in the 28 GHz band. A recent work utilizes Wi-Fi signals for respiratory monitoring [14]. The advantages of that system include using commercial off-the-shelf (COTS) devices, which can provide low-cost and simple respiratory monitoring without requiring specialized hardware.

In [15], the authors developed an indoor breath monitoring system using 5G BSs as the transmitters and receivers, and achieved satisfactory results. The tests were conducted under both line-of-sight (LOS) and non-line-of-sight (NLOS) propagation conditions, and the results were compared and analyzed. Recently, the authors of [16] developed an ISAC system that was tested in an indoor lab environment with an automated driving vehicle. Two ISAC systems were deployed, placed 2.7 meters apart, with the vehicle moving at a speed of 1 m/s. After data fusion, the positioning root mean square error (RMSE) is reported to be 0.1319 meters. The idea of ISAC is also welcomed by researchers working on other promising technologies for the 6th generation (6G) networks, such as the reconfigurable intelligent surface (RIS) [17], [18]. The authors of [19] developed a RIS based communication system that is also capable of sensing human postures indoors. The system can distinguish four pre-defined human gestures using an optimized configuration. It comprises a pair of single-antenna transceivers and uses tailored signal structures instead of standardized air interfaces.

To the best of the authors' knowledge, there is little work on deploying ISAC based approaches in 5G networks in practical outdoor environments. Thanks to the availability of 5G networks worldwide, even in remote areas [20], ISAC systems built on 5G have greater potential for robustness than those relying on other signal sources. The wide range of spectrum available to 5G is also a major advantage for sensing. Moreover, since 6G will evolve from 5G and 5G-Advanced, ISAC systems that are compatible with 5G can be readily upgraded to fit into future generations of wireless networks once more design details of 6G are released. It is also rare to see experimental results of ISAC systems that can support detection, tracking and localization of multiple targets simultaneously.

Compared to related work, the prototype developed in this paper is unique in that it uses standard 5G BSs with no hardware modifications, and all signals used in the tests are 5G signals capable of serving communication needs at the same time. The prototype is also capable of sensing multiple targets simultaneously.
The rest of this paper is organized as follows. We describe the system model in Section II and the approach to detect, track and localize targets in Section III. In Section IV, experimental results are provided to demonstrate the feasibility and accuracy of the proposed method. Finally, Section V concludes the paper.
II. SYSTEM MODEL
In this paper, a practical outdoor environment is considered. The goal is to utilize 5G BSs without any hardware modification to support sensing and communication simultaneously. The sensing capability is demonstrated by using 5G BSs to detect, localize and track targets of interest in the test area. These targets are not required to be connected to the 5G network whatsoever.

There are two BSs involved in the process, one as the transmitter (Tx) and one as the receiver (Rx). The two BSs are not required to be time synchronized. The Tx transmits 5G signals whose frame structures and physical layer parameters are defined according to 5G standards [21]. Note that, to achieve true ISAC, there is no modification to the frame structure, so the transmitted signals can still be used for communication purposes. The reflected signals are then received by the Rx. Since all tests are conducted in a fully practical environment, the propagation conditions among the Tx, Rx and the targets are mixed line-of-sight (LOS) and non-LOS (NLOS), which presents some challenges to the experiments. The frame structure of the signals used in this paper is presented in Fig. 2, where the subcarrier spacing is 30 kHz and the length of each slot is 0.5 ms. There are 10 slots in one radio frame, which lasts 5 ms. The signal used for target sensing is the remote interference management (RIM) signal, which is specified in [21] and placed on the 13th OFDM symbol of the 2nd slot. It can be seen that the periodicity of the RIM signal, which is used for sensing purposes, is 5 ms.

To make the system general and applicable to most cases, few assumptions are made about the targets. A target can be as small as a single pedestrian or as large as a vehicle, but it must be mobile. This minimal-mobility constraint is introduced because many static objects, such as buildings and trees, surround the ISAC system, and it is necessary to filter these static objects out to enable detection and tracking of mobile targets.
III. THE ISAC APPROACH FOR TARGET DETECTION, TRACKING AND LOCALIZATION
Based on the system model introduced above, an ISAC system operating in practical 5G networks is designed in this paper. The approach comprises several steps, which are elaborated in detail below. All the steps take place in the baseband, since it is assumed that decoding and demodulation have already been performed.
A. Data Pre-processing
During the data pre-processing stage, it is necessary to perform delay and phase compensation as well as differential processing to obtain the time-domain signal segment for parameter estimation. Pre-processing can effectively improve the quality of the data and the estimation accuracy in later stages. Assume the input data consist of T packets, each indexed by t = 0, . . . , T − 1 and of size M × N, which is the frequency-domain data to be processed. When the bandwidth is 100 MHz, there are 273 resource blocks (RBs) and each RB contains 12 subcarriers, so the total number of frequency points is M = 3276. Moreover, N is the number of antennas. In the tests, as the BSs are not synchronized and the target can be mobile, there are delays and phase deviations among different data packets. To solve this problem, delay and phase compensation as well as differential processing need to be performed.

Firstly, the data of the first packet are processed and stored. The first packet contains both the environmental reflection path data and the target reflection path data. The phases of the other data packets can be aligned with the first packet, and the direct-current component of the environment can be eliminated after differencing. Secondly, the delay and phase deviations between different data packets are calibrated by aligning the delay and phase; these deviations arise from the potential mobility of the target being tracked. Phase calibration is achieved by using the phase information extracted from the first packet to align the phase of each data packet. At the same time, differential processing is used to calculate the delay between adjacent data packets, which is then calibrated. The details are discussed below.
When receiving the data packet indexed by t, its correlation with the stored first packet is calculated as

R(0:M−1, N, t) = Y(0:M−1, N, 0) Y^H(0:M−1, N, t),  (1)

and the shift conjugate multiplication on the correlation matrix is computed as

A(0:M−1, N, t) = R(0:M−1−m_0, N, 0) R^H(m_0:M−1, N, t),  (2)

where m_0 represents the shift parameter. The delay difference is calculated from the phase difference as

τ(N, t) = −angle(sum(A(0:M−1−m_0, N, t))) / m_0.  (3)

To calculate the initial phase difference, delay difference compensation is performed on the correlation matrix as

B(0:M−1, N, t) = R(0:M−1, N, t) × D^H,  (4)

where D = exp(j · kron((−M/2 : M/2−1), τ(N, t))), and the initial phase is calculated as

P_ini(N, t) = angle(sum(B(0:M−1, N, t))).  (5)

The data after delay and phase compensation are

C(0:M−1, N, t) = Y(0:M−1, N, t) × D^H × P^H,  (6)

with P = exp(j · kron(ones(M, 1), P_ini(N, t))). Based on the above analysis, differential processing is performed on the delay-phase-aligned data as

C_diff(0:M−1, N, t) = C(0:M−1, N, t) − C(0:M−1, N, 0).  (7)

To obtain the time-domain differential channel, an N_S-point inverse fast Fourier transform (IFFT) is applied to the differential data as

h_diff(0:N_S−1, N, t) = IFFT(C_diff(0:M−1, N, t), N_S).  (8)

According to the range that needs to be detected, the time-domain channel data are truncated, and T_S points are selected for output.
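To make the pre-processing chain of Eqs. (1)-(8) concrete, the NumPy sketch below walks through the same operations. It is an illustrative reading of the equations rather than the authors' code: the values of m_0, N_S and T_S are placeholders, and packet t's own data Y(:, :, t) is compensated in Eqs. (4) and (6), as written above.

import numpy as np

def preprocess(Y, m0=8, Ns=4096, Ts=256):
    """Delay/phase compensation and differential processing per Eqs. (1)-(8).
    Y has shape (M, N, T): frequency points x antennas x packets."""
    M, N, T = Y.shape
    h = np.zeros((Ts, N, T), dtype=complex)
    k = np.arange(-M // 2, M // 2)            # frequency index used inside D
    C0 = None
    for t in range(T):
        # Eq. (1): correlate packet t with the stored first packet
        R = Y[:, :, 0] * np.conj(Y[:, :, t])
        # Eq. (2): shift conjugate multiplication along the frequency axis
        A = R[: M - m0, :] * np.conj(R[m0:, :])
        # Eq. (3): delay difference from the accumulated phase slope
        tau = -np.angle(A.sum(axis=0)) / m0   # one value per antenna
        # Eq. (4): delay compensation of the correlation matrix
        D = np.exp(1j * np.outer(k, tau))
        B = R * np.conj(D)
        # Eq. (5): residual initial phase per antenna
        P_ini = np.angle(B.sum(axis=0))
        # Eq. (6): compensate packet t with the delay and phase estimates
        C = Y[:, :, t] * np.conj(D) * np.exp(-1j * P_ini)[None, :]
        if C0 is None:
            C0 = C.copy()                     # static background reference
        # Eq. (7): subtract the background; Eq. (8): Ns-point IFFT
        h_full = np.fft.ifft(C - C0, n=Ns, axis=0)
        h[:, :, t] = h_full[:Ts, :]           # keep the detection range only
    return h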
B. Doppler Estimation
Doppler estimation is used to calculate the frequency shift caused by the Doppler effect during signal transmission. Define the steering vector as S(N, AoA_id), where AoA denotes the angle of arrival; its detailed form can be found in [22]. Here AoA_id represents the steering vector index, whose dimension is AoA_H × AoA_V, where AoA_H and AoA_V are the numbers of searched horizontal and vertical angles, respectively, determined by the search range and step size.

First, within the time range corresponding to the T packets in total, each packet t outputs the processing result h_diff(T_S, N, t) in the order of signal arrival. Since all packets need to be processed consistently, the output results of all packets are saved and recorded as h_diff(T_S, N, 1:T).

As a next step, a T-point fast Fourier transform (FFT) is performed on the collected data to obtain the Doppler shift as

Doppler(T_S, N, D_id) = FFT(h_diff(0:T_S, N, 1:T)),  (9)

where D_id represents the Doppler frequency-domain index.

For each point in the delay-Doppler domain, an AoA spectral peak search is performed to obtain the maximum spectral peak and the corresponding horizontal and vertical angle indices. First, the data in the delay-Doppler domain are correlated with the steering vector to obtain the correlation power

Cor(t_S, AoA_id, D_id) = |Doppler(t_S, N, D_id) × S(N, AoA_id)|².  (10)

Then, over all delay-Doppler-domain points, the maximum peak of the correlation power is searched, and its value and the corresponding indices are recorded, yielding v_peak, AoA_id and D_id. For the region near zero Doppler, the AoA peak search can be omitted to reduce computational complexity.
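A compact NumPy rendering of Eqs. (9)-(10) is given below. It assumes a pre-computed steering matrix for the flattened AoA_H × AoA_V search grid, whose exact form follows [22] and is not reproduced here; all shapes and names are illustrative.

import numpy as np

def doppler_aoa(h_diff, steering):
    """Doppler estimation and AoA peak search per Eqs. (9)-(10).
    h_diff: (Ts, N, T) time-domain differential channels.
    steering: (N, n_aoa) steering vectors on the searched angle grid."""
    # Eq. (9): T-point FFT across packets gives the delay-Doppler domain
    doppler = np.fft.fft(h_diff, axis=2)
    # Eq. (10): correlation power for every delay-Doppler-angle cell;
    # result shape is (Ts, n_doppler, n_aoa)
    cor = np.abs(np.einsum('snd,na->sda', doppler, steering.conj())) ** 2
    # Cells near zero Doppler could be skipped here to save computation,
    # as noted in the text.
    ts, d_id, aoa_id = np.unravel_index(np.argmax(cor), cor.shape)
    return cor, (ts, d_id, aoa_id)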
C. Target Selection
In the delay-Doppler domain, target selection searches for target peaks. The following steps are performed:

In step 1, delay partitioning is executed. In the delay dimension, every m samples form a region, giving a total of N_ts/m regions, where N_ts is the length of T_S. Within each delay partition, all peaks are searched.

In step 2, the peaks are sorted by energy from largest to smallest and the top k targets are selected. The same operation is performed on all delay partitions, yielding kN_ts/m candidate targets.

In step 3, these kN_ts/m candidates are sorted by energy in descending order, and the top U targets with energy greater than the pre-defined threshold are selected. These targets are used for target tracking.
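The three selection steps can be sketched as follows, assuming the per-delay-bin peak powers from the AoA search have been collected into a one-dimensional array; the values of m, k, U and the energy threshold are illustrative, not the ones used in the trials.

import numpy as np

def select_targets(peak_power, m=8, k=2, threshold=1e-3, U=10):
    """Target selection in the delay-Doppler domain (steps 1-3)."""
    n_ts = len(peak_power)
    candidates = []
    # Steps 1-2: partition the delay axis into regions of m bins and keep
    # the k strongest peaks of each region
    for start in range(0, n_ts, m):
        region = peak_power[start:start + m]
        order = np.argsort(region)[::-1][:k]
        candidates += [(start + i, region[i]) for i in order]
    # Step 3: sort all candidates by energy and keep the top U above threshold
    candidates.sort(key=lambda c: c[1], reverse=True)
    return [(idx, p) for idx, p in candidates if p > threshold][:U]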
D. Target Tracking
At time t, for each of the U targets, the number of occurrences is initialized to 1.

At time t + 1, for each target present at time t, the detection with the minimum Euclidean distance is found; if that distance is below a threshold, the two are associated and the number of occurrences of the target is increased by one. Detections that appear at time t + 1 but cannot be associated with any target at time t are considered new targets.

However, to decrease the false alarm rate, new targets are not directly considered formal targets at their first appearance. A new target is considered a formal target only if it appears for p consecutive times, where p is a threshold. At the same time, for each such target, the average velocity over the previous p − 1 points must be larger than the minimum velocity v_min; otherwise, the target is rejected and removed. Slow targets are therefore excluded from the target list in the process.
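A minimal sketch of this association-and-confirmation logic is shown below. The track bookkeeping, gating distance, confirmation count p, minimum speed v_min and snapshot interval dt are all illustrative choices; the paper does not specify its exact implementation.

import numpy as np

def associate(tracks, detections, dist_th=2.0, p=3, v_min=0.5, dt=0.1):
    """One step of nearest-neighbour association with p-consecutive
    confirmation and a minimum-speed gate, as described in Sec. III-D.
    tracks: list of dicts {'pos', 'count', 'history'}; detections: xyz arrays."""
    new_tracks, used = [], set()
    for tr in tracks:
        # find the closest unused detection within the gating threshold
        d = [np.linalg.norm(det - tr['pos']) for det in detections]
        if d and min(d) < dist_th:
            j = int(np.argmin(d))
            if j not in used:
                used.add(j)
                tr['pos'] = detections[j]
                tr['count'] += 1
                tr['history'].append(detections[j])
                new_tracks.append(tr)
    # unmatched detections start new, tentative tracks
    for j, det in enumerate(detections):
        if j not in used:
            new_tracks.append({'pos': det, 'count': 1, 'history': [det]})
    # a track becomes formal after p consecutive hits AND an average speed
    # over the previous p-1 steps above v_min; slow targets are rejected
    confirmed = []
    for tr in new_tracks:
        if tr['count'] >= p:
            steps = np.diff(np.array(tr['history'][-p:]), axis=0)
            if np.mean(np.linalg.norm(steps, axis=1)) / dt > v_min:
                confirmed.append(tr)
    return new_tracks, confirmed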
E. Target Localization
Based on the above processing, location-related information can be obtained, from which the coordinates of the target are determined. For each target, the coordinates are calculated from the corresponding appearance time, horizontal angle, and vertical angle information.

The receiving BS is assumed to be located at the origin. For a particular target, denote (x, y, z) as its coordinates, (a, b, c) as the coordinates of the transmitting BS, r′ as the distance between the target and the receiving antenna, θ as the elevation angle, and ψ as the azimuth angle. The sum of the distance from the transmitting BS to the target and the distance from the target to the receiving BS is

r = r′ + √((x − a)² + (y − b)² + (z − c)²),  (11)

where

x = r′ sin θ cos ψ,  (12)
y = r′ sin θ sin ψ,  (13)
z = r′ cos θ.  (14)

Thus, r′ can be rewritten as

r′ = (a² + b² + c² − r²) / (2a sin θ cos ψ + 2b sin θ sin ψ + 2c cos θ − 2r),  (15)

and the Cartesian coordinates of the target can be obtained.
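Eqs. (11)-(15) translate directly into a short localization routine, sketched below with the Rx at the origin as in the text; the Tx coordinates are placeholders and angles are in radians.

import numpy as np

def localize(r, theta, psi, tx=(100.0, 50.0, 60.0)):
    """Bistatic localization per Eqs. (11)-(15). r is the measured
    Tx-target-Rx path length, theta the elevation and psi the azimuth
    of the target seen from the Rx; tx = (a, b, c) is the Tx position."""
    a, b, c = tx
    # Eq. (15): solve the bistatic ellipse for the Rx-target range r'
    num = a**2 + b**2 + c**2 - r**2
    den = (2*a*np.sin(theta)*np.cos(psi) + 2*b*np.sin(theta)*np.sin(psi)
           + 2*c*np.cos(theta) - 2*r)
    r_p = num / den
    # Eqs. (12)-(14): spherical to Cartesian conversion
    x = r_p * np.sin(theta) * np.cos(psi)
    y = r_p * np.sin(theta) * np.sin(psi)
    z = r_p * np.cos(theta)
    return np.array([x, y, z])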
IV. EXPERIMENTAL RESULTS
In this Section, the results obtained from a group of field trials are presented to demonstrate the feasibility and accuracy of the proposed ISAC based multi-target detection and tracking approach.
The BSs used in all tests are 5G BSs without any hardware modification and are configured in a normal working mode. UEs passing through the test area can connect to the 5G network normally during all tests.

In total, four trials are conducted in this paper; they are elaborated below in sequential order.
While striving to test the performance of communication on top of sensing, it is very challenging to categorize and test communication performance in practical networks since the BSs used in the trials are up-and-running 5G BSs and it's
A. General Test Setup
To fully test the proposed approach and system, comprehensive trials are conducted in a practical environment located in the ZTE Xili industrial park, Shenzhen, China. As depicted in Fig. 3, there are 2 BSs involved in the tests, serving as the Tx and Rx, respectively. Both BSs are on top of buildings, overlooking the test area in between. The detailed configuration of the BSs is given in Table I and follows standard 3GPP specifications. As the trials are conducted in a practical environment, there are buildings, trees, grassland, parked vehicles and pedestrians in the test area, imposing shadowing and fading effects. During all trials, there is no restriction on the traffic in the test area, which means there can be moving objects such as vehicles and pedestrians. The test results are demonstrated by detecting and tracking intended targets, such as drones controlled by the authors, while other objects nearby are also visible to the ISAC system and can be detected and tracked as well.
B. Detection and Localization of A Stepping Pedestrian
As a first step, a simple trial is carried out to detect and localize a pedestrian stepping in place at a specific location. The ISAC system detects the pedestrian and estimates the position. To evaluate the estimation accuracy, the pedestrian carries a global positioning system (GPS) tracker to obtain a benchmark position. The tests are conducted at 3 different locations in the test area, and the results are given in Table II. The distance from the three locations to the BSs is around 100 to 200 meters. As can be seen in Table II, the average error in localizing the stepping pedestrian is 0.99 m. Considering the large area where the pedestrian can appear, as well as the practical outdoor environment where other moving objects exist, this accuracy is satisfactory.
C. Tracking of A Walking Pedestrian
To test the performance of tracking moving objects, a walking pedestrian is tracked using the ISAC system. As a human is a relatively small object and causes only small changes in channel responses, tracking a walking pedestrian is considered challenging compared to tracking larger objects such as vehicles. As shown in Fig. 4, the test scenario is set at a small parking lot alongside the street, where one car is parked and remains static. The pedestrian walks at a regular pace across the test field, from one side to the other. To demonstrate the real accuracy of the ISAC system, the position estimates are based on data acquired from every single shot, without considering history positions or applying any filtering approach to smooth the estimated trajectory.

The estimation errors, calculated as the distance from the GPS coordinates to the corresponding estimate at the same time slot, are given by the values in the tags in Fig. 4. During the test, comprising 20 snapshots, the majority of estimates show quite satisfactory accuracy with an estimation error of less than 0.6 m, as depicted in Fig. 5. Notably, there are two estimates with significantly larger errors, specifically 2.19 m and 3.26 m, potentially due to random interference present in the radio environment at that time. The average localization error throughout the test is 1.03 m or 0.58 m, depending on whether the two outliers are included or not. It is also noted that, among all estimates, the minimum error is 0.38 m. The tracking accuracy can be further improved by taking history data into consideration, for example by applying a Kalman filter, to smooth the trajectory and minimize the impact of potential outliers.
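As one example of such smoothing, a constant-velocity Kalman filter over the raw per-snapshot fixes could look as follows. This is a generic textbook filter, not part of the tested system; dt, q and r are illustrative settings.

import numpy as np

def kalman_smooth(positions, dt=0.5, q=1.0, r=1.0):
    """Constant-velocity Kalman filtering of (T, 2) x/y position fixes."""
    F = np.block([[np.eye(2), dt * np.eye(2)],
                  [np.zeros((2, 2)), np.eye(2)]])     # state transition
    H = np.hstack([np.eye(2), np.zeros((2, 2))])      # observe position only
    Q, R = q * np.eye(4), r * np.eye(2)
    x = np.hstack([positions[0], np.zeros(2)])        # state: [x, y, vx, vy]
    P = np.eye(4)
    out = [np.asarray(positions[0], dtype=float)]
    for z in positions[1:]:
        x, P = F @ x, F @ P @ F.T + Q                 # predict
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
        x = x + K @ (z - H @ x)                       # update with the new fix
        P = (np.eye(4) - K @ H) @ P
        out.append(x[:2])
    return np.array(out)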
D. Simultaneous Tracking of UAV, Car and Pedestrian
Theoretically, an ISAC system built on 5G networks can detect and track multiple targets simultaneously, but this capability has rarely been demonstrated in previous work. In this section, multi-target detection and tracking is performed in the test area with three objects of different reflective characteristics, altitudes and mobility: a pedestrian, a car and an unmanned aerial vehicle (UAV). The test environment as well as the results are shown in Fig. 6. The UAV cruises from one end of the test area to the other at a speed of approximately 3 m/s while maintaining a height of 30 meters, as depicted by the light blue dots. The car crosses the test area, also at a constant speed of 3 m/s, as represented by the pink dots. The pedestrian walks along the edge of the green area and is identified by the yellow dots. As the test is conducted in an outdoor open area, other people walking around can enter the test area accidentally. These unintended appearances are also detected and tracked by the ISAC system, as depicted by the dark blue, red and green dots.

The positioning errors of the three intended objects all show a distribution similar to the results obtained when tracking a single target, as depicted in Fig. 5. The minimum error in three-dimensional space is roughly 0.3 m and the largest error is around 3 meters. The results demonstrate that the ISAC system designed in this paper can track and localize multiple targets simultaneously without loss of localization accuracy compared to the case of tracking a single target. How to eliminate outliers remains one of the keys to decreasing the average positioning error and improving the accuracy.
E. Long Range UAV Tracking
On top of testing the accuracy of the proposed ISAC system, its effective area is also of great interest and thus needs to be verified. The effective area can be complicated to describe, as it depends on the geometrical settings of the surrounding environment. To simplify, the largest detection and tracking distance is proposed as a metric to represent the capability of the ISAC system in detecting and tracking faraway targets. Note that the transmitting BS used in the test does not apply any extra power boosting.
To test the largest detection and tracking distance of the proposed ISAC system, another trial is conducted in the same industrial park, as shown in Fig. 7. The cruising path of the drone is set according to local regulations for flying UAVs, taking off from the industrial park and heading west. To better track the drone in this direction, two different BSs are used as Tx and Rx in this particular test, which are still 5G BSs without any hardware modifications. The coordinates of the Tx are (113.927001 E, 22.581170 N, 61.20 m), and the Rx is … to serve as a sensing network across the whole deployment area.
V. CONCLUSION
In this paper, the world's first ISAC system empowered by a practical 5G network in an outdoor environment is designed and tested. The proposed system uses 5G BSs that serve nearby mobile users normally to enable high-accuracy ISAC, achieving a positioning accuracy of approximately 1 meter in a test area hundreds of meters long and wide. Moreover, the system is capable of detecting, tracking and localizing multiple targets simultaneously, as demonstrated by tracking a pedestrian, a car and a UAV, without loss of localization accuracy compared to the single-target case. It is also verified that, using 5G BSs without any extra power boosting, the ISAC system can achieve a tracking range of more than 1.4 km. The experimental results obtained in the field trials confirm the feasibility of supporting ISAC using 5G networks, and pave the way for future research and engineering of practical ISAC systems.

Shijun Chen graduated from Harbin Engineering University in 1999 with a master's degree. He is now an algorithm engineer with the wireless research institute, ZTE Corporation. His current research interests include wireless positioning, wireless perception, channel simulation and industrial software. He has won 10 provincial and municipal science and technology awards. He has applied for more than 100 patents and published over 30 papers.
Fig. 1: Illustration of a cellular network supporting native ISAC capabilities.

Fig. 2: Frame structure of the test signal.

Fig. 3: The test area.

Fig. 4: Tracking of a walking pedestrian, where the GPS coordinates are depicted as yellow dots, estimated positions are represented by red dots, and errors are marked in tags.

Fig. 5: Cumulative distribution function of the errors in tracking the location of a walking pedestrian.

Fig. 6: Test environment and results of simultaneous multi-target detection and tracking.

Fig. 7: The long range UAV tracking test.

Ruiqi (Richie) Liu (Member, IEEE) received the B.S. and M.S. degrees (with honors) in electronic engineering from the Department of Electronic Engineering, Tsinghua University, in 2016 and 2019, respectively. He is now a master researcher in the wireless research institute of ZTE Corporation, responsible for long-term research as well as standardization. His research interests include reconfigurable intelligent surfaces, integrated sensing and communication, wireless positioning and quantum communication. He is the author or co-author of several books and book chapters. During his 3-year service at 3GPP from 2019 to 2022, he authored and submitted more than 500 technical documents with over 100 of them approved, and he served as the co-rapporteur of the work item (WI) on NR RRM enhancement and the feature lead of multiple features. He currently serves as the Vice Chair of ISG RIS in ETSI. He actively participates in organizing committees, technical sessions, workshops, symposia and industry panels in IEEE conferences as chair, organizer, moderator, panelist or invited speaker. He served as the guest editor for Digital Signal Processing and the lead guest editor for the special issue on 6G in IEEE OJCOMS. He serves as the Editor of the ITU Journal on Future and Evolving Technologies (ITU J-FET) and the Associate Editor of IET Quantum Communication. He is the Standardization Officer for the IEEE ComSoc ETI on reconfigurable intelligent surfaces (ETI-RIS) and the Standards Liaison Officer for the IEEE ComSoc Signal Processing and Computing for Communications Technical Committee (SPCC-TC). He received the Outstanding Service Award from the SPCC-TC in 2022.

Mengnan Jian received the B.E. degree in information engineering from the Beijing Institute of Technology, Beijing, China, in 2016, and the M.S. degree from Tsinghua University, Beijing, China, in 2019. She is currently an engineer at ZTE Corporation. Her research interests include reconfigurable intelligent surfaces, holographic MIMO and orbital angular momentum.

Dawei Chen received the B.S. degree from Northeast Agricultural University in 2012 and the M.S. degree from Harbin Institute of Technology in 2015. He is now a senior algorithm engineer at ZTE, with a research focus on integrated sensing and communication and high-precision indoor positioning. He has applied for more than 30 patents and published more than 20 papers.

Xu Lin received the B.S. degree from Harbin Institute of Technology (HIT), Harbin, China, in 2014, the M.S. degree from the China Academy of Telecommunications Technology, Beijing, China, in 2017, and the Ph.D. degree from HIT, Harbin, China, in 2022. From 2019 to 2020, he was a Research Trainee with the Department of Electrical and Computer Engineering, McGill University, Montreal, Canada. He is currently an Algorithm Engineer with ZTE Corporation, and he is also a Postdoctoral Research Fellow under the joint supervision of ZTE Corporation and HIT. His research interests include communication signal processing, physical waveform design, transform domain communication systems, and integrated sensing and communication.

Yichao Cheng received the B.S. degree from the University of Electronic Science and Technology of China in 2005. He is currently an engineer at ZTE Corporation. His research interests include 5G-Advanced software architecture design and 5G-Advanced integrated sensing and communication.

Wei Cheng received his Master's degree from Central South University, China, in 2011. He is now an algorithm engineer with the wireless research institute, ZTE Corporation. His research interests include wireless communication and integrated sensing and communication.
TABLE I: Configurations of the transmitting and receiving BS

Parameter               Value
center frequency        4850 MHz
bandwidth               100 MHz
subcarrier spacing      30 kHz
number of Tx antennas   64
number of Rx antennas   64
waveform                OFDM
It is complicated to evaluate the communication performance, since it depends on the users nearby. Theoretically, the communication functions should not be interfered with, since the sensing signal is embedded into the frame structure of 5G signals according to 3rd Generation Partnership Project (3GPP) standards.
TABLE II: Localization results of a stepping pedestrian

Test  Estimated coordinates            GPS coordinates                  Error (m)
1     (113.9302430 E, 22.58280501 N)   (113.9302483 E, 22.58280606 N)   0.55
2     (113.9300548 E, 22.58280501 N)   (113.9300669 E, 22.58280335 N)   1.26
3     (113.9301468 E, 22.58280501 N)   (113.9301580 E, 22.58280619 N)   1.15
| [] |
[
"An Autoencoder-based Snow Drought Index",
"An Autoencoder-based Snow Drought Index"
] | [
"Sinan Rasiya Koya \nDepartment of Civil and Environmental Engineering\nUniversity of Nebraska-Lincoln\n\n",
"Kanak Kanti Kar \nDepartment of Civil and Environmental Engineering\nUniversity of Nebraska-Lincoln\n\n",
"Shivendra Srivastava \nDepartment of Civil and Environmental Engineering\nUniversity of Nebraska-Lincoln\n\n",
"Tsegaye Tadesse \nNational Drought Mitigation Center\nUniversity of Nebraska-Lincoln\n\n",
"Mark Svoboda \nNational Drought Mitigation Center\nUniversity of Nebraska-Lincoln\n\n",
"TirthankarRoy [email protected] ",
"Tirthankar Roy \nDepartment of Civil and Environmental Engineering\nUniversity of Nebraska-Lincoln\n\n"
] | [
"Department of Civil and Environmental Engineering\nUniversity of Nebraska-Lincoln\n",
"Department of Civil and Environmental Engineering\nUniversity of Nebraska-Lincoln\n",
"Department of Civil and Environmental Engineering\nUniversity of Nebraska-Lincoln\n",
"National Drought Mitigation Center\nUniversity of Nebraska-Lincoln\n",
"National Drought Mitigation Center\nUniversity of Nebraska-Lincoln\n",
"Department of Civil and Environmental Engineering\nUniversity of Nebraska-Lincoln\n"
] | [] | In several regions across the globe, snow has a significant impact on hydrology. The amounts of water that infiltrate the ground and flow as runoff are driven by the melting of snow. Therefore, it is crucial to study the magnitude and effect of snowmelt. Snow droughts, resulting from reduced snow storage, can drastically impact the water supplies in basins where snow predominates, such as in the western United States. Hence, it is important to detect the time and severity of snow droughts efficiently. We propose Snow Drought Response Index or SnoDRI, a novel indicator that could be used to identify and quantify snow drought occurrences. Our index is calculated using cutting-edge ML algorithms from various snow-related variables. The self-supervised learning of an autoencoder is combined with mutual information in the model. In this study, we use random forests for feature extraction for SnoDRI and assess the importance of each variable. We use reanalysis data (NLDAS-2) from 1981 to 2021 for the Pacific United States to study the efficacy of the new snow drought index. We evaluate the index by confirming the coincidence of its interpretation and the actual snow drought incidents. | 10.48550/arxiv.2305.13646 | [
"https://export.arxiv.org/pdf/2305.13646v1.pdf"
] | 258,841,507 | 2305.13646 | 4b6908f05e055a04ad596a1da76e42bcf52af3ca |
An Autoencoder-based Snow Drought Index
Sinan Rasiya Koya
Department of Civil and Environmental Engineering
University of Nebraska-Lincoln
Kanak Kanti Kar
Department of Civil and Environmental Engineering
University of Nebraska-Lincoln
Shivendra Srivastava
Department of Civil and Environmental Engineering
University of Nebraska-Lincoln
Tsegaye Tadesse
National Drought Mitigation Center
University of Nebraska-Lincoln
Mark Svoboda
National Drought Mitigation Center
University of Nebraska-Lincoln
TirthankarRoy [email protected]
Tirthankar Roy
Department of Civil and Environmental Engineering
University of Nebraska-Lincoln
An Autoencoder-based Snow Drought Index
In several regions across the globe, snow has a significant impact on hydrology. The amounts of water that infiltrate the ground and flow as runoff are driven by the melting of snow. Therefore, it is crucial to study the magnitude and effect of snowmelt. Snow droughts, resulting from reduced snow storage, can drastically impact the water supplies in basins where snow predominates, such as in the western United States. Hence, it is important to detect the time and severity of snow droughts efficiently. We propose Snow Drought Response Index or SnoDRI, a novel indicator that could be used to identify and quantify snow drought occurrences. Our index is calculated using cutting-edge ML algorithms from various snow-related variables. The self-supervised learning of an autoencoder is combined with mutual information in the model. In this study, we use random forests for feature extraction for SnoDRI and assess the importance of each variable. We use reanalysis data (NLDAS-2) from 1981 to 2021 for the Pacific United States to study the efficacy of the new snow drought index. We evaluate the index by confirming the coincidence of its interpretation and the actual snow drought incidents.
Introduction
In many regions worldwide, snow has a vital contribution to drought occurrence 1 , evident from the role of snow in regional and global water resources and climate [2][3][4] . Recently, it has led to a broad discussion on the association between droughts and snow, along with the emergence of several studies focusing on "snow drought," indicative of lower-than-normal snow conditions 1,[5][6][7][8][9] . However, a consistent way of characterizing snow droughts is missing in these past studies, resulting in the absence of a solid framework to detect snow droughts.
Different authors defined snow droughts differently, making the analysis of these droughts conditioned upon and potentially sensitive to the definitions. For example, a recent study assessing the global snow drought hotspots and characteristics considered a snow drought event as a deficit of snow water equivalent (SWE) 1 . Another study argued that defining snow droughts just in terms of SWE might not be sufficient, and it referred to snow drought as a combination of general droughts and shortages in snow storage, reflecting both the lack of winter precipitation and SWE 5 . Subsequently, Hatchett & McEvoy (2018) defined different types of snow droughts based on the origination, persistence, and termination of below-normal snow accumulations 9 . Later, several studies expressed snow droughts in terms of threshold percentiles, which is essentially subjective 8,10 . Some of these studies used average SWE to identify snow droughts, whereas others used maximum SWE 8,10 . Although we need an index to study and predict snow droughts, the lack of coherence in the characterization of snow droughts potentially questions the reliability of a snow drought index based on strict definitions. Therefore, we need a framework to calculate a snow drought index independent of such definitions, which, at the same time, can also capture the signals of a snow drought.
To date, only a limited number of studies have been conducted on snow drought indices. The recently developed Standardized Snow Water Equivalent Index (SWEI) is obtained through the inverse standard normal distribution of probabilities associated with SWE integrated over three months 1 . Keyantash & Dracup (2004) developed the aggregated drought index (ADI) based on rainfall, evapotranspiration, streamflow, reservoir storage, soil moisture, and snow water content variables 11 . Here, a principal component analysis (PCA) was employed to reduce the dimensionality and explain the key variabilities represented by the selected variables. Staudinger et al. (2014) developed an extension to the Standardized Precipitation Index (SPI) named the Standardized Snow Melt and Rain Index (SMRI), where they used the sum of rainfall and snowmelt instead of precipitation 12 . Qiu (2013) modified the standard Palmer Drought Severity Index (PDSI; Palmer, 1965) by including the degree day factor (DDF), an empirical threshold-temperature-based snowmelt model, to account for snow processes 13,14 . This modification improved drought monitoring capabilities in several snow-dominated regions 13,15 . However, these indices often depend upon in-situ observations, which might not be readily available in many regions, and the problem is exacerbated in ungauged or sparsely gauged regions. This calls for an index that can leverage remote sensing datasets and bypass the need for extensive in-situ observation networks.
Merging and extracting necessary information on snow droughts from the wide range of variables present in remote sensing datasets can be challenging, since not all variables are equally related to the formation of snow droughts. However, we need to identify important variables so that they can be merged to form one single index. Machine learning (ML)-based feature selection algorithms are promising in this regard since they can filter out variables based on their importance [16][17][18][19] . Thus, we can use ML techniques to infer the influence of hydroclimatic variables on snow droughts. Apart from this, information theory-based methods can manifest the relative influence of variables and their causal connections [20][21][22] . Mutual Information (MI), a measure of how much information about one random variable is contained in another random variable, has been widely applied in feature selection problems 23,24 .
In this study, we are introducing a new snow drought index, Snow Drought Response Index (SnoDRI), using a combination of ML techniques and MI. SnoDRI considers several snow factors while assessing snow drought, including SWE and snow fraction. Importantly, to estimate SnoDRI, we do not require any ground measurements. Our results show that SnoDRI could successfully detect the signals of a historical snow drought event. Moreover, our framework could provide insights into the crucial features impacting the occurrence of snow drought.
Methods
Study Area
The Western United States is characterized as a snow drought hot spot, where snow droughts become widespread, intensified, and prolonged 1 . Snow plays a significant role in the hydrology of the Western United States, and recent events have shown that water resources and management in this region are hugely influenced by snow droughts 5 . Therefore, this study focused on three states (the Pacific Coast States) in the western USA: Washington, California, and Oregon (Figure 1). The highest elevation of this region ranges from 1,547 m in California to 520 m in Washington State. As a result, elevation and climate variables associated with orographic precipitation and the variability in average annual maximum snow water accumulation are considerable. To validate our results, we selected a mountainous area situated in the upper Tuolumne River basin of the Sierra Nevada 25 . About 60% of the water in Southern California is received from the Sierra Nevada snowpack 26 . As such, the snowpack plays a vital role in this region, which was confirmed by the impacts of below-normal snow conditions on water resources, ecosystems and recreation 8 . Due to the mountainous terrain and varied elevation, individual climatological extreme events can produce drastically different responses in the magnitude and spatial variability of the snowpack in this region 27 . Recently, 2014 and 2015 showed a lack of snow accumulation and winter precipitation, indicating snow drought events in the upper Tuolumne basin 5,8,9 . The historical precipitation trends in this region have shifted from snow to rain, resulting in more frequent droughts 10,28 . This shift has impacted headwater hydrology and downstream reservoir management of basins in the Sierra Nevada 29 .
Data used for SnoDRI
NLDAS-2 Variables
The North American Land Data Assimilation System (NLDAS) is a multi-institutional partnership project intended to develop land-surface model datasets from observations and reanalysis, with quality control that is coherent across space and time 30 . NLDAS data comprise hourly data in gridded format with a spatial resolution of 0.125° x 0.125°. An improved version, NLDAS-2, was later developed by determining and rectifying existing errors in the forcing data and models 31 . NLDAS-2 changed data sources and their inherent biases, upgraded the model along with recalibrated parameters, and increased the period of forcing data and simulations 31 . NLDAS-2 provides a total of eleven variables, given in Table 1. All these variables are spatially aggregated for the basins of interest and converted to monthly time series.
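As a hedged illustration of this aggregation step, the sketch below reduces one gridded NLDAS-2 variable to a basin-mean monthly series with xarray. The dimension names and the pre-built boolean basin mask are assumptions; deriving the mask from a basin shapefile (e.g., with regionmask) is omitted for brevity.

```python
import xarray as xr

def basin_monthly_mean(ds: xr.Dataset, var: str, mask: xr.DataArray) -> xr.DataArray:
    """Aggregate a gridded variable to a basin-mean monthly series.

    ds   : Dataset with dims (time, lat, lon), e.g. hourly NLDAS-2 forcings
    var  : variable name, e.g. "TMP"
    mask : boolean (lat, lon) DataArray, True inside the basin
    """
    inside = ds[var].where(mask)              # keep only grid cells in the basin
    series = inside.mean(dim=("lat", "lon"))  # spatial (basin) average
    return series.resample(time="MS").mean()  # hourly -> monthly means
```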
Standardized Precipitation Index (SPI)
The Standardized Precipitation Index (SPI) is an index widely used for quantifying precipitation anomalies (McKee et al., 1993). The SPI is obtained by mapping the actual probability distribution of precipitation to a normal distribution. A zero SPI indicates normal conditions. Positive values of SPI represent wet conditions, whereas negative values represent dry conditions. The larger the negative value of SPI, the higher the severity of drought conditions. We calculated SPIs at three timescales (3, 4, and 6 months) and provided them as inputs for the SnoDRI. The SPIs at 3-, 4-, and 6-month timescales were chosen because the snow processes and the impact of reduced winter precipitation generally occur at these timescales; these SPIs can reflect the reduced snowpack and the discharge from snowmelt. For the same reason, SPIs at longer timescales (such as SPI-12 and SPI-60) are potentially unimportant for snow droughts. Therefore, we also fed the model with SPI-12 and SPI-60 as a sanity check to see whether the model can discard irrelevant information, which is confirmed by our results (see section 4.2). The SPI calculation only requires precipitation; it smoothens the time series data and maps the actual distribution of precipitation to a normal distribution. We used the basin-aggregated NLDAS-2 precipitation for computing SPIs.
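A minimal sketch of the SPI computation is shown below. For brevity it fits a single gamma distribution to the accumulated totals, whereas operational SPI implementations usually fit each calendar month separately and treat the zero-precipitation probability mass explicitly.

```python
import numpy as np
from scipy import stats

def spi(precip, scale=3):
    """Simple SPI from a monthly precipitation series.

    precip : 1-D array of monthly totals; scale : accumulation window in months.
    """
    acc = np.convolve(precip, np.ones(scale), mode="valid")  # k-month sums
    a, loc, b = stats.gamma.fit(acc[acc > 0], floc=0)        # gamma fit, loc fixed at 0
    cdf = stats.gamma.cdf(acc, a, loc=loc, scale=b)          # non-exceedance probability
    return stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))     # map to N(0, 1)
```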
Snow Water Equivalent
We use snow water equivalent (SWE) as an indicator variable for the snow drought case used for validation. SWE is the target variable for the random forest model (discussed in a later section) used in selecting the input features for our index. We obtained SWE data and snow depth from assimilated in-situ and modeled data over the conterminous US, Version 1 33,34 from National Snow and Ice Data Center (NSIDC). This data provides daily SWE and snow depth at a spatial resolution of 4km x 4km for the conterminous United States (CONUS). We collected the SWE data from 1982 to 2020 and spatially aggregated them for the study basins and converted them to monthly timeseries.
CAMELS Basin Shapefiles
Catchment Attributes and MEteorology for Large-sample Studies (CAMELS) provide the time series of meteorological forcings and attributes of 671 basins across the CONUS 35,36 . These basins are least affected by human actions 36 . The dataset contains different categories of basin attributes: topography, climate, streamflow, land cover, soil, and geology 36 . We used the basin shapefiles provided in the dataset and filtered basins (a total of 85) belonging to the Pacific Coast states. The gridded datasets are aggregated to the basin scale using these shapefiles.
Discharge
The discharge data for these basins were collected from the US Geological Survey's (USGS) streamflow measurements provided in the CAMELS dataset. USGS collects, monitors, and analyzes existing resource conditions across the different sites in the US. USGS stations measure velocity through a current meter or acoustic doppler current profiler. The discharge is computed by multiplying the cross-sectional area by the estimated velocity. For this study, daily data were obtained from 1979 through 2020. The daily flow records for USGS gage stations provide the value of mean discharge for each day of the water year (US Geological Survey, 2023).
SnoDRI Framework
In this work, as shown in Figure 2, we followed a general framework where all input data were standardized up front. Using this dataset, we found the weight of each input variable from an ML model coupled with MI. The details of these components are discussed later on. Once the weights are obtained, each variable is multiplied by the corresponding weight, and the weighted inputs are added. The resultant values are then standardized to obtain the SnoDRI index.
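Assuming the weights have already been estimated, the final combination step can be sketched in a few lines; this is an illustrative reading of Figure 2, not the authors' exact code.

```python
import numpy as np

def snodri(X, weights):
    """Combine inputs into a single standardized index.

    X       : (T, F) array, one column per input variable
    weights : (F,) array of weights inferred from the autoencoder + MI step
    """
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize each input up front
    s = Xs @ weights                           # weighted sum at every time step
    return (s - s.mean()) / s.std()            # standardize the result -> SnoDRI
```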
Random Forest Regression
Random Forests are a collection of decision trees. Each tree in a Random Forest is built from a random subset of variables on bootstrapped data, and the ensemble can be used for classification and regression problems 38 . Since every regression tree inside the random forest identifies an ordering of variables to partition the training dataset, we can leverage the random forest regression algorithm to find the feature importance of the training variables in predicting the target variable. In this study, we use random forest regression to select the important variables to develop SnoDRI. We aggregated the NLDAS-2 variables for 85 CAMELS basins in the Western US and regressed them against SWE and discharge within the basins. This yields two sets of feature importances, corresponding to SWE and discharge, for each basin. Then, by taking the mean over all basins, the average feature importance of the variables for the Western US is calculated. The union of the top four variables in average feature importance for predicting SWE and discharge is used to compute SnoDRI.
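A per-basin version of this feature-ranking step might look as follows; the number of trees is a placeholder, since the paper does not report the random forest hyperparameters. Repeating the call for every basin, once with SWE and once with discharge as the target, and averaging the returned importances reproduces the procedure described above.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def rank_features(X, y, names, seed=0):
    """Impurity-based feature importance for one basin.

    X : (T, F) monthly basin-aggregated NLDAS-2 variables
    y : (T,) target series (SWE or discharge)
    """
    rf = RandomForestRegressor(n_estimators=500, random_state=seed)
    rf.fit(X, y)
    order = np.argsort(rf.feature_importances_)[::-1]  # most important first
    return [(names[i], float(rf.feature_importances_[i])) for i in order]
```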
Self-Supervised Learning with Autoencoder
Given that we are trying to develop a novel snow drought index derived from different snow variables, we do not have a target variable to train a model with. The absence of a target variable makes our problem a case of unsupervised learning. We used autoencoders, a particular type of neural network used for dimensionality reduction 39 . During the learning process, autoencoders "discard" the insignificant information present in the dataset. An autoencoder consists of an input layer, an encoder with hidden layers, a bottleneck layer, a decoder with hidden layers, and an output layer that tries to reconstruct the input data (self-supervised learning). The NLDAS-2 variables identified through random forest regression (section 3.1) are initially passed to the input layer. As the data passes through the encoder and reaches the bottleneck layer, the entire dataset is encoded into a reduced form, which can be regarded as a 'compressed' form of the important information in the dataset. After several trial-and-error iterations, we finalized an autoencoder network with three hidden layers, including the bottleneck. The structure of the autoencoder network is shown in Figure 2.
The bottleneck layer consists of one neuron, and the other two hidden layers consist of fifteen neurons each. The hidden layers and the bottleneck layer use tanh activation functions, which take care of the nonlinearities in the model. We found that the Adam optimization algorithm with the Huber loss function gives better training of our model. Training the model for 3000 epochs as single batches provided the best possible accuracy with the given datasets. The loss and accuracy stayed nearly the same regardless of any additional increase in the number of epochs.
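The architecture described above maps onto a short Keras sketch: the layer widths, tanh activations, Adam optimizer, Huber loss, and single-batch training for 3000 epochs follow the text, while details not stated there (e.g., a linear output layer) are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_autoencoder(n_features):
    """Encoder 15 -> bottleneck 1 -> decoder 15, as described in the text."""
    inp = layers.Input(shape=(n_features,))
    h = layers.Dense(15, activation="tanh")(inp)
    z = layers.Dense(1, activation="tanh", name="bottleneck")(h)
    h = layers.Dense(15, activation="tanh")(z)
    out = layers.Dense(n_features)(h)   # linear reconstruction layer (assumption)
    ae = Model(inp, out)
    ae.compile(optimizer="adam", loss=tf.keras.losses.Huber())
    return ae, Model(inp, z)            # full model and encoder-only model

# Self-supervised training on the standardized inputs X, as a single batch:
# ae, encoder = build_autoencoder(X.shape[1])
# ae.fit(X, X, epochs=3000, batch_size=len(X), verbose=0)
# bottleneck = encoder.predict(X).ravel()
```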
A valid question here is why we cannot directly use the weights of the trained autoencoder. To explain this, referring to Figure 3, we must look at all possible trajectories of "information flow" from one input variable to the compressed bottleneck output. We can see that other input variables also influence the weights. For example, the highlighted path in Figure 3 shows possible trajectories of "information flow" from input $X_1$ before it reaches the compressed bottleneck output. Since the nodes in the second layer are connected to all input nodes from previous layers, the weights $W_{11}^{(2)}$ and $W_{12}^{(2)}$ are optimized based on the information from all input variables. As the information from each input node is divided and passed to all nodes in the hidden layer, the weights $W_{11}^{(1)}$ and $W_{12}^{(1)}$ are optimized based on the division of information from the input nodes. Therefore, the weights inside the autoencoder network are not representative of the relative contribution of each input to the compressed bottleneck output. This issue led us to use an alternative method, MI, to infer the weights for each input feature.
Mutual Information
Mutual Information (MI) is a measure of how much information, on average, one random variable can tell us about another random variable 22,40,41 . It can be conceptualized as the reduction in the entropy of one variable given information about another variable 21 . The MI between two random variables, $X$ and $Y$, expressed as $I(X;Y)$, is calculated using Equation 1 22,40 . Here $P_X(x)$ and $P_Y(y)$ are the marginal probabilities, and $P_{XY}(x,y)$ is the joint probability.
$$I(X;Y) = \sum_{x \in X} \sum_{y \in Y} P_{XY}(x,y)\,\log\frac{P_{XY}(x,y)}{P_X(x)\,P_Y(y)} \tag{1}$$
This study used the MI between the bottleneck output and each input variable to determine the variable weights. As shown in Figure 2, each variable is multiplied by its weight and the results are added (a weighted sum), which is then standardized to obtain the SnoDRI values. Since the bottleneck represents a "compressed" version of all input data, the MI shows how much of each variable is contained inside this "compressed" form.
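The weighting step can be sketched with scikit-learn's k-nearest-neighbor MI estimator. Note that this estimator approximates Eq. (1) for continuous variables, and normalizing the scores to sum to one is our illustrative choice rather than a step stated in the text.

```python
from sklearn.feature_selection import mutual_info_regression

def mi_weights(X, bottleneck):
    """Weight each input by its MI with the compressed bottleneck output.

    X          : (T, F) standardized input variables
    bottleneck : (T,) output of the trained encoder
    """
    mi = mutual_info_regression(X, bottleneck)  # k-NN estimate of I(X_i; Z)
    return mi / mi.sum()                        # normalized weights (a choice)
```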
Rain-Snow Partitioning
We also used the snow fraction as an input to the model. In the modeling realm, there are different rain-snow partitioning schemes by which the precipitation forcing is separated into snow and rainfall. Classifying rain and snow based on a threshold temperature is the most straightforward scheme, but this method is susceptible to the choice of the threshold temperature. In another scheme, proposed by Jordan (1991), the snow percentage is calculated as a linear stepwise function of air temperature 42 . In this study, we estimated the snow fraction based on a sigmoid function of the wet-bulb temperature, as proposed by Wang et al. (2019) 43 . The wet-bulb temperature is calculated from the air temperature, specific humidity, and surface pressure of the NLDAS-2 dataset.
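A generic sigmoid partition of this kind is sketched below; the coefficients a and b are illustrative shape parameters only, not the fitted values published by Wang et al. (2019).

```python
import numpy as np

def snow_fraction(t_wb, a=-3.0, b=3.0):
    """Sigmoid rain-snow partition driven by wet-bulb temperature (deg C).

    The fraction tends to 1 for cold conditions and to 0 for warm conditions;
    with these placeholder coefficients the 50% crossover sits near 1 deg C.
    """
    return 1.0 / (1.0 + np.exp(a + b * np.asarray(t_wb)))
```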
Validation
There is no single framework for validating a drought index. Based on the purpose of the new index, one must examine whether the index follows the drought indicators, such as the scarcity of relevant environmental variables. This study uses the anomaly in SWE as an indicator variable. A negative anomaly represents the lack of snow accumulation compared to normal and vice versa. The lower the anomaly, the more severe the snow drought. The discharge co-occurring with SWE is another indicator variable. Reduced discharge due to low meltwater contribution can be a potential impact of the snow drought. A lower discharge following a lower SWE is a prime case of snow drought.
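As one hedged way to compute this indicator, the sketch below derives a calendar-month anomaly from a monthly SWE series; the paper does not state its exact anomaly definition, so the climatology used here is an assumption.

```python
import numpy as np

def monthly_anomaly(series, months):
    """Anomaly of a monthly series relative to its calendar-month climatology.

    series : (T,) values, e.g. basin-mean SWE
    months : (T,) calendar-month labels (1..12) for each entry of series
    """
    out = np.empty(len(series), dtype=float)
    for m in range(1, 13):
        idx = (months == m)
        out[idx] = series[idx] - series[idx].mean()  # subtract month climatology
    return out
```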
We compared the novel SnoDRI with patterns of SWE and discharge in the Upper Tuolumne basin of California. Hatchett et al. (2022) and Harpold & Dettinger (2017) reported a snow drought in this region during the winters (Jan to Apr) of 2014 and 2015 5,8 . A lower SWE, along with a lower discharge, is taken as a signal of snow drought in the basin. We checked whether these signals correspond to a lower value of SnoDRI. Meanwhile, the index should not give a false positive forecast.
Results
Feature Importance
Through random forest regressions for 85 CAMELS basins, we obtained the average feature importance of the NLDAS-2 variables for the Pacific Coast States of the US in predicting SWE and discharge. This method gives a sense of the relative significance of each variable in generating SWE or discharge. Figure 4a shows the feature importance for predicting SWE. We see that temperature is the most significant variable in determining SWE. This is most likely because the temperature decides the amount of snowfall (vs. rainfall) in the precipitation; several rain-snow partitioning schemes are highly sensitive to temperature [42][43][44][45] . After temperature, the downward shortwave radiation affects the SWE most. Since the primary source of energy that drives atmospheric processes is the incoming shortwave radiation, we can expect that the formation and accumulation of snow are highly dependent on shortwave radiation. Specific humidity is the third most important variable for SWE, as obtained from the random forest regression. Specific humidity (a measure of the water content of the atmosphere) could have a significant effect on the formation of snow. Figure 4b indicates the average feature importance for predicting discharge for the Pacific Coast States of the US. The results show that precipitation has a very high significance for estimating discharge in the basin. It is intuitive that the incoming precipitation, in the form of rainfall or snow, would contribute the most to generating river runoff, as precipitation is the primary water source for any basin. Though temperature, zonal wind, and downward shortwave radiation are the most important after precipitation, their importance is far lower than that of precipitation, as the results of the random forest regression suggest.
From both cases, i.e., random forest regression targeting SWE and discharge, we identified the top three variables having the highest average feature importance for our study area. The union of these variables gives the set of APCP, TMP, DSWRF, SPFH, and VGRD, which are used as input variables in the SnoDRI calculations.
Weights from Mutual Information
The MI between the compressed bottleneck output and each input variable measured the relative importance of the corresponding variable in the compressed data (bottleneck output). Figure 5 shows the approach for obtaining weights and the subsequent results.
We can see that the downward shortwave radiation, SPIs, temperature, and snow fraction are found to be carrying more weight than other variables. Downward shortwave radiation from the sun entering the atmosphere, besides acting as the sole energy source of hydroclimatic processes, causes the direct melting of snow. This shows how important the downward shortwave radiation can be in the occurrence of snow droughts, which is reflected as the highest weight estimated by our framework. However, many of the previous studies on snow droughts have not considered the impact of downward shortwave radiation in their assessment 1,7,12 . Our result suggests further investigating the direct and indirect association between downward shortwave radiation and snow droughts. The model also identified shorter-scale SPIs (3, 4, and 6 months) as significant.
Since SPIs represent the lack of precipitation in the case of drought, they can be a proxy for lower snowfall and snowpack conditions. Following this, the model assigned more weight to temperature, complying with the fact that temperature drives snow processes from the formation of snow to the melting of snow. Snow fraction, another variable with considerable weight, can be directly related to the amount of snowpack. The lower the snow fraction, the lower the snowfall, leading to snow drought conditions. Interestingly, the model identifies that precipitation as such does not have a severe influence, even though the snow fraction partially contains precipitation information. The reason could be that the SPIs, which are the precipitation mapped to a normal distribution, already contain the precipitation information. This can also be attributed to the ability of the model to dismiss redundant information. The lowest weights were assigned to SPI-12 and SPI-60, the two variables presupposed to be irrelevant for snow droughts, which additionally confirms the aforementioned ability of the model.
SnoDRI Evaluation
With the newly developed snow drought index, SnoDRI, we analyzed the conditions of snow drought in the Upper Tuolumne River basin from February 2013 to May 2019. This period is included in the evaluation period; in other words, the model never saw the input dataset of this duration throughout its training. Over this evaluation period, the new index shows a good performance in capturing snow drought events. Figure 6 shows the SnoDRI calculated for the Upper Tuolumne Basin in California, indicating lower values corresponding to the reported snow droughts during the winters of 2013/14 and 2014/15. Negative (positive) values of the SnoDRI suggest the presence (absence) and severity of the snow drought. In addition, we placed SnoDRI and the potential signals of the snow drought in juxtaposition. Comparing SnoDRI with the anomaly in SWE, SnoDRI shows lower (more negative) values when the SWE anomaly is low. For instance, in 2014, 2015 and 2018, the Upper Tuolumne Basin showed a negative anomaly in SWE (Figure 6). During these years, SnoDRI showed lower values, indicating the occurrence of snow drought. On the other hand, during 2017, the region observed higher snow accumulation, leading to a positive SWE anomaly, and SnoDRI indicates continuous positive values during this period. Moreover, the higher (in 2017) and lower (in 2014, 2015, and 2018) values of the original SWE and snow depth are reflected in SnoDRI. A lower discharge accompanied by a lower SWE is another indicator of snow drought. In Figure 6, during all winters from 2013 to 2019, except for 2017 and 2019, the Upper Tuolumne basin produced low discharges, and SnoDRI identifies this signal. In 2017 and 2019, by contrast, the basin received a higher snow accumulation and as a result generated a higher discharge; this absence of snow drought is reflected in the SnoDRI, as seen in Figure 6. The higher the temperature, the higher the chance of snow drought, leading to reduced snowfall and rapid snowmelt. In accordance with Figure 6, negative SnoDRI matched with positive anomalies in temperature, and vice versa.
Discussion
Generally, this study proposes a new framework to calculate SnoDRI, an index that can measure snow droughts. The framework could be advantageous due to the multiple strengths that we identified. Firstly, it can be applied in ungauged basins. Since we use only the selected features from the NLDAS-2 forcings, a gridded dataset integrating satellite observations with measurement gauges and radars, we do not need to rely on any ground-measured variables to calculate SnoDRI. Several basins worldwide are still ungauged, especially in developing countries, leading to a lack of efficient drought monitoring systems; our framework can act as an alternative snow drought indication framework in such regions. Secondly, our framework reduces the subjectivity in choosing the input variables by using random forest models to select important features. Previous studies examined snow droughts with definitions based on a handful of variables (mostly precipitation, temperature, and SWE) selected based on expert knowledge and assumptions, which adds subjectivity to the analysis 1,12 . Thirdly, our framework, besides calculating the index, can give insights into what factors drive snow drought conditions, as represented by the weights from the autoencoder with MI. Typically, studies investigating snow droughts calculate the index based on the abnormal variations in snow variables. Fourthly, regardless of several model inputs possessing multicollinearity, a common issue while using multiple predictor variables, we can see that the SnoDRI framework could eliminate redundant information to a certain extent. For instance, the model gave a lower weight to specific humidity, a variable used along with temperature to calculate the snow fraction. Finally, unlike SPI or the Palmer Drought Severity Index (PDSI; Palmer, 1965), the proposed ML-based framework is not sensitive to the time series length 14 . Once the model is trained, its weights are fixed and can be applied to a new dataset, no matter the range of time. However, the efficacy of SnoDRI in capturing multi-scale (both spatial and temporal) droughts needs to be explored in greater depth. The abovementioned capabilities of the framework highlight the competence of ML and information theory metrics in assessing snow droughts (or any drought, for that matter).

Formulating the framework for the drought index calculation came with several challenges. Most important was the absence of a target variable to train and test the ML model. It is not straightforward to establish the performance of the index with a statistical metric (e.g., NSE or KGE). Rather, we had to compare and contrast the indicator variables with the index to see whether it was able to capture the signals of drought. This gave rise to another challenge: the lack of characterization of snow droughts in the present literature. Despite the ongoing interest in the topic, researchers have not reached an established definition of snow drought, which introduces uncertainty in assessing the presence of snow droughts. Related to the above, another challenge in this study was the validation of the new index, a common problem in all drought index development studies 46 . Generally, researchers compare a new index with some of the reported drought events to validate the index [47][48][49] . Nevertheless, this does not exclusively show the ability of an index to capture all drought events. Some studies validate their index by checking its congruency with the US Drought Monitor 49,50 . There is vast room for research in developing an established framework for validating drought indices. A possible way to create a validation framework is to verify the closeness of the distributions of relevant variables with that of the index; the closeness of different distributions can be quantified with statistical methods.

We acknowledge that, to some extent, the index is susceptible to the group of input variables we start with before the random forest feature selection. The importance of any variable obtained from the random forest feature selection depends on the whole set of input variables. In other words, adding more variables or choosing a different dataset might produce a different order of feature importance. Although we only considered the NLDAS-2 variables in calculating SnoDRI, the framework can be applied to any set of time series input variables, for example, the ERA5-Land dataset, which provides a larger number of variables over a more extended period 51 , or data from Land Information System (LIS) simulations. In spite of the high computational cost, it would be interesting to see the performance of the framework with ERA5 or LIS variables. We have executed the framework at the basin scale in the western US. Future efforts can be directed towards establishing the framework in a gridded manner at continental or global scales.
Although we focused on snow droughts, this framework can be applied to identify any type of drought, given the appropriate input variables and feature selection. We selected variables by regressing random forests against SWE and discharge. Training the random forests against different target variables relevant to the interested drought type would give another set of input variables. These can be transformed into an index by following the steps in our framework. Thus, by design, our framework is transferrable. We can set up the framework for any region of interest by training the random forest (for feature selection) and autoencoder (for estimating the weights of selected features through MI scores) with the data of that region. It should be noted that, for different regions, the model might assign different weights to variables depending upon the hydroclimatic characteristics of the region. For instance, in the regions where temperature variability has greater influence, temperature is most likely to control the compressed bottleneck information inside the autoencoder, leading to a higher MI score for temperature (with bottleneck information). This aspect of our framework can be used to get insights about the impact of each variable on droughts.
Conclusion
We developed a framework to calculate a new index, SnoDRI, that can be used to identify snow droughts. We trained random forest models for 85 basins across the west coast to select the input features for the index calculation. Our novel framework showed the capability of combining autoencoders (a self-supervised machine-learning algorithm) with MI (a degree of mutual dependency between two variables) to estimate the importance of input variables in the occurrence of snow droughts. We found that the downward shortwave radiation, SPIs, temperature, and snow fraction considerably influence snow droughts. In validation, SnoDRI successfully captured the reported snow drought events and their signals in Upper Tuolumne Basin in the Sierra Nevada region. The framework demonstrated the potential to eliminate redundant information in the dataset. The novelty of our framework is that it can be applied to ungauged basins since it does not use any ground measurements. The framework can be adapted to other types of droughts and to different regions around the world.
Data Availability
The datasets used and/or analyzed in the study are available from the corresponding author on reasonable request.
Figure 1. The Pacific Coast States of the United States showing the study basins.

Figure 2. The methodology used for developing SnoDRI. On the left-hand side is the flowchart with the steps followed in this study. On the right-hand side is the framework with the autoencoder (top) and the Mutual Information used to calculate the weight of each input variable. Inside the square brackets is the number of nodes in each layer of the autoencoder.

Figure 3. The encoder portion (first half) of the autoencoder neural network. The highlighted paths show two possible "information flows" (black arrowed lines) from the first input feature to the compressed bottleneck output.

Figure 4. Average feature importance for SWE (a) and Q (b) in basins of the Pacific states. The long names of the variables are provided in Table 1.

Figure 5. Weights obtained for each input variable as the Mutual Information between each variable and the compressed bottleneck output.

Figure 6. SnoDRI, SWE anomaly, SWE, and discharge in the Upper Tuolumne basin during the validation period.
Table 1. List of variables considered for analysis.
Acknowledgments

We would like to thank the Holland Computing Center at the University of Nebraska-Lincoln for providing high-performance computing resources.

Author Contribution Statement

SRK and TR designed the framework. SRK implemented the framework. SRK, KKK, SS conducted data pre-processing. TR supervised the work. MS and TT provided important feedback. SRK prepared the article with contributions from KKK, SS, TT, MS, and TR.
| [] |
[
"Darwin: A DRAM-based Multi-level Processing-in-Memory Architecture for Data Analytics",
"Darwin: A DRAM-based Multi-level Processing-in-Memory Architecture for Data Analytics"
] | [
"Donghyuk Kim \nKAIST\nDaejeonSouth Korea\n",
"South Daejeon \nKAIST\nDaejeonSouth Korea\n",
"Jae-Young Korea \nKAIST\nDaejeonSouth Korea\n",
"Kim \nKAIST\nDaejeonSouth Korea\n",
"Wontak Han \nSK hynix Inc. Icheon\nSouth Korea\n",
"Jongsoon Won [email protected] \nSK hynix Inc. Icheon\nSouth Korea\n",
"Haerang Choi [email protected] \nSK hynix Inc. Icheon\nSouth Korea Joo-Young Kim\n",
"Yongkee Kwon [email protected] \nKAIST\nDaejeonSouth Korea\n"
] | [
"KAIST\nDaejeonSouth Korea",
"KAIST\nDaejeonSouth Korea",
"KAIST\nDaejeonSouth Korea",
"KAIST\nDaejeonSouth Korea",
"SK hynix Inc. Icheon\nSouth Korea",
"SK hynix Inc. Icheon\nSouth Korea",
"SK hynix Inc. Icheon\nSouth Korea Joo-Young Kim",
"KAIST\nDaejeonSouth Korea"
] | [] | Processing-in-memory (PIM) architecture is an inherent match for data analytics applications, but we observe major challenges to address when accelerating them using PIM. First, data analytics involves intensive read and write operations on databases, causing bandwidth bottleneck issues even inside the memory. Furthermore, the irregular and non-deterministic data analytics workload causes load imbalance among in-memory processing units, deteriorating the overall performance. Then, the conventional DRAM command protocol, which sends a command to a single bank, causes a command bottleneck for complex data analytics operators. In this paper, we propose Darwin, a practical LRDIMM-based multi-level PIM architecture for data analytics, which fully exploits the internal bandwidth of DRAM using the bank-, bank group-, chip-, and rank-level parallelisms. Considering the properties of data analytics operators and DRAM's area constraints, Darwin maximizes the internal data bandwidth by placing the PIM processing units, buffers, and control circuits across the hierarchy of DRAM. More specifically, it introduces a bank processing unit for each bank, in which a single instruction multiple data (SIMD) unit handles regular data analytics operators (e.g., select, aggregate, and sort), and a bank group processing unit for each bank group to handle workload imbalance in the condition-oriented data analytics operators (e.g., project and join). Furthermore, Darwin supports a novel PIM instruction architecture that concatenates instructions for multiple thread executions on bank group processing entities, addressing the command bottleneck by enabling separate control of up to 512 different in-memory processing units simultaneously. We build a cycle-accurate simulation framework to evaluate Darwin with various DRAM configurations, optimization schemes, and workloads. Darwin achieves up to 14.7x speedup over the non-optimized version, leveraging many optimization schemes including the novel instruction architecture, circuit optimizations for each data analytics operator, and the proper placement of the processing units. Finally, the proposed Darwin architecture achieves 4.0x-43.9x higher throughput than the baseline CPU system (Intel Xeon Gold 6226 + 4 channels of DDR4-2933) and reduces energy consumption by 85.7% for the essential data analytics operators. Compared to the state-of-the-art PIM architectures, Darwin achieves up to 7.5x and 7.1x speedups on the basic query operators and TPC-H queries, respectively. Darwin is based on the latest GDDR6 and requires only 5.6% area overhead, suggesting a promising PIM solution for the future main memory system. | 10.48550/arxiv.2305.13970 | [
"https://export.arxiv.org/pdf/2305.13970v1.pdf"
] | 258,841,262 | 2305.13970 | 3eae4656851d4d37f71b1134877b6c51f81a6e49 |
Darwin: A DRAM-based Multi-level Processing-in-Memory Architecture for Data Analytics
Donghyuk Kim
KAIST
Daejeon, South Korea
Jae-Young Kim
KAIST
Daejeon, South Korea
Wontak Han
SK hynix Inc. Icheon
South Korea
Jongsoon Won [email protected]
SK hynix Inc. Icheon
South Korea
Haerang Choi [email protected]
SK hynix Inc. Icheon
South Korea
Yongkee Kwon [email protected]
SK hynix Inc. Icheon
South Korea
Joo-Young Kim
KAIST
Daejeon, South Korea
Processing-in-memory (PIM) architecture is an inherent match for data analytics applications, but we observe major challenges to address when accelerating them using PIM. First, data analytics involves intensive read and write operations on databases, causing bandwidth bottleneck issues even inside the memory. Furthermore, the irregular and non-deterministic data analytics workload causes load imbalance among in-memory processing units, deteriorating the overall performance. Then, the conventional DRAM command protocol, which sends a command to a single bank, causes a command bottleneck for complex data analytics operators. In this paper, we propose Darwin, a practical LRDIMM-based multi-level PIM architecture for data analytics, which fully exploits the internal bandwidth of DRAM using the bank-, bank group-, chip-, and rank-level parallelisms. Considering the properties of data analytics operators and DRAM's area constraints, Darwin maximizes the internal data bandwidth by placing the PIM processing units, buffers, and control circuits across the hierarchy of DRAM. More specifically, it introduces a bank processing unit for each bank, in which a single instruction multiple data (SIMD) unit handles regular data analytics operators (e.g., select, aggregate, and sort), and a bank group processing unit for each bank group to handle workload imbalance in the condition-oriented data analytics operators (e.g., project and join). Furthermore, Darwin supports a novel PIM instruction architecture that concatenates instructions for multiple thread executions on bank group processing entities, addressing the command bottleneck by enabling separate control of up to 512 different in-memory processing units simultaneously. We build a cycle-accurate simulation framework to evaluate Darwin with various DRAM configurations, optimization schemes, and workloads. Darwin achieves up to 14.7x speedup over the non-optimized version, leveraging many optimization schemes including the novel instruction architecture, circuit optimizations for each data analytics operator, and the proper placement of the processing units. Finally, the proposed Darwin architecture achieves 4.0x-43.9x higher throughput than the baseline CPU system (Intel Xeon Gold 6226 + 4 channels of DDR4-2933) and reduces energy consumption by 85.7% for the essential data analytics operators. Compared to the state-of-the-art PIM architectures, Darwin achieves up to 7.5x and 7.1x speedups on the basic query operators and TPC-H queries, respectively. Darwin is based on the latest GDDR6 and requires only 5.6% area overhead, suggesting a promising PIM solution for the future main memory system.
I. INTRODUCTION
In the era of big data, data-intensive applications such as artificial intelligence [27] and data analytics [8] proliferate by utilizing extremely large datasets. As these applications mainly consist of operations with a low compute-to-memory ratio, memory operations dominate over compute operations, causing the "memory wall" [6]. Furthermore, the bottleneck on the memory side worsens as the speed improvement of memory technology falls behind that of logic technology. Continual efforts to increase the off-chip bandwidth of recent DRAM technologies [17], [28] result in higher IO speed and more pins, but they come at the cost of increased expense and power consumption.
To minimize the data movement overhead in data analytics, the database backend has been relocated from storage to main memory [26], [35], [44], avoiding expensive disk IO accesses. Additionally, some analytical query operators (e.g., select, aggregate, sort, project, and join) are converted into vector operations, increasing the throughput of query processing [5]. Since the query operators iteratively compute on sequences of streamed data, vector-type processing can easily accelerate them by computing on many data elements at a time. However, even with a high-performance CPU, the vectorized query operations with a low compute-to-memory ratio cannot be sped up due to the ever-growing data size and the limitation of the off-chip bandwidth.
Previous research proposes accelerating data analytics on different hardware platforms, such as field-programmable gate arrays (FPGAs) [36], [46], [50] and graphics processing units (GPUs) [33], [42]. However, these approaches focus on improving the computation capability while leaving untouched the essential memory bottleneck that arises in computing data-intensive applications. Therefore, the paradigm shift from computation-centric to memory-centric architecture is unavoidable in such data-intensive applications.
Fig. 1. Column-oriented Database Management System
As a result, the inevitable memory bottleneck problem drives both industry and academia to reassess DRAM-based near-memory-processing (NMP) [7], [12], [19], [25], [50] and processing-in-memory (PIM) [13], [14], [16], [21], [22], [30], [31], [32], [34], [38], [48], [49] architectures, which increase the internal bandwidth by integrating computational logic closely with DRAM devices/cells. NMP architectures integrate a homogeneous processing unit (PU) per vault in the base logic die of the hybrid memory cube (HMC), supporting flexible dataflow for query operations. However, these approaches, which integrate PUs external to the memory, lose the opportunity to fully benefit from the wide internal bandwidth of DRAM. On the other hand, PIM architectures can fully exploit the abundant internal bandwidth of DRAM. However, they cannot efficiently compute complex query operations with complicated internal data movement, since they are only capable of bulk-wise data processing such as vector-vector and matrix-vector multiplications with a fixed data path. Furthermore, PIM architectures that integrate compute logic closer than the bank level (e.g., in-cell or near-subarray) are impractical because they significantly reduce cell density.
To address the limitations of the previous approaches, we propose Darwin, a practical LRDIMM-based multi-level PIM architecture for data analytics. First, Darwin reuses the conventional hierarchical DRAM architecture to save the additional interconnect resources for the internal data movement incurred by PIM computation. It exploits the bank, bank group, and rank levels for multi-level parallelism with reduced internal data movement. It utilizes a single instruction multiple data (SIMD) fixed-point unit exploiting wide bank-level parallelism while separating the control granularity to the bank group level for flexible execution.
Second, Darwin provides an in-memory control unit to support seamless workload balancing after condition-oriented query operators, whose output data size is not predictable at static compile time. Third, Darwin modifies the command interface to avoid the command bottleneck in individually controlling multiple PUs simultaneously. We introduce a new PIM instruction architecture that concatenates multi-bank group commands, enabling independent but concurrent operations in multiple bank groups.
In this paper, we make the following contributions:
• We propose a multi-level PIM architecture for data analytics which fully exploits the internal bandwidth from the bank, bank group, and rank levels within a commodity DRAM architecture, reducing any additional overhead for practicality.
• We propose bank group-level processing units to support irregular data analytics operations, enabling dynamic runtime execution of the condition-oriented operations and low-overhead workload balancing.
• We propose a new command interface to support parallel but individual execution of in-memory PUs while avoiding the command bottleneck.
• We evaluate TPC-H and basic query operators on Darwin against the baseline CPU and state-of-the-art PIM architectures.
II. BACKGROUND
A. Data Analytics
Most current database systems used in finance and business are relational database management systems (RDBMSs) [10], where data are stored in the form of relations, which comprise lists of tuples and attributes. A tuple represents a row, and an attribute represents a column in a relation. RDBMSs can be divided into two types depending on how data are stored: row-oriented [2] and column-oriented [18], [45]. A row-oriented database organizes data by record, sequentially storing the attributes of each record. It is optimized for reading and writing rows, as in online transactional processing (OLTP). On the other hand, a column-oriented database such as MonetDB [18] stores data as an array per attribute, which benefits reading and computing on columns, as in online analytical processing (OLAP), shown in Figure 1. Furthermore, with column-oriented storage, the basic operators can be turned into vectorized query executions, where execution is performed iteratively on batches of input data. As a result, sequential memory accesses are prevalent in column-oriented databases, for which PIM is well suited as it promotes high data parallelism and wide bandwidth utilization of the memory.
In column-oriented storage, project is the dominant operator among the others. Kepe et al. [20] analyze the latency breakdown of MonetDB on the TPC-H benchmark. The result shows that project takes up 58% of the overall latency, while each of the other operators (e.g., select, aggregate, sort, and join) takes only up to 11%. Project, which materializes intermediate tables, occupies the majority because it occurs after every query operator in a query plan. This is because column-oriented storage generates sets of object IDs (OIDs), each representing the address of a tuple and unique within a relation, as the output of query operators. Using the OID result, project can connect the previous and following operators by generating the intermediate tables, which are used as inputs by the following operator.
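To make the OID pipeline concrete, the following sketch (our own illustration rather than MonetDB code; the function names select_lt and project are hypothetical) shows a column-at-a-time select producing OIDs, followed by a project that materializes another column from them:

```cpp
#include <cstdint>
#include <vector>

// Column-at-a-time select: scan one attribute array sequentially and
// emit the OIDs (row positions) of tuples with value < constant.
std::vector<uint32_t> select_lt(const std::vector<int32_t>& attribute,
                                int32_t constant) {
    std::vector<uint32_t> oids;
    for (uint32_t oid = 0; oid < attribute.size(); ++oid) {
        if (attribute[oid] < constant) {
            oids.push_back(oid);  // the OID indexes every column of the relation
        }
    }
    return oids;
}

// Project then materializes an intermediate column from the OID set,
// connecting this operator to the next one in the query plan.
std::vector<int32_t> project(const std::vector<int32_t>& attribute,
                             const std::vector<uint32_t>& oids) {
    std::vector<int32_t> out;
    out.reserve(oids.size());
    for (uint32_t oid : oids) out.push_back(attribute[oid]);
    return out;
}
```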
B. Architecture of Main Memory System
In order to properly exploit the PIM architecture, we need to understand the control granularity and the internal bandwidth of the main memory system. The internal bandwidth can be understood through the logical structure of the main memory. It adopts a multi-drop tree topology, where the highest level starts from the memory channel controlled by a memory controller on the host. A channel comprises multiple dual inline memory modules (DIMMs). Within a DIMM, several DDR chip packages are placed, forming a rank. The number of chip packages in a rank is determined by the number of DQ pins per package, where DQ pins are used for data input and output. The total number of DQ pins per DIMM is 64 bits to match the JEDEC specification. Then, multiple bank groups make up a rank, where four banks form a bank group. Due to the multi-drop tree topology, only a single bank of the DDR packages within a rank can be accessed at a time across a channel. Receiving the same command and address from the memory controller, each DDR package in a rank operates simultaneously, contributing a part of the combined DQ pins. Even though DRAM can only access one bank at a time, this is efficient since DRAM can exploit the bank interleaving scheme to hide internal delays, such as activation and precharge delays, increasing bandwidth utilization. However, the multi-drop topology unavoidably limits the internal bandwidth when multiple banks would read/write simultaneously.
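As a rough illustration of this hierarchy, a physical address can be decomposed into rank, bank group, bank, row, and column coordinates; the field widths below are placeholders of our own, not taken from any specific JEDEC device:

```cpp
#include <cstdint>
#include <cstdio>

// Decompose a physical address into DRAM coordinates. Field widths are
// illustrative: 10 column bits, 2 bank bits, 2 bank-group bits, 16 row bits.
struct DramCoord {
    uint32_t rank, bank_group, bank, row, column;
};

DramCoord decode(uint64_t addr) {
    DramCoord c;
    c.column     = addr & 0x3FF;   addr >>= 10;  // 1024 columns
    c.bank       = addr & 0x3;     addr >>= 2;   // 4 banks per bank group
    c.bank_group = addr & 0x3;     addr >>= 2;   // 4 bank groups per rank
    c.row        = addr & 0xFFFF;  addr >>= 16;  // 64K rows per bank
    c.rank       = static_cast<uint32_t>(addr);  // remaining bits select the rank
    return c;
}

int main() {
    DramCoord c = decode(0x12345678ULL);
    std::printf("rank=%u bg=%u bank=%u row=%u col=%u\n",
                c.rank, c.bank_group, c.bank, c.row, c.column);
}
```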
III. CHALLENGE OF DATA ANALYTICS
A. Internal Data Movement Overhead in Single-Level PIM
The conventional single-level PIM (SLPIM) architectures, such as those proposed in [7], [12], [16], [30], incur significant internal data movement when accelerating condition-oriented operators (e.g., join and project) and the merge phase of sort. SLPIM refers to an architecture that places PUs only at a single level (e.g., subarray, bank, or rank). We conducted experiments to observe the overhead of internal data movement in SLPIM when executing the basic data analytics operators. Figure 2 (a) shows the latency breakdown of the basic operators executed in SLPIM. The result shows two distinct trends, since the characteristics of the operators differ. First, the execution of sort, project, and join is dominated by internal data movement. As project and join are condition-oriented operators, they generate workload imbalance across the PUs and induce additional internal data movement. When executing these operators, the dataflow and input/output sizes are decided at runtime. In other words, the input can be evenly mapped to different memory nodes for a balanced workload, but the intermediate data are distributed unevenly as different outputs are generated among the PUs. This leads to underutilization and performance degradation, especially in the parallel computing of the in-memory PUs. Thus, the workload must be balanced among the PUs to maximize hardware utilization and performance, which generates additional data movement. Furthermore, the merge phases of sort and join cause significant internal data movement because data in different nodes are accessed frequently to merge separate partitions into one. Second, data movement is negligible in select and aggregate, as they are neither condition-oriented nor have heavy merge phases. Instead, the main overhead is computation, where they require high computation throughput for vectorized query execution. As shown in Figure 2 (b), SLPIM's data access mechanism is inherently inefficient when a PU accesses its neighboring memory nodes: each PU shares a global buffer for inter-node data movement, which causes a bottleneck in the global buffer. Since data analytics incurs significant internal data movement, a PIM architecture should be capable of moving data inside DRAM efficiently. However, the conventional DRAM structure does not have a dedicated interconnect for internal data movement. While Rowclone [40] proposes bulk copying of a row of data across different banks, the significant data movement induced by data analytics is too irregular for Rowclone to be utilized effectively. TransPIM [51] and GearBox [31] propose application-specific networks-on-chip (NoCs) for efficient DRAM-internal data movement. However, their NoCs consume a large area overhead considering the DRAM area constraint. In particular, utilizing a customized NoC within a commodity DRAM is unscalable and impractical for the flexible data movement required by data analytics.
Depending on the target operations, SLPIM can place PUs at different levels (e.g., subarray, bank, or rank). Having a PU at a high level, such as the rank [7], [12], enables the direct use of data in multiple memory nodes. However, the advantage of PIM is diminished because high data parallelism cannot be achieved when the PU is placed far from the memory nodes. Conversely, placing a PU at a lower level [16], [30] (e.g., bank) can maximize the advantages of PIM, but performance is degraded by the frequent internal and off-chip data movement overhead caused by the data analytics operators. When accelerating data analytics, SLPIM is not the most practical architecture, since flexibility in data movement and high computation parallelism cannot be achieved at the same time.
B. Command Bottleneck
The conventional DRAM command protocol causes a bottleneck in PIM since it is designed only to utilize the off-chip bandwidth efficiently. It exploits bank interleaving by alternately accessing data from different banks, since it can only send a command (e.g., activation, read, write, precharge, or refresh) to a single bank at a time. It is not suitable for executing PIM operations involving more than one PU across multiple banks. Regardless of how fast DRAM receives the commands, the shortest latency saturates at tCCDS, the minimum interval between column read commands.
To address the command bottleneck, previous research [16], [30] proposed an all-bank mode. Instead of controlling a single bank, it sends one command that controls every bank identically. As a result, it can efficiently address the command bottleneck for matrix-vector multiplications, which only require a homogeneous dataflow. However, data analytics incurs irregularity across the PUs, requiring different dataflows due to the condition-oriented workload. Thus, the all-bank mode is not suitable for irregular query operators due to its coarse control granularity and poor flexibility, since each PU needs to compute a different workload. For example, when processing join and project in a PIM with multiple PUs, each PU computes a different workload depending on the partitioned input data. Since the required dataflow varies depending on the input data, the all-bank mode, which is only efficient for applications with homogeneous dataflow, is not suitable for a PIM architecture that targets data analytics.
IV. DARWIN ARCHITECTURE
We propose Darwin, a practical multi-level PIM (MLPIM) architecture with concatenated instructions, multiple threads (CIMT) to address the challenges of in-memory data analytics processing. Darwin is capable of handling the complex dataflow and imbalanced workload problems while keeping the conventional DRAM hierarchical structure for practicality. Regarding the PIM-host interface, CIMT addresses the command bottleneck for the irregular operators, maximizing the command density and the hardware utilization.
A. Multi-Level PIM Architecture
As explained in Section III-A, SLPIM is ineffective in accelerating data analytics. To this end, Darwin integrates heterogeneous processing units at different levels of the DRAM hierarchy to achieve flexible internal data movement. Although this approach may seem to increase hardware and software complexity, MLPIM is a practical solution that reduces the hardware complexity for data analytics, which requires complicated dataflow. First, by integrating hardware units at both high and low levels of DRAM, Darwin reduces hardware overhead and eliminates additional NoC costs. This is achieved by reusing the conventional DRAM network and leveraging efficient memory access across multiple memory nodes. Furthermore, to keep PIM feasible, Darwin integrates optimized PUs that are located no closer to the cells than the bank level. Second, Darwin reduces software complexity by integrating a hardware controller inside DRAM to manage the parallel processing of irregular operators.
As shown in Figure 2 (c), the major difference between SLPIM and MLPIM is that the level at which an operator is offloaded determines its memory access pattern. Therefore, PUs must be placed at levels that match the data access patterns of the operators. Regular operators (e.g., aggregate, select, and sort) are handled most effectively when processed at L1, where the highest bandwidth gain is guaranteed. Since their output data size is determined at static time, they do not incur workload imbalance. In addition, each PU requires few memory accesses from its neighboring memory nodes. On the other hand, irregular operators (e.g., join and project) are handled most effectively when processed at a higher level, where efficient irregular memory access can be provided. These operators cause frequent irregular memory accesses from a single PU to several nodes; if the PU is placed at a lower level, these memory accesses slow down. Furthermore, the workload imbalance caused by these operators and by merge operations generates additional data movement across memory nodes for balancing the workload.
The MLPIM architecture of Darwin, as depicted in Figure 3, incorporates computing units and control across different levels, such as rank, chip, bank group, and bank. The bandwidth gain and corresponding operations at each level are summarized in the table. The major PUs are placed at the bank and bank group levels: the bank processing unit (BPU) supports regular operators to maximize data parallelism, while the bank group processing unit (BGPU) handles irregular operators that require data from a broad set of memory nodes by managing the internal data movement across the banks within a bank group. To efficiently use all compute units, Darwin has a PIM command scheduler at the chip level that supports bank group-level threading. Each bank group acts as an independent processing entity that executes a thread together, and multiple processing entities can execute multiple threads simultaneously. As depicted in the figure, Darwin supports multi-level data movement within a rank at each level, including inter-bank, inter-bank group, and inter-chip communication.
Fig. 4. Concatenated Instructions, Multiple Threads Architecture
B. CIMT: Concatenated Instructions, Multiple Threads
It is challenging to offload the data analytics workload, which comprises irregular and regular operators, to PIM, as stated in Section III-B. The conventional DRAM command protocol, which sends one command at a time using the narrow command and address (C/A) pins, cannot provide enough bandwidth to control Darwin's in-memory PUs separately when computing irregular operators. The all-bank mode can provide wide bandwidth to execute in-memory PUs simultaneously, but its control granularity is so coarse that it cannot process complicated data analytics operations efficiently. To this end, Darwin supports CIMT, which handles multiple in-memory PUs with fine control granularity without the command bottleneck. CIMT is optimized for the physical layout of the main memory system, which has multiple DRAM chips in a rank. Unlike the conventional command protocol, in which different DRAM chips have to receive the same command, each bank group in the different DRAM chips receives different instructions. Darwin utilizes the 64-bit DQ pins when sending CIMT using the write command for wider bandwidth; thus, the timing constraints are the same as those of a write command.
Each of the four DRAM chips in Figure 4 has 16-bit DQ pins, forming a rank with 64-bit DQ pins for off-chip data transmission. With a burst length of 8, a total data transaction of 64 bytes is sent per write command. To match this data size, the CIMT instruction comprises 8 different 64-bit PIM instructions concatenated together. Each PIM instruction is divided into four 16-bit slices, and the 16-bit slices of the different PIM instructions are placed in an interleaved manner, forming eight 64-bit interleaved instructions. Then, the 64-bit interleaved instructions are streamed into Darwin through the 64-bit DQ pins, formatted so that the corresponding instruction is sent to each chip. As a result, each chip receives complete 64-bit instructions. It takes eight cycles to send all 8 PIM instructions with a burst length of 8.
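A minimal sketch of this interleaving, assuming 4 chips with 16-bit DQ lanes and 8 concatenated 64-bit instructions; the exact slice ordering is our assumption, chosen so that each chip reassembles complete instructions from its own lane:

```cpp
#include <array>
#include <cstdint>

using Beats = std::array<uint64_t, 8>;  // 8 write-data beats, 64 bits each
using Instr = std::array<uint64_t, 8>;  // 8 concatenated PIM instructions

// Host side: slice every instruction into four 16-bit pieces and place
// them on the chips' 16-bit DQ lanes across the burst.
Beats interleave(const Instr& in) {
    Beats out{};
    for (int b = 0; b < 8; ++b) {           // beat index within the burst
        for (int c = 0; c < 4; ++c) {       // chip (16-bit DQ lane) index
            int instr = (b / 4) * 4 + c;    // beats 0-3 carry instr 0-3, etc.
            int slice = b % 4;              // which 16-bit slice of it
            uint64_t s = (in[instr] >> (16 * slice)) & 0xFFFF;
            out[b] |= s << (16 * c);        // place the slice on chip c's lane
        }
    }
    return out;
}

// Device side: chip `chip` reassembles its `which`-th (0 or 1) instruction
// from the four slices it received on its own lane.
uint64_t reassemble(const Beats& beats, int chip, int which) {
    uint64_t instr = 0;
    for (int slice = 0; slice < 4; ++slice) {
        uint64_t s = (beats[which * 4 + slice] >> (16 * chip)) & 0xFFFF;
        instr |= s << (16 * slice);
    }
    return instr;
}
```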
Each PIM instruction is decoded at the bank group level to generate up to 64 sequential PIM commands, relieving the burden of sending commands over the off-chip interface. A PIM command, generated from a PIM instruction, is a DRAM-readable command (e.g., activate, precharge, read, or write) paired with control signals for the in-memory PUs. To increase the throughput of the BPUs, all banks within the same bank group receive the same PIM command simultaneously, enabling concurrent execution of the BPUs. Thus, the CIMT instruction architecture enables separate control of up to 512 different BGPUs simultaneously. The CIMT architecture is applicable to other DRAM configurations.
C. Darwin Operation
Figure 5 shows Darwin's overall operation flow, which is mainly divided into data preparation, computation, and output stages. The regular operators have a fixed data flow in computing vectorized operations. When each thread has the same amount of input, the computation flow is identical in all threads. An example operation flow of select is shown in Figure 5 (a). In the data preparation stage, the PIM instructions are sent separately to each bank group, and the required input data are transferred to the BPU in each bank. To reduce latency, the BPU can directly use data from its own memory for one input operand. The scalar data of a SIMD operand is sent only once in the preparation stage, since the attribute data of the other operand can be transferred directly from memory during the computation stage. In the computation stage, the BPUs execute SIMD operations in parallel. Only one PIM instruction is required to compute the select operator on a row of data, since 64 sequential PIM commands can be generated from it. The computation stage continues until the register is filled with the generated output. Having a 512-bit bitmask register, the BPU can compute select on one row, which generates a 512-bit output bitmask, and then move on to the output stage. In the output stage, the generated output data are stored back in memory. Due to the limited size of the registers in the BPU, the output data cannot be held in the register for the entire operation. For the select operator, four write commands are required per row to store the 512-bit bitmask data to memory.
The irregular operators have a much more complicated data flow and an imbalanced workload among the threads, even with the same amount of input. An example operation flow of the project operator is shown in Figure 5 (b). In the preparation stage, the tuple number and the initial OID are set first. Then, the 512-bit bitmask data generated by the previous select operator are transferred to the BGPU. Next, the BGPU receives the PIM instruction to generate the corresponding read commands for the input attributes based on the bitmask data. In the computation stage, each BGPU receives a different amount of workload due to the different bitmask each one holds. By generating commands internally with the CIMT architecture, each BGPU executes individual commands without the command bottleneck. Furthermore, the computation of the BGPU is rate-matched to the peak bandwidth of the bank group for a streaming execution flow. In the output stage, the selected data are stored in the output register of the BGPU. Once the register is ready, write commands are generated to store the output attribute back. To balance the workload seamlessly, the BGPU generates the write commands in a bank-interleaved manner to write data evenly to each bank's memory. This guarantees the shortest latency between the write commands for maximum bandwidth utilization while exploiting the bank group-level parallelism. This process is repeated until the end.
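A functional sketch of the bitmask-driven gather at the heart of this project flow (our own simplification: the 512-bit bitmask is modeled as an array of 64-bit words, and register and FIFO capacities are ignored):

```cpp
#include <bit>
#include <cstdint>
#include <vector>

// Project: walk a select-generated bitmask and gather the matching
// tuples of another attribute. Bit i of word w corresponds to OID
// base_oid + 64*w + i; a set bit means the tuple was selected.
std::vector<int32_t> project_by_bitmask(const std::vector<uint64_t>& mask,
                                        const std::vector<int32_t>& attribute,
                                        uint32_t base_oid) {
    std::vector<int32_t> out;
    for (size_t w = 0; w < mask.size(); ++w) {
        uint64_t bits = mask[w];
        while (bits) {
            int bit = std::countr_zero(bits);   // lowest selected tuple
            out.push_back(attribute[base_oid + 64 * w + bit]);
            bits &= bits - 1;                   // clear the lowest set bit
        }
    }
    return out;
}
```

The output size depends entirely on the bitmask, which is why each BGPU ends up with a different amount of work even for equally sized inputs.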
V. IN-MEMORY LOGIC DESIGN
A. Bank-Level Processing Units
Figure 6 shows the BPU's microarchitecture, specialized for processing regular operators. Different from the previous PIM architectures designed for matrix-vector multiplication [16], [30], which have a straightforward data read and accumulation path, Darwin supports a long sequence of data processing for data analytics, including data read, sort, select, and aggregate. To make this a streaming process without data re-writing, the BPU is composed of row registers, a SIMD unit, two permute units, and an OID processing engine (OPE).
Select and Aggregate The BPU receives 32B of attribute data from the bank's I/O sense amplifiers (IOSAs) and saves it in row register A or B. The SIMD unit comprises eight sets of 4-byte fixed-point adders and multipliers, supporting addition, multiplication, min, and max on eight 4-byte data elements, which matches the bandwidth of a bank. The SIMD unit outputs bitmask, max, min, and result values, which are multiplexed into the permute unit based on the operator's opcode. For the aggregate operator, the result is accumulated in row register A. For the select operator, the bitmask is used, in which each bit indicates whether the corresponding input tuple is selected or not. Compared to using the 32-bit OID as an output, the bitmask reduces the memory footprint of the output data by 32x. The output bitmask is saved in the bitmask register for the selected data, which can later be used by the project operator.
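A scalar model of one BPU select step may help; it is illustrative only, assuming each PIM command delivers eight 4B values (one 32B bank read) and yields an 8-bit slice of the bitmask register:

```cpp
#include <cstdint>

// One BPU select step: compare eight 4B attribute values (one 32B bank
// read) against a scalar held in row register B; bit i of the returned
// mask slice is set if lane i satisfies the predicate (here: value < scalar).
uint8_t bpu_select_step(const int32_t lanes[8], int32_t scalar) {
    uint8_t mask = 0;
    for (int i = 0; i < 8; ++i) {
        if (lanes[i] < scalar) {
            mask |= static_cast<uint8_t>(1u << i);
        }
    }
    return mask;
}
// 64 such steps -- the 64 sequential PIM commands one instruction can
// generate -- fill the 512-bit bitmask register (64 x 8 bits).
```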
Fig. 6. BPU Microarchitecture
Sort We utilize the Bitonic merge-sort algorithm [4], which is known to work well with SIMD hardware, to accelerate the rather compute-intensive sort operator with the help of the BPU. The Bitonic sort network comprises ten stages, each requiring a total of 4 instructions: input permutation, min, max, and output permutation [9]. In addition, the addresses of the data (i.e., OIDs) must be sorted as well, which would double the number of instructions. To minimize computation latency and the number of instructions, we have incorporated two permute units before and after the SIMD unit. The permute unit shuffles sixteen 4B input data with pre-defined patterns, reducing area overhead by optimizing the permute unit circuitry for seven permutation patterns, as shown in Figure 6. The output generated by the permute unit is sent to the SIMD unit for comparison operations. The SIMD unit generates both min and max data simultaneously, reducing the two instructions for min and max operations to one. The 16 output data are then sent to the permute unit for output permutation. The BPU supports the OPE, which permutes the addresses tagged along with the data result. This eliminates the need to shuffle the OIDs separately, as the OIDs are shuffled simultaneously with the data.
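For reference, the compare-exchange primitive of a Bitonic network with OIDs tagged along, as the OPE does, can be sketched as follows (a sequential model, not the BPU datapath; the seven hardwired permutation patterns are abstracted away by the index arithmetic):

```cpp
#include <cstdint>
#include <utility>

// A (value, OID) pair: the OID travels with its value, so no separate
// OID shuffle is needed -- the role of the OPE in the BPU.
struct Tagged { int32_t value; uint32_t oid; };

void compare_exchange(Tagged& a, Tagged& b, bool ascending) {
    bool out_of_order = ascending ? (a.value > b.value) : (a.value < b.value);
    if (out_of_order) std::swap(a, b);  // min and max produced in one step
}

// Reference bitonic sort over n = 2^k tagged elements: a sequential
// model of what the permute units and SIMD unit execute in parallel.
void bitonic_sort(Tagged* a, int n) {
    for (int k = 2; k <= n; k <<= 1) {          // growing bitonic runs
        for (int j = k >> 1; j > 0; j >>= 1) {  // compare distance
            for (int i = 0; i < n; ++i) {
                int partner = i ^ j;
                if (partner > i) {
                    compare_exchange(a[i], a[partner], (i & k) == 0);
                }
            }
        }
    }
}
```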
B. Bank Group-Level Processing Unit
As described in Section III, computing condition-oriented operators in memory incurs challenges. To this end, we propose the BGPU, as shown in Figure 7. The BGPU maximizes the efficiency of the project and join units by optimizing the execution flow together with the workload balancing overhead. It comprises the bank group controller, the data analytical engine (DA engine), and the PIM command generator.
Data Analytical Engine The DA engine is composed of two vector registers, the project and join units, and the output FIFO. For the project operator, the OIDs and bitmask are stored in vector register A, and the attribute is stored in vector register B. The project unit can decode either OIDs or a bitmask, which indicate the selected tuples in the projected attribute. All the operators except select generate a set of OIDs as a result, while the select operator generates a simple bitmask, as described in the BPU microarchitecture. Based on the preconfigured addresses and the initial OID values, the project unit first sends the bitmask or OIDs to the PIM command generator so that it can generate the memory read commands for the input attribute and the memory write commands for the output attribute. The index selector of the project unit selects projected tuples among eight 4B tuple data per tCCDL period to rate-match the peak bandwidth at the bank group level, assuming a DRAM configuration with 16-bit DQ pins and a burst length of 16. Then, the selected output data are stored in the output register. Depending on the selectivity, the index selector may select fewer than eight data. Once a complete set of eight 4B data is prepared in the output register, it is dispatched to the output FIFO, which eventually goes to the banks. To reduce the read/write turnaround latency, the output FIFO holds up to 256B of data and sequentially writes them back to the bank.
Fig. 7. BGPU Microarchitecture
For the merge phase of join, the two sorted attributes are fetched into vector registers A and B, while the two OID sets are stored in the OID registers of the join unit. The join unit receives the two input attributes and merges them by comparing each tuple sequentially, as orchestrated by the join controller. The join controller sends the addresses of the required attribute data to the PIM command generator, which generates the memory read and write commands for the next input. In order to rate-match the peak bandwidth of the bank group, it includes two comparators for processing two pairs of tuples at a time, i.e., a total of four sets of 4B data and 4B OIDs. The OIDs of the output data that satisfy the join-merge condition are selected by the output OID selector. Then, the OIDs of the two tuples are sent to the output FIFO. As with the project operator, the output FIFO holds a set of output data and sequentially writes it back to the bank.
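A sequential model of this merge phase is sketched below (illustrative only; the hardware compares two tuple pairs per cycle, while this sketch processes one, and the key-uniqueness assumption reflects the foreign-key datasets used in Section VII):

```cpp
#include <cstdint>
#include <utility>
#include <vector>

struct Tuple { int32_t key; uint32_t oid; };

// Merge phase of sort-merge join over two key-sorted inputs. R's keys
// are assumed unique (S holds a foreign key into R), so each S tuple
// joins with at most one R tuple; the output is the matching OID pairs.
std::vector<std::pair<uint32_t, uint32_t>>
merge_join(const std::vector<Tuple>& r, const std::vector<Tuple>& s) {
    std::vector<std::pair<uint32_t, uint32_t>> out;
    size_t i = 0, j = 0;
    while (i < r.size() && j < s.size()) {
        if (r[i].key < s[j].key)      ++i;
        else if (r[i].key > s[j].key) ++j;
        else {                        // keys match: emit the OID pair
            out.emplace_back(r[i].oid, s[j].oid);
            ++j;                      // R keys are unique: advance S only
        }
    }
    return out;
}
```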
Bank Group Controller Instead of sending the intermediate results of the condition-oriented query operators to the host CPU for checking the condition, Darwin provides the bank group controller to take over the CPU's role and remove the off-chip data movement for host-PIM communication. The bank group controller is responsible for managing the PIM commands within the bank group. The sources of the PIM commands are either the PIM instruction or the PIM command generator. The PIM instruction is decoded into PIM commands by the instruction decoder. The PIM commands are generated sequentially and stored in the command queue. On the other hand, the PIM command generator in the bank group generates PIM commands based on the initial configurations received from the BGPU instructions when executing the project and join operators. When PIM commands are stored in the queue, the controller waits for an issuable signal from the command scheduler and then sends out the PIM command.
PIM Command Generator The PIM command generator conditionally generates PIM commands whose sequence is determined by the condition-oriented operations. Since DRAM is a timing-deterministic device whose control signals are managed by strict timing rules, the host memory controller cannot decide when to properly send the next command if the execution flow is decided non-deterministically inside the memory by the bank group controller. This issue can be easily addressed with a simple handshaking protocol between the CPU and the PIM device: the host CPU holds the next PIM instruction if the ready signal is not asserted by the device.
C. Chip-Level Command Scheduler
Darwin includes the PIM command scheduler at the chip level to oversee all the bank group command queues, considering the inter-bank timing constraints such as the row-to-row activation delay (tRRD) and the four-bank activation window (tFAW). Along with the inter-bank timing constraints, the scheduler manages each bank's state and counters, which indicate the remaining latency before each command can be issued. It pops the available PIM commands from each bank group command queue. The overhead of the command scheduler is extremely low, since the PIM commands generated for data analytics operators have sequential memory access patterns.
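The two inter-bank checks can be modeled with a small rolling window; the cycle values below are placeholders, and a real scheduler also tracks per-bank states and column timings:

```cpp
#include <cstdint>
#include <deque>

// Gate ACTIVATE issue on the two inter-bank constraints named above:
// tRRD (minimum gap between any two activates) and tFAW (at most four
// activates within any rolling window). Cycle values are placeholders.
class ActivateGate {
    std::deque<uint64_t> recent_;          // cycles of the last ACTIVATEs
    static constexpr uint64_t tRRD = 4;    // illustrative, in cycles
    static constexpr uint64_t tFAW = 16;   // illustrative, in cycles
public:
    bool can_issue(uint64_t now) const {
        if (!recent_.empty() && now - recent_.back() < tRRD) return false;
        if (recent_.size() == 4 && now - recent_.front() < tFAW) return false;
        return true;
    }
    void issue(uint64_t now) {
        recent_.push_back(now);
        if (recent_.size() > 4) recent_.pop_front();  // keep a 4-deep window
    }
};
```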
D. Rank-Level Buffering
Data movement across the banks is unavoidable, for example for workload balancing after the irregular operators and for aggregating, projecting, and merging two attributes among the banks. To this end, Darwin supports inter-bank, inter-bank group, and inter-chip communication to move data across the banks efficiently. It supports inter-chip communication by integrating a simple circuit in the buffer chip of the LRDIMM. Figure 3 shows that only the instruction decoder, the permute unit, and the DQ aligner are added. When moving data from one chip to another, the rank buffer first receives an instruction indicating a data read from a bank, waiting tCAS to retrieve the data from the chip. The PIM instruction also indicates the number of data reads in nCMD, which generates a sequence of read commands. The rank buffer receives these data in series and stores them in the buffer. Then, the rank buffer receives an instruction indicating a data write to a bank in another chip. The instruction includes the permute index, which enables re-ordering the data in the rank buffer before writing them back to the destination bank. The same process is supported at the chip level and bank group level, enabling inter-bank group and inter-bank communication, respectively. The DQ aligner can reshape the transaction data of the various DRAM types that Darwin supports to be compliant with JEDEC's 64-bit DQ pins for the main memory system. It can convert any form of 64-byte data transaction (e.g., 32-bit width with a burst length of 16) into a 64-bit, burst-length-8 transaction, which matches the width of the conventional main memory system.
VI. SOFTWARE ARCHITECTURE
A. Execution Flow of Darwin
The software stack for Darwin supports execution with multiple threads, as shown in Figure 8. A query operation and the corresponding tuples are evenly partitioned into several threads. The Darwin runtime library receives the query operations of the query plan that are offloaded to Darwin. After receiving the query operations, the Darwin instruction generator first maps the operand data to the memory space in a way that exploits the internal bandwidth the most. Darwin operates within a separate memory region that is distinct from the memory region utilized by the host, allowing it to bypass coherence issues. This designated area is made uncacheable and enables contiguous mapping of virtual memory addresses to contiguous physical memory addresses. By providing the driver with the starting address of each bank and bank group, the host can map data to a continuous memory space. Darwin does not engage in virtualization at a level lower than the DIMM, such as the rank or bank group level, as it would be both impractical and unnecessary. Since it is typical to use multiple DIMMs to serve a database in data analytics, Darwin supports virtualization at the DIMM level with more flexible control and large memory capacity. Then, the instruction generator converts the query operations into the Darwin instruction format, which is shown in Figure 9. The Darwin memory manager manages the memory allocated by the Darwin device driver for the proper memory addresses. When enough instructions are generated to form a CIMT, the CIMT encoder generates a CIMT instruction, which is then sent to the hardware. In the Darwin hardware, a CIMT instruction is divided among the bank groups where each thread is offloaded. Then, each BGPU, including a PIM instruction decoder and a command queue, receives different commands simultaneously. Finally, each thread is computed separately on each PU, exploiting intra-thread and inter-thread data parallelism.
B. Darwin Instruction Format
Figure 9 illustrates the 64-bit PIM instruction format. It contains an opcode to determine the type and an ID option to indicate the thread that the instruction is executed on. Depending on the opcode, the instruction format is divided into three categories: BPU, BGPU, and data movement.
The BPU instruction format and its configurations are shown in Figure 9 (a) and (b), respectively. It supports three different input sources (i.e., memory, row register A, and OID register A) for one input operand of the SIMD unit, while the other operand is fixed to row register B. The permute case is used to control the BPU's permute network. The metadata are used to generate sequential PIM commands using the nCMD, step1, and step2 fields. The nCMD field determines the number of sequential PIM commands, up to 64, generated by the PIM instruction, while step1 and step2 determine the offsets of the column addresses for the first and second input sources, respectively. For example, if the column address, nCMD, step1, and step2 are configured as 0, 4, 1, and 2, respectively, four sequential PIM commands are generated with the column addresses of the two input operands being (0,0), (1,2), (2,4), and (3,6), while the bank and row addresses stay fixed.
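The metadata-driven expansion can be modeled directly; the sketch below (field types are our own) reproduces the worked example's (0,0), (1,2), (2,4), (3,6) sequence:

```cpp
#include <cstdint>
#include <vector>

struct ColumnPair { uint32_t src1_col, src2_col; };

// Expand one PIM instruction's metadata into up to 64 sequential
// column-address pairs; the bank and row addresses stay fixed.
std::vector<ColumnPair> expand(uint32_t base_col, uint32_t nCMD,
                               uint32_t step1, uint32_t step2) {
    std::vector<ColumnPair> cmds;
    for (uint32_t i = 0; i < nCMD; ++i) {
        cmds.push_back({base_col + i * step1, base_col + i * step2});
    }
    return cmds;
}
// expand(0, 4, 1, 2) yields (0,0), (1,2), (2,4), (3,6), matching the
// worked example above.
```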
The BGPU instruction format and its options are shown in Figure 9 (a) and (b). These instructions are mainly for initializing the BGPU. They provide the BGPU with the initial configurations (i.e., input and output OIDs, tuple number, and memory address) needed to generate PIM commands for computing the project and join operators, as well as to allocate addresses for intermediate data. After setting the initial configurations, the start instruction initiates the project and join operators.
The data movement instructions are configured to enable data transfer between the memory and the other levels. However, sending PIM instructions for the data movement occupies the DQ pins and leaves less room for data transfer. The switching overhead between writing PIM instructions and reading data through the DQ pins becomes even worse as more data transfer occurs. To this end, the nRD and step1 options are enabled for the data movement, generating sequential PIM commands in the BGPU and relieving the stress that PIM instructions place on the DQ pins. In addition, the permute index determines the data shuffle order for the permute unit in the rank buffer for inter-chip data movement.
Fig. 11. Darwin's Simulation Framework
C. Data Mapping
Darwin adopts a column-oriented DBMS to maximally exploit data parallelism and DRAM's bandwidth. In a column-oriented DBMS, the attributes are stored separately as array structures to accelerate the analytical query operators, which perform element-wise vector operations on the attributes. In addition, this column-oriented mapping leads to sequential memory access, for which Darwin enjoys the minimum memory access latency. Furthermore, it adopts relation partitioning to process an analytical query in parallel. Figure 10 shows the relation mapping layout of Darwin, assuming a DIMM configured with 4 chips and 1 bank group per chip, with 2 banks per bank group. Since one bank group corresponds to one thread, 4 different threads can be generated. In order to balance the workload across different threads for maximum utilization, the columns of attributes and OIDs are evenly partitioned into 4, the total number of threads, and each partition of an attribute is mapped to the corresponding thread.
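A host-side sketch of this even partitioning (the mapping of a partition to a physical bank group's address range is abstracted to an index):

```cpp
#include <cstdint>
#include <vector>

// Evenly partition one attribute column across n_threads bank-group
// threads; the last partition absorbs any remainder. Each partition is
// then written into the address range of its bank group.
std::vector<std::vector<int32_t>>
partition_column(const std::vector<int32_t>& column, size_t n_threads) {
    std::vector<std::vector<int32_t>> parts(n_threads);
    size_t chunk = column.size() / n_threads;
    for (size_t t = 0; t < n_threads; ++t) {
        size_t begin = t * chunk;
        size_t end = (t + 1 == n_threads) ? column.size() : begin + chunk;
        parts[t].assign(column.begin() + begin, column.begin() + end);
    }
    return parts;
}
```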
VII. METHODOLOGY
A. Benchmarks
We use the TPC-H benchmark [1] with scaling factor 1 to generate a database whose relations include up to 6,001,215 tuples. For the overall performance of query processing, we evaluate TPC-H Query 1, which mainly performs the project and aggregate operators on low-cardinality data, and Query 6, which mainly performs select on high-cardinality data. We further evaluate Darwin on each individual query operator, i.e., select, project, aggregate, sort, and join. For the dataset, we extract 8,388,608 tuples from the lineitem relation of TPC-H at scaling factor 10. In addition, we use the datasets from Balkensen et al. [3] for join, where the two relations, R and S, each have 8,388,608 tuples. Note that, in the join dataset, S holds a foreign key to R, which means that every tuple in S has exactly one match to a tuple in R. The dataset assumes a column-oriented model, and both data and OID pairs are 4B integers.
B. Darwin Simulation Framework
Performance and Functional Simulation We modified the DRAMSim2 [39] simulator to support the computation of the proposed multi-level functions and the CIMT instructions of Darwin, as shown in Figure 11. MonetDB first converts SQL into an optimized query plan. Receiving the query plan and relations as input, the CIMT instruction generator produces a trace file of CIMT instructions for the operation of Darwin. It also receives the memory system information for the Darwin configuration and the DRAM device parameters, so as to properly generate the addresses and data ordering for various DRAM devices. The instruction trace file is then fed to Darwin's performance and functional simulator. As a result, we can obtain Darwin's performance results, such as bandwidth utilization and latency, as well as the overall power and energy consumption of executing the query operators, using the measurement results from the circuit simulator.
Area, Power, and Energy Measurement The area and power are measured using the Synopsys Design Compiler with a 28nm CMOS technology at a 500MHz operating frequency. The power is scaled considering the V_DD difference. We scale up the area by 80% [24], considering the difference between the logic and DRAM process technologies, and then scale it to match the process node. The rank buffer, PIM command scheduler, BGPU, and BPU are synthesized separately, and each logic block is scaled with the number of PUs used in the target DRAM device. The energy consumed by the in-memory PUs and internal data movement is measured by an event-driven method that accumulates the energy consumption per command to obtain the overall result. The average energy consumption per PIM command of the BPU and BGPU is measured using the PrimePower tool. The power for internal data movement is scaled and modeled from the fine-grained DRAM [37]. The energy parameters are integrated into the performance simulator to measure the total energy consumption.
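The event-driven accounting amounts to multiplying per-command energies by executed command counts. A sketch using the per-command BPU energies listed later in Table I (the dictionary layout and function name are ours):

```python
# Per-command BPU energies from Table I, in pJ.
BPU_ENERGY_PJ = {"add_sub": 145.6, "multiply": 139.4, "sort": 193.2, "select": 41.2}

def total_bpu_energy_uj(command_counts):
    """Accumulate energy per executed PIM command (event-driven method),
    returning the total in microjoules."""
    pj = sum(BPU_ENERGY_PJ[cmd] * n for cmd, n in command_counts.items())
    return pj * 1e-6  # 1 uJ = 1e6 pJ

print(total_bpu_energy_uj({"select": 1_000_000}))  # 41.2 uJ
```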
C. System Configuration
Hardware Baseline The TPC-H queries and basic operators are evaluated on four different state-of-the-art architectures: the baseline CPU, Mondrian [12], Newton [16], and Darwin. For the evaluation of the PIM architectures, we use the latest GDDR6 configuration for maximum speedup, as previous research [28], [29] has shown the feasibility of GDDR6-based Darwin in the main memory system for AI applications. The DDR4 configuration is also used for the performance comparison against GDDR6-based Darwin.
CPU The baseline CPU is an Intel(R) Xeon(R) Gold 6226R CPU with 512 GB of four channels of DDR4-2933 with a peak bandwidth of 93.84 GB/s. We measure the runtime of the TPC-H queries using MonetDB with 64 threads and exclude the query optimization step. For comparison on the basic operators, we implement each operator in C/C++ using the Pthread library [11] to maximize the computation throughput with multiple threads. The sort function in the C++ standard library and a hash-join algorithm are evaluated.
Previous PIM architectures We evaluated the performance of Newton and Mondrian, which are representative bank- and rank-level SLPIM architectures, respectively. Newton places PUs at the bank level, while Mondrian integrates PUs in the logic die of an HMC. In order to fairly compare Darwin with Newton and Mondrian, we implemented Newton and Mondrian using the same simulation framework as Darwin. We matched the configurations of the hardware components of Newton and Mondrian with Darwin's to show the benefit of the multi-level architecture and CIMT. Both Newton and Mondrian are implemented with the same configurations, such as memory type and PU frequency and configuration, as Darwin. Newton's PU microarchitecture is not dedicated to data analytics, so we replace Newton's PU with the BPU and BGPU at the bank level for fairness. Since Mondrian is a SIMD-based architecture, we matched its SIMD width to Darwin's.
Darwin Table I summarizes the DRAM parameters used for Darwin. We follow typical GDDR6 and DDR4 settings. The GDDR6 has two pseudo-channels (PC) in a package (i.e., chip), where each PC has 16 DQ pins with a burst length of 16. Thus, a total of 64 bytes of data can be transferred per read or write command. It allows only one package to constitute a rank. To fairly compare the performance of the database operators on real CPU hardware and on the trace-based simulator, we only compare runtime. The execution time on the baseline CPU is evaluated without query pre-processing steps, such as query plan generation and optimization. For the trace-based simulation, the execution time is evaluated by using a trace file with pre-generated CIMT instructions. In addition, as the simulator always runs optimally, we ensure that the baseline CPU can also run optimally by choosing the thread count with the best performance for each operator.
VIII. EXPERIMENTAL RESULTS
In this section, we evaluate the benefits of Darwin for various analytics operators and queries. We first compare the performance of Darwin against the state-of-the-art PIM architectures and the baseline CPU, and then give a detailed analysis of its performance gains from each optimization, bank group-level unit placement, the performance comparison over DDR4, and scalability in bank number. Finally, we report Darwin's physical implementation results and its overhead in GDDR6.
A. Darwin Performance
Comparison to CPU Figure 12 shows the speedup of Darwin over the baseline CPU for the basic query operators. Compared to the baseline CPU, Darwin is 9.3x, 9.0x, 17.8x, 43.9x, and 4.0x faster in select, aggregate, sort, project, and join, respectively. The speedup comes from the MLPIM architecture of Darwin, which exploits internal parallelism and optimized data movement. The select, aggregate, and sort operators are executed by BPUs utilizing bank-level parallelism, while project and join are executed by BGPUs utilizing bank group-level parallelism. We further evaluate TPC-H queries for end-to-end query processing. Darwin is 5.4x and 13.5x faster than the baseline CPU in Query 1 and Query 6, respectively.
Comparison to previous PIM architectures Mondrian and Newton are evaluated as shown in Figure 12. For both the basic query operators and the TPC-H queries, the two SLPIM architectures show significantly less speedup than Darwin. On average, Newton and Mondrian achieve 9.2x and 4.6x higher throughput than the baseline CPU, respectively, while Darwin achieves a much higher throughput of 15.3x. Mondrian shows the least speedup due to the limited bandwidth gain from integrating logic much farther from the cells. On the other hand, Newton shows no degradation on the select and aggregate operators compared to Darwin, since these operators can easily be accelerated with Newton's all bank mode, where all PUs execute identical operations. However, sort, project, and join are not sped up simply by having in-memory PUs; they require further optimization of the dataflow with CIMT and multi-level data movement. The performance of Newton and Mondrian is further degraded on the end-to-end TPC-H queries due to the large number of project operators on intermediate data, limiting their gains to only 1.6x and 1.7x, respectively, while Darwin achieves 9.5x higher throughput than the baseline CPU.
B. Effect of Optimizations
To show the effect of each optimization scheme, we evaluate the speedup when each scheme is applied to the non-optimized Darwin (No-opt), as shown in Figure 13. More specifically, No-opt has BPUs that are not optimized for the bitmask register, sort, and CIMT instructions, and has the BGPU placed at the rank level. Based on this, we gradually add each optimization: 1) all bank mode (Opt-ABM), 2) BGPU placed at the bank group level (Opt-BG), 3) CIMT (Opt-CIMT), 4) bitmask register (Opt-BM), 5) circuit optimizations for sort (Opt-sort), and 6) OID processing engine (Opt-OID). Although No-opt has a BPU in each bank exploiting the internal bandwidth, Darwin shows only marginal speedup without any optimizations: 1.2x, 2.1x, 1.2x, 1.4x, and 0.3x for each operator over the baseline CPU. The performance of the join operator is degraded significantly because the command bottleneck deters the BPUs in the sort phase and the narrow rank bandwidth deters the BGPUs in the merge phase. Each of the schemes significantly improves the performance of Darwin. The CIMT instruction, which applies to all operators, shows the most speedup among the optimizations by significantly reducing the command bandwidth requirement. The optimizations accumulate, and Darwin with all optimizations applied achieves 9.2x, 5.2x, 14.7x, 3.3x, and 5.8x speedup for each operator compared to the No-opt version, corresponding to 11.2x, 10.9x, 18.1x, 5.6x, and 2.0x higher performance than the baseline CPU.
Optimization on BPU The incremental effects of the all bank mode and the PIM instruction on the BPU are negligible, since the BPU receives the same instruction for each of the regular operators to leverage data parallelism. The BPU is instead optimized to reduce computation for further speedup. First, a bitmask is used instead of OIDs for the select operator, reducing the memory footprint of the output by 32x. In addition, the throughput of select with the bitmask (Opt-BM) increases by 11.2x over No-opt due to the reduced number of output writes. Second, the compute circuit for sort is optimized to reduce computation overhead. The optimizations on the sort logic (i.e., the SIMD and permute units, Opt-sort) and the OID processing engine (Opt-OID) reduce the number of compute commands for the bitonic sort by 4x and 2x, respectively. Opt-sort and Opt-OID achieve 15.5x and 18.1x higher throughput than No-opt, respectively.
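For reference, here is a plain software version of the bitonic sorting network [4] that the sort operator builds on. This is the textbook algorithm, not Darwin's optimized SIMD/permute dataflow; each (k, j) pass is a rank of compare-exchange operations of the kind the BPU executes.

```python
def bitonic_sort(a):
    """In-place bitonic sort of a list whose length is a power of two."""
    n = len(a)
    k = 2
    while k <= n:
        j = k // 2
        while j > 0:
            for i in range(n):
                partner = i ^ j  # compare-exchange partner at distance j
                if partner > i:
                    ascending = (i & k) == 0
                    if (a[i] > a[partner]) == ascending:
                        a[i], a[partner] = a[partner], a[i]
            j //= 2
        k *= 2
    return a

print(bitonic_sort([7, 3, 6, 2, 5, 1, 8, 4]))  # [1, 2, 3, 4, 5, 6, 7, 8]
```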
Benefit of PIM Instruction In data analytics, the command bandwidth is critical, as performance can be bounded by the command bottleneck. For the select, aggregate, and sort operators, where the BPU can operate with massive data parallelism, the conventional DRAM command scheme cannot provide any speedup to Darwin since each bank receives its commands separately. Both the all bank mode and the PIM instruction are beneficial, since all BPUs can receive the same command due to the data parallelism. However, in the project and join operators, simply moving the BGPU from the rank level (Opt-ABM) to the bank group level (Opt-BG) does not provide any speedup with the all bank mode: each BGPU at the bank group level requires different commands, so the all bank mode cannot leverage the benefit of bank group-level processing. The CIMT instruction, which issues a different command thread to each bank group, can effectively leverage the benefit of Darwin. As a result, the CIMT instruction achieves 2.3x and 3.6x speedup in the project and join operations, respectively.
C. Internal Data Movement of MLPIM
To see the benefit of exploiting the BGPU at the bank group level over the bank and rank levels, we further evaluate the performance of the project units depending on their location. Figure 14 shows the evaluation results of the BGPU at the different levels performing the project operator and workload balancing on eight different data distributions (i.e., normal, gamma, chi-squared, and uniform distributions). The project operator is performed at each memory level, and additional data movement is applied to evenly distribute the output data, which will later be fed as input to the following operator. The latency reduces as the placement of the BGPU goes lower toward the bank level, but the data movement latency increases since the project operation is only executed within the level. In the uniform distribution, there is no additional data movement overhead since the workload is perfectly balanced. Therefore, only the latency for the execution of the project operator remains, achieving the highest speedup at the bank level. However, the uniform distribution is an ideal case that is unlikely to occur. Thus, with the normal, gamma, and chi-squared distributions, the BGPU at the bank group level shows the highest speedup, averaging 4.0x over the rank level.

D. Comparison over DDR4

Figure 15 (a) shows the normalized throughput of DDR4-based Darwin (Darwin-DDR4) and GDDR6-based Darwin (Darwin-GDDR6). The memory configurations are shown in Table I. A single chip of GDDR6 is compared with four chips of DDR4, forming a rank, to match the number of banks, BPUs, and BGPUs between them. The capacity of Darwin-DDR4 is 4 times larger than that of Darwin-GDDR6. However, Darwin-GDDR6 achieves up to 3.0x higher throughput than Darwin-DDR4, since GDDR6 provides about two times faster column access and two times wider I/O bit width than DDR4.

E. Darwin Scalability

Figure 15 (b) shows the speedup of Darwin for the basic query operators as we vary its memory configuration among 1 PC with 4 banks, 1 PC with 16 banks, and 2 PCs with 16 banks. The internal bandwidth and the number of PUs increase linearly as the number of banks increases. However, the speedup dampens as the number of banks increases due to the increased amount of internal data movement. The average speedup of Darwin when the bank number increases from 4 to 16 with 1 PC is 3.2x. On the other hand, the average speedup when increasing the PCs from 1 to 2 with 16 banks is 1.6x.
F. Area, Frequency, Power, and Energy
The areas of the BPU, BGPU, and rank buffer are measured to be 0.104mm², 0.043mm², and 0.078mm², respectively. Once scaled and summed, the total area overhead is 3.752mm², which is only 5.6% of the GDDR6 die area [23].
The energy consumption is evaluated as shown in Figure 16. For the ideal CPU, only the energy of the data movement between the CPU and the memory is counted. Since we assume the CPU is ideal, with unlimited computation capacity and speed, it does not incur any delay or energy for operator execution. This guarantees that the peak bandwidth is utilized and that there are no redundant reads or writes to the same data address. The background energy is increased in Darwin due to its longer execution time than the ideal case; however, due to the reduced off-chip movement, the overall energy drops. As a result, the reduced I/O movement enables significant energy savings on all operators. Darwin reduces its energy consumption by 45.4%, 10.2%, 67.4%, 47.1%, and 84.7% in join, sort, project, aggregate, and select, respectively.
IX. RELATED WORK
PIM and NMP Newton, HBM-PIM, TransPIM, McDRAM, Ambit, and SIMDRAM [15], [16], [30], [41], [43], [51] support regular or non-condition-oriented workloads, avoiding data-dependent dataflow, by accelerating memory-bound vector operations that exploit the internal parallelism of DRAM. Other works [19], [25], [38] accelerate recommendation systems, where gather-and-scatter operations are the main target.
Accelerating Data Analytics The on-chip accelerator Q100 [47] exploits pipelining in query processing with heterogeneous processing cores to minimize memory accesses. Mondrian and Polynesia [7], [12] integrate circuits in the logic die of 3D-stacked memory, which is much farther from the DRAM cells and loses internal bandwidth. To the best of our knowledge, Darwin is the first proposal that accelerates data analytics operators while reducing overhead by reusing the hierarchical structure of the main memory system.
X. DISCUSSION
Applying Darwin to other workloads Although Darwin is designed to target data analytics, it can also accelerate other workloads that have a large memory footprint, are memory bound, and especially have wide data dependencies. Deep neural network (DNN) workloads, such as transformers and LSTMs, are applicable to Darwin. Their memory-bound algebra operations, such as matrix-vector and matrix-matrix multiplication, can easily be accelerated with the BPU's SIMD unit. Furthermore, the multi-level characteristic of Darwin can accelerate the frequent output and partial-sum data movement of these workloads, which is incurred by data partitioning among the memory nodes due to the large weights. In particular, Darwin is applicable to sparse matrix multiplication, using its efficient in-memory network to gather non-zero elements and performing the multiplication in the BPU. In addition, Darwin can accelerate recommendation systems, which include gather-and-reduction and fully-connected workloads: Darwin can gather data from a wide range of the memory space in parallel using the BGPU, while fully-connected layers are accelerated using the BPU's SIMD unit.
XI. CONCLUSION
We propose Darwin, a practical LRDIMM-based multi-level PIM architecture for data analytics. We addressed the issues in adopting PIM for data analytics through three contributions. First, Darwin reduces the overhead of integrating additional logic in DRAM by reusing the conventional DRAM architecture, fully exploiting the multiple levels of DRAM while maximizing the internal bandwidth. Second, Darwin places the BGPU to mitigate the data movement overhead and to balance load across the banks while performing the condition-oriented, memory-bound data analytics operators. Third, the CIMT instruction is adopted to address the command bottleneck, enabling separate control of multiple PUs simultaneously. The simulation results on the five major data analytics operators (select, aggregate, sort, project, and join) show that GDDR6-based Darwin achieves up to 43.9x speedup over the baseline CPU. Darwin is more energy-efficient than the ideal CPU system by 85.7%, while the additional area overhead is only 5.6%.
Fig. 2. (a) Latency Breakdown of Data Analytics Operators (b) Single-Level PIM (c) Multi-Level PIM.
Fig. 3. Multi-level Architecture of Darwin.
Fig. 5. Darwin's PIM Operation. (a) Regular Operation (b) Irregular Operation
Fig. 8. Darwin Software Stack
Fig. 9. Darwin's Instruction Format. (a) PIM (b) Configuration
Fig. 10. Data Mapping
Fig. 12. Performance Comparison over Baselines
Fig. 13. Evaluation on Optimization
Fig. 14. Evaluation on BGPU Placement
Fig. 15. Darwin Scalability. (a) DDR4 Comparison (b) Memory Configuration
TABLE I
DARWIN PARAMETERS

                    DDR4        GDDR6
Chip                4           1
Bank Group          2           4
Bank                4           4
Row                 65536       16384
Column I/O          128         64
I/O Bit Width       128 bits    256 bits
tCK                 0.63 ns     0.57 ns
tFAW                16 ns       32 ns
tRAS                52 ns       54 ns
tRCD                22 ns       24 ns
tRRDS               4 ns        9 ns
tRRDL               8 ns        9 ns
tCCDS               4 ns        2 ns
tCCDL               8 ns        4 ns

Darwin BPU Logic:  Add/Sub 145.6 pJ/cmd; Multiply 139.4 pJ/cmd; Sort 193.2 pJ/cmd; Select 41.2 pJ/cmd; Frequency 500 MHz
Darwin BGPU Logic: Project 0.025 mW; Join 0.0233 mW; Frequency 500 MHz
Fig. 16. Evaluation on Relative Energy Consumption
REFERENCES

[1] TPC-H Benchmark. [Online]. Available: http://www.tpc.org/tpch/
[2] M. Armbrust, R. S. Xin, C. Lian, Y. Huai, D. Liu, J. K. Bradley, X. Meng, T. Kaftan, M. J. Franklin, A. Ghodsi, and M. Zaharia, "Spark SQL: Relational data processing in Spark," in Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data (SIGMOD '15). New York, NY, USA: Association for Computing Machinery, 2015, pp. 1383-1394. [Online]. Available: https://doi.org/10.1145/2723372.2742797
[3] C. Balkesen, G. Alonso, J. Teubner, and M. T. Özsu, "Multi-core, main-memory joins: Sort vs. hash revisited," Proceedings of the VLDB Endowment, vol. 7, no. 1, pp. 85-96, 2013.
[4] K. E. Batcher, "Sorting networks and their applications," in Proceedings of the April 30-May 2, 1968, Spring Joint Computer Conference, 1968, pp. 307-314.
[5] P. A. Boncz, M. Zukowski, and N. Nes, "MonetDB/X100: Hyper-pipelining query execution," in CIDR, vol. 5. Citeseer, 2005, pp. 225-237.
[6] A. Boroumand, S. Ghose, Y. Kim, R. Ausavarungnirun, E. Shiu, R. Thakur, D. Kim, A. Kuusela, A. Knies, P. Ranganathan, and O. Mutlu, "Google workloads for consumer devices: Mitigating data movement bottlenecks," SIGPLAN Not., vol. 53, no. 2, pp. 316-331, Mar. 2018. [Online]. Available: https://doi.org/10.1145/3296957.3173177
[7] A. Boroumand, S. Ghose, G. F. Oliveira, and O. Mutlu, "Polynesia: Enabling effective hybrid transactional/analytical databases with specialized hardware/software co-design," arXiv preprint arXiv:2103.00798, 2021.
[8] S. Chaudhuri and U. Dayal, "An overview of data warehousing and OLAP technology," ACM SIGMOD Record, vol. 26, no. 1, pp. 65-74, 1997.
[9] J. Chhugani, A. D. Nguyen, V. W. Lee, W. Macy, M. Hagog, Y.-K. Chen, A. Baransi, S. Kumar, and P. Dubey, "Efficient implementation of sorting on multi-core SIMD CPU architecture," Proceedings of the VLDB Endowment, vol. 1, no. 2, pp. 1313-1324, 2008.
[10] E. F. Codd, The Relational Model for Database Management: Version 2. Addison-Wesley Longman Publishing Co., Inc., 1990.
[11] U. Drepper and I. Molnar, "The native POSIX thread library for Linux," White Paper, Red Hat Inc, vol. 10, no. 2, pp. 22-42, 2003.
[12] M. Drumond, A. Daglis, N. Mirzadeh, D. Ustiugov, J. Picorel, B. Falsafi, B. Grot, and D. Pnevmatikatos, "The Mondrian data engine," ACM SIGARCH Computer Architecture News, vol. 45, no. 2, pp. 639-651, 2017.
[13] F. Gao, G. Tziantzioulis, and D. Wentzlaff, "ComputeDRAM: In-memory compute using off-the-shelf DRAMs," in Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture, 2019, pp. 100-113.
[14] P. Gu, X. Xie, Y. Ding, G. Chen, W. Zhang, D. Niu, and Y. Xie, "iPIM: Programmable in-memory image processing accelerator using near-bank architecture," in 2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA). IEEE, 2020, pp. 804-817.
[15] N. Hajinazar, G. F. Oliveira, S. Gregorio, J. D. Ferreira, N. M. Ghiasi, M. Patel, M. Alser, S. Ghose, J. Gómez-Luna, and O. Mutlu, "SIMDRAM: A framework for bit-serial SIMD processing using DRAM," in Proceedings of the 26th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, 2021, pp. 329-345.
[16] M. He, C. Song, I. Kim, C. Jeong, S. Kim, I. Park, M. Thottethodi, and T. Vijaykumar, "Newton: A DRAM-maker's accelerator-in-memory (AiM) architecture for machine learning," in 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO). IEEE, 2020, pp. 372-385.
[17] K.-D. Hwang, B. Kim, S.-Y. Byeon, K.-Y. Kim, D.-H. Kwon, H.-B. Lee, G.-I. Lee, S.-S. Yoon, J.-Y. Cha, S.-Y. Jang, S.-H. Lee, Y.-S. Joo, G.-S. Lee, S.-S. Xi, S.-B. Lim, K.-H. Chu, J.-H. Cho, J. Chun, J. Oh, J. Kim, and S.-H. Lee, "A 16Gb/s/pin 8Gb GDDR6 DRAM with bandwidth extension techniques for high-speed applications," in 2018 IEEE International Solid-State Circuits Conference (ISSCC), 2018, pp. 210-212.
[18] S. Idreos, F. Groffen, N. Nes, S. Manegold, S. Mullender, and M. Kersten, "MonetDB: Two decades of research in column-oriented database," IEEE Data Engineering Bulletin, 2012.
[19] L. Ke, U. Gupta, B. Y. Cho, D. Brooks, V. Chandra, U. Diril, A. Firoozshahian, K. Hazelwood, B. Jia, H.-H. S. Lee, M. Li, B. Maher, D. Mudigere, M. Naumov, M. Schatz, M. Smelyanskiy, X. Wang, B. Reagen, C.-J. Wu, M. Hempstead, and X. Zhang, "RecNMP: Accelerating personalized recommendation with near-memory processing," in 2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA), 2020, pp. 790-803.
[20] T. R. Kepe, E. C. de Almeida, and M. A. Alves, "Database processing-in-memory: An experimental study," Proceedings of the VLDB Endowment, vol. 13, no. 3, pp. 334-347, 2019.
[21] D. Kim, C. Yu, S. Xie, Y. Chen, J.-Y. Kim, B. Kim, J. Kulkarni, and T. T.-H. Kim, "An overview of processing-in-memory circuits for artificial intelligence and machine learning," IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2022.
[22] H. Kim, H. Park, T. Kim, K. Cho, E. Lee, S. Ryu, H.-J. Lee, K. Choi, and J. Lee, "GradPIM: A practical processing-in-DRAM architecture for gradient descent," in 2021 IEEE International Symposium on High-Performance Computer Architecture (HPCA). IEEE, 2021, pp. 249-262.
[23] Y.-J. Kim, H.-J. Kwon, S.-Y. Doo, M. Ahn, Y.-H. Kim, Y.-J. Lee, D.-S. Kang, S.-G. Do, C.-Y. Lee, G.-H. Cho, J.-K. Park, J.-S. Kim, K. Park, S. Oh, S.-Y. Lee, J.-H. Yu, K. Yu, C. Jeon, S.-S. Kim, H.-S. Park, J.-W. Lee, S.-H. Cho, K.-W. Park, Y. Kim, Y.-H. Seo, C.-H. Shin, C.-Y. Lee, S.-Y. Bang, Y. Park, S.-K. Choi, B.-C. Kim, G.-H. Han, S.-J. Bae, H.-J. Kwon, J.-H. Choi, Y.-S. Sohn, K.-I. Park, S.-J. Jang, and G. Jin, "A 16-Gb, 18-Gb/s/pin GDDR6 DRAM with per-bit trainable single-ended DFE and PLL-less clocking," IEEE Journal of Solid-State Circuits, vol. 54, no. 1, pp. 197-209, 2019.
[24] Y.-B. Kim and T. W. Chen, "Assessing merged DRAM/logic technology," Integration, the VLSI Journal, vol. 2, no. 27, pp. 179-194, 1999.
[25] Y. Kwon, Y. Lee, and M. Rhu, "TensorDIMM: A practical near-memory processing architecture for embeddings and tensor operations in deep learning," in Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture, 2019, pp. 740-753.
[26] T. Lahiri, M.-A. Neimat, and S. Folkman, "Oracle TimesTen: An in-memory database for enterprise applications," IEEE Data Eng. Bull., vol. 36, no. 2, pp. 6-13, 2013.
[27] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, pp. 436-444, 2015.
[28] H.-W. Lee, J. Song, S.-A. Hyun, S. Baek, Y. Lim, J. Lee, M. Park, H. Choi, C. Choi, J. Cha, J. Kim, H. Choi, S. Kwack, Y. Kang, J. Kim, J. Park, J. Kim, J. Cho, C. Kim, Y. Kim, J. Lee, B. Chung, and S. Hong, "25.3 A 1.35V 5.0Gb/s/pin GDDR5M with 5.4mW standby power and an error-adaptive duty-cycle corrector," in 2014 IEEE International Solid-State Circuits Conference Digest of Technical Papers (ISSCC), 2014, pp. 434-435.
[29] S. Lee, K. Kim, S. Oh, J. Park, G. Hong, D. Ka, K. Hwang, J. Park, K. Kang, J. Kim, J. Jeon, N. Kim, Y. Kwon, K. Vladimir, W. Shin, J. Won, M. Lee, H. Joo, H. Choi, J. Lee, D. Ko, Y. Jun, K. Cho, I. Kim, C. Song, C. Jeong, D. Kwon, J. Jang, I. Park, J. Chun, and J. Cho, "A 1ynm 1.25V 8Gb, 16Gb/s/pin GDDR6-based accelerator-in-memory supporting 1TFLOPS MAC operation and various activation functions for deep-learning applications," in 2022 IEEE International Solid-State Circuits Conference (ISSCC), vol. 65, 2022, pp. 1-3.
[30] S. Lee, S.-h. Kang, J. Lee, H. Kim, E. Lee, S. Seo, H. Yoon, S. Lee, K. Lim, H. Shin, J. Kim, O. Seongil, A. Iyer, D. Wang, K. Sohn, and N. S. Kim, "Hardware architecture and software stack for PIM based on commercial DRAM technology: Industrial product," in 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA), 2021, pp. 43-56.
[31] M. Lenjani, A. Ahmed, M. Stan, and K. Skadron, "Gearbox: A case for supporting accumulation dispatching and hybrid partitioning in PIM-based accelerators," in Proceedings of the 49th Annual International Symposium on Computer Architecture, 2022, pp. 218-230.
[32] M. Lenjani, P. Gonzalez, E. Sadredini, S. Li, Y. Xie, A. Akel, S. Eilert, M. R. Stan, and K. Skadron, "Fulcrum: A simplified control and access mechanism toward flexible and practical in-situ accelerators," in 2020 IEEE International Symposium on High Performance Computer Architecture (HPCA). IEEE, 2020, pp. 556-569.
[33] J. Li, H.-W. Tseng, C. Lin, Y. Papakonstantinou, and S. Swanson, "HippogriffDB: Balancing I/O and GPU bandwidth in big data analytics," Proceedings of the VLDB Endowment, vol. 9, no. 14, pp. 1647-1658, 2016.
[34] S. Li, D. Niu, K. T. Malladi, H. Zheng, B. Brennan, and Y. Xie, "DRISA: A DRAM-based reconfigurable in-situ accelerator," in 2017 50th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO). IEEE, 2017, pp. 288-301.
[35] J. Lindström, V. Raatikka, J. Ruuth, P. Soini, and K. Vakkila, "IBM solidDB: In-memory database optimized for extreme speed and availability," IEEE Data Eng. Bull., vol. 36, no. 2, pp. 14-20, 2013.
[36] M. Owaida, D. Sidler, K. Kara, and G. Alonso, "Centaur: A framework for hybrid CPU-FPGA databases," in 2017 IEEE 25th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM). IEEE, 2017, pp. 211-218.
[37] M. O'Connor, N. Chatterjee, D. Lee, J. Wilson, A. Agrawal, S. W. Keckler, and W. J. Dally, "Fine-grained DRAM: Energy-efficient DRAM for extreme bandwidth systems," in 2017 50th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO). IEEE, 2017, pp. 41-54.
[38] J. Park, B. Kim, S. Yun, E. Lee, M. Rhu, and J. H. Ahn, "TRiM: Enhancing processor-memory interfaces with scalable tensor reduction in memory," in MICRO-54: 54th Annual IEEE/ACM International Symposium on Microarchitecture, 2021, pp. 268-281.
[39] P. Rosenfeld, E. Cooper-Balis, and B. Jacob, "DRAMSim2: A cycle accurate memory system simulator," IEEE Computer Architecture Letters, vol. 10, no. 1, pp. 16-19, 2011.
[40] V. Seshadri, Y. Kim, C. Fallin, D. Lee, R. Ausavarungnirun, G. Pekhimenko, Y. Luo, O. Mutlu, P. B. Gibbons, M. A. Kozuch, and T. C. Mowry, "RowClone: Fast and energy-efficient in-DRAM bulk data copy and initialization," in Proceedings of the 46th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO-46). New York, NY, USA: Association for Computing Machinery, 2013, pp. 185-197. [Online]. Available: https://doi.org/10.1145/2540708.2540725
[41] V. Seshadri, D. Lee, T. Mullins, H. Hassan, A. Boroumand, J. Kim, M. A. Kozuch, O. Mutlu, P. B. Gibbons, and T. C. Mowry, "Ambit: In-memory accelerator for bulk bitwise operations using commodity DRAM technology," in 2017 50th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO). IEEE, 2017, pp. 273-287.
[42] A. Shanbhag, S. Madden, and X. Yu, "A study of the fundamental performance characteristics of GPUs and CPUs for database analytics," in Proceedings of the 2020 ACM SIGMOD International Conference on Management of Data, 2020, pp. 1617-1632.
[43] H. Shin, D. Kim, E. Park, S. Park, Y. Park, and S. Yoo, "McDRAM: Low latency and energy-efficient matrix computations in DRAM," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 37, no. 11, pp. 2613-2622, 2018.
[44] M. Stonebraker and A. Weisberg, "The VoltDB main memory DBMS," IEEE Data Eng. Bull., vol. 36, no. 2, pp. 21-27, 2013.
[45] M. Stonebraker, D. J. Abadi, A. Batkin, X. Chen, M. Cherniack, M. Ferreira, E. Lau, A. Lin, S. Madden, E. O'Neil, P. O'Neil, A. Rasin, N. Tran, and S. Zdonik, C-Store: A Column-Oriented DBMS. Association for Computing Machinery and Morgan & Claypool, 2018, pp. 491-518. [Online]. Available: https://doi.org/10.1145/3226595.3226638
[46] S. Watanabe, K. Fujimoto, Y. Saeki, Y. Fujikawa, and H. Yoshino, "Column-oriented database acceleration using FPGAs," in 2019 IEEE 35th International Conference on Data Engineering (ICDE). IEEE, 2019, pp. 686-697.
[47] L. Wu, A. Lottarini, T. K. Paine, M. A. Kim, and K. A. Ross, "Q100: The architecture and design of a database processing unit," ACM SIGARCH Computer Architecture News, vol. 42, no. 1, pp. 255-268, 2014.
[48] X. Xie, Z. Liang, P. Gu, A. Basak, L. Deng, L. Liang, X. Hu, and Y. Xie, "SpaceA: Sparse matrix vector multiplication on processing-in-memory accelerator," in 2021 IEEE International Symposium on High-Performance Computer Architecture (HPCA). IEEE, 2021, pp. 570-583.
[49] X. Xin, Y. Zhang, and J. Yang, "ELP2IM: Efficient and low power bitwise operation processing in DRAM," in 2020 IEEE International Symposium on High Performance Computer Architecture (HPCA). IEEE, 2020, pp. 303-314.
[50] S. Xu, T. Bourgeat, T. Huang, H. Kim, S. Lee, and A. Arvind, "AQUOMAN: An analytic-query offloading machine," in 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO). IEEE, 2020, pp. 386-399.
[51] M. Zhou, W. Xu, J. Kang, and T. Rosing, "TransPIM: A memory-based acceleration via software-hardware co-design for transformer," in 2022 IEEE International Symposium on High-Performance Computer Architecture (HPCA). IEEE, 2022, pp. 1071-1085.
| [] |
[
"Covariant Magnetoionic Theory II: Radiative Transfer",
"Covariant Magnetoionic Theory II: Radiative Transfer"
] | [
"Avery Broderick \n130-33, 91125Caltech, PasadenaMC, CA\n",
"Roger Blandford \n130-33, 91125Caltech, PasadenaMC, CA\n\nKIPAC\nSLAC\n2575 Sand Hill Road94025Menlo ParkCA\n"
] | [
"130-33, 91125Caltech, PasadenaMC, CA",
"130-33, 91125Caltech, PasadenaMC, CA",
"KIPAC\nSLAC\n2575 Sand Hill Road94025Menlo ParkCA"
] | [] | Accretion onto compact objects plays a central role in high energy astrophysics. In these environments, both general relativistic and plasma effects may have significant impacts upon the spectral and polarimetric properties of the accretion flow. In paper I we presented a fully general relativistic magnetoionic theory, capable of tracing rays in the geometric optics approximation through a magnetised plasma in the vicinity of a compact object. In this paper we discuss how to perform polarised radiative transfer along these rays. In addition we apply the formalism to a barotropic thick disk model, appropriate for low luminosity active galactic nuclei. We find that it is possible to generate large fractional polarisations over the innermost portions of the accretion flow, even when the emission mechanism is unpolarised. This has implications for accreting systems ranging from pulsars and X-ray binaries to AGN. | 10.1111/j.1365-2966.2004.07582.x | [
"https://arxiv.org/pdf/astro-ph/0311360v1.pdf"
] | 15,629,936 | astro-ph/0311360 | c74e14f23dcad32a02d5e3bca44983d0a5f06374 |
Covariant Magnetoionic Theory II: Radiative Transfer
15 Nov 2003
Avery Broderick
130-33, 91125Caltech, PasadenaMC, CA
Roger Blandford
130-33, 91125Caltech, PasadenaMC, CA
KIPAC
SLAC
2575 Sand Hill Road94025Menlo ParkCA
Covariant Magnetoionic Theory II: Radiative Transfer
Mon. Not. R. Astron. Soc. 000, 000-000 (0000) Printed 26 June 2018 (MN LaTeX style file v2.2)
Key words: black hole physics - magnetic fields - plasmas - polarisation - Radiative Transfer
Accretion onto compact objects plays a central role in high energy astrophysics. In these environments, both general relativistic and plasma effects may have significant impacts upon the spectral and polarimetric properties of the accretion flow. In paper I we presented a fully general relativistic magnetoionic theory, capable of tracing rays in the geometric optics approximation through a magnetised plasma in the vicinity of a compact object. In this paper we discuss how to perform polarised radiative transfer along these rays. In addition we apply the formalism to a barotropic thick disk model, appropriate for low luminosity active galactic nuclei. We find that it is possible to generate large fractional polarisations over the innermost portions of the accretion flow, even when the emission mechanism is unpolarised. This has implications for accreting systems ranging from pulsars and X-ray binaries to AGN.
INTRODUCTION
The spectral and polarimetric properties of astrophysical objects can provide significant insights into their structure and dynamics. As a result, a number of theoretical investigations into the source of these properties have been undertaken. Many of these have been primarily concerned with the spectral properties alone, typically comparing a physically motivated accretion flow to observations. However, with the measurement of polarisation in a number of sources, a significant fraction of the focus has been turned towards reproducing their polarimetric properties. In the context of an accreting compact object, both general relativistic and plasma effects can play a role in determining these properties. In Broderick & Blandford 2003 (hereafter Paper I) we demonstrated how to construct ray trajectories, in the geometric optics approximation, in a magnetoactive plasma in a relativistic environment. In order to apply this to realistic accretion environments it is necessary to be able to perform radiative transfer along these rays.
Non-refractive, polarised radiative transfer through magnetised plasmas in flat space has been extensively studied. A number of examples involving weak magnetic fields exist in the literature (see e.g. Sazonov & Tsytovich 1968; Sazonov 1969; Jones & O'Dell 1977a,b; Ginzburg 1970). More recently, investigations into the net effects of tangled magnetic fields (expected to be typical in magnetised accretion flows) have begun (see e.g. Ruszkowski & Begelman 2002). However, none of these deal with general relativistic environments.
The importance of refraction in the propagation of radio wavelengths has long been appreciated in the context of the ionosphere (see e.g. Budden 1964; Ginzburg 1970). More recently, refraction has been studied in conjunction with pulsars (see e.g. Weltevrede et al. 2003; Petrova 2002, 2000). Nonetheless, in all of these cases, the emission was assumed to originate from a region distinct from where the refraction occurred. Refractive lensing of neutron stars was considered by Shaviv et al. 1999, but ignored general relativistic effects.
General relativistic studies into the propagation of polarisation in vacuum have been done. These have been primarily interested in the geometrical effects due to the parallel transport of the linear polarisation (see e.g. Agol 1997;Laor et al. 1990;Connors et al. 1980). Alternatively, in Bromley et al. 2001, polarised emission in a general relativistic environment is considered. However, none of the typical plasma transfer effects (e.g. Faraday rotation) were included in these calculations. In Heyl et al. 2003, the vacuum birefringence due to strong magnetic fields was considered in the context of neutron star atmospheres. However, in both, refraction was completely ignored. There have been some attempts to study the problem of ray propagation in a covariant form (see e.g. Broderick & Blandford 2003;Melrose & Gedalin 2001), but in these the radiative transfer was not addressed.
As discussed in Paper I, refraction coupled with the presence of a horizon can be a source of significant polarisation when the observation frequency is near the plasma and cyclotron frequencies of the emitting region. The sense of the resulting net polarisation is determined by the plasma parameters at the surface at which the polarisation freezes out (when the modes cease to be adiabatic and must be treated as if they were in vacuum). Typically, this will result in a net circular polarisation. In a future paper we will discuss astrophysical environments in which this may be the case, including applications to Sgr A* and high-mass X-ray binaries.
We present a method for performing polarised radiative transfer through a strongly refractive magnetised plasma in a general relativistic environment. Additionally, we apply this to a model of a thick accretion disk. This is done in six sections, with §2 briefly reviewing the formalism presented in Paper I, §3 discussing how to perform the radiative transfer in a magnetised plasma, §4 presenting low-harmonic synchrotron as a possible emission mechanism, §5 presenting some results, and §6 containing conclusions. The details of constructing a magnetised, thick, barotropic disk are presented in the appendix.
RAY PROPAGATION
While astrophysical plasmas will, in general, be hot, the cold case provides an instructive setting in which to demonstrate the types of effects that may be present. As a result, it will be assumed that the plasma through which the rays propagate will be cold, with a small component of emitting hot electrons. As shown in Paper I, the rays may be explicitly constructed given a dispersion relation, D (kµ, x µ ) (a function of the wave four-vector and position which vanishes along the ray), by integrating the ray equations:
$$\frac{dx^\mu}{d\tau} = \left.\frac{\partial D}{\partial k_\mu}\right|_{x^\mu} \quad\text{and}\quad \frac{dk_\mu}{d\tau} = -\left.\frac{\partial D}{\partial x^\mu}\right|_{k_\mu}, \tag{1}$$
where $\tau$ is an affine parameter along the ray.
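As an illustration of how equation (1) is used in practice, the sketch below takes one explicit Euler step of the ray equations for an arbitrary user-supplied dispersion function D(k, x), with the partial derivatives evaluated by central differences. This is a numerical toy under our own conventions, not the integrator used in the paper.

```python
import numpy as np

def ray_step(x, k, D, dtau, eps=1e-6):
    """One Euler step of dx^mu/dtau = dD/dk_mu and dk_mu/dtau = -dD/dx^mu.

    x and k are length-4 arrays (position and covariant wave four-vector);
    D(k, x) is a scalar dispersion function that vanishes along the ray.
    """
    basis = np.eye(4)
    dD_dk = np.array([(D(k + eps * e, x) - D(k - eps * e, x)) / (2 * eps)
                      for e in basis])
    dD_dx = np.array([(D(k, x + eps * e) - D(k, x - eps * e)) / (2 * eps)
                      for e in basis])
    return x + dtau * dD_dk, k - dtau * dD_dx

# Sanity check with the vacuum dispersion D = k.k (flat space, -+++ metric):
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
D_vac = lambda k, x: k @ eta @ k
x, k = np.zeros(4), np.array([-1.0, 1.0, 0.0, 0.0])  # covariant null k_mu
print(ray_step(x, k, D_vac, 0.1)[0])  # [0.2 0.2 0. 0.]: a forward null ray
```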
Expanding Maxwell's equations in the geometric optics limit provides the polarisation eigenmodes and the dispersion relation (given a conductivity):
$$\left(k^\alpha k_\alpha\,\delta^\mu_{\ \nu} - k^\mu k_\nu - 4\pi i\omega\,\sigma^\mu_{\ \nu}\right)E^\nu = 0, \tag{2}$$
where $E^\mu$ is the four-vector coincident with the electric field in the locally flat, comoving rest frame (LFCR frame), $\omega \equiv -u^\mu k_\mu$ ($u^\mu$ is the plasma four-velocity which defines the LFCR frame), and $\sigma^\mu_{\ \nu}$ is the covariant extension of the conductivity tensor. For the cold, magnetoactive, electron-ion plasma (in the limit of infinite ion mass), the dispersion relation is
$$D(k_\mu, x^\mu) = k^\mu k_\mu - \delta\omega^2 - \frac{\delta}{2}\left[(1+\delta)\left(\frac{eB^\mu k_\mu}{m\omega}\right)^2 - (1+2\delta)\,\omega_B^2 \pm \sqrt{\left(\frac{eB^\mu k_\mu}{m\omega}\right)^4 + 2\left(2\omega^2 - \omega_B^2 - \omega_P^2\right)\left(\frac{eB^\mu k_\mu}{m\omega}\right)^2 + \omega_B^4}\,\right], \tag{3}$$
where $B^\mu$ is the four-vector coincident with the external magnetic field in the LFCR frame, $\omega_P$ is the plasma frequency in the LFCR frame, $\omega_B$ is the cyclotron frequency associated with $B^\mu$, and $\delta \equiv \omega_P^2/(\omega_B^2 - \omega_P^2)$. This is a covariant form of the Appleton-Hartree dispersion relation (see e.g. Boyd & Sanderson 1969).
In general, the electromagnetic polarisation eigenmodes will not follow the same trajectories, and in particular will not follow null geodesics. As a result, the different polarisation eigenmodes will sample different portions of the accretion flow. As shown in Paper I, it is possible for one mode to be captured by the central black hole while the other escapes, leading to a net polarisation.
POLARISED RADIATIVE TRANSFER IN REFRACTIVE PLASMAS
Both emission and absorption are local processes. However, because the transfer of radiation necessarily involves a comparison between the state of the radiation at different points in space, global propagation effects need to be accounted for. These take two general forms: correcting for the gravitational redshift; and keeping track of the local coordinate system, i.e. ensuring that polarised emission is being added appropriately in the presence of a rotation of the coordinate system propagated along the ray. In addition, for a magnetoactive plasma, it is necessary to determine how to perform the radiative transfer in the presence of refraction.
Length Scales and Regimes
The problem of performing radiative transfer in a magnetoactive plasma has been treated in detail in the context of radio-wave propagation in the ionosphere (for a detailed discussion see e.g. Ginzburg 1970;Budden 1964). In these cases it was found that there were two distinct limiting regimes. These can be distinguished by comparing two fundamental scales of the affine parameter τ : that over which the polarisation eigenmodes change appreciably, τS, and the Faraday rotation length, τF . Before τS can be defined it is necessary to define a pair of basis four-vectors that define the axes of the ellipse:
$$e^\mu = \frac{\left(k^\alpha k_\alpha + \omega^2\right)B^\mu - B^\nu k_\nu\left(k^\mu - \omega u^\mu\right)}{\sqrt{\left(k^\beta k_\beta + \omega^2\right)\left[\left(k^\sigma k_\sigma + \omega^2\right)B^\gamma B_\gamma - \left(B^\gamma k_\gamma\right)^2\right]}}, \tag{4}$$
$$e^\mu_\perp = \frac{\varepsilon^{\mu\nu\alpha\beta}\,u_\nu k_\alpha B_\beta}{\sqrt{\left(k^\sigma k_\sigma + \omega^2\right)B^\gamma B_\gamma - \left(B^\gamma k_\gamma\right)^2}}, \tag{5}$$
where $\varepsilon^{\mu\nu\alpha\beta}$ is the Levi-Civita pseudo-tensor.
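In the LFCR frame (where $u^\mu = (1,0,0,0)$ and $k^\alpha k_\alpha + \omega^2 = |\mathbf{k}|^2$), equations (4) and (5) reduce to purely spatial unit vectors proportional to $|\mathbf{k}|^2\mathbf{B} - (\mathbf{B}\cdot\mathbf{k})\,\mathbf{k}$ and $\mathbf{k}\times\mathbf{B}$, respectively. A small sketch of that reduction (the variable names are ours; the overall sign of the second vector depends on the orientation convention of the Levi-Civita tensor):

```python
import numpy as np

def polarization_basis(k3, B3):
    """LFCR-frame spatial parts of the basis vectors of Eqs. (4)-(5):
    the first lies in the k-B plane perpendicular to k, the second ~ k x B."""
    e_in_plane = np.dot(k3, k3) * B3 - np.dot(B3, k3) * k3
    e_perp = np.cross(k3, B3)
    return e_in_plane / np.linalg.norm(e_in_plane), e_perp / np.linalg.norm(e_perp)

# Oblique field: both basis vectors are well defined and mutually orthogonal.
e1, e2 = polarization_basis(np.array([0.0, 0.0, 1.0]), np.array([1.0, 0.0, 1.0]))
print(np.dot(e1, e2))  # ~0
```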
In terms of these, the ellipticity angle $\chi$ can be defined by
$$\tan\chi \equiv i\,\frac{e^\mu E^O_\mu}{e^\nu_\perp E^O_\nu} = i\,\frac{e^\mu_\perp E^X_\mu}{e^\nu E^X_\nu}. \tag{6}$$
In general, an additional angle, φ, is necessary to define the polarisation, namely the angle which defines the orientation of the ellipse. The basis four-vectors have been chosen such that φ is identically zero. However, this choice introduces a new geometric term into the equations which accounts for the necessary rotation of the basis four-vectors, contributing a non-zero dφ/dτ (see §3.3 for more details). Then, in general,
$$\tau_S \equiv \left[\left(\frac{d\phi}{d\tau}\right)^2 + \left(\frac{d\chi}{d\tau}\right)^2\right]^{-1/2}. \tag{7}$$
For the ordered fields employed here (see the appendices),
$$\tau_S \simeq \left(\frac{\omega_B}{\omega^3}\,\frac{\partial\omega_P^2}{\partial x^\mu}\frac{dx^\mu}{d\tau}\right)^{-1}, \tag{8}$$
where this approximate form holds for small cyclotron and plasma frequencies and for all but the most oblique angles of incidence. The Faraday rotation length is defined to be the distance over which the phase difference between the two polarisation eigenmodes reaches $2\pi$, i.e.
$$\tau_F \equiv \left(\Delta k_\mu\,\frac{dx^\mu}{d\tau}\right)^{-1}, \tag{9}$$
where $\Delta k_\mu$ is the difference between the wave vectors of the two modes. Strictly speaking, in addition to $\tau_F$, $\tau_S$ should be compared to a term describing the rate of change of the Faraday rotation length; however, in the situations under consideration here this term is completely dominated by $\tau_F$. Together, these length scales define three regimes: the adiabatic regime ($\tau_F \ll \tau_S$), the intermediate regime ($2\tau_F \sim \tau_S$), and the strongly coupled regime ($\tau_F \gg \tau_S$). In all regimes the polarisation of the plasma eigenmodes is uniquely set by the dispersion equation, equation (2).
In general, as $\theta \to \pi/2$, $\Delta k \simeq (\omega_P^2\omega_B/\omega^2 c)\cos\theta + (\omega_P^2\omega_B^2/\omega^3 c)$, where $\theta$ is the angle between the wave-vector and the magnetic field. Hence, to remain in the adiabatic regime, $\tau_S \gg (\omega/\omega_B)^2\,\tau_F(\theta=0)$, which is typically not true in astrophysical sources. As a result, as the magnetic field becomes perpendicular to the wave-vector, the modes generally become strongly coupled. This is the reason why, when dealing with a large number of field reversals (e.g. in a molecular cloud), the amount of Faraday rotation and conversion is $\propto \int \mathbf{B}\cdot d\mathbf{x}$ and not $\int |\mathbf{B}|\,dx$ (which would follow in the adiabatic regime), despite the fact that $\tau_S \gg \tau_F(\theta=0)$ may be true throughout the entire region.
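A trivial helper making the three-regime classification explicit. The order-of-magnitude thresholds are our illustrative choice; the text only fixes the limits $\tau_F \ll \tau_S$ and $\tau_F \gg \tau_S$, with $2\tau_F \sim \tau_S$ in between.

```python
def coupling_regime(tau_F, tau_S, margin=10.0):
    """Classify mode coupling from the Faraday length tau_F and the
    mode-evolution length tau_S (both measured in affine parameter)."""
    if tau_F * margin < tau_S:
        return "adiabatic"          # modes propagate independently
    if tau_F > margin * tau_S:
        return "strongly coupled"   # the mode sum evolves as in vacuum
    return "intermediate"

print(coupling_regime(1e-3, 1.0))  # adiabatic
```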
Adiabatic Regime
In the adiabatic regime the two polarisation modes propagate independently (see e.g. Ginzburg 1970). As a result, to a good approximation, the polarisation is simply given by the sum of the two polarisations. The intensities, IO and IX, of the ordinary and the extraordinary modes, respectively, are not conserved along the ray due to the gravitational redshift. Consequently, the photon occupation numbers of the two modes, NO and NX , which are Lorentz scalars, and hence are conserved along the rays, are used. Therefore, the equation of radiative transfer is given by
dNO,X dτ = dl dτ j O,X − αO,X NO,X ,(10)
where
dl dτ = gµν dx µ dτ dx ν dτ − uµ dx µ dτ 2(11)
is the conversion from the line element in the LFCR frame to the affine parameterisation, and j O,X is the emissivity in the LFCR frame scaled appropriately for the occupation number (as opposed to the intensity). In practice, the occupation numbers will be large. However, up to fundamental physical constants, it is permissible to use a scaled version of the occupation numbers such that NO,X = ω −3 IO,X in vacuum. It is also this regime in which Faraday rotation and conversion occur. However, because these propagation effects result directly from interference between the two modes, and hence require the emission to be coherent among the two modes, when they diverge sufficiently the modes must be added incoherently and thus Faraday rotation and conversion effectively cease. The modes will have divereged sufficiently when
|∆x ⊥ | λ 2 ∆λ ,(12)
where ∆λ is the emission band-width. For continum emission, this reduces to |∆x ⊥ | λ. Therefore in a highly refractive medium an additional constraint is placed upon Faraday rotation. The depth at which equation (12) is first satisfied can be estimated by considering an oblique ray entering a plane-parallel density and magnetic field distribution (at angle ζ to the gradient). In this case, to linear order in ωP and ωB,
d 2 ∆x ⊥ dz 2 ≃ − sin ζ ∂D ∂z ≃ ωBω 2 P ω 3 z(13)
As a result,
|∆x ⊥ | ≃ ωBω 2 P z 2ω 3 , hence zmax ≃ λ 2ω 3 ωBω 2 P .(14)
The resulting number of Faraday rotations, nF , is then given by,
nF ≡ zmax 0 ∆k 2π dz ≃ 1 2π sin ζ ,(15)
which is typically small for all but the smallest ζ. Because, as discussed in section 5, linear polarisation is strongly suppressed by refraction, such a small Faraday rotation in negligible. As a result, for the situations of interest here, in this regime the modes can be added together incoherently to yield the net polarisation.
Strongly Coupled Regime
In the limit of vanishing plasma density it is clear that the polarisation propagation must approach that in vacuum regardless of the magnetic field geometry. In this limit the two modes must be strongly coupled such that their sum evolves as in vacuum. In particular, it is necessary to keep track of their relative phases. This can be most easily accomplished by using the Stokes parameters to describe the radiation. In this case also it is possible to account for the gravitational redshift by using the photon occupation number instead of intensities, N , NQ, NU , NV . However, it is also necessary to define the NQ, NU , and NV in a manner that is consistent along the entire ray. In order to do this we may align the axes of NQ along the magnetic field, i.e.
NQ = N (ê µ ) − N (ê µ ⊥ ) NU = N 1 √ 2ê µ − 1 √ 2ê µ ⊥ − N 1 √ 2ê µ + 1 √ 2ê µ ⊥ (16) NV = N 1 √ 2ê µ + i √ 2ê µ ⊥ − N 1 √ 2ê µ − i √ 2ê µ ⊥ ,
where N (e µ ) is the occupation number of photons in the polarisation defined by e µ . Thus the problem of relating NQ, NU , and NV along the ray is reduced to propagatingê µ and e µ ⊥ . A change in τ by dτ is associated with a rotation of the basis by an angle
dφ =ê ⊥µ dx ν dτ ∇νê µ dτ ,(17)
where the use of the covariant derivative, ∇ν , accounts for the general relativistic rotations ofê µ andê µ ⊥ . As a result, the transfer effect due to general relativity and the rotation of the magnetic field about the propagation path is
dNQ dτ = −2 dφ dτ NU dNU dτ = 2 dφ dτ NQ ,(18)
where the factor of 2 arises from the quadratic nature of N.
After a specific emission model is chosen the emissivities and the absorption coefficients are scaled as in §3.2. An example will be discussed in more detail in §4.
Intermediate Regime
At some point it is necessary to transition from one limiting regime to the other. In this intermediate regime the polarisation freezes out. A great deal of effort has been expended to understand the details of how this occurs (see e.g. Budden 1952). However, to a good approximation it is enough to set the polarisation at the point when τF = 2τS to the incoherent sum of the polarisation eigenmodes (see the discussion in Ginzburg 1970):
N = NO + NX NQ = − cos 2χ(NO − NX ) NU = 0 (19) NV = sin 2χ(NO − NX )
It is straightforward to show that in terms of the generalised Stokes parameters NO and NX are given by (this is true even when they are offset by a phase)
NO = 1 2 (N − cos 2χNQ + sin 2χNV ) NX = 1 2 (N + cos 2χNQ − sin 2χNV ) .(20)
Note that, in general, polarisation information will be lost in this conversion. This is a reflection of the fact that the space spanned by the incoherent sum of the two modes forms a subset of the space of unpolarised Stokes parameters. This is clear from their respective dimensionalities; the former is three dimensional (there are only three degrees of freedom for the decomposition into the two polarisation modes, namely their amplitudes and relative phase), while the later is four dimensional (I, Q, U , and V , subject only to the condition that I 2 Q 2 + U 2 + V 2 ).
LOW HARMONIC SYNCHROTRON RADIATION INTO COLD PLASMA MODES
As discussed in the previous section, emission and absorption are inherently local processes. As a result it will be sufficient in this context to treat them in the LFCR frame, and hence in flat space. In this frame it is enough to solve the problem in three dimensions and then insert quantities in a covariant form. Because refractive effects become large only when ω ∼ ωB, ωP , for there to be significant spectral and polarimetric effects it is necessary to have an emission mechanism which operates in this frequency regime as well. A plausible candidate is low harmonic synchrotron emission. It is assumed that a hot power-law distribution of electrons is responsible for the emission while the cold plasma is responsible for the remaining plasma effects. In Paper I we did present the theory for the warm plasma as well, however, as in the conventional magnetoionic theory, it is much more cumbersome to utilise.
Razin Suppression
A well known plasma effect upon synchrotron emission is the Razin suppression (see e.g. Rybicki & Lightman 1979;Bekefi 1966). This arises due to the increase in the wave phase velocity above the speed of light, preventing electrons from maintaining phase with the emitted electromagnetic wave, resulting in an exponential suppression of the emission below the Razin frequency,
ωR = ω 2 P ωB .(21)
However, as discussed in the Appendix, for the disk model we have employed here, typically ωB > ωP and hence the Razin effects do not arise.
Projection onto Non-Orthogonal Modes
A significant problem with emission mechanisms in the ω ∼ ωB, ωP frequency regime is that the modes are no longer orthogonal. It is true that for a lossless medium (such as the cold plasma), equation (2), which defines the polarisation, is self-adjoint. However, because of the k µ differ for the two modes, it is a slightly different equation for each mode, and hence the polarisations are eigenvectors of slightly different hermitian differential operators. In the high frequency limit this difference becomes insignificant. The energy in the electromagnetic portion of the wave (neglecting the plasma portion) is given by
E = E * · ǫ · E 4π = 1 4π E * · 1 + 4πi ω σ · E(22)
For each mode (EO and EX ), the dispersion equation gives
ω 2 + 4πiωσ · EO,X = k 2 O,X − kO,X ⊗ kO,X · EO,X = k 2 O,X 1 −k ⊗k · EO,X . (23) Therefore, with E = i Ei, E = 1 4πω 2 i,j k 2 j E * i · 1 −k ⊗k · Ej .(24)
However, for a lossless medium it is also true that
E = E † = 1 4πω 2 i,j k 2 i E * i · 1 −k ⊗k · Ej ,(25)
and therefore,
i,j k 2 i − k 2 j E * i · 1 −k ⊗k · Ej = 0 .(26)
For a non-degenerate dispersion relation, e.g. that of a magnetoactive plasma, this implies that the the components of the polarisation transverse to the direction of propagation are orthogonal for the two modes, i.e.
F * i ·Fj = k 2 i δij (27) whereF O,X = kO,X 1 −k ⊗k ·ÊO,X E * O,X · 1 −k ⊗k ·ÊO,X .(28)
As a result it is possible to define EO,X such that
EO,X = F * O,X · FO,X 4π and E = i Ei ,(29)
i.e. that the electromagnetic energy can be uniquely decomposed into the electromagnetic energy in the two modes. Expressions for the FO,X can be obtained by solving for the eigenvectors of the dispersion equation. For the cold magnetoactive plasma this giveŝ
FO,X = kO,X √ 2 1 ± (1 + ε) −1/2ê ± i 1 ∓ (1 + ε) −1/2ê ⊥ ,(30)
where, (not to be confused with the Levi-Civita pseudotensor)
ε = sin 2 θ 2 cos θ ωωB ω 2 P − ω 2 −2 ,(31)
θ is the angle between the magnetic field and the wave vector, andê ,⊥ are the flat space analogues of the basis vectors in equation (5). θ may be defined covariantly by
cos 2 θ = (B µ kµ) 2 B ν Bν (k σ kσ + ω 2 ) .(32)
This corresponds to the polarisation found in the literature (cf. Budden 1964).
Emissivities
Because the energy can be uniquely decomposed into the energy in each polarisation eigenmode, it is possible to calculate the emissivities and absorption coefficients by the standard far-field method. For synchrotron radiation this was originally done by Westfold 1959. The calculation is somewhat involved but straightforward and has been done in detail in the subsequent literature (see e.g. Rybicki & Lightman 1979). Consequently, only the result for the power emitted (per unit frequency and solid angle) for a given polarisation is quoted below:
P O,X ω Ω = e 3 B sin θ 8 √ 3π 2 mk 2 O,X n 2 r d 3 pf (p) F O,X ·ê 2 + F O,X ·ê ⊥ 2 F (x) + F O,X ·ê 2 − F O,X ·ê ⊥ 2 G(x) , (33) where x = 2mcω 3γ 2 eB sin θ ,(34)
f (p) is the distribution function of emitting electrons, nr is the ray-refractive index (for a suitable definition see Bekefi 1966), and F and G have their usual definitions,
F (x) = x ∞ x K 5 3 (y)dy and G(x) = xK 2 3 (x) ,(35)
where the K 5/3 and K 2/3 are the modified Bessel functions of 5/3 and 2/3 order, respectively. The addition factor of n 2 r arises from the difference in the photon phase space, d 3 k and the analogous integral over frequency, 4πdω.
For the adiabatic regime, the emissivities, j O,X ω , can now be defined:
j O,X = 1 4πn 2 r ω 3 P O,X ω Ω .(36)
For a power-law distribution of emitting electrons, f (p)d 3 p = Cγ −s dγ, this gives
j O,X = √ 3e 2 C 24π 2 ω 2 c(1 + s) 3 ωB ω sin θ s+1 2 Γ s 4 + 19 12 × Γ s 4 − 1 12 1 ± 3s + 3 3s + 7 (1 + ε) − 1 2 .(37)
The Stokes emissivities and absorption coefficients for an emitting hot power law (ignoring effects of order γ −1 as these explicitly involve the propagation through the hot electrons) are given by
j N = j O + j X (38) j Q = √ 3e 2 C 48π 2 ω 2 c 3 ωB ω sin θ s+1 2 × Γ s 4 + 7 12 Γ s 4 − 1 12 (39) j U = j V = 0 .(40)
Note that for low γ synchrotron can efectively produce circular polarisation, namely j V ∼ 3/γ. The production of circular polarisation in this way in environments with large Faraday depths will be considered in future publications.
Absorption Coefficients
For the adiabatic regime, detailed balance for each mode requires that the absorption coefficients are then given by
αO,X = √ 3πe 2 C6ωmc
In the strongly coupled regime, the Stokes absorption coefficient matrix (see e.g. Jones & O'Dell 1977b, and references therein),
αN αQ 0 αV αQ αN 0 0 0 0 αN 0 αV 0 0 αN .(42)
where the Faraday rotation and conversion due to the hot electrons have been ignored as a result of the fact that they will be negligible in comparison to the Faraday rotation and conversion due to the cold electrons. The individual α's can be obtained in terms of the αO,X using the fact that the energy in the electromagnetic oscillations can be uniquely decomposed into contributions from each mode (equation (29)). Then,
dN dλ = dNO dλ + dNX dλ = jO + jX − αONO − αX NX = (jO + jX ) − 1 2 (αO + αX )N(43)+ 1 2 cos 2χ (αO − αX ) Q − 1 2 sin 2χ (αO − αX ) V .
Therefore, the absorption coefficients may be identified as,
αN = 1 2 (αO + αX ) (44) αQ = − 1 2 cos 2χ (αO − αX ) (45) αV = 1 2 sin 2χ (αO − αX ) .(46)
Unpolarised Low Harmonic Synchrotron Radiation
To highlight the role of refraction in the generation of polarisation, an unpolarised emission mechanism is also used.
To compare with the results of the polarised emission model discussed in the previous section, the artificial scenario in which the synchrotron emission is split evenly into the two modes was chosen. In this case,
j UP O,X = 1 2 j N ,(47)
and
j UP N = j N ,(48)
with the other Stokes emissivities vanishing. Similarly, the absorption coefficients are given by,
α UP O,X = α UP N = αN ,(49)
with the other absorption coefficients vanishing as well.
Constraints Upon the Emitting Electron Fraction
For refractive plasma effects to impact the spectral and polarimetric properties of an accretion flow, it is necessary that it be optically thin. This places a severe constraint upon the fraction of hot electrons, f ≡ C/[ne(s − 1)]. In terms of the plasma frequency and f the absorptivity is approximately
αN ∼ √ 3 24c f ω 2 P ω 3 ωB ω sin θ (s+2)/2 .(50)
With s ∼ 2, and ω ∼ ωP , ωB, the typical optical depth (not to be confused with the affine parameter) is
τ ∼ 10 −1 f R λ hence f ∼ 10 λ R ,(51)
where R is the typical disk scale length (here on the order of 10M ).
RESULTS
Disk Model
Before any quantitative results are presented it is necessary to select a specific plasma and magnetic field distribution.
Here this takes the form of an azimuthally symmetric, thick, barotropic disk around a maximally rotating Kerr black hole (a ≃ 0.98). The magnetic field is chosen to lie upon surfaces of constant angular velocity, thus insuring that it does not shear. In order to maintain such a field it must also be strong enough to suppress the magneto-rotational instability. Further details may be found in the appendix.
Ray Trajectories
Figure 1 shows vertical and horizontal slices of rays propagated back through the disk discussed in the previous section from an observer elevated to 45 • above the equatorial plane at a frequency ω∞ = 3ωP max/4. Note that since the maximum occurs at req = 2M , the relativistically blue-shifted ω is approximately 1.8ωP max placing it comfortably above the plasma resonance at all points (assuming Doppler effects do not dominate at this point.)
The refractive effects of the plasma are immediately evident with the extraordinary mode being refracted more so (see discussion in Broderick & Blandford 2003). Gravitational lensing is also shown to be important over a significant range of impact parameters. There will be an azimuthal asymmetry in the ray paths due to both the black hole spin and the Dopper shift resulting from the rotation of the disk. This can be clearly observed in panel (b) Figure 1.
In panel (a) of Figure 1 the transition between the two radiative transfer regimes is also clearly demonstrated. Each time a ray passes from the strongly coupled to the adiabatic regime it must be reprojected into the two polarisation eigenmodes. If the plasma properties (e.g. density, magnetic field strength or direction, etc) are not identical to when the polarisation had previously frozen out (if at all), this decomposition will necessarily be different. As a result, when propagating the rays backwards, whenever one passes from the adiabatic to the strongly coupled regime, it is necessary to follow both polarisation eigenmodes in order to ensure the correctness of the radiative transfer. The leads to a doubling of the rays at such points. When integrating the radiative transfer equations forward along the ray, the net intensity is then projected out using equation (20). This ray doubling is clearly present in panel (a) of Figure 1, where the rays pass into the strongly coupled regime and back again as they traverse the evacuated funnel above and below the black hole.
Note that the trajectories of the rays depend upon ωP /ω∞ and ωB/ω∞ only (given a specified disk and magnetic field structure, of course), where ω∞ is ω as measured at infinity. Therefore, the paths shown in Figure 1 are valid for any density normalisation of the disk described in the appendix as long as ω is adjusted accordingly.
Polarisation Maps
In order to demonstrate the formalism described in this paper, polarisation maps were computed for the disk model described in section 5.1 and the appendix A orbiting a maximally rotating black hole as seen by an observer at infinity elevated to 45 • above the equatorial plane. Each map shows Stokes I, Q, U , and V . As with the rays trajectories, the particular form of the polarisation maps only depend upon a few unitless parameters. These necessarily include ωP max/ω and ωB max/ω as these define the ray trajectories. In addition, the relative brightness depends upon the optical depth which is proportional to (ωP max/ω) 2 (ωB max/ω)M f ω/c. As a result if the following dimensionless quantities remain unchanged, the polarisation maps shown in the following sections will apply (up to a constant scale factor)
ωP max ω∞ = 4 3 ωB max ω∞ = 4 3 f M λ = 2.30 × 10 3 .(52)
Despite the fact that the form of the polarisation maps will remain unchanged if the quantities in equation (52) remain constant, the normalisation will change by a multiplicative constant in the same way as the source function, namely proportional to ω 2 ∞ . However, an additional multiplicative factor arises from the solid angle subtended by the source on the sky. As a result, Stokes I, Q, U , and V are all shown in units of
M D 2 me ω 2 P max ,(53)
where D is the distance to the source. This amounts to plotting
kTB mec 2 ω∞ ωP max 2 ,(54)
where TB is the brightness temperature of the source.
Unpolarised Emission
For the purpose of highlighting the role of refractive plasma effects in the production of significant quantities of circular polarisation, Figure 2 shows Stokes I, Q, U , and V at ω∞ = 3ωP max/4, calculated using the unpolarised emission model described in section 4.5. Immediately noticeable are the regions of considerable polarisation surrounding the black hole. In addition, the outlines of the evacuated funnel above and below the hole are clearly visible. Differences in refraction of the two polarisation eigenmodes leads two two generic effects: (i) the presence of two maxima in the intensity map, each associated with the intensity maxima in a given polarisation eigenmode; and (ii) a net excess of one polarisation, and in particular, circular polarisation. The polarisation changes rapidly at the edges of the evacuated funnels because the refraction and mode decomposition changes rapidly for modes that just enter the funnel and those that pass wide of it. Note that all of the polarisation is due entirely to refractive plasma effects in this case.
The integrated values for the Stokes parameters are I = 1.3, Q = −9.4 × 10 −4 , U = 4.9 × 10 −5 , and V = 6.2 × 10 −2 , demonstrating that there does indeed exist a significant net circular polarisation. Figure 2 may be compared with Figure 3 in which Stokes I, Q, U , and V are shown at ω∞ = 3ωP max for the same unpolarised emission model. In the latter case the refractive effects are significantly repressed. This demonstrates the particularly limited nature of the frequency regime in which these types effects can be expected to occur. In this case there still does exist a net circular polarisation, now with integrated values I = 1.0, Q = −4.8 × 10 −6 , U = 2.4 × 10 −7 , and V = 1.2 × 10 −3 .
Polarised Emission
In general, synchrotron emission will be polarised. As a result it is necessary to produce polarisation maps using the emission model described in sections 4.3 and 4.4. In this case a net polarisation will exist even in the absence of any refraction. In order to compare the amount of polarisation generated by refractive effects to that created intrinsically, Figure 4 shows Stokes I, Q, U , and V calculated using the polarised emission model and ignoring refraction (i.e. setting the rays to be null geodesics) for ω∞ = 3ωP,max/4. Strictly speaking, this is a substantial over estimate of the polarisation. This is because, in the absence of refraction, in principle it is necessary to include Faraday rotation and conversion in the transfer effects considered. As a result of the high plasma density and magnetic field strengths, the Faraday rotation and conversion depths for this system should be tremendous for non-refractive rays, effectively depolarising any emission.
In comparison to Figures 2 and 3, the general morphology of the polarisation maps are substantially different. In addition, the amount of linear polarisation is significantly larger, having an integrated value of over 60% compared to less than 0.1% in Figure 2 and less than 10 −3 % in Figure 3. This calculation can be compared to that done by Bromley et al. 2001. In both it was assumed that the rays were null geodesics. In both Faraday rotation/conversion were neglected (in Bromley et al. 2001 because for their disk model it was assumed to be negligible.) However, in Bromley et al. 2001 it was also assumed that the radiative transfer could always be done in the adiabatic regime. As a result, the net polarisation was determined entirely by the emission mechanism. However, as discussed in section 3.1 this is only possible in the strongly coupled regime. In this case, the dichroic terms in equation (42) provide the source of circular polarisation, even in the absence of a circularly polarised emission, resulting from the different absorption properties of the two polarisation eigenmodes. This is what leads to the presence of circular polarisation in Figure 4 but not in Bromley et al. 2001. In this case, the integrated values of the Stokes parameters are I = 1.1, Q = 6.0 × 10 −1 , U = −4.9 × 10 −3 , and V = 6.9 × 10 −2 . The vertical feature directly above the black hole in panels (b) and (c) are associated with the rapid decrease in the magnetic field strength in the evacuted funnel above and below the black hole and are due to the geometric transfer effect discussed in section 3.3.
Finally, in Figure 5, both refractive effects and the po-larised emission mechanism are included (again at ω∞ = 3ωP,max/4). Many of the qualitative features of Figure 2 still persist. The integrated values of the Stokes parameters are I = 1.3, Q = −2.2 × 10 −3 , U = 1.2 × 10 −4 , and V = 1.4 × 10 −1 . While the intrinsic polarisation in the emission does make a quantitative difference, it is clear that in this case the generic polarimetric properties are dominated by the refractive properties. This is most clearly demonstrated by noting the strong supression of linear polarisation. In Figure 5 the linear polarisation fraction is less than 0.2% as compared with nearly 60% in Figure 4. Figure 6 shows the Stokes parameters as a function of frequency for when only polarised emission is considered, only refractive plasma effects are considered, and when both are considered. There are two notable effects due to refraction: (i) the significant suppression of the linear polarisation, and (ii) the large amplification of circular polarisation. The linear polarisation is decreased by at least two orders of magnitude, and in particular, at least two orders of magnitude less than the final circular polarisation. On the other hand, the circular polarisation is more than doubled at its peak, and increases by many orders of magnitude at higher frequencies.
Integrated Polarisations
Nonetheless, by ω∞ = 10ωP max, both polarisations are less than one tenth of their maxima. As a result, it is clear that this mechanism is restricted to approximately one decade in frequency, centred about ωP max. Figure 7 shows the circular polarisation fraction as a function of frequency for the same set of cases that were depicted in the previous figure. As can be seen in Figure 6, the circular and linear polarisation spectral index are approximately equal, and both are softer than that of the total intensity. The result is a decreasing circular polarisation fraction with increasing frequency.
CONCLUSIONS
We have presented refraction as a mechanism for the generation of polarisation when ω∞ ∼ ωP , ωB. That this will typically result in mostly circular polarisation is a result of the fact that the polarisation eigenmodes are significantly elliptical only when the wave-vector and the magnetic field are within ωB/ω of perpendicular, which is usually a small number near the surface where the polarisation freezes out. In addition to producing circular polarisation, this mechanism also significantly suppresses linear polarisation. Because it does require significant refraction to take place, it is necessarily limited to approximately a decade in frequency, making it simple to identify.
As shown in section 5.4, the resulting circular polarisation will be softer than the intensity. However, because of optical depth effects, as the observation frequency increases the polarimetric properties of such a system will be dominated be increasingly smaller areas. As a result, the fractional variability in such a system would be expected to increase with frequency. Furthermore, even though the emission may arise from a large region, the polarimetric properties will continue to be determined by this compact area, making it possible to have variability on time scales short in comparison to Figure 2. Stokes I, Q, U , and V per unit M 2 are shown in panels (a), (b), (c), and (d), respectively, for the unpolarised emission mechanism described in section 4.5 and the disk model described in section 5.1 and appendix A orbiting a maximally rotating black hole from a vantage point 45 • above the equatorial plane at the frequency ω∞ = 3ω P,max /4. The contour levels are at 0.2 (dashed) and 0.6 (solid) of the maximum values shown on the associated colorbars. The integrated fluxes over the region shown are I = 1.3, Q = −9.4 × 10 −4 , U = 4.9 × 10 −5 , and V = 6.2 × 10 −2 . All fluxes are in units of (M/D) −2 ω 2 P max as discussed above equation (53).
those associated with the emission region. In addition, variability in the circular polarisation would be expected to be correlated with variability in the integrated intensity at frequencies where the emission is dominated by contributions from close to the horizon (e.g. X-rays). Possible applications to known astrophysical sources include the Galactic Centre (at submm wavelengths) and extinct high mass X-ray binaries (in the infrared). These will be disscussed in further detail in an upcoming paper. be derived from the third. Explicitly, Ω and L are related by
Ω = g φφ L + g tφ g tt + g tφ L ,(A2)
and the condition that u µ uµ = u t ut + u φ u φ = −1 gives E in terms of Ω and L to be
E = − g tt + g tφ L (1 − ΩL) −1/2 .(A3)
In principle this should be combined with a torque balance equation which explicitly includes the mechanism for angular momentum transport through the disk. However, given a relationship between any two of the quantities E, Ω, and L specifies this automatically. Thus the problem can be significantly simplified if such a relationship can be obtained, presumably from the current MHD disk simulations.
A1 Barotropic Disks
For a barotropic disk the left side of equation (A1) can be explicitly integrated to define a function H:
H = dP ρ(P ) + Γ Γ−1 P ,(A4)
which may be explicitly integrated for gases with constant Γ to yield
H = ln 1 + Γ Γ − 1 P ρ .(A5)
Therefore, reorganising equation (A1) gives
∂µ (H − ln E) = − Ω∂µL 1 − ΩL ,(A6)
which in turn implies that Ω is a function of L alone. Specifying this function allows the definition of another function Ξ:
Ξ = Ω(L)dL 1 − Ω(L)L .(A7)
Using their definitions, it is possible to solve Ω = Ω(L) for L(x µ ) and hence Ξ(x µ ). Then H and Ξ are related by,
H = H0 + ln E − Ξ ,(A8)
which then may be inverted to yield ρ(H0 − ln E + Ξ). Inverting H for ρ then yields ρ(x µ ). The quantity H0 sets the density scale and may itself be set by choosing ρ at some point:
H0 = H(ρ0) − (ln E − Ξ)(x µ 0 ) . (A9)
A1.1 Keplerian Disk
As a simple, but artificial, example of the procedure, a Keplerian disk is briefly considered in the limit of weak gravitating Schwarzschild black hole (i.e. r ≫ M ). Note that this cannot be done in flat space because in equation (A1) the gravitational terms are present in the curvature only. For a Keplerian flow, Ω = M/(r sin θ) 3 ≃ M 2 L −3 . In that case using the definition of Ξ gives
Ξ = M 2 dL L 3 − M 2 L = dℓ ℓ 3 − ℓ = ln 1 − ℓ −2 ,(A10)
where ℓ = L/M . However, ln E is given by and hence,
ln E = − ln −g tt (1 − ΩL) = ln 1 − 2M r − ln 1 − ℓ −2 ,(A11)H = H0 − ln E + Ξ = H0 − ln 1 − 2M r + ln 1 − ℓ −2 ≃ H0 + M r − M r sin θ ,(A12)
where ℓ = r sin θ/M and the weakly gravitating condition were used. As expected, along the equatorial plane H, and therefore ρ, is constant. For points outside of the equatorial plane pressure gradients are required to maintain hydrostatic balance.
A1.2 Pressure Supported Disk
Accretion disks will in general have radial as well as vertical pressure gradients. Inward pressure gradients can support a stable disk inbetween the innermost stable orbit and the photon orbits, thus decreasing the radius of the inner edge of the disk. Around a Schwarzschild black hole this can bring the inner edge of the disk down to 3M . In a maximally rotating Kerr spacetime this can allow the disk to extend down nearly to the horizon. Far from the hole, accreting matter will create outward pressure gradients. An angular momentum profile appropriate for a Kerr hole which goes from being super to sub-Keplerian is
L(req) = g tφ 2 ,r − g tt ,r g φφ ,r − g tφ ,r g φφ −1
,r r=req if req < rinner c1M 3/2 r −1 eq + c2M 1/2 + l0 M req otherwise Ω(req) = g φφ L + g tφ g tt + g tφ L r=req , where both L and Ω are parametrised in terms of the equatorial radius, req. The condition that L reduces to the angular momentum profile of a Keplerian disk for radii less than the inner radius ensures that no pathological disk structures are created within the photon orbit. The constants c1 and c2 are defined by the requirement that at the inner edge of the disk, rinner, and at the density maximum, rmax, the angular momentum must equal that of the Keplerian disk. In contrast, l0 is chosen to fix the large r behaviour of the disk. The values chosen here were rinner = 1.3M , rmax = 2M , and l0 = 0.1. The value of H0 was set so that H(req = 100M ) = 0, thus making the disk extend to req = 100M .
In addition to defining Ω and L it is necessary to define P (ρ). Because the gas in this portion of the accretion flow is expected to be unable to efficiently cool, Γ = 5/3 was chosen. The proportionality constant in the polytropic equation of state, κ, is set by enforcing the ideal gas law for a given temperature (T0) at a given density (ρ0). Thus,
P (ρ) = ρ0 kT0 mp ρ ρ0 5/3 .(A14)
Note that ρ0 and T0 provide a density and temperature scale. A disk solution obtained for a given ρ0 and T0 may be used to generate a disk solution for a different set of scales simply by multiplying the density everywhere by the appropriate constant factor.
A2 Non-Sheared Magnetic Field Geometries
The disk model discussed thus far is purely hydrodynamic. Typically, magnetic fields will also be present. In general, it is necessary to perform a full MHD calculation in order to self-consistently determine both the plasma and magnetic field structure. However,an approximate steady state magnetic field can be constructed by requiring that the field lines are not sheared. To investigate the shearing between two nearby, spacelike separated points in the plasma, x µ 1 and x µ 2 , consider the invariant interval between them:
∆s 2 = ∆x µ ∆xµ where ∆x µ = x µ 2 − x µ 1 .(A15)
The condition that this doesn't change in the LFCR frame is equivalent to
d∆s 2 ds = 0 .(A16)
Expanding in terms of the definition of ∆s gives,
d ds gµν ∆x µ ∆x ν = gµν,σ dx σ ds ∆x µ ∆x ν + 2gµν∆x µ d∆x ν ds = 0 . (A17)
Note that by definition, dx µ ds = u µ and d∆x µ ds = u µ 2 − u µ 2 = u µ ,σ ∆x σ . (A18) Figure A1. Shown are the contours of the density and azimuthal velocity as measured by the zero angular momentum observer, and the magnetic field lines. Starting at the density maximum (req = 2M and z = 0), the density is contoured at levels 10 −0.5 to 10 −4.5 times the maximum density in multiples of 10 −1 . From left to right, the velocity is contoured at levels 2 −0.5 c to 2 −5 c in multiples of 2 −0.5 . In order to provide a distinction between the velocity contours and the magnetic field lines, the velocity contours are terminated at the disks surface.
Hence, d∆s 2 ds = gµν,σu σ + 2gµσu σ ,ν ∆x µ ∆x ν = (gµν,σu σ + 2uµ,ν − 2gµσ,νu σ ) ∆x µ ∆x ν = 2 uµ,ν − Γ σ µν uσ ∆x µ ∆x ν = 2 (∇µuν ) ∆x µ ∆x ν = 0 .
The final equality is easy to understand from a geometrical viewpoint; for there to be no shearing, there can be no change in the direction of ∆x µ of the component of the plasma four-velocity along ∆x µ . That a steady state, axially symmetric magnetic field must lie upon the non-shearing surfaces can be seen directly by considering the covariant form of Maxwell's equations. In particular ∇ * ν F µν = 0, where * F µν is the dual of the electromagnetic field tensor, which in the absence of an electric field in the frame of the plasma takes the form * F µν = B µ u ν − B ν u µ . Therefore, Bµ∇ * ν F µν = BµB µ ∇νu ν + Bµu ν ∇ν B µ − Bµu µ ∇νB ν − BµB ν ∇νu µ = −B µ B ν ∇νuµ = 0 ,
where the first three terms vanish due to the symmetries and the requirement that B µ uµ = 0. This is precisely the non-shearing condition obtained in equation (A19). For plasma flows that are directed along the Killing vectors of the spacetime, ξ µ i , i.e.
u µ = u t t µ + i u i ξ µ i ,(A21)
where t µ is the time-like Killing vector, it is possible to simplify the no-shear condition considerably.
∆x µ ∆xν∇µu ν = ∆x µ ∆xν u t ∇µt ν + i u i ∇µξ ν i + ∆x µ ∆xν t ν ∂µu t + i ξ ν i ∂µu i = ∆xt∆x µ ∂µu t + i ∆xi∆x µ ∂µu i = 0 ,(A22)
Where terms in the first parentheses vanish due to Killing's equation. The additional constraint that ∆xµu µ = 0 gives
∆xt = − i Ωi∆xi ,(A23)
where Ωi ≡ u i /u t is a generalisation of the definition of Ω at the beginning of the section. Inserting this into equation (A22) and simplifying yields i ∆xi∆x µ ∂µΩi = 0 ,
i.e. the no shear hypersurfaces are those upon which all of the Ωi are constant. For the plasma flows considered in §A1 the plasma velocity is in the form of equation (A21) where the space-like Killing vector is that associated with the axial symmetry, φ µ . Thus with Ω φ = Ω, the no-shear condition for this class of plasma flows is ∆x µ ∂µΩ = 0 .
(A25)
Note that while we have been considering only axially symmetric plasma flows, this no shear condition is more generally valid, extending to the case where Ω is a function of t and φ as well as r and θ. However, in this case it is not the perfect-MHD limit of Maxwell's equations. For a cylindrically symmetric disk, the no-shear condition may be used to explicitly construct the non-shearing poloidal magnetic fields by setting B r = BΩ ,θ and B θ = −BΩ,r .
Once the magnitude of B µ is determined at some point along each non-shearing surfaces (e.g. in the equatorial plane), it may be set everywhere by ∇µB µ − B µ u ν ∇ν uµ = 0, which comes directly from Maxwell's equations in covariant form and B µ uµ = 0. Inserting the form in equation (A26) into the first term gives ∇µB µ = 1 √ g ∂ν √ gB ν = 1 √ g (∂r √ gBΩ ,θ − ∂ θ √ gBΩ,r)
= 1 √ g (Ω ,θ ∂r √ gB − Ω,r∂ θ √ gB) = B ν ∂ν ln √ gB .(A27)
The second term can be simplified using equation (A21), B µ u ν ∇νuµ = B µ u ν ∇ν u t tµ + u φ φµ = B µ u ν tµ∂νu t + φµ∂νu φ − u t ∇µtν − u φ ∇µφν = Btu ν ∂νu t + B φ u ν ∂ν u φ + B µ u ν tν∂µu t φν∂µu φ
+ B µ u ν ∇µuν = B µ ut∂µu t + u φ ∂µu t Ω = B µ (ut + Ωu φ ) ∂µu t + u φ u t B µ ∂µΩ = −B µ ∂µ ln u t ,(A28)
where the stationarity and axially symmetry have been used in the third step and the no-shear condition was used in the final step. Therefore, the magnitude B can be determined by
Figure 1 .
1Shown in panels (a) and (b) are vertical and horizontal cross sections of rays propagating backwards from an observer located 45 • above the equatorial plane. The strongly coupled (adiabatic) regime is denoted by the solid (long-dashed) lines for the ordinary (thin) and extraordinary (thick) polarisation eigenmodes. For reference, the null geodesics are drawn in the short dash. In addition, the black hole horizon and the boundary of the ergosphere are also shown.
Figure 3 .
3Same as Figure 2 except with ω∞ = 3ω P max . The integrated fluxes over the region shown are I = 1.0, Q = −4.8 × 10 −6 , U = 2.4 × 10 −7 , and V = 1.2 × 10 −3 . All fluxes are in units of (M/D) −2 ω 2 P max as discussed above equation(53).
Figure 4 .
4Same as Figure 2 except using the polarised emission mechanism (described in sections 4.3 and 4.4) and ignoring refractive plasma effects. The integrated fluxes over the region shown are I = 1.1, Q = 6.0 × 10 −1 , U = −4.9 × 10 −3 , and V = 6.9 × 10 −2 . All fluxes are in units of (M/D) −2 ω 2 P max as discussed above equation(53).
Figure 5 .
5Same as Figure 4 except including refractive plasma effects. The integrated fluxes over the region shown are I = 1.3, Q = −2.2 × 10 −3 , U = 1.2 × 10 −4 , and V = 1.4 × 10 −1 . All fluxes are in units of (M/D) −2 ω 2 P max as discussed above equation (53).
Figure 6 .
6The log of the integrated intensity, total linear polarisation, and circular polarisation are shown as a function of the observation frequency at infinity for when only polarised emission is considered (open triangles), only refractive plasma effects are considered (open squares), and when both are considered (filled circles). As inFigures 1-5, the disk model described in section 5.1 and appendix A orbiting a maximally rotating black hole is viewed from a vantage point 45 • above the equatorial plane. All fluxes are in units of (M/D) −2 ω 2 P max as discussed above equation (53).
Figure 7 .
7Shown is the circular polarisation fraction as a function of the observation frequency at infinity for when only polarised emission is considered (open triangles), only refractive plasma effects are considered (open squares), and when both are considered (filled circles). As inFigures 1-6, the disk model described in section 5.1 and appendix A orbiting a maximally rotating black hole is viewed from a vantage point 45 • above the equatorial plane.
c 0000 RAS, MNRAS 000, 000-000
ACKNOWLEDGEMENTSWe would like thank Eric Agol and Yasser Rathore for a number of useful conversations and comments regarding this work. This research has been supported by NASA grants 5-2837 and 5-12032.APPENDIX A: A THICK DISK MODELIn general, the innermost portions of the accretion flow will take the form of a thick disk. The equation for hydrostatic equilibrium in the limit that Ω ≫ vr is given bywhere here Γ is the adiabatic index, E = −ut, Ω = u φ /u t , and L = −u φ /ut(Blandford & Begelman 2003). Note that, given the metric, any two of the quantities E, Ω, or L, may An example application of this formalism is a cylindrical flow in flat space. In this case, Ω is a function of the cylindrical radius ̟ ≡ r sin θ. The Keplerian disk is a specific example with Ω = ̟ −3/2 . The direction of the magnetic field is determined by, Ω,r = dΩ d̟ sin θ and Ω ,θ = dΩ d̟ r cos θ .The magnitude, B is given byand thuswhere the particular form of b(̟) depends upon the particular form of f (Ω). Therefore,which is precisely the form of a cylindrically symmetric vertical magnetic field.A2.2 Stability to the Magneto-Rotational InstabilityA sufficiently strong non-shearing magnetic field configuration will remain stable to the magneto-rotational instability (MRI). The criterion for instability to the MRI iswhere k is the wave vector of the unstable mode and vA is the Alfvén velocity(Hawley & Balbus 1995). For a nearly vertical magnetic field geometry, stability will be maintained if modes with wavelength less than twice the disk height(for h0 ≃ 0.1 and r ≃ 7 which are typical for the disk pictured inFigure A1. Comparison to equipartion fields can provide some insight into how unrestrictive the stability criterion really is. Given β = Pgas/Pmag and the ideal gas law it is straight forward to show thatwhere T is the ion temperature. Because the ion temperature in a thick disk will typically be on the order of or exceed 10 12 K, the equipartition ωB (β = 1) will be at least an order of magnitude larger than ωP . As a result the field needed to stabilise the disk against the MRI is an order of magnitude less than equipartition strength, and hence is not physically unreasonable.A2.3 Magnetic Field ModelConsidering the restriction placed upon the magnetic field strength discussed in the previous sections, B was set such that in the equatorial plane ωB = ωP + η (r + 10M ) −5/4 ,where the second term provides a canonical scaling at large radii. Here η was chosen to be 0.01. This paper has been typeset from a T E X/ L A T E X file prepared by the author.
. E Agol, 2Ph.D. ThesisAgol E., 1997, Ph.D. Thesis, pp 2+
. J Arons, J J Barnard, ApJ. 302120Arons J., Barnard J. J., 1986, ApJ, 302, 120
. J J Barnard, J Arons, ApJ. 302138Barnard J. J., Arons J., 1986, ApJ, 302, 138
G Bekefi, Radiation Processes in Plasma Physics. New YorkWileyBekefi G., 1966, Radiation Processes in Plasma Physics. New York: Wiley, 1966
. R D Blandford, M C Begelman, astro-ph/0306184Blandford R. D., Begelman M. C., 2003, astro-ph/0306184
T J M Boyd, J J Sanderson, Plasma Dynamics. Thomas Nelson and Sons LTD. London Broderick A., Blandford R.MNRASBoyd T. J. M., Sanderson J. J., 1969, Plasma Dynamics. Thomas Nelson and Sons LTD, London Broderick A., Blandford R., 2003, MNRAS
. B C Bromley, F Melia, S Liu, ApJL. 55583Bromley B. C., Melia F., Liu S., 2001, ApJL, 555, L83
K G Budden, Proc. nullRoy. SocBudden K. G., 1952, Proc. Roy. Soc.
K G Budden, Lectures on Magnetoionic Theory. Gordon and Breach. Connors P. A., Stark R. F., Piran T.New York235224Budden K. G., 1964, Lectures on Magnetoionic Theory. Gordon and Breach, New York Connors P. A., Stark R. F., Piran T., 1980, ApJ, 235, 224
The propagation of electromagnetic waves in plasmas. V L Ginzburg, International Series of Monographs in Electromagnetic Waves. Pergamon2nd rev. and enlGinzburg V. L., 1970, The propagation of electromagnetic waves in plasmas. International Series of Monographs in Electromagnetic Waves, Oxford: Pergamon, 1970, 2nd rev. and enl. ed.
. J F Hawley, S A Balbus, Publications of the Astronomical Society of Australia. 12159Hawley J. F., Balbus S. A., 1995, Publications of the As- tronomical Society of Australia, 12, 159
. J S Heyl, N J Shaviv, D Lloyd, MNRAS. 342134Heyl J. S., Shaviv N. J., Lloyd D., 2003, MNRAS, 342, 134
. T W Jones, S L O'dell, ApJ. 214522Jones T. W., O'Dell S. L., 1977a, ApJ, 214, 522
. T W Jones, S L O'dell, ApJ. 215236Jones T. W., O'Dell S. L., 1977b, ApJ, 215, 236
. A Laor, H Netzer, T Piran, MNRAS. 242560Laor A., Netzer H., Piran T., 1990, MNRAS, 242, 560
. D B Melrose, M Gedalin, Phys. Rev. E. 6427401Melrose D. B., Gedalin M., 2001, Phys. Rev. E, 64, 027401
. S A Petrova, A&A. 360592Petrova S. A., 2000, A&A, 360, 592
. S A Petrova, A&A. 3831067Petrova S. A., 2002, A&A, 383, 1067
. M Ruszkowski, M C Begelman, ApJ. 573485Ruszkowski M., Begelman M. C., 2002, ApJ, 573, 485
Radiative processes in astrophysics. G B Rybicki, A P Lightman, Wiley-Interscience393New YorkRybicki G. B., Lightman A. P., 1979, Radiative processes in astrophysics. New York, Wiley-Interscience, 1979. 393 p.
. V N Sazonov, JETP. 561075Sazonov V. N., 1969, JETP, 56, 1075
. V N Sazonov, V N Tsytovich, Radiofizika. 111287Sazonov V. N., Tsytovich V. N., 1968, Radiofizika, 11, 1287
. N J Shaviv, J S Heyl, Y Lithwick, MNRAS. 306333Shaviv N. J., Heyl J. S., Lithwick Y., 1999, MNRAS, 306, 333
. P Weltevrede, B W Stappers, L J Horn, R T Edwards, astro-ph/0309578Weltevrede P., Stappers B. W., Horn L. J. v. d., Edwards R. T., 2003, astro-ph/0309578
. K C Westfold, ApJ. 130241Westfold K. C., 1959, ApJ, 130, 241
| [] |
[
"Fast and Accurate Prediction of Material Properties with Three-Body Tight-Binding Model for the Periodic Table",
"Fast and Accurate Prediction of Material Properties with Three-Body Tight-Binding Model for the Periodic Table"
] | [
"Kevin F Garrity \nMaterials Measurement Laboratory\nNational Institute of Standards and Technology\n20899GaithersburgMD\n",
"Kamal Choudhary \nMaterials Measurement Laboratory\nNational Institute of Standards and Technology\n20899GaithersburgMD\n"
] | [
"Materials Measurement Laboratory\nNational Institute of Standards and Technology\n20899GaithersburgMD",
"Materials Measurement Laboratory\nNational Institute of Standards and Technology\n20899GaithersburgMD"
] | [] | Parameterized tight-binding models fit to first principles calculations can provide an efficient and accurate quantum mechanical method for predicting properties of molecules and solids. However, well-tested parameter sets are generally only available for a limited number of atom combinations, making routine use of this method difficult. Furthermore, many previous models consider only simple two-body interactions, which limits accuracy. To tackle these challenges, we develop a density functional theory database of nearly one million materials, which we use to fit a universal set of tight-binding parameters for 65 elements and their binary combinations. We include both twobody and three-body effective interaction terms in our model, plus self-consistent charge transfer, enabling our model to work for metallic, covalent, and ionic bonds with the same parameter set. To ensure predictive power, we adopt a learning framework where we repeatedly test the model on new low energy crystal structures and then add them to the fitting dataset, iterating until predictions improve. We distribute the materials database and tools developed in this work publicly. | 10.1103/physrevmaterials.7.044603 | [
"https://export.arxiv.org/pdf/2112.11585v3.pdf"
] | 245,385,183 | 2112.11585 | 894d2118184fef598f7649cdda6b42f5b0304e23 |
Fast and Accurate Prediction of Material Properties with Three-Body Tight-Binding Model for the Periodic Table
Kevin F Garrity
Materials Measurement Laboratory
National Institute of Standards and Technology
20899GaithersburgMD
Kamal Choudhary
Materials Measurement Laboratory
National Institute of Standards and Technology
20899GaithersburgMD
Fast and Accurate Prediction of Material Properties with Three-Body Tight-Binding Model for the Periodic Table
(Dated: April 28, 2023)
Parameterized tight-binding models fit to first principles calculations can provide an efficient and accurate quantum mechanical method for predicting properties of molecules and solids. However, well-tested parameter sets are generally only available for a limited number of atom combinations, making routine use of this method difficult. Furthermore, many previous models consider only simple two-body interactions, which limits accuracy. To tackle these challenges, we develop a density functional theory database of nearly one million materials, which we use to fit a universal set of tight-binding parameters for 65 elements and their binary combinations. We include both twobody and three-body effective interaction terms in our model, plus self-consistent charge transfer, enabling our model to work for metallic, covalent, and ionic bonds with the same parameter set. To ensure predictive power, we adopt a learning framework where we repeatedly test the model on new low energy crystal structures and then add them to the fitting dataset, iterating until predictions improve. We distribute the materials database and tools developed in this work publicly.
Parameterized tight-binding models fit to first principles calculations can provide an efficient and accurate quantum mechanical method for predicting properties of molecules and solids. However, well-tested parameter sets are generally only available for a limited number of atom combinations, making routine use of this method difficult. Furthermore, many previous models consider only simple two-body interactions, which limits accuracy. To tackle these challenges, we develop a density functional theory database of nearly one million materials, which we use to fit a universal set of tight-binding parameters for 65 elements and their binary combinations. We include both twobody and three-body effective interaction terms in our model, plus self-consistent charge transfer, enabling our model to work for metallic, covalent, and ionic bonds with the same parameter set. To ensure predictive power, we adopt a learning framework where we repeatedly test the model on new low energy crystal structures and then add them to the fitting dataset, iterating until predictions improve. We distribute the materials database and tools developed in this work publicly.
I. INTRODUCTION
With the growth in computing power over the past several decades, first principles electronic structure calculations have come to play an ever larger role in materials physics and materials design [1,2]. The increasing use of high-throughput computing techniques has allowed the construction of several databases containing calculated properties for thousands of materials [3][4][5][6][7][8]. Nevertheless, there remain many types of calculations that are too computationally expensive to consider systematically, even at the level of relatively inexpensive semi-local density functional theory (DFT). Examples of these calculations include harmonic and anharmonic phonons [9], thermal conductivity [10], thermoelectrics [11], defect energetics [12] , surfaces [13], grain-boundaries [14], phase-diagrams [15,16], disordered materials [17], dopants [18], structure prediction [19], and molecular dynamics [20].
Building models based on DFT calculations is a major way to bridge the gap between existing databases and new properties or structures, but models are often developed on a case-by-case basis for single materials systems, which doesn't scale easily for materials design applications. Machine learning approaches [21] with limited physics built-in have emerged in recent years as a very promising way to incorporate the large amount of DFT data available, but they can have difficulty extrapolating beyond their training data to new situations [22]. In this work, we aim to develop a physics-based model of the energy and electronic structure of materials, which we fit to a large database of DFT calculations using a combination of traditional and machine learning-inspired approaches.
Our underlying model is a tight-binding (TB) model where the TB Hamiltonian depends on a parameterized function of the crystal structure [13,21,[23][24][25][26][27][28][29][30][31][32][33][34][35][36][37][38][39], including the effects of charge self-consistency [31,40,41]. This * [email protected] formalism contains the minimal description of quantum mechanics and electrostatics necessary to describe chemical bonding. The difficulty with this approach is producing a model that is both simple to fit and and efficient to evaluate while maintaining predictive accuracy. Here, we go beyond previous works through a combination of two ideas. First, in addition to the typical twobody (two-center) atom-atom interactions, we use threebody (three-center) terms [42][43][44][45][46] to predict the tightbinding Hamiltonian from atomic positions. Including explicit three-body terms allows the Hamiltonian matrix elements between a pair of atoms to be environment dependent [47][48][49][50]. This creates a more transferable model that can be applied with equal accuracy to many crystal structures and that better takes advantage of the abundance of DFT data available from modern computational resources. Previously, three center expansions have been used most prominently [44][45][46] to approximate the exchange-correlation terms in tight-binding approaches that expand specific interactions from DFT [24,31,43,51,52]. We instead include three-body interactions as general fitting parameters for both onsite and intersite matrix elements.
Second, we fit coefficients for 65 elemental systems (the main group and transition metals) as well as any binary combination of those elements, resulting in 2080 combinations. Within our framework, materials with three or more elements can be treated, but require three-body interactions between three different elements that go beyond the fitting in this work. Our total database consists of over 800,000 DFT calculations. Furthermore, we employ an active learning-inspired approach to continue generating new fitting data until our model performs well on out-of-sample tests. By fitting our model to a wide range of elemental and binary compounds, we hope to make a model that can be used in high-throughput or on-demand computing applications that are not possible with individually fit tight-binding models. Given a crystal structure, our three-body tight-binding model can calculate the band structure, total energy, forces, and stresses at a fraction of the computational cost of a direct DFT calculation. This combination of built-in physics, accuracy, and scope should allow our model to be applied for various materials design applications that are difficult with other techniques.
We distribute a publicly available implementation of the present work and the fitting parameters at https: //github.com/usnistgov/ThreeBodyTB.jl in the Julia programming language, as well as a python interface at https://github.com/usnistgov/tb3py. The documentation is available at https://pages.nist.gov/ ThreeBodyTB.jl/, including examples.
This work is organized as follows. Sec. II presents our tight-binding formalism, Sec. III describes a method to generate initial TB parameters for a single material via atomic projection, Sec. IV provides details of the fitting process and dataset generation, Sec. V shows tests of the model energy and electronic structure, and Sec. VI presents conclusions.
II. TIGHT-BINDING FORMALISM
A. Overview
The basic idea of TB is to perform electronic structure calculations in a minimal basis [1]. For example, a calculation of fcc Al will have one s-orbital and three porbitals, for a total of four basis functions, rather than potentially hundreds of plane-waves or similar basis functions. Given a DFT calculation for a particular material, it is possible to use Wannier functions [53][54][55] or related techniques [56][57][58] to generate tight-binding Hamiltonians for that material. However, our goal is to predict the Hamiltonian directly from the crystal structure without performing an expensive DFT calculation first, allowing us to inexpensively predict the energy, band structure, and related properties.
Our tight-binding model is largely similar to formalism from density functional tight-binding including charge self-consistency [31,40,41], as well as the Navy Research Lab (NRL) tight-binding formalism [13]. Here, we only include a brief overview of standard aspects of tight-binding, interested readers can consult the review article such as [25,40,51,59,60] for a more pedagogical introduction.
In addition to the band structure, we need to be able calculate the total energy, E. Many tight-binding formalisms make a distinction between the band structure and non-band structure contributions to the total energy, with the latter grouped together as a repulsive energy contribution, E rep . We instead follow the NRL philosophy of grouping all the energy terms together by shifting
the DFT eigenvalues, i , E = occ. i i + E rep = occ. i i (1) i = i + E rep /N(2)
where i are the shifted eigenvalues and N is the total number of electrons. After performing this shift, there is no need for a separate repulsive energy term. Below, we assume this shift has been done and do not write the prime explicitly. We use non-orthogonal basis orbitals, where the tightbinding orbitals φ µ have a non-trivial overlap matrix S µν = φ µ |φ ν . The Hamiltonian is also a matrix H µν = φ µ | H |φ ν . The eigenvectors, ψ i = µ c i µ φ µ , with coefficients c i µ and eigenvalues, i , come from solving a generalized eigenvalue equation Hψ i = i Sψ i . The total energy is
E = i f i µν c i * µ c i ν H µν(3)
where f i is the occupancy of eigenstate i. For periodic systems, there is also an average over k-points, which is implicit above. Once we have the Hamiltonian, solving the model involves diagonalizing a matrix with four (sp) or nine (spd) basis functions per atom, which is computationally inexpensive for small-to-medium sized systems. The orbitals we chose for each element are listed in Fig. 1. The overlap matrix can be fit easily from the atomic orbitals. Thus, predicting a set of matrix elements, H µν , that accurately reproduce the energy and band structure directly from the crystal structure is the main challenge of developing a parameterized tight-binding model.
B. Charge self-consistency
A major limitation of the above formalism is that it does not include any explicit role for charge transfer or the resulting long-range Coulomb interaction. While this may be adequate for elemental systems and some metal alloys, explicitly including self-consistent electrostatics greatly improves fitting for ionic systems, as the remaining interactions become short-ranged [25,31,40,41]. In this work, we do not consider magnetism, but spin self-consistency can be included along similar lines. The cost of including self-consistency is that the eigenvalue problem must be solved several times to reach convergence, in a manner similar to solving the Kohn-Sham equations. In practice, the smaller basis sets used in tight-binding reduce the convergence difficulties, and similar charge mixing schemes can be employed [61].
The key variable for charge self-consistency is ∆q I , the excess charge on ion I, relative to a neutral atom:
$$q_I = \sum_i f_i \sum_{\mu \in I} \sum_\nu \frac{1}{2} \left( c^{i*}_\mu c^i_\nu + c^i_\mu c^{i*}_\nu \right) S_{\mu\nu} \qquad (4)$$
$$\Delta q_I = q_I - q^0_I \qquad (5)$$
where $q^0_I$ is the valence ionic charge. $\Delta q_I$ enters the expression for the Coulomb energy $E_{coul}$,
$$E_{coul} = \frac{1}{2} \sum_{IJ} \gamma_{IJ} \Delta q_I \Delta q_J \qquad (6)$$
where $\gamma_{IJ}$ is the Coulomb operator:
$$\gamma_{IJ} = \begin{cases} U_I & I = J \\ \mathrm{erf}(C_{IJ} R_{IJ})/R_{IJ} & I \neq J \end{cases} \qquad (7)$$
$$C_{IJ} = \sqrt{\frac{\pi/2}{1/U_I^2 + 1/U_J^2}} \qquad (8)$$
At long distances, $\gamma_{IJ}$ follows $1/R_{IJ}$, where $R_{IJ}$ is the distance between ions $I$ and $J$. $U_I$ is an onsite Hubbard term, which we fit to the changes in atomic eigenvalues for different numbers of electrons. The $\mathrm{erf}(C_{IJ} R_{IJ})$ term reduces the interaction between nearby orbitals due to orbital overlap and goes to 1 at long distances; see Refs. [31,40,41] for details. Incorporating the Coulomb term, our expression for the total energy is now
$$E = \sum_i f_i \sum_{\mu\nu} c^{i*}_\mu c^i_\nu H_{\mu\nu} + \frac{1}{2} \sum_{IJ} \gamma_{IJ} \Delta q_I \Delta q_J \qquad (9)$$
and the Hamiltonian used to calculate the eigenvectors and eigenvalues must be modified to
$$H'_{\mu\nu} = H_{\mu\nu} + \frac{1}{2} S_{\mu\nu} \sum_K \left( \gamma_{IK} + \gamma_{JK} \right) \Delta q_K \qquad (10)$$
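A minimal sketch of Eqs. 6-8 is shown below, using SpecialFunctions.jl for erf; the Hubbard parameters, charges, and distances are placeholder values, not fitted quantities:

```julia
using SpecialFunctions: erf

# Coulomb operator gamma_IJ of Eqs. 7-8 (atomic units for simplicity).
function gamma_coul(U_I, U_J, R_IJ; same_atom::Bool = false)
    same_atom && return U_I                       # onsite Hubbard term
    C = sqrt((pi / 2) / (1 / U_I^2 + 1 / U_J^2))  # Eq. 8
    return erf(C * R_IJ) / R_IJ                   # -> 1/R at long range
end

# Coulomb energy of Eq. 6 for placeholder charges dq, parameters U,
# and distance matrix R; Eq. 10 would then shift H_{mu,nu} accordingly.
dq = [0.3, -0.3]
U  = [0.4, 0.5]
R  = [0.0 4.0; 4.0 0.0]
E_coul = 0.5 * sum(gamma_coul(U[I], U[J], R[I, J]; same_atom = I == J) *
                   dq[I] * dq[J] for I in 1:2, J in 1:2)
println("E_coul = ", E_coul)
```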
C. Two-body Intersite Interactions
The largest contributions to the intersite Hamiltonian matrix elements $H_{\mu\nu}$ are the two-body interactions between orbitals $\mu$ and $\nu$. Following the Slater-Koster [23] formalism, these terms can be factored into functions that depend solely on the distance between the two atoms and symmetry factors that depend on the orbital types (s, p, or d) and their relative orientations. The symmetry factors are tabulated by the Slater-Koster matrix elements $M^x_{ij}$, where $i$ and $j$ are the orbitals, and $x$ is an index over a number of components (traditionally labeled $\sigma$, $\pi$, $\delta$):
$$H^{2bdy}_{iI,jJ} = \sum_x f^x_{iI,jJ}(R_{IJ}) \, M^x_{ij} \qquad (11)$$
Here $H^{2bdy}_{iI,jJ}$ are the two-body Hamiltonian matrix elements between orbital $i$ on atom $I$ and orbital $j$ on atom $J$. These depend on $f^x(R_{IJ})$, which are functions of the distance between the atoms. We expand the function of distance in terms of the Laguerre polynomials $L_x(d)$ times a decaying exponential:
$$f_{iI,jJ}(d) = e^{-ad} \sum_x f^x_{iI,jJ} L_x(d), \qquad (12)$$
where $f^x_{iI,jJ}$ are fitting coefficients that depend on the types of atoms $I$, $J$ and the orbital types $i$, $j$, and $a$ is a universal decay constant set to 2 Bohr ≈ 1.058 Å. The Laguerre polynomials are chosen because they are complete and orthogonal with respect to the inner product $\langle f, g \rangle = \int_0^\infty f(x) g(x) e^{-x} \, dx$ and result in numerically stable fits. We use five terms in the above expansion to fit the two-body Hamiltonian matrix elements.
We use an identical formalism to fit the overlap matrix elements, except that we use seven terms, as there is less danger of overfitting. Unlike the Hamiltonian, the overlap matrix elements approximate overlap integrals that are explicitly two-body, so there is no need for three-body interactions.
The decay constant parameter $a$ can be optimized to improve the convergence speed of the Laguerre expansion. Because the overlaps themselves and the intersite Hamiltonian are due to orbital overlap, the optimal choice for $a$ is close to the decay length of the valence atomic orbitals we include. These decay lengths are set by the valence orbital eigenvalues and are therefore in the same range for all elements. We find that any value near 1 Å is reasonable and gives similar results. We note that by fixing the decay constant, the two-body Hamiltonian now depends linearly on the fitting coefficients $f^x_{iI,jJ}$, which greatly simplifies the fitting procedure. We will design the other terms in our model such that they are linear as well.
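As a concrete illustration of Eq. 12, the following sketch evaluates the radial function from the standard three-term Laguerre recurrence; the coefficients are placeholders, not fitted values, and the unit interpretation of a follows the text:

```julia
# Laguerre polynomials L_0 .. L_{n-1}(d) via the recurrence
# m L_m(x) = (2m - 1 - x) L_{m-1}(x) - (m - 1) L_{m-2}(x).
function laguerre_values(d, n)
    L = zeros(n)
    L[1] = 1.0                       # L_0 = 1
    n > 1 && (L[2] = 1.0 - d)        # L_1 = 1 - d
    for m in 2:n-1
        L[m+1] = ((2m - 1 - d) * L[m] - (m - 1) * L[m-1]) / m
    end
    return L
end

# Two-body radial function of Eq. 12: f(d) = exp(-a d) * sum_x f_x L_x(d).
# Five Laguerre terms, as in the text; the coefficients fx are placeholders.
function radial_f(d, fx; a = 2.0)    # decay constant per the text
    return exp(-a * d) * sum(fx .* laguerre_values(d, length(fx)))
end

fx = [-3.0, 1.5, -0.4, 0.1, -0.02]   # placeholder fitting coefficients
println(radial_f(2.35, fx))
```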
FIG. 2. Schematic of three-body terms. The direct two-body interaction between the $p_z$-orbital on atom A (left) and the s-orbital on atom B (right), represented by the solid blue line, is zero by symmetry. However, atom X (top) breaks the mirror symmetry and allows a non-zero $H_{p_z A, sB}$ via the three-body interaction (dashed lines).
D. Three-body Intersite Interactions
Most tight-binding formalisms ignore contributions to the intersite Hamiltonian matrix elements that go beyond the two-body terms we consider above. While this is usually adequate for fitting to a single structure at various volumes or with small distortions, it leads to well-known difficulties when fitting to multiple structures, which we discuss further in Sec. V A. In such situations, the best matrix elements for each structure cannot be fit with a single function of distance.
While there are various methods to alleviate this problem by including neighborhood-dependent hoppings [47-50], here we directly include three-body terms in our fitting [42]. For example, consider $H_{p_z A, sB}$, the interaction between the $p_z$-orbital on atom A and the s-orbital on atom B in Fig. 2. Due to the symmetry of the orbitals, the direct two-body interaction (solid line) is zero. However, the presence of other atoms, in this case atom X, will modify this interaction. Here, atom X allows a non-zero interaction by breaking the mirror symmetry along the line from A to B. This three-body interaction can be represented as two hoppings (dashed lines in Fig. 2): A to X, and then X to B. If we assume atom X has the symmetry of an s-orbital, then this pair of hoppings is indeed non-zero. Thus, including three-body interactions in this way allows atom X to modify the A-B interaction.
We implement this idea in our model by including a contribution to the intersite matrix elements from nearby third atoms:
$$H^{3bdy}_{iI,jJ} = \sum_K g_{iI,jJ,K}(\mathbf{R}_I, \mathbf{R}_J, \mathbf{R}_K) \, M_{is} M_{js} \qquad (13)$$
Here the sum over K is a sum over nearby third atoms, and the symmetry factors are a product of two Slater-Koster symmetry factors, with the symmetry of the third atom assumed to be an s-orbital, i.e. isotropic. This symmetry assumption can be viewed either as the simplest assumption or as the first term in an expansion, and it will not break any symmetries required by the space group. However, as discussed above, the three-body term can correctly split certain degeneracies or allow for nonzero couplings if those "extra" symmetries are artifacts of assuming a purely two-body interaction.
The fitting function $g$ can in principle be a complicated function of the three atom positions, which creates potential problems with over-fitting. In order to make progress, we make the simplifying assumption that the three-body terms can be expanded in terms of the three distances $R_{IJ}$, $R_{IK}$, and $R_{JK}$ only, and furthermore, that only a few terms are necessary in the expansion:
$$g_{iI,jJ,K}(\mathbf{R}_I, \mathbf{R}_J, \mathbf{R}_K) = e^{-a(R_{IK}+R_{JK})} \left[ g_1 L_0(R_{IK}) L_0(R_{JK}) + g_2 L_0(R_{IK}) L_1(R_{JK}) + g_3 L_1(R_{IK}) L_0(R_{JK}) + g_4 L_0(R_{IK}) L_0(R_{JK}) L_0(R_{IJ}) e^{-a R_{IJ}} \right] \qquad (14)$$
Here, there are four fitting coefficients $g_i$, multiplied by specific products of Laguerre polynomials times decaying exponentials. The $g_i$ depend on the types of atoms $I$, $J$, and $K$, as well as the orbitals $i$ and $j$, but we suppress these indices for clarity. We find through experimentation that these four terms have the largest contribution in typical cases. In the case where atoms $I$ and $J$ are the same type, there are only three independent coefficients, as $g_2 = g_3$ by permutation symmetry. Importantly, the contribution from the third atom decays exponentially as it moves further away from either of the primary two atoms, which constrains the contributions to be short-ranged.
We note that the self-consistent electrostatic terms introduced in Sec. II B can also create effective three-body interactions. However, there is no issue with double counting, as the three-body terms introduced in this section are fit after the effects of charge self-consistency have already been included.
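A minimal sketch of the three-body radial factor of Eq. 14, reusing the `laguerre_values` helper from the previous sketch, with placeholder coefficients:

```julia
# Three-body radial factor of Eq. 14 for one third atom K.
# g = (g1, g2, g3, g4) are placeholder fitting coefficients.
function g3body(R_IJ, R_IK, R_JK, g; a = 2.0)
    LIK = laguerre_values(R_IK, 2)   # L_0, L_1 of R_IK
    LJK = laguerre_values(R_JK, 2)   # L_0, L_1 of R_JK
    L0IJ = 1.0                       # L_0(R_IJ) = 1 identically
    return exp(-a * (R_IK + R_JK)) * (
        g[1] * LIK[1] * LJK[1] +
        g[2] * LIK[1] * LJK[2] +
        g[3] * LIK[2] * LJK[1] +
        g[4] * LIK[1] * LJK[1] * L0IJ * exp(-a * R_IJ))
end

# The contribution to H^{3bdy}_{iI,jJ} from atom K (Eq. 13) would multiply
# this factor by the two s-symmetry Slater-Koster factors M_is * M_js.
```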
E. Onsite Interactions
The onsite matrix elements $H_{iI,jI}$ require significant care to fit, as they effectively incorporate the contributions from the normal repulsive energy term (see Sec. II A). The one-body term is due to the non-spin-polarized, spherically symmetric atomic eigenvalues $\epsilon_{iI}$. The two-body terms modify the orbital energies due to a single nearby atom. They are split into an average term and a crystal-field term. The former changes the average eigenvalue of a set of orbitals (e.g., p-orbitals) due to a nearby atom, while the latter can split the degeneracy of a set of orbitals depending on the site symmetry. Finally, we include a simple three-body term discussed below:
$$H_{iI,jI} = \epsilon_{iI} \delta_{ij} + H^{avg}_{iI} \delta_{ij} + H^{cf}_{iI,jI} + H^{3bdy}_I \delta_{ij} \qquad (15)$$
$$H^{avg}_{iI} = \sum_J h_{iIJ}(R_{IJ}) \qquad (16)$$
$$H^{cf}_{iI,jI} = \sum_J h^{cf}_{iI,jJ}(R_{IJ}) \, M_{is} M_{js} \qquad (17)$$
Here, $\delta_{ij}$ is the Kronecker delta, $H^{avg}_{iI}$ is the average interaction, $H^{cf}_{iI,jI}$ is the crystal-field interaction, and $H^{3bdy}_I$ is the three-body interaction. Like the two-body inter-atomic term (see Eq. 12), the average interaction is expanded as Laguerre polynomials times a decaying exponential. The crystal-field term is very similar, except that it includes a pair of symmetry factors. Similar to the three-body intersite case discussed above, we assume the second atom contributes with isotropic s-orbital symmetry. The crystal-field term allows the mixing of different orbitals on the same atom (e.g., s and $p_x$) if the atom is on a low-symmetry site.
$$h_{iIJ}(d) = e^{-ad} \sum_x h^x_{iIJ} L_x(d) \qquad (19)$$
$$h^{cf}_{iI,jJ}(d) = e^{-ad} \sum_x h^{cf,x}_{iI,jJ} L_x(d) \qquad (20)$$
$h^x_{iIJ}$ and $h^{cf,x}_{iI,jJ}$ are the fitting coefficients for the average and crystal-field terms, respectively. We fit them with four terms in the expansion ($x = 1$-$4$).
Finally, there is a three-body average onsite interaction. To simplify the fitting, we apply this term to all orbitals on an atom equally, without an orbital dependence:
$$H^{3bdy}_I = \sum_{JK} h^{3bdy}_{IJK}(R_{IJ}, R_{IK}, R_{JK}) \qquad (21)$$
This is again expanded into four terms:
$$h^{3bdy}_{IJK}(R_{IJ}, R_{JK}, R_{IK}) = e^{-a(R_{IJ} + R_{IK} + R_{JK})} \left[ h^{3bdy}_1 L_0(R_{IJ}) L_0(R_{JK}) L_0(R_{IK}) + h^{3bdy}_2 L_1(R_{IJ}) L_0(R_{JK}) L_0(R_{IK}) + h^{3bdy}_3 L_0(R_{IJ}) L_1(R_{JK}) L_0(R_{IK}) + h^{3bdy}_4 L_0(R_{IJ}) L_0(R_{JK}) L_1(R_{IK}) \right] \qquad (22)$$
$h^{3bdy}_x$ are the four fitting coefficients, which also depend on the atom types $I$, $J$, and $K$. In the case where atoms $J$ and $K$ are of the same type, only three fitting coefficients are independent due to permutation symmetry. We discuss the relative magnitudes of typical three-body terms in an example material in supplementary materials Sec. S3 [62].
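For completeness, a corresponding sketch of the onsite three-body radial function of Eq. 22, again reusing the `laguerre_values` helper and placeholder coefficients:

```julia
# Onsite three-body radial function of Eq. 22 for one (J, K) pair.
# h = (h1, h2, h3, h4) are placeholder fitting coefficients.
function h3body_onsite(R_IJ, R_JK, R_IK, h; a = 2.0)
    LIJ = laguerre_values(R_IJ, 2)
    LJK = laguerre_values(R_JK, 2)
    LIK = laguerre_values(R_IK, 2)
    return exp(-a * (R_IJ + R_IK + R_JK)) * (
        h[1] * LIJ[1] * LJK[1] * LIK[1] +
        h[2] * LIJ[2] * LJK[1] * LIK[1] +
        h[3] * LIJ[1] * LJK[2] * LIK[1] +
        h[4] * LIJ[1] * LJK[1] * LIK[2])
end

# H^{3bdy}_I (Eq. 21) sums this over all nearby pairs (J, K) and is
# applied equally to every orbital on atom I.
```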
III. ATOMIC PROJECTION OF WAVEFUNCTIONS
A. Projection Method
In order to fit the model defined in Sec. II, we need data from DFT calculations. While we will primarily concentrate on fitting to energies and eigenvalues, as discussed later, we need a reasonable set of tight-binding parameters to start the fitting process. A difficulty arises from the fact that even a set of isolated bands can be described by many different tight-binding models, as it is always possible to apply unitary transformations to a Hamiltonian without changing the eigenvalues. Furthermore, the conduction bands we wish to describe with tight-binding are generically entangled with both higher-energy atomic levels and free-electron bands that we cannot describe with our model. Maximally-localized Wannier functions and similar methods [53,54,56] are well-known ways to generate a tight-binding Hamiltonian. However, because they require an optimization step, they are not guaranteed to resemble atomic-like orbitals in general cases, and they can depend discontinuously on atomic positions, making them a poor choice for the fitting data we need. Symmetry-adapted Wannier functions can improve the situation for some structures, but the same issues remain for broken-symmetry structures [63].
We want a procedure to generate the best tight-binding matrix for our goal, which is to serve as the data for fitting the model described in Sec. II. We therefore use a non-iterative atomic orbital projection procedure. Projection schemes have the advantage of maintaining the correct symmetry of the tight-binding Hamiltonian and do not require optimization. Following similar schemes [64,65], the basic idea involves projecting the large $N$-band Kohn-Sham Hamiltonian $H^{KS}$ at a given k-point onto a smaller number of $M$ atomic orbitals:
$$H^{TB}_{\alpha,\beta} = \langle \phi_\alpha | H^{KS} | \phi_\beta \rangle \qquad (23)$$
$$\approx \sum_n \langle \phi_\alpha | \psi_n \rangle E_n \langle \psi_n | \phi_\beta \rangle, \qquad (24)$$
where $\phi_\alpha$ are atomic-like orbitals, and $\psi_n$ and $E_n$ are the Kohn-Sham eigenvectors and eigenvalues in a plane-wave basis.
A difficulty with Eq. 24 is how to select the best $M$ bands that are appropriate to describe with atomic-like orbitals in the case of entanglement, which is generic for conduction bands. We proceed by defining a set of projection coefficients $B_{\alpha,n} = \langle \phi_\alpha | \psi_n \rangle$. Then, we consider the projection matrix for eigenvectors:
$$(B^\dagger B)_{n,m} = \langle \psi_n | \hat{P} | \psi_m \rangle = P_{n,m} \qquad (25)$$
The diagonal elements of this N × N matrix are the projectibility of each band [64,65].
Our key approximation is to represent the projection matrix $P$ with a new matrix $\tilde{P}$, created from the $M$ eigenvectors of $P$ that have the largest eigenvalues.
$$P_{i,j} = \sum_{m,n=1}^{N} Q_{i,n} \, p_{n,m} \, (Q_{j,m})^\dagger \qquad (26)$$
$$\tilde{P}_{i,j} = \sum_{n=1}^{M} Q_{i,n} (Q_{j,n})^\dagger \qquad (27)$$
$$\tilde{P} = \tilde{B}^\dagger \tilde{B} \qquad (28)$$
Here, $Q_{i,n}$ are the $N$ eigenvectors of $P$, and $p_{n,m}$ is the diagonal matrix of eigenvalues. The sum in Eq. 27 is over the $M$ largest eigenvalues, and Eq. 28 defines $\tilde{B}$, which is an $M \times N$ matrix. $\tilde{P}$ projects onto the highest-projectibility $M$-dimensional subspace to represent the $M$ atomic wavefunctions. By construction, it has $M$ eigenvalues equal to 1, with the rest equal to zero. Using $\tilde{P}$, we can now apply the philosophy of Eq. 24 without difficulty:
$$H^{TB} = B \tilde{P} E \tilde{P}^\dagger B^\dagger \qquad (29)$$
$$H^{TB} = B \tilde{B}^\dagger \tilde{B} E \tilde{B}^\dagger \tilde{B} B^\dagger \qquad (30)$$
Here, $E$ is a diagonal $N \times N$ matrix of the original eigenvalues. By approximating $P$ with its $M$ eigenvectors with large eigenvalues, we have effectively selected the $M$-dimensional subspace of the original larger Hamiltonian that is best (most atomic-like), thus avoiding the difficulty of the naive Eq. 24. This projection scheme can then be applied to a grid of k-points, and the resulting TB Hamiltonian can be Fourier-transformed onto a real-space grid. Because the original atomic-like states are localized in real space, the real-space Hamiltonian will also be localized, although not maximally localized.
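The projection of Eqs. 25-30 reduces to a few dense linear-algebra operations at each k-point; below is a minimal sketch with random placeholder data standing in for the Kohn-Sham output:

```julia
using LinearAlgebra

N, M = 20, 8                        # Kohn-Sham bands N, atomic orbitals M

# Placeholder data standing in for DFT output at one k-point:
B  = randn(ComplexF64, M, N)        # B_{alpha,n} = <phi_alpha | psi_n>
En = sort(randn(N))                 # Kohn-Sham eigenvalues

P   = B' * B                        # N x N projection matrix, Eq. 25
F   = eigen(Hermitian(P))
idx = sortperm(F.values, rev = true)[1:M]   # M largest eigenvalues
Q   = F.vectors[:, idx]
Ptilde = Q * Q'                     # Eqs. 26-28: rank-M approximation of P

# Eq. 30: project the eigenvalues onto the atomic-like subspace
# (Ptilde is Hermitian, so Ptilde' = Ptilde).
H_TB = B * Ptilde * Diagonal(En) * Ptilde' * B'
```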
B. Implementation Details
The projection method described above picks out the highest-projectibility Hamiltonian for the set of $M$ atomic orbitals, and can be used to separate both semicore states and high-energy states from the valence and conduction bands we wish to describe. Ideally, it also maintains the symmetry of the tight-binding Hamiltonian. However, we note that the original selection of $N$ bands at each k-point can be a subtle source of symmetry breaking, as the $N$-th and ($N$+1)-th bands can be degenerate, and selecting only one of these at random introduces unwanted symmetry breaking. Therefore, we make sure to throw away the highest eigenvalues at each k-point before projection.
A second problem can occur if the desired set of atomic orbitals includes high energy states that are not well described by the N Kohn-Sham bands in the original DFT calculation. In this case, the trace of the M × M matrix BB † will be much less than M . This situation can be monitored and can usually be solved by increasing N .
A more serious difficulty is that the projection scheme does not reproduce even the occupied states exactly. While it is impossible to reproduce the larger set of unoccupied bands with only tight-binding orbitals, it is desirable to reproduce the occupied bands, and possibly the lowest conduction bands, for our eventual fitting procedure. Fortunately, the occupied orbitals are almost always well described by atomic orbitals, and our atomic-projected Hamiltonians require only small adjustments.
We perform this adjustment by first deciding on an energy range below which the eigenvalues should be exact, defining a smooth cutoff function $f(E)$ that is one below some cutoff energy and goes to zero at higher energies. Then, we can adjust the TB eigenvalues to match the DFT eigenvalues while keeping the TB eigenvectors unchanged:
$$H^{TB} = \Psi E^{TB} \Psi^\dagger \qquad (31)$$
$$H^{Adj} = \Psi E^{Adj} \Psi^\dagger \qquad (32)$$
Here, $\Psi$ are the eigenvectors, and $E^{TB}$ and $E^{Adj}$ are diagonal matrices of tight-binding and adjusted eigenvalues. The adjusted eigenvalues are
$$\epsilon^{Adj}_n = f(\epsilon^{DFT}_n)\, \epsilon^{DFT}_n + \left(1 - f(\epsilon^{DFT}_n)\right) \epsilon^{TB}_n, \qquad (33)$$
where $\epsilon^{Adj}_n$, $\epsilon^{TB}_n$, and $\epsilon^{DFT}_n$ are the adjusted, tight-binding, and DFT eigenvalues, respectively. For this procedure to work, it is necessary to identify which DFT eigenvalue should be matched with each TB eigenvalue. We do this by comparing the energies and the eigenvector projections onto the DFT bands to find the best match. We take the cutoff energy to be the lowest eigenvalue above the Fermi level, and the cutoff range is 3 eV.
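A minimal sketch of the adjustment of Eqs. 31-33, assuming the TB and DFT eigenvalues have already been matched index-by-index, and using a logistic function as a placeholder for the smooth cutoff f(E):

```julia
using LinearAlgebra

# Smooth cutoff: ~1 well below E_cut, ~0 well above, over a ~3 eV window
# (a logistic placeholder for the f(E) described in the text).
f_cut(E; E_cut = 0.0, width = 3.0) = 1 / (1 + exp((E - E_cut) / (width / 4)))

# Psi: TB eigenvectors (columns); eps_tb, eps_dft: matched eigenvalues.
# Returns the adjusted Hamiltonian of Eq. 32 (overlap factors suppressed,
# following the form of Eqs. 31-32 in the text).
function adjust_hamiltonian(Psi, eps_tb, eps_dft; E_cut = 0.0)
    w = f_cut.(eps_dft; E_cut = E_cut)
    eps_adj = w .* eps_dft .+ (1 .- w) .* eps_tb     # Eq. 33
    return Psi * Diagonal(eps_adj) * Psi'            # Eq. 32
end
```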
In Fig. 3, we show a comparison between the DFT eigenvalues for silicon in the diamond structure and our atomic-projected tight-binding model, using the method described in this section. We can see that there is excellent agreement for the occupied eigenvalues, even for k-points along high-symmetry lines that are not in our original grid. However, there is much worse agreement for the conduction bands, with the tight-binding bands only tracing the general shape of the conduction bands. This is because there is significant mixing between these states and various unoccupied Si s* and d-states and other states that are not part of our model, which limits our ability to describe these states using solely atomic-like s and p orbitals. It may be possible to improve this agreement by including more orbitals, but this would increase the cost of the tight-binding calculations, undercutting the main motivation for using tight-binding in the first place. We leave models with more orbitals or other approaches to future work.
IV. FITTING
We fit tight-binding matrix elements to a set of DFT calculations by first doing a least-squares fit to the set of initial DFT Hamiltonian matrix elements (see Sec. III). This is followed by another fit to the total energies and eigenvalues. A key part of our procedure is our recursive generation of new DFT fitting data to improve the model. We discuss these ideas in the following subsections. To orient the reader, an overview of our procedure is presented in Fig. 4.
A. Initial fitting
Our initial fit is to the atomic-projected Hamiltonian matrix elements for a set of DFT calculations. Each DFT calculation contributes $n_k M^2$ matrix elements, where $n_k$ is the number of symmetry-reduced k-points and $M$ is the number of orbitals. The number of independent matrix elements is reduced by the Hermitian symmetry and any crystal symmetries. These matrix elements are arranged into a long vector of length $N^{TB}$. The charge self-consistency contributions (Sec. II B) are subtracted from the matrix elements.
The set of descriptors is an $N^{TB} \times n_{param}$ matrix, where $n_{param}$ is the number of tight-binding model parameters relevant to the DFT calculations. These parameters include two-body terms (Eq. 12), three-body terms (Eq. 14), and onsite terms (Eqs. 19-22). The entries of this matrix come from Fourier-transforming the tight-binding model of Sec. II for each material.
As noted in Sec. II C, all of our fitting parameters are linearly related to the Hamiltonian. The initial set of coefficients then comes from a simple linear least-squares fit of the model coefficients to the Hamiltonian matrix elements. This fit is generally good enough to produce reasonable-looking band structures, but the total energies are not very accurate. A major difficulty with the fitting of total energies is that the bandwidth of a given material can be a dozen eV, but the energy differences between chemically relevant structures are on the order of 0.1 eV/atom, making it necessary to include the total energy directly in the fitting instead of indirectly through the Hamiltonian. We discuss this further in the next section.
We also fit the overlap matrices with the same procedure, except that the overlaps are purely two-body interactions. The overlaps are simple to fit and are fixed for the rest of the fitting.
B. Self-consistent fitting
Starting from our initial fitting described above, we seek to improve the model by focusing more directly on the observables we care most about, namely the total energies and the occupied eigenvalues. Unlike the Hamiltonian itself, which as discussed in Sec. III can always be arbitrarily modified by a choice of unitary transformation or disentanglement procedure, the energies and occupied eigenvalues are well-defined observables. Unfortunately, unlike the Hamiltonian matrix elements, our model is not linearly related to the energy or eigenvalues, which appears to pose a major difficulty for the efficiency of the fitting.
In order to overcome this difficulty, we first note that the eigenvalues $\epsilon_{n\mathbf{k}}$ can be linearly related to the Hamiltonian if we already know the eigenvectors $|\psi_{n\mathbf{k}}\rangle$:
$$\epsilon_{n\mathbf{k}} = \langle \psi_{n\mathbf{k}} | H_{\mathbf{k}} | \psi_{n\mathbf{k}} \rangle \qquad (34)$$
Therefore, we adopt a procedure where we use our current set of parameters to generate and diagonalize the current Hamiltonians for each material in our dataset, and then we use the resulting eigenvectors to generate the new set of descriptors, using the eigenvalues as the target data rather than the Hamiltonian. By adopting this approach, we can fit the eigenvalues using linear fitting. The problem is that the eigenvectors of the old parameters will not generally match the eigenvectors of the new parameters; therefore, this procedure must be repeated many times to reach consistency between the eigenvectors and eigenvalues. As usual for self-consistent equations, we find that mixing the previous and new coefficients results in a more stable approach to the solution. Armed with the eigenvalues and eigenvectors, the total energy (Eq. 9) of each material can also be incorporated into the fitting straightforwardly. One final difficulty is that, when including charge self-consistency as in Sec. II B, each material must be self-consistently solved with the current set of coefficients as an inner loop within our overall self-consistent procedure for fitting the coefficients.
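Schematically, the self-consistent fit alternates a linear least-squares solve with re-diagonalization and mixing. The sketch below is a minimal outline; `diagonalize_all` and `build_descriptors` are hypothetical helpers standing in for the steps described above (the latter contracts the current eigenvectors with the model's basis functions, per Eq. 34):

```julia
# Schematic self-consistent fitting loop (helper functions hypothetical).
# x: model coefficients; targets y: DFT eigenvalues and total energies.
function selfconsistent_fit(x0, dataset; iters = 50, mix = 0.3)
    x = x0
    for it in 1:iters
        evecs = diagonalize_all(x, dataset)        # eigenvectors at current x
        A, y  = build_descriptors(evecs, dataset)  # linear in x via Eq. 34
        x_new = A \ y                              # linear least squares
        x     = mix .* x_new .+ (1 - mix) .* x     # mixing for stability
    end
    return x
end
```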
C. Generation of DFT datasets
The fitting procedure described above requires a dataset of DFT calculations to fit. First, we generate datasets for the elemental systems and fit the elemental coefficients. Each element is fit separately. Then, keeping the elemental coefficients fixed, we generate datasets of binary compounds and fit the binary coefficients. The flexibility of our model enables us to fit binary compounds without sacrificing our ability to describe elements. In each case, we generate an initial dataset and then supplement it using a simple learning strategy to generate relevant new low energy structures.
To generate the elemental datasets, we begin by substituting each element into a series of common elemental structures or molecules with small unit cells, e.g., fcc, diamond, etc., as well as a dimer. All structures have eight or fewer atoms, with one or two atoms the most common. For each structure, we consider a series of three to five volumes within ±10% of the equilibrium volume, for a total of ≈100 structures. We fit an initial set of coefficients to this dataset. Unfortunately, it is impossible to ensure a priori that any such dataset has sufficiently varied structures so that the resulting model both a) describes low-energy structures accurately and b) has no unphysical low-energy structures. We therefore adopt a recursive learning strategy to systematically improve the model (see Fig. 4). We use the current model to search for new low-energy structures and add them to the dataset.
Specifically, for each element, we generate several new structures with random lattice vectors and random atomic positions, ensuring that no atoms overlap [66]. These new structures have two or three atoms per unit cell, and we relax them using the tight-binding model. For each of the new relaxed structures, we perform a new DFT calculation and compare the new DFT energy to the TB energy. If the total energy per atom differs by more than a tolerance of roughly 0.1 eV/atom, we add the new structures to the dataset and restart the fitting. We continue adding new structures in this way until the out-of-sample performance on these low energy structures improves.
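In outline, this recursive learning loop looks as follows; all helper functions (`random_structure`, `relax`, `dft_energy`, `tb_energy`, `refit!`) are hypothetical stand-ins for the steps just described:

```julia
# Schematic recursive-learning loop (all helper functions hypothetical).
function recursive_learning!(model, dataset; tol = 0.1, max_rounds = 20)
    for round in 1:max_rounds
        added = false
        for _ in 1:3                       # a few new random structures
            s  = relax(model, random_structure())
            dE = abs(dft_energy(s) - tb_energy(model, s))   # eV/atom
            if dE > tol
                push!(dataset, s)          # model disagrees with DFT: keep it
                added = true
            end
        end
        added || break                     # out-of-sample performance is good
        refit!(model, dataset)             # restart the fitting
    end
    return model
end
```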
The procedure for binary compounds is similar, except that we have to consider differing stoichiometries as well. We start our dataset with a few common structural prototypes at a range of stoichiometries (e.g., rocksalt, CaF$_2$). We add a few extra common structures at chemically relevant stoichiometries for that binary pair, as well as any matching structures from the JARVIS-DFT database [7,67] with small unit cells. Finally, we include a dimer at several bond lengths, for a total of ≈100 starting structures. We again employ recursive learning, generating two or three new random structures at the following compositions: 2/2, 1/2, 2/1, 1/3, and 3/1. These structures are relaxed with the model and then compared to new DFT calculations. The process is iterated until the out-of-sample energies improve. In many cases, certain stoichiometries we consider may not be chemically relevant in equilibrium, but we want the model to give reasonable results for as wide a range of materials as possible.
This entire process results in a large dataset of DFT structures. We make the DFT calculations available on the JARVIS-QETB website (https://jarvis.nist.gov/jarvisqetb/). Details of the dataset generation and recursive procedure, including the prototype crystal structures for the initial dataset generation, are available on the ThreeBodyTB.jl code webpage and documentation. The fitted datasets themselves are available at https://github.com/usnistgov/ThreeBodyTB.jl/tree/master/dats/pbesol/v1.2.
D. First principles details
Our first-principles DFT calculations are performed using the Quantum Espresso [68] code with the PBEsol [69] functional, which predicts accurate lattice constants and elastic properties of solids [70]. We describe atomic regions using slightly modified GBRV pseudopotentials [71,72], as distributed with the code. The modifications concern which atomic orbitals are included in the pseudopotential files for the purposes of the atomic projections, as well as a minor modification of the oxygen pseudopotential. We perform calculations using a 45 Ryd. (≈610 eV) plane-wave cutoff energy. We use k-point grids with a linear density of at least 29 per Å⁻¹ and Gaussian smearing with an energy of 0.01 Ryd. (≈0.136 eV), which we also set as the defaults for our tight-binding code. We perform only non-spin-polarized calculations. We use the JARVIS-tools [7] package to generate surface and vacancy structures.
V. RESULTS
A. Pedagogical example
We begin with a simplified pedagogical example that illustrates the power of the three-body tight-binding approach. For this example, we consider hydrogen atoms in three simple crystal structures, fcc, bcc, and sc (face-centered cubic, body-centered cubic, and simple cubic), at five volumes each. We describe hydrogen with a single isotropic s-orbital, and for this example we fit directly to the atomic-projected Hamiltonian matrix elements per Sec. III A. These Hamiltonian matrix elements are plotted as a function of distance in both panels of Fig. 5 as blue symbols. We can see that there is strong decay with distance, but there is also a nearly 1.0 eV spread between the matrix elements of the three different cubic structures at similar distances. Even within a single structure, the different shells of neighbors do not follow a single line versus distance.
If we fit a tight-binding model using purely two-body interactions as in Eqs. 11-12, the resulting intersite interactions between s orbitals depend solely on distance. As shown in Fig. 5a, it is clearly not possible to describe all of these interactions accurately with purely two-body terms. However, by including three-body interactions as in Eqs. 13-14, the model can describe the additional variation in the matrix elements that comes from the differing local environments of the bonds. This can be seen in Fig. 5b, which shows almost perfect agreement between the three-body tight-binding model and the DFT matrix elements. This increase in flexibility and accuracy requires only three additional parameters in this case.
B. Bulk Structures
We now present results demonstrating the accuracy of our model in reproducing and predicting bulk energies, volumes, bulk moduli, bandwidths, and band gaps. See supplementary materials Sec. S4 for details on each structure we test. We separate our results into elemental systems, binary systems with small unit cells (2-6 atoms), and binary systems with large unit cells (9-10 atoms), only the last of which is an out-of-sample test. The structures we consider are the relevant bulk structures from the JARVIS-DFT database [7], which includes experimentally observed structures and other structures that are close to thermodynamic stability. We include a summary of these results in Table I. The electronic bandwidth is defined as the difference between the valence band maximum and the lowest occupied states we include in our model. For ease of computation, the volume and bulk modulus are calculated for fixed internal atomic coordinates, i.e., unrelaxed; as in the entire paper, all calculations are non-spin-polarized.
We start by considering elemental structures. Because there are relatively few unique elemental structures that are observed experimentally, we do not have a separate test and training set for bulk elements (although see Sec. V D). In Fig. 6, we present a comparison between the DFT and TB atomization energies, occupied state bandwidth, volume, and bulk modulus. The structures we consider are three-dimensional elemental solids.
As can be seen in Fig. 6a, there is excellent agreement between the model and DFT atomization energies, which are a direct part of the fitting process. Fig. 6b shows that the TB model can also reproduce basic features of the band structure like the bandwidth. In Fig. 6c, we see that there is good agreement for the volumes, with most structures having less than 3% error, which corresponds to only 1% error in lattice constants. The bulk modulus, shown in Fig. 6d, shows significantly more error. The bulk modulus is computed from six energy calculations between 94% and 106% of the equilibrium volume, and maintaining agreement with the first-principles results over such a wide range is more challenging. In addition, some elemental structures include weak bonding between molecules, which is challenging for either our model or the underlying DFT to capture accurately.
We move on to consider binary compounds. First, we consider binary compounds with two to six atoms per unit cell from the JARVIS-DFT database, which are again in-sample for our fitting procedure. The results, shown in Fig. 7, are again very promising, with excellent agreement for energies and bandwidths, good agreement for volumes, and reasonable agreement for the bulk modulus. In addition, in Fig. 7c, we show results for band gaps. Because our fitting procedure emphasizes the occupied eigenvalues and total energies, with a lower weight on unoccupied bands, the band gaps are more challenging to fit quantitatively. Nevertheless, we find reasonable agreement between the DFT and TB band gaps.
Finally, we consider results for binary compounds with 9-10 atoms per unit cell from the JARVIS-DFT database, as shown in Fig. 8. None of these crystal structures are included in our fitting in any way, as we include only structures with eight or fewer atoms. Still, we find levels of agreement that are similar to our in-sample results. We find that the atomization energies (Fig. 8a) are excellent, and the band gaps and bandwidths (Fig. 8b) are very good. The volume and bulk modulus errors (Fig. 8c-d) are also comparable to the in-sample data from Figs. 6-7. These results demonstrate the predictive power of our fit model over a wide range of chemistries, bonding types, and crystal structures.
C. Band Structures
As discussed above, Figs. 6b-8b include statistical evidence of the accuracy of our model in reproducing electronic properties like the bandwidth and band gap. In this section, we present a few example comparisons between band structures calculated with tight-binding or directly with DFT. In Fig. 9, we show band structures for Rh in the fcc structure as well as ZnSe in the zinc blende structure. These simple materials are both included in the relevant fitting datasets, and thus are in-sample predictions. As can be seen in the figure, we reproduce the occupied bands very well. The relatively localized d-states of Rh are very well described. The occupied Se p-states and lower-energy Zn d-states again match the DFT band structure, although the Zn d-states are shifted slightly. We also show reasonable agreement for the unoccupied bands, but the fit is less quantitatively accurate.

TABLE I. Summary of model accuracy on bulk structures from the JARVIS-DFT database. Columns are absolute errors in atomization energy (eV/atom), volume (% error), bulk modulus (%), bandwidth (eV), and band gap (eV). Results are split into elements (in-sample), small binary (2-6 atom, in-sample), and large binary (9-10 atom, out-of-sample) unit cells.
In Fig. 10, we show band structures for three materials with larger unit cells that are out-of-sample predictions: Ga$_4$Te$_6$ (Cc space group, JVASP-22549), Ca$_5$P$_8$ (C2/m, JVASP-12962), and Au$_2$Bi$_8$ (Fd-3m, JVASP-101068). Despite not being fit to these crystal structures, we are able to produce reasonable band structures in all three cases. Some of the bands are reproduced almost quantitatively, while others are shifted somewhat, but the averaged electronic properties are well reproduced with far less computational effort than full DFT calculations.
D. Defects and Surfaces
Thus far, we have only considered near-equilibrium properties of bulk materials. In this section, as a first step beyond these limitations, we consider vacancy formation energies and (111) surface energies of elemental solids. For computational convenience, we only consider unrelaxed geometries. However, we also provide comparisons to calculated relaxed structures and experimental measurements in the supplementary materials when available [73-89], which show that relaxation effects are generally small in elemental systems. We note that none of the vacancy structures and none of the specific surfaces considered here are included in our fitting dataset, making these structures an out-of-sample test of the model. Our dataset does include thinner three- to five-atom slabs in the fcc and bcc structures.
We generate vacancy structures by first creating a supercell of the elemental ground-state structure as necessary to ensure the defects are separated by at least 10 Å, and then deleting an atom. We calculate the vacancy formation energy as:
$$V_f = E_{defect} - E_{ideal} + \mu, \qquad (35)$$
where $E_{defect}$ and $E_{ideal}$ are the energies of the defect and ideal structures, respectively, and $\mu$ is the chemical potential of the element in the same structure. A comparison between the DFT results and the tight-binding calculations is shown in Fig. 11a, which shows good agreement in most cases across a wide range of defect energies. Next, we calculate the (111) surface energies of the elemental solids in their respective reference structures and compare with DFT data in Fig. 11b. We generate surfaces with a 10 Å slab thickness and 15 Å of vacuum padding during surface structure creation. We note that real surfaces can display significant reconstructions, but here we only consider ideal unrelaxed surfaces with a specific structure. We calculate surface energies as
$$\gamma = (E_{surf} - \mu N_{at})/(2A), \qquad (36)$$
where $E_{surf}$ is the total energy of the surface slab, $N_{at}$ is the number of atoms in the surface unit cell, $A$ is the surface area, and the factor of two arises because slabs have two surfaces. As shown in Fig. 11b, we again find good agreement between the tight-binding results and the DFT surface energies. The raw data from Fig. 11, as well as a comparison to previous calculations and experiments, is available in the supplementary information Sec. S5.
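Eqs. 35-36 amount to simple arithmetic once the total energies are in hand; a minimal sketch with placeholder numbers:

```julia
# Eq. 35: vacancy formation energy.
vacancy_formation(E_defect, E_ideal, mu) = E_defect - E_ideal + mu

# Eq. 36: surface energy per unit area; the factor of 2 accounts for
# the two surfaces of a slab.
surface_energy(E_slab, mu, N_at, A) = (E_slab - mu * N_at) / (2A)

# Placeholder numbers (eV, Angstrom^2).
# Unit conversion: 1 eV/Angstrom^2 = 16.02 J m^-2.
println(vacancy_formation(-340.2, -342.5, -3.1))
println(surface_energy(-120.4, -4.1, 30, 25.0) * 16.02)
```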
VI. DISCUSSION AND SUMMARY
The results of Sec. V demonstrate that we are able to predict DFT energies and band structures using our parameterized tight-binding model, including three-body interactions and self-consistent charges, with much reduced computation time (see Sec. S1). This success shows that our parsing of first-principles electronic structures into at most three-atom effective interactions is a useful way to understand materials chemistry. In addition, we have indirectly demonstrated that the space of minimal atomic Hamiltonians is a smooth function of atomic positions even across a wide range of materials, which makes it possible to fit our parameterized model in the first place. Also, because basic quantum mechanics and electrostatics are built directly into the formalism, we expect reasonable predictions when extrapolating beyond the training data. We note that the accuracy of our model in predicting the energies of bulk materials is comparable to state-of-the-art non-parametric machine learning models that do not directly include quantum mechanics [21,90-94]. It may be possible to improve predictions by combining the best features of both approaches, which has already been explored in a few studies [95-97].

Still, our model has several shortcomings. First, for simplicity we currently include only non-spin-polarized calculations, although there is no obvious problem with applying the approach to magnetic systems. Similarly, long-range London dispersion forces are missing from our underlying PBEsol DFT calculations, and thus not well described by our model, but they are not inherently problematic for the formalism. Second, there are remaining limitations of accuracy, especially in describing conduction bands or crystal structures that are very different from those in the training data. Finally, a more fundamental issue is that our use of three-body interactions means that applying our formalism to ternary (or quaternary, etc.) materials requires the inclusion of three-body terms between three different atom types. Such terms are not included in our current fitting set, which includes elemental and binary combinations only. We expect the importance of these terms to vary according to crystal structure, as we find that such three-body interactions are short-ranged. Adding ternary materials to our dataset systematically would require adding roughly an order of magnitude more DFT calculations to our already large dataset, but we may pursue a subset of materials.
In summary, we have developed a tight-binding formalism that predicts the atomic-orbital Hamiltonian in terms of two-body and three-body interactions. The inclusion of three-body terms increases the model transferability and allows us to apply the same model to 65 elemental systems and any binary combination of those elements. We fit the model to a large dataset of DFT calculations, and we systematically generate new crystal structures until our model performs well on out-of-sample tests. To initialize the fitting process, we also develop a technique to generate an atom-projected tight-binding model for a single band structure. We demonstrate the effectiveness of this model in calculating total energies, volumes, elastic properties, and band structures of materials, as well as defects and surfaces. To enhance the utility and reproducibility of the current method, we provide software packages for the user to either directly use the current model parameterization for energy and band structure calculations, or to fit their own model. Finally, we have developed a publicly available database of the underlying DFT calculations.
Supplementary Materials
The supplementary materials contain tables of data for various point vacancy and surface calculations and comparisons with previous theory and experiment, as well as a periodic table describing orbital choices.
VII. TIMINGS
In order for our scheme to be useful, it must be much faster to run a tight-binding calculation than the underlying DFT calculation. We expect the most computationally expensive step for large tight-binding calculations to be diagonalizing the Hamiltonian. This behavior is similar to DFT, but with a much smaller prefactor, as we use many fewer basis functions (a minimal orbital basis vs. plane-waves). When constructing the Hamiltonian, the three-body terms are more computationally expensive than the two-body terms; however, the added computational cost scales linearly with the number of atoms because the procedure is cut off in real space. Therefore, the three-body terms do not add a large computational cost when studying large system sizes.
Exact code timings depend on the hardware, compilation details, etc. Nevertheless, we provide example scaling relations in Fig. S1 for Si in the diamond structure and NaCl in the rocksalt structure, using our code and Quantum Espresso. We perform calculations with and without taking into account symmetry to reduce the number of k-points, and with one and eight processors. Calculations were performed on the Raritan cluster at NIST. We find that the tight-binding code is 500-1000 times faster than the DFT for these examples. Atoms with d-orbitals will have slightly slower results due to the higher number of orbitals per atom.
VIII. COMPARISON WITH DFTB+
Density functional tight binding (DFTB; see the discussion in Ref. S98) and codes that implement it, like DFTB+ [S24, S29, S36, S37, S52, S99], constitute a longstanding method that is closely related to this work. In DFTB, a particular form for the Hamiltonian is derived by considering a second-order expansion of the Kohn-Sham energy with respect to charge density fluctuations. Typically, matrix elements for two-body interactions between actual orbitals are computed/interpolated, and repulsive terms between atoms are fit to the remaining energy. Usually at least charge self-consistency [S31, S40] is included, as in this work, but many other terms have been added, including spin-orbit coupling, spin-polarization [S32], Van der Waals interactions [S39], etc. See the discussion in Ref. S29, for example.
Because DFTB is a framework more than a specific Hamiltonian, set of data files, or procedure, it is difficult to compare it with this work in general, but the methods are closely related. The major differences are 1) the procedure for generating (Slater-Koster) datasets from DFT calculations and 2) the number of datasets we make available. In this work, we include three-body (three-center) interactions between orbitals, which are rarely part of modern DFTB calculations (although see Refs. S24, S31, S43-S46, S51, and S52). For the fitting, we fit all coefficients to DFT calculations rather than calculating and interpolating two-body Hamiltonian terms directly and only fitting the repulsive terms. Also, we simultaneously fit off-site and on-site interactions, rather than separately considering Hamiltonian and repulsive terms.
The other difficulty in comparing the methods is that relatively few Slater-Koster parameter sets are available on the https://dftb.org/parameters page. Those that are available were often compiled for specific use cases, often for organic molecules, and are not typically interoperable with parameter sets prepared for other purposes (see the discussion on that page). In this project, we instead seek to eventually create a universal and complete set of parameters for main-group and transition metals appropriate for solid-state materials, although we are currently limited to elemental and binary systems. Furthermore, we develop an automated testing procedure to test and improve our datasets, while the datasets available on DFTB.org come from a variety of sources and methods.
For cases where appropriate parameters are available for both this work and DFTB+, we expect reasonable agreement. As a simple example, Fig. S2 presents DOS plots for C in the diamond structure using the two codes. They show good agreement for the occupied orbitals.
IX. TWO-BODY VS THREE-BODY MODEL
As an example of the typical magnitudes of three-body terms in the model, in Fig. S3 we compare the band structure of AlAs in the zinc blende structure with different terms in the model turned off, as compared to DFT. In panel Fig. S3a, we include the full model. Panel Fig. S3b turns off the onsite three-body interactions, Fig. S3c turns off the intersite three-body interactions, and Fig. S3d turns off all three-body interactions. We note that the effect of the intersite three-body interactions is much larger than that of the onsite interactions, which modify only fine details of the band structure. This observation of relative magnitudes is typical. The atomization energies are presented in Table I and show a similar pattern, although the three-body onsite term is quantitatively important. The relatively small magnitude of the onsite three-body term justifies our simplification of this term to be non-orbital-dependent.
FIG. 3. Band structure comparison between DFT (blue) and atomic-projected tight-binding (orange) for Si in the diamond structure. The zero of energy is the valence band maximum.

FIG. 5. Comparison of atom-projected DFT intersite s-s Hamiltonian matrix elements for three hydrogen structures (blue symbols) with a) the two-body model (orange line) and b) the three-body model (TB3, orange symbols). The three-body model points in b) are almost on top of the DFT results. See text Sec. V A.

FIG. 6. Comparison of DFT and tight-binding properties for elemental systems: a) atomization energies (eV/atom), b) occupied electronic bandwidth (eV, see text), c) volume (absolute error percentage), d) bulk modulus (absolute error percentage).

FIG. 7. Comparison of DFT and TB properties for in-sample binary compounds with two to six atoms per unit cell: a) atomization energies (eV/atom), b) occupied electronic bandwidths in blue and band gaps in orange (eV, see text), c) volume (absolute error percentage), d) bulk modulus (absolute error percentage).

FIG. 8. Comparison of DFT and TB properties for out-of-sample binary compounds with nine to ten atoms per unit cell: a) atomization energies (eV/atom), b) occupied electronic bandwidths in blue and band gaps in orange (eV, see text), c) volume (absolute error percentage), d) bulk modulus (absolute error percentage).

FIG. 9. In-sample band structure comparison between DFT (blue) and tight-binding (orange) for a) Rh in the fcc structure and b) ZnSe in the zinc blende structure.

FIG. 10. Out-of-sample band structure comparison between DFT (blue) and tight-binding (TB3, orange) for a) Ga$_4$Te$_6$, b) Ca$_5$P$_8$, and c) Au$_2$Bi$_8$ (see text).

FIG. 11. Comparison of DFT and tight-binding calculations for unrelaxed a) point vacancy formation energies (eV) and b) (111) surface energies (J m⁻²) of elemental solids.
FIG. S2. Top: DOS of C in the diamond structure performed with the ThreeBodyTB.jl code (this work). Bottom: the same, performed with the DFTB+ code and the pbc-03 parameter set [S100, S101].
X. CRYSTAL STRUCTURE AND DETAILED DATA

Crystal structures and detailed data on each material used for testing are available at https://doi.org/10.6084/m9.figshare.21158905.v1 and https://doi.org/10.6084/m9.figshare.21158902.v1. See also https://jarvis.nist.gov/jarvisqetb/ for all the fitting data and https://jarvis.nist.gov/jarvisdft/ for more details on each structure.

FIG. S3. AlAs band structure with a) the full model, b) no on-site three-body terms, c) no off-site three-body terms, and d) no three-body terms (only two-body terms). Orange: DFT, Blue: TB. Energies are aligned at the valence band maximum. All panels include charge self-consistency.
FIG. 1. Orbitals used in our tight-binding model. Red: s only (hydrogen), Blue: sp, Yellow: spd, White: not included in model.
FIG. 4. Overview of the fitting process.
TABLE I. Atomization energies of partial models (see Fig. S3).

Method               | Fig. S3 panel | Atomization Energy (eV)
DFT                  | all           | -9.68
Full                 | a             | -9.68
No onsite 3-body     | b             | -10.10
No intersite 3-body  | c             | -12.22
Only 2-body          | d             | -12.70

XI. DATA TABLES

Various data tables related to Fig. 10 in the main text. The first table shows the data in the figure; the second table is a comparison with some experimental and other theoretical literature values. We should expect only qualitative agreement with the literature values, as we only consider unrelaxed, unreconstructed structures and use the PBEsol [S69] functional without spin-polarization.
TABLE II. Comparison of vacancy formation energies (V) in eV and (111) surface energies (γ) in J m⁻² of solids with DFT and the TB model. All calculations are unrelaxed and non-magnetic; see details in the main text. This is the data from Fig. 10. Not all of these surfaces are physically relevant; the main goal is to compare the model in an out-of-sample manner. See also the following table.

Mat. | JVASP-ID | V_DFT | V_TB | Diff. | γ_DFT | γ_TB | Diff.
TABLE III. Comparison of vacancy formation energies (V) in eV and surface energies (γ) in J m⁻² of solids with DFT [S73-S79], previous calculations, and experiment.
[1] W. A. Harrison, Electronic structure and the properties of solids: the physics of the chemical bond (Courier Corporation, 2012).
[2] R. M. Martin, Electronic structure: basic theory and practical methods (Cambridge University Press, 2020).
[3] S. Curtarolo, G. L. Hart, M. B. Nardelli, N. Mingo, S. Sanvito, and O. Levy, The high-throughput highway to computational materials design, Nature Materials 12, 191 (2013).
[4] S. Curtarolo, W. Setyawan, G. L. Hart, M. Jahnatek, R. V. Chepulskii, R. H. Taylor, S. Wang, J. Xue, K. Yang, O. Levy, et al., Aflow: An automatic framework for high-throughput materials discovery, Computational Materials Science 58, 218 (2012).
[5] A. Jain, S. P. Ong, G. Hautier, W. Chen, W. D. Richards, S. Dacek, S. Cholia, D. Gunter, D. Skinner, G. Ceder, et al., Commentary: The materials project: A materials genome approach to accelerating materials innovation, APL Materials 1, 011002 (2013).
[6] S. Kirklin, J. E. Saal, B. Meredig, A. Thompson, J. W. Doak, M. Aykol, S. Rühl, and C. Wolverton, The open quantum materials database (oqmd): assessing the accuracy of dft formation energies, npj Computational Materials 1, 1 (2015).
[7] K. Choudhary, K. F. Garrity, A. C. Reid, B. DeCost, A. J. Biacchi, A. R. H. Walker, Z. Trautt, J. Hattrick-Simpers, A. G. Kusne, A. Centrone, et al., The joint automated repository for various integrated simulations (jarvis) for data-driven materials design, npj Computational Materials 6, 1 (2020).
[8] C. W. Andersen, R. Armiento, E. Blokhin, G. J. Conduit, S. Dwaraknath, M. L. Evans, Á. Fekete, A. Gopakumar, S. Gražulis, A. Merkys, et al., Optimade: an api for exchanging materials data, arXiv preprint arXiv:2103.02068 (2021).
[9] C. Wang, C. Chan, and K. Ho, Tight-binding molecular-dynamics study of phonon anharmonic effects in silicon and diamond, Physical Review B 42, 11276 (1990).
[10] A. Katre and G. K. Madsen, Orthogonal tight-binding model for the thermal conductivity of si, Physical Review B 93, 155203 (2016).
[11] S. Lee and P. von Allmen, Tight-binding modeling of thermoelectric properties of bismuth telluride, Applied Physics Letters 88, 022107 (2006).
[12] I. Kwon, R. Biswas, C. Wang, K. Ho, and C. Soukoulis, Transferable tight-binding models for silicon, Physical Review B 49, 7242 (1994).
[13] M. J. Mehl and D. A. Papaconstantopoulos, Applications of a tight-binding total-energy method for transition and noble metals: Elastic constants, vacancies, and surfaces of monatomic metals, Physical Review B 54, 4519 (1996).
[14] J. Morris, C. Fu, and K. Ho, Tight-binding study of tilt grain boundaries in diamond, Physical Review B 54, 132 (1996).
[15] C. Colinet, P. Hicter, and A. Pasturel, Tight-binding calculations of the ni-al phase diagram, Physical Review B 45, 1571 (1992).
[16] M. Sluiter, P. Turchi, F. Zezhong, and D. de Fontaine, Tight-binding calculation of ti-rh-type phase diagram, Physical Review Letters 60, 716 (1988).
[17] L. M. Roth, Tight-binding models of amorphous systems: liquid metals, Physical Review B 7, 4321 (1973).
[18] J. Robertson, Dopant states in a-Si:H I. Tight-binding-model results, Physical Review B 28, 4647 (1983).
[19] F.-C. Chuang, C. Wang, and K. Ho, Structure of neutral aluminum clusters Al$_n$ (2 ≤ n ≤ 23): Genetic algorithm tight-binding calculations, Physical Review B 73, 125431 (2006).
[20] S. Goedecker and M. Teter, Tight-binding electronic-structure calculations and tight-binding molecular dynamics with localized orbitals, Physical Review B 51, 9455 (1995).
[21] R. K. Vasudevan, K. Choudhary, A. Mehta, R. Smith, G. Kusne, F. Tavazza, L. Vlcek, M. Ziatdinov, S. V. Kalinin, and J. Hattrick-Simpers, Materials science in the artificial intelligence age: high-throughput library generation, machine learning, and a pathway from correlations to the underpinning physics, MRS Communications 9, 821 (2019).
[22] B. Meredig, E. Antono, C. Church, M. Hutchinson, J. Ling, S. Paradiso, B. Blaiszik, I. Foster, B. Gibbons, J. Hattrick-Simpers, et al., Can machine learning identify the next high-temperature superconductor? examining extrapolation performance for materials discovery, Molecular Systems Design & Engineering 3, 819 (2018).
[23] J. C. Slater and G. F. Koster, Simplified lcao method for the periodic potential problem, Physical Review 94, 1498 (1954).
[24] D. Porezag, T. Frauenheim, T. Köhler, G. Seifert, and R. Kaschner, Construction of tight-binding-like potentials on the basis of density-functional theory: Application to carbon, Physical Review B 51, 12947 (1995).
[25] P. Koskinen and V. Mäkinen, Density-functional tight-binding for beginners, Computational Materials Science 47, 237 (2009).
[26] C. Bannwarth, S. Ehlert, and S. Grimme, Gfn2-xtb, an accurate and broadly parametrized self-consistent tight-binding quantum chemical method with multipole electrostatics and density-dependent dispersion contributions, Journal of Chemical Theory and Computation 15, 1652 (2019).
[27] C. W. Groth, M. Wimmer, A. R. Akhmerov, and X. Waintal, Kwant: a software package for quantum transport, New Journal of Physics 16, 063065 (2014).
[28] T. Yusufaly, D. Vanderbilt, and S. Coh, Tight-binding formalism in the context of the pythtb package (2013).
[29] B. Hourahine, B. Aradi, V. Blum, F. Bonafé, A. Buccheri, C. Camacho, C. Cevallos, M. Deshaye, T. Dumitricȃ, A. Dominguez, et al., Dftb+, a software package for efficient approximate density functional theory based atomistic simulations, The Journal of Chemical Physics 152, 124101 (2020).
[30] G. Seifert, D. Porezag, and T. Frauenheim, Calculations of molecules, clusters, and solids with a simplified lcao-dft-lda scheme, International Journal of Quantum Chemistry 58, 185 (1996).
[31] M. Elstner, D. Porezag, G. Jungnickel, J. Elsner, M. Haugk, T. Frauenheim, S. Suhai, and G. Seifert, Self-consistent-charge density-functional tight-binding method for simulations of complex materials properties, Physical Review B 58, 7260 (1998).
Density functional based calculations for fen. C Köhler, G Seifert, T Frauenheim, Chemical physics. 3093223C. Köhler, G. Seifert, and T. Frauenheim, Density functional based calculations for fen (n 32), Chemical physics 309, 23 (2005).
Combining selfconsistent-charge density-functional tight-binding (sccdftb) with molecular mechanics by the generalized hybrid orbital (gho) method. J Pu, J Gao, D G Truhlar, The Journal of Physical Chemistry A. 1085454J. Pu, J. Gao, and D. G. Truhlar, Combining self- consistent-charge density-functional tight-binding (scc- dftb) with molecular mechanics by the generalized hy- brid orbital (gho) method, The Journal of Physical Chemistry A 108, 5454 (2004).
From dft to machine learning: recent approaches to materials science-a review. G R Schleder, A C Padilha, C M Acosta, M Costa, A Fazzio, Journal of Physics: Materials. 232001G. R. Schleder, A. C. Padilha, C. M. Acosta, M. Costa, and A. Fazzio, From dft to machine learning: recent approaches to materials science-a review, Journal of Physics: Materials 2, 032001 (2019).
Marques, Recent advances and applications of machine learning in solid-state materials science. J Schmidt, M R Marques, S Botti, M A , Computational Materials. 51J. Schmidt, M. R. Marques, S. Botti, and M. A. Mar- ques, Recent advances and applications of machine learning in solid-state materials science, npj Computa- tional Materials 5, 1 (2019).
Dftb parameters for the periodic table: Part 1, electronic structure. M Wahiduzzaman, A F Oliveira, P Philipsen, L Zhechkov, E Van Lenthe, H A Witek, T Heine, Journal of chemical theory and com. 94006M. Wahiduzzaman, A. F. Oliveira, P. Philipsen, L. Zhechkov, E. Van Lenthe, H. A. Witek, and T. Heine, Dftb parameters for the periodic table: Part 1, elec- tronic structure, Journal of chemical theory and com- putation 9, 4006 (2013).
Dftb parameters for the periodic table, part 2: Energies and energy gradients from hydrogen to calcium. A F Oliveira, P Philipsen, T Heine, Journal of chemical theory and computation. 115209A. F. Oliveira, P. Philipsen, and T. Heine, Dftb param- eters for the periodic table, part 2: Energies and energy gradients from hydrogen to calcium, Journal of chemical theory and computation 11, 5209 (2015).
A robust and accurate tight-binding quantum chemical method for structures, vibrational frequencies, and noncovalent interactions of large molecular systems parametrized for all spd-block elements (z= 1-86). S Grimme, C Bannwarth, P Shushkov, Journal of chemical theory and computation. 131989S. Grimme, C. Bannwarth, and P. Shushkov, A robust and accurate tight-binding quantum chemical method for structures, vibrational frequencies, and noncovalent interactions of large molecular systems parametrized for all spd-block elements (z= 1-86), Journal of chemical theory and computation 13, 1989 (2017).
Hydrogen bonding and stacking interactions of nucleic acid base pairs: A density-functional-theory based treatment. M Elstner, P Hobza, T Frauenheim, S Suhai, E Kaxiras, 10.1063/1.1329889The Journal of Chemical Physics. 1145149M. Elstner, P. Hobza, T. Frauenheim, S. Suhai, and E. Kaxiras, Hydrogen bonding and stacking interactions of nucleic acid base pairs: A density-functional-theory based treatment, The Journal of Chemical Physics 114, 5149 (2001).
Scc-dftb: what is the proper degree of selfconsistency?. M Elstner, The Journal of Physical Chemistry A. 1115614M. Elstner, Scc-dftb: what is the proper degree of self- consistency?, The Journal of Physical Chemistry A 111, 5614 (2007).
A self-consistent charge density-functional based tightbinding method for predictive materials simulations in physics, chemistry and biology. T Frauenheim, G Seifert, M Elsterner, Z Hajnal, G Jungnickel, D Porezag, S Suhai, R Scholz, physica status solidi (b). 21741T. Frauenheim, G. Seifert, M. Elsterner, Z. Hajnal, G. Jungnickel, D. Porezag, S. Suhai, and R. Scholz, A self-consistent charge density-functional based tight- binding method for predictive materials simulations in physics, chemistry and biology, physica status solidi (b) 217, 41 (2000).
. W.-C Lu, C Z Wang, L.-Z Zhao, W Qin, K , W.-C. Lu, C. Z. Wang, L.-Z. Zhao, W. Qin, and K. M.
Three-center tight-binding potential model for c and si. Ho, 10.1103/PhysRevB.92.035206Phys. Rev. B. 9235206Ho, Three-center tight-binding potential model for c and si, Phys. Rev. B 92, 035206 (2015).
Advances and applications in the fireball ab initio tight-binding molecular-dynamics formalism, physica status solidi (b). J P Lewis, P Jelínek, J Ortega, A A Demkov, D G Trabada, B Haycock, H Wang, G Adams, J K Tomfohr, E Abad, H Wang, D A Drabold, https:/arxiv.org/abs/https:/onlinelibrary.wiley.com/doi/pdf/10.1002/pssb.201147259248J. P. Lewis, P. Jelínek, J. Ortega, A. A. Demkov, D. G. Trabada, B. Haycock, H. Wang, G. Adams, J. K. Tomfohr, E. Abad, H. Wang, and D. A. Drabold, Advances and applications in the fireball ab initio tight-binding molecular-dynamics for- malism, physica status solidi (b) 248, 1989 (2011), https://onlinelibrary.wiley.com/doi/pdf/10.1002/pssb.201147259.
Multicenter approach to the exchangecorrelation interactions in ab initio tight-binding methods. P Jelínek, H Wang, J P Lewis, O F Sankey, J Ortega, 10.1103/PhysRevB.71.235101Phys. Rev. B. 71235101P. Jelínek, H. Wang, J. P. Lewis, O. F. Sankey, and J. Ortega, Multicenter approach to the exchange- correlation interactions in ab initio tight-binding meth- ods, Phys. Rev. B 71, 235101 (2005).
Ab initio multicenter tight-binding model for molecular-dynamics simulations and other applications in covalent systems. O F Sankey, D J Niklewski, 10.1103/PhysRevB.40.3979Phys. Rev. B. 403979O. F. Sankey and D. J. Niklewski, Ab initio multicenter tight-binding model for molecular-dynamics simulations and other applications in covalent systems, Phys. Rev. B 40, 3979 (1989).
Efficient ab initio tight binding. A P Horsfield, 10.1103/PhysRevB.56.6594Phys. Rev. B. 566594A. P. Horsfield, Efficient ab initio tight binding, Phys. Rev. B 56, 6594 (1997).
Transferability of the slater-koster tight-binding scheme from an environment-dependent minimal-basis perspective. W C Lu, C Z Wang, K Ruedenberg, K M Ho, 10.1103/PhysRevB.72.205123Phys. Rev. B. 72205123W. C. Lu, C. Z. Wang, K. Ruedenberg, and K. M. Ho, Transferability of the slater-koster tight-binding scheme from an environment-dependent minimal-basis perspec- tive, Phys. Rev. B 72, 205123 (2005).
Environment-dependent tightbinding potential models. C Wang, K Ho, Handbook of Materials Modeling. SpringerC. Wang and K. Ho, Environment-dependent tight- binding potential models, in Handbook of Materials Modeling (Springer, 2005) pp. 307-347.
Environment-dependent tight-binding potential model. M S Tang, C Z Wang, C T Chan, K M Ho, 10.1103/PhysRevB.53.979Phys. Rev. B. 53979M. S. Tang, C. Z. Wang, C. T. Chan, and K. M. Ho, Environment-dependent tight-binding potential model, Phys. Rev. B 53, 979 (1996).
. C.-Z Wang, G.-D Lee, J Li, S Yip, K.-M , C.-Z. Wang, G.-D. Lee, J. Li, S. Yip, and K.-M.
Atomistic simulation studies of complex carbon and silicon systems using environment-dependent tightbinding potentials. Ho, Scientific Modeling and Simulations. SpringerHo, Atomistic simulation studies of complex carbon and silicon systems using environment-dependent tight- binding potentials, in Scientific Modeling and Simula- tions (Springer, 2008) pp. 97-121.
Tight-binding density functional theory: an approximate kohn-sham dft scheme. G Seifert, The Journal of Physical Chemistry A. 1115609G. Seifert, Tight-binding density functional theory: an approximate kohn-sham dft scheme, The Journal of Physical Chemistry A 111, 5609 (2007).
Calculations of molecules, clusters, and solids with a simplified lcao-dft-lda scheme. G Seifert, D Porezag, T Frauenheim, 10.1002/(SICI)1097-461X(1996)58:2<185::AID-QUA7>3.0.CO;2-UInternational Journal of Quantum Chemistry. 58185G. Seifert, D. Porezag, and T. Frauenheim, Calcula- tions of molecules, clusters, and solids with a simplified lcao-dft-lda scheme, International Journal of Quantum Chemistry 58, 185 (1996).
N Marzari, A A Mostofi, J R Yates, I Souza, D Vanderbilt, Maximally localized wannier functions: Theory and applications. 841419N. Marzari, A. A. Mostofi, J. R. Yates, I. Souza, and D. Vanderbilt, Maximally localized wannier functions: Theory and applications, Reviews of Modern Physics 84, 1419 (2012).
wannier90: A tool for obtaining maximally-localised wannier functions. A A Mostofi, J R Yates, Y.-S Lee, I Souza, D Vanderbilt, N Marzari, Computer physics communications. 178685A. A. Mostofi, J. R. Yates, Y.-S. Lee, I. Souza, D. Van- derbilt, and N. Marzari, wannier90: A tool for obtain- ing maximally-localised wannier functions, Computer physics communications 178, 685 (2008).
Database of wannier tight-binding hamiltonians using high-throughput density functional theory. K F Garrity, K Choudhary, Scientific data. 81K. F. Garrity and K. Choudhary, Database of wannier tight-binding hamiltonians using high-throughput den- sity functional theory, Scientific data 8, 1 (2021).
. X Qian, J Li, L Qi, C.-Z Wang, T.-L Chan, Y.-X , X. Qian, J. Li, L. Qi, C.-Z. Wang, T.-L. Chan, Y.-X.
Quasiatomic orbitals for ab initio tight-binding analysis. K.-M Yao, S Ho, Yip, 10.1103/PhysRevB.78.245112Phys. Rev. B. 78245112Yao, K.-M. Ho, and S. Yip, Quasiatomic orbitals for ab initio tight-binding analysis, Phys. Rev. B 78, 245112 (2008).
Optimal basis sets for detailed brillouinzone integrations. E L Shirley, 10.1103/PhysRevB.54.16464Phys. Rev. B. 5416464E. L. Shirley, Optimal basis sets for detailed brillouin- zone integrations, Phys. Rev. B 54, 16464 (1996).
Bloch-state-based interpolation: An efficient generalization of the shirley approach to interpolating electronic structure. D Prendergast, S G Louie, 10.1103/PhysRevB.80.235126Phys. Rev. B. 80235126D. Prendergast and S. G. Louie, Bloch-state-based in- terpolation: An efficient generalization of the shirley ap- proach to interpolating electronic structure, Phys. Rev. B 80, 235126 (2009).
Tightbinding modelling of materials. C Goringe, D Bowler, E Hernandez, Reports on Progress in Physics. 601447C. Goringe, D. Bowler, and E. Hernandez, Tight- binding modelling of materials, Reports on Progress in Physics 60, 1447 (1997).
Atomistic simulations of complex materials: ground-state and excited-state properties. T Frauenheim, G Seifert, M Elstner, T Niehaus, C Köhler, M Amkreutz, M Sternberg, Z Hajnal, A Di Carlo, S Suhai, Journal of Physics: Condensed Matter. 143015T. Frauenheim, G. Seifert, M. Elstner, T. Niehaus, C. Köhler, M. Amkreutz, M. Sternberg, Z. Hajnal, A. Di Carlo, and S. Suhai, Atomistic simulations of com- plex materials: ground-state and excited-state proper- ties, Journal of Physics: Condensed Matter 14, 3015 (2002).
Convergence acceleration of iterative sequences. the case of scf iteration. P Pulay, 10.1016/0009-2614(80)80396-4Chemical Physics Letters. 73393P. Pulay, Convergence acceleration of iterative se- quences. the case of scf iteration, Chemical Physics Let- ters 73, 393 (1980).
Symmetry-adapted wannier functions in the maximal localization procedure. R Sakuma, 10.1103/PhysRevB.87.235109Phys. Rev. B. 87235109R. Sakuma, Symmetry-adapted wannier functions in the maximal localization procedure, Phys. Rev. B 87, 235109 (2013).
Effective and accurate representation of extended bloch states on finite hilbert spaces. L A Agapito, A Ferretti, A Calzolari, S Curtarolo, M Buongiorno Nardelli, 10.1103/PhysRevB.88.165127Phys. Rev. B. 88165127L. A. Agapito, A. Ferretti, A. Calzolari, S. Curtarolo, and M. Buongiorno Nardelli, Effective and accurate rep- resentation of extended bloch states on finite hilbert spaces, Phys. Rev. B 88, 165127 (2013).
Accurate tight-binding hamiltonian matrices from ab initio calculations: Minimal basis sets. L A Agapito, S Ismail-Beigi, S Curtarolo, M Fornari, M B Nardelli, 10.1103/PhysRevB.93.035104Phys. Rev. B. 9335104L. A. Agapito, S. Ismail-Beigi, S. Curtarolo, M. Fornari, and M. B. Nardelli, Accurate tight-binding hamiltonian matrices from ab initio calculations: Minimal basis sets, Phys. Rev. B 93, 035104 (2016).
Ab initio random structure searching. C J Pickard, R Needs, Journal of Physics: Condensed Matter. 2353201C. J. Pickard and R. Needs, Ab initio random structure searching, Journal of Physics: Condensed Matter 23, 053201 (2011).
High-throughput identification and characterization of two-dimensional materials using density functional theory. K Choudhary, I Kalish, R Beams, F Tavazza, Scientific Reports. 71K. Choudhary, I. Kalish, R. Beams, and F. Tavazza, High-throughput identification and characterization of two-dimensional materials using density functional the- ory, Scientific Reports 7, 1 (2017).
Quantum espresso toward the exascale. P Giannozzi, O Baseggio, P Bonfà, D Brunato, R Car, I Carnimeo, C Cavazzoni, S De Gironcoli, P Delugas, F Ferrari Ruffino, A Ferretti, N Marzari, I Timrov, A Urru, S Baroni, https:/arxiv.org/abs/https:/doi.org/10.1063/5.0005082The Journal of Chemical Physics. 152154105P. Giannozzi, O. Baseggio, P. Bonfà, D. Brunato, R. Car, I. Carnimeo, C. Cavazzoni, S. de Gironcoli, P. Delugas, F. Ferrari Ruffino, A. Ferretti, N. Marzari, I. Timrov, A. Urru, and S. Baroni, Quantum espresso toward the exascale, The Journal of Chemical Physics 152, 154105 (2020), https://doi.org/10.1063/5.0005082.
Assessing the performance of recent density functionals for bulk solids. G I Csonka, J P Perdew, A Ruzsinszky, P H T Philipsen, S Lebègue, J Paier, O A Vydrov, J G Angyán, 10.1103/PhysRevB.79.155107Phys. Rev. B. 79155107G. I. Csonka, J. P. Perdew, A. Ruzsinszky, P. H. T. Philipsen, S. Lebègue, J. Paier, O. A. Vydrov, and J. G. Angyán, Assessing the performance of recent density functionals for bulk solids, Phys. Rev. B 79, 155107 (2009).
Rungs 1 to 4 of dft jacob's ladder: Extensive test on the lattice constant, bulk modulus, and cohesive energy of solids. F Tran, J Stelzl, P Blaha, https:/arxiv.org/abs/https:/doi.org/10.1063/1.4948636The Journal of Chemical Physics. 144204120F. Tran, J. Stelzl, and P. Blaha, Rungs 1 to 4 of dft jacob's ladder: Extensive test on the lattice con- stant, bulk modulus, and cohesive energy of solids, The Journal of Chemical Physics 144, 204120 (2016), https://doi.org/10.1063/1.4948636.
Pseudopotentials for high-throughput dft calculations. K F Garrity, J W Bennett, K M Rabe, D Vanderbilt, 10.1016/j.commatsci.2013.08.053Computational Materials Science. 81446K. F. Garrity, J. W. Bennett, K. M. Rabe, and D. Van- derbilt, Pseudopotentials for high-throughput dft cal- culations, Computational Materials Science 81, 446 (2014).
Soft self-consistent pseudopotentials in a generalized eigenvalue formalism. D Vanderbilt, 10.1103/PhysRevB.41.7892Phys. Rev. B. 417892D. Vanderbilt, Soft self-consistent pseudopotentials in a generalized eigenvalue formalism, Phys. Rev. B 41, 7892 (1990).
Vacancy formation energies in metals: A comparison of metagga with lda and gga exchange-correlation functionals. B Medasani, M Haranczyk, A Canning, M Asta, Computational Materials Science. 10196B. Medasani, M. Haranczyk, A. Canning, and M. Asta, Vacancy formation energies in metals: A compari- son of metagga with lda and gga exchange-correlation functionals, Computational Materials Science 101, 96 (2015).
On the vacancy formation energy and volume of simple cubic metals. Z Popovic, J Carbotte, G Piercy, Journal of Physics F: Metal Physics. 4351Z. Popovic, J. Carbotte, and G. Piercy, On the vacancy formation energy and volume of simple cubic metals, Journal of Physics F: Metal Physics 4, 351 (1974).
Vacancy formation energy of simple metals using reliable model and ab initio pseudopotentials. S Haldar, A Ghorai, D Sen, Diffusion-Fundamentals.org. 1S. Haldar, A. Ghorai, and D. Sen, Vacancy formation energy of simple metals using reliable model and ab initio pseudopotentials, Diffusion-Fundamentals.org , 1 (2017).
Briddon, Formation energy and migration barrier of a ge vacancy from ab initio studies, Materials science in semiconductor processing. H Pinto, J Coutinho, V Torres, S Öberg, P , 9498H. Pinto, J. Coutinho, V. Torres, S.Öberg, and P. Brid- don, Formation energy and migration barrier of a ge va- cancy from ab initio studies, Materials science in semi- conductor processing 9, 498 (2006).
Firstprinciples calculations for point defects in solids, Reviews of modern physics. C Freysoldt, B Grabowski, T Hickel, J Neugebauer, G Kresse, A Janotti, C G Van De Walle, 86253C. Freysoldt, B. Grabowski, T. Hickel, J. Neugebauer, G. Kresse, A. Janotti, and C. G. Van de Walle, First- principles calculations for point defects in solids, Re- views of modern physics 86, 253 (2014).
Defect energies of graphite: Density-functional calculations. L Li, S Reich, J Robertson, Physical Review B. 72184109L. Li, S. Reich, and J. Robertson, Defect energies of graphite: Density-functional calculations, Physical Re- view B 72, 184109 (2005).
Ab initio atomic-scale determination of point-defect structure in hcp zirconium. C , A Legris, Philosophical Magazine. 85569C. Domain* and A. Legris, Ab initio atomic-scale de- termination of point-defect structure in hcp zirconium, Philosophical Magazine 85, 569 (2005).
Equilibrium vacancies and thermophysical properties of metals. Y Kraftmakher, Physics Reports. 29979Y. Kraftmakher, Equilibrium vacancies and thermo- physical properties of metals, Physics Reports 299, 79 (1998).
P Ehrhart, P Jung, H Schultz, H Ullmaier, Atomic defects in metals, zahlenwerte und funktionen aus naturwissenschaften und technik: Kristallund festkörperphysik. P. Ehrhart, P. Jung, H. Schultz, and H. Ullmaier, Atomic defects in metals, zahlenwerte und funktio- nen aus naturwissenschaften und technik: Kristallund festkörperphysik (1991).
Phase transformations and vacancy formation energies of transition metals by positron annihilation. H Matter, J Winter, W Triftshäuser, Applied physics. 20135H. Matter, J. Winter, and W. Triftshäuser, Phase trans- formations and vacancy formation energies of transition metals by positron annihilation, Applied physics 20, 135 (1979).
Equilibrium vacancy parameters and limit temperature of nonequilibrium melting in osmium. V Y Chekhovskoi, V D Tarasov, High Temperature. 50722V. Y. Chekhovskoi and V. D. Tarasov, Equilibrium va- cancy parameters and limit temperature of nonequilib- rium melting in osmium, High Temperature 50, 722 (2012).
Monovacancy formation enthalpy in silicon. S Dannefaer, P Mascher, D Kerr, Physical review letters. 562195S. Dannefaer, P. Mascher, and D. Kerr, Monovacancy formation enthalpy in silicon, Physical review letters 56, 2195 (1986).
Vacancy formation in iron investigated by positron annihilation in thermal equilibrium. H.-E Schaefer, K Maier, M Weller, D Herlach, A Seeger, J Diehl, Scripta Metallurgica. 11803H.-E. Schaefer, K. Maier, M. Weller, D. Herlach, A. Seeger, and J. Diehl, Vacancy formation in iron inves- tigated by positron annihilation in thermal equilibrium, Scripta Metallurgica 11, 803 (1977).
The formation energy of vacancies in aluminium and magnesium, physica status solidi (b). P Tzanetakis, J Hillairet, G Revel, 75433P. Tzanetakis, J. Hillairet, and G. Revel, The forma- tion energy of vacancies in aluminium and magnesium, physica status solidi (b) 75, 433 (1976).
Firstprinciples study of vacancy formation and migration energies in tantalum. A Satta, F Willaime, S De Gironcoli, Physical Review B. 607001A. Satta, F. Willaime, and S. de Gironcoli, First- principles study of vacancy formation and migration en- ergies in tantalum, Physical Review B 60, 7001 (1999).
Vacancies and changes of physical properties of metals at the melting point. T Görecki, International Journal of Materials Research. 65426T. Görecki, Vacancies and changes of physical properties of metals at the melting point, International Journal of Materials Research 65, 426 (1974).
An experimental estimation of the vacancy formation energy in diamond. J Bourgoin, Radiation effects. 79235J. Bourgoin, An experimental estimation of the vacancy formation energy in diamond, Radiation effects 79, 235 (1983).
Atomistic line graph neural network for improved materials property predictions. K Choudhary, B Decost, Computational Materials. 71K. Choudhary and B. DeCost, Atomistic line graph neu- ral network for improved materials property predictions, npj Computational Materials 7, 1 (2021).
Developing an improved crystal graph convolutional neural network framework for accelerated materials discovery. C W Park, C Wolverton, Physical Review Materials. 463801C. W. Park and C. Wolverton, Developing an improved crystal graph convolutional neural network framework for accelerated materials discovery, Physical Review Materials 4, 063801 (2020).
K T Schütt, P.-J Kindermans, H E Sauceda, S Chmiela, A Tkatchenko, K.-R Müller, arXiv:1706.08566Schnet: A continuous-filter convolutional neural network for modeling quantum interactions. arXiv preprintK. T. Schütt, P.-J. Kindermans, H. E. Sauceda, S. Chmiela, A. Tkatchenko, and K.-R. Müller, Schnet: A continuous-filter convolutional neural net- work for modeling quantum interactions, arXiv preprint arXiv:1706.08566 (2017).
Crystal graph convolutional neural networks for an accurate and interpretable prediction of material properties. T Xie, J C Grossman, Physical review letters. 120145301T. Xie and J. C. Grossman, Crystal graph convolutional neural networks for an accurate and interpretable pre- diction of material properties, Physical review letters 120, 145301 (2018).
Graph networks as a universal machine learning framework for molecules and crystals. C Chen, W Ye, Y Zuo, C Zheng, S P Ong, Chemistry of Materials. 313564C. Chen, W. Ye, Y. Zuo, C. Zheng, and S. P. Ong, Graph networks as a universal machine learning frame- work for molecules and crystals, Chemistry of Materials 31, 3564 (2019).
Machine learning method for tight-binding hamiltonian parameterization from ab-initio band structure. Z Wang, S Ye, H Wang, J He, Q Huang, S Chang, Computational Materials. 71Z. Wang, S. Ye, H. Wang, J. He, Q. Huang, and S. Chang, Machine learning method for tight-binding hamiltonian parameterization from ab-initio band struc- ture, npj Computational Materials 7, 1 (2021).
Machine learning approach to constructing tight binding models for solids with application to bitecl. M Nakhaee, S Ketabi, F Peeters, Journal of Applied Physics. 128215107M. Nakhaee, S. Ketabi, and F. Peeters, Machine learn- ing approach to constructing tight binding models for solids with application to bitecl, Journal of Applied Physics 128, 215107 (2020).
A density functional tight binding layer for deep learning of chemical hamiltonians. H Li, C Collins, M Tanha, G J Gordon, D J Yaron, Journal of chemical theory and computation. 145764H. Li, C. Collins, M. Tanha, G. J. Gordon, and D. J. Yaron, A density functional tight binding layer for deep learning of chemical hamiltonians, Journal of chemical theory and computation 14, 5764 (2018).
Stoichiometric and non-stoichiometric (1010) and (1120) surfaces in 2h-sic: a theoretical study. E Rauls, J Elsner, R Gutierrez, T Frauenheim, 10.1016/S0038-1098(99)00137-4Solid State Communications. 111459E. Rauls, J. Elsner, R. Gutierrez, and T. Frauen- heim, Stoichiometric and non-stoichiometric (1010) and (1120) surfaces in 2h-sic: a theoretical study, Solid State Communications 111, 459 (1999).
Surface energies of elemental crystals. R Tran, Z Xu, B Radhakrishnan, D Winston, W Sun, K A Persson, S P Ong, Scientific data. 31R. Tran, Z. Xu, B. Radhakrishnan, D. Winston, W. Sun, K. A. Persson, and S. P. Ong, Surface energies of ele- mental crystals, Scientific data 3, 1 (2016).
FIG. S1. Timings in seconds for simple TB and DFT total energy calculations for Si (top) and NaCl (bottom) with varying numbers of atoms (note log scale on y-axis). Square symbols use symmetry; circles are slightly distorted structures with no symmetry. TB calculations with 1 processor (blue) or 8 processors (cyan) are up to 1000 times faster than the equivalent DFT calculations with 1 (red) or 8 (orange) processors.
| [
"https://github.com/usnistgov/tb3py",
"https://github.com/usnistgov/ThreeBodyTB.jl/"
] |
[
"Neuronal cable equations derived from the hydrodynamic motion of charged particles",
"Neuronal cable equations derived from the hydrodynamic motion of charged particles",
"Neuronal cable equations derived from the hydrodynamic motion of charged particles",
"Neuronal cable equations derived from the hydrodynamic motion of charged particles"
] | [
"Davide Forcella \nParis-Saclay University\nCNRS\nGif sur YvetteFrance\n\nThe European Institute of Theoretical Neuroscience (EITN)\nParisFrance\n",
"Alberto Romagnoni \nParis-Saclay University\nCNRS\nGif sur YvetteFrance\n\nThe European Institute of Theoretical Neuroscience (EITN)\nParisFrance\n",
"Alain Destexhe \nParis-Saclay University\nCNRS\nGif sur YvetteFrance\n\nThe European Institute of Theoretical Neuroscience (EITN)\nParisFrance\n",
"Davide Forcella \nParis-Saclay University\nCNRS\nGif sur YvetteFrance\n\nThe European Institute of Theoretical Neuroscience (EITN)\nParisFrance\n",
"Alberto Romagnoni \nParis-Saclay University\nCNRS\nGif sur YvetteFrance\n\nThe European Institute of Theoretical Neuroscience (EITN)\nParisFrance\n",
"Alain Destexhe \nParis-Saclay University\nCNRS\nGif sur YvetteFrance\n\nThe European Institute of Theoretical Neuroscience (EITN)\nParisFrance\n"
] | [
"Paris-Saclay University\nCNRS\nGif sur YvetteFrance",
"The European Institute of Theoretical Neuroscience (EITN)\nParisFrance",
"Paris-Saclay University\nCNRS\nGif sur YvetteFrance",
"The European Institute of Theoretical Neuroscience (EITN)\nParisFrance",
"Paris-Saclay University\nCNRS\nGif sur YvetteFrance",
"The European Institute of Theoretical Neuroscience (EITN)\nParisFrance",
"Paris-Saclay University\nCNRS\nGif sur YvetteFrance",
"The European Institute of Theoretical Neuroscience (EITN)\nParisFrance",
"Paris-Saclay University\nCNRS\nGif sur YvetteFrance",
"The European Institute of Theoretical Neuroscience (EITN)\nParisFrance",
"Paris-Saclay University\nCNRS\nGif sur YvetteFrance",
"The European Institute of Theoretical Neuroscience (EITN)\nParisFrance"
] | [] | Neuronal cable theory is usually derived from an electric analogue of the membrane, which contrasts with the slow movement of ions in aqueous media. We show here that it is possible to derive neuronal cable equations from a different perspective, based on the laws of hydrodynamic motion of charged particles (Navier-Stokes equations). This results in similar cable equations, but with additional contributions arising from nonlinear interactions inherent to fluid dynamics, and which may shape the integrative properties of the neurons. | 10.1007/s00033-023-01986-y | [
"https://export.arxiv.org/pdf/2201.04927v1.pdf"
] | 245,906,406 | 2201.04927 | 52439d3990ea66711479d69964fb0b331ff0f5d8 |
Neuronal cable equations derived from the hydrodynamic motion of charged particles
13 Jan 2022
Davide Forcella
Paris-Saclay University
CNRS
Gif sur YvetteFrance
The European Institute of Theoretical Neuroscience (EITN)
ParisFrance
Alberto Romagnoni
Paris-Saclay University
CNRS
Gif sur YvetteFrance
The European Institute of Theoretical Neuroscience (EITN)
ParisFrance
Alain Destexhe
Paris-Saclay University
CNRS
Gif sur YvetteFrance
The European Institute of Theoretical Neuroscience (EITN)
ParisFrance
Neuronal cable equations derived from the hydrodynamic motion of charged particles
13 Jan 2022 (Dated: January 14, 2022)
Neuronal cable theory is usually derived from an electric analogue of the membrane, which contrasts with the slow movement of ions in aqueous media. We show here that it is possible to derive neuronal cable equations from a different perspective, based on the laws of hydrodynamic motion of charged particles (Navier-Stokes equations). This results in similar cable equations, but with additional contributions arising from nonlinear interactions inherent to fluid dynamics, and which may shape the integrative properties of the neurons.
The dynamics of the membrane potential ($V_m$) of neurons in their complex dendritic arborization can be described by cable equations, introduced by Rall almost 60 years ago [11] and still used today. These take the form of a set of partial differential equations (PDEs) that describe the $V_m$ dynamics in space and time. The derivation of cable equations is usually based on an electric-circuit analogue of the membrane, which is widely used to model neurons [6,7]. This formalism has been very successful in modeling a wide range of neural phenomena involving dendrites [12].
Besides this success, treating the neuron as an electric circuit has some drawbacks. For example, the classic cable theory accounts for charge accumulation in the membrane capacitance, but forbids charge accumulation inside the cable, and thus forbids the formation of electric monopoles in neurons [2]. Ideally, one should derive equivalent cable equations by allowing dendritic charge accumulation, which was attempted through the "generalized cable" model [1]. In that approach, the current is extended to include the displacement current, thus allowing charge accumulation inside the dendrite. However, the generalized cable is still based on an electric-circuit analogue. As a consequence, the electromagnetic signal is considered as instantaneous, which appears unrealistic for neurons. Indeed, in neuronal cables, the charges are ions moving in aqueous media (cytoplasm or extracellular fluid), and have a mobility orders of magnitude lower than electrons in metal. For instance, the mobility of electrons in copper is $4.45 \times 10^{-3}\ \mathrm{m^2/(V\,s)}$ at a temperature of 298.15 K [10], which is about $10^5$ times larger than the mobility of Na$^+$ in sea water, $5.19 \times 10^{-8}\ \mathrm{m^2/(V\,s)}$; the values are similar for other ions such as K$^+$, Ca$^{2+}$ and Cl$^-$ [6]. Despite such huge differences, there is presently no formalism to correctly describe this slow charge movement.
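As a quick arithmetic check on these figures, the ratio of the two quoted mobilities is $(4.45 \times 10^{-3})/(5.19 \times 10^{-8}) \approx 8.6 \times 10^{4}$, consistent with the roughly five orders of magnitude invoked here.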
In the present letter, we show that it is possible to derive cable equations from another perspective, which does not require the assumption of instantaneity made for electric circuits. We treat ions moving in aqueous media as fluids, and use the equations of hydrodynamic motion to describe this flow. We show that treating this fluid as linear leads to the classic passive cable equations, while including the nonlinearity inherent to fluids leads to novel nonlinear terms that could be tested experimentally.
The classic derivation of the cable equation starts from Ohm's law, according to which the axial current $i_i$ in a cylindrical cable can be written as:

$$i_i = -\frac{1}{r_i}\,\frac{\partial V_m}{\partial x}, \qquad (1)$$
where $V_m$ is the membrane potential, $r_i$ is the resistivity in the axial direction, and $x$ is the distance along the axis of the cylindrical cable. Current balance implies that the variation of the current along $x$ is equal to the membrane current $i_m$:
$$i_m = -\frac{\partial i_i}{\partial x}. \qquad (2)$$
According to the RC-circuit analogue of the membrane, the membrane current is given by:
$$i_m = c_m\,\frac{\partial V_m}{\partial t} + \frac{V_m}{r_m}, \qquad (3)$$
where $c_m$ is the specific membrane capacitance, and $r_m$ the resistivity of the membrane. Combining the above equations, we obtain:
$$\frac{1}{r_i}\,\frac{\partial^2 V_m}{\partial x^2} - c_m\,\frac{\partial V_m}{\partial t} - \frac{V_m}{r_m} = 0, \qquad (4)$$
which is known as the cable equation. Note that, in this form, the cable is of constant radius, and $c_m$, $r_m$, and $r_i$ are constant. Using the more compact notation $\partial_t$ and $\partial_x$ for the partial derivatives, we can rewrite the cable equation in the form:
$$\left(\tau_{CT}\,\partial_t - \lambda_{CT}^2\,\partial_x^2 + \mathbb{I}\right) V_m = 0, \qquad (5)$$
where the length and time constants are:
$$\lambda_{CT} = \sqrt{\frac{r_m}{r_i}}, \qquad (6)$$
$$\tau_{CT} = r_m c_m. \qquad (7)$$
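To make Eqs. (6)-(7) concrete, the short sketch below evaluates $\lambda_{CT}$ and $\tau_{CT}$ for a thin cylindrical cable. The specific parameters ($R_m$, $C_m$, $R_i$, and the diameter $d$) are illustrative textbook-scale values chosen by us, not numbers taken from this letter.

```python
import numpy as np

# Illustrative textbook-scale parameters (our assumptions, not values from this letter)
R_m = 1.0e4   # specific membrane resistance [Ohm cm^2]
C_m = 1.0e-6  # specific membrane capacitance [F / cm^2]
R_i = 1.0e2   # cytoplasmic resistivity [Ohm cm]
d   = 2.0e-4  # cable diameter [cm] (2 um)

# Per-unit-length coefficients for a cylinder of diameter d
r_m = R_m / (np.pi * d)           # membrane resistance x length [Ohm cm]
c_m = C_m * np.pi * d             # capacitance per length [F / cm]
r_i = 4.0 * R_i / (np.pi * d**2)  # axial resistance per length [Ohm / cm]

lam = np.sqrt(r_m / r_i)  # Eq. (6): electrotonic length constant [cm]
tau = r_m * c_m           # Eq. (7): membrane time constant [s]
print(f"lambda_CT ~ {lam * 1e4:.0f} um, tau_CT ~ {tau * 1e3:.1f} ms")  # ~707 um, 10.0 ms
```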
Since the operator acting on $V_m$ in Eq. (5) commutes with spatial and temporal derivatives, we can rewrite the equation for the electric current vector $\vec{i}$,
$$\left(\tau_{CT}\,\partial_t - \lambda_{CT}^2\,\partial_x^2 + \mathbb{I}\right)\vec{i} = 0, \qquad (8)$$
where $i_x = i_i$ is the axial current, and $i_y = i_z = \kappa\, i_m$, with $i_m$ the membrane current and $\kappa$ a suitable constant with the dimension of a length. Notice that this would no longer be true if $\kappa$ depended on $t$ and $x$.
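The commutation property invoked for Eq. (8) is easy to verify symbolically. The sketch below defines the cable operator of Eq. (5) as a small helper (our own naming) and checks that it commutes with $\partial_x$, so that Eq. (5) for $V_m$ implies the same equation for $i_i \propto \partial_x V_m$.

```python
import sympy as sp

x, t = sp.symbols('x t')
tau, lam = sp.symbols('tau lam', positive=True)
V = sp.Function('V')(x, t)

def cable_op(f):
    # Cable operator of Eq. (5): tau * d_t - lam^2 * d_x^2 + identity
    return tau * sp.diff(f, t) - lam**2 * sp.diff(f, x, 2) + f

# [cable_op, d_x] = 0: acting on d_x V equals d_x acting on (cable_op V)
print(sp.simplify(cable_op(sp.diff(V, x)) - sp.diff(cable_op(V), x)))  # -> 0
```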
To describe neuronal cables from a different perspective, we consider the flow of ions in the aqueous medium of dendrites and axons as the flow of a charged fluid subject to boundary conditions at the border (the membrane) [3]. A large class of fluids can be described by the well-known Navier-Stokes (NS) equations [9]:
$$\rho\left[\partial_t \vec{v} + (\vec{v}\cdot\vec{\nabla})\vec{v}\right] = -\vec{\nabla} P + \eta\,\nabla^2 \vec{v} + \left(\frac{1}{3}\eta + \zeta\right)\vec{\nabla}(\vec{\nabla}\cdot\vec{v}) + \vec{f} - \rho\,\frac{\vec{v}}{\tau_{NS}}, \qquad (9)$$
where $\vec{v}$ is the velocity vector field of the fluid, $\rho$ is the density of the fluid (constant and homogeneous for the fluid at equilibrium), $P$ is the external pressure, $\eta$ the shear viscosity of the fluid, $\zeta$ its bulk viscosity, and $\vec{f}$ the set of vector forces external to the fluid (e.g., due to an external electric field, gravity, etc.). The term $\rho\,\vec{v}/\tau_{NS}$ is a correction to the NS equations due to the interaction of the fluid with the external medium: it is an effective friction term that slows down the flow of the fluid. It is a first-order correction to the NS equations, where $\tau_{NS}$ is the mean free time of interaction between the fluid's components and the external medium.
For a charged fluid in a neuronal cable, we consider the following conditions:
• in the absence of external forces, $\vec{f} = 0$;
• with a homogeneous pressure, $\vec{\nabla} P = 0$;
• the fluid is incompressible, $\vec{\nabla}\cdot\vec{v} = 0$;
• the fluid is linear, $(\vec{v}\cdot\vec{\nabla})\vec{v} \sim 0$.
Under these hypotheses, the NS Eq. (9) then reduces to:

$$\left(\rho\,\partial_t - \eta\,\nabla^2 + \frac{\rho}{\tau_{NS}}\right)\vec{v} = 0. \qquad (10)$$
The current density is naturally defined as $\vec{j} = \rho_q \vec{v}$, where the average charge density is $\rho_q = q\rho$, and $q$ is the average charge carried by an element of fluid. Note that $q$ should be considered as a "net" charge, because both positive and negative ions contribute to the charged fluid (this is similar to considering the total membrane current although it is carried by different ions). In the case of constant and homogeneous charge density ($\partial_t \rho_q = \vec{\nabla}\rho_q = 0$), or if the charge density satisfies the diffusion equation $\partial_t \rho_q - \frac{\eta}{\rho}\nabla^2 \rho_q = 0$, Eq. (10) can be written as:
$$\left(\tau_{NS}\,\partial_t - \lambda_{NS}^2\,\nabla^2 + \mathbb{I}\right)\vec{j} = 0, \qquad (11)$$
where $\lambda_{NS}^2 = \tau_{NS}\,\eta/\rho$. This equation has exactly the same mathematical form as the cable Eq. (5).
Notice that in particular this duality implies the correspondence $\lambda_{CT}^2 = \lambda_{NS}^2$ and $\tau_{CT} = \tau_{NS}$, which gives the following relations:
$$\eta = \frac{\rho}{r_i c_m}, \qquad (12)$$
$$\tau_{NS} = r_m c_m. \qquad (13)$$
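As a symbolic sanity check of this correspondence, the snippet below solves the matching conditions $\lambda_{NS}^2 = \lambda_{CT}^2 = r_m/r_i$ and $\tau_{NS} = \tau_{CT} = r_m c_m$ for $\eta$ and $\tau_{NS}$, recovering Eqs. (12)-(13); no physical input enters here.

```python
import sympy as sp

rho, eta, tau_NS, r_m, r_i, c_m = sp.symbols('rho eta tau_NS r_m r_i c_m', positive=True)

matching = [sp.Eq(tau_NS * eta / rho, r_m / r_i),  # lambda_NS^2 = lambda_CT^2
            sp.Eq(tau_NS, r_m * c_m)]              # tau_NS = tau_CT
sol = sp.solve(matching, [eta, tau_NS], dict=True)[0]
print(sol[eta])     # rho/(c_m*r_i)  -> Eq. (12)
print(sol[tau_NS])  # c_m*r_m       -> Eq. (13)
```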
Note that this equality holds only in the case of a uniformly charged fluid where the charge density $q\rho$ is independent of position and time. In reality, most of the ions are located close to the neuronal membrane [6], so the charge density is non-uniform. Nevertheless, in a one-dimensional approximation of the cable (neglecting radial variations of density), and in cables of constant diameter (neglecting longitudinal changes in density), this relation should be useful to relate hydrodynamic and neuronal variables. Also note that if transient input currents are located along the dendritic cable (such as synapses), the charge density will change locally, possibly forming transient monopoles before re-equilibration. Thus, we see that a form equivalent to the cable equations can be derived from the linearized version of the Navier-Stokes equations. However, the Navier-Stokes equations are intrinsically nonlinear, and can be approximated by linear equations only in specific circumstances. Considering the full Navier-Stokes equations (Eq. 9) without the linear approximation, i.e., taking into account the nonlinear term $\rho(\vec{v}\cdot\vec{\nabla})\vec{v}$, provides a natural nonlinear correction to the cable equation (5):
$$\left(\partial_t - \frac{\eta}{\rho}\,\nabla^2 + \frac{1}{\tau_{NS}}\right)\vec{j} + \frac{1}{q\rho}\,(\vec{j}\cdot\vec{\nabla})\,\vec{j} = 0. \qquad (14)$$
In the linear response regime, $j_\alpha = E_\alpha/r_\alpha$, with $\vec{r}$ the vector of resistivities of the medium: $r_x = r_i$, $r_y = r_z = r_m$. In the simple case in which $\vec{E} = -\vec{\nabla} V_m$, Eq. (14) becomes:
$$\left(\partial_t - \frac{\eta}{\rho}\,\nabla^2 + \frac{1}{\tau_{NS}}\right) V_m - \frac{1}{q\rho}\,\left(\vec{\nabla}_{\vec{r}}\, V_m \cdot \vec{\nabla}\right) V_m = 0, \qquad (15)$$

where $\vec{\nabla}_{\vec{r}}$ denotes the gradient weighted by the inverse resistivities, $(\vec{\nabla}_{\vec{r}})_\alpha = r_\alpha^{-1}\partial_\alpha$.
Reintroducing the classical coefficients used in the cable equation (4), and considering the particular case in which $V_m$ depends only on $x$, leads to:

$$-c_m\,\partial_t V_m + \left(\frac{1}{r_i}\,\partial_x^2 - \frac{1}{r_m}\right) V_m + \frac{1}{q r_i}\,(\partial_x V_m)^2 = 0. \qquad (16)$$
Thus, a new nonlinear term $\frac{1}{q r_i}(\partial_x V_m)^2$ appears in the cable equation, due to the nonlinear nature of fluid dynamics as described by the Navier-Stokes equations.
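To get a first feel for the size of this correction, the minimal finite-difference sketch below integrates Eq. (16) with and without the $(\partial_x V_m)^2$ term. All parameters are dimensionless placeholders chosen for numerical convenience (they are not fitted to any real neuron), and a simple explicit Euler scheme with periodic boundaries is used.

```python
import numpy as np

# Explicit finite-difference sketch of Eq. (16); all parameters are illustrative.
c_m, r_i, r_m, q = 1.0, 1.0, 1.0, 5.0  # q -> infinity recovers the linear cable
L, N, T, dt = 10.0, 200, 2.0, 1e-4
x = np.linspace(0.0, L, N, endpoint=False)
dx = x[1] - x[0]

def evolve(V, nonlinear=True):
    V = V.copy()
    for _ in range(int(T / dt)):
        d2V = (np.roll(V, -1) - 2.0 * V + np.roll(V, 1)) / dx**2  # periodic BCs
        dV = (np.roll(V, -1) - np.roll(V, 1)) / (2.0 * dx)
        rhs = d2V / r_i - V / r_m
        if nonlinear:
            rhs += dV**2 / (q * r_i)  # the new hydrodynamic term of Eq. (16)
        V += dt * rhs / c_m
    return V

V0 = np.exp(-(x - L / 2.0)**2)  # localized initial depolarization
print(np.max(np.abs(evolve(V0, True) - evolve(V0, False))))  # size of the nonlinear correction
```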
To conclude, we have shown that the well-known cable equations of neurons [11] can be derived from fluid-dynamics considerations. The cable equations are classically derived from the RC-circuit analogue of the membrane; here, very similar cable equations were derived without modeling the cable as a series of RC circuits, but rather from the hydrodynamic motion of charged particles. We discuss below the possible consequences of this work, and openings for future studies.
A first consequence is that there is no need to associate an RC circuit with the cable interactions in neuronal structures. The RC circuit describes well the ionic and capacitive currents in neuronal membranes. However, associating an RC-circuit analogue with the current flow (axial current) inside the dendrites makes the assumption that the electromagnetic signal propagates infinitely fast. Such an assumption may be adequate for electronic circuits, but biological media are much slower, principally because the charges are ions moving in a fluid (cytoplasm or extracellular space), which have a mobility five orders of magnitude lower than that of electrons in a metal [10]. Thus, as noticed before [2], the RC-circuit analogue forbids phenomena such as charge accumulation inside the dendrite, although there is experimental evidence for such charge accumulation and electric monopoles in neural tissue [13]. With the hydrodynamic analogue we propose here, such charge accumulation would a priori be possible, because fluid dynamics is fully compatible with slow charge movement. This aspect should be examined in more detail in future studies.
A second consequence of this hydrodynamic analogue is that going beyond the linear approximation leads to a new term in the cable equations. This constitutes a strong prediction of this formalism. Notice that, in general, two different types of corrections to the linear approximation of the NS equations for a fluid could be taken into account: corrections proportional to powers of $\vec{v}$ and corrections proportional to powers of derivatives of $\vec{v}$. The first type is due to interactions between the fluid and the external medium. These are friction terms that break translational invariance. The linear term $\sim \vec{v}/\tau_{NS}$ breaks the invariance in an isotropic way, and $\tau_{NS}$ is the thermalization time of this type of interaction. On the other hand, the second type of correction, which we are considering here, is related to internal fluid-fluid interactions and constitutes the first nonlinear correction to the classical linear approximation of the NS equations.
Future studies should examine the possible consequences of this additional nonlinear term, as well as of the terms that arise from relaxing the constant and homogeneous charge-density hypothesis, on the integrative properties of neurons. Further, one should also examine whether some of these consequences could be measured experimentally, resulting in a test of the predictions of the present formalism.
The hydrodynamic cable formalism will allow us to investigate the functional consequences of the slow movement of charges in neurons, which the description of neurons as (infinitely fast) electric circuits previously precluded. More generally, it provides a first step towards a description of the slow movement of charges inherent to biological media.
We thank Claude Bedard for useful discussions. Research funded by the CNRS, the European Community (H2020-720270, H2020-785907), the ANR (PARADOX) and the ICODE excellence network.
[1] Bédard, C. and Destexhe, A. (2013) Generalized cable theory for neurons in complex and heterogeneous media. Physical Review E 88: 022709.
[2] Destexhe, A. and Bedard, C. (2012) Do neurons generate monopolar current sources? J. Neurophysiol. 108: 953-955.
[3] Forcella, D., Prada, C. and Carminati, R. (2017) Phys. Rev. Lett. 118: 134301.
[4] Gabriel, S., Lau, R.W. and Gabriel, C. (1996) The dielectric properties of biological tissues: II. Measurements in the frequency range 10 Hz to 20 GHz. Phys. Med. Biol. 41: 2251-2269.
[5] Gomes, J.M., Bédard, C., Valtcheva, S., Nelson, M., Khokhlova, V., Pouget, P., Venance, L., Bal, T. and Destexhe, A. (2016) Intracellular impedance measurements reveal non-ohmic properties of the extracellular medium around neurons. Biophys. J. 110: 234-246.
[6] Hille, B. (2001) Ionic Channels of Excitable Membranes. Sinauer Associates Inc, Sunderland MA.
[7] Hodgkin, A.L. and Huxley, A.F. (1952) A quantitative description of membrane current and its application to conduction and excitation in nerve. J. Physiol. Lond. 117: 500-544.
[8] Koch, C. (1999) Biophysics of Computation. Oxford University Press, Oxford, UK.
[9] Landau, L.D. and Lifschitz, E.M. Fluid Mechanics (Course of Theoretical Physics, Vol. 6).
[10] Philip, M. and Bolton, W. (2002) Technology of Engineering Materials. Elsevier, New York.
[11] Rall, W. (1962) Electrophysiology of a dendritic neuron model. Biophys. J. 2: 145-167.
[12] Rall, W. (1995) The Theoretical Foundations of Dendritic Function. MIT Press, Cambridge, MA.
[13] Riera, J.J., Ogawa, T., Goto, T., Sumiyoshi, A., Nonaka, H., Evans, A., Miyakawa, H. and Kawashima, R. (2012) J. Neurophysiol. 108: 956-975.
[14] Tuckwell, H.C. (1988) Introduction to Theoretical Neurobiology: Linear Cable Theory and Dendritic Structure. Cambridge University Press, Cambridge, UK.
| [] |
[
"Bridging the gap between classical and quantum many-body information dynamics",
"Bridging the gap between classical and quantum many-body information dynamics",
"Bridging the gap between classical and quantum many-body information dynamics",
"Bridging the gap between classical and quantum many-body information dynamics"
] | [
"Andrea Pizzi \nCavendish Laboratory\nUniversity of Cambridge\nCB3 0HECambridgeUnited Kingdom\n",
"Daniel Malz \nMax-Planck-Institute of Quantum Optics\nHans-Kopfermann-Str. 185748GarchingGermany\n\nMunich Center for Quantum Science and Technology (MCQST)\n80799MunichGermany\n",
"Andreas Nunnenkamp \nFaculty of Physics\nUniversity of Vienna\nBoltzmanngasse 51190ViennaAustria\n",
"Johannes Knolle \nMunich Center for Quantum Science and Technology (MCQST)\n80799MunichGermany\n\nDepartment of Physics\nTechnische Universität München\nJames-Franck-Straße 185748GarchingGermany\n\nBlackett Laboratory\nImperial College London\nSW7 2AZLondonUnited Kingdom\n",
"Andrea Pizzi \nCavendish Laboratory\nUniversity of Cambridge\nCB3 0HECambridgeUnited Kingdom\n",
"Daniel Malz \nMax-Planck-Institute of Quantum Optics\nHans-Kopfermann-Str. 185748GarchingGermany\n\nMunich Center for Quantum Science and Technology (MCQST)\n80799MunichGermany\n",
"Andreas Nunnenkamp \nFaculty of Physics\nUniversity of Vienna\nBoltzmanngasse 51190ViennaAustria\n",
"Johannes Knolle \nMunich Center for Quantum Science and Technology (MCQST)\n80799MunichGermany\n\nDepartment of Physics\nTechnische Universität München\nJames-Franck-Straße 185748GarchingGermany\n\nBlackett Laboratory\nImperial College London\nSW7 2AZLondonUnited Kingdom\n"
] | [
"Cavendish Laboratory\nUniversity of Cambridge\nCB3 0HECambridgeUnited Kingdom",
"Max-Planck-Institute of Quantum Optics\nHans-Kopfermann-Str. 185748GarchingGermany",
"Munich Center for Quantum Science and Technology (MCQST)\n80799MunichGermany",
"Faculty of Physics\nUniversity of Vienna\nBoltzmanngasse 51190ViennaAustria",
"Munich Center for Quantum Science and Technology (MCQST)\n80799MunichGermany",
"Department of Physics\nTechnische Universität München\nJames-Franck-Straße 185748GarchingGermany",
"Blackett Laboratory\nImperial College London\nSW7 2AZLondonUnited Kingdom",
"Cavendish Laboratory\nUniversity of Cambridge\nCB3 0HECambridgeUnited Kingdom",
"Max-Planck-Institute of Quantum Optics\nHans-Kopfermann-Str. 185748GarchingGermany",
"Munich Center for Quantum Science and Technology (MCQST)\n80799MunichGermany",
"Faculty of Physics\nUniversity of Vienna\nBoltzmanngasse 51190ViennaAustria",
"Munich Center for Quantum Science and Technology (MCQST)\n80799MunichGermany",
"Department of Physics\nTechnische Universität München\nJames-Franck-Straße 185748GarchingGermany",
"Blackett Laboratory\nImperial College London\nSW7 2AZLondonUnited Kingdom"
] | [] | The fundamental question of how information spreads in closed quantum many-body systems is often addressed through the lens of the bipartite entanglement entropy, a quantity that describes correlations in a comprehensive (nonlocal) way. Among the most striking features of the entanglement entropy are its unbounded linear growth in the thermodynamic limit, its asymptotic extensivity in finite-size systems, and the possibility of measurement-induced phase transitions, all of which have no obvious classical counterpart. Here, we show how these key qualitative features emerge naturally also in classical information spreading, as long as one treats the classical many-body problem on par with the quantum one, that is, by explicitly accounting for the exponentially large classical probability distribution. Our analysis is supported by extensive numerics on prototypical cellular automata and Hamiltonian systems, for which we focus on the classical mutual information and also introduce a 'classical entanglement entropy'. Our study sheds light on the nature of information spreading in classical and quantum systems, and opens new avenues for quantum-inspired classical approaches across physics, information theory, and statistics. | 10.1103/physrevb.106.214303 | [
"https://export.arxiv.org/pdf/2204.03016v1.pdf"
] | 248,005,983 | 2204.03016 | 60800cda0c29a9f79163d73e455b74ce92de0a86 |
Bridging the gap between classical and quantum many-body information dynamics
Andrea Pizzi
Cavendish Laboratory
University of Cambridge
CB3 0HECambridgeUnited Kingdom
Daniel Malz
Max-Planck-Institute of Quantum Optics
Hans-Kopfermann-Str. 185748GarchingGermany
Munich Center for Quantum Science and Technology (MCQST)
80799MunichGermany
Andreas Nunnenkamp
Faculty of Physics
University of Vienna
Boltzmanngasse 51190ViennaAustria
Johannes Knolle
Munich Center for Quantum Science and Technology (MCQST)
80799MunichGermany
Department of Physics
Technische Universität München
James-Franck-Straße 185748GarchingGermany
Blackett Laboratory
Imperial College London
SW7 2AZLondonUnited Kingdom
Bridging the gap between classical and quantum many-body information dynamics
The fundamental question of how information spreads in closed quantum many-body systems is often addressed through the lens of the bipartite entanglement entropy, a quantity that describes correlations in a comprehensive (nonlocal) way. Among the most striking features of the entanglement entropy are its unbounded linear growth in the thermodynamic limit, its asymptotic extensivity in finite-size systems, and the possibility of measurement-induced phase transitions, all of which have no obvious classical counterpart. Here, we show how these key qualitative features emerge naturally also in classical information spreading, as long as one treats the classical many-body problem on par with the quantum one, that is, by explicitly accounting for the exponentially large classical probability distribution. Our analysis is supported by extensive numerics on prototypical cellular automata and Hamiltonian systems, for which we focus on the classical mutual information and also introduce a 'classical entanglement entropy'. Our study sheds light on the nature of information spreading in classical and quantum systems, and opens new avenues for quantum-inspired classical approaches across physics, information theory, and statistics.
I. INTRODUCTION
Many-body quantum systems display a huge variety of physical phenomena and may carry a vast amount of information in the exponentially many components of their wavefunction. Characterizing this information has become a major goal of modern quantum science, of prime relevance for quantum computing [1] and quantum simulation [2,3]. In recent years the study of closed many-body quantum systems has flourished in particular in the nonequilibrium regime, with key questions revolving around the dynamics of equilibration and information spreading [4,5]. These have become increasingly relevant in light of the experimental advances with nonequilibrium many-body systems kept in almost isolated conditions, e.g., in Rydberg atom arrays [6] or cold atoms in optical lattices [7].
One of the most prominent tools that has emerged from this field is the bipartite entanglement entropy (EE). As it can account for correlations in a comprehensive, nonlocal, multipoint way, the EE has been extensively adopted to monitor the dynamical entangling of the system's parts in pure quantum systems [8][9][10][11][12][13][14][15][16][17][18][19][20][21][22][23][24]. Among the paradigms that have been unearthed are the linear and logarithmic unbounded growth of the EE in the thermodynamic limit for generic many-body systems [21] and in many-body localized (MBL) ones [12,25], respectively, and its saturation to an extensive value for finite-size systems at long times. More recently, it has been shown that this extensivity allows for novel measurement-induced phase transitions (MIPT), whereby the rate of local random measurements determines whether the asymptotic EE scales proportional to the volume of the considered system's parts ('volume-law scaling'), or to the area of the boundary between them ('area-law scaling'). Since its discovery three years ago [26], the MIPT has received a remarkable amount of interest [27][28][29][30][31][32][33][34][35].

* Corresponding author, [email protected]
An obvious and conceptually fundamental question is: to what extent are these distinctive and celebrated many-body features of quantum information spreading purely quantum? One of the reasons why they have become well-appreciated in the quantum world is that state-of-the-art numerical techniques can approximate [38] or even exactly describe [39,40] the exponentially large many-body wavefunction. In contrast, the probability distribution of classical nonequilibrium many-body systems, while also being exponentially large, is normally not explicitly accounted for. Rather, classical information spreading is mostly investigated in terms of spatiotemporal correlations, transport properties, and the spreading of perturbations [41][42][43][44][45][46][47][48][49][50][51], all of which are based on the study of few-body observables that can be effectively estimated à la Monte Carlo as averages over a handful of trajectories. According to the paradigm of thermalization, the expected value of these few-point observables reaches, at long times and under general circumstances, the value predicted by a thermal ensemble $\propto e^{-\beta H}$ at a suitable temperature $\beta^{-1}$ [52]. But, for instance, the bipartite mutual information (MI), which is often regarded as the natural classical analogue of the bipartite EE, is many-body in nature, requires knowledge of the system's full probability distribution, and at long times does not obviously reach the value predicted by a thermal ensemble.
Here, we argue that this mismatch in the description of quantum wavefunctions and classical probability distributions has led to a gap between our understanding of classical and quantum information spreading. We bridge this gap by (i) accounting for the dynamics of the full many-body probability distribution, thus treating the classical many-body problem on par with the quantum one, and (ii) focusing on many-body information measures akin to quantum EE, such as the classical MI and the 'classical EE' (cEE), a complementary measure for classical non-separability that we introduce. We find that the qualitative features of quantum and classical information spreading are remarkably similar, and show the first instances of asymptotic extensivity, unbounded linear growth, and MIPT of MI and cEE in a classical setting. The analogies between classical and quantum information spreading are summarized in Table I. The rest of the paper is structured as follows. In Section II we introduce the main notation and our definition of the cEE. In Section III we review the idea of thermalization following a quench to remark on the importance of retaining information on the full and exponentially large state of the system, both in the classical and quantum cases. In Section IV we show how the main features of quantum information spreading naturally emerge in classical cellular automata: the bipartite MI and cEE grow linearly in time until saturating to an extensive value. Emphasis is put on the role that time reversibility plays in these effects. Upon interleaving the automaton dynamics with local measurements, we find and analyse a classical MIPT in Section V. The discussion moves then to continuous Hamiltonian dynamics in Section VI, where with analytical arguments and numerics we show the asymptotic MI to be generally extensive, reducing to area-law only for initial conditions at an effective infinite temperature. We highlight a key difference between the MI and the cEE, namely that the cEE remains asymptotically extensive even at infinite temperature, thus further narrowing the classical-quantum gap. In Section VII we argue that, beyond sharing the same phenomenology, classical and quantum many-body information spreading also require remarkably similar experimental and computational protocols to be observed, involving in both cases either exponentially-large resources or exponentially many runs. We conclude in Section VIII with a discussion of the results and an outlook.
II. NOTATION AND DEFINITIONS
Consider a bipartite system (A, B) consisting of either classical or quantum degrees of freedom. We are interested in quantifying the amount of information that the parts A and B carry on one another. The system is described by a probability distribution p and by a density matrix ρ in the classical and quantum cases, respectively. While we are here ultimately interested in the study of classical systems, to most easily appreciate the classical-quantum connections we will try to develop the formalism in clear analogy with the quantum one. Whether we are talking about a classical or a quantum system should be clear from context and notation.
The part A itself is described by the marginal probability distribution p_A = Tr_B p = ∑_B p_{A,B} or by the reduced density matrix ρ_A = Tr_B ρ, with Tr_B the partial trace over B. The classical entropy reads S = −∑_{A,B} p_{A,B} log p_{A,B}, whereas the quantum (von Neumann) entropy yields S = −Tr[ρ log ρ]. Similarly, the marginal entropy associated with A reads S_A = −∑_A p_A log p_A or S_A = −Tr_A[ρ_A log ρ_A], and analogously for S_B.
A central object in information theory, statistics, and statistical physics is the MI, quantifying by how much our ignorance about B is reduced upon observing A [53]. Among its possible representations, one that holds for both the classical and quantum cases is
I_{A;B} = S_A + S_B − S.  (1)
The MI is non-negative, and vanishes if and only if the state is separable, that is, when p_{A,B} = p_A p_B or ρ = ρ_A ⊗ ρ_B. Thus, the MI unambiguously diagnoses any statistical interdependence between A and B.
Generally, the marginal entropies S_A and S_B are not representative of the degree of interdependence of A and B, but just of the degree of uncertainty about A and B themselves: S_A and S_B can be positive while I_{A;B} vanishes. Still, special cases exist in which S_A and S_B are proportional to the MI. Most notably, for pure quantum states (ρ = |ψ⟩⟨ψ|) one has S = 0, and thus S_A = S_B = (1/2) I_{A;B}. Indeed, in this case S_A is known as the EE. Since S_A is defined for generic mixed states, but can only be called the EE when used on pure states, it is worth, at the cost of some redundancy, introducing a new symbol S_e for the EE, whose explicit expression reads
S_e = −Tr_A(ρ_A log ρ_A),   ρ_A = Tr_B(|ψ⟩⟨ψ|).  (2)
The MI is a natural classical counterpart of the EE. Yet, seeking to make the classical-quantum analogy even more direct, we take an extra step and propose an alternative analogue of the EE, the cEE, whose definition is closely inspired by the quantum one. As it will turn out, the cEE complements the MI, showing behaviours analogous to those of the quantum EE even when the MI fails to do so. To define it, we first introduce the 'classical reduced density matrix' ρ_A as the matrix with entries
(ρ_A)_{A′,A″} = ∑_B √(p_{A′,B} p_{A″,B}).  (3)
The diagonal elements of ρ_A yield the marginal probability distribution, (ρ_A)_{A,A} = p_A, and so Tr ρ_A = 1. Since ρ_A is positive semi-definite [54] and Tr ρ_A = 1, its eigenvalues {λ_n} represent a valid (positive and normalized) probability distribution, which we use to define the cEE as
S_e = −Tr[ρ_A log ρ_A] = −∑_n λ_n log λ_n.  (4)
Note that the cEE S_e is nothing but the quantum EE associated to the 'classical wavefunction' [55,56] |ψ⟩ with components

⟨A, B|ψ⟩ := √p_{A,B},  (5)
which is correctly normalized, ⟨ψ|ψ⟩ = ∑_{A,B} p_{A,B} = 1, and which lives in a fictitious Hilbert space generated by considering the configurations of (A, B) as a basis {|A, B⟩}. By construction, many properties of the cEE thus directly follow from those of its quantum counterpart. Most importantly, S_e ≥ 0, and S_e = 0 if and only if I_{A;B} = 0, which makes the cEE a good witness of statistical dependence between A and B, in contrast to the marginal entropy S_A (the latter is computed on the diagonal of ρ_A rather than on its eigenvalues). Moreover, we note that, while for pure quantum states the MI and the EE coincide (up to a factor 2), the classical MI and cEE are generally different. Indeed, while they both quantify statistical dependence between A and B, the cEE features scaling behaviours that the MI lacks, and that are crucial to connect to quantum information spreading.
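For concreteness, the MI of Eq. (1) and the cEE of Eq. (4) can be evaluated in a few lines of Python for a joint distribution p_{A,B} stored as a matrix. The following is a minimal sketch; the function names and the toy distribution are illustrative choices of ours, not part of the formalism.

```python
import numpy as np

def mutual_information(p):
    """MI of Eq. (1) for a joint distribution p[a, b] over configurations of A and B."""
    ent = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))
    return ent(p.sum(axis=1)) + ent(p.sum(axis=0)) - ent(p)

def classical_EE(p):
    """cEE of Eq. (4): EE of the 'classical wavefunction' sqrt(p) of Eq. (5)."""
    psi = np.sqrt(p)
    rho_A = psi @ psi.T   # Eq. (3): (rho_A)_{A',A''} = sum_B sqrt(p_{A',B} p_{A'',B})
    lam = np.linalg.eigvalsh(rho_A)
    lam = lam[lam > 1e-12]
    return -np.sum(lam * np.log(lam))

# Example: a correlated two-bit distribution; both measures vanish iff p factorizes
p = np.array([[0.4, 0.1],
              [0.1, 0.4]])
print(mutual_information(p), classical_EE(p))
```

Equivalently, the eigenvalues of ρ_A are the squared singular values of √p, so the cEE can also be obtained from a singular value decomposition of the classical wavefunction.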
III. A LESSON FROM QUANTUM MECHANICS: THE IMPORTANCE OF TRACKING THE GLOBAL STATE
Touching upon a number of well-known ideas around the concepts of thermalization and convergence of ensembles, in this Section we establish close parallels between classical and quantum quench dynamics. This highlights the need to account for the global state of the system when looking at the MI, a key conceptual step necessary to correctly frame the classical problem on par with the quantum one.
Consider a many-body system undergoing non-dissipative dynamics generated by the Hamiltonian H. The global state of the system is described by a wavefunction |ψ⟩ and by a probability distribution p in the quantum and classical cases, respectively [57]. The wavefunction is represented with respect to a certain basis {|n⟩} as |ψ⟩ = ∑_n ⟨n|ψ⟩ |n⟩, whereas the probability distribution is represented with respect to a certain set of coordinates x as p(x). We are interested in the general scenario of local, many-body, and non-integrable Hamiltonians H.
We assume the standard situation in which fluctuations are initially local, meaning that the various parts of the system are statistically independent. Specifically, we consider a product state |ψ(0)⟩ = |ψ_A(0)⟩ ⊗ |ψ_B(0)⟩ and a disjoint probability distribution p(0) = p_A(0) p_B(0) for a quantum and a classical system, respectively, such that I_{A;B}(0) = S_e(0) = 0. The initial quantum wavefunction (classical probability distribution) has a simple structure in the Hilbert (phase) space. Of course, the shape depends on the specific choice of the basis (coordinates), but can be pictured as something simple if the latter is local, which we shall assume, see left panels in Fig. 1(a,b).
Under the non-dissipative dynamics, the spreading of correlations breaks the local character of the wavefunction (probability distribution), which in the local basis therefore becomes more and more 'structured'. In the quantum case, this can be understood from the wavefunction amplitudes |⟨n|ψ(t)⟩|² = |∑_E ⟨n|E⟩ e^{−iEt} ⟨E|ψ(0)⟩|² acquiring an effectively random character due to the chaotic spectrum {E} of many-body non-integrable Hamiltonians [58][59][60][61][62]. Indeed, the phases e^{−iEt} at long times t become effectively random, resulting in probabilities |⟨n|ψ⟩|² that fluctuate in time and with no clear dependence on n, see the rightmost panel of Fig. 1(b). In the classical case, the dynamical randomization of the probability is instead due to incompressibility and chaos, which taken together imply that the probability distribution develops finer and finer features [52], or more and more structure, as time increases, see the rightmost panel of Fig. 1(a).
It turns out that these fine features make only little difference for most observables of interest, according to the paradigm of thermalization. The latter stipulates that, in the time evolution after a quench, local few-point observables equilibrate at long times to their thermal value, which depends on the initial condition only through its energy. This general idea has long been established in classical physics in terms of chaos and ergodicity [52] and more recently for closed quantum many-body systems in terms of the eigenstate thermalization hypothesis (ETH) [58][59][60][61]. The concept of thermalization is naturally associated with that of convergence of ensembles [52], according to which we can think of the distribution of the system (classical probability p or quantum density matrix ρ = |ψ⟩⟨ψ|) as approaching the stationary thermal one ∝ e^{−βH}.
This point deserves much care, though. As a matter of fact, the idea of convergence of the ensembles only holds at the level of few-point observables, reduced density matrices, and marginal probability distributions. But as described above, the global state itself is far from stationary, let alone thermal, and this can have deep consequences on many-point observables. Most importantly, while the long-time value of the quantum MI fulfills volume law [8,13], which is rooted precisely in the effective randomness of the many-body wavefunction [37], the MI of a thermal state fulfills area-law scaling [36]. Indeed, a thermal distribution ρ_th washes out the fine structure of the state, thus missing a large (extensive) amount of MI, as pictorially illustrated in Fig. 1(c).
This example highlights the importance of keeping track of the exponentially large state of the system for studying many-body information spreading. While this is customary for quantum theories, it is not for classical ones, which instead either assume its convergence to a canonical ensemble, or focus on few-body observables within Monte Carlo sampling. Identifying this difference as the main origin of the gap between our understanding of classical and quantum information spreading is the main conceptual finding of our work.
Before substantiating these ideas with numerics on Hamiltonian dynamics, which involves an extra phase-space discretization procedure and which we postpone to Section VI, let us take a step back and start by considering the simpler case of classical cellular automata.
IV. INFORMATION SPREADING IN CLASSICAL CELLULAR AUTOMATA
In this Section, we apply our probabilistic framework to investigate information spreading in classical cellular automata. Imagine drawing an initial condition from a disjoint (in A and B) initial probability distribution and evolving it 'blindly', meaning without looking at it. If at time t we inspect the state of half of the system, A, how much do we learn about B? To address this question, we will consider how the many-body probability distribution evolves under the automaton dynamics and compute the MI and cEE. In addition to offering a discrete setting convenient for implementation, cellular automata help us to highlight the role played by time reversibility in the dynamics of MI and cEE.
Consider a system made of N bits s = (s_1, s_2, . . . , s_N) ∈ {0, 1}^N. The bit-string evolves in time under the action of some update rule, s(t + 1) = Rule[s(t)], with discrete time t = 0, 1, 2, . . . . For concreteness, we shall henceforth focus on Wolfram's Rule 30 [63]. This rule shares two key features with the standard quantum settings for information spreading, namely chaos and locality, but lacks a third: time reversibility. This ingredient is decisive, because it ensures that information is preserved in time. In fact, of the 256 rules that can be obtained from local updates involving only the nearest neighbouring sites, none is both chaotic and time-reversible [63]. One way of recovering time-reversibility from a given Rule, while preserving chaos, is to modify it as
s(t + 1) = Rule[s(t)] ⊕ s(t − 1),
where ⊕ denotes a bit-wise logical XOR operation [63,64]. Such a modified rule, which we shall call Rule R, constitutes a 'second-order' automaton, in which the state of the system at time t + 1 does not just depend on the state of the system at time t, but also on that at time t − 1. Put differently, the reversibility of the automaton means that there exists an injective map connecting s(1) and s(0) to s(t) and s(t − 1), and vice versa. Rule 30, Rule 30R, and two instances of the spatio-temporal profiles that they generate are shown in Fig. 2(a,b).
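As an illustration, a minimal Python sketch of the two update rules could read as follows; we assume periodic boundary conditions and use the standard closed form of Rule 30, s′_j = s_{j−1} XOR (s_j OR s_{j+1}).

```python
import numpy as np

def rule30_step(s):
    """Wolfram's Rule 30 with periodic boundaries: s'_j = s_{j-1} XOR (s_j OR s_{j+1})."""
    left, right = np.roll(s, 1), np.roll(s, -1)
    return left ^ (s | right)

def rule30R_step(s_now, s_prev):
    """Reversible second-order variant: s(t+1) = Rule30[s(t)] XOR s(t-1)."""
    return rule30_step(s_now) ^ s_prev

# Invertibility check: s(t-1) is recovered exactly from s(t+1) and s(t)
rng = np.random.default_rng(0)
s_prev, s_now = rng.integers(0, 2, 8), rng.integers(0, 2, 8)
s_next = rule30R_step(s_now, s_prev)
assert np.array_equal(rule30_step(s_now) ^ s_next, s_prev)
```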
In the case of first-order automata like Rule 30, the full information on the system is carried, at all times t, by a probability distribution p with support on the 2^N possible bit-string configurations. In the case of a second-order automaton like Rule 30R, however, such a distribution does not suffice. Indeed, what matters in this case is not just the configuration of the bit-string at time t, but also that at time t − 1. Consequently, to retain the full information on the system, one has to define the possible states σ as the bit-strings of length 2N composed of the bits of s(t) and s(t − 1), see Fig. 2(c). The probability distribution p will correspondingly have 2^{2N} components [65].
Initially, we take the bits {s_j(t = 0)} and {s_j(t = −1)} to be independent and identically distributed (i.i.d.) random variables, equal to 1 with probability q_0 and to 0 otherwise. That is, each of the 4^N possible initial conditions σ_0 has a probability p_{σ_0}(0) that reads

p_{σ_0}(0) = ∏_{j=1}^{2N} [σ_{0,j} q_0 + (1 − σ_{0,j})(1 − q_0)].  (6)
The evolution of the probability vector from one time to the next is described by a map,

p(t + 1) = F[p(t)],  (7)
which can be determined once and for all by checking how each two-time microstate σ is evolved by one application of the automaton. We note that, in the case of reversible automata like Rule 30R, the map F acts as a permutation of the elements of p, because of injectivity. Indeed, in this case the probability does not change when 'sitting' on a discrete trajectory σ_t starting in σ_0. In a formula, p_{σ_t}(t) = p_{σ_0}(0), which we can view as the discrete version of Liouville's theorem for Hamiltonian systems [52]. By iterating Eq. (7), we evolve the many-body probability distribution p(t) and investigate the corresponding MI and cEE dynamics for various system sizes N in Fig. 2(d). In the non-reversible case of Rule 30, we find that the MI and the cEE generally grow until reaching a stationary value (except in the case N = 6, which appears somewhat special and which we attribute to particularly severe finite-size effects). The key features characterizing quantum information spreading, namely linear growth and extensivity at short and long times, respectively, are absent. In striking contrast, these appear in the time-reversible automaton. In this case, we find that both the MI and the cEE grow linearly in time, S_e, I_{A;B} ∝ t, until a time ∝ N, after which saturation to an extensive value ∝ N takes over. Since the cEE is the EE obtained by treating the classical probability as a wavefunction, see Eq. (5), we understand that the extensivity of the asymptotic cEE is due to an effective randomization of p, analogously to how in the quantum case it emerges from the effective randomness of the wavefunction [37]. Classical cellular automata therefore allow one to appreciate time-reversibility as a key ingredient underpinning the distinctive features of classical and quantum information spreading.
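A sketch of this exact evolution of the full probability vector, reusing rule30R_step from above (the bit-ordering conventions are our own illustrative choices), could look as follows.

```python
import numpy as np

N = 6                                 # 4**N two-time microstates sigma

def int_to_bits(x, n):
    return np.array([(x >> k) & 1 for k in range(n)])

def bits_to_int(b):
    return int(sum(int(v) << k for k, v in enumerate(b)))

# The map F of Eq. (7) is a permutation for the reversible Rule 30R
perm = np.empty(4**N, dtype=np.int64)
for idx in range(4**N):
    b = int_to_bits(idx, 2 * N)
    s_now, s_prev = b[:N], b[N:]
    s_next = rule30R_step(s_now, s_prev)
    perm[idx] = bits_to_int(np.concatenate([s_next, s_now]))

# i.i.d. initial bits as in Eq. (6)
q0 = 0.7
p = np.ones(1)
for _ in range(2 * N):
    p = np.kron([1 - q0, q0], p)      # one more Bernoulli(q0) bit

for t in range(3 * N):                # Eq. (7): p(t+1) = F[p(t)]
    p_next = np.zeros_like(p)
    p_next[perm] = p                  # probability flows along trajectories
    p = p_next
# p can now be marginalized over the bits belonging to A and B (note that the
# bits of A are split between s(t) and s(t-1)) to evaluate the MI and cEE.
```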
V. MEASUREMENT-INDUCED TRANSITION
A natural question thus emerges from the previous Section, as it did in the quantum case: how does the tendency of chaotic dynamics to build extensive MI and cEE compete with the disentangling effect of local measurements? We turn to this question now, and show a classical MIPT, in which the rate of local measurements determines whether the asymptotic MI and cEE fulfill volume- or area-law scaling, in close analogy to its quantum counterpart [26].
A. Protocol and classical measurements
Imagine, as in Section IV, sampling an initial condition σ_0 and evolving it with the automaton. As long as we do not look at it, the state of the automaton at time t is described by a probability distribution p(t), which starts from p(0) as in Eq. (6), and evolves as p(t + 1) = F[p(t)] as in Eq. (7). Information spreading generally builds MI between a given site j and the rest of the system, meaning p ≠ p_j p_{\j} and I_{j;\j} > 0, with p_j = Tr_{\j} p and p_{\j} = Tr_j p the marginals of site j and of the rest of the system \j, respectively. Now, imagine that the 'blind' evolution of the automaton is interleaved with random local measurements: each spacetime point (j, t) can be observed with probability p_m, Fig. 3(a). Measuring the state of site j at time t reduces our ignorance about the system and conditions the probability distribution [53]
p_σ(t⁺) = p_σ(t⁻)/p_j(m_{j,t}, t⁻)  if σ_j = m_{j,t},   and   p_σ(t⁺) = 0  if σ_j ≠ m_{j,t}.  (8)
In Eq. (8), t⁻ and t⁺ refer to the instants before and after the measurement, respectively, m_{j,t} is the measurement outcome, and p_j(m_{j,t}, t⁻) its a priori probability. Simply put, to obtain p(t⁺), we remove all states incompatible with the measurement outcome and normalize the distribution again. In this process, the marginal of site j becomes p_{σ_j}(t⁺) = δ_{σ_j, m_{j,t}}, implying that the post-measurement conditional many-body probability factorizes, p(t⁺) = p_j(t⁺) p_{\j}(t⁺), and that the MI between site j and the rest of the system vanishes, I_{j;\j} = 0. After many measurements, the suppression of probability components in Eq. (8) would eventually lead at long times to the collapse of the state of the system to a single microstate σ*, p_σ → δ_{σ,σ*}, for which no information can spread at all. More interesting is the situation in which the measurement apparatus is faulty, and acts as a source of fluctuations. Specifically, we suppose that, after a measurement has been performed, the measured bit randomly flips with an error probability 1 − q, with 0 < q < 1. We use t⁺⁺ to refer to the time right after the possible mistake has been introduced. We have that
p_{σ_j}(t⁺⁺) = q δ_{σ_j, m_{j,t}} + (1 − q) δ_{σ_j, 1−m_{j,t}}.  (9)
The many-body probability distribution is modified accordingly, p(t⁺⁺) = p_j(t⁺⁺) p_{\j}(t⁺).
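A compact sketch of one faulty local measurement, implementing Eqs. (8) and (9) on the full probability vector (with our own indexing convention that bit j of the state index encodes σ_j), is the following.

```python
import numpy as np

def measure_site(p, j, q, rng):
    """Faulty measurement of bit j on a 2**L-component distribution p."""
    idx = np.arange(p.size)
    on = ((idx >> j) & 1).astype(bool)       # states with sigma_j = 1
    m = int(rng.random() < p[on].sum())      # outcome drawn from the marginal p_j
    keep = on if m == 1 else ~on
    p_cond = np.where(keep, p, 0.0)
    p_cond = p_cond / p_cond.sum()           # conditioning, Eq. (8)
    flip = idx ^ (1 << j)                    # same state with bit j flipped
    return q * p_cond + (1 - q) * p_cond[flip], m   # apparatus error, Eq. (9)

# Usage: p, m = measure_site(p, j=2, q=0.75, rng=np.random.default_rng())
```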
The local measurement protocol is closely analogous to the quantum case. There, the j-th qubit is sampled according to its reduced density matrix ρ^{(j)} = Tr_{\j} ρ: the qubit takes value 1 with probability ρ^{(j)}_{1,1}, and value 0 with probability ρ^{(j)}_{0,0}. The post-measurement fluctuations at site j are intrinsic and due to Heisenberg uncertainty. As in the classical case, the effect of a quantum measurement is that of 'factoring out' the state of j from that of the rest of the system, meaning |ψ(t⁺)⟩ = |ψ_j(t⁺)⟩ ⊗ |ψ_{\j}(t⁺)⟩ or, equivalently, ρ(t⁺) = ρ_j(t⁺) ⊗ ρ_{\j}(t⁺), all of which is conditional on the measurement outcome.
The resulting evolution of the probability distribution can be determined in a computer simulation according to the procedure schematized in Fig. 3(b). To measure the probability distribution p in an experiment, a procedure as in Fig. 3(c) is instead required, which unfolds as follows. An ensemble of initial states σ_0 is sampled from the distribution p(0), and evolved under the automaton rules. For the first of these runs, measuring a site i at time t means taking note of the state s_i(t) of the bit, and saving it as m_{i,t}. The measurement outcome m_{i,t} is implicitly distributed according to the marginal p_i. The faulty measurement apparatus then introduces fluctuations as in Eq. (9), by leaving the bit s_i(t) unchanged or inverted with probabilities q and 1 − q, respectively. For all the subsequent runs, measurements always happen in the same spacetime locations, and are not measurements but postselections: if s_{i,t} does not match the m_{i,t} measured in the first run, the trajectory is discarded, whereas otherwise it is kept. Remarkably, this procedure is in close analogy to the one that should be followed in a quantum experiment, as we will argue in Section VII. Repeating this procedure exponentially many times, the probability p_σ(t) is estimated as the fraction of trajectories found in σ at time t and, in the limit of infinitely many runs, matches the probability obtained from the procedure in Fig. 3(b) discussed earlier.
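The experimental route of Fig. 3(c) can be mimicked numerically along the following lines; for brevity, this sketch of ours assumes a first-order update rule and takes the measurement record of the first run as given.

```python
import numpy as np

def postselected_histogram(step, N, T, meas, record, q0, q, runs, rng):
    """Estimate p(T) as a histogram over postselected trajectories.
    meas: set of measured spacetime points (j, t);
    record: outcomes m_{j,t} observed in the first run (taken as given here)."""
    counts = {}
    for _ in range(runs):
        s = (rng.random(N) < q0).astype(int)
        alive = True
        for t in range(T):
            for j in range(N):
                if (j, t) in meas:
                    if s[j] != record[(j, t)]:   # postselection on the record
                        alive = False
                        break
                    if rng.random() > q:         # faulty apparatus flips the bit
                        s[j] ^= 1
            if not alive:
                break
            s = step(s)                          # automaton update
        if alive:
            counts[tuple(s)] = counts.get(tuple(s), 0) + 1
    norm = sum(counts.values())                  # guard norm > 0 in a real run
    return {k: v / norm for k, v in counts.items()}
```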
B. Numerical results
The above protocol is implemented in Fig. 4. In Fig. 4(a) we plot the dynamics of MI and cEE for various measurement rates p_m. After an initial transient, whose duration is larger for smaller p_m and which is partially cut by the chosen axes limits, the MI and the cEE reach a stationary value, up to finite-size temporal fluctuations. Asymptotic values for the MI and cEE
are obtained as time averages and plotted in Fig. 4(b) versus the measurement rate p_m, and for various system sizes N. Strikingly, for small p_m, the asymptotic values of MI and cEE scale with the system size N, whereas they do not for large p_m. To make the last statement quantitative, we use a linear fit to extract the scaling coefficients α_MI and α_EE, with I_{A;B}(∞) ≈ α_MI N and S_e(∞) ≈ α_EE N, and plot them versus p_m in Fig. 4(c). Indeed, these exhibit a transition from a finite value to zero at p_m ≈ 0.6, thus showing the first classical MIPT between area- and volume-law cEE and MI. The cEE closely follows the qualitative behaviour of the quantum EE, see e.g. Fig. 13 in Ref. [26]. In our simulations we take q = 0.75 and q_0 = 0.8, but the key qualitative features are not contingent on this choice. Generally, we do not expect the asymptotic MI and cEE (thus, the MIPT) to depend on the initial distribution.
In this Section, we considered the most natural classical measurement procedure, that conditions the many-body probability distribution depending on the measurement outcome. We stress that similar results can be obtained for other local protocols that have the ability to decouple local fluctuations, that is, to suppress the mutual information between a local degree of freedom and the rest of the system, which can be achieved in many ways in both the classical and quantum settings. For instance, one could consider simple resettings, in which the measured bit is set to 1 with probability q and to 0 with probability 1 − q. Away from the probabilistic framework adopted here, we shall also note that the idea of MIPT can be generalized to even simpler measurement protocols, e.g., defined as locally projecting two trajectories on one another [66].
VI. INFORMATION SPREADING IN MANY-BODY HAMILTONIAN DYNAMICS
Thanks to their discrete nature, cellular automata have proven particularly convenient for numerical implementation. But let us now go back to the case of Hamiltonian dynamics, which we have already considered in Section III as a motivation for keeping track of the full many-body probability. Going beyond the result for area-law MI in a thermal ensemble [36], in this Section we find a simple expression for the asymptotic MI in Hamiltonian dynamics. We find it to be generally volume-law, reducing to area-law only at an infinite effective temperature, unlike the cEE. Introducing a careful phase-space discretization procedure we then numerically investigate information spreading in an interacting classical spin chain. We recover the salient features of information spreading in quantum quenches, and highlight the key differences between MI and cEE.
A. Asymptotic mutual information in Hamiltonian dynamics
We start by providing a heuristic expression for the asymptotic long-time MI in many-body Hamiltonian dynamics. First of all, Hamiltonian dynamics involves continuous variables, and thus a continuous probability density p over the phase space. The first point to clarify is therefore how to make sense of the definitions in Section II, which are for discrete distributions instead. A discretization of the phase space can be performed by dividing it into little volumes ∆. In the continuous limit ∆ → 0, it is straightforward to show that S ≈ 𝒮 − log ∆, with 𝒮 = −∫dx p(x) log p(x) the differential entropy, for which we use a calligraphic notation and which involves integration over the whole many-body phase space [53]. Contrary to what one might naively expect, the differential entropy 𝒮 is therefore not the continuous limit of the discrete entropy S, which diverges as −log ∆ instead. Fortunately, this divergence does not affect the MI, which does have a well-defined continuous limit [53]. Indeed, in the expression of the MI in terms of the entropies, Eq. (1), the diverging terms of the entropies cancel, meaning that one can equivalently write
I_{A;B} = S_A + S_B − S or I_{A;B} = 𝒮_A + 𝒮_B − 𝒮, in the limit ∆ → 0.
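This relation between the discrete and the differential entropy is easily checked numerically; the following sketch does so for a single Gaussian variable, a one-body stand-in for the many-body phase space chosen purely for illustration.

```python
import numpy as np

# Check S ≈ 𝒮 - log Δ for a Gaussian, whose differential entropy is 0.5*log(2πeσ²)
sigma, delta = 1.0, 1e-3
x = np.arange(-10, 10, delta)
p = np.exp(-x**2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
q = p * delta                                        # discretized cell probabilities
S_disc = -np.sum(q[q > 0] * np.log(q[q > 0]))
S_diff = 0.5 * np.log(2 * np.pi * np.e * sigma**2)
print(S_disc, S_diff - np.log(delta))                # nearly equal
```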
This remark being made, we can now find an expression for the long-time asymptotic value of the MI in Hamiltonian dynamics. Due to Liouville's theorem, the integral of any function of p over the whole phase space is constant, (d/dt) ∫dx f(p(x)) = 0. In particular, the differential entropy is conserved, 𝒮(t) = 𝒮(0). While, strictly speaking, the total probability distribution does not thermalise due to chaos and incompressibility, see Section III, we expect the marginals to do so [67]. As a result, we can assume at sufficiently long times t that 𝒮_A(t) + 𝒮_B(t) ≈ 𝒮_β + s_al, where 𝒮_β denotes the entropy of a thermal ensemble, whereas s_al is an area-law correction. The inverse temperature β is found from the usual consistency condition ⟨H⟩_0 = ⟨H⟩_β, that is, ∫dx p(t = 0, x) H(x) = ∫dx z^{−1} e^{−βH(x)} H(x), with z = ∫dx e^{−βH(x)} the partition function. From Eq. (1) we thus obtain that the MI at sufficiently long times reaches the asymptotic value
I_{A;B}(∞) ≈ 𝒮_β − 𝒮(0) + s_al.  (10)
Eq. (10) states that the asymptotic MI between two halves A and B of a many-body Hamiltonian system is equal to the difference between a thermal entropy at temperature β^{−1} set by the initial state p(0) and the initial entropy, up to a boundary correction s_al. Note that the entropies in Eq. (10) can equivalently be meant as differential entropies or as discrete entropies in the continuous limit, since the diverging terms cancel anyway, as discussed above.
An important observation is that, by definition, the thermal distribution is the one that maximises the entropy at a given temperature, so that 𝒮_β ≥ 𝒮(0), which is consistent with the positivity of the MI and in which the equality holds only for initial states with infinite temperature, for which β = 0 and ⟨H⟩_0 = ∫dx H(x) / ∫dx, or for thermal initial distributions. Here, we use the term 'infinite temperature state' in reference not to the infinite temperature thermal distribution, but to any probability distribution whose expected energy coincides with the one that the infinite temperature thermal distribution would predict. From Eq. (10) we can thus conclude that initial non-thermal distributions p(0) corresponding to a finite effective temperature 0 < β < ∞ generally lead to a volume-law asymptotic MI, I_{A;B}(∞) ≈ 𝒮_β − 𝒮(0) = O(N), whereas initial distributions p(0) that are either thermal or at an effective infinite temperature β = 0 result in an area-law asymptotic MI, I_{A;B}(∞) ≈ s_al ≪ N. Further, to estimate the latter, we can compute the MI associated to a random probability distribution, in the same spirit as the calculation of the EE for a random wavefunction [37]. Let us assume that the components of the probability p_σ are i.i.d. random numbers with average ⟨p_σ⟩ = [∑_σ 1]^{−1} [68]. For large N and by virtue of the central limit theorem, the total entropy reads S ≈ −⟨p_σ log p_σ⟩/⟨p_σ⟩. The marginals become instead uniform, p_{σ_A} → ⟨p_{σ_A}⟩ and p_{σ_B} → ⟨p_{σ_B}⟩, for which the marginal entropies are maximal, S_A + S_B ≈ −log⟨p_σ⟩. Plugging these results into Eq. (1) quickly leads to the MI at infinite time and effective temperature

I^∞_{A;B} ≈ ⟨ (p_σ/⟨p_σ⟩) log(p_σ/⟨p_σ⟩) ⟩.  (11)
Eq. (10), Eq. (11), and the prediction of volume- and area-law scaling of the asymptotic MI in many-body Hamiltonian systems at finite and infinite effective temperature initial conditions, respectively, are among the major findings of this work, which we now verify numerically.
B. Case study: Heisenberg chain in a magnetic field
We numerically exemplify the above ideas for a paradigmatic model in the context of chaos, information spreading, and transport: the Heisenberg chain in a magnetic field. Consider N classical spins S_j = (S^x_j, S^y_j, S^z_j) with unit modulus |S_j|² = 1 and j = 1, 2, . . . , N, and with Hamiltonian
H = ∑_{j=1}^{N} ( S_j · S_{j+1} + h_j S^z_j ),  (12)
containing an isotropic nearest-neighbor interaction and a disordered magnetic field along z, the coefficients {h_j} being independent random numbers drawn uniformly in [−W, W]. The system undergoes Hamiltonian dynamics, Ṡ^α_i = {S^α_i, H}, where {·, ·} denotes Poisson brackets and {S^α_i, S^β_j} = δ_{i,j} ∑_γ ε_{α,β,γ} S^γ_i, with δ_{i,j} the Kronecker delta, ε_{α,β,γ} the Levi-Civita anti-symmetric symbol, and α, β, and γ in {x, y, z}. Periodic boundary conditions are assumed, and results can be averaged over R independent disorder realizations, as relevant. Again, the bipartition (A, B) is chosen to correspond to the left and right halves of the chain (N is assumed even with no loss of generality).
The phase space of the system is parametrized by 2N angles, two per spin, and is continuous as in any Hamiltonian system. As such, it requires discretization, which we perform in the minimal nontrivial way, that is, by splitting each spin's sphere into M = 2 hemispheres as illustrated in Fig. 5(a). Two points that can be taken as representative of these hemispheres are the poles, which we may call 'single-body reference points' and which we tag with a discrete variable σ_j taking M values, say σ_j = ±1 for the ±x hemispheres, respectively. The many-body phase space is correspondingly divided into M^N 'pockets', each of which is represented by a many-body reference point (in which each spin points either towards +x or towards −x), and tagged by a bit-string σ = (σ_1, σ_2, . . . , σ_N), see Fig. 5(b). Explicitly, the σ-th pocket contains all the spin configurations such that sign(S^x_1, S^x_2, . . . , S^x_N) = σ. Our goal is to find the dynamics of the discrete probability distribution p_σ(t). The idea is the following. (i) First, we consider the M^N many-body reference points as possible initial conditions. Specifically, we consider each spin to initially point either along +x or along −x, with probability q_0 and 1 − q_0, respectively, so that an initial configuration tagged σ_0 has probability
p_{σ_0} = ∏_{j=1}^{N} [σ_{0,j} q_0 + (1 − σ_{0,j})(1 − q_0)].  (13)
Again, the initial factorizability of the probability distribution makes our stochastic initial condition the one-to-one correspondent of quantum product states, and implies I_{A;B}(0) = S_e(0) = 0. (ii) Second, we evolve the M^N trajectories by integrating the ordinary differential equations (ODE) of Hamiltonian dynamics with an ODE solver, which necessarily makes them depart from the reference points they started from and explore the continuous phase space. (iii) Third, each trajectory is projected onto the discrete space to define σ_t = sign[S^x_1(t), S^x_2(t), . . . , S^x_N(t)] at time t. (iv) Fourth, the probability p_σ(t) is obtained as
p_σ(t) = ∑_{σ_0 : σ_t = σ} p_{σ_0}(0),  (14)
that is, as the sum of the initial probabilities p_{σ_0}(0) of those states σ_0 that, at time t, are found in the σ-th pocket of the phase space. (v) Fifth and finally, the thus obtained vector of discretized probabilities p is used to compute quantities of interest, including the MI and cEE. For the trivial points q_0 = 0, 1 the spins are all initially perfectly polarized along either −x or +x, respectively, and remain so at all times, with no dynamics happening, no spreading of correlations, p(t) = p(0), and I_{A;B} = S_e = 0. On the other hand, the value q_0 = 0.5 corresponds to an infinite temperature state, ⟨H⟩_0 = 0 = ⟨H⟩_{β=0}.
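The five steps above can be sketched in a few lines of Python; the following illustrative implementation relies on scipy's ODE solver, and the sign convention of the precession equation (immaterial for the statistics we are after) is our own assumption.

```python
import numpy as np
from scipy.integrate import solve_ivp

N, W, q0, T = 4, 2.0, 0.7, 10.0
rng = np.random.default_rng(0)
h = rng.uniform(-W, W, N)                        # disordered field along z

def eom(t, y):
    """Precession about the local field B_i = S_{i-1} + S_{i+1} + h_i z (periodic)."""
    S = y.reshape(N, 3)
    B = np.roll(S, 1, axis=0) + np.roll(S, -1, axis=0)
    B[:, 2] += h
    return np.cross(B, S).ravel()

# (i) reference points and their weights, Eq. (13)
sigmas = np.array([[(k >> j) & 1 for j in range(N)] for k in range(2**N)])
weights = np.prod(np.where(sigmas == 1, q0, 1 - q0), axis=1)

# (ii)-(iv) evolve, project onto hemispheres, and accumulate p as in Eq. (14)
p = np.zeros(2**N)
for s0, w in zip(sigmas, weights):
    y0 = np.zeros((N, 3))
    y0[:, 0] = 2 * s0 - 1                        # spins at the +/- x poles
    sol = solve_ivp(eom, (0, T), y0.ravel(), rtol=1e-8)
    sx = sol.y[::3, -1] > 0                      # sigma_t = sign(S^x) at time T
    key = int(sum(int(b) << j for j, b in enumerate(sx)))
    p[key] += w
# (v) p can now be fed to the MI/cEE routines sketched in Sec. II
```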
First and foremost, we aim at verifying that the discretization protocol that we proposed leads to the desired key features of many-body dynamics, and in particular to the contrast between the equilibration of few-point observables and the non-equilibration of the probability itself, à la Liouville. In Fig. 5(d) we show, for increasing system sizes N, the dynamics of the magnetization m(t) = (1/N) ∑_σ p_σ(t) ∑_{j=1}^{N} σ_j, of a correlator c(t) = (1/N) ∑_σ p_σ(t) ∑_{j=1}^{N} σ_j σ_{j+1}, of a marginal probability p_{σ_1=1}(t), and of the whole many-body probability p_σ(t) for some randomly chosen discretized states σ. Few-point observables and marginal probabilities quickly equilibrate, with asymptotic temporal fluctuations being a finite-size effect and decreasing with system size N (even at a fixed and minimal discretization resolution M = 2), whereas the many-body probability itself maintains (up to a factor 2^N, due to normalization) chaotic fluctuations in time, without clear dependence on the state index σ.
These key features being verified, we are now in the position to finally investigate information spreading in many-body Hamiltonian dynamics through the lens of MI and cEE, whose time traces we plot in Fig. 6 for various system sizes N and initial single-spin probabilities q_0. Initially vanishing, both the MI and cEE grow linearly in time, signalling the ballistic spreading of correlations, before saturating to an asymptotic value.
Let us first focus on the MI. At infinite temperature (q_0 = 0.5), we find that the asymptotic value of the MI becomes insensitive to N for N ≳ 10, see Fig. 6(a). The infinite temperature asymptotic value of the MI is understood from Eq. (11). Assuming that any of the 2^N trajectories can be found in any of the 2^N pockets of the phase space with equal likelihood, we have that p_σ = 2^{−N} x_σ, with x_σ a random variable with binomial distribution of parameters 2^N and 2^{−N}. This converges, for N → ∞, to a Poisson distribution of parameter λ = 1, i.e., x_σ = k with probability 1/(e k!). From Eq. (11), we thus get that the asymptotic MI at infinite effective temperature is
I^∞_{A;B} ≈ ∑_{k=1}^{∞} log(k + 1)/(e k!) ≈ 0.5734,  (15)
which perfectly matches the numerics in Fig. 6(a). By contrast, at finite temperature (q_0 = 0.7) the asymptotic MI is proportional to N, i.e., extensive, see Fig. 6(b). Our numerics thus confirms the analytical prediction in Eq. (10): the asymptotic MI fulfills area- and volume-law scaling at infinite and finite temperature, respectively. As for the cEE, we instead find that both at finite and infinite temperature the asymptotic value fulfills volume-law scaling, see Fig. 6(c,d). Indeed, extensivity of the cEE is understood not dissimilarly from the quantum case: a wavefunction with randomized components (in the computational basis) leads to an extensive cEE [37]. In the classical case, the randomness of the classical wavefunction in Eq. (5) ultimately emerges as a result of chaos and incompressibility, which render the probability distribution highly structured, unlike a thermal one.
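The value in Eq. (15) is easily reproduced, both by summing the series directly and by Monte Carlo sampling Eq. (11) with Poisson-distributed components; a minimal sketch:

```python
import numpy as np
from math import e, factorial, log

# Direct series of Eq. (15)
series = sum(log(k + 1) / (e * factorial(k)) for k in range(1, 40))

# Monte Carlo check of Eq. (11) with p_sigma proportional to x ~ Poisson(1)
rng = np.random.default_rng(1)
x = rng.poisson(1.0, 10**6).astype(float)
mc = np.mean(np.where(x > 0, x * np.log(np.where(x > 0, x, 1.0)), 0.0))
print(series, mc)   # both approximately 0.5734
```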
The contrasting scaling with system size of the asymptotic MI and cEE at an infinite effective temperature is a major difference that we find between the two information measures. In fact, in the analogous quantum case, pure states at an infinite effective temperature do result in volume-law scaling of the asymptotic EE [37], which highlights the necessity of introducing the cEE as the close classical counterpart of the EE, able to capture features that the classical MI misses.
We perform a more in-depth scaling analysis in Fig. 7. In Fig. 7(a) we plot I_{A;B}(t = ∞) and S_e(t = ∞) versus the single-spin initial probability q_0. As expected, in the trivial limits q_0 = 0, 1, corresponding to initially fully polarized spins and no dynamics, the MI and cEE remain zero at all times. For any other value of q_0, nontrivial dynamics occurs, information spreads, and the asymptotic MI and cEE take a finite value. However, while volume-law scaling is observed for the cEE at any 0 < q_0 < 1, the MI is extensive only for q_0 ≠ 0, 0.5, 1, as it fulfills area-law scaling at infinite temperature, q_0 = 0.5. Fig. 7(b) helps to better appreciate the specific analytic form of the scaling, which is linear in system size N. With a linear fit we can extract the scaling coefficient α, which we plot in Fig. 7(c) versus the single-spin initial probability q_0, providing a comprehensive confirmation of our assessments concerning the scaling of MI and cEE.
VII. MANY-BODY INFORMATION IN PRACTICE - BRIDGING THE PROCEDURAL GAP
As we have shown, many of the key features of many-body information spreading are remarkably similar in the quantum and classical settings. The two, however, are fundamentally different because the many-body wavefunction |ψ⟩ can be encoded in a physical system, whereas the classical many-body probability distribution cannot [69]. We argue here that this difference is nonetheless of limited practical relevance when measuring or computing the EE, cEE, or MI.
To obtain the quantum EE, two routes can be taken. (i) The first is to evolve the wavefunction on a classical computer, and use it to compute the EE. This requires exponentially large computational resources (memory and time). (ii) The second is to evolve the wavefunction on a physical quantum system, e.g., a quantum computer, which does not require exponentially large resources. However, computing the EE requires full tomography of the system and thus exponentially many runs [1]. Furthermore, once reconstructed, the wavefunction should again be stored on a classical computer, where with exponentially large resources one can then compute the EE.
The classical scenario is completely analogous, to the point that we can almost duplicate the previous paragraph: In the classical case, two routes can be taken to compute the dynamics of the cEE. (i) The first is to evolve the probability distribution on a classical computer through Eq. (7), and use it to compute the cEE. This route requires exponentially large computational resources (memory and time). (ii) The second is to run the actual classical dynamics itself on a physical implementation of a cellular automaton. Extracting the many-body probability distribution requires configuration statistics over exponentially many runs. Furthermore, once reconstructed, the probability distribution should again be stored on a computer, where with exponentially large resources one can then compute the cEE.
This remarkably close procedural classical-quantum analogy continues when introducing measurements as in Section V, which again requires one of the two routes (i) and (ii) above. In particular, to obtain the EE in an experiment, the following would be necessary. A first run of the experiment would return a list of measurement outcomes {m j (t)} and a physical state encoding the many-body wavefunction. To access the latter and compute the EE, however, one would need to run the experiments exponentially many times, making sure that the final to-be-measured wavefunction is always the same. That is, in the following runs one should not only repeat the measurements at the same spacetime locations, but also make sure that their outcome matches that of the first run, {m j (t)}, which requires post-selection. Indeed, this procedure perfectly matches the classical one described in Section V and Fig. 3(c).
VIII. DISCUSSION AND CONCLUSION
In this work, we have shown that many of the celebrated features of quantum many-body information spreading can also appear in a classical setting, provided that one explicitly accounts for the whole exponentially large probability distribution, without making any assumption on its equilibration. Within this framework, we have shown that key phenomenology of quantum EE dynamics, namely linear growth until saturation to an extensive value, also occurs in both the classical MI and the 'classical EE' (cEE) defined above. We verified this behaviour for both cellular automata and Hamiltonian systems. For the former, we highlighted that time-reversibility plays a key role in information spreading, and found the first instance of a classical MIPT, in direct correspondence with its quantum counterpart. As for Hamiltonian dynamics, we instead provided a simple expression for the asymptotic MI, which yields volume- and area-law scaling for states at an effective finite and infinite temperature, respectively. We verified this prediction numerically in the case of the Heisenberg chain, and showed it is in contrast to the cEE, whose asymptotic value is extensive even at infinite effective temperature, like in the quantum case. Finally, we argued that the study of classical and quantum information spreading requires in practice analogous procedures, involving exponentially large resources in both cases.
Since the EE is defined for quantum systems, one may naively assume that effects that are described or measured through it are also purely quantum. Of course, the interplay of carefully engineered interference effects and many-body entanglement can lead to striking quantum effects, for instance enabling quantum computing [1]. By contrast, the entanglement generated in the chaotic dynamics relevant in the context of thermalization is generally 'uncontrolled'. Rather than allowing one to efficiently perform a computational task, it requires exponentially large resources or exponentially many runs for its characterization, effectively resulting in physics that is much more classical than one could have expected. Indeed, here we have shown that a whole range of dynamical behaviours of the EE also emerges in the context of classical correlation spreading.
The idea of sampling and evolving many classical initial conditions adopted here is reminiscent of phase space methods like the Truncated Wigner Approximation (TWA) [70][71][72][73][74]. The mission of the latter is to efficiently simulate the dynamics of quantum many-body systems by evolving an ensemble of classical trajectories, for which the sampling of the initial conditions is constructed so as to most faithfully mimic quantum fluctuations. A classical-quantum link for EE has also been established for quantum systems with a well-defined classical limit, for which the EE dynamics can be understood in relation to the presence (or lack) of underlying classical chaos [75].
The philosophy of our work contrasts with that mission: while we were inspired by the machinery of quantum mechanics to frame the classical problem on par with the quantum one, our goal was not to construct a classical model that could quantitatively emulate quantum physics, nor to link the classical and quantum descriptions in the semiclassical limit. Instead, we aimed at showing that the main qualitative features of many-body information spreading are analogous in the classical and quantum settings. If the classical-quantum quantitative agreement of few-point observables in the TWA is by construction, in the sense that it requires devising a classical protocol that emulates the quantum one, the classical-quantum qualitative agreement of multi-point information measures in this work is intrinsic, in the sense that it just happens, from the most natural setting of initially independent local fluctuations that become nonlocally correlated through the dynamics. In light of this philosophy, our classical treatment did not aim at being a simple and efficient approximation of quantum mechanics, but conversely at showing that classical and quantum information spreading can be equally complex and rich.
Looking forward, our work opens a broad spectrum of research questions. Indeed, within the framework and with the tools that we have outlined here, the study of classical information spreading could be as fruitful as it has been in the quantum case. For instance, a very interesting question regards the role of strong disorder in classical information spreading. In the quantum case, disorder can lead to MBL, which is characterized by a peculiar slow growth ∼ log t of the EE, until reaching asymptotic extensivity [12]. In the classical case MBL is not possible, but transport is nonetheless strongly suppressed [76], and a slow growth of the cEE is expected. The precise characterization of the functional form of such a growth is certainly worth studying. Along a related line, further research should investigate what role the counterparts of integrability [77,78] and conserved quantities [51] play in classical information spreading. The cEE could then shed new light on classical prethermalization [79][80][81][82] and prethermal discrete time crystals [83][84][85]. Likewise, it is now natural to expect that the research thread on quantum MIPT [26][27][28][29][30][31][32][33][34][35] would carry over to the classical realm, in which the study of different kinds of automata, initial conditions, and measurement protocols would be worthwhile.
The many-body probabilistic framework that we established could then be used to firmly address the question of what the strict classical counterparts of other information-related quantities from current quantum research are. For instance, it could be used to define a classical out-of-time-order correlator (OTOC) going beyond the decorrelator of Refs. [45,49] (based on just two copies of the system, rather than on a probability distribution over exponentially many of them).
Finally, on a higher and open-ended level, our work opens the way to quantum-inspired advances in physics, information theory, and statistics. Indeed, while the quantum EE can only be applied to pure quantum systems, the cEE that we introduced in Eq. (4) has potentially a very broad range of applicability. Beyond the information spreading considered here, it is suited for any probability distribution. Indeed, the cEE can capture features that the MI misses (e.g., here, extensivity at infinite temperature), and could thus prove a powerful quantum-inspired tool to characterize new aspects of classical correlations.
FIG. 1. Emergence of highly structured states in classical and quantum many-body dynamics. (a) Schematic representation of the evolution of the probability mass in the classical phase space under Hamiltonian dynamics. Due to chaos and incompressibility, the probability mass becomes more and more structured in phase space, with fine features of size ∼ e^{−λt}, with λ the Lyapunov exponent, effectively acquiring a random character. (b) Schematic representation of the evolution of the wavefunction, represented with respect to a computational basis {|n⟩} sorted with respect to ⟨n|H|n⟩. The wavefunction becomes more and more structured due to the dephasing effects from an effectively random spectrum. (c) The state of the system, whether in the form of a classical probability distribution or quantum wavefunction, is at long times very highly structured in generic many-body dynamics, as pictorially represented on the left. By contrast, a thermal distribution ∝ e^{−βH} washes out much of this fine structure (right). If the system has thermalized, few-body observables such as order parameters or correlation functions are correctly predicted by the thermal distribution ∝ e^{−βH} (top), but multi-point information measures such as the MI are not (bottom). Specifically, thermal states have area-law MI whereas the exact state at late times has volume law. It is thus necessary to keep track of the exact state of the system to correctly account for information spreading, which is clear in the quantum setting, and which we here consider in the classical one.
FIG. 2. Many-body information spreading in classical cellular automata. (a) Rule 30 cellular automaton (top), and its (second-order) reversible version Rule 30R (bottom). (b) Instances of the space-time profiles of Rule 30 (left) and Rule 30R (right). (c) Graphical representation of the subsystems A and B that are used to compute MI and cEE. The state of the system at time t is unambiguously defined by the state of the bits at time t and at time t − 1. (d) Dynamics of MI and cEE for various system sizes N. In Rule 30 (left), the dynamics of MI and cEE does not exhibit typical features of classical and quantum non-dissipative dynamics. By contrast, Rule 30R captures features such as the initial linear growth of entanglement ∼ t and its eventual saturation to an extensive ∼ N value (reached after a time t ∼ N). Indeed, this stresses that key ingredients behind this phenomenology are locality, chaos, and time-reversibility, the latter of which is featured by Rule 30R but not by Rule 30. Here, we used q_0 = 0.7.
FIG. 3. Classical automaton with measurements - A schematic protocol illustration. (a) Each spacetime point can be measured (green boxes) with probability p_m. (b) The corresponding evolution of the many-body probability dynamics p(t) can be obtained in a computer simulation. The probability is initialized as p(0), and evolved under the automaton dynamics according to the map p(t + 1) = F[p(t)] in Eq. (7). Measuring site i at time t corresponds to (i) computing the marginal distribution p_i of the site, (ii) drawing a measurement outcome according to p_i, (iii) conditioning the probability accordingly, (iv) accounting for single-site fluctuations from a faulty measurement apparatus that flips the measured spin with probability q. The last point is in practice achieved by transforming the post-measurement marginal from p_i(m_{i,t}) = 1 − p_i(1 − m_{i,t}) = 1 to p_i(m_{i,t}) = q < 1, computing the marginal of the rest of the system p_{\i}, and reconstructing the total distribution as p = p_i p_{\i}. (c) Alternatively, the probability distribution could be obtained in a (numerical or actual) experiment by running the automaton on exponentially many initial configurations σ_0 sampled from p(0). The probability distribution p(t) at later times is obtained as a histogram of the trajectories over the exponentially many possible microstates σ. In the first run, measuring site i at time t consists of observing the state s_{i,t} of the measured bit and storing it in the measurement variable m_{i,t}. For all subsequent runs, the measurements are rather postselections: if s_{i,t} does not match the m_{i,t} measured in the first run, the trajectory is discarded. In any run, the error possibly introduced by the faulty measurement apparatus is then accounted for by flipping the state of the measured bit with probability 1 − q. Remarkably, as argued in Section VII, both the procedures in (b) and (c) are essentially identical to those needed in the quantum setting.
FIG. 4. Classical measurement-induced phase transition. (a) MI and cEE dynamics for various measurement rates p_m and system sizes N. After a very quick transient (partially cut), the MI and cEE saturate to a stationary value, up to temporal fluctuations due to finite-size effects. Crucially, the stationary value increases with N if p_m is small enough, and is insensitive to N otherwise. (b) The long-time asymptotic values I_{A;B}(t = ∞) and S_e(t = ∞) are obtained by averaging the results in (a) over the second half of the shown time interval, and plotted versus p_m for various system sizes N. The MI and cEE decay with p_m, reaching 0 at p_m = 1. (c) The scaling coefficients α, such that I_{A;B}(t = ∞) ≈ α_MI N and S_e(∞) ≈ α_EE N, are obtained through linear fits and highlight an entangling-disentangling MIPT (the fitting excludes N = 4 and 6 to reduce finite-size effects). For p_m ≲ 0.5 the scaling coefficients are finite, indicating extensivity of the asymptotic MI and cEE and thus unbounded growth in the thermodynamic limit. By contrast, for p_m ≳ 0.5 the long-time values of MI and cEE become insensitive to system size N. Here, we used q_0 = 0.8, q = 0.75, and R = 100.
FIG. 5. Phase-space discretization for Hamiltonian dynamics. (a) Discretization of the single-body phase space of a classical spin. The sphere is divided into two hemispheres, each identified by a reference point (at the pole, in red). (b) Discretization of the many-body phase space, which is divided into M^N pockets, each of which is identified by a many-body reference point (red dots). A schematic trajectory starting from a reference point (larger red dot) is shown in blue. (c) Protocol for obtaining the dynamics of the discrete probability, in the illustrative case of N = 4 spins. As initial conditions, we consider the 2^N possible spin states in which each spin is either aligned or antialigned along x (corresponding to the reference points in (b)). Initial states are labelled with a bit-string σ and their associated probability p_{σ_0} factorizes. The trajectories are evolved under Hamiltonian dynamics, and their coordinates in the reduced phase space, obtained as σ_t = sign(S^x_1, . . . , S^x_N), are used to compute the probability distribution p_σ(t). (d) The ensemble averages of few-point observables, such as magnetization m, correlation c, and one-site marginal distribution p_{σ_1}(t), equilibrate in time to the thermal value, up to temporal fluctuations that decrease with the system size N. By contrast, the many-body probability p_σ(t) itself does not equilibrate, independent of N. Here, we considered the spin model in Eq. (12), with parameters J = 1, W = 2, R = 1, and q_0 = 0.7.
FIG. 6. Information dynamics in a Hamiltonian many-body system. The MI (top) and the cEE (bottom) are plotted versus time for various system sizes N. Two representative single-spin initial probabilities are considered: q_0 = 0.5 (left) and q_0 = 0.7 (right), corresponding to infinite and finite effective temperatures, respectively. In all the considered cases, the quantities of interest initially grow linearly in time, signalling the ballistic spreading of correlations. At sufficiently long times, saturation to an asymptotic value sets in. (a) At infinite temperature, the asymptotic MI becomes insensitive to N for N ≳ 10, reaching the value predicted in Eq. (15) and fulfilling area-law scaling, while (b) at finite temperature it fulfills volume-law scaling instead. (c,d) On the other hand, the asymptotic cEE is extensive both at finite and infinite temperature. Here, we considered the Heisenberg chain in Eq. (12) for J = 1, W = 3, and R = 50.
FIG. 7. Long-time scaling analysis in many-body Hamiltonian dynamics. (a) The asymptotic MI I_{A;B}(t = ∞) and cEE S_e(t = ∞) are plotted versus the single-spin initial probability q_0 for various system sizes N. Solid lines and shaded bands represent mean and uncertainty (over the late time range 20 < t < 30 and disorder realizations). At q_0 = 0, 1 the dynamics freezes and no information spreads, leaving I_{A;B} = S_e = 0. At infinite temperature, q_0 = 0.5, the asymptotic MI and cEE present local minima, but with a key difference: the former becomes independent of N for N ≳ 10, whereas the latter grows with N. For other values of q_0, both the asymptotic MI and cEE scale with N. (b) The scaling is better characterized for representative values of q_0, and found to be linear at large N, ∼ αN. The infinite-temperature value of the MI predicted in Eq. (15) is reported for comparison. (c) The ultimate appreciation of the scaling properties is achieved by plotting the scaling coefficient α versus the initial probability q_0, as obtained from a linear fit (of the three largest values of N only, to reduce small-size effects). Finite and vanishing values of α denote volume- and area-law scaling, respectively. The latter occurs at the trivial points q_0 = 0, 1 and, only for the MI, at infinite temperature q_0 = 0.5. Here, we used J = 1, W = 3, and R = 50.
TABLE I. Bridging the classical-quantum gap in many-body information spreading. We summarize the key differences and, mostly, close analogies between classical and quantum information spreading. The setting we refer to is one in which initially local classical or quantum fluctuations become nonlocal under some time-reversible and local dynamics. Disclaimer: the lines marked with † are valid under the general circumstances specified in the main text. The last column points to suitable Sections, Figures, or References in which each comparison can be best appreciated.

|                                                    | Classical                     | Quantum                        | Reference    |
| Space                                              | phase space                   | Hilbert space                  |              |
| N-particle space size                              | ~ e^{O(N)}                    | ~ e^{O(N)}                     |              |
| State of the system                                | probability distribution p    | wavefunction |ψ⟩               |              |
| Bipartite (c)EE S_e                                | Eq. (4)                       | Eq. (2)                        | Sec. II      |
| S_e = 0 if...                                      | p_{A,B} = p_A p_B             | |ψ⟩ = |ψ_A⟩|ψ_B⟩               | Sec. II      |
| Few-point observables become†                      | thermal                       | thermal                        | Sec. III     |
| The actual MI can be†                              | volume law                    | volume law                     | Secs. IV, VI |
| The MI of a thermal ensemble is†                   | area law                      | area law                       | Ref. [36]    |
| The probability/wavefunction becomes†              | effectively randomized        | effectively randomized         | Sec. III     |
| Origin of effective randomization†                 | chaos and incompressibility   | chaotic spectrum and dephasing | Sec. III     |
| The effective randomization underlies†             | volume-law cEE                | volume-law EE                  | Ref. [37]    |
| For N → ∞ the EE shows†                            | unbounded linear growth       | unbounded linear growth        | Figs. 2, 6   |
| Site j is measured according to...                 | the marginal probability p_j  | the reduced density matrix ρ_j | Sec. V       |
| Right after a measurement, site j is...            | 'factored out', p = p_j p_{\j} | 'factored out', ρ = ρ_j ρ_{\j} | Sec. V      |
| Information spreading vs. measurements can lead to | MIPT                          | MIPT                           | Sec. V       |
| Computing the (c)EE classically requires           | exponentially large resources | exponentially large resources  | Sec. VII     |
| Extracting the (c)EE experimentally requires       | exponentially many runs       | exponentially many runs        | Sec. VII     |
Author contributions. A. P. conceived the study, performed the calculations, and drafted the manuscript. All authors contributed through continued discussion of ideas and results and to the finalization of the writing.
[1] M. A. Nielsen and I. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, 2002).
[2] I. M. Georgescu, S. Ashhab, and F. Nori, Reviews of Modern Physics 86, 153 (2014).
[3] E. Altman, K. R. Brown, G. Carleo, L. D. Carr, E. Demler, C. Chin, B. DeMarco, S. E. Economou, M. A. Eriksson, K.-M. C. Fu, et al., PRX Quantum 2, 017003 (2021).
[4] A. Polkovnikov, K. Sengupta, A. Silva, and M. Vengalattore, Reviews of Modern Physics 83, 863 (2011).
[5] C. Gogolin and J. Eisert, Reports on Progress in Physics 79, 056001 (2016).
[6] A. Browaeys and T. Lahaye, Nature Physics 16, 132 (2020).
[7] C. Gross and I. Bloch, Science 357, 995 (2017).
[8] P. Calabrese and J. Cardy, Journal of Statistical Mechanics: Theory and Experiment 2005, P04010 (2005).
[9] G. De Chiara, S. Montangero, P. Calabrese, and R. Fazio, Journal of Statistical Mechanics: Theory and Experiment 2006, P03001 (2006).
[10] L. Amico, R. Fazio, A. Osterloh, and V. Vedral, Reviews of Modern Physics 80, 517 (2008).
[11] P. Calabrese and J. Cardy, Journal of Physics A: Mathematical and Theoretical 42, 504005 (2009).
[12] J. H. Bardarson, F. Pollmann, and J. E. Moore, Physical Review Letters 109, 017202 (2012).
[13] H. Kim and D. A. Huse, Physical Review Letters 111, 127205 (2013).
[14] W. W. Ho and D. A. Abanin, Physical Review B 95, 094302 (2017).
[15] A. Nahum, J. Ruhman, S. Vijay, and J. Haah, Physical Review X 7, 031016 (2017).
[16] A. Nahum, S. Vijay, and J. Haah, Physical Review X 8, 021014 (2018).
[17] V. Khemani, A. Vishwanath, and D. A. Huse, Physical Review X 8, 031057 (2018).
[18] C. von Keyserlingk, T. Rakovszky, F. Pollmann, and S. L. Sondhi, Physical Review X 8, 021013 (2018).
[19] S. Gopalakrishnan, D. A. Huse, V. Khemani, and R. Vasseur, Physical Review B 98, 220303 (2018).
[20] V. Alba and P. Calabrese, SciPost Physics 4, 017 (2018).
[21] D. A. Abanin, E. Altman, I. Bloch, and M. Serbyn, Reviews of Modern Physics 91, 021001 (2019).
[22] T. Zhou and A. Nahum, Physical Review B 99, 174205 (2019).
[23] B. Bertini, P. Kos, and T. Prosen, Physical Review X 9, 021033 (2019).
[24] M. Ippoliti, T. Rakovszky, and V. Khemani, Physical Review X 12, 011045 (2022).
[25] M. Serbyn, Z. Papić, and D. A. Abanin, Physical Review Letters 110, 260601 (2013).
[26] B. Skinner, J. Ruhman, and A. Nahum, Physical Review X 9, 031009 (2019).
[27] Y. Li, X. Chen, and M. P. Fisher, Physical Review B 100, 134306 (2019).
[28] Y. Bao, S. Choi, and E. Altman, Physical Review B 101, 104301 (2020).
[29] S. Choi, Y. Bao, X.-L. Qi, and E. Altman, Physical Review Letters 125, 030505 (2020).
[30] Q. Tang and W. Zhu, Physical Review Research 2, 013022 (2020).
[31] C.-M. Jian, Y.-Z. You, R. Vasseur, and A. W. Ludwig, Physical Review B 101, 104302 (2020).
[32] A. Zabalo, M. J. Gullans, J. H. Wilson, S. Gopalakrishnan, D. A. Huse, and J. Pixley, Physical Review B 101, 060301 (2020).
[33] A. Nahum, S. Roy, B. Skinner, and J. Ruhman, PRX Quantum 2, 010352 (2021).
[34] M. Ippoliti, M. J. Gullans, S. Gopalakrishnan, D. A. Huse, and V. Khemani, Physical Review X 11, 011030 (2021).
[35] M. Block, Y. Bao, S. Choi, E. Altman, and N. Y. Yao, Physical Review Letters 128, 010604 (2022).
[36] M. M. Wolf, F. Verstraete, M. B. Hastings, and J. I. Cirac, Physical Review Letters 100, 070502 (2008).
[37] D. N. Page, Physical Review Letters 71, 1291 (1993).
[38] U. Schollwöck, Annals of Physics 326, 96 (2011).
[39] A. Weiße and H. Fehske, in Computational Many-Particle Physics (Springer, 2008), pp. 529-544.
[40] P. Weinberg and M. Bukov, SciPost Physics 2, 003 (2017).
[41] J. A. Vastano and H. L. Swinney, Physical Review Letters 60, 1773 (1988).
[42] S. Lepri, A. Politi, and A. Torcini, Journal of Statistical Physics 82, 1429 (1996).
[43] G. Giacomelli, R. Hegger, A. Politi, and M. Vassalli, Physical Review Letters 85, 3616 (2000).
[44] A. Das, S. Chakrabarty, A. Dhar, A. Kundu, D. A. Huse, R. Moessner, S. S. Ray, and S. Bhattacharjee, Physical Review Letters 121, 024101 (2018).
[45] T. Bilitewski, S. Bhattacharjee, and R. Moessner, Physical Review Letters 121, 250602 (2018).
[46] V. Khemani, D. A. Huse, and A. Nahum, Physical Review B 98, 144304 (2018).
[47] M. Kumar, A. Kundu, M. Kulkarni, D. A. Huse, and A. Dhar, Physical Review E 102, 022130 (2020).
[48] T. Bilitewski, S. Bhattacharjee, and R. Moessner, Physical Review B 103, 174302 (2021).
[49] S.-W. Liu, J. Willsher, T. Bilitewski, J.-J. Li, A. Smith, K. Christensen, R. Moessner, and J. Knolle, Physical Review B 103, 094109 (2021).
[50] A. J. McRoberts, T. Bilitewski, M. Haque, and R. Moessner, Physical Review B 105, L100403 (2022).
[51] A. Deger, S. Roy, and A. Lazarides, arXiv:2202.11726 (2022).
[52] M. Kardar, Statistical Physics of Particles (Cambridge University Press, 2007).
[53] T. M. Cover and J. A. Thomas, Elements of Information Theory, Wiley Series in Telecommunications and Signal Processing (Wiley-Interscience, USA, 2006).
[54] The reduced density matrix can be expressed as ρ_A = P^{(1/2)T} P^{(1/2)}, where P^{(1/2)} is a 'squared probability matrix' with entries P^{(1/2)}_{A,B} = √p_{A,B}. Given a vector z, we have z^T ρ_A z = |P^{(1/2)} z|^2 ≥ 0, that is, ρ_A is positive semi-definite.
[55] C. Wetterich, Nuclear Physics B 917, 241 (2017).
[56] C. Wetterich, arXiv:2011.02867 (2020).
[57] Strictly speaking, the classical system is described by a continuous probability density on phase space. This will be discussed later in Section VI, but should not disturb the reading now.
[58] J. M. Deutsch, Physical Review A 43, 2046 (1991).
[59] M. Srednicki, Physical Review E 50, 888 (1994).
[60] M. Rigol, V. Dunjko, and M. Olshanii, Nature 452, 854 (2008).
[61] J. M. Deutsch, Reports on Progress in Physics 81, 082001 (2018).
[62] Y. Atas, E. Bogomolny, O. Giraud, and G. Roux, Physical Review Letters 110, 084101 (2013).
[63] S. Wolfram, A New Kind of Science (Wolfram Media Inc., Champaign, Illinois, USA, 2002).
[64] M. Aldana, S. Coppersmith, and L. P. Kadanoff, in Perspectives and Problems in Nonlinear Science, p. 23 (2003).
[65] Note that to simplify numerics one could alternatively consider, also for second-order cellular automata like Rule 30R, a shorter probability distribution defined on the 2^N single-time N-bit strings only. This would correspond to a phase-space reduction, which is sensible provided that only 2^N possible initial conditions are considered (e.g., by setting s_j(t = -1) = 0 while allowing for random s_j(t = 0)). This idea will become clearer in Section VI for Hamiltonian dynamics, where such a reduction is necessary for phase-space discretization purposes. While this is possible, we have adopted a cellular automaton here precisely so we do not have to deal with any phase-space reduction.
[66] J. Willsher, S.-W. Liu, R. Moessner, and J. Knolle, arXiv:2203.11303 (2022).
[67] The continuous marginal p_A(A) = ∫ dB p(A, B) can be written as p_A(A) = Σ_n ∫_{B_n} dB p(A, B). One can increase the number of volumes B_n to make their volume small enough, so that the energy H within each of them can be considered constant (that is, with variation small compared to the involved energy scales). The probability distribution p(A, B) fluctuates over a phase-space length scale ~ e^{-λt} that becomes arbitrarily small and eventually much smaller than that of B_n. Having arbitrarily many fluctuations within B_n, we expect the probability to self-average, meaning that ∫_{B_n} dB p(A, B) ≈ ∫_{B_n} dB p_th(A, B). Rebuilding the integral, one thus gets p_A(A) = ∫ dB p_th(A, B), that is, the marginal thermalizes.
[68] Strictly speaking, the components of the probability are not independent, because they are tied by the normalization Σ_σ p_σ = 1. However, for N → ∞, that is, for a diverging number of possible microstates, the normalization constraint becomes weaker and weaker, and the components of the probability distribution can be considered as effectively i.i.d.
[69] As discussed in Section IV, the probability distribution is implicitly evolved by the system as long as the state of the system is not observed. This is in analogy with the quantum case, in which the evolution of the wavefunction proceeds undisturbed as long as one does not measure the system. The two cases, however, differ in that the quantum wavefunction is composed of a superposition of states, which is physical rather than reflecting our ignorance of the system.
[70] A. Polkovnikov, S. Sachdev, and S. Girvin, Physical Review A 66, 053607 (2002).
[71] A. Sinatra, C. Lobo, and Y. Castin, Journal of Physics B: Atomic, Molecular and Optical Physics 35, 3599 (2002).
[72] A. Polkovnikov, Physical Review A 68, 053604 (2003).
[73] A. Polkovnikov, Annals of Physics 325, 1790 (2010).
[74] J. Schachenmayer, A. Pikovski, and A. M. Rey, Physical Review X 5, 011022 (2015).
[75] A. Lerose and S. Pappalardi, Physical Review A 102, 032404 (2020).
[76] V. Oganesyan, A. Pal, and D. A. Huse, Physical Review B 80, 115104 (2009).
[77] B. Buča, K. Klobas, and T. Prosen, Journal of Statistical Mechanics: Theory and Experiment 2021, 074001 (2021).
[78] J. W. Wilkinson, T. Prosen, and J. P. Garrahan, Physical Review E 105, 034124 (2022).
[79] A. Rajak, R. Citro, and E. G. Dalla Torre, Journal of Physics A: Mathematical and Theoretical 51, 465001 (2018).
[80] T. Mori, Physical Review B 98, 104303 (2018).
[81] A. Rajak, I. Dana, and E. G. Dalla Torre, Physical Review B 100, 100302 (2019).
[82] O. Howell, P. Weinberg, D. Sels, A. Polkovnikov, and M. Bukov, Physical Review Letters 122, 010602 (2019).
[83] A. Pizzi, A. Nunnenkamp, and J. Knolle, Physical Review Letters 127, 140602 (2021).
[84] A. Pizzi, A. Nunnenkamp, and J. Knolle, Physical Review B 104, 094308 (2021).
[85] B. Ye, F. Machado, and N. Y. Yao, Physical Review Letters 127, 140603 (2021).
[
"Nonleptonic B s to charmonium decays and their role in the determination of the β s",
"Nonleptonic B s to charmonium decays and their role in the determination of the β s"
] | [
"Wei Wang \nIstituto Nazionale di Fisica Nucleare\nSezione di Bari\nVia Orabona 4I-70126-BariItaly\n"
] | [
"Istituto Nazionale di Fisica Nucleare\nSezione di Bari\nVia Orabona 4I-70126-BariItaly"
] | [] | This talk consists of two parts. We first present a light-cone QCD sum rule computation of the B s → f 0 (980) form factors which are necessary inputs in semileptonic and nonleptonic B s decays into f 0 (980). Then we analyze nonleptonic B s decays into a charmonium state and a light meson, which are potentially useful to access the B s -B s mixing phase β s . We explore the experimental feasibility of measuring these various channels, paying attention to different determinations of β s in view of the hints of new physics recently emerged in the B s sector. | 10.1063/1.3536575 | [
"https://arxiv.org/pdf/1010.1924v1.pdf"
] | 118,580,852 | 1010.1924 | a7583b6ac798ce45e78c04099207549b36f0f1e0 |
Nonleptonic B s to charmonium decays and their role in the determination of the β s
10 Oct 2010
Wei Wang
Istituto Nazionale di Fisica Nucleare
Sezione di Bari
Via Orabona 4, I-70126 Bari, Italy
Nonleptonic B s to charmonium decays and their role in the determination of the β s
10 Oct 2010. Keywords: B_s decays; QCD sum rules. PACS: 13.25.Hw, 12.38.Lg
This talk consists of two parts. We first present a light-cone QCD sum rule computation of the B_s → f_0(980) form factors, which are necessary inputs in semileptonic and nonleptonic B_s decays into f_0(980). Then we analyze nonleptonic B_s decays into a charmonium state and a light meson, which are potentially useful to access the B_s-B̄_s mixing phase β_s. We explore the experimental feasibility of measuring these various channels, paying attention to different determinations of β_s in view of the hints of new physics recently emerged in the B_s sector.
INTRODUCTION
The detailed study of CP violation is a powerful and rigorous tool in the discrimination between the Standard Model (SM) and alternative scenarios. For instance, the analysis of the B_s unitarity triangle of the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements,

V_{us} V_{ub}^* + V_{cs} V_{cb}^* + V_{ts} V_{tb}^* = 0 ,

provides an important test of the SM description of CP violation. One of its angles, defined as

\beta_s = \arg\left( - \frac{V_{ts} V_{tb}^*}{V_{cs} V_{cb}^*} \right) ,

is half of the phase in the B_s-B̄_s mixing, and is expected to be tiny in the SM: β_s ≃ 0.017 rad. The current measurements by the CDF and DØ collaborations at the Tevatron, based on the angular analysis of the time-dependent differential decay width in the process B_s → J/ψφ [1], indicate larger values, and the averaged results are consistent with the SM only at the 2.2σ level: φ_s^{J/ψφ} = -2β_s = -0.77^{+0.29}_{-0.37} or φ_s^{J/ψφ} = -2β_s = -2.36^{+0.37}_{-0.29} [2]. Although the recent result by CDF, β_s ∈ [0.0, 0.5] ∪ [1.1, 1.5] (at 68% confidence level) [3], shows a smaller deviation from the SM, the uncertainties are still large, and the precise measurement of β_s remains a priority for the forthcoming experiments. Towards this goal, nonleptonic B_s decays are certainly of prime importance.
In this work we first compute the B_s → f_0(980) form factors using light-cone QCD sum rules (LCSR) [4]. These results are useful in the analysis of semileptonic and nonleptonic B_s → f_0 decays. Subsequently, we investigate the B_s decay modes induced by the transition b → c c̄ s, namely B_s → M_{c c̄} L, where M_{c c̄} is an s-wave or p-wave charmonium state and L is a light scalar, pseudoscalar or vector meson: f_0(980), η, η′, φ [5]. In particular, we exploit the generalized factorization approach to calculate their branching fractions in the SM, in order to understand which of these modes are better suited to determine β_s.
B_s → f_0 FORM FACTORS IN LCSR
Hereafter we use f_0 to denote the f_0(980) meson for simplicity. The matrix elements involved in B_s → f_0 transitions are parametrized in terms of the form factors
\langle f_0(p_{f_0})| J^5_\mu |B_s(p_{B_s})\rangle = -i \Big[ F_1(q^2)\Big( P_\mu - \frac{m_{B_s}^2 - m_{f_0}^2}{q^2}\, q_\mu \Big) + F_0(q^2)\, \frac{m_{B_s}^2 - m_{f_0}^2}{q^2}\, q_\mu \Big] ,

\langle f_0(p_{f_0})| J^{5T}_\mu |B_s(p_{B_s})\rangle = - \frac{F_T(q^2)}{m_{B_s} + m_{f_0}} \Big[ q^2 P_\mu - (m_{B_s}^2 - m_{f_0}^2)\, q_\mu \Big] ,   (1)
where P = p_{B_s} + p_{f_0}, q = p_{B_s} - p_{f_0}, and J^5_\mu = \bar s \gamma_\mu \gamma_5 b, J^{5T}_\mu = \bar s \sigma_{\mu\nu} \gamma_5 q^\nu b.
To compute such form factors in the LCSR [6], we consider the correlation function

\Pi(p_{f_0}, q) = i \int d^4x\, e^{i q \cdot x}\, \langle f_0(p_{f_0})| T\{ j_{\Gamma_1}(x)\, j_{\Gamma_2}(0) \} |0\rangle ,   (2)

with j_{\Gamma_1} being one of the currents in the definition of the B_s → f_0 form factors: j_{\Gamma_1} = J^5_\mu for F_1 and F_0, and j_{\Gamma_1} = J^{5T}_\mu for F_T. The matrix element of j_{\Gamma_2} = \bar b i\gamma_5 s between the vacuum and B_s defines the B_s decay constant f_{B_s}:

\langle \bar B_s(p_{B_s})| \bar b\, i\gamma_5\, s |0\rangle = \frac{m_{B_s}^2}{m_b + m_s}\, f_{B_s} .

The LCSR method consists in evaluating the correlation function in Eq. (2) both at the hadron level and at the quark level; equating the two representations allows us to obtain a set of sum rules suitable to derive the form factors.
The hadronic representation of the correlation function in Eq. (2) can be written as the sum of the contribution of the B̄_s state and that of the higher resonances and the continuum of states h:

\Pi^H(p_{f_0}, q) = \frac{ \langle f_0(p_{f_0})| j_{\Gamma_1} |\bar B_s(p_{f_0}+q)\rangle\, \langle \bar B_s(p_{f_0}+q)| j_{\Gamma_2} |0\rangle }{ m_{B_s}^2 - (p_{f_0}+q)^2 } + \int_{s_0}^{\infty} ds\, \frac{ \rho^h(s, q^2) }{ s - (p_{f_0}+q)^2 } ,   (3)
where the higher resonances and the continuum of states are described in terms of the spectral function ρ^h(s, q^2), contributing above a threshold s_0. The correlation function can be evaluated in QCD with the expression
\Pi^{QCD}(p_{f_0}, q) = \frac{1}{\pi} \int_{(m_b+m_s)^2}^{\infty} ds\, \frac{ \mathrm{Im}\, \Pi^{QCD}(s, q^2) }{ s - (p_{f_0}+q)^2 } .   (4)
Expanding the T-product in Eq. (2) on the light cone, we obtain a series of operators, ordered by increasing twist, whose matrix elements between the vacuum and the f_0 are written in terms of f_0 light-cone distribution amplitudes (LCDAs).
Since the hadronic spectral function ρ^h in (3) is unknown, we invoke global quark-hadron duality to identify ρ^h with ρ^{QCD} = (1/π) Im Π^{QCD} when integrated above s_0, so that

\int_{s_0}^{\infty} ds\, \frac{\rho^h(s, q^2)}{s - (p_{f_0}+q)^2} = \frac{1}{\pi} \int_{s_0}^{\infty} ds\, \frac{\mathrm{Im}\, \Pi^{QCD}(s, q^2)}{s - (p_{f_0}+q)^2} .

Using quark-hadron duality, together with the equality Π^H(p_{f_0}, q) = Π^{QCD}(p_{f_0}, q), and performing a Borel transformation of the two representations, we obtain a generic sum rule for the form factors,

\langle f_0(p_{f_0})| j_{\Gamma_1} |\bar B_s(p_{B_s})\rangle\, \langle \bar B_s(p_{B_s})| j_{\Gamma_2} |0\rangle\, e^{-m_{B_s}^2/M^2} = \frac{1}{\pi} \int_{(m_b+m_s)^2}^{s_0} ds\, e^{-s/M^2}\, \mathrm{Im}\, \Pi^{QCD}(s, q^2) ,   (5)

where Q^2 = -q^2, p_{B_s} = p_{f_0} + q, and M^2 is the Borel parameter. The Borel transformation improves the convergence of the series in Π^{QCD} and, for suitable values of M^2, enhances the contribution of the low-lying states to Π^H. Eq. (5) allows us to derive the sum rules for F_1, F_0 and F_T, choosing j_{\Gamma_1} = J^5_\mu or j_{\Gamma_1} = J^{5T}_\mu. We refer to Ref. [4] for the numerical values of the input parameters as well as for the final expressions of the form factors obtained from (5). The threshold s_0 is expected to be around the mass squared of the first radial excitation of B_s and is fixed as s_0 = (34 ± 2) GeV^2. As for the Borel parameter, the result is obtained by requiring stability against variations of M^2. In Fig. 1 we show the dependence of F_1(q^2 = 0) and F_T(q^2 = 0) on M^2: we find stability for M^2 > 6 GeV^2, and thus choose M^2 = (8 ± 2) GeV^2.

FIGURE 1. Dependence of F_1^{B_s → f_0}(0) and F_T^{B_s → f_0}(0) on the Borel parameter M^2.
To describe the form factors in the whole kinematically accessible q^2 region, we use the parameterization

F_i(q^2) = \frac{F_i(0)}{1 - a_i\, q^2/m_{B_s}^2 + b_i\, (q^2/m_{B_s}^2)^2} , \qquad i \in \{1, 0, T\} ;

the parameters F_i(0), a_i and b_i are obtained by fitting the form factors computed numerically in the large-recoil region. Our results are collected in Table 1, where the quoted uncertainties are due to the input parameters, s_0 and M^2.

TABLE 1. B_s → f_0 form factors in the LCSR.

|     | F_i(q^2 = 0)  | a_i              | b_i              |
| F_1 | 0.185 ± 0.029 | 1.44 +0.13 -0.09 | 0.59 +0.07 -0.05 |
| F_0 | 0.185 ± 0.029 | 0.47 +0.12 -0.09 | 0.01 +0.08 -0.09 |
| F_T | 0.228 ± 0.036 | 1.42 +0.13 -0.10 | 0.60 +0.06 -0.05 |
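As an illustrative worked evaluation (ours, not from the paper), plugging the central F_1 values of Table 1 into this parameterization, and assuming m_{B_s} ≃ 5.37 GeV, gives the form factor at q^2 = 10 GeV^2:

```latex
% Illustrative numerical check using the central fit values of Table 1
% and the assumption m_{B_s} \simeq 5.37 GeV.
\[
  \frac{q^2}{m_{B_s}^2} = \frac{10\ \mathrm{GeV}^2}{28.8\ \mathrm{GeV}^2} \simeq 0.347 ,
  \qquad
  F_1(10\ \mathrm{GeV}^2)
  = \frac{0.185}{1 - 1.44\,(0.347) + 0.59\,(0.347)^2}
  \simeq \frac{0.185}{0.571}
  \simeq 0.32 ,
\]
```

roughly 1.75 times the q^2 = 0 value, as expected from the growth of the form factor towards small recoil.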
B_s → M_{c\bar c} L DECAYS
The effective Hamiltonian responsible for decays induced by the b → c c̄ s transition is

H_{\mathrm{eff}} = \frac{G_F}{\sqrt{2}} \left\{ V_{cb} V_{cs}^* \left[ C_1(\mu) O_1 + C_2(\mu) O_2 \right] - V_{tb} V_{ts}^* \sum_{i=3}^{10,7\gamma,8g} C_i(\mu)\, O_i(\mu) \right\} ,   (6)
where G_F is the Fermi constant, the O_i are the four-quark or magnetic-moment operators, and the C_i are Wilson coefficients. With the assumption of CKM unitarity and neglecting the tiny V_{ub} V_{us}^*, we have V_{tb} V_{ts}^* = -V_{cb} V_{cs}^*. The simplest approach to compute the matrix element of the effective four-quark or magnetic-moment operators is the naive factorization approach. In this approach, neglecting the magnetic-moment operators, whose contributions are suppressed by α_s, the B̄_a → M_{c c̄} L amplitude reads (a being a light flavour index)
\mathcal{A}(\bar B_a \to M_{c\bar c} L) = \frac{G_F}{\sqrt{2}}\, V_{cb} V_{cs}^*\, a_2^{\mathrm{eff}}(\mu)\, \langle M_{c\bar c}| \bar c \gamma^\mu (1-\gamma_5) c |0\rangle\, \langle L| \bar s \gamma_\mu (1-\gamma_5) b |\bar B_a\rangle ,   (7)

where a_2^{eff}(μ) is a combination of the Wilson coefficients: a_2^{eff}(μ) = a_2(μ) + a_3(μ) + a_5(μ), with a_2 = C_2 + C_1/N_c, a_3 = C_3 + C_4/N_c + (3/2) e_c (C_9 + C_{10}/N_c) and a_5 = C_5 + C_6/N_c + (3/2) e_c (C_7 + C_8/N_c), where e_c = 2/3 and N_c = 3. This factorization approach allows us to express the decay amplitudes in terms of the heavy-to-light form factors and the decay constant of the emitted meson. Unfortunately, one severe drawback is that naive factorization badly reproduces several branching ratios for which experimental data are available. In particular, the b → c c̄ s induced modes under scrutiny are color suppressed, and the predictions of naive factorization typically undershoot the data. The most striking discrepancy is for the B_d decay modes with χ_c0 in the final state, which have a sizeable rate although their decay amplitude vanishes in the factorization approach [7].
Several modifications to the naive factorization ansatz have been proposed; in this work we explore one possibility, treating the Wilson coefficients as effective parameters to be determined from experiment. In principle, this implies that such coefficients are channel dependent. However, for channels related by flavour symmetries, universal values for the coefficients can be assumed. In our case, this generalized factorization approach consists in considering the quantity a_2^{eff} in (7) as a process-dependent parameter to be fixed from experiment. In particular, if one assumes the flavor SU(3) symmetry, B (B_u or B_d) decays can be related to analogous B_s decays, so that experimental data on B decays provide a prediction for the related B_s modes.

Our strategy is to exploit the existing experimental data for B decay modes to determine an effective parameter a_2^{eff} and, assuming SU(3)_F symmetry, to use these values to predict the flavour-related B_s decays. In the case of modes with χ_c1 in the final state, we determine the combination f_{χ_c1} a_2^{eff}, since sizable uncertainties may affect the Wilson coefficient but cancel in the predictions of the branching ratios. In this procedure, we use two sets of form factors: the one obtained using sum rules based on the short-distance expansion [9], and the set in [10] based on the light-cone expansion. In the case of B_s → φ and B_s → f_0(980) we use form factors determined by LCSR [10, 4]. The B_s → η^(′) form factors are related to the analogous B → K form factors through the mixing angle between η and η′ in the flavor basis [11], which can be fixed to the value measured by the KLOE Collaboration, θ = (41.5 ± 0.3_stat ± 0.7_syst ± 0.6_th)° [12], a value also supported by a QCD sum rule analysis of the radiative φ → η^(′)γ modes [13]. The Wilson coefficients for B_s → M_{c c̄} f_0(980) are obtained using the effective value determined from B → M_{c c̄} K. Most of the other numerical inputs are taken from the Particle Data Group, and we refer to Ref. [5] for more details.
The predictions for the branching ratios of B_s decays are given in Tables 2 and 3.

TABLE 2. Branching ratios (in units of 10^{-4}) of B_s → M_{c c̄} L using the form factors in [9] (CDSS) and in [10] (BZ). Experimental results are taken from the PDG [8], except for B_s → J/ψ η (η′), measured by the Belle Collaboration [14], and the bound for B_s → J/ψ f_0, from Ref. [15].

| mode        | B (CDSS)    | B (BZ)      | Exp.        |
| J/ψ η       | 4.3 ± 0.2   | 4.2 ± 0.2   | 3.32 ± 1.02 |
| J/ψ η′      | 4.4 ± 0.2   | 4.3 ± 0.2   | 3.1 ± 1.39  |
| ψ(2S) η     | 2.9 ± 0.2   | 3.0 ± 0.2   |             |
| ψ(2S) η′    | 2.4 ± 0.2   | 2.5 ± 0.2   |             |
| J/ψ φ       | -           | 16.7 ± 5.7  | 13 ± 4      |
| ψ(2S) φ     | -           | 8.3 ± 2.7   | 6.8 ± 3.0   |
| J/ψ f_0     | 4.7 ± 1.9   | 2.0 ± 0.8   | < 3.26      |
| ψ(2S) f_0   | 2.3 ± 0.9   | 0.89 ± 0.36 |             |
| η_c η       | 4.0 ± 0.7   | 3.9 ± 0.6   |             |
| η_c η′      | 4.6 ± 0.8   | 4.5 ± 0.7   |             |
| η_c(2S) η   | 1.5 ± 0.8   | 1.4 ± 0.7   |             |
| η_c(2S) η′  | 1.6 ± 0.9   | 1.5 ± 0.8   |             |
| η_c φ       | -           | 15.0 ± 7.8  |             |
| η_c f_0     | 4.1 ± 1.7   | 2.0 ± 0.9   |             |
| η_c(2S) f_0 | 0.58 ± 0.38 | 1.3 ± 0.8   |             |
| χ_c1 η      | 2.0 ± 0.2   | 2.0 ± 0.2   |             |
| χ_c1 η′     | 1.9 ± 0.2   | 1.8 ± 0.2   |             |
| χ_c1 f_0    | 1.88 ± 0.77 | 0.73 ± 0.30 |             |
| χ_c1 φ      | -           | 3.3 ± 1.3   |             |

TABLE 3. Branching ratios of B_s decays into p-wave charmonia (in units of 10^{-4}).

| mode     | B           | mode     | B                    | mode    | B       |
| χ_c0 η   | 0.85 ± 0.13 | χ_c2 η   | < 0.17               | h_c η   | < 0.23  |
| χ_c0 η′  | 0.87 ± 0.13 | χ_c2 η′  | < 0.17               | h_c η′  | < 0.23  |
| χ_c0 f_0 | 1.15 ± 0.17 | χ_c2 f_0 | < 0.29               | h_c f_0 | < 0.30  |
| χ_c0 φ   | 1.59 ± 0.38 | χ_c2 φ   | < 0.10 (0.62 ± 0.17) | h_c φ   | (< 1.9) |
In Table 2 the available experimental data [8, 14, 15] are also reported, and they show a satisfactory agreement with the predictions. In the theoretical predictions we have included the uncertainty on the form factors at q^2 = 0 and on the experimental branching ratios; in the case of the modes involving η or η′, however, the uncertainty on the form factors is not included, since the dependence on the form factors cancels when the branching ratios of B → J/ψK decays are related to the corresponding B_s decays. As appears from Tables 2 and 3, all the considered modes have sizable branching fractions, large enough to make them promising candidates for the measurement of β_s. The modes involving η, η′, f_0 present, with respect to the golden mode B_s → J/ψφ, the advantage that the final state is a CP eigenstate, not requiring an angular analysis. However, channels with η and η′ can be useful only after a number of events have been accumulated, since at least two photons are required for the reconstruction.
As discussed in [16, 4, 17], B_s → J/ψ f_0 has appealing features since, compared with the η^(′), the f_0 can be easily identified in the π⁺π⁻ final state with a large branching ratio, B(f_0 → π⁺π⁻) = (50^{+7}_{-8})% [18], so that this channel can likely be accessed. At present, the Belle Collaboration has provided the upper limit [15]

\mathcal{B}(B_s \to J/\psi f_0) \times \mathcal{B}(f_0 \to \pi^+\pi^-) < 1.63 \times 10^{-4} ,   (8)
marginally in accordance with our prediction. Let us now come to B_s decays into p-wave charmonia. Among these decays, the only one with a non-vanishing amplitude in the factorization assumption is that with χ_c1 in the final state. In the other cases, i.e., the modes involving χ_c0,2 or h_c, shown in Table 3, the results are obtained by determining the decay amplitudes from the B decay data through SU(3) symmetry. In this case, the differences between the B and B_s decays arise from the phase space and the lifetimes of the heavy mesons. As for the mechanism inducing such processes, one possibility is that rescattering is responsible for their observed branching fractions, as proposed in Ref. [7]. Among these channels, B_s → χ_c0 φ is of prime interest and promising for both hadron colliders and B factories.
CONCLUSION
Recent results in the B_s sector strongly motivate theoretical efforts to shed light on which decay modes are most promising to unveil new physics. In this work we have analyzed channels induced by the b → c c̄ s transition. Modes with a charmonium state plus η, η′, f_0(980) are the most promising, since the final states are CP eigenstates that do not require an angular analysis. In particular, the case of the f_0 is particularly suitable in view of its easier reconstruction in the subsequent decay to π⁺π⁻. As a preliminary step we have used light-cone sum rules to compute the B_s → f_0(980) form factors, which are necessary inputs in the analysis of B_s decays.
ACKNOWLEDGMENTS
I thank the workshop organizers for the nice week in Martina Franca. I also thank P. Colangelo and F. De Fazio for collaboration.
[1] V. M. Abazov et al. [D0 Collaboration], Phys. Rev. D 76, 057101 (2007); Phys. Rev. Lett. 101, 241801 (2008); T. Aaltonen et al. [CDF Collaboration], Phys. Rev. Lett. 100, 121803 (2008); Phys. Rev. Lett. 100, 161802 (2008).
[2] E. Barberio et al. [Heavy Flavor Averaging Group], arXiv:0808.1297 [hep-ex].
[3] L. Oakes for the CDF Collaboration, talk at FPCP 2010, Torino.
[4] P. Colangelo, F. De Fazio and W. Wang, Phys. Rev. D 81, 074001 (2010).
[5] P. Colangelo, F. De Fazio and W. Wang, arXiv:1009.4612 [hep-ph].
[6] P. Colangelo and A. Khodjamirian, in "At the Frontier of Particle Physics / Handbook of QCD", ed. by M. Shifman (World Scientific, Singapore, 2001), Vol. 3, pp. 1495-1576, arXiv:hep-ph/0010175.
[7] P. Colangelo et al., Phys. Lett. B 542, 71 (2002); Phys. Rev. D 69, 054023 (2004).
[8] C. Amsler et al. [Particle Data Group], Phys. Lett. B 667, 1 (2008).
[9] P. Colangelo et al., Phys. Rev. D 53, 3672 (1996) [Erratum: Phys. Rev. D 57, 3186 (1998)].
[10] P. Ball and R. Zwicky, Phys. Rev. D 71, 014015 (2005); Phys. Rev. D 71, 014029 (2005).
[11] T. Feldmann, P. Kroll and B. Stech, Phys. Rev. D 58, 114006 (1998); Phys. Lett. B 449, 339 (1999); T. Feldmann, Int. J. Mod. Phys. A 15, 159 (2000).
[12] F. Ambrosino et al. [KLOE Collaboration], Phys. Lett. B 648, 267 (2007).
[13] F. De Fazio and M. R. Pennington, JHEP 0007, 051 (2000).
[14] I. Adachi et al. [Belle Collaboration], arXiv:0912.1434 [hep-ex].
[15] R. Louvot [Belle Collaboration], arXiv:1009.2605 [hep-ex].
[16] S. Stone and L. Zhang, Phys. Rev. D 79, 074024 (2009); arXiv:0909.5442.
[17] O. Leitner et al., arXiv:1003.5980 [hep-ph].
[18] M. Ablikim et al. [BES Collaboration], Phys. Rev. D 70, 092002 (2004); ibid. D 72, 092002 (2005).
| [] |
[
"Evaluating the Tradeoff Between Abstractiveness and Factuality in Abstractive Summarization",
"Evaluating the Tradeoff Between Abstractiveness and Factuality in Abstractive Summarization"
] | [
"Markus Dreyer [email protected] ",
"Mengwen Liu [email protected] ",
"Feng Nan [email protected] ",
"Sandeep Atluri [email protected] ",
"Sujith Ravi [email protected] ",
"Slicex 2 Ai "
] | [] | [
"Association for Computational Linguistics: EACL 2023"
] | Neural models for abstractive summarization tend to generate output that is fluent and wellformed but lacks semantic faithfulness, or factuality, with respect to the input documents.In this paper, we analyze the tradeoff between abstractiveness and factuality of generated summaries across multiple datasets and models, using extensive human evaluations of factuality. In our analysis, we visualize the rates of change in factuality as we gradually increase abstractiveness using a decoding constraint, and we observe that, while increased abstractiveness generally leads to a drop in factuality, the rate of factuality decay depends on factors such as the data that the system was trained on. We introduce two datasets with human factuality judgements; one containing 10.2k generated summaries with systematically varied degrees of abstractiveness; the other containing 4.2k summaries from five different summarization models. We propose new factuality metrics that adjust for the degree of abstractiveness, and we use them to compare the abstractivenessadjusted factuality of previous summarization works, providing baselines for future work. 1 * * Work conducted during his position at Amazon. 1 Code and data are available at https: //github.com/amazon-science/ abstractive-factual-tradeoff. | null | [
"https://www.aclanthology.org/2023.findings-eacl.156.pdf"
] | 258,309,129 | 2108.02859 | 54b54f8b6955ec82c0c015189454886976025d4c |
Evaluating the Tradeoff Between Abstractiveness and Factuality in Abstractive Summarization
Markus Dreyer [email protected]
Mengwen Liu [email protected]
Feng Nan [email protected]
Sandeep Atluri [email protected]
Sujith Ravi [email protected]
SliceX AI
Association for Computational Linguistics: EACL 2023, pages 2044-2060, May 2-6, 2023
Neural models for abstractive summarization tend to generate output that is fluent and well-formed but lacks semantic faithfulness, or factuality, with respect to the input documents. In this paper, we analyze the tradeoff between abstractiveness and factuality of generated summaries across multiple datasets and models, using extensive human evaluations of factuality. In our analysis, we visualize the rates of change in factuality as we gradually increase abstractiveness using a decoding constraint, and we observe that, while increased abstractiveness generally leads to a drop in factuality, the rate of factuality decay depends on factors such as the data that the system was trained on. We introduce two datasets with human factuality judgements; one containing 10.2k generated summaries with systematically varied degrees of abstractiveness; the other containing 4.2k summaries from five different summarization models. We propose new factuality metrics that adjust for the degree of abstractiveness, and we use them to compare the abstractiveness-adjusted factuality of previous summarization works, providing baselines for future work. [1]
* Work conducted during his position at Amazon.
[1] Code and data are available at https://github.com/amazon-science/abstractive-factual-tradeoff.
Introduction
Summarization is the task of generating a semantically faithful, well-formed and concise text representation of the input. Automatically generated summaries have traditionally been extractive (Luhn, 1958; Edmundson, 1969; Neto et al., 2002; Erkan and Radev, 2004; Wong et al., 2008), leading to issues with readability and coherence, as different extracted fragments may not fit well when taken out of their original contexts (Poibeau and Saggion, 2012). Researchers have also invested in methods for abstractive summarization, aiming to paraphrase the input documents' main points without borrowing their exact lexical expressions (Radev and McKeown, 1998; Saggion and Lapalme, 2002; Ganesan et al., 2010; Genest and Lapalme, 2012; Radford et al., 2019; Gehrmann et al., 2019; Lewis et al., 2019; Zhang et al., 2020). Abstractive summaries generated by today's neural models tend to be fluent and well-formed, but lack semantic faithfulness (Cao et al., 2017; Kryscinski et al., 2019). Observed rates of factual errors in abstractive summaries have ranged from 30% to over 75% (Cao et al., 2017; Maynez et al., 2020). The research community is developing automatic factuality metrics (Kryscinski et al., 2020; Goodrich et al., 2019; Goyal and Durrett, 2020; Ribeiro et al., 2022) and methods that attempt to increase factuality (Fan et al., 2018; Scialom et al., 2019; Zhang et al., 2019; Falke et al., 2020; Cao and Wang, 2021). However, the factuality problem of abstractive summaries cannot be well understood without considering the degree of abstractiveness of a given summary: any summary is on a spectrum between extractive and abstractive (See et al., 2017). Summaries that are extractive to a larger extent tend to be more factual, since copying text from the input into the summary rarely introduces factual errors, while the task of paraphrasing, which results in summaries that are more abstractive, is harder and prone to semantic errors. As an example, Figure 1 shows part of a Washington Post article and three summaries with increasing abstractiveness, which we have generated using our abstractiveness constraints (Section 2.2). The first two summaries are correct, but the third, most abstractive, summary has factual errors, misinterpreting the input.

Figure 1: Three successively more abstractive summaries generated from the same input article, with MINT abstractiveness scores (Section 2.1) of 46.1%, 67.2%, 79.5%. Fragments extracted from the input are marked from red (longer fragments) to yellow (shorter fragments). The bottom summary has factual errors.
Few authors have discussed this connection explicitly. Lebanoff et al. (2019) observe that abstractive summaries consisting of concatenated extracted fragments tend to be more factual than those created by more complex fusion. Durmus et al. (2020) observe that models trained on the more extractive CNN/DM dataset (Hermann et al., 2015) create more factual summaries than models trained on the more abstractive XSum dataset (Narayan et al., 2018). We show that such models differ in factuality even when we bias them to generate summaries that have similar levels of abstractiveness. Our analysis (Section 4) situates summarization models on the spectrum outlined in Figure 2, where factual summaries range from "trivially factual" (extractive) to truly "paraphrasing" (abstractive). We make the following contributions:
1. We systematically explore the relationship of abstractiveness and factuality and show how factuality decays with increasing abstractiveness. We argue that factuality rates of different systems cannot be compared without taking their degrees of abstractiveness into account.
2. We introduce new factuality metrics that take abstractiveness into account and evaluate the abstractiveness-factuality tradeoff across various datasets and summarization models. We establish baselines that will allow others to demonstrate progress on mitigating the abstractiveness-factuality tradeoff.
3. We introduce a new dataset containing 10.2k summaries with systematically varied degrees of abstractiveness along with human factuality judgements, and a second dataset containing 4.2k summaries from five summarization models with their human factuality judgements.
Abstractiveness
Measuring Abstractiveness
In this paper, we wish to analyze the relationship of abstractiveness and factuality of generated summaries. We start by proposing a comprehensive abstractiveness metric. Abstractiveness measures the amount of rephrasing, i.e., the degree to which the words, phrases and sequences of the generated text have not been extracted from the corresponding input; a fully abstractive summary method expresses the main points of the input in its own words. To measure abstractiveness, most authors list the proportions of summary n-grams of varying lengths that are novel, i.e., do not occur in the corresponding inputs (See et al., 2017; Narayan et al., 2018; Gao et al., 2019). Grusky et al. (2018) proposed a new metric also based on contiguous overlapping text spans, density, measuring the average length of extracted fragments in a summary. Others have proposed metrics that take common non-contiguous subsequences into account; e.g., perfect fusion k (Durmus et al., 2020) measures the percentage of summary sentences that assemble substrings from k source sentences in their original order. Based on these previous works, we define a comprehensive abstractiveness metric that combines measures of contiguous and non-contiguous extractive summary fragments, making it sensitive to different kinds of abstractiveness and therefore suitable as a general abstractiveness metric. We define this metric as a ratio, in order to facilitate combining it with a factuality metric of the same [0,1] range (Section 4). Let χ(x, y) = hmean(p_1, p_2, p_3, p_4, lcsr) be a measure of extractive overlap between input x and summary y, using the harmonic mean of multiple component measures. Each p_n, short for p_n(x, y), is the n-gram precision of the n-grams in y with respect to x, i.e., the percentage of n-grams in y that are extracted from x. Following common practice (Papineni et al., 2002), we use n-grams up to length four. We do not include density in χ(x, y), as its range is unbounded. The measure lcsr (longest common subsequence ratio), short for lcsr(x, y), is the length of the longest common subsequence (LCS) between x and y divided by the length of y. lcsr, inspired by ROUGE-L (Lin, 2004), generalizes perfect fusion k to consider all instances of non-contiguous overlaps between input and summary. Adding a measure of non-contiguous overlap is important, as it detects overlaps that are long but broken up by minor changes, such as synonyms, as in the example in Figure 3. Finally, the MINT (Metric for lexical independence of generated text) abstractiveness measure is defined as MINT(x, y) = 1 − χ(x, y). For a set of inputs and their summaries, we report the average MINT score. See Figure 1 for the MINT scores of three increasingly abstractive example summaries.

Figure 3: Example of input and highly extractive generated output. The color coding is the same as in Fig. 1.
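As a minimal sketch (ours, not the authors' released implementation), the MINT computation could be written as follows; whitespace tokenization, the helper names, and treating a zero component as driving the harmonic mean to zero are our own choices:

```python
from collections import Counter

def ngram_precision(src_tokens, sum_tokens, n):
    """Fraction of summary n-grams that also occur in the source."""
    src = Counter(zip(*[src_tokens[i:] for i in range(n)]))
    summ = list(zip(*[sum_tokens[i:] for i in range(n)]))
    if not summ:
        return 0.0
    return sum(1 for g in summ if src[g] > 0) / len(summ)

def lcsr(src_tokens, sum_tokens):
    """Length of the longest common subsequence divided by |summary|."""
    m, n = len(src_tokens), len(sum_tokens)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if src_tokens[i] == sum_tokens[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[m][n] / n if n else 0.0

def mint(source, summary):
    """MINT(x, y) = 1 - hmean(p1, p2, p3, p4, lcsr)."""
    x, y = source.split(), summary.split()
    comps = [ngram_precision(x, y, n) for n in (1, 2, 3, 4)] + [lcsr(x, y)]
    if any(c == 0.0 for c in comps):  # harmonic mean is 0 if any component is 0
        return 1.0
    hmean = len(comps) / sum(1.0 / c for c in comps)
    return 1.0 - hmean
```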
The described MINT score capitalizes on prior work to provide a comprehensive and unified metric for abstractiveness of conditionally generated text, combining measures of contiguous and noncontiguous overlap into a single percentage score. The implementation of MINT we provide will facilitate standardized comparisons of abstractiveness across different works.
Nonlinear Abstractiveness Constraints
We now introduce nonlinear abstractiveness constraints (NAC), which enable us to control the degree of abstractiveness at decoding time: they allow us to use a trained summarization model to decode an input multiple times while applying constraints to control the abstractiveness of the generated text output (e.g., see Figure 1). We will use this technique to analyze the impact of abstractiveness on factuality (Section 4).
Let F(x, y) be the set of the longest extractive fragments in the decoding output y with respect to the input x. In Figure 1, such fragments are marked in color for each summary. We define a function λ_h(|f|) that assigns a discount probability to any extractive fragment f ∈ F(x, y):

\lambda_h(|f|) = 2^{-|f|^2/h^2} .   (1)

We configure this function with h, interpreted as the length of an extracted fragment for which λ_h = 0.5. Decreasing h results in a λ_h that discounts shorter extractive fragments more strongly, leading to increased abstractiveness (see Figure 4). Our discount penalty grows nonlinearly, affecting longer extractive fragments more strongly than multiple shorter ones with the same combined length. To see why we choose a nonlinear penalty, consider for example that extracting a 10-gram makes a summary more extractive than using ten words from the article separately, since an extracted 10-gram will be highly recognizable as stemming from the input. This nonlinearity is in contrast to Weber et al. (2018), which used a linear penalty to control the amount of copying in a pointer network.
In decoding, we search for the summary ŷ that maximizes the product of the summarization model probability, p_M(y | x), and the discount probabilities of the extractive fragments F(x, y):

\hat{y} = \arg\max_y \; p_M(y \mid x) \times \prod_{f \in F(x,y)} \lambda_h(|f|) .   (2)
Beam Decoding.
The model probability p_M(y | x) in neural text generation models (Section 5.1.1) decomposes for token-by-token decoding as \prod_{i=1}^{|y|} p_M(y_i \mid x, y_1, \ldots, y_{i-1}). Similarly, we decompose the application of the λ_h function for any partial or completed extractive fragment f:

\lambda_h(|f|) = \prod_{l=1}^{|f|} \frac{\lambda_h(l)}{\lambda_h(l-1)} .   (3)
Therefore, to successively apply λ_h at each output position i in beam decoding, each candidate for token y_i is evaluated to check whether choosing it would extend an extractive fragment to length l. If so, its model probability p_M(y_i | . . . ) is multiplied with λ_h(l), and the λ_h(l − 1) that was applied to the previous token y_{i−1} is divided out. We are not aiming to control the length of the generated output; instead, we penalize the model in proportion to the length of any phrases it would extract from the input and encourage it to use novel phrases instead.

Extraction Rewards. We can choose to apply an extraction reward, rather than a penalty, by using the inverse 1/λ_h; smaller values of h then result in summaries that are more extractive.
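The per-token discount of Eq. (3) could be hooked into a beam-search scorer roughly as in the following sketch (ours; the fragment-length bookkeeping and the function names are hypothetical, not from the released code):

```python
import math

def lam(length, h):
    """Nonlinear fragment discount of Eq. (1): lam(0) = 1, lam(h) = 0.5."""
    return 2.0 ** (-(length ** 2) / (h ** 2))

def discounted_log_prob(log_p_token, frag_len_before, frag_len_after, h):
    """Incrementally apply Eq. (3): if choosing this token extends an
    extractive fragment from length l-1 to l, multiply the model
    probability by lam(l) and divide out the previously applied lam(l-1).
    Since lam(0) = 1, starting a new fragment just multiplies in lam(1)."""
    score = log_p_token
    if frag_len_after > frag_len_before:  # token extends an extracted fragment
        score += math.log(lam(frag_len_after, h))
        score -= math.log(lam(frag_len_before, h))
    return score
```

Because the per-step factors telescope, the total discount accumulated over a completed fragment f equals λ_h(|f|), as required by Eq. (2).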
Factuality
We now describe metrics for factuality, before we can describe the relationship between abstractiveness and factuality (Section 4). By factuality of a summary y, we mean factual consistency with the input x, rather than objective factuality or universal truth. Measuring factuality automatically is an active area of research (Gabriel et al., 2020). Factuality is most naturally measured by human annotators; we describe our setup for human factuality annotation first, then move to automatic metrics.
Human-annotated Factuality
Figure 5: Screenshot (part) of a Mechanical Turk task (HIT) to judge the factuality of a summary sentence (in blue) with respect to news articles. Darker green article sentences are more similar to the blue summary sentence. The full task showed sentences from two more articles in the same cluster; from the Multi-News test set.

We use Amazon's Mechanical Turk (AMT) to measure the factuality of automatically generated summaries with human annotators. These annotators are untrained, so we use multiple mitigation strategies to obtain high-quality judgements. We simplify the task: to avoid overwhelming annotators with long text, we select a single sentence per summary and ask the annotators if it is factually consistent with the shown article(s). The other sentences of the summary are given as well for context, shown in gray (see Figure 5). The article(s) are shortened to show a total of 9 sentences that were determined to be semantically most similar to the selected summary sentence; the remaining article parts are replaced by ". . . ". The summary sentence is selected at random in proportion to its length, as in the sketch below.
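A minimal sketch of this length-proportional sampling (ours; measuring length in whitespace tokens is an assumption):

```python
import random

def sample_sentence(sentences, rng=random):
    """Pick one summary sentence with probability proportional to its length
    (number of whitespace tokens). Assumes at least one non-empty sentence."""
    weights = [len(s.split()) for s in sentences]
    return rng.choices(sentences, weights=weights, k=1)[0]
```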
For each summary, we get judgements only for the randomly selected sentence. Aggregated over a set of summaries, we measure the average chance of any randomly selected summary sentence to be factual. We have verified high correlation of these factuality rates with the factuality rates obtained through professional annotators who judged complete summaries with respect to the full articles (see Appendix C).
We provide detailed task instructions, including examples for intrinsic and extrinsic factual errors (Maynez et al., 2020). We require that potential annotators pass a custom qualification test of finding factuality errors. Only workers with at least 100 completed tasks on AMT with an acceptance rate of 95%+ may take the test; 15% of those pass, enabling them to work on our tasks. We use three annotators per task and use MACE (Hovy et al., 2013) to aggregate annotations and recover the most likely binary factuality judgement per summary. We add summaries for which we know the correct factuality annotation and repeatedly check the annotators' accuracy on those summaries while they are annotating; all answers from annotators who fall below a threshold are replaced by answers from additional annotators. Appendix C describes more details on our setup and fair compensation.
For any set of generated summaries, we create the AMT tasks, get an aggregate binary judgement per summary based on the multiple answers as described, and report the mean of all human binary summary factuality judgements; we call this score FACTH (Table 1). We collect human factuality judgements for 10.2k BART summaries with varying degrees of abstractiveness, and for 4.2k summaries from five different summarization models.
Released Datasets. We release these human judgements as datasets called CONSTRAINTSFACT (Section 5.1) and MODELSFACT (Section 5.2). Previous datasets with human factuality judgements (Kryscinski et al., 2020; Maynez et al., 2020; Pagnoni et al., 2021) are substantially smaller, with under 5k summaries each, and our CONSTRAINTSFACT dataset is the first that evaluates the factuality of summaries with systematically varied degrees of abstractiveness.
Automatically Measured Factuality
Measuring factuality automatically is an active research area; Pagnoni et al. (2021) give an overview of recent metrics and compare their correlations to human judgements, where DAE (Goyal and Durrett, 2020, 2021) and FactCC (Kryscinski et al., 2020) perform well. DAE is an entailment model that classifies the factuality of the dependency arcs in the summary, resulting in fine-grained judgements at the subsentence level. FactCC is a BERT-based binary classifier trained on pairs of input and output sentences, where the output sentence is annotated as either factual or non-factual.
Abstractiveness-Factuality Tradeoff
The metrics for factuality and abstractiveness along with the abstractiveness constraints allow us to systematically explore the relationship between abstractiveness and factuality. We can control abstractiveness and observe the effect on factuality, i.e., we can vary the amount of lexical overlap between input and generated summary and observe the extent to which the summary preserves the input semantics.
Factuality Trend Lines. To explore this relationship, we train summarization models on different datasets. For any trained summarization model, we decode the test set multiple times with different values of h for λh (Equation 1), resulting in sets of summaries with varying degrees of abstractiveness. For each of these test set decodings, we measure abstractiveness using MINT and the corresponding factuality using human annotations, unless otherwise noted. This results in a series of (abstractiveness, factuality) points for any trained summarization model, which can be plotted, along with a linear trend line. Figure 6 shows such a plot; Section 5.1.2 discusses its details.
F@50 Score. Given each trend line, we can read off the factuality at 50% abstractiveness, an intuitively interpretable metric, which we call F@50; it provides a comparison of the factuality of different models with a fixed degree of abstractiveness.
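As an illustration, the least-squares trend line and the F@50 read-off take only a few lines. The MINT/FACTH values below are the CNN/DM rows of Table 1; the resulting F@50 (≈83) is close to, though not necessarily identical to, the reported 84.4, since the paper's exact fitting procedure may differ.

```python
import numpy as np

# CNN/DM points from Table 1: (MINT, FACTH) for 1/λ2, none, λ4, λ2.
mint = np.array([9.7, 17.6, 43.5, 70.8])
facth = np.array([94.8, 91.2, 87.0, 76.7])

slope, intercept = np.polyfit(mint, facth, deg=1)  # linear trend line
f_at_50 = slope * 50.0 + intercept                 # factuality at 50% abstractiveness
print(f"F@50 = {f_at_50:.1f}")                     # ~83 on these four points
```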
MINT-adjusted Factuality Scores. We characterize the tradeoff on any single decoding output using a weighted average between factuality and abstractiveness, (ϕF + A)/(ϕ + 1). To measure abstractiveness A, we use MINT; to measure factuality F , we use human-measured factuality or an automatic metric with [0,1] range like DAE or FactCC, resulting in abstractiveness-adjusted factuality metrics µFactH, µDAE, µFactCC, etc.
We give factuality a higher weight, since factual semantic representation of the input is a fundamental requirement for summarization and low factuality can have negative societal impact (Zellers et al., 2019), while abstractiveness is a desirable stylistic property. When two measures are combined into one comprehensive evaluation metric there is no a priori correct mixture weight; we follow common practice to give the more important measure twice the weight (Kohonen et al., 2010;Li et al., 2020;Preuß et al., 2021;Opitz and Frank, 2021) and set ϕ to 2. By this definition, a system whose factuality decreases by x units, as compared to another system, must make up for the lost factuality by 2x units in abstractiveness to get the same score. When two systems have the same factuality, the score prefers the one with higher abstractiveness.
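The resulting MINT-adjusted score is a one-liner; the example below reproduces a µFACTH value from Table 1.

```python
def mint_adjusted(factuality, abstractiveness, phi=2.0):
    """Weighted average of factuality F and abstractiveness A, both in [0, 100].
    phi=2 gives factuality twice the weight of abstractiveness (Section 4)."""
    return (phi * factuality + abstractiveness) / (phi + 1.0)

# BART on CNN/DM without constraints (Table 1): FACTH 91.2, MINT 17.6
print(round(mint_adjusted(91.2, 17.6), 1))  # -> 66.7, matching µFACTH in Table 1
```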
Discussion
The abstractiveness-adjusted factuality metrics address the issue that in the past, factuality rates of different systems have been compared without taking abstractiveness into account. However, if one system has a higher factuality rate than another, it may
have achieved this by copying phrases from the input into the summary with minimal rephrasing, i.e., by having a low degree of abstractiveness. Such a system may produce high-quality summaries, but their factuality rate cannot directly be compared to the factuality numbers of more abstractive summarization systems. Summarization methods that are highly factual and abstractive are able to rephrase the input with few factual errors; when we compare the factuality of abstractive summarizers we must control for the amount of such rephrasing. The abstractiveness-adjusted factuality metrics we propose enable us to compare the factuality of abstractive summarization models even when they perform different amounts of rephrasings.
As an analogy, consider precision and recall. High precision can be trivially achieved with low recall, just as high factuality can be achieved with low abstractiveness. Therefore, when comparing the precision of different retrieval systems, their recall numbers are taken into account by using the F-score.5 Similarly, we argue that factuality comparisons must take abstractiveness into account.

5 In our case, we use a weighted arithmetic mean instead because an F-score would steeply decline to zero as abstractiveness goes to zero, which is undesirable for output whose factuality is high.
Results
We use each of the four BART models to decode its respective test set multiple times, with varying abstractiveness constraints, resulting in 17 outputs. For each one, we obtain human factuality judgements on the corresponding 600 samples, resulting in 17 × 600 human factuality judgements (our CONSTRAINTSFACT dataset), which we aggregate into 17 mean FACTH scores; we also compute the corresponding 17 MINT scores. Figure 6 plots the resulting abstractiveness and human-measured factuality for each of the four models, thereby providing a visual representation of the abstractiveness-factuality tradeoff for these models. Table 1 shows the same 17 MINT and FACTH values, along with µFACTH and F@50 scores.
The lower right of Figure 6 shows five lozenges (♦). The larger one represents the decoding with our XSum-trained model using default settings; the other four red points represent decodings under the same model, but with different abstractiveness constraints that result in more extractive (1/λh) or more abstractive (λh) summaries (Section 2.2). The five red points are associated with a dashed linear trend line. Compared to the other points in the figure, abstractiveness is high and factuality low: the model tends to paraphrase its input, often incorrectly. It took a strong extractive reward (1/λ1), which we did not use for the models trained on other datasets, to bias this model toward lower abstractiveness and higher factuality.
For the Multi-News models, four decodings using MN-500 are shown as squares (■), decodings under MN-800 as triangles (▲). The MN-800 model is more factual across the abstractiveness spectrum. This can be explained by the fact that for MN-500, larger parts of the input are truncated (Section 5.1.1), parts which the untruncated reference summary in training may still refer to; the MN-500 model therefore learns to hallucinate more.
The four decodings for CNN/DM are shown as bullets (•). Its model output without abstractiveness constraint (large bullet) is the most extractive; the extraction reward to its left (using 1/λ2) cannot make it much more extractive. However, there is room to the right, and the abstraction rewards (λ4 and λ2) move its abstractiveness far into the abstractiveness range of Multi-News and XSum.
F@50 Scores. One of the main takeaways of this study is that different systems can have different factuality rates at the same level of abstractiveness. Previous authors have observed that XSum summaries are highly abstractive and less factual, and that CNN/DM summaries are at the opposite side of that spectrum. We confirm this; however, we add that we can bias the XSum model to create less abstractive summaries and the CNN/DM model to create more abstractive summaries, so that their abstractiveness becomes comparable, and the factuality rates still differ considerably: based on the trend line, the F@50 score of the XSum model is 56.7%, while the CNN/DM model's F@50 is 84.4%. MN-800 and MN-500 lie in the middle.
µFACTH Scores. The µFACTH scores adjust FACTH for abstractiveness. They penalize the CNN/DM model for its low abstractiveness and reward the XSum model for its high abstractiveness, bringing them closer together, compared to their more divergent FACTH scores. The µFACTH scores for MN-800 and MN-500 are also close (61.3% versus 59.6% for λ=none), as MN-800 is more factual but also less abstractive.
Summary Quality and Abstractiveness. Table 3 lists ROUGE-L scores for the different decodings, along with abstractiveness metrics, measured on the full test sets. ROUGE scores aim to measure summary quality by comparing the generated summaries with the reference summaries, while abstractiveness metrics measure overlap between the generated summaries and the input. Decodings without abstractiveness constraints replicate previous works' ROUGE scores (Lewis et al., 2020; Fabbri et al., 2019) (Appendix H). The λ4 constraint can dramatically increase abstractiveness while leaving ROUGE scores virtually unchanged. We also conduct a human evaluation of informativeness and coherence, comparing unconstrained summaries with summaries generated with the λ4 decoding constraint; the unconstrained decoding is preferred for XSum but the constrained decoding is preferred for CNN/DM, and results are mixed for Multi-News, see Appendix D. The density scores (Grusky et al., 2018) in the table have high correlation with the MINT scores.
Comparison Across Different Models
We also compare the abstractiveness-factuality tradeoffs of summarization models from the literature. We obtain outputs of four summarization models other than BART: BERTSUM (Liu and Lapata, 2019) is a transformer model in which only the encoder is pretrained; PGCONV (See et al., 2017) is a pointer-generator network; BOTTOMUP (Gehrmann et al., 2018) and ABSRL (Chen and Bansal, 2018) select source fragments to constrain an abstractive generation model. We obtain human factuality judgements of the five model outputs on 600 samples of CNN/DM and XSum, respectively, and release this as our MODELSFACT dataset; we apply automatic metrics (e.g., DAE) as well as our abstractiveness-adjusted variants (e.g., µDAE) to the full test sets. Table 4 shows the results. For CNN/DM, we find that the highly extractive model PGCONV receives the highest automatic and human factuality scores, while the abstractivenessadjusted variants favor BART or ABSRL, whose outputs represent better tradeoffs between abstractiveness and factuality. On XSum, BART's output is considerably more factual than BERTSUM's across all factuality metrics, while BART has only slightly lower abstractiveness; as a result, BART is also favored by all MINT-adjusted factuality metrics. Detailed results including additional factuality metrics are described in Appendix G.
The MINT-adjusted variants of factuality metrics put factuality rates into perspective. We encourage authors who compare factuality rates across summarization models to also compare MINT-adjusted variants (e.g., µDAE), to account for differing levels of abstractiveness.
Related Work
Abstractiveness-Factuality Tradeoff: Durmus et al. (2020) observe that abstractiveness at test time depends on the abstractiveness of the training data and that highly abstractive summaries tend to be less factual. We control for abstractiveness and see that factuality rates between different systems can vary widely at the same abstractiveness levels. Recently, Ladhak et al. (2022) presented an alternative framework to evaluate the faithfulness-extractiveness tradeoff; it requires training multiple models on subsets of the training data to measure the tradeoff, while we use constraints to analyze tradeoffs that a single model makes.

Increasing Abstractiveness: Kryściński et al. (2018) use policy gradient with a novelty reward to encourage abstraction in a pointer-generator (PG) (Gulcehre et al., 2016; See et al., 2017). Weber et al. (2018) penalize copying tokens during PG decoding. Our constraints apply to general sequence-to-sequence models and include nonlinear penalties. Song et al. (2020) control copying in training abstractive summarization models by masking the summary tokens with different probabilities, depending on whether they are seen in the input document or not. In contrast, our technique does not require retraining to obtain varying degrees of abstractiveness.
Conclusions
We presented new metrics and datasets for evaluating the relationship of abstractiveness and factuality. As part of our analysis, we presented abstractiveness constraints, which can bias a summarization model to increase or decrease the level of abstractiveness while generating summaries, using nonlinear penalties or rewards based on the length of summary fragments extracted from the source. Through automatic and human factuality evaluations, including 10.2k human factuality judgements of summaries with systematically varied abstractiveness, we shed light on how abstractiveness interacts with factuality, across multiple datasets and models. We proposed new metrics to measure the tradeoff, including F@50 and MINT-adjusted factuality rates, such as µDAE and µFactCC, and we established baselines for future research.
Limitations
The abstractiveness constraints we have presented can be used to increase or decrease the abstractiveness of the generated text. Dedicated code is needed to integrate such constraints into a decoder. The constraints are needed to obtain trend lines as in Figure 6, as well as the F@50 score. However, the MINT-adjusted factuality scores, such as µFactH, µDAE or µFactCC can be computed for any summarization system, without the need for implementing abstractiveness constraints, as we have done in Section 5.2.
Ethical Considerations
We have analyzed the factuality of generated text in relation to the abstractiveness of the source texts; we have also proposed new metrics that let researchers compare the factuality of different generative models. As such, we consider our work a contribution toward text generation methods that make fewer factual mistakes and become therefore more reliable and responsible. However, any advance in text generation methods can be used by bad actors to cheaply generate misleading or harmful texts.
We hired annotators on the Mechanical Turk platform to judge machine-generated summaries. Our first ethical consideration with respect to this data collection is fair and prompt pay for the work of the annotators. We describe in Appendix C that we paid all human subjects a fair average pay of $12.50 USD per hour, based on observed median time spent per HIT. As described (Section 3.1), we automatically approved the annotators' work promptly and paid bonuses as appropriate. The annotators' privacy and confidentiality were respected at all times.
A Measuring Abstractiveness with MINT
N-gram Overlap. Each pn, short for pn(x, y), is the n-gram precision of the n-grams in y with respect to x, i.e., the percentage of n-grams in y that are extracted from x.9 For highly abstractive outputs, higher-order n-gram precision can be zero, leading to an undefined or zero harmonic mean value. We prevent this by smoothing the n-gram counts from which the n-gram precisions are calculated, such that each n-gram count is the average of itself, the smoothed (n−1)-gram count, and the unsmoothed (n+1)-gram count. The smoothed 0-gram count is defined as the 1-gram count plus one. We chose this method for its simplicity and effectiveness; it is described as method 5 in Chen and Cherry (2014).
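A direct transcription of this smoothing rule is below. As is standard for method 5, it is applied to the matched n-gram counts; whether the paper also smooths the denominator counts is not stated, so that part is our assumption.

```python
def smooth_matches(matches):
    """Smooth matched n-gram counts (method 5 of Chen and Cherry, 2014):
    each count becomes the average of the smoothed (n-1)-gram count, itself,
    and the unsmoothed (n+1)-gram count; the smoothed 0-gram count is m1 + 1."""
    smoothed = [matches[0] + 1.0]   # m0' = m1 + 1
    padded = list(matches) + [0.0]  # treat the (N+1)-gram count as 0
    for n in range(len(matches)):
        smoothed.append((smoothed[n] + padded[n] + padded[n + 1]) / 3.0)
    return smoothed[1:]             # smoothed m1' ... mN'

print(smooth_matches([3, 1, 0, 0]))  # higher orders no longer collapse to zero
```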
Harmonic Mean. We use the harmonic mean, in analogy to the definition of the F 1 score, as it is a mean function designed to aggregate ratios with different denominators.
For a completely extractive summary that extracts sentences in the original order, the MINT score is 0. The score increases as the order of the extractive fragments is changed with respect to the input, as their lengths are decreased, and as new words and fragments are introduced that are not part of the input x. The use of the length-normalized LCS score (lcsr) is inspired by ROUGE-L; it is a useful addition to the n-gram precisions as it can detect the extraction of longer n-grams broken up by minor edits. As an example, consider the (x, y) pair shown in Figure 3. Only 4 of the 12 summary four-grams match the input, i.e., p4 = 33.3%, although very high overlap is apparent due to the fact that a 15-word fragment from the input was extracted with only the words "verdict" and "which" minimally changed by synonym substitution. The lcsr score reflects this and measures 12/15 = 80.0% overlap. On the other hand, the n-gram precisions used in the MINT score are valuable in detecting textual overlaps that are not part of the longest common subsequence.

9 MINT has elements of ROUGE (Lin, 2004) and BLEU (Papineni et al., 2002). We do not use the modified n-gram precisions, like BLEU does, because n-grams extracted multiple times from x should count as such every time.
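The lcsr component can be computed with the standard LCS dynamic program. Normalizing by the summary length is consistent with the 12/15 example above; the exact normalization is otherwise our reading of Section 2.1.

```python
def lcsr(summary_tokens, input_tokens):
    """Length-normalized longest-common-subsequence score of the summary
    with respect to the input (normalized by the summary length)."""
    m, n = len(summary_tokens), len(input_tokens)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if summary_tokens[i - 1] == input_tokens[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n] / m if m else 0.0
```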
B Details on the Abstractiveness Constraints
Log Space. We have described the abstractiveness constraints in probability space. In practice, we equivalently search for ŷ in log space, using log probabilities and the log of λh defined in Equation 1. It can be shown that log λh(|f|) = −|f|² / (1.20112 × h)².
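Since 1.20112 ≈ 1/√(ln 2), this is the same Gaussian-shaped discount as λh(|f|) = 2^(−(|f|/h)²), which equals exactly 0.5 at fragment length |f| = h (cf. Figure 4). A quick numerical check:

```python
import math

def lam(frag_len, h):
    """Discount for an extractive fragment, written from the stated log form."""
    return math.exp(-frag_len**2 / (1.20112 * h) ** 2)

print(round(lam(4, 4), 3))  # 0.5: a fragment of length h is discounted by half
print(round(lam(8, 4), 3))  # ~0.063: longer fragments are penalized sharply
```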
C Details on Our Mechanical Turk Setup
We provide additional details on the strategies we use to obtain high-quality judgements on Amazon Mechanical Turk. We give detailed instructions to the annotators, with definitions and examples of different factual errors (see Figure 7). We also add a request to write a short explanation when a sentence is judged as not factual.
Tasks with Known Answers. We add a number of tasks with known answers, enabling us to estimate the accuracy of workers who work on multiple of these.
Automatic Quality Checks. Workers who complete the tasks too quickly, write no or very short explanation texts or have low accuracy on the tasks with known answers are automatically removed from our worker pool. Their answers are replaced with new answers.
Bonus. We use a bonus incentive structure. Every worker who passes the automatic quality checks receives a bonus at the end.
Check Against Professional Annotators. We have seven sets of 150 automatically generated summaries each, which we had previously sent to professional news editors to annotate factuality. Those annotators rated the complete summaries with respect to the complete inputs; no sentences were preselected to simplify the task. We re-annotated these summary-article pairs using our Mechanical Turk setup, and the resulting per-set factuality rates correlated highly (r = .88) with those previously obtained from the professional annotators (p < .05).
As a further quality check, we sent one set of 600 summaries to Mechanical Turk twice, several weeks apart. The two factuality rates obtained for that same set were close: 91.2% and 92.0%.

Qualification Test. For all our evaluations on Mechanical Turk (see Section 3.1), we first set up a short qualification test that can be taken by any worker from a country whose main language is English, who has completed 100 or more HITs so far with an acceptance rate of 95% or higher. The qualification test consists of just three questions from our factual consistency setup, two of which must be answered correctly, along with an explanation text (5 words or more) to explain when "not factually consistent" was chosen. 53% of workers who start the test provide answers to all three questions, and 27.6% of these answer at least two correctly and provide a reasonable explanation text, i.e., only 14.6% of the test takers are granted the qualification.
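As a sanity check, the quoted qualification rate is simply the product of the two stage rates:

$$0.53 \times 0.276 \approx 0.146,$$

i.e., the 14.6% above, matching the roughly 15% pass rate mentioned in Section 3.1.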
The qualification enables workers to work on our factual consistency HITs as well as our HITs judging informativeness and coherence.
Fair Compensation. The factual consistency task pays $0.15 per HIT with a bonus of $0.05. It can be done quickly, given the fact that a single summary sentence is evaluated and the related sentences in the article are highlighted. The task of evaluating informativeness and coherence (see Appendix D) pays $0.50 per HIT with a bonus of $0.25, as more text is displayed, compared to the factuality task. These amount to an average pay of $12.50 per hour, including the bonus, based on median time spent per HIT. The bonus is paid to workers who spend at least 10 seconds per HIT, give short explanation texts for their decisions and maintain high accuracy on HITs with known answers.

Table 5: Human quality evaluation of summaries generated with no abstractiveness constraint ("off") versus λ4. We asked which summary is more informative or coherent, respectively. MN-800 stands for Multi-News with the input documents truncated to 800 words total (Section 5.1.1).
D Human Evaluation of Informativeness and Coherence
We conduct a human evaluation to determine the informativeness and coherence of the summaries generated with the λ 4 decoding constraint (Equation 1), which increases abstractiveness, as compared to not using any abstractiveness constraint. We use the same setup as for the factuality task, including a qualification test, three annotators per task and aggregation using MACE.
We use the following definitions of informativeness and coherence for the human evaluation:
• Informativeness: The more informative summary is better at expressing the main points of the news story. It contains information that is more relevant and important. It has fewer unimportant details. Its content is more similar to the human-written summary.
• Coherence: The more coherent summary has better structure and flow, is easier to follow. The facts are presented in a more logical order.
The results are shown in Table 5. For the CNN/DM model, the output without decoding constraints is the most extractive, and the raters preferred the more abstractive version generated with the decoding constraint, both for informativeness and coherence. For the XSum model, where the output with the decoding constraint disabled is already highly abstractive, the result is reversed. For Multi-News, the result is mixed: Raters found the output with no decoding constraints more informative, but less coherent.
E More On Automatic Factuality Metrics
When we apply FactCC to a summary, we apply it separately to each summary sentence and use the mean score per summary. For each sentence that we score with FactCC, we shorten the input document by selecting ten sentences with the highest cosine embedding similarity (Conneau et al., 2017), in order to fit the input to the length limits.
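A sketch of this per-sentence scoring loop follows. Here `sent_embed` and `score_fn` are hypothetical callables standing in for the sentence encoder and the FactCC classifier, respectively; neither is reproduced here.

```python
import numpy as np

def shorten_for_factcc(doc_sents, summary_sent, sent_embed, k=10):
    """Keep the k document sentences most cosine-similar to the summary sentence.
    `sent_embed(text) -> np.ndarray` is an assumed sentence encoder."""
    q = sent_embed(summary_sent)
    sims = []
    for s in doc_sents:
        e = sent_embed(s)
        sims.append(float(np.dot(e, q) / (np.linalg.norm(e) * np.linalg.norm(q))))
    top = sorted(range(len(doc_sents)), key=lambda i: sims[i], reverse=True)[:k]
    return " ".join(doc_sents[i] for i in sorted(top))

def summary_factcc(summary_sents, doc_sents, score_fn, sent_embed):
    """Mean per-sentence score; `score_fn(input_text, sentence) -> float in [0,1]`
    stands in for the FactCC classifier."""
    scores = [score_fn(shorten_for_factcc(doc_sents, s, sent_embed), s)
              for s in summary_sents]
    return sum(scores) / len(scores) if scores else 0.0
```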
In the following two appendix sections, we use not only DAE and FactCC, as described in the main text, but also two metrics based on question answering: FEQA (Durmus et al., 2020) and QAGS (Wang et al., 2020). FEQA generates questions from masked summary sentences whose masked entities are used as "gold" answers; these are compared to the answers obtained from a QA model on the input. In QAGS, a question generation model generates questions from the summary, a QA model answers these questions from both summary and input, and the similarity of the answer pairs is evaluated.
F Correlating Human and Automatic Factuality Judgements
G Comparison Across Different Models
Here we offer an extended description of our comparison of the abstractiveness-factuality tradeoffs of summarization models from the literature, including the use of additional automatic factuality metrics (see Appendix E). Table 7 shows human and automatic factuality scores, as well as MINT-adjusted versions of these scores. We observe that all factuality metrics favor the output of the PGCONV model on CNN/DM; however, its low abstractiveness indicates that its output falls into the "trivially factual" quadrant (Figure 2). The MINT-adjusted variants (shown in green) penalize such low abstractiveness, favoring the BART or ABSRL models instead, whose outputs represent better tradeoffs between abstractiveness and factuality. Human factuality raters (FACTH) rank ABSRL in fourth place, while FactCC, FEQA and QAGS rank it highly; we hypothesize that ABSRL makes factual errors that these measures cannot detect well. On XSum, BART's output is considerably more factual than BERTSUM's across all factuality metrics, while BART has only slightly lower abstractiveness; as a result, BART is also favored by all MINT-adjusted factuality metrics. BART's pretraining of both encoder and decoder may be contributing to its factuality, in accordance with Maynez et al. (2020). Note that for DAE, we apply the Ent-C model on CNN/DM output and the XSUM-HUMAN model on XSum output. Appendix H.2 shows ROUGE scores.
H ROUGE Scores
H.1 BART Models
The aim of this paper is not to improve ROUGE scores, but to gain insights about the tradeoff between abstractiveness and factuality. We do, however, stress that the BART models we use in our analysis are competitive with the state of the art. We list our ROUGE-1, ROUGE-2 and ROUGE-L F1 scores, as well as their averages; see the RL scores in Table 3 as well:
• For CNN/DM, our λ=none decoding has 44.1/21.2/41.0 with an average of 35.4, the same as the average of 35.4 in Lewis et al. (2020).
• For XSum, our λ=none decoding has 45.3/21.9/36.8 with an average of 34.7, compared to an average of 34.9 in Lewis et al. (2020).
• For Multi-News, our MN-800 λ=none decoding has 50.2/20.5/45.8 with an average of 38.8, compared to improved ROUGE F1 results of 44.5/16.0/40.3 with an average of 33.6 by Fabbri (personal communication) for Fabbri et al. (2019).
H.2 Comparing Summarization Models
To complement our comparison of different models in Section 5.2, we list the ROUGE-L F1 scores of the five models in Table 8.
I Additional Experimental Details
We used AWS p3.8x and p3.16x EC2 machines for all our experiments, except we ran FEQA on the Multi-News summaries on a p3dn.24xlarge machine, as it required more memory. The BART model has 406,290,432 parameters. Fine-tuning BART on the Multi-News training set took about 2.5 hours on 4 GPUs; we fine-tuned for 5 epochs following instructions on the fairseq BART webpage, without further hyperparameter search. For CNN/DM and XSum we used the provided checkpoints. 10 The minimum and maximum length for Multi-News decoding was determined by the lengths of the training reference summaries.
Figure 2: Four extremes at the abstractiveness-factuality spectrum.
Figure 6: Human factuality judgements (FACTH) for different degrees of abstractiveness (MINT). Each color represents a BART model trained on a particular dataset, decoded with varying decoding constraints (Sec. 2.2); large outlined symbols mean no constraints.
Figure 7: Instructions for the factuality annotation task on Amazon Mechanical Turk, as well as the summary and part of the article text shown to the worker.
Figure 4: λh defines discounts for extractive fragments based on their lengths. Smaller h values lead to more abstractive summaries. [Plot residue removed: curves of λh (y-axis, 0.00-0.75) over extractive fragment lengths 1-7 (x-axis), for h=2 and h=4.]
Table 1: Abstractiveness and factuality on 600 test samples per setting. The 17 MINT and FACTH numbers are as shown in Figure 6; we add µFACTH and F@50.

Dataset  λ      MINT   FACTH  µFACTH  F@50
CNN/DM   1/λ2    9.7    94.8   66.5   84.4
CNN/DM   none   17.6    91.2   66.7
CNN/DM   λ4     43.5    87.0   72.5
CNN/DM   λ2     70.8    76.7   74.7
MN-800   1/λ2   26.8    82.2   63.7   68.9
MN-800   none   37.0    73.5   61.3
MN-800   λ4     56.1    68.5   64.4
MN-800   λ2     76.2    53.5   61.1
MN-500   1/λ2   33.6    73.5   60.2   64.4
MN-500   none   45.9    66.5   59.6
MN-500   λ4     62.3    59.7   60.6
MN-500   λ2     79.7    46.5   57.6
XSum     1/λ1   55.8    53.7   54.4   56.7
XSum     1/λ2   74.5    51.7   59.3
XSum     none   80.8    45.3   57.2
XSum     λ4     84.0    43.7   57.1
XSum     λ2     88.3    40.7   56.5
Table 2: Train/valid/test split on public datasets.

5 Experiments

5.1 Comparison Across Datasets Using NAC

Datasets. We use CNN/DM (Hermann et al., 2015), XSum (Narayan et al., 2018), and Multi-News (Fabbri et al., 2019), all of which contain English-only text. CNN/DM contains news articles from CNN and DailyMail paired with bullet point summaries. XSum contains articles from BBC News, using each article's first sentence as summary.6 In Multi-News, each summary is written by a professional editor and paired with a cluster of news articles. For all three public datasets, we use the provided training/validation/test split. The sizes of the three datasets are listed in Table 2. From each of the three datasets, we use 600 samples to compare human and automatic factuality judgements.7

5.1.1 Setup

We use the BART sequence-to-sequence model, which was pretrained on 160GB of text and gives competitive results on CNN/DM and XSum. Our models use the provided model checkpoints for the CNN/DM and the XSum datasets as well as the recommended decoding settings. For Multi-News (MN), we train a model on the training set, starting from the bart.large pretrained model.8 For Multi-News, we truncate the input documents per cluster so that their combined length does not exceed N words, following Fabbri et al. (2019). We train models with N = 800 and N = 500, called MN-800 and MN-500, respectively. We measure the MINT scores for the reference summaries in these datasets; these can be compared to the MINT scores obtained in decoding (Section 5.1.2). The test set references for MN-500 have a MINT score of 78.2%, compared to 72.8% for MN-800. MINT is higher for MN-500 since the shorter truncation removes article content that could otherwise overlap with the summaries. The MINT scores for the CNN/DM and XSum references are 59.6% and 87.8%, respectively; XSum is the most abstractive dataset.

6 Following , we reinsert the first sentences whenever we measure factuality of XSum summaries on AMT or with automatic metrics.
7 For Multi-News and XSum, we take the first 600 samples per test set. For CNN/DM, we take the first 300 and the last 300 test samples, from CNN and Daily Mail, respectively.
8 We train for five epochs (learning rate: 2e-5) and limit output to 50 to 300 tokens.
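The input truncation can be sketched as follows. Note that the per-document split of the word budget is our assumption; the exact truncation scheme follows Fabbri et al. (2019).

```python
def truncate_cluster(docs, n_words=800):
    """Truncate a cluster of documents so that their combined length is at
    most n_words, giving each document an equal share of the budget.
    (The equal split is an assumption; see Fabbri et al., 2019.)"""
    if not docs:
        return []
    share = max(1, n_words // len(docs))
    return [" ".join(doc.split()[:share]) for doc in docs]
```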
Table 3: Impact of λ on ROUGE-L F1 (RL) and abstractiveness metrics on the full test sets. p3, p4, lcsr are component scores in MINT (Sec. 2.1); density is the average length of extracted fragments (Grusky et al., 2018). ROUGE measures overlap with reference summaries; abstractiveness metrics measure input overlap.

Dataset  λ      RL    MINT  p3    p4    lcsr  density
CNN/DM   1/λ2   37.9   9.0  89.0  84.7  93.1  28.9
CNN/DM   none   41.0  16.8  79.5  72.1  89.4  15.4
CNN/DM   λ4     41.5  43.7  50.0  35.1  77.8   4.6
CNN/DM   λ2     39.3  70.3  26.4  12.6  67.4   2.2
MN-800   1/λ2   44.8  26.6  71.1  64.1  69.5  20.7
MN-800   none   45.8  37.1  58.9  50.1  63.3  13.4
MN-800   λ4     45.8  56.3  38.7  27.0  51.9   4.3
MN-800   λ2     44.0  76.4  20.7  10.4  41.6   2.0
MN-500   1/λ2   44.6  34.1  63.7  56.4  61.0  17.6
MN-500   none   45.5  45.9  50.2  41.4  54.2  10.6
MN-500   λ4     45.1  62.2  33.4  22.7  44.8   3.6
MN-500   λ2     43.3  79.8  17.8   8.8  35.9   1.8
XSum     1/λ1   30.8  53.8  41.7  32.3  66.9   5.8
XSum     1/λ2   36.0  73.9  23.0  14.1  57.7   3.0
XSum     none   36.8  80.2  17.6   9.2  54.5   2.4
XSum     λ4     36.8  83.6  14.6   6.6  52.8   2.2
XSum     λ2     36.3  88.1  10.8   4.1  49.8   1.9
Table 4: Abstractiveness (MINT) and factuality of different models. For each factuality metric, we first list its MINT-adjusted variant in green. Example: BART's µFACTH is 66.4, while the unadjusted FACTH is 91.2. All numbers are percentage scores ∈ [0, 100].
References

Alexander Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir Radev. 2019. Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1074-1084, Florence, Italy. Association for Computational Linguistics.

Tobias Falke, Leonardo F.R. Ribeiro, Prasetya Ajie Utama, Ido Dagan, and Iryna Gurevych. 2020. Ranking generated summaries by correctness: An interesting but challenging application for natural language inference. In ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, pages 2214-2220. Association for Computational Linguistics (ACL).

Lisa Fan, Dong Yu, and Lu Wang. 2018. Robust Neural Abstractive Summarization Systems and Evaluation against Adversarial Information. In NIPS Interpretability and Robustness for Audio, Speech and Language Workshop.

Saadia Gabriel, Asli Celikyilmaz, Rahul Jha, Yejin Choi, and Jianfeng Gao. 2020. Go Figure! A Meta Evaluation of Factuality in Summarization. Technical report.

Kavita A. Ganesan, ChengXiang Zhai, and Jiawei Han. 2010. Opinosis: A graph based approach to abstractive summarization of highly redundant opinions. In International Conference on Computational Linguistics.

Shen Gao, Xiuying Chen, Piji Li, Zhangming Chan, Dongyan Zhao, and Rui Yan. 2019. How to write summaries with patterns? Learning towards abstractive summarization through prototype editing. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3741-3751, Hong Kong, China. Association for Computational Linguistics.

Sebastian Gehrmann, Yuntian Deng, and Alexander M. Rush. 2018. Bottom-Up Abstractive Summarization. In Proc. of EMNLP.

Sebastian Gehrmann, Zachary Ziegler, and Alexander Rush. 2019. Generating abstractive summaries with finetuned language models. In Proceedings of the 12th International Conference on Natural Language Generation, pages 516-522, Tokyo, Japan. Association for Computational Linguistics.

Pierre-Etienne Genest and Guy Lapalme. 2012. Fully abstractive approach to guided summarization. In Annual Meeting of the Association for Computational Linguistics.

Ben Goodrich, Vinay Rao, Peter J. Liu, and Mohammad Saleh. 2019. Assessing The Factual Accuracy of Generated Text. In International Conference on Knowledge Discovery and Data Mining (KDD).

Tanya Goyal and Greg Durrett. 2020. Evaluating factuality in generation with dependency-level entailment. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3592-3603, Online. Association for Computational Linguistics.

Tanya Goyal and Greg Durrett. 2021. Annotating and modeling fine-grained factuality in summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1449-1462, Online. Association for Computational Linguistics.

Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 708-719, New Orleans, Louisiana. Association for Computational Linguistics.

Caglar Gulcehre, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. 2016. Pointing the unknown words. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 140-149.

Karl Moritz Hermann, Tomáš Kočiský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1, pages 1693-1701.

Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard Hovy. 2013. Learning whom to trust with MACE. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1120-1130, Atlanta, Georgia. Association for Computational Linguistics.

Oskar Kohonen, Sami Virpioja, and Krista Lagus. 2010. Semi-supervised learning of concatenative morphology. In Proceedings of the 11th Meeting of the ACL Special Interest Group on Computational Morphology and Phonology, pages 78-86, Uppsala, Sweden. Association for Computational Linguistics.

Wojciech Kryscinski, Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Neural text summarization: A critical evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 540-551, Hong Kong, China. Association for Computational Linguistics.
Table 5 (data; see caption in Appendix C):

            CNN/DM       MN-800       XSum
            inf.  coh.   inf.  coh.   inf.  coh.
prefer off  36.5  36.7   39.8  35.8   18.8  18.7
prefer λ4   46.5  39.2   34.7  39.8   16.5  16.3
both equal  17.0  24.2   25.5  24.3   64.7  65.0
Table 6: Pearson correlations to human factuality judgements on the MODELSFACT dataset. The result with the † symbol is not significant.
Table 6 shows correlations of the human judgements with different automatic metrics on the MODELSFACT dataset, complementing earlier studies (Gabriel et al., 2020; Pagnoni et al., 2021). We compute correlations at the level of individual summaries. To make meaningful comparisons between the human and the automatic scores, we apply the automatic metrics here to the single randomly selected sentence per summary that the human annotators judged. Overall, we observe here that DAE has the highest correlations with human judgements.
Table 7: Abstractiveness (MINT) and factuality of different summarization models. For each factuality metric, we first list its MINT-adjusted variant in green. Example: BART's µFACTH is 66.4, while the unadjusted FACTH is 91.2. All numbers are percentage scores ∈ [0, 100].
Table 8: ROUGE-L F1 scores for the models compared in Section 5.2.
We smooth all n-gram counts (Chen and Cherry, 2014) to avoid undefined or zero harmonic mean values in highly abstractive summaries. See Appendix A for details.

Additionally, the exponent used in |f|² and h² could be configured, but we keep it at 2 in our experiments. A larger exponent would result in a steeper descent around h.

We measure cosine similarity of sentence encodings computed by the Universal Sentence Encoder (Cer et al., 2018).

See https://github.com/pytorch/fairseq/tree/master/examples/bart.
Shuyang Cao and Lu Wang. 2021. CLIFF: Contrastive learning for improving faithfulness and factuality in abstractive summarization. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 6633-6649, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Ziqiang Cao, Furu Wei, Wenjie Li, and Sujian Li. 2017. Faithful to the original: Fact aware neural abstractive summarization. CoRR, abs/1711.04434.

Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal Sentence Encoder. In EMNLP 2018 - Conference on Empirical Methods in Natural Language Processing: System Demonstrations, Proceedings, pages 169-174.

Boxing Chen and Colin Cherry. 2014. A systematic comparison of smoothing techniques for sentence-level BLEU. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 362-367, Baltimore, Maryland, USA. Association for Computational Linguistics.

Yen-Chun Chen and Mohit Bansal. 2018. Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting. In Proc. of ACL.

Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. 2017. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 670-680, Copenhagen, Denmark. Association for Computational Linguistics.

Esin Durmus, He He, and Mona Diab. 2020. FEQA: A question answering evaluation framework for faithfulness assessment in abstractive summarization. In Association for Computational Linguistics (ACL).

H. P. Edmundson. 1969. New methods in automatic extracting. J. ACM, 16:264-285.

Günes Erkan and Dragomir R. Radev. 2004. LexRank: Graph-based lexical centrality as salience in text summarization. J. Artif. Int. Res., 22(1):457-479.

Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9332-9346.

Wojciech Kryściński, Romain Paulus, Caiming Xiong, and Richard Socher. 2018. Improving abstraction in text summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1808-1817, Brussels, Belgium. Association for Computational Linguistics.

Faisal Ladhak, Esin Durmus, He He, Claire Cardie, and Kathleen McKeown. 2022. Faithful or extractive? On mitigating the faithfulness-abstractiveness trade-off in abstractive summarization. In Proc. of ACL.

Logan Lebanoff, John Muchovej, Franck Dernoncourt, Doo Soon Kim, Seokhwan Kim, Walter Chang, and Fei Liu. 2019. Analyzing sentence fusion in abstractive summarization. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 104-110, Hong Kong, China. Association for Computational Linguistics.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.

Xiangci Li, Hairong Liu, and Liang Huang. 2020. Context-aware stand-alone neural spelling correction. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 407-414, Online. Association for Computational Linguistics.

Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.

Yang Liu and Mirella Lapata. 2019. Text Summarization with Pretrained Encoders. In Proc. of EMNLP.

Hans Peter Luhn. 1958. The automatic creation of literature abstracts. IBM J. Res. Dev., 2:159-165.

Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan McDonald. 2020. On faithfulness and factuality in abstractive summarization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1906-1919, Online. Association for Computational Linguistics.

Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! Topic-aware convolutional neural networks for extreme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1797-1807, Brussels, Belgium. Association for Computational Linguistics.

Joel Larocca Neto, Alex Alves Freitas, and Celso A. A. Kaestner. 2002. Automatic text summarization using a machine learning approach. In Proceedings of the 16th Brazilian Symposium on Artificial Intelligence: Advances in Artificial Intelligence, SBIA '02, pages 205-215, Berlin, Heidelberg. Springer-Verlag.

Juri Opitz and Anette Frank. 2021. Towards a decomposable metric for explainable evaluation of text generation from AMR. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1504-1518, Online. Association for Computational Linguistics.

Artidoro Pagnoni, Vidhisha Balachandran, and Yulia Tsvetkov. 2021. Understanding factuality in abstractive summarization with FRANK: A benchmark for factuality metrics. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4812-4829, Online. Association for Computational Linguistics.

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.

Thierry Poibeau and Horacio Saggion. 2012. Automatic Text Summarization: Past, Present and Future. In Multi-source, Multilingual Information Extraction and Summarization, pages 3-13.

Svenja Preuß, Luna Pia Bley, Tabea Bayha, Vivien Dehne, Alessa Jordan, Sophie Reimann, Fina Roberto, Josephine Romy Zahm, Hanna Siewerts, Dirk Labudde, and Michael Spranger. 2021. Automatically identifying online grooming chats using CNN-based feature extraction. In Proceedings of the 17th Conference on Natural Language Processing (KONVENS 2021), pages 137-146, Düsseldorf, Germany. KONVENS 2021 Organizers.

Dragomir R. Radev and Kathleen R. McKeown. 1998. Generating natural language summaries from multiple on-line sources. Computational Linguistics, 24(3):469-500.

Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.

Leonardo F. R. Ribeiro, Mengwen Liu, Iryna Gurevych, Markus Dreyer, and Mohit Bansal. 2022. FactGraph: Evaluating factuality in summarization with semantic graph representations. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 3238-3253, Seattle, United States. Association for Computational Linguistics.

Horacio Saggion and Guy Lapalme. 2002. Generating indicative-informative summaries with SumUM. Computational Linguistics, 28:497-526.

Thomas Scialom, Sylvain Lamprier, Benjamin Piwowarski, and Jacopo Staiano. 2019. Answers unite! Unsupervised metrics for reinforced summarization models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3246-3256, Hong Kong, China. Association for Computational Linguistics.

Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073-1083.

Kaiqiang Song, Bingqing Wang, Zhe Feng, Ren Liu, and Fei Liu. 2020. Controlling the amount of verbatim copying in abstractive summarization. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34(05), pages 8902-8909.

Alex Wang, Kyunghyun Cho, and Mike Lewis. 2020. Asking and answering questions to evaluate the factual consistency of summaries. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5008-5020.

Noah Weber, Leena Shekhar, Niranjan Balasubramanian, and Kyunghyun Cho. 2018. Controlling decoding for more abstractive summaries with copy-based networks. arXiv preprint arXiv:1803.07038.

Kam-Fai Wong, Mingli Wu, and Wenjie Li. 2008. Extractive summarization using supervised and semi-supervised learning. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 985-992, Manchester, UK. Coling 2008 Organizing Committee.

Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, F. Roesner, and Yejin Choi. 2019. Defending against neural fake news. In NeurIPS.

Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Peter Liu. 2020. PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 11328-11339. PMLR.

Yuhao Zhang, Derek Merck, Emily Bao Tsai, Christopher D. Manning, and Curtis P. Langlotz. 2019. Optimizing the Factual Correctness of a Summary: A Study of Summarizing Radiology Reports.
| [
"https://github.com/pytorch/fairseq/"
] |
[
"On the ubiquity of duopolies in constant sum congestion games*",
"On the ubiquity of duopolies in constant sum congestion games*"
] | [
"Shiksha Singhal [email protected] \nIEOR\nIndian Institute of Technology\nBombayIndia\n",
"Veeraruna Kavitha [email protected] \nIEOR\nIndian Institute of Technology\nBombayIndia\n",
"Jayakrishnan Nair [email protected] \nIndian Institute of Technology\nBombayIndia\n"
] | [
"IEOR\nIndian Institute of Technology\nBombayIndia",
"IEOR\nIndian Institute of Technology\nBombayIndia",
"Indian Institute of Technology\nBombayIndia"
] | [] | We analyse a coalition formation game between strategic service providers of a congestible service. The key novelty of our formulation is that it is a constant sum game, i.e., the total payoff across all service providers (or coalitions of providers) is fixed, and dictated by the size of the market. The game thus captures the tension between resource pooling (to benefit from the resulting statistical economies of scale) and competition between coalitions over market share. In a departure from the prior literature on resource pooling for congestible services, we show that the grand coalition is in general not stable, once we allow for competition over market share. In fact, under classical notions of stability (defined via blocking by any coalition), we show that no partition is stable. This motivates us to introduce more restricted (and relevant) notions of blocking; interestingly, we find that the stable configurations under these novel notions of stability are duopolies, where the dominant coalition exploits its economies of scale to corner a disproportionate market share. Furthermore, we completely characterise the stable duopolies in heavy and light traffic regimes. | 10.48550/arxiv.2304.12902 | [
"https://export.arxiv.org/pdf/2304.12902v1.pdf"
] | 258,309,157 | 2304.12902 | 07295d27c789dc8bac965d7926e2b600491c66fe |
On the ubiquity of duopolies in constant sum congestion games*
Shiksha Singhal [email protected]
IEOR
Indian Institute of Technology
BombayIndia
Veeraruna Kavitha [email protected]
IEOR
Indian Institute of Technology
BombayIndia
Jayakrishnan Nair [email protected]
Indian Institute of Technology
BombayIndia
On the ubiquity of duopolies in constant sum congestion games*
Submitted to Operations Research.
Subject Classification: Games: Cooperative; Queues: Markovian. Area of Review: Stochastic Models.
Key words: Erlang-B loss systems; coalition formation; partition form game; constant sum game
We analyse a coalition formation game between strategic service providers of a congestible service. The key novelty of our formulation is that it is a constant sum game, i.e., the total payoff across all service providers (or coalitions of providers) is fixed, and dictated by the size of the market. The game thus captures the tension between resource pooling (to benefit from the resulting statistical economies of scale) and competition between coalitions over market share. In a departure from the prior literature on resource pooling for congestible services, we show that the grand coalition is in general not stable, once we allow for competition over market share. In fact, under classical notions of stability (defined via blocking by any coalition), we show that no partition is stable. This motivates us to introduce more restricted (and relevant) notions of blocking; interestingly, we find that the stable configurations under these novel notions of stability are duopolies, where the dominant coalition exploits its economies of scale to corner a disproportionate market share. Furthermore, we completely characterise the stable duopolies in heavy and light traffic regimes.
Introduction
Resource sharing is an efficient way of reducing congestion and uncertainty in service industries.
It refers to an arrangement where service resources are pooled and used jointly by a group (a.k.a., coalition) of providers, instead of each provider operating alone using its own resources. Naturally, such a coalition would be sustainable only if the participating providers obtain higher payoffs than they would have obtained otherwise. The key driver of coalition formation in congestion prone service systems is the statistical economies of scale that emerge from the pooling of service resources-this allows the coalition to offer a better quality of service to its customers, and/or to attract more customers to its service.
Not surprisingly, there is a considerable literature (for example, see Karsten et al. (2015) and the references therein) that analyses resource pooling between independent providers of congestible services via a cooperative game theoretic approach. In these papers, each provider is modeled as a queueing system, with its own dedicated customer base, that generates service requests according to a certain arrival process. The payoff of each service provider is in turn determined by the quality of service it is able to provide to its (dedicated) customer base. In such a setting, the statistical economies of scale from resource pooling typically drive the service providers to pool all their servers together to form a grand coalition, which generates the greatest aggregate payoff across all coalitional arrangements. Naturally, the resulting aggregate payoff must be divided between the providers in a stable manner, i.e., in such a way that no subset of providers has an incentive to 'break away' from the grand coalition. Such stable payoff allocations have been demonstrated in a wide range of settings, including single/multiple server environments, and loss/queue-based environments (see Karsten et al. (2014, 2015) and the references therein). (This paper is a significant extension of an earlier version; it includes an impossibility result under classical notions of stability (Section 3), an analysis of stable configurations under an extension of the Shapley value for partition form games (Section 4), a complete characterisation of stable configurations in heavy and light traffic regimes (Section 5), and a comprehensive numerical case study (Section 6).)
To summarize, the literature on coalition formation between providers of congestible services suggests that a stable grand coalition would emerge from the strategic interaction. However, a crucial aspect the preceding literature fails to capture is user churn. That is, customers can switch service providers, if offered superior service quality elsewhere. This aspect introduces competition between the service providers (or coalitions of service providers) over market share, and turns the game into a partition form game (described below). To the best of our knowledge, the interplay between resource pooling among service providers (aided by the associated economies of scale) and the competition between them, in the context of congestible services, has not been explored in the literature. This paper seeks to fill this gap. This paper also contributes to the theory of coalition formation games in terms of new notions of stability. In particular, we focus on partition form games; the main ingredients of such games are: a partition (an arrangement of players into disjoint coalitions), the worth of each coalition (which, crucially, also depends on the partition), and the anticipation rules by which a blocking or opposing coalition estimates its new worth (depending upon the anticipated retaliation of the opponents). In such games, the classical notion of stability declares a partition to be stable if it is not blocked by any coalition (Aumann (1961), Narahari (2014)); a coalition blocks a partition if it anticipates greater worth in the new arrangement. However, some case studies may have no stable partitions under such classical notions (e.g., the game studied in Shiksha et al. (2021), and the market-size driven coalition formation game of the present paper). This necessitates a deeper study of such scenarios, possibly using new, more relevant notions of stability. In this paper, we define novel notions of stability by suitably restricting the set of candidate blocking coalitions. Indeed, in practice, rearrangements in the marketplace typically arise from mergers between, or the breaking up of, existing corporations; our new notions of stability restrict the focus only on such tensions in the marketplace.
In this paper, we analyse a coalition formation game between a collection of service providers, each of which is modelled as an Erlang-B loss system. A key aspect of our model is that the total market size (captured via the aggregate arrival rate of customer requests) is fixed exogenously, and providers (or coalitions of providers) compete for market share; this leads to a constant sum, partition form game. These aspects, as we show, dramatically alter the outcome of the strategic interaction between providers. Interestingly, we find that under classical notions of stability, no arrangement of service providers into coalitions is stable, no matter how the payoff of each coalition is distributed across its members. However, we demonstrate stable partitions when blocking coalitions are restricted to mergers and splits of the existing coalitions. Under our new notions of stability (we define two new notions, that differ on how a blocking coalition estimates its worth), the grand coalition is not stable, except in a very specific corner case. Instead, the predominantly stable configurations are duopolies, with the larger coalition exploiting economies of scale to corner a disproportionate portion of the market size. Our work also highlights several subtleties relating to different natural notions of stability in this context, the way the payoff of each coalition is divided between its members, and the degree of congestion in the system.
Our contributions
• We formally define a constant sum coalition formation game between strategic service providers of a congestible service (see Section 2). This model is the first, to the best of our knowledge, to capture the interplay between resource pooling and competition over market share.
• Under the classical notion of stability for this partition form game model (inspired by Aumann (1961)), which we refer to as General Blocking-Perfect Assessment, we show that no configuration is stable (see Section 3). (A configuration specifies a partition of the set of providers into coalitions, and also the allocation of the total payoff of each coalition among its members.) This is because of the vast (specifically, all possible) range of deviations that can challenge any given configuration.
• In view of this impossibility result, we define two novel restricted notions of stability (see Section 4), where only coalitions arising from mergers or splits of existing coalitions can challenge the status quo. The two notions differ with respect to the precision with which the coalition that seeks to 'break' from the prevailing configuration can estimate the benefit from doing so. Interestingly, we show that our restricted notions of stability do admit stable configurations.
Moreover, these stable configurations involve duopolies, i.e., two competing coalitions (except for a certain corner case where the grand coalition is also stable). Intuitively, configurations involving three or more coalitions are unstable because economies of scale incentivize mergers of two or more (but not all) coalitions. On the other hand, the constant sum nature of the game dis-incentivizes the formation of a grand coalition (except in the corner case mentioned above).
• Finally, we explore the impact of the overall congestion level on the stable duopolies, by analysing light and heavy traffic regimes (see Section 5). All duopolies are stable in heavy traffic, whereas only duopolies with nearly matched service capacities are stable in light traffic.
Related Literature
This paper is related to two distinct strands of literature: (i) the literature on coalition formation for resource pooling in queueing networks, and (ii) the literature on partition form games.
Resource pooling in queueing networks: This literature is quite vast, and we only provide a brief survey here; a comprehensive review can be found in Karsten et al. (2015). One line of this literature models each coalition as a single server queue. The service rate of each coalition is either assumed to be optimized by the coalition itself (see, for example, González, P. et al. (2004), García-Sanz et al. (2008), Yu et al. (2015)), or simply taken to be the sum of the intrinsic service rates of the members (see, for example, Anily et al. (2010), Timmer et al. (2010), Anily et al. (2011, 2014)).
Another line of literature treats each coalition as a multi-server loss system: Karsten et al. (2012) considers the case where the number of servers with each player is fixed a priori, while Özen et al. (2011) and Karsten et al. (2014) consider the case where a coalition optimizes the number of servers it operates. Finally, Karsten et al. (2015) analyses the setting where each coalition is an M/M/s queue (Erlang C); they consider both the above mentioned models for the service capacity of a coalition.
All the above mentioned papers assume that each service provider has a dedicated customer base (modeled via an exogenously determined arrival rate of service requests). From a game theoretic standpoint, this simplification ensures that the worth/utility of each coalition depends only on the members of that coalition. In contrast, in the present paper, we explicitly model user churn, which induces competition between coalitions, and turns the game into a partition form game, wherein the worth/utility of a coalition also depends on the arrangement of players outside that coalition.
Partition form games: The earliest work in this area can be found in Aumann (1961). The authors give a general definition of cooperative games which is applicable to both characteristic and partition form games (without using these names). The term "partition form game" was first coined in Thrall et al. (1963), where the authors further develop the theory of this class of games. Aumann et al. (1974) extend various existing stability notions for characteristic form games to partition form games.
The majority of the literature on cooperative games deals with the stability of the grand coalition in characteristic form games. In contrast, there is only a limited literature on partition form games. Hafalir (2007) established the conditions under which the grand coalition is stable for convex partition form games. The authors in Saad, W. et al. (2011) (spectrum sensing and access) and Shiksha et al. (2021) (Kelly's mechanism) show that certain finer partitions other than the grand coalition can be stable against unilateral deviations for partition form games, while the authors in Bloch, F. (1996) and Yi, S.S. (1997) show the same for the classical notions of stability against coalitional deviations. The authors in Shiksha et al. (2021) also study stability against coalitional deviations to show that the grand coalition is stable when players are significantly asymmetric, while no partition is stable when the players are identical. Finally, Ray, D. et al. (1999) considers a dynamic coalition formation game and shows that finer partitions can emerge at the sub-game perfect equilibrium.
Model and Preliminaries
In this section, we describe our system model for coalition formation between strategic service providers, characterize the behavior of the customer base in response to coalition formation between service providers, and introduce some background.

System model

Consider a system with a set N = {1, · · · , n} of independent service providers (a.k.a., agents), with provider i having N_i servers. Without loss of generality, we assume N_i ≥ N_{i+1} for 1 ≤ i ≤ n − 1. All servers are identical, and assumed to have unit speed, without loss of generality. The providers serve a customer base that generates service requests as per a Poisson process of rate Λ. Job sizes (a.k.a., service requirements) are i.i.d., with J denoting a generic job size, and E[J] = 1/µ. Service providers are strategic, and can form coalitions with other service providers to enhance their rewards. Formally, such coalition formation between the service providers induces a partition P = {C_1, C_2, · · · , C_k} of N, where ∪_{i=1}^{k} C_i = N and C_i ∩ C_j = ∅ for all i ≠ j.
We refer to such a partition with k coalitions as a k-partition. (Naturally, the baseline scenario where each service provider operates independently corresponds to an n-partition.)
In response to a partition P induced by coalition formation between service providers, the arrival process of customer requests gets split across the k coalitions in P, with the arrival process seen by coalition C being a Poisson process of rate λ^P_C, where ∑_{C∈P} λ^P_C = Λ. (We characterize the split (λ^P_C, C ∈ P) as a Wardrop equilibrium; details below.) Each coalition C operates as an M/M/N_C/N_C (Erlang-B) loss system, with N_C = ∑_{j∈C} N_j parallel servers, and arrival rate λ^P_C.
This means jobs arriving into coalition C that find a free server upon arrival begin service immediately, while those that arrive when all N_C servers are busy get dropped (lost). Given the well known insensitivity property of the Erlang-B system, the steady state blocking probability associated with coalition C (the long run fraction of jobs arriving into coalition C that get dropped), denoted B^P_C, is given by the Erlang-B formula:
\[ B^P_C = B(N_C, a^P_C), \quad \text{where } a^P_C := \frac{\lambda^P_C}{\mu} \ \text{and} \ B(M, a) = \frac{a^M / M!}{\sum_{j=0}^{M} a^j / j!}. \tag{1} \]
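For concreteness, (1) is commonly evaluated via the standard numerically stable recursion rather than by summing factorials directly. The following is a minimal Python sketch (the function name erlang_b is ours, introduced only for illustration):

def erlang_b(M: int, a: float) -> float:
    # Blocking probability B(M, a) of an Erlang-B loss system with M
    # servers and offered load a = lambda/mu, via the stable recursion
    # B(0, a) = 1,  B(m, a) = a*B(m-1, a) / (m + a*B(m-1, a)).
    b = 1.0
    for m in range(1, M + 1):
        b = a * b / (m + a * b)
    return b

# Example: a 10-server coalition facing offered load a = 8 blocks
# roughly 12% of arriving jobs: erlang_b(10, 8.0) is about 0.12.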
User behavior: Wardrop equilibrium
Next, we define the behavior of the customer base in response to coalition formation across service providers, via the split (λ P C , C ∈ P) of the aggregate arrival process of service requests across coalitions. This split is characterized as a Wardrop equilibrium (or WE; see Correa et al. (2010)). In the context of our model, we define the WE split of the arrival process of service requests across coalitions, such that the steady state blocking probability associated with each coalition is equal. Note that since the blocking probability associated with an 'unused' coalition would be zero, it follows that all coalitions would see a strictly positive arrival rate. Thus, the WE (if it exists) is characterized by a vector of arrival rates (λ P C , C ∈ P) satisfying
\[ B^P_C = B\!\left(N_C, \frac{\lambda^P_C}{\mu}\right) = B^* \quad \forall\, C \in P, \qquad \text{and} \qquad \sum_{C\in P} \lambda^P_C = \Lambda, \tag{2} \]
where B^* is the common steady state blocking probability for each coalition. For any given partition P, the following theorem establishes the existence and uniqueness of the WE, along with some useful properties (proof in Appendix B).
Theorem 1. Given any partition P between the service providers and market size Λ, there is a unique Wardrop equilibrium (λ^P_C, C ∈ P), where λ^P_C > 0 for all C ∈ P, that satisfies (2). Additionally, the following properties hold:
(i) For each C ∈ P, λ^P_C is a strictly increasing function of the total arrival rate Λ.
(ii) If the partition P′ is formed by merging two coalitions C_i and C_j in partition P, where C_i ∪ C_j ≠ N (with all other coalitions in P remaining intact), then λ^{P′}_{C_i ∪ C_j} > λ^P_{C_i} + λ^P_{C_j}.
(iii) If P = {C_1, C_2}, with N_{C_1} > N_{C_2}, then λ^P_{C_1}/N_{C_1} > Λ/N > λ^P_{C_2}/N_{C_2}, where N = ∑_{i∈N} N_i.
Aside from asserting the uniqueness and strict positivity of the Wardrop split, Theorem 1 also states that the equilibrium arrival rate of each coalition is an increasing function of the aggregate arrival rate Λ; see Statement (i). Additionally, Statement (ii) demonstrates the statistical economies of scale due to a merger between coalitions: the merged entity is able to attract an arrival rate that exceeds the sum of the arrival rates seen by the two coalitions pre-merger. Finally, Statement (iii) provides another illustration of statistical economies of scale for the special case of a 2-partition: the larger coalition enjoys a higher offered load per server than the smaller one.
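Since B(M, ·) is strictly increasing in the offered load, the Wardrop equilibrium of (2) can be computed by a nested bisection: an outer bisection on the common blocking probability B^*, and an inner bisection that inverts the Erlang-B formula for each coalition. The sketch below (building on the erlang_b helper above; all names are ours) is one of many possible zero-finding implementations, not the authors' code:

def offered_load_for_blocking(M, b_star, a_hi=1e6):
    # Invert the Erlang-B formula: find a with B(M, a) = b_star.
    # B(M, .) is strictly increasing in a, so bisection applies.
    lo, hi = 0.0, a_hi
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if erlang_b(M, mid) < b_star:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def wardrop_split(capacities, Lam, mu=1.0):
    # Arrival-rate split (lambda_C) across coalitions with server counts
    # `capacities`, equalising blocking probabilities as in (2).
    lo, hi = 0.0, 1.0          # outer bisection on the common B*
    for _ in range(100):
        b = 0.5 * (lo + hi)
        total = sum(mu * offered_load_for_blocking(M, b) for M in capacities)
        if total < Lam:
            lo = b             # blocking too low: rates too small, raise B*
        else:
            hi = b
    b = 0.5 * (lo + hi)
    return [mu * offered_load_for_blocking(M, b) for M in capacities]

For instance, wardrop_split([10, 2, 2, 2], Lam=13.0) returns the WE rates when four providers with these capacities operate alone; consistent with the economies of scale discussed above, the 10-server provider attracts a disproportionately large share of the total traffic.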
Coalition formation game: Preliminaries
Having defined the behavior of the user base, we now provide some preliminary details on the coalition formation game between the service providers.
Recall that each service provider is strategic, and only enters into a coalition if doing so is beneficial. Given a partition P that describes the coalitions formed by the service providers, we define the value or payoff of each coalition C ∈ P to be βλ^P_C, where β > 0. This is natural when λ^P_C is interpreted as being proportional to the number of subscribers of coalition C, with each subscriber paying a recurring subscription fee. Without loss of generality, we set β = 1.
The value λ^P_C of each coalition C must further be apportioned between the members of the coalition. Denoting the payoff of agent i by φ^P_i, we therefore have ∑_{i∈C} φ^P_i = λ^P_C for all C ∈ P.
Since the providers are selfish, they are ultimately interested only in their individual payoffs. Thus, the coalition formation between providers is driven by the desire of each provider to maximize its payoff, given the statistical economies of scale obtained via coalition formation, and also the constant sum nature of this game (the sum total of the payoffs of all providers equals Λ). The relevant fundamental questions are:
1. Which partitions can emerge as a result of the strategic interaction between providers, i.e., which partitions are stable? Indeed, a precursor to this question is: how does one define a natural notion of stability?
2. It is apparent that the answer to the above question hinges on how the value of each coalition is divided between its members. Thus, a more appropriate question is: which coalitional arrangement of agents and subsequent division of the coalitional shares results in stable configurations?
Our aim in this paper is to answer these questions; such problems can be studied using tools from cooperative game theory. In the next section, we begin with classical notions of stability and 'blocking by a coalition', available in the literature; we will observe that there exists no partition which is stable under these classical notions. In the later sections, we refine the notion of stability (using some form of restricted blocking) and study the configurations that are stable.
Classical Notions of Coalitional Blocking and Stability
It is well known that non-partition type transferable utility cooperative games are characterized by a tuple (N, ν), where ν(C), for any subset C ⊂ N, represents the utility of coalition C. However, this is not sufficient for a partition form game, where a coalition's utility depends not only on the coalition's players but also on the arrangement of the other players. In this case, ν(C) can (more appropriately) be defined as the set of payoff vectors (of dimension n) that are anticipated to be achievable by the players of the coalition C (e.g., Aumann (1961)); this anticipation is based on their expectation of the reactions of the agents outside the coalition. The stability concepts (e.g., the core) are extended to these types of games (e.g., Aumann (1961)), and are discussed at length in Appendix A. In this section, we discuss the same ideas in our notation; in particular, we consider the notion of the α-efficient R-core defined in Aumann (1961) (more details are in Appendix A).
This notion of stability is interlaced with the notion of a partition (more precisely, a configuration, defined below) being blocked by some coalition. We begin with the relevant definitions. Given a partition P = {C_1, · · · , C_k}, the set of payoff vectors consistent with P is defined as:
\[ \Phi^P := \left\{ \Phi = [\phi_1, \cdots, \phi_n] \in \mathbb{R}^n_+ \;:\; \sum_{j \in C_i} \phi_j = \lambda^P_{C_i} \ \forall\, i \right\}. \]
A configuration is defined as a tuple (P, Φ) such that Φ ∈ Φ^P.
Note that a configuration specifies not just a partition of the agents into coalitions, but also specifies an allocation of payoffs within each coalition, that is consistent with the partition.
Blocking by a coalition: A configuration (P, Φ) is blocked by a coalition C ∉ P if, for any partition P′ containing C, there exists Φ′ ∈ Φ^{P′} such that φ′_j > φ_j for all j ∈ C.
Basically, a new coalition can block an existing configuration if each one of its members can derive a strictly better payoff from this realignment (irrespective of the responses of the opponents in C^c). Equivalently, (P, Φ) is blocked by a coalition C ∉ P if, for any partition P′ containing C, we have λ^{P′}_C > ∑_{j∈C} φ_j. Note that the above equivalence hinges on the transferable utility assumption inherent in our cooperative game, by virtue of which (partial) utilities can be transferred across agents. Intuitively, a coalition C ⊂ N blocks configuration (P, Φ) if the members of C have an incentive to 'break' one or more coalitions of P to come together and form a new coalition. In particular, it is possible to allocate payoffs within the blocking coalition C such that each member of C achieves a strictly greater payoff, irrespective of any (potentially retaliatory) rearrangements among agents outside C. This is referred to in the literature as a pessimistic anticipation rule (see Bloch et al. (2014), Shiksha et al. (2021), and Appendix A), or the α-efficient rule in Aumann (1961).
We refer to the above pessimal-anticipation-based blocking as the GB-PA (General Blocking-Perfect Assessment) rule; we first provide a precise summary:
GB-PA rule: Under this rule, a configuration (P, Φ) is blocked by any coalition Q ∉ P if
\[ \lambda_Q \;>\; \sum_{i \in Q} \phi_i, \quad \text{where } \lambda_Q := \min_{P' : Q \in P'} \lambda^{P'}_Q. \tag{3} \]
A configuration is stable under the GB-PA rule if it is not blocked by any coalition.
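Computing the pessimal worth λ_Q in (3) requires enumerating the (Bell-number many) arrangements of the outsiders, which is feasible for small n. A sketch, reusing wardrop_split from the earlier sketch (both function names are ours, introduced only for illustration):

def set_partitions(items):
    # All set partitions of a list (Bell-number many; fine for small n).
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in set_partitions(rest):
        yield [[first]] + part                      # `first` alone
        for i in range(len(part)):                  # or `first` joins a block
            yield part[:i] + [[first] + part[i]] + part[i + 1:]

def pessimal_worth(Q, all_agents, capacities, Lam):
    # lambda_Q of (3): worst-case WE worth of coalition Q over all
    # arrangements of the outsiders; capacities[i] is N_i.
    outsiders = [i for i in all_agents if i not in Q]
    worst = float('inf')
    for part in set_partitions(outsiders):
        caps = [sum(capacities[i] for i in C) for C in [list(Q)] + part]
        worst = min(worst, wardrop_split(caps, Lam)[0])  # Q listed first
    return worst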
The term 'General Blocking' is used for this notion, as any arbitrary coalition (mergers or splits of the existing coalitions, or mergers of partial splits) can block; the term 'Perfect Assessment' is used as the players in the blocking coalition are aware of the previous shares of all members of the blocking coalition, i.e., the previous shares of the players are 'common knowledge' within Q.
Stability under GB-PA: We establish a negative result for this classical notion of stability (proof in Appendix C):
Theorem 2. For n > 2, there exists no stable configuration under the GB-PA rule.
We establish the above result by showing that the configuration with the n-partition (i.e., each agent operates alone) is blocked by a suitable merger, while for any other configuration, there exists a j ∈ N such that either {j} or N − {j} blocks it. For n = 2, it is trivial to observe that the only stable configurations are (P_2, Φ^P) and ({{1, 2}}, Φ^P), where P_2 := {{1}, {2}} and Φ^P := (λ^{P_2}_{{1}}, λ^{P_2}_{{2}}).
Theorem 2 states that no configuration is stable under GB-PA for n > 2; in other words, the α-core (the R-core under α-effectiveness) as defined in Aumann (1961) is empty for our game. This 'impossibility' is due to the fact that under GB-PA, a configuration can be blocked by any coalition that is not contained in it; this coalition can be formed via multiple mergers/splits of existing coalitions. But in practice, either an existing coalition splits or two or more of the existing coalitions merge. Thus, to define more practical and relevant notions of stability, one may have to consider a more restricted set of blocking candidates. This is addressed in the next section.
In the next section, we also consider an alternate form of restricted blocking, where the 'prevailing worth' of the agents of the candidate blocking coalition is assessed imprecisely. Prior to that, we conclude this section with a short discussion on other anticipation rules.
Other Anticipation Rules: There are many other anticipation rules considered in the literature, e.g., the β-effective rule in Aumann (1961).
Realistic Notions of Blocking and Stability
Motivated by the impossibility of stable configurations under GB-PA (Theorem 2), in this section, we define weaker, more realistic notions of stability, that do admit stable configurations. Specifically, the proposed stability notions differ from GB-PA on the class of candidate blocking coalitions considered, as well as the precision with which the 'prevailing worth' of the members of the candidate coalition is assessed and/or revealed. The former distinction is inspired by the observation that organisational rearrangements predominantly occur in practice via mergers or splits of existing coalitions. For each of these notions of stability, we characterize the class of stable configurations.
The main takeaway from our results is the following. Because of the interplay between statistical economies of scale and the constant sum nature of the game, only configurations involving duopolies (i.e., partitions with two coalitions) are stable (except in a certain corner case, where the grand coalition is also stable). This is true for both the proposed notions of stability defined next.
Restricted blocking and stability
The first notion of stability we introduce simply restricts the set of candidate blocking coalitions to mergers and splits of prevailing coalitions. Note that this is a natural restriction from a practical standpoint, since complex rearrangements between firms in a marketplace typically arise (over time) from a sequence of mergers and splits. We refer to this as restricted blocking (RB).
Further, when one assumes precise knowledge of the worth of the blocking candidates, this leads to the RB-PA (Restricted Blocking-Perfect Assessment) rule. We begin with this rule.
RB-PA rule: Under this rule, a configuration (P, Φ) can be blocked only by a coalition Q that is formed either via (i) a merger of coalitions in P (i.e., Q = ∪_{C∈M} C for M ⊆ P), or via (ii) a split of a single coalition in P (i.e., Q ⊂ C for some C ∈ P). Further, such a Q blocks (P, Φ) if, for all partitions P′ containing Q, there exists Φ′ ∈ Φ^{P′} such that φ′_i > φ_i for all i ∈ Q.
Equivalently, Q (as described above) blocks the configuration (P, Φ) if
\[ \lambda_Q \;>\; \sum_{i \in Q} \phi_i, \quad \text{where } \lambda_Q := \min_{P' : Q \in P'} \lambda^{P'}_Q. \tag{4} \]
A configuration (P, Φ) is stable under the RB-PA rule if it is not blocked by any merger or split.
Note that, like GB-PA, the RB-PA rule also involves pessimal anticipation; the members of the candidate blocking coalition are pessimistic in their anticipation of the value of the new coalition. Moreover, it is possible to allocate the payoff of the blocking coalition Q among its members such that each member is (strictly) better off, as discussed in the previous section.
The next notion uses the same restriction on the set of candidate blocking coalitions, but uses an imprecise assessment of the prevailing worth of the members of the candidate blocking coalition, resulting in an imprecise assessment of the benefit/loss from blocking. We refer to this as the RB-IA (Restricted Blocking-Imperfect Assessment) rule.
RB-IA rule: Under this rule, a configuration (P, Φ) is blocked by a coalition Q formed either via a merger or a split if:
\[ \lambda_Q := \min_{P' : Q \in P'} \lambda^{P'}_Q \;>\; \sum_{C \in P} \frac{N_{C \cap Q}}{N_C}\, \lambda^P_C, \tag{5} \]
\[ \lambda^{\hat{P}}_Q \;>\; \sum_{i \in Q} \phi_i, \quad \text{where } \hat{P} = \Big( \bigcup_{C \in P} \{C \setminus Q\} \Big) \cup \{Q\}. \tag{6} \]
A configuration (P, Φ) is stable under the RB-IA rule if it is not blocked by any merger or split.
Condition (5) can be interpreted as a first stage check on the feasibility of the block, by (imperfectly) assessing the total prevailing worth of the members of Q (using the prevailing coalitional worths {λ^P_C}). This imprecise assessment is obtained as the sum of the proportional contributions of the members of Q to their respective parent coalitions; the imprecision stems from not using the actual payoffs {φ_i}_{i∈Q}. Note that this feasibility check is also under the pessimal anticipation rule, but with imperfect estimates.
Condition (6) is the final validation of the block using the precise estimates {φ_i}_{i∈Q}. This ensures that it is possible to allocate the payoff of Q among its members such that each member is (strictly) better off from the deviation. Here, the anticipation is that there would be no immediate retaliation from the leftover players; i.e., as seen from the definition of P̂ in (6), the opponents would remain in their original coalitions (as in the Cournot-Nash equilibrium; Martins-da-Rocha et al. (2011)).
This is reasonable after the already pessimal feasibility check in (5).
Let us now interpret the condition for blocking due to a split/merger separately under RB-IA.
We begin with blocking due to a split. By (5) and (6), a configuration (P, Φ) is blocked by a coalition Q that is formed by splitting a coalition C ∈ P if:
\[ \lambda_Q \;>\; \frac{N_Q}{N_C}\, \lambda^P_C, \tag{7} \]
\[ \lambda^{\hat{P}}_Q \;>\; \sum_{i \in Q} \phi_i, \quad \text{where } \hat{P} = \left( P \setminus \{C\} \right) \cup \{Q,\, C \setminus Q\}. \tag{8} \]
Condition (7) estimates the total prevailing worth of the members of Q as proportional to their fractional contribution to the service capacity of C, i.e., N_Q/N_C. Condition (8) is the final-stage check on split feasibility, as discussed above. Note that P̂ is the new partition that emerges after the split when the opponents remain in their original coalitions.
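The two-stage split check (7)-(8) translates directly into code. The sketch below builds on the wardrop_split and pessimal_worth helpers introduced earlier (all names are ours); it is an illustration of the definitions, not the authors' implementation:

def blocks_by_split_rbia(Q, C, partition, capacities, payoffs, Lam):
    # Does split Q (a strict subset of coalition C in `partition`) block
    # the configuration (partition, payoffs) under RB-IA, i.e. do both
    # (7) and (8) hold?  payoffs[i] is phi_i.
    agents = [i for C_ in partition for i in C_]
    N_Q = sum(capacities[i] for i in Q)
    N_C = sum(capacities[i] for i in C)
    caps = [sum(capacities[i] for i in C_) for C_ in partition]
    lam_P = dict(zip(map(tuple, partition), wardrop_split(caps, Lam)))
    # condition (7): imprecise, pessimal first-stage feasibility check
    if pessimal_worth(Q, agents, capacities, Lam) <= (N_Q / N_C) * lam_P[tuple(C)]:
        return False
    # condition (8): final check in the post-split partition P-hat
    new_part = [C_ for C_ in partition if C_ != C] \
               + [list(Q), [i for i in C if i not in Q]]
    caps2 = [sum(capacities[i] for i in C_) for C_ in new_part]
    lam_hat_Q = wardrop_split(caps2, Lam)[-2]        # Q is second to last
    return lam_hat_Q > sum(payoffs[i] for i in Q)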
Applying (5) and (6) to a merger, a configuration (P, Φ) is blocked by a merger coalition Q = ∪_{C∈M} C, for some M ⊆ P, if
\[ \lambda_Q > \sum_{C \in M} \lambda^P_C, \quad \text{and} \quad \lambda^{\hat{P}}_Q > \sum_{i \in Q} \phi_i, \ \text{where } \hat{P} = \{Q\} \cup \left( P \setminus M \right). \tag{9} \]
Note that the first condition in (9) is identical to (5), the only difference being that the prevailing worth of all the deviating members (∑_{C∈M} λ^P_C) is assessed precisely, given that full coalitions are deviating. The second condition in (9) is the same as (6). However, observe that ∑_{i∈Q} φ_i = ∑_{C∈M} λ^P_C, and hence the second condition in (9) is implied by the first, as λ_Q ≤ λ^{P̂}_Q.
Note that RB-PA and RB-IA differ only in the condition for blocking due to a split. This is natural, since the net worths of coalitions {λ^P_C}_{C∈P} are often common knowledge, whereas the internal payoff allocation within a coalition can often be confidential.
Having defined our new notions of stability, we now consider each notion separately, and characterize the resulting stable configurations. We begin with RB-IA, which appears to admit a broader class of stable configurations.
Stable configurations under RB-IA
Our first result is that all configurations involving partitions of size three or more are unstable. In other words, only monopolies or duopolies can be stable (proof in Appendix C).
Theorem 3. Under the RB-IA rule, any configuration (P, Φ) with |P| ≥ 3 is not stable.

The proof sheds light on why configurations with |P| ≥ 3 are unstable: they are blocked by any merger leading to a 2-partition; this is because of the economies of scale arising from such a merger (as shown in Theorem 1.(ii)), and the pessimal anticipation rule.
Next, we move to the two remaining possibilities: stable configurations involving the grand coalition, and those involving 2-partitions.
Grand Coalition: Defining P_G := {N} as the grand coalition, it is clear that any configuration of the form (P_G, Φ) can only be blocked by a split. We now show that unless a single agent owns at least half the total service capacity of the system, such a block is always possible. In other words, any configuration involving the grand coalition is unstable, unless there is a single 'dominant' agent. On the other hand, if there is a single agent who owns at least half the service capacity, we show that there exist stable configurations of the form (P_G, Φ) (see Appendix C for the proof).
Theorem 4. Under the RB-IA rule:
(i) If N_1 < ∑_{i≠1} N_i, then there exists no payoff vector Φ consistent with P_G such that (P_G, Φ) is stable.
(ii) If N_1 ≥ ∑_{i≠1} N_i, then there exists at least one payoff vector Φ consistent with P_G such that (P_G, Φ) is stable. Specifically, any configuration (P_G, Φ) satisfying the following is stable:
\[ \phi_1 \;\ge\; \max \left\{ \lambda_C : C \subsetneq N \ \text{and} \ 1 \in C \right\}. \tag{10} \]
To prove part (i) of the above theorem, we show that for any payoff vector, there exists a coalition with n − 1 players that blocks the grand coalition (details in Appendix C). For part (ii), note that only coalitions containing player 1 satisfy condition (7), and hence these are the only potential blocking coalitions under RB-IA. Therefore, if player 1 is given a large enough allocation (as in (10)), no such coalition can block. 2-partitions: An interesting property of the stable configurations we identify next is that their stability does not depend upon the payoff vector Φ. Instead, it only depends upon the specifics of the partition (however, this is not true for all partitions). This insensitivity to the payoff vector is not seen under the RB-PA rule.
We begin by defining some preliminaries.
Stable partition: A partition P is stable if all configurations involving it are stable, i.e., the configuration (P, Φ) is stable for any Φ ∈ Φ^P.
By Theorem 1, for a 2-partition P = {C_1, C_2} with N_{C_1} = k (and hence N_{C_2} = N − k), λ^P_{C_1} = λ^P_k is the unique zero of the following function (see (2)):
\[ h(\lambda) := \frac{\lambda^k}{k!} \sum_{j=0}^{N-k} \frac{(\Lambda-\lambda)^j}{j!} \;-\; \frac{(\Lambda-\lambda)^{N-k}}{(N-k)!} \sum_{j=0}^{k} \frac{\lambda^j}{j!}. \]
Now, we define Ψ(k; Λ) := λ^P_k/k as the offered load (or market size) per server of the larger coalition. Finally, define
\[ k^*(\Lambda) := \arg\max_{k}\, \Psi(k; \Lambda). \tag{11} \]
Note that k^*(Λ) is the set of values of k that maximize the per-server offered load of the larger coalition among 2-partitions.
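Numerically, Ψ(k; Λ) can be obtained directly from the Wardrop split of the duopoly with k and N − k servers (equivalently, from the zero of h above), and k^*(Λ) by scanning k. A sketch using the earlier wardrop_split helper (function names ours):

def psi(k, N, Lam):
    # Per-server offered load Psi(k; Lam) of the larger coalition in a
    # duopoly with k and N - k servers (k >= N - k).
    return wardrop_split([k, N - k], Lam)[0] / k

def k_star(N, Lam):
    # arg max_k Psi(k; Lam), cf. (11), scanning k = ceil(N/2), ..., N - 1.
    vals = {k: psi(k, N, Lam) for k in range(N // 2 + N % 2, N)}
    best = max(vals.values())
    return [k for k, v in vals.items() if abs(v - best) < 1e-9]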
Let C^* := {C ⊊ N : N_C ∈ k^*(Λ)} be the set of coalitions C that can derive the maximum per-server offered load among 2-partitions. In the following lemma, we provide a sufficient condition for a class of partitions (recall any such partition is represented by P = {C_1, C_2}) to be stable.
Lemma 1. Consider the RB-IA rule. A 2-partition P is stable if there exists no coalition S ⊂ C_i, for i ∈ {1, 2}, such that
\[ \frac{\lambda_S}{N_S} \;>\; \frac{\lambda_{C_i}}{N_{C_i}} \;=\; \frac{\lambda^P_{C_i}}{N_{C_i}}. \]
The proof of the lemma follows directly from the definition of stability. Indeed, for 2-partitions that satisfy the hypothesis of the above lemma, none of the splits are feasible (they violate (7)); further, the merger of both coalitions (which leads to the grand coalition) is also not feasible because of the constant sum nature of the game. A consequence of this lemma is the following (see Appendix C for the proof).
Theorem 5. Consider the RB-IA rule.
(i) There always exists a stable 2-partition.
(ii) Any 2-partition P with one of the coalitions from C^* is a stable partition.
(iii) Additionally, any 2-partition P = {C_1, C_2} (where N_{C_1} ≥ N_{C_2}) with no C ⊂ C_1 such that N_C > N/2 is stable.
Note that statement (i) directly follows from statement (ii) and the non-emptiness of C^*. Statement (ii) follows as the duopolies identified here satisfy the hypothesis of Lemma 1. A similar reasoning applies for statement (iii).
From Theorem 5.(iii), duopolies with perfectly matched service capacities (N_{C_1} = N_{C_2}) are also stable, while from (ii), any duopoly with N_{C_1} ∈ k^*(Λ) (see (11)) is stable. Further, Theorem 5 identifies a class of stable partitions, i.e., partitions that are stable for any consistent payoff vector.
However, there can also exist duopolies that are stable only under certain consistent payoff vectors and unstable for others (see Section 6).
In Section 5, we provide a complete characterization of the class of stable partitions under RB-IA, in the heavy and light traffic regimes.
Stable configurations under RB-PA
Next, we consider stable configurations under the RB-PA rule. Under this rule, we show that only configurations involving 2-partitions can be stable, i.e., configurations involving the grand coalition, or involving k-partitions with k ≥ 3 are always unstable. In contrast, for the RB-IA rule, recall that the grand coalition is stable under certain conditions. Moreover, also in contrast to RB-IA, the stability/instability of duopoly configurations under RB-PA appears to depend on the associated payoff vector.
We begin by characterising the space of stable allocations under RB-PA. From (4), it is easy to see that a stable payoff vector lies in the polyhedron (12) defined below.
Lemma 2. [Polyhedral Characterisation] Given any partition P, stable allocations lie in the polyhedron defined by
\[ \sum_{i \in Q} \phi_i \;\ge\; \lambda_Q \quad \text{for all } Q \subseteq C_j \in P, \text{ and for all } j. \tag{12} \]
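Membership in the polyhedron (12) can be verified by checking every nonempty sub-coalition of every block of P against its pessimal worth. A brute-force sketch (using pessimal_worth from the earlier sketch; feasible for small coalitions):

from itertools import chain, combinations

def in_stable_polyhedron(partition, capacities, phi, Lam):
    # Check (12): sum_{i in Q} phi_i >= lambda_Q for every nonempty Q
    # contained in some coalition of `partition`; phi[i] is phi_i.
    agents = [i for C in partition for i in C]
    for C in partition:
        for Q in chain.from_iterable(combinations(C, r) for r in range(1, len(C) + 1)):
            if sum(phi[i] for i in Q) < pessimal_worth(list(Q), agents, capacities, Lam):
                return False
    return True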
It is clear from the above lemma that RB-PA does not admit stable partitions (unlike RB-IA).
In other words, stability under RB-PA is tied to the payoff vector. Interestingly, stable partitions under RB-IA, paired with a special payoff vector (defined next), are stable; see Theorem 6.
The proportional payoff vector Φ^P_p, associated with any partition P, assigns to each member a payoff in proportion to the number of servers they bring to the coalition:
\[ \phi^P_{p,i} = \frac{N_i}{\sum_{j \in C} N_j}\, \lambda^P_C \quad \text{for any } i \in C \in P. \tag{13} \]
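The proportional payoff vector (13) is straightforward to compute from the WE worths; a minimal sketch, again reusing wardrop_split (function name ours):

def proportional_payoffs(partition, capacities, Lam):
    # Proportional payoff vector (13): each member's share of its
    # coalition's WE worth is proportional to its server count.
    caps = [sum(capacities[i] for i in C) for C in partition]
    lams = wardrop_split(caps, Lam)
    phi = {}
    for C, N_C, lam in zip(partition, caps, lams):
        for i in C:
            phi[i] = capacities[i] / N_C * lam
    return phi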
Our results for the RB-PA rule are summarized as follows (see Appendix C for the proof).
Theorem 6. Under the RB-PA rule:
(i) No configuration involving the grand coalition is stable.
(ii) No configuration involving a k-partition, for k ≥ 3, is stable.
(iii) There exists at least one 2-partition P such that (P, Φ^P_p) is stable. Specifically, consider any stable 2-partition P under the RB-IA rule. Then (P, Φ^P_p) is stable under the RB-PA rule. Further, there exists a neighbourhood B^P_p of the payoff vector Φ^P_p such that (P, Φ) is stable for all Φ ∈ B^P_p.
Like RB-IA, RB-PA also does not admit any stable configurations involving 3 or more coalitions.
Moreover, under RB-PA, the grand coalition is also unstable for all payoff vectors (unlike RB-IA, which admits payoff vectors that stabilise the grand coalition under certain conditions). Finally, turning to duopolies, Theorem 6 conveys that partitions that are stable under the RB-IA rule are, when paired with the proportional payoff vector, also stable under RB-PA.
Shapley payoff vector: Next, we consider an extension of the Shapley value to partition form games, to divide a coalition's worth among its members (Aumann et al. (1974)).
Under this extension, we treat each coalition C_i in the partition as a 'grand coalition', define a suitable 'worth' ν_C for each C ⊂ C_i, and then use the usual definition of the Shapley value to obtain individual shares of the players in C_i. Formally, for any j ∈ C_i,
\[ \phi^P_{s,j} := \sum_{C \subseteq C_i,\; j \notin C} \frac{|C|!\,\left(|C_i| - |C| - 1\right)!}{|C_i|!} \left( \nu_{C\cup\{j\}} - \nu_C \right), \tag{14} \]
where ν_C is defined using pessimal anticipation as below:
\[ \nu_C = \lambda^{P'}_C, \quad \text{where } P' = \left( P \setminus \{C_i\} \right) \cup \{C,\; C_i \setminus C\}. \tag{15} \]
Note that ν_C is defined as the payoff obtained by C when (i) players outside of C_i remain attached to their original coalitions (as in the Cournot equilibrium), and (ii) the players in C_i \ C form a single competing coalition (in the spirit of pessimal anticipation).
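The extended Shapley value (14)-(15) can be computed by enumerating the sub-coalitions of C_i. A sketch, under the convention ν_∅ = 0 (an assumption we adopt for the empty set, which (14) implicitly requires), with all helper names ours and wardrop_split as sketched earlier:

from itertools import combinations
from math import factorial

def shapley_within(C_i, partition, capacities, Lam):
    # Shapley payoffs (14) inside coalition C_i of `partition`, with the
    # worth nu_C of (15): C competes against C_i \ C while the other
    # coalitions of `partition` stay intact.
    others = [C for C in partition if C != C_i]

    def nu(C):
        if not C:
            return 0.0                        # convention: nu of empty set
        rest = [i for i in C_i if i not in C]
        new_part = others + [list(C)] + ([rest] if rest else [])
        caps = [sum(capacities[i] for i in B) for B in new_part]
        return wardrop_split(caps, Lam)[len(others)]   # position of C

    n_i, phi = len(C_i), {}
    for j in C_i:
        rest, val = [i for i in C_i if i != j], 0.0
        for r in range(n_i):
            for C in combinations(rest, r):
                w = factorial(r) * factorial(n_i - r - 1) / factorial(n_i)
                val += w * (nu(set(C) | {j}) - nu(set(C)))
        phi[j] = val
    return phi

By the efficiency property of the Shapley value, the shares within C_i sum to ν_{C_i} = λ^P_{C_i}, so the resulting payoff vector is consistent with P.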
Next, we present some contrasting results (compared to Theorem 6) for a small number of service providers, for any 2-partition P = {C_1, C_2} (proof in Appendix C).
Theorem 7. Under the RB-PA rule, with the Shapley payoff vector Φ^P_s as defined in (14) and (15): (i) for n = 3, the configuration (P, Φ^P_s) is stable for any 2-partition P, and (ii) for n = 4, the configuration (P, Φ^P_s) is stable for any 2-partition P such that |C_1| = |C_2| = 2.
Note that Theorem 7 establishes the stability of certain 2-partitions under the Shapley payoff vector that are not covered in Theorem 6 under the proportional payoff vector (for n = 3, 4). Specifically, under the Shapley payoff vector, any 2-partition for n = 3, and any 2-partition with equal-sized coalitions for n = 4, is stable. In contrast, recall that the 2-partitions that are shown to be stable under the proportional payoff vector depend on the number of servers within each coalition (see Theorem 6). We present a few examples in Section 6 to demonstrate these contrasts numerically.
Stable Duopolies: Heavy and Light Traffic
In this section, we provide a complete characterization of stable partitions under RB-IA, and stable configurations under RB-PA with the proportional payoff vector, in heavy and light traffic regimes. Specifically, we provide the necessary and sufficient conditions for stability, as Λ ↑ ∞ (heavy traffic) and Λ ↓ 0 (light traffic), with other system parameters remaining unchanged.
Our analysis presents interesting contrasts between the heavy and light traffic regimes. In heavy traffic, we find that all duopolies form stable partitions under RB-IA and stable configurations (with the proportional payoff vector) under RB-PA. Intuitively, this is because economies of scale discourage splits in heavy traffic; as we show in Lemma 3 in Appendix C, the per-server utility of the larger coalition increases with the number of servers it possesses. (Interestingly, this is a 'second order' effect; the per-server utility scales as Λ/N for both coalitions in heavy traffic; see Lemma 6 in Appendix C.) In contrast, in light traffic, we find that only duopolies where the two coalitions are 'closely matched' in the number of servers they possess are stable. Intuitively, this is because economies of scale get significantly diluted in light traffic, discouraging any coalition from becoming 'too large'.
Heavy Traffic
Our main result in heavy traffic is the following (proof in Appendix C).
Theorem 8. There exists a Λ̄ such that for all Λ ≥ Λ̄, the following holds: given any 2-partition P, (i) P is a stable partition under RB-IA, and (ii) (P, Φ^P_p) is a stable configuration under RB-PA.
This result can be interpreted as follows. Note that due to the constant sum nature of the game, duopolies can never be blocked due to a merger. Thus, our stability analysis hinges on the feasibility of splits. Specifically, we prove Theorem 8 by showing that, given any 2-partition P, the per-server utility of the larger coalition is increasing in its service capacity (see Lemma 3 in Appendix C). In other words, economies of scale persist in heavy traffic. Indeed, the above monotonicity property, which is proved by exploiting the analytical extension of the Erlang-B formula to real-valued service capacities (see Jagerman (1974)), renders condition (7) for a split under RB-IA, and condition (4) for a split under RB-PA, invalid.
Light Traffic
Next, we consider the light traffic regime and our main result here is (proof in Appendix C):
Theorem 9. Let 𝒫 denote the space of 2-partitions P = {C_1, C_2} (where N_{C_1} ≥ N_{C_2}) satisfying the following condition: there does not exist C ⊂ C_1 such that N_C > N/2. There exists Λ̲ > 0 such that, for all Λ ≤ Λ̲, a 2-partition P (i) is a stable partition under RB-IA, and (ii) yields a stable configuration (P, Φ^P_p) under RB-PA, if and only if P ∈ 𝒫. In words, the stable duopolies in light traffic are exactly those in which the larger coalition contains no sub-coalition C with more than half the total service capacity. In particular, note that duopolies with perfectly matched service capacities (N_{C_1} = N_{C_2}) also satisfy this condition. Intuitively, the result holds because in light traffic, the larger coalition corners almost the entire offered load (i.e., λ^P_{C_1}/Λ → 1 as Λ → 0); see Lemma 8 in Appendix C.
Our results in the heavy and light traffic regimes shed light on the impact of congestion (via the total offered load Λ, a.k.a., the market size) on coalition formation. In light traffic, the per-server utility of the larger (by service capacity) coalition Ψ(k) decreases with its service capacity k (as the larger coalition captures almost the entire Λ, irrespective of k). This in turn encourages duopolies where the service capacities of the two coalitions are closely matched (even though the larger coalition corners most of the total utility). On the other hand, in heavy traffic, the per-server utility of the larger (by service capacity) coalition Ψ(k) increases with its service capacity k. These contrasting monotonicity properties suggest that the set of stable duopolies should grow with the market size Λ. This is consistent with what we find in our numerical experiments (see Figure 1).
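This contrast can be observed directly with the psi helper sketched earlier: for a fixed total capacity, Ψ(k; Λ) trends downward in k when Λ is small and upward in k when Λ is large. The parameter values below are illustrative, not taken from the paper:

# Per-server utility Psi(k; Lam) of the larger coalition, N = 16 servers,
# printed for a small and a large market size (light vs. heavy traffic);
# the opposing trends in k match the discussion above.
N = 16
for Lam in (0.5, 200.0):
    print(Lam, [round(psi(k, N, Lam), 4) for k in range(8, N)])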
Finally, it is important to note that we are able to provide necessary and sufficient conditions for stability under RB-IA and RB-PA in heavy and light traffic regimes; in contrast, we could only provide sufficient conditions for stability (see Theorems 5 and 6) outside of these limiting regimes.
Numerical Case Studies
In this section, we present some numerical case studies that illustrate our key findings. Importantly, we also consider examples for which the conditions of our theorems are not satisfied; these provide additional insights. We numerically compute λ^P_C for various C and P using zero-finding algorithms, and then compute k^* of (11), or use equations (7)-(8) or (4), to determine the stable configurations.
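As an illustration of this pipeline, the following sketch reproduces the flavour of the first case study below (Λ = 13 and capacities 10, 2, 2, 2), using the helpers from the earlier sketches; the expected outputs are taken from the text, not independently verified here:

caps = {1: 10, 2: 2, 3: 2, 4: 2}
print(k_star(sum(caps.values()), 13.0))   # expected [12], per the text
P = [[1, 2, 3], [4]]
phi = proportional_payoffs(P, caps, 13.0)
# Q = {1, 2} should block (P, proportional payoffs) under RB-IA, per the text:
print(blocks_by_split_rbia([1, 2], [1, 2, 3], P, caps, phi, 13.0))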
RB-IA rule: Recall that Theorem 5 provides a sufficient condition for a stable partition under RB-IA, i.e., a partition that is stable under any consistent payoff vector. Here, we illustrate that RB-IA also admits stable configurations that are not supported by stable partitions. Consider the example with Λ = 13 and 4 service providers having service capacities N_1 = 10, N_2 = N_3 = N_4 = 2.
Note that the partition P = {{1, 2, 3}, {4}} does not satisfy the hypothesis of Theorem 5 (in this case, k^* = {12}). Moreover, the configuration (P, Φ^P_p) is blocked by Q = {1, 2}, as the split Q satisfies (7), while Φ^P_p and Q satisfy (8). Thus, P is not a stable partition. However, the configuration (P, Φ)
is stable for the following set of payoff vectors:
\[ \left\{ \Phi : \; \phi_1 \ge \lambda^{\{\{1\},\{2,3\},\{4\}\}}_{\{1\}}, \;\; \phi_1 + \phi_2 \ge \lambda^{\{\{1,2\},\{3\},\{4\}\}}_{\{1,2\}}, \;\; \phi_1 + \phi_3 \ge \lambda^{\{\{1,3\},\{2\},\{4\}\}}_{\{1,3\}}, \;\; \phi_2 + \phi_3 \ge \lambda^{\{\{1\},\{2,3\},\{4\}\}}_{\{2,3\}}, \;\; \phi_1 + \phi_2 + \phi_3 = \lambda^P_{\{1,2,3\}} \right\}. \]
It can be checked that this set is indeed non-empty. This demonstrates that it is possible for a partition to be stable under some but not all consistent payoff vectors. We consider another such example with 3 agents, Λ = 100, N_1 = 80, N_2 = 20 and N_3 = 5. By Theorem 7.(i), (P, Φ^P_s) is stable for P = {{1, 2}, {3}}. However, we find (numerically) that (P, Φ^P_p) is not stable (it is blocked by Q = {1}).
is not stable (it is blocked by Q = {1}). (Numerically, we find that k * = {80}, implying P does not satisfy the hypothesis of Theorem 6, as expected.) and 8 respectively). In particular, in light traffic, the only stable duopolies are those that are nearly matched with respect to service capacity-one where the dominant coalition is composed of agents 2, 3, 4, and 5 (k = 8) and another the dominant coalition is composed of agent 1 and one of the remaining agents (k = 9). In heavy traffic, all duopolies are stable. Importantly, the figure shows that the set of stable duopolies grows monotonically with Λ.
Conclusions and Future Work
Our work highlights that in competitive service systems enjoying statistical economies of scale, coalition formation games have very distinct equilibria when the total payoff across agents is a constant. In particular, we demonstrate that duopolies emerge, with the dominant coalition exploiting economies of scale to corner a disproportionate fraction of the total payoff.
This work motivates future work along several directions. Firstly, one could explore alternative models for a coalition's utility. For instance, one could define the utility of a coalition to be the rate of customers served (rather than the rate of customer arrivals); this is meaningful in scenarios where providers only earn revenue when a customer is successfully served. Preliminary analysis suggests that this modification of the utility structure alters the nature of stable equilibria. More generally, this work motivates a systematic understanding of how payoff structures influence the nature of equilibria in partition form games.
Another potential direction of inquiry involves exploring the effect of different queueing models, including models where customers can wait for service with/without balking or reneging.
Finally, it would also be interesting to explore dynamic variants of coalition formation games.
This would entail examining whether any limiting behaviors emerge (particularly when stable equilibria do not exist).
Appendix A: Characteristic Form games
A game in characteristic form (Aumann (1961)) can be defined using the tuple (N, ν, H), where: (a) N denotes the set of n agents; (b) ν is called a characteristic function, and for any C ⊆ N, ν(C) denotes the set of all possible payoff vectors of dimension n that agents in C can jointly achieve; and (c) H is the set of all achievable payoff vectors of dimension n (such vectors are also referred to as allocation vectors in the literature). We say (N, ν, H) is an ordinary game (see Aumann (1961)) if: x ∈ ν(N) if and only if there is a y ∈ H such that x_i ≤ y_i for all i.
In this appendix, we provide the details of how our problem can be recast as a characteristic form game. Let F(P) be the set of all feasible payoff vectors under partition P; these are the vectors that satisfy the following: the sum of the payoffs of all agents in any coalition S ∈ P is less than or equal to that obtained by S under partition P at the WE, λ^P_S. Hence
\[ F(P) := \left\{ x = [x_i] \;:\; \sum_{i \in S} x_i \le \lambda^P_S \ \text{for all } S \in P \right\}. \tag{16} \]
Thus H, the set of all achievable/feasible payoff vectors, is H = ∪_P F(P). Observe that for the grand coalition, F({N}) = H, and hence H is convex. We are now left to define the characteristic function ν.
Characteristic function using the pessimal rule: The characteristic function precisely describes the set of all possible divisions of the anticipated worth of any coalition. One can define such a function for partition form games using an appropriate anticipation rule (Bloch et al. (2014)). There are many known anticipatory rules for defining the characteristic function, as also described in Section 3.
According to the most widely used pessimistic anticipation rule (Bloch et al. (2014)), the agents in a deviating coalition C assume that the outside agents arrange themselves to hurt the agents in C the most. Further, the minimum utility that coalition C can achieve irrespective of the arrangement of the agents outside this coalition is given by
\[ \nu_C := \min_{P : C \in P} \; \max_{x \in F(P)} \; \sum_{i \in C} x_i \]
(observe that in our case, ν_C = λ_C, the pessimal worth in (3)). Thus, the characteristic function {ν(C); for all C} under the pessimal rule is given by the following: for any coalition C, ν(C) = {x : ∑_{i∈C} x_i ≤ ν_C} is the set of possible payoff vectors that the agents in C can jointly achieve independent of the arrangement of outside agents. From the above definition, it is clear that our game is an ordinary game.
Stability: To study the stability aspects, one needs to understand whether a certain coalition can 'block' any payoff vector. Blocking by a coalition implies that the coalition works as an independent unit and has an anticipation of the value it can achieve (e.g., irrespective of the arrangements of the others, under the pessimal rule). If the division of this anticipated value among the members of the coalition, under any given allocation rule, renders the members able to achieve more than under the current payoff vector, then the coalition has a tendency to oppose the current arrangement or payoff vector.
Blocking: A payoff vector x ∈ H is blocked by a coalition C if there exists a payoff vector y ∈ ν(C) such that y_i > x_i for all i ∈ C.
With these definitions in place, we now define a related solution concept called the R-core, which is an extension of the classical definition of the core for transferable utility games (in non-partition form games).
R-core (Aumann 1961, Section 3): We define R-core C (H) to be the set of vectors in H which cannot be blocked by any other member of H.
The authors in Hafalir (2007) studied the properties of this core under the name c-core (which is also popularly known as the α-core in the literature). In (Hafalir 2007, Corollary 2), they showed that a convex partition form game necessarily has a non-empty core. However, one can easily check that our game is not convex in the sense of Hafalir (2007), and hence it is not clear whether the core is non-empty.
In fact, in Theorem 2, we showed that the R-core is empty for our game. We hence introduce more generalised and relevant notions of stability in this paper.
Appendix B: Proof of Theorem 1
Proof of Existence and Uniqueness: Let the size of a partition be denoted by p. The first step of this proof is to show the existence and uniqueness of the WE for the case p = 2. In the next step, using induction, we prove existence for a general p = m > 2 using the corresponding results for m − 1. In the third step we show the continuity of the WE, to be precise, of the arrival rates at the WE, for size m. The last step establishes the uniqueness of our solution.
Step 1: Existence and Uniqueness of WE for p = 2
To obtain the WE, the following equation needs to be solved: B^P_{C₁}(N_{C₁}, a^P_{C₁}) = B^P_{C₂}(N_{C₂}, a^P_{C₂}). Define the function f := B^P_{C₁}(N_{C₁}, a^P_{C₁}) − B^P_{C₂}(N_{C₂}, a^P_{C₂}). Then f is a function of λ^P_{C₁} ∈ [0, Λ], since λ^P_{C₂} = Λ − λ^P_{C₁}.
• At λ^P_{C₁} = 0, we have B^P_{C₁}(N_{C₁}, a^P_{C₁}) = 0 and B^P_{C₂}(N_{C₂}, a^P_{C₂}) > 0; thus f(0) < 0.
• At λ^P_{C₁} = Λ, we have B^P_{C₁}(N_{C₁}, a^P_{C₁}) > 0 and B^P_{C₂}(N_{C₂}, a^P_{C₂}) = 0; thus f(Λ) > 0.
Further, B^P_{C₁} and B^P_{C₂} are ratios of polynomials with denominators greater than 1, and hence are continuous functions. This implies that f is a continuous function.
Thus, f satisfies the hypothesis of the Intermediate Value Theorem (IVT). Using the IVT, there exists a value λ^P_{C₁} = λ* ∈ (0, Λ) such that f(λ*) = 0. The uniqueness of λ* follows since B^P_{C₁} and B^P_{C₂} are strictly increasing functions of λ^P_{C₁} and λ^P_{C₂}, respectively, so that f is strictly increasing in λ^P_{C₁}.
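Step 1 translates directly into a numerical procedure: compute the Erlang-B blocking probability via its standard recursion and bisect on f. A minimal sketch follows; the helper names (erlang_b, we2) are ours, not from the paper:

```python
def erlang_b(n, a):
    """Erlang-B blocking probability B(n, a), via the standard recursion
    B(0, a) = 1, B(j, a) = a B(j-1, a) / (j + a B(j-1, a))."""
    b = 1.0
    for j in range(1, n + 1):
        b = a * b / (j + a * b)
    return b

def we2(n1, n2, lam, tol=1e-12):
    """Wardrop split (lam1, lam2) for a 2-partition with n1 and n2 servers:
    bisection on f(x) = B(n1, x) - B(n2, lam - x), which is continuous,
    strictly increasing, negative at 0 and positive at lam."""
    lo, hi = 0.0, lam
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if erlang_b(n1, mid) < erlang_b(n2, lam - mid):
            lo = mid   # f(mid) < 0: the root lies to the right
        else:
            hi = mid
    return lo, lam - lo

l1, l2 = we2(6, 4, 8.0)
print(l1, l2, erlang_b(6, l1), erlang_b(4, l2))  # equal blocking probabilities
```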
Step 2: Existence for general p = m > 2
To prove existence for a general m > 2, we assume that a unique WE exists for p = m − 1, i.e., λ^P_{C₁}, · · · , λ^P_{C_{m−1}} with the corresponding common blocking probability B*. With m units, we can initially fix λ^P_{C_m} = 0 and obtain the WE corresponding to the remaining units, which we have assumed to exist. As λ^P_{C_m} increases, Λ − λ^P_{C_m}, which is the total share of the remaining agents, decreases. From part (i) of this theorem applied to the case with m − 1 units, we know that the corresponding WE solution for these agents also decreases. This implies that the common blocking probability for C₁, · · · , C_{m−1} reduces, while the blocking probability of C_m increases (see (1)).
Using similar arguments as above, and treating C₁, · · · , C_{m−1} as one unit when defining the function for the IVT (continuity is obtained from Step 3, with m − 1), one can show that the WE exists.
Step 3: Continuity of the optimizers, i.e., of the WE: Consider the following function g for m coalitions in partition P:

g(Λ, λ) := Σ_{C_j ∈ P; 1 < j ≤ m} (B^P_{C₁} − B^P_{C_j})²,

where λ is the vector of arrival rates of all C_j ∈ P. Then, we define g*(Λ) := min_{λ: Σ_j λ_j = Λ} g(Λ, λ). Observe that the (unique) minimizer λ* of the function g is the (unique) WE for our queueing model, and that the function g is jointly continuous. Thus, using the Maximum Theorem, we have that g* and λ* are continuous in Λ.
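The function g also yields a practical solver for the WE with a general number of coalitions: minimize g over the simplex {λ ≥ 0 : Σ_j λ_j = Λ}. A sketch using scipy, reusing erlang_b from the snippet above (all names are ours):

```python
import numpy as np
from scipy.optimize import minimize

def we_general(sizes, lam):
    """WE arrival rates for coalition sizes `sizes` and total rate `lam`,
    obtained by minimizing g(lam, .) of Step 3 over the simplex."""
    def g(x):
        b = [erlang_b(n, a) for n, a in zip(sizes, x)]
        return sum((b[0] - bj) ** 2 for bj in b[1:])
    cons = ({"type": "eq", "fun": lambda x: np.sum(x) - lam},)
    bounds = [(0.0, lam)] * len(sizes)
    x0 = np.full(len(sizes), lam / len(sizes))
    return minimize(g, x0, method="SLSQP", bounds=bounds, constraints=cons).x

print(we_general([6, 4, 2], 10.0))  # rates that equalize the blocking probs
```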
Step 4: Uniqueness of the WE: To prove the uniqueness of the WE, we assume the contrary, i.e., say (λ₁, · · · , λ_m) and (λ̃₁, · · · , λ̃_m) are two distinct WEs. One can have the following cases:

Case 1: There exist multiple WEs with the same common blocking probability B*. This implies that some of the units in the partition obtain different arrival rates in the multiple WEs while having the same common B*, i.e., λ_i ≠ λ̃_i for some i. However, this is not possible, since the blocking probability is a strictly increasing function of the arrival rate.

Case 2: There exist multiple WEs with different common blocking probabilities B* and B̃*. Without loss of generality, assume B* < B̃*. This implies that the arrival rate to each of the units with common blocking probability B̃* is larger (since the blocking probability is an increasing function of the arrival rate). However, the total arrival rate is fixed at Λ, which implies that one of the WEs does not satisfy Σ_{C_j ∈ P} λ^P_{C_j} = Λ.

Proof that all units are used: For contradiction, assume that the customers split themselves amongst some strict subset of the units of partition P. Then, each unit with zero arrivals has zero blocking probability, while the units with non-zero arrivals have some strictly positive blocking probability. However, this contradicts the fact that the coalitions having zero arrivals should have a higher blocking probability than the others at the WE. Hence, at the WE, each of the units in partition P obtains a non-zero arrival rate.
Proof of part (i): Let λ^P_{C₁}, · · · , λ^P_{C_k} be the individual arrival rates corresponding to partition P at the WE (satisfying (2)) for the coalitions C₁, · · · , C_k, respectively, with total arrival rate Λ > 0. Let the corresponding common blocking probability be B*. When the total arrival rate is increased to Λ′, the individual arrival rates to the providers at the WE change to λ̃^P_{C₁}, · · · , λ̃^P_{C_k}, and the corresponding common blocking probability changes to B̃*. Note that these splits across the individual operating units must satisfy

Σ_{i=1}^k λ̃^P_{C_i} = Λ′ > Λ = Σ_{i=1}^k λ^P_{C_i},    (17)

for any partition P.
Next, we show that λ̃^P_{C_j} ≤ λ^P_{C_j} is not possible for any C_j ∈ P. Using (17), we know that at least one of the units has a higher individual arrival rate at the new WE, i.e., λ̃^P_{C_j} > λ^P_{C_j} for at least one C_j ∈ P. This means that the common blocking probability at the new WE is increased, i.e., B̃* > B*. Now, since the blocking probability is a strictly increasing function of the arrival rate, the arrival rate to each coalition is increased at the new WE for Λ′, i.e., λ̃^P_{C_j} > λ^P_{C_j} for all C_j ∈ P. Hence, the WE is an increasing function of Λ.
Proof of part (ii): Let λ^P_{C₁}, · · · , λ^P_{C_k} be the individual arrival rates corresponding to partition P at the WE for the coalitions C₁, · · · , C_k, respectively, and let the corresponding common blocking probability be B*. Observe that the blocking probabilities of the C_i and C_j units also equal B*, and hence the merger M = C_i ∪ C_j (≠ N) would have a strictly smaller blocking probability, i.e., B_M < B*, if its joint arrival rate were λ^P_{C_i} + λ^P_{C_j}. From (1), the blocking probability is a strictly increasing function of the arrival rate. Thus, the new WE after the merger assigns a strictly bigger arrival rate to the merger, since again at the new WE the blocking probabilities of all coalitions in the post-merger partition should be equal by (2).
Proof of part (iii): Consider a system with identical servers. We know that when any number of identical servers combine, along with their arrival rates, the combined blocking probability reduces. This reduction is larger when more servers combine, i.e.,

B(N, a) > B(LN, La) > B(MN, Ma),    (18)

where 0 < L < M are constants, B is the blocking probability, N is the number of servers and a is the offered load. Now suppose that the coalitions with N_{C₁} and N_{C₂} servers get exactly the N_{C₁}/N and N_{C₂}/N shares of the total arrival rate Λ at the WE, respectively. Using (18), the coalition with N_{C₁} servers then has a strictly smaller blocking probability. From (2), the blocking probability of each unit at the WE is the same. So, the arrival rates to the coalitions with N_{C₁} and N_{C₂} servers need to be increased and reduced, respectively, to achieve the WE. Hence, the coalitions with N_{C₁} and N_{C₂} servers satisfy

λ^P_{C₁}/N_{C₁} > Λ/N > λ^P_{C₂}/N_{C₂}.
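Both the scaling inequality (18) and the ordering above are easy to verify numerically, reusing erlang_b and we2 from the sketches in the proof above (the numbers are illustrative):

```python
# Economies of scale, cf. (18): pooling lowers blocking at fixed load/server.
a, n = 5.0, 5
for L in (1, 2, 4):
    print(L, erlang_b(L * n, L * a))   # strictly decreasing in L

# Part (iii): the larger coalition earns a super-proportional share at WE.
n1, n2, lam = 7, 3, 9.0
l1, l2 = we2(n1, n2, lam)
print(l1 / n1, lam / (n1 + n2), l2 / n2)  # l1/n1 > lam/N > l2/n2
```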
Appendix C: Rest of the proofs

Proof of Theorem 2: Consider any configuration, say (P, Φ). From (3), the configuration is stable if and only if Σ_{i∈C} φ_i ≥ λ_C for all C ⊂ N with C ∉ P.

Case 1: All players are alone in P. In this case, for some player j, consider the merger coalition M = N − {j}. Then, from Theorem 1.(ii),

λ_M > Σ_{C_l ∈ M} λ^P_{C_l} = Σ_{i≠j} φ_i,

which implies that M blocks the prevalent configuration under the GB-PA rule.
Case 2: There exists at least one coalition C ∈ P such that |C| ≥ 2. This implies that S_a := N − {a} ∉ P for all a ∈ C. We will show that, for some a ∈ C, either S_a or {a} blocks the prevailing configuration.

Case 2(a): The configuration is blocked by S_a for some a ∈ C. In this case, the instability of the configuration follows immediately.
Case 2(b): The configuration is not blocked by S_a for any a ∈ C. In this case,

Σ_{i∈S_a} φ_i ≥ λ_{S_a} = Λ − λ_{{a}}

for all a ∈ C. This is equivalent to the statement φ_a ≤ λ_{{a}} for all a ∈ C. However, there must exist an â ∈ C such that φ_â < λ_{{â}}: otherwise, Σ_{q∈C} φ_q = Σ_{q∈C} λ_{{q}} ≤ Σ_{q∈C} λ^{P′}_{{q}} < λ^P_C, where P′ is obtained from P by splitting C into singletons and the last inequality follows from Theorem 1.(ii); this contradicts the fact that the payoffs within C sum to λ^P_C. Thus, the configuration (P, Φ) is blocked by {â}.
Proof of Theorem 3: Consider a partition P = {C₁, C₂, · · · , C_k} with cardinality greater than 2. Let M be the merger coalition containing all coalitions of P except one, i.e., M := ∪_{i=2}^k C_i, and let P′ := {C₁, M}. Then, from Theorem 1.(ii),

λ^{P′}_M = λ_M > Σ_{C_i ∈ M} λ^P_{C_i},

which is the same as the condition required for blocking by mergers under the RB-IA rule. Hence, there exists a configuration/payoff vector under which each of the members of M obtains strictly more, and thus such a partition is not stable.
Proof of Theorem 4: There can be no merger from P_G, so we only need to check whether an appropriate split can block the configuration (P_G, Φ) under consideration.
(i) When N₁ < Σ_{i∈N; i≠1} N_i:

(a) We first consider payoff vectors Φ that satisfy

Σ_{i=2}^n φ_i < (1 − N₁/N) Λ.    (20)

Let S := {2, 3, · · · , n} be the coalition made of all agents except agent 1. We will prove that this coalition blocks any configuration of the form stated above. Since coalition S has more than N/2 servers, it must satisfy the following (from Theorem 1.(iii)):

λ^{P′}_S = λ_S > Λ (1 − N₁/N), where P′ := {S, {1}},

which is the same as (7). Further, from (20), λ^{P′}_S > Λ(1 − N₁/N) > Σ_{i=2}^n φ_i, which implies that (8) is also satisfied by coalition S. Hence, (P_G, Φ) is blocked by coalition S.
(b) Next, we consider payoff vectors that satisfy

Σ_{i=2}^n φ_i ≥ (1 − N₁/N) Λ.    (21)

Suppose, for the sake of obtaining a contradiction, that (P_G, Φ) is stable. Since agent 1 has the maximum number of servers, S_k := (S \ {k}) ∪ {1} has N_{S_k} > N/2 for any k ≥ 2. By Theorem 1.(iii), such coalitions satisfy (7). Thus, the stability of (P_G, Φ) implies that (8) must be violated for the same coalitions. That is, in view of Theorem 1.(iii), we have

Σ_{i∈S_k} φ_i ≥ λ_{S_k} > (Σ_{i∈S_k} N_i / N) Λ, for each k > 1.

Adding all the above inequalities for k = 2, · · · , n, we have

(n − 1) φ₁ + (n − 2) Σ_{i=2}^n φ_i > [ (n − 1) N₁ + (n − 2) Σ_{i=2}^n N_i ] (Λ/N),

which implies

φ₁ + (n − 2) Λ > [ N₁ + (n − 2) N ] (Λ/N) = (N₁/N) Λ + (n − 2) Λ,

since Σ_{i=1}^n φ_i = Λ and Σ_{i=1}^n N_i = N. Thus we have φ₁ > (N₁/N) Λ, which contradicts (21). Thus, (P_G, Φ) is unstable under the RB-IA rule.

(ii) When N₁ ≥ Σ_{i∈N; i≠1} N_i: In this case, the coalitions that satisfy condition (7) for blocking under RB-IA are exactly those that contain player 1 (from Theorem 1.(iii)). However, for any such coalition, the condition (8) for blocking under RB-IA is violated so long as φ₁ ≥ max_C λ_C, with the maximum taken over all C ⊂ N containing agent 1. Thus, any allocation Φ satisfying the above bound on φ₁ is guaranteed to be stable under RB-IA.
Proof of Theorem 5: Part (i) follows from part (ii), proved below, since k* exists.
(ii) Any 2-partition P = {C₁, C₂} cannot be blocked by mergers, since a merger leads to P_G and (9) is not satisfied. Next, we look at splits. Say C₁ ∈ C*. Then it follows from the definition of C* that there exists no coalition C ⊂ C₁ that satisfies (7). Further, coalition C₂ cannot do better by splitting. Hence, any partition with one of its coalitions belonging to C* is a stable partition under the RB-IA rule.
(iii) Once again, it is easy to verify that a merger cannot block any 2-partition P. Next, we check for splits. Any split leads to a coalition with fewer than N/2 servers; hence, from Theorem 1.(iii), (7) is not satisfied, and so no split is feasible. Thus, P is stable under the RB-IA rule.
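These split checks are easy to automate once λ_C is computable. By the economies of scale in Theorem 1.(ii), merging into a single block is plausibly the outsiders' most damaging arrangement; we assume this here, so the pessimal worth reduces to a two-coalition WE. A sketch reusing we2 (the helper name and the per-server screening comparison, which mirrors the one used in the proof of Theorem 9, are ours):

```python
def pessimal_worth(nc, n_total, lam):
    """lambda_C under the pessimal rule: the outsiders merge into one
    opposing coalition of n_total - nc servers (assumed worst case)."""
    return we2(nc, n_total - nc, lam)[0]

# Screen candidate splits C of the larger coalition of a duopoly by
# comparing anticipated per-server rates with the current one:
n1, n2, lam = 6, 4, 12.0
l1, _ = we2(n1, n2, lam)
for nc in range(1, n1):
    blocks = pessimal_worth(nc, n1 + n2, lam) / nc > l1 / n1
    print(nc, blocks)   # all False here: no split is profitable
```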
Proof of Theorem 6: (i) Consider any configuration (P_G, Φ) with the GC. The proof of this part splits into two cases.

Case 1: When N₁ < Σ_{i∈S; i≠1} N_i for some S ⊂ N. Under the RB-PA rule, for the configuration to be stable, we need to ensure that the following system of inequalities is satisfied simultaneously:

Σ_{i∈C} φ_i ≥ λ_C for all C ⊂ N, and Σ_{i∈N} φ_i = Λ.    (22)
However, a subset of these inequalities by itself admits no feasible solution (as proved in Theorem 4). Thus, the system (22) has no solution, and hence (P_G, Φ) is unstable for any payoff vector Φ.
Case 2: When N₁ ≥ Σ_{i∈N; i≠1} N_i. Once again, we need to satisfy (22) to prove that (P_G, Φ) is stable. In particular, those conditions also have to be satisfied for subsets S with |S| = n − 1. If there exists a payoff vector Φ that satisfies all such conditions, consider one such S and say j ∉ S. Then, from (22), φ_j = Λ − Σ_{i∈S} φ_i ≤ Λ − λ_S = λ_{{j}}. If φ_j < λ_{{j}} for some j, then the configuration (P_G, Φ) is blocked by {j} under the RB-PA rule. Otherwise, if φ_j = λ_{{j}} for all j ∈ N, then Σ_{i∈N} φ_i = Σ_{j∈N} λ_{{j}} < Λ, and thus (22) is not satisfied. Hence, (P_G, Φ) is unstable for any payoff vector Φ.
(ii) Since the condition required for a merger to be successful under the RB-PA rule is the same as under the RB-IA rule, the result follows from Theorem 3.
(iii) When the payoff vector is given by equation (13), the RB-PA and RB-IA rules are equivalent to each other. Thus, the result follows from Theorem 5.
Moreover, because of the continuity of Φ, we have the next result.
Proof of Theorem 7: Consider any 2-partition P = {C₁, C₂}.
(i) W.l.o.g., say coalition C₁ = {i, j}. From (14) and (15), the share of player i is given by:

φ_i = (1/2)(λ^P_{C₁} − λ^{P′}_{{j}}) + (1/2) λ^{P′}_{{i}} > λ^{P′}_{{i}} > λ_{{i}}, where P′ := {{i}, {j}, C₂}.

The first inequality holds since λ^P_{C₁} > λ^{P′}_{{i}} + λ^{P′}_{{j}}, and the second follows from Theorem 1.(ii). Thus, a split of C₁ does not block the configuration (P, Φ^P_s). Further, a merger cannot block the configuration, due to the constant-sum nature of the game.
An identical argument also applies for part (ii).
Proof of Theorem 8: Consider any 2-partition P = {C₁, C₂} with k := N_{C₁} ≥ N_{C₂}. It is easy to see that the 2-partition cannot be blocked by a merger under the RB-IA/RB-PA rules. It therefore suffices to check for stability against splits.
(i) By relaxing k to be a real number, we show in Lemma 3 that Ψ is increasing in k. Using Lemma 3, no split satisfies (7), and hence P is stable.
(ii) Under the proportional payoff vector Φ^P_p, (4) is equivalent to (7), and hence the result follows. Below, we prove Lemma 3.
Lemma 3. Consider any ϵ > 0. Then there exists a Λ̄ such that for all Λ ≥ Λ̄, Ψ := λ₁/k is strictly increasing in k over N/2 ≤ k ≤ N − ϵ.
Proof: To prove this result, we work with the analytical extension of the Erlang-B formula (see Jagerman (1974)), so that k may be treated as a real number. Under this extension, it is easy to see that the Wardrop splits are uniquely defined for real-valued service capacities. For any 2-partition, differentiating Ψ with respect to k, we have

dΨ/dk = d/dk (λ₁/k) = (1/k) ( dλ₁/dk − λ₁/k ).

Thus, to prove the lemma, it suffices to show that, given ϵ > 0, there exists a Λ̄ such that for any Λ ≥ Λ̄,

dλ₁/dk − λ₁/k > 0 for all k ∈ [N/2, N − ϵ].
Towards this, we know that the arrival rates at the WE (λ₁) are obtained by equating the blocking probabilities of the two coalitions. The reciprocal of the blocking probability of a coalition with k servers and offered load a admits the following integral representation (see Jagerman (1974)):

R(k, a) = a ∫₀^∞ h(t; a, k) dt, where h(t; a, k) = (1 + t)^k e^{−at}.
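As a quick sanity check, the integral representation can be evaluated numerically and compared against the recursive computation of 1/B (reusing erlang_b from Appendix B; the values are illustrative):

```python
import math
from scipy.integrate import quad

def recip_b_integral(k, a):
    """R(k, a) = 1/B(k, a) via Jagerman's integral representation."""
    val, _ = quad(lambda t: (1.0 + t) ** k * math.exp(-a * t), 0.0, math.inf)
    return a * val

k, a = 5, 4.0
print(recip_b_integral(k, a), 1.0 / erlang_b(k, a))  # the two agree
```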
Thus, the WE satisfies R(k, λ₁) − R(N − k, Λ − λ₁) = 0, which is equivalent to

λ₁ ∫₀^∞ h(t; λ₁, k) dt − (Λ − λ₁) ∫₀^∞ h(t; Λ − λ₁, N − k) dt = 0.    (23)
Differentiating both sides of the above with respect to k (using Lemma 7) and rearranging:

dλ₁/dk = − [ λ₁ ∫₀^∞ h(t; λ₁, k) ln(1+t) dt + (Λ−λ₁) ∫₀^∞ h(t; Λ−λ₁, N−k) ln(1+t) dt ] / [ ∫₀^∞ h(t; λ₁, k) dt + ∫₀^∞ h(t; Λ−λ₁, N−k) dt − λ₁ ∫₀^∞ h(t; λ₁, k) t dt − (Λ−λ₁) ∫₀^∞ h(t; Λ−λ₁, N−k) t dt ].
Observe that each of the integrals in the above expression is of the form ∫₀^∞ f(t; k) e^{−λ₁ t} dt or ∫₀^∞ f(t; k) e^{−(Λ−λ₁) t} dt. In heavy traffic, since λ₁ and Λ − λ₁ tend to infinity (see Lemma 4 below), the value of these integrals is dominated by the behavior of the integrand around zero. Accordingly, one can approximate these integrals using a Taylor expansion of f(t) around t = 0. Formally, using Lemma 5 below (it is easy to show that all the integrals above satisfy the hypotheses of Lemma 5), we have dλ₁/dk = −T₁/T₂, where

T₁ = λ₁ [ 1/λ₁² + (2k − 1)/λ₁³ + (3k² − 6k + 2)/λ₁⁴ ] + (Λ−λ₁) [ 1/(Λ−λ₁)² + (2(N−k) − 1)/(Λ−λ₁)³ + (3(N−k)² − 6(N−k) + 2)/(Λ−λ₁)⁴ ] + o(1/Λ²),

T₂ = 1/λ₁ + k/λ₁² + k(k−1)/λ₁³ + k(k−1)(k−2)/λ₁⁴ + 1/(Λ−λ₁) + (N−k)/(Λ−λ₁)² + (N−k)(N−k−1)/(Λ−λ₁)³ + (N−k)(N−k−1)(N−k−2)/(Λ−λ₁)⁴ − λ₁ [ 1/λ₁² + 2k/λ₁³ + 3k(k−1)/λ₁⁴ + 4k(k−1)(k−2)/λ₁⁵ ] − (Λ−λ₁) [ 1/(Λ−λ₁)² + 2(N−k)/(Λ−λ₁)³ + 3(N−k)(N−k−1)/(Λ−λ₁)⁴ + 4(N−k)(N−k−1)(N−k−2)/(Λ−λ₁)⁵ ] + o(1/Λ³).
Simplifying the above expression, subtracting λ₁/k, and carrying out some further simplification, dλ₁/dk − λ₁/k equals

[ k/(Λ−λ₁) − (N−k)λ₁/(Λ−λ₁)² + T₃ + o(1/Λ²) ] / D = [ (k(N−k)/(Λ−λ₁)²) ( (Λ−λ₁)/(N−k) − λ₁/k ) + T₃ + o(1/Λ²) ] / D,

where D := k [ k/λ₁² + 2k(k−1)/λ₁³ + 3k(k−1)(k−2)/λ₁⁴ + (N−k)/(Λ−λ₁)² + 2(N−k)(N−k−1)/(Λ−λ₁)³ + 3(N−k)(N−k−1)(N−k−2)/(Λ−λ₁)⁴ ] + o(1/Λ³), and T₃ collects the remaining lower-order terms of the numerator. From Lemma 4, it follows that λ₁/Λ = k/N + o(1) and (Λ−λ₁)/Λ = (N−k)/N + o(1) as Λ → ∞, with the o(1) terms being uniform over k ∈ [N/2, N − ϵ]; further, from Lemma 6, (Λ−λ₁)/(N−k) − λ₁/k = 1/k − 1/(N−k) + o(1), with the o(1) term again being uniform over k ∈ [N/2, N − ϵ]. Now, multiplying the numerator and the denominator above by Λ² and applying these results, we obtain

lim_{Λ→∞} ( dλ₁/dk − λ₁/k ) = [ N²/k + kN²/(N−k)² − (kN²/(N−k)) ( 1/(N−k) − 1/k ) ] / [ kN² ( 1/k + 1/(N−k) ) ] = 1/k > 0.
Observe that the above limit is uniform over k ∈ [N/2, N − ϵ].
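At integer k, the monotonicity asserted by Lemma 3 can be observed directly with the WE solver from Appendix B (the heavy-traffic values below are illustrative):

```python
N, lam = 20, 400.0   # heavy traffic: offered load of 20 per server
psi = [we2(k, N - k, lam)[0] / k for k in range(N // 2, N)]
print(all(b > a for a, b in zip(psi, psi[1:])))  # Psi increasing in k
```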
Lemma 4. λ₁/Λ → k/N and (Λ−λ₁)/Λ → (N−k)/N uniformly over k ∈ [N/2, N] as Λ → ∞.
Proof: We know that the blocking probability of a coalition with s servers and offered load a is bounded, as Λ → ∞, as follows (see Harel (1988)): 1 − 1/ρ < B(s, a) < ρ/(1+ρ), where ρ = a/s > 0. Using the upper bound for the larger coalition and the lower bound for the smaller coalition, the arrival rate λ₁ at the WE can be lower bounded by λ̲₁, which satisfies

(λ̲₁/k) / (1 + λ̲₁/k) = 1 − 1/((Λ−λ̲₁)/(N−k))  ⟹  λ̲₁ = k(Λ − N + k)/N.

Next, using the upper bound for the smaller coalition and the lower bound for the larger coalition, we obtain an upper bound λ̄₁ on λ₁ as follows:

((Λ−λ̄₁)/(N−k)) / (1 + (Λ−λ̄₁)/(N−k)) = 1 − 1/(λ̄₁/k)  ⟹  λ̄₁ = k(Λ + N − k)/N.
From the above, we obtain the following bounds on λ₁: k(Λ−N+k)/N ≤ λ₁ ≤ k(Λ+N−k)/N. It now follows that

| λ₁/Λ − k/N | ≤(a) k(N−k)/(NΛ) ≤(b) N/Λ

(the above bounds on λ₁ lead to inequality (a), while the bound (b) is obvious), which implies that lim_{Λ→∞} | λ₁/Λ − k/N | = 0 uniformly over k ∈ [N/2, N]. This implies the result.
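Lemma 4 is easy to visualize numerically: as Λ grows, the WE share of a coalition approaches its proportional share (again reusing we2 from Appendix B):

```python
N, k = 10, 7
for lam in (1e2, 1e3, 1e4):
    l1, _ = we2(k, N - k, lam)
    print(lam, l1 / lam)   # converges to k/N = 0.7
```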
Lemma 5. Suppose f is m-times differentiable on [0, ∞), such that f^{(m)}(t; k) is non-negative, monotonically increasing, and f^{(m)}(t; k) ≤ c₁ + c₂ t^N for all t ≥ 0 and k ∈ [N/2, N − ϵ], for some positive scalars c₁ and c₂. Further, suppose Λ/2 ≤ λ₁(k) ≤ Λ for all k. Then

∫₀^∞ f(t; k) e^{−λ₁(k)t} dt = Σ_{j=0}^{m−1} f^{(j)}(0; k) ∫₀^∞ (t^j/j!) e^{−λ₁(k)t} dt + o(1/Λ^{m−1}), with f^{(0)}(·; k) = f(·; k),

as Λ → ∞. Here, the o(1/Λ^{m−1}) error is uniform over k ∈ [N/2, N − ϵ], for any ϵ > 0.

Proof: Using the Taylor expansion of f(t; k) around t = 0 (for any t),

∫₀^∞ f(t; k) e^{−λ₁(k)t} dt = Σ_{j=0}^{m−1} f^{(j)}(0; k) ∫₀^∞ (t^j/j!) e^{−λ₁(k)t} dt + ∫₀^∞ f^{(m)}(c(t); k) (t^m/m!) e^{−λ₁(k)t} dt,

for some c(t) strictly between 0 and t. Observe that the residual term above can be upper bounded as

∫₀^∞ f^{(m)}(c(t); k) (t^m/m!) e^{−λ₁(k)t} dt ≤ ∫₀^∞ f^{(m)}(t; k) (t^m/m!) e^{−λ₁(k)t} dt,

since the m-th derivative of f(·; k) is strictly monotonically increasing in t and c(t) ≤ t. Under the hypotheses of this lemma, and since λ₁(k) ≥ Λ/2, this bound is independent of k ∈ [N/2, N − ϵ] and can be further upper bounded:

∫₀^∞ f^{(m)}(t; k) (t^m/m!) e^{−(Λ/2)t} dt ≤ ∫₀^∞ (c₁ + c₂ t^N) (t^m/m!) e^{−(Λ/2)t} dt = o(1/Λ^{m−1}),

where the last equality follows from simple calculations (involving the gamma function).
Lemma 6. For any ϵ > 0, λ₁/k − (Λ−λ₁)/(N−k) → 1/(N−k) − 1/k uniformly over k ∈ [N/2, N − ϵ] as Λ → ∞.
Proof: Observe that (23) coincides with the WE equation for integral values of k and N − k. Relaxing k to be a real-valued number with k ∈ [N/2, N − ϵ], observe that each integral in (23) is of the form ∫₀^∞ f(t; k) e^{−λ₁ t} dt or ∫₀^∞ f(t; k) e^{−(Λ−λ₁) t} dt. In heavy traffic, since λ₁ and Λ − λ₁ tend to infinity (see Lemma 4), the value of these integrals is dominated by the behavior of the integrand around zero. Accordingly, one can approximate these integrals using a Taylor expansion of f(t; k) around t = 0. Formally, using Lemma 5 (it is easy to show that all the above integrals satisfy its hypotheses) and solving the non-negligible integrals, equation (23) can be re-written as

1 + k/λ₁ + k(k−1)/λ₁² = 1 + (N−k)/(Λ−λ₁) + (N−k)(N−k−1)/(Λ−λ₁)² + o(1/Λ²),

with the o(1/Λ²) term being uniform over k ∈ [N/2, N − ϵ]. Simplifying the above using Lemma 4 (e.g., o(1/(Λ−λ₁)) = o(1/Λ)), and using Λ·o(1/Λ²) = o(1/Λ),

(k/λ₁) ( 1 + (k−1)/λ₁ ) = ((N−k)/(Λ−λ₁)) ( 1 + (N−k−1)/(Λ−λ₁) ) + o(1/Λ)  ⟹  λ₁/k = ((Λ−λ₁)/(N−k)) · ( 1 + (k−1)/λ₁ ) / ( 1 + (N−k−1)/(Λ−λ₁) ) + o(1/Λ).
Subtracting (Λ−λ₁)/(N−k) from both sides of the above equation, we have

λ₁/k − (Λ−λ₁)/(N−k) = ((Λ−λ₁)/(N−k)) · [ (k−1)/λ₁ − (N−k−1)/(Λ−λ₁) − o(1/Λ) ] / [ 1 + (N−k−1)/(Λ−λ₁) + o(1/Λ) ].
Note that, as Λ → ∞, the denominator of the above expression goes to 1. Further, multiplying and dividing by Λ and using Lemma 4, we have (observe that all errors converge uniformly in k)

lim_{Λ→∞} [ λ₁/k − (Λ−λ₁)/(N−k) ] = (1/N) [ ((k−1)/k) N − ((N−k−1)/(N−k)) N ] = 1/(N−k) − 1/k.
Lemma 7. While differentiating (23), the derivative and the integral can be interchanged.
… (coalition C can block payoff vector Φ if, for every correlated strategy of the players in N − C, there exists a correlated strategy of the players in C which leaves them better off), and the max rule in Bloch et al. (2014) (the opponents/players in N − C are anticipated to arrange themselves in a partition that maximizes their own utilities). Interestingly, the pessimistic rule coincides with the above-mentioned anticipation rules for our constant sum game, mainly because of the economies of scale established in Theorem 1.(ii). There are other anticipation rules that do not coincide with the pessimal rule; for example, the optimistic rule (opponents are anticipated to arrange themselves in such a way that the deviating coalition obtains the best utility) in Bloch et al. (2014), the Cournot Nash equilibrium rule (opponents are anticipated to remain in their old coalitions) in Martins-da-Rocha et al. (2011), etc. However, the impossibility result established in Theorem 2 also implies impossibility under these rules: if any coalition Q anticipates a higher utility than what its members derive in the current configuration under the pessimal rule (3), it would also anticipate a higher utility using any other anticipation rule.
… in the grand coalition, it does not have an incentive to deviate, either alone or as part of a group.

Duopolies: We are now left to examine the stability of duopolies, i.e., 2-partitions, under the RB-IA rule. Duopolies can, without loss of generality, be represented as P = {C₁, C₂}, with k := N_{C₁} ≥ N_{C₂}. In the following, we identify a family of stable duopolies under the RB-IA rule.
… (irrespective of the associated payoff vector) are also part of stable configurations under RB-PA, but under a restricted class of payoff vectors. Specifically, the payoff vectors we identify are 'close' to proportional allocations. Next, we investigate other natural payoff structures that also induce stability under RB-PA. In particular, we consider a payoff vector inspired by the classical Shapley value.

Shapley value: The Shapley value is one of the well-known sharing concepts used in cooperative game theory (Narahari (2014)). We begin by defining an extended version of the Shapley value for …

… (a) for any consistent payoff vector Φ, the configuration (P, Φ) cannot be blocked by a split under RB-IA, and (b) the configuration (P, Φ^P_p) cannot be blocked by a split under RB-PA. These statements in turn follow from the fact that, in heavy traffic, the per-server offered load Ψ(k) of the larger coalition increases monotonically with the number of servers k it possesses, i.e., its …

… (i) P is a stable partition under RB-IA if and only if P ∈ 𝒫, and (ii) (P, Φ^P_p) is a stable configuration under RB-PA if and only if P ∈ 𝒫. Theorem 9 highlights that the 2-partitions that are stable under RB-IA, and that form stable configurations (with the proportional payoff vector) under RB-PA, are those where the service capacities of the two coalitions are nearly matched. Formally, the larger coalition C₁ should not have a …

… economies of scale induce stability in all duopolies, including those that have coalitions with highly asymmetric service capacities. This suggests that, in general, at moderate congestion levels, the per-server utility of the larger (as before, by service capacity) coalition peaks at an intermediate value of k between N/2 and N, encouraging the formation of moderately asymmetric duopolies.
Table 1. Unstable partitions under RB-PA for different allocation rules, with N₂ = N₃ = N₄ = 2.

N₁        | Unstable under proportional payoff | Unstable under Shapley value
10 − 17   | w ∈ {14, · · · , 21}               | None
18 − 40   | w ∈ {20, · · · , 44}               | None

Figure 1. Set of stable partitions (under RB-IA) v/s Λ (on log scale) for N₁ = 7, N₂ = N₃ = N₄ = N₅ = 2.

Impact of congestion: In Figure 1, we consider a final example that demonstrates how the set of stable partitions under RB-IA varies with the market size Λ. Here, we consider five service providers with service capacities N₁ = 7, N₂ = N₃ = N₄ = N₅ = 2. Note that the left and right extremes in the figure are consistent with the light-traffic and heavy-traffic results (Theorems 9 …
RB-PA rule: Next, we study the RB-PA rule. Our aim is first to compare the stability properties of two allocation mechanisms: proportional allocation and the Shapley value. Consider the following example with Λ = 13 and 4 service providers. Here, N₁ is varied from 2 to 41, while the remaining service capacities are fixed at N₂ = N₃ = N₄ = 2. Table 1 presents the set of 2-partitions that are unstable under each allocation mechanism. Here, w denotes the number of servers in the coalition that includes provider 1. For example, the second row considers the cases where N₁ lies between 10 and 17. In all these cases, the proportional payoff vector renders those 2-partitions with w ∈ {14, 15, · · · , 21} unstable, whereas all 2-partitions are stable under the Shapley value. This suggests that the Shapley value renders more partitions stable in comparison to the proportional payoff vector.
Acknowledgments
The first author's work is partially supported by the Prime Minister's Research Fellowship (PMRF), India.

Proof of Lemma 7: Since the blocking probability of any coalition increases with an increase in the arrival rate, the derivative of the left-hand side of (23) with respect to λ₁(k) is not zero. Thus, using the Implicit Function Theorem, we obtain that λ₁(k) is a continuously differentiable function of k, and hence it is sufficient to consider limits of the form lim_{h→0} of the corresponding difference quotients. Consider any h ∈ (0, h̄]. By the Mean Value Theorem, there exists a k̃ ∈ (k, k + h) such that the difference quotient equals the derivative of the integrand evaluated at k̃. The resulting upper bound is integrable, and hence the result follows by Lebesgue's Dominated Convergence Theorem.

Proof of Theorem 9: (i) Consider any 2-partition P = {C₁, C₂} ∈ 𝒫 and Φ ∈ Φ_P. From Theorem 5.(iii), P is stable under the RB-IA rule. Now, consider a 2-partition P = {C₁, C₂} ∉ 𝒫. This implies that there exists a C ⊂ C₁ such that N_{C₁} > N_C > N/2. We will show that coalition C blocks the configuration (P, Φ^P_p). From Lemma 8, we have λ^P_{C₁}/(Λ N_{C₁}) → 1/N_{C₁} and λ_C/(Λ N_C) → 1/N_C > 1/N_{C₁} as Λ → 0. Thus, there exists a Λ₀ > 0 such that for any Λ ≤ Λ₀, λ_C/N_C > λ^P_{C₁}/N_{C₁}. It now follows that coalition C satisfies condition (7) for blocking. Moreover, under the proportional payoff vector Φ^P_p, (7) implies (8). This means that C blocks the configuration (P, Φ^P_p), which in turn implies that P is not a stable partition under the RB-IA rule. (ii) Under the proportional payoff vector Φ^P_p, (4) is equivalent to (7), and hence the result under RB-PA follows along similar lines.

Lemma 8. Consider a coalition C such that N_C > N/2; then λ_C/(Λ N_C) → 1/N_C (equivalently, λ_C/Λ → 1) as Λ → 0.

Proof: Let λ₁ = λ_C. It is sufficient to show that λ₁/(Λ − λ₁) → ∞ as Λ → 0. In light traffic, the reciprocals of the blocking probabilities of the two coalitions (with k := N_C and N − k servers, respectively) are equal at the WE; using B(s, a) ≈ a^s/s! as a → 0, this yields

((Λ − λ₁)/λ₁)^{N−k} = ((N − k)!/k!) λ₁^{2k−N} (1 + o(1)).

With Λ → 0, λ₁^{2k−N} → 0 while the remaining factor on the R.H.S. is a finite constant; this implies lim_{Λ→0} λ₁/(Λ − λ₁) = ∞. Now observe that for a 2-partition P = {C₁, C₂} with N_{C₁} > N_{C₂}, we have N_{C₁} > N/2, and hence the result follows.
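The light-traffic behavior in Lemma 8 is the mirror image of Lemma 4 and can be checked with the same WE solver from Appendix B: as Λ → 0, the coalition holding a majority of the servers attracts essentially all of the traffic.

```python
N, k = 10, 7          # k > N/2
for lam in (1.0, 0.1, 0.01):
    l1, _ = we2(k, N - k, lam)
    print(lam, l1 / lam)   # approaches 1 as lam -> 0
```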
Cooperation in service systems. S Anily, M Haviv, Operations Research. 583Anily, S. and Haviv, M (2010) Cooperation in service systems. Operations Research, 58(3), pp.660-673.
Homogeneous of degree one games are balanced with applications to service systems. S Anily, M Haviv, Tel Aviv University, Faculty of Management, The Leon Recanati Graduate School of Business AdministrationAnily, S. and Haviv, M (2011) Homogeneous of degree one games are balanced with applications to service systems. Tel Aviv University, Faculty of Management, The Leon Recanati Graduate School of Business Administration.
Subadditive and homogeneous of degree one games are totally balanced. S Anily, M Haviv, Operations Research. 624Anily, S. and Haviv, M (2014) Subadditive and homogeneous of degree one games are totally balanced. Operations Research, 62(4), pp.788-793.
The core of a cooperative game without side payments. Robert J Aumann, Transactions of the American Mathematical Society. 983Aumann, Robert J (1961) The core of a cooperative game without side payments. Transactions of the American Mathematical Society, vol. 98, no. 3, pp. 539-552.
Cooperative games with coalition structures. Robert J Aumann, Jacques H Dreze, International Journal of game theory. 34Aumann, Robert J and Dreze, Jacques H (1974) Cooperative games with coalition structures. International Journal of game theory, vol. 3, no. 4, pp. 217-237.
Sequential formation of coalitions in games with externalities and fixed payoff division. F Bloch, Games and economic behavior. 141Bloch, F (1996) Sequential formation of coalitions in games with externalities and fixed payoff division. Games and economic behavior, 14(1), pp.90-123.
Expectation formation rules and the core of partition function games. Francis Bloch, Anne Van Den Nouweland, Games and Economic Behavior. 88Bloch, Francis and Van den Nouweland, Anne (2014) Expectation formation rules and the core of partition function games. Games and Economic Behavior, vol. 88, pp. 339-353.
José Correa, Stier-Moses, E Nicolás, Wardrop equilibria. Wiley encyclopedia of operations research and management science. Correa, José R and Stier-Moses, Nicolás E (2010) Wardrop equilibria. Wiley encyclopedia of operations research and management science.
Cooperation in Markovian queueing models. M D García-Sanz, F R Fernández, M G Fiestras-Janeiro, I García-Jurado, J Puerto, European Journal of Operational Research. 1882García-Sanz, M.D., Fernández, F.R., Fiestras-Janeiro, M.G., García-Jurado, I. and Puerto, J (2008) Coop- eration in Markovian queueing models. European Journal of Operational Research, 188(2), pp.485-495.
Optimal sharing of surgical costs in the presence of queues. P González, C Herrero, Mathematical Methods of Operations Research. 59González, P. and Herrero, C (2004) Optimal sharing of surgical costs in the presence of queues. Mathematical Methods of Operations Research, 59, pp.435-446.
Efficiency in coalition games with externalities. Isa E Hafalir, Games and Economic Behavior. 612Hafalir, Isa E (2007) Efficiency in coalition games with externalities. Games and Economic Behavior, 61(2), pp.242-258.
Coalition formation games: A survey. Jana Hajduková, International Game Theory Review. 804Hajduková, Jana (2006) Coalition formation games: A survey. International Game Theory Review, vol. 8, no. 04, pp. 613-641.
Sharp bounds and simple approximations for the Erlang delay and loss formulas. A Harel, Management Science. 348Harel, A (1988) Sharp bounds and simple approximations for the Erlang delay and loss formulas. Management Science, 34(8), pp. 959-972.
Some properties of the Erlang loss function. D Jagerman, Bell System Technical Journal. 533Jagerman, D.L (1974) Some properties of the Erlang loss function. Bell System Technical Journal, 53(3), pp.525-551.
Inventory pooling games for expensive, low-demand spare parts. F Karsten, M Slikker, G Van Houtum, Naval Research Logistics (NRL). 595Karsten, F., Slikker, M. and van Houtum, G.J (2012) Inventory pooling games for expensive, low-demand spare parts. Naval Research Logistics (NRL), 59(5), pp.311-324.
Domain extensions of the Erlang loss function: Their scalability and its applications to cooperative games. F Karsten, M Slikker, G Van Houtum, Probability in the Engineering and Informational Sciences. 284Karsten, F., Slikker, M. and van Houtum, G.J (2014) Domain extensions of the Erlang loss function: Their scalability and its applications to cooperative games. Probability in the Engineering and Informational Sciences, 28(4), pp.473-488.
Resource pooling and cost allocation among independent service providers. Frank Karsten, Marco Slikker, Van Houtum, Operations Research. 632INFORMSKarsten, Frank and Slikker, Marco and Van Houtum, Geert-Jan (2015) Resource pooling and cost allocation among independent service providers. Operations Research, vol. 63, no. 2, pp. 476-488, INFORMS.
Non-emptiness of the alpha-core. Fundação Getulio Vargas. Escola de Pós-graduação em Economia. Victor Martins-Da-Rocha, Filipe, Nicholas C Yannelis, Martins-da-Rocha, Victor Filipe and Yannelis, Nicholas C (2011) Non-emptiness of the alpha-core. Fundação Getulio Vargas. Escola de Pós-graduação em Economia.
Game theory and mechanism design. Y Narahari, World Scientific4Narahari, Y (2014) Game theory and mechanism design. vol. 4. (World Scientific).
On the core of cooperative queueing games. U Ozen, M I Reiman, Q Wang, Operations Research Letters. 395Ozen, U., Reiman, M.I. and Wang, Q (2011) On the core of cooperative queueing games. Operations Research Letters, 39(5), pp.385-389.
A theory of endogenous coalition structures. D Ray, R Vohra, Games and economic behavior. 262Ray, D. and Vohra, R (1999) A theory of endogenous coalition structures. Games and economic behavior, 26(2), pp.286-336.
. W Rudin, Principles of mathematical analysis. 3McGraw-hillRudin, W (1976) Principles of mathematical analysis (vol. 3) (New York: McGraw-hill).
Coalitional games in partition form for joint spectrum sensing and access in cognitive radio networks. W Saad, Z Han, R Zheng, A Hjorungnes, T Basar, H Poor, IEEE Journal of Selected Topics in Signal Processing. 62Saad, W., Han, Z., Zheng, R., Hjorungnes, A., Basar, T. and Poor, H.V (2011) Coalitional games in partition form for joint spectrum sensing and access in cognitive radio networks. IEEE Journal of Selected Topics in Signal Processing, 6(2), pp.195-209.
Coalitional game theory for communication networks. IEEE Signal Processing Magazine. W Saad, Z Han, M Debbah, A Hjorungnes, T Basar, 26
Coalition Formation Resource Sharing Games in Networks. Shiksha Singhal, Veeraruna Kavitha, Performance Evaluation. 152Shiksha Singhal and Veeraruna Kavitha (2021) Coalition Formation Resource Sharing Games in Networks. Performance Evaluation, vol. 152, 102239, ISSN 0166-5316.
Coalition formation in constant sum queueing games. S Singhal, V Kavitha, J Nair, 2021 60th IEEE Conference on Decision and Control (CDC). IEEESinghal, S., Kavitha, V. and Nair, J (2021) Coalition formation in constant sum queueing games. In 2021 60th IEEE Conference on Decision and Control (CDC) (pp. 3812-3817). IEEE.
N-person games in partition function form. R M Thrall, W Lucas, Naval Research Logistics Quarterly. 101Thrall, R.M. and Lucas, W.F (1963) N-person games in partition function form. Naval Research Logistics Quarterly, 10(1), pp.281-298.
How to share the cost of cooperating queues in a tandem network. J Timmer, W Scheinhardt, In 2010 22nd International Teletraffic Congress (ITC 22). IEEETimmer, J. and Scheinhardt, W (2010) How to share the cost of cooperating queues in a tandem network?. In 2010 22nd International Teletraffic Congress (ITC 22) (pp. 1-7). IEEE.
Stable coalition structures with externalities. S Yi, Games and economic behavior. 202Yi, S.S (1997) Stable coalition structures with externalities. Games and economic behavior, 20(2), pp.201-237.
Capacity sharing and cost allocation among independent firms with congestion. Y Yu, S Benjaafar, Y Gerchak, Production and Operations Management24Yu, Y., Benjaafar, S. and Gerchak, Y (2015) Capacity sharing and cost allocation among independent firms with congestion. Production and Operations Management, 24(8), pp.1285-1310.
| [] |
[
"CAVL: Learning Contrastive and Adaptive Representations of Vision and Language",
"CAVL: Learning Contrastive and Adaptive Representations of Vision and Language"
] | [
"Shentong Mo \nCarnegie Mellon University\n\n",
"Jingfei Xia \nCarnegie Mellon University\n\n",
"Ihor Markevych \nCarnegie Mellon University\n\n"
] | [
"Carnegie Mellon University\n",
"Carnegie Mellon University\n",
"Carnegie Mellon University\n"
] | [] | Visual and linguistic pre-training aims to learn vision and language representations together, which can be transferred to visual-linguistic downstream tasks. However, there exists semantic confusion between language and vision during the pre-training stage. Moreover, current pretrained models tend to take lots of computation resources for fine-tuning when transferred to downstream tasks. In this work, we present a simple but effective approach for learning Contrastive and Adaptive representations of Vision and Language, namely CAVL. Specifically, we introduce a pair-wise contrastive loss to learn alignments between the whole sentence and each image in the same batch during the pre-training process. At the fine-tuning stage, we introduce two lightweight adaptation networks to reduce model parameters and increase training speed for saving computation resources. We evaluate our CAVL on six main down- | 10.48550/arxiv.2304.04399 | [
"https://export.arxiv.org/pdf/2304.04399v1.pdf"
] | 258,048,942 | 2304.04399 | 36b8a6955cba7c2c0325251504522fba1ac444d1 |
CAVL: Learning Contrastive and Adaptive Representations of Vision and Language
Shentong Mo
Carnegie Mellon University
Jingfei Xia
Carnegie Mellon University
Ihor Markevych
Carnegie Mellon University
CAVL: Learning Contrastive and Adaptive Representations of Vision and Language
Visual and linguistic pre-training aims to learn vision and language representations together, which can be transferred to visual-linguistic downstream tasks. However, there exists semantic confusion between language and vision during the pre-training stage. Moreover, current pretrained models tend to take lots of computation resources for fine-tuning when transferred to downstream tasks. In this work, we present a simple but effective approach for learning Contrastive and Adaptive representations of Vision and Language, namely CAVL. Specifically, we introduce a pair-wise contrastive loss to learn alignments between the whole sentence and each image in the same batch during the pre-training process. At the fine-tuning stage, we introduce two lightweight adaptation networks to reduce model parameters and increase training speed for saving computation resources. We evaluate our CAVL on six main downstream tasks.
Introduction
Visual and language representations pre-training [24, 27, 40] has been an active research area in the multi-modal community. This is because it allows downstream tasks to leverage features from available pre-trained models, achieving results comparable to the state of the art without spending significant compute on modeling the language and visual distributions from scratch. With a better pre-training model, such representations can be used in a variety of areas in the visual and language fields, such as Visual Question Answering (VQA) [10] and Visual Commonsense Reasoning (VCR) [50]. In the various architectures designed for different visual-linguistic tasks, a key point is to aggregate the multi-modal information from both the visual and linguistic domains. However, there exists semantic confusion between vision and language at the pre-training stage, that is, misalignment between objects/entities or images/text. Another problem is that, when transferred to downstream tasks, pre-trained models tend to take much training time and many resources for fine-tuning.

* These authors contributed equally to this work.
In this work, we propose a simple but effective framework for learning Contrastive and Adaptive representations of Vision and Language, called CAVL, which involves contrastive pre-training and adaptive fine-tuning. Specifically, to eliminate the misalignment between language and vision during pre-training, we apply a Pair-wise Contrastive Loss (PwCL) to learn alignments between the whole sentence and each image, where we maximize the cosine similarity of visual and linguistic embeddings from correct pairs while minimizing the cosine similarity of embeddings from false pairs. To further reduce the training time needed at the fine-tuning stage, we introduce two lightweight adaptation networks in our CAVL to learn adaptive representations. One adapter uses a shortcut block to obtain task-specific features and merges generalized features from the pre-trained model in the output block; the other adapter applies a bottleneck structure after the attention and feed-forward modules in each BERT layer. Our CAVL not only greatly reduces the number of training parameters but also maintains performance at a competitive level.
We conduct extensive experiments on four main downstream tasks: VQA, VCR, Natural Language for Visual Reasoning, and Region-to-Phrase Grounding. Compared to previous state-of-the-art models [7,24,27,38,40,42], our CAVL achieves comparable or even better performance when transferred to visual-linguistic downstream tasks. Contrastive pre-training assists our CAVL in better understanding the relationship between image and text which improves results on downstream tasks. Ablation studies on adaptive fine-tuning also demonstrate the effectiveness and efficiency of the proposed adaptive fine-tuning in saving computation resources.
Overall, our main contributions in this work can be summarized as follows:
• We propose a simple but effective approach for learning alignments between visual and linguistic representations during pre-training, namely CAVL.
• We present two lightweight adaptation networks in our CAVL to further ease the need for large computation resources at the fine-tuning stage.
• Our CAVL achieves superior results when transferred to six main visual-linguistic downstream tasks.
• Extensive ablation studies demonstrate the efficiency of adaptive fine-tuning in reducing training parameters while achieving comparable performance.
Related Work
Visual representations pre-training. In recent years, visual representations pre-training has been applied to many downstream tasks, such as image classification, object detection, and segmentation. Typically, contrastive self-supervised learning [4-6, 17, 29-31] is one of the popular methods for learning meaningful visual representations. Most previous methods learn visual representations from text paired with images in unsupervised, self-supervised, weakly supervised, or fully supervised manners. Since language and vision can share similar semantic meaning, CLIP [34] is a commonly-used neural network trained on a variety of (image, text) pairs for learning transferable visual representations from natural language supervision. With the instruction of natural language and task-agnostic optimization, CLIP can predict the most relevant text snippet given an image, which is similar to the zero-shot capabilities of GPT-2 [36] and GPT-3 [3]. Huo et al. [14] apply a cross-modal contrastive learning framework called BriVL for image-text pre-training. Unlike CLIP, which adopts a simple contrastive learning method, they make use of MoCo [11] for the cross-modal scenario by building a large queue-based dictionary to introduce more negative samples for pre-training. In this work, misalignments between visual and linguistic embeddings are mitigated by a pair-wise contrastive loss at the pre-training stage for the multi-modal scenario.
Linguistic representations pre-training. In the language modeling literature, there are two main frameworks for linguistic representations pre-training: BERT [8] and GPT [35]. BERT [8] is a transformer-based [43] pre-trained model, which advanced state-of-the-art results for many natural language processing tasks with its self-attention module. GPT [35] is another language modeling architecture based on transformers [43]. Input for GPT models is represented at the byte level (with the exception of spaces), which allows handling variable vocabularies and dealing with tokens unseen during training. In this work, we introduce a simple but effective framework based on BERT and evaluate our network on four main visual-linguistic downstream tasks.
Fusion of visual and linguistic representations pre-training. There is a large body of work [7, 12, 15, 22, 24, 25, 27, 33, 39-42, 48, 49, 51] focusing on the fusion of visual and linguistic representations pre-training. Typically, UNITER [7] aims at learning joint image-text embeddings by using a transformer on multi-modal inputs, with Masked Language Modeling (MLM), Masked Region Modeling, Image-Text Matching, and Word-Region Alignment as pre-training tasks. ERNIE-ViL [49] uses the joint distributions of vision and language with scene graphs of visual tasks to predict nodes of different types in the scene graph parsed from the sentence. LXMERT [42] is a transformer-based model with three encoders: an object relationship encoder, a language encoder, and a cross-modality encoder. VisualBERT [24] is a simple and flexible framework for vision-and-language tasks, which uses the self-attention module in the BERT structure to combine the image embeddings in vision and the text embeddings in language. ViLBERT [27] extends the traditional BERT by using two parallel BERT-type models that operate over text segments and image regions. VL-BERT [40] is another BERT-based model that takes regions of interest from images and sub-word information, and pre-trains the model by predicting masked words with image clues and masked regions with text clues. To address the noisy label and domain bias problems, CVLP [38] introduces a contrastive loss in the visual branch to discriminate between positive examples and negative ones. More recently, DocFormer [2] was proposed to enforce multi-modal interaction between visual, text, and spatial features for Visual Document Understanding. In contrast, we adopt a pair-wise contrastive loss in both the visual and linguistic branches to eliminate the misalignment between the whole sentence and each image during pre-training. Note that our CAVL is different from a recent work, ConVIRT [52], in two major ways. 1) Similar to CLIP, ConVIRT applies the contrastive loss to train a model that performs better in medical image classification with natural language supervision, while in this work we focus on solving the misalignment problems in the visual and linguistic area. 2) ConVIRT generates visual and textual embeddings via an image encoder and a text encoder, separately. However, we combine the two encoders into a single BERT model and show that pair-wise contrastive learning enables the model to learn better-aligned representations of image and text. Moreover, our PwCL loss differs from the VLM loss proposed in Unicoder-VL [23] in two ways. 1) Unicoder-VL introduces FC layers to predict the score between the whole image and the sentence, while our PwCL calculates the cosine similarity by a simple dot product without any additional parameters.
2) The VLM loss samples just one negative sample (image or caption) and applies a cross-entropy loss as binary classification to learn the alignments. In contrast, we have B² − B negative samples in each batch of size B during pre-training, such that the misalignment problems are addressed more fully.
Adaptation networks in transformers. Adaptation is an important technique when fine-tuning the BERT model, as it allows us to achieve promising results by updating far fewer parameters with less time and fewer computation resources. This motivates researchers to use efficient adaptation methods in transformer-based models. MAD-X [32] adapts a multilingual model to arbitrary tasks and languages, where the authors propose an invertible adapter and show good performance on different languages and tasks. Houlsby et al. [13] propose an intermediate layer inside transformer layers and train only the intermediate layer, with all other parameters frozen. In this work, we present two lightweight adapters to achieve competitive performance on domain-specific tasks with significantly reduced fine-tuning parameters and costs.
Method
Preliminary: BERT
BERT [8], which stands for Bidirectional Encoder Representations from Transformers, is a model that uses word tokens as input and optimizes language modeling objectives. All input word tokens are mapped to a set of embeddings and passed through several "encoder-style" blocks to generate representations. The input embeddings are calculated as the sum of a token embedding, a segment embedding, and a position embedding. The input embeddings are then passed through a multi-layer transformer, where each layer consists of two modules: multi-head attention and a feed-forward module. Each module is followed by a fully connected network, and together they are wrapped in a residual addition.
There are two main steps in the BERT training process: pre-training and fine-tuning. Two language modeling objectives are used for the pre-training step: (1) Masked Language Modeling (MLM): the embedding of a certain input word is randomly masked out, and BERT is used to predict the masked words using the unmasked ones. (2) Next Sentence Prediction (NSP): given two sentences, BERT learns to identify whether they are consecutive sentences from one document. Finally, in order to apply BERT to a particular task, we introduce a task-specific input, an output layer, and an objective, and fine-tune the model with a task-specific dataset.
CAVL
In this part, we present a simple but effective approach for learning contrastive and adaptive representations of vision and language, namely CAVL, which consists of contrastive pre-training and adaptive fine-tuning. Specifically, contrastive pre-training is applied to mitigate the semantic confusion between visual and linguistic representations at the pre-training stage. When transferred to downstream tasks during the fine-tuning process, adaptive fine-tuning eliminates the need for long training times and large GPU memory.
Contrastive Pre-training
For contrastive pre-training, we adopt a self-attention mechanism within the BERT-based transformer to explicitly align elements of the input text and regions in the input image in a contrastive manner, as can be illustrated in Figure 2. Our CAVL consists of three main components: linguistic pre-training, visual pre-training, and contrastive fusion of visual-linguistic pre-training.
Figure 2. The overview of contrastive pre-training of our CAVL for pair-wise visual-linguistic representations. A self-attention mechanism within the BERT-based Transformer is applied to implicitly align elements of the input text and regions in the input image, and a contrastive learning framework is used for alignments between the whole sentence and each image in an explicit manner.

Linguistic pre-training. For language embeddings in the pre-training, we input three types of embeddings: 1) a token embedding e_t for each subword in a sentence; 2) a segment embedding e_s indicating which part of the text the token is from; and 3) a position embedding e_p for the position of the token in the sentence. We then sum up all three embeddings into a contextual representation e_n, n ∈ {1, 2, ..., N}, where N denotes the number of subwords in the sentence. After being fed into the BERT-based transformer, these contextual embeddings become e′_n. We adopt two objectives similar to BERT: masked language modeling (MLM) and next sentence prediction (NSP). For the former objective, we randomly mask some of the input tokens with a special token (i.e., [MASK]), and the model
is trained to predict the masked token. As for NSP, we train the model using the embedding [CLS] to classify whether a pair of given sentences are two consecutive sentences in a context.
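To make the embedding construction concrete, the following is a minimal PyTorch sketch of the summed input embedding e_n = e_t + e_s + e_p; the vocabulary, segment, and sequence sizes below are illustrative assumptions, not the exact values used in our implementation.

```python
import torch.nn as nn

class InputEmbedding(nn.Module):
    """Sketch of the summed input embedding e_n = e_t + e_s + e_p used for
    linguistic pre-training. The sizes below are illustrative placeholders."""
    def __init__(self, vocab=30522, segments=2, max_len=512, dim=768):
        super().__init__()
        self.tok = nn.Embedding(vocab, dim)      # token embedding e_t
        self.seg = nn.Embedding(segments, dim)   # segment embedding e_s
        self.pos = nn.Embedding(max_len, dim)    # position embedding e_p

    def forward(self, token_ids, segment_ids, position_ids):
        # The three embeddings are summed element-wise per token.
        return self.tok(token_ids) + self.seg(segment_ids) + self.pos(position_ids)
```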
Visual pre-training. For vision features in the pre-training, we extract image ROIs from an object detection framework (i.e., Faster R-CNN) as the inputs f_k, k ∈ {1, 2, ..., K}, where K is the number of image ROIs. Each input f_k is also composed of three types of visual embeddings: 1) an image feature embedding f_i for each image ROI; 2) a segment embedding f_s indicating which token embedding the image embedding corresponds to; 3) a position embedding f_p for alignments between tokens and each image ROI. Following VisualBERT's task-agnostic pre-training, we use two captions for a text segment from the COCO dataset, where multiple captions correspond to one image, as can be seen in Figure 2. In particular, one of the captions is the ground-truth description of the image, and with 50% probability a caption is drawn randomly from the two. Our model is trained to distinguish whether the given caption is the ground truth or the randomly drawn one.
Visual-linguistic pre-training. To mitigate the semantic confusion between language and vision, we design a pair-wise contrastive learning mechanism on the visual and linguistic representations from the multi-layer transformer. Specifically, we calculate the cosine similarity between each pair of linguistic embeddings E_b and visual embeddings F_b in a batch of size B, where b ∈ {1, 2, ..., B}. These similarities are then jointly learned for alignments between the whole sentence and each image in the same batch: we maximize the cosine similarity of the visual and linguistic embeddings of the B correct pairs in the batch while minimizing the cosine similarity of the embeddings of the B² − B false pairings. We apply a pair-wise contrastive loss over these similarity scores for optimization.
Specifically, we define the Pair-wise Contrastive Loss (PwCL) between linguistic embeddings E i and visual embeddings F j as:
\mathcal{L}_{\mathrm{PwCL}} = -\log \frac{\sum_{i=1}^{B} (E_i \cdot F_i)}{\sum_{i=1}^{B} \sum_{j=1}^{B} \mathbb{1}_{i \neq j}\,(E_i \cdot F_j)} \tag{1}
where \mathbb{1}_{i \neq j} is an indicator function that checks whether linguistic embedding E_i and visual embedding F_j come from different (misaligned) pairs. In this way, we maximize the cosine similarity of visual and linguistic embeddings from correct pairs while minimizing the cosine similarity of embeddings from false pairs. Intuitively, alignments between the whole sentence and each image are learned in our CAVL to mitigate the semantic confusion existing in the pre-training process.
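As a concrete illustration, the following is a minimal PyTorch sketch of the pair-wise contrastive loss in equation 1. It assumes the linguistic and visual embeddings are L2-normalized so that dot products act as cosine similarities, and it transcribes equation 1 literally; a practical implementation would typically exponentiate the similarities so that both sums stay positive.

```python
import torch

def pairwise_contrastive_loss(E, F_v):
    """Sketch of the Pair-wise Contrastive Loss of Eq. (1) for a batch of B
    linguistic embeddings E and visual embeddings F_v, both of shape (B, D)
    and assumed L2-normalized. The summed similarity of the B aligned pairs
    is contrasted against the B^2 - B misaligned pairs. Note the ratio must
    stay positive for the log; stabilized variants exponentiate similarities."""
    sim = E @ F_v.t()              # (B, B) matrix, entry (i, j) = E_i . F_j
    pos = sim.diagonal().sum()     # aligned pairs, i == j
    neg = sim.sum() - pos          # misaligned pairs, i != j
    return -torch.log(pos / neg)
```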
It is worth mentioning that this contrastive pre-training methodology differs from the recent powerful CLIP framework [34]. Concretely, we build an image-text alignment pre-training framework for the multi-modal scenario. First, CLIP uses this idea to train a model that performs better on image classification with natural language supervision, while in this paper we focus on solving multi-modal problems in the visual-linguistic area. Second, CLIP trains a separate image encoder and text encoder to generate vision and language embeddings, whereas we combine the two encoders into one BERT model and show that pair-wise contrastive learning enables the model to learn better joint representations of image and text.
Adaptive Fine-tuning
In this section, we design two adaptation methods and compare them with other efficient fine-tuning methods on downstream tasks. One idea is that the frozen part of the model provides basic information for the general problem while the updated part generates task-specific features. Inspired by this idea, we propose a method that adds a shortcut block alongside the pre-trained model and merges the outputs of the pre-trained model and the shortcut block with an additional output block. The pre-trained model provides generalized features between image and text, and the shortcut block acts as a selection neuron that captures the features of each specific task. We then apply an output block to combine the generalized and task-specific features to obtain the final result. The shortcut and output blocks contain far fewer layers than the pre-trained model, so training time is reduced. We denote this type of adapter as Adapter I. Motivated by Houlsby et al. [13], we propose another adaptation method, denoted Adapter II, which adds a bottleneck structure within each BERT layer. Specifically, one adapter is added at the end of the attention module and another at the end of the feed-forward layer. The adapter output and the input of the attention module (or feed-forward layer) are added together and passed through the LayerNorm module. During fine-tuning, we freeze the attention modules and feed-forward layers, so the adapter acts as a projection module that maps generalized features to task-specific features. The adapter is designed as a bottleneck structure with few parameters: one linear layer, a GELU activation, and another linear layer.
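The following is a minimal PyTorch sketch of the Adapter II bottleneck structure just described (linear down-projection, GELU, linear up-projection, added residually to the frozen sub-layer output); the hidden and bottleneck dimensions are illustrative assumptions.

```python
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Bottleneck adapter sketch (Adapter II): down-project, GELU, up-project,
    with a residual connection to the (frozen) sub-layer output. The result
    would then be passed through the existing LayerNorm of the BERT layer."""
    def __init__(self, hidden_dim=768, bottleneck_dim=64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_dim, hidden_dim)

    def forward(self, x):
        # Residual addition keeps the frozen features and adds a small
        # task-specific correction learned during fine-tuning.
        return x + self.up(self.act(self.down(x)))
```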
We show the network details of the two types of adapters for adaptive fine-tuning in Figure 3. We also conduct a comprehensive ablation study on the adapters in Section 4.4.
Experiments
Pre-training & Implementation Details
Following previous work [24,27,40], we apply the same settings for a fair comparison with those baselines. Specifically, we pre-train our CAVL on MS COCO [26] and Visual Genome [20]. For visual tokens, we apply a pre-trained Faster R-CNN [37] to extract image ROIs (at most 100 ROIs with detection scores higher than 0.5 per image). We use the Adam optimizer [19] with a total batch size of 512 for 10 epochs, and a warm-up period of 15% of the total training steps. The pre-training and fine-tuning costs are 88 and 10 hours, respectively, on 4 Tesla V100-32G GPUs.
Downstream Tasks
We evaluate our CAVL pre-trained models on four downstream tasks: (1) Visual Question Answering (VQA), (2) Visual Commonsense Reasoning (VCR), (3) Natural Language for Visual Reasoning (NLVR²), and (4) Region-to-Phrase Grounding (Flickr30K). Unless otherwise specified, we adopt Adapter I as the adaptive fine-tuning method. Visual Question Answering (VQA). In the VQA task, we follow the experimental protocol of BUTD [1]: the goal is to answer a question at the perceptual level given a natural image by choosing the correct answer from a shared set of 3,129 answers. Specifically, we conduct experiments on the VQA v2.0 dataset [10], which is based on images from the COCO [26] dataset. We split the dataset into a train set (83k images and 444k questions), a validation set (41k images and 214k questions), and a test set (81k images and 448k questions). We report the results in Table 1. Compared to previous methods, our CAVL achieves better accuracy on both the test-dev and test-std sets. This suggests that the pair-wise contrastive loss between visual and linguistic representations is beneficial for learning alignments between the whole sentence and each image. Visual Commonsense Reasoning (VCR). In the VCR task, we need to select the right answer to a given question and provide the rationale explanation, which requires a higher-level cognitive and commonsense understanding of the given image.
In the experiments, we use an image and a list of categorized ROIs from the VCR dataset [50] to pick the correct one from 4 candidate answers and 4 candidate rationales, respectively. The task (Q → AR) can be split into two subtasks: question answering (Q→A) and answer justification (QA→R). We split the VCR dataset into training (213k questions and 80k images), validation (27k questions and 10k images), and test (25k questions and 10k images) sets.
The results are reported in Table 1. Our CAVL achieves competitive performance even though, unlike VL-BERT_large, we do not use the larger Conceptual Captions dataset. This implies that the pair-wise contrastive loss proposed at the pre-training stage helps eliminate the semantic confusion between vision and language. VL-BERT_large also validates the importance of pre-training on a massive-scale dataset to improve the model's capacity. Natural Language for Visual Reasoning (NLVR). Following previous work [24], we evaluate our CAVL pre-trained models on the NLVR² dataset for joint reasoning about natural language and images. In this task, we focus on predicting whether a natural language caption matches a pair of images. We report comparison results with state-of-the-art methods in Table 2. As can be seen, our CAVL achieves new state-of-the-art performance under all experimental settings. This demonstrates the effectiveness of the pair-wise contrastive loss incorporated in both the visual and linguistic branches to mitigate the semantic confusion between vision and language at the pre-training stage. The results also show the advantage of our CAVL in joint reasoning about language and vision. Region-to-Phrase Grounding (RPG). To test the performance of our CAVL on RPG, we fine-tune our CAVL pre-trained models on the Flickr30K Entities dataset [47], which consists of 30k images and 250k annotations. Following BAN [18] and VisualBERT [24], we utilize image features from a pre-trained Faster R-CNN. For fine-tuning on the Flickr30K Entities dataset, a self-attention block is applied to generate average attention weights that predict the alignment between bounding boxes and phrases; the box receiving the most attention from a phrase is the model's prediction for the region that phrase is grounded to. In other words, the task is to choose the right bounding regions in an image for the phrase spans of a sentence. Table 3 reports the comparison results with existing methods. We observe that our CAVL outperforms previous multi-modal methods by a large margin under the same pre-training setting, which demonstrates the effectiveness of our CAVL on visual-linguistic grounding tasks. Text-to-Image Retrieval (TIR). Following previous work [27,28], we evaluate our CAVL on the Flickr30K dataset [47] for text-to-image retrieval and use the recall rate (Recall@1, 5, 10) as the evaluation metric. In this task, we need to match an image to a caption that describes its content. Specifically, we calculate the Averaged Pair-wise Similarity (APS) score between each caption and each image. To choose the right caption-image pair, we fine-tune the model with a cross-entropy loss. At the inference stage on the test set, we sort the caption-image pairs according to their alignment scores. The comparison results are reported in Table 4. As can be seen, our CAVL achieves the best performance compared to previous baselines. With the help of alleviating the semantic confusion, we also achieve performance comparable to ERNIE-ViL [49]. This demonstrates the effectiveness of our approach in text-to-image retrieval.
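As an illustration of the retrieval procedure, the following hypothetical PyTorch snippet ranks candidate images for one caption by their alignment (dot-product) scores, as used for the Recall@K metrics; the function name and tensor shapes are assumptions of the sketch.

```python
import torch

def rank_images_for_caption(text_emb, image_embs):
    """Text-to-image retrieval sketch: score one caption embedding against
    every candidate image embedding and sort by alignment score.
    Shapes assumed: text_emb (D,), image_embs (N, D)."""
    scores = image_embs @ text_emb                     # (N,) alignment scores
    return torch.argsort(scores, descending=True)      # best-matching images first
```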
Table 4. Comparison results of Text-to-Image Retrieval (TIR) and Zero-shot Text-to-Image Retrieval (ZS-TIR) on the Flickr30K.
Zero-shot Text-to-Image Retrieval (ZS-TIR). Furthermore, following previous work [27], we evaluate our CAVL on the Flickr30K dataset [47] for zero-shot text-to-image retrieval. In this setting, we directly use the pre-trained weights, without fine-tuning, to compute the Averaged Pair-wise Similarity (APS) score for zero-shot retrieval. We report the results in Table 4. We again achieve new state-of-the-art performance in terms of all metrics, which shows the effectiveness of our CAVL.
Visualizations
In this part, we visualize the pre-trained image and text pairs to validate the effectiveness of our CAVL in mitigating the semantic confusion between those representations. Specifically, we calculate the pair-wise similarity across pre-trained image and text pairs and report them in Figure 4.
As can be seen, our CAVL can learn alignments between the whole sentence and each image during pre-training.
Ablation Study
In this section, we perform extensive ablation studies on the effect of pair-wise contrastive pre-training, adaptive fine-tuning, and batch size on the final performance of our CAVL, as well as on the efficiency of the proposed adapters (Adapter I and II). Unless otherwise specified, we conduct all ablation studies on the VQA 2.0 dataset and report the mean and standard deviation of all results over 5 random seeds. Effect of each module and batch size. In Table 5, we explore the effect of each pre-training component of our CAVL, which consists of Masked Language Modeling, Next Sentence Prediction, the Pair-wise Contrastive Loss, and Adaptive Fine-tuning. We observe that, with the incorporation of the pair-wise contrastive loss, our CAVL achieves better performance than the baseline without the pair-wise contrastive loss between linguistic and visual embeddings. This demonstrates the effectiveness of the pair-wise contrastive loss proposed in our CAVL. Our CAVL with adaptive fine-tuning achieves comparable performance while training fewer parameters and saving computation resources.
We also evaluate the effect of the batch size on the final performance of our CAVL pre-trained models in Table 5. As can be seen, our CAVL achieves the best APS at a batch size of 512, which shows the importance of the choice of batch size in the pair-wise contrastive loss. Adding PwCL to the baseline with the same batch size increases the accuracy from 70.11 to 71.73, where the improvement (1.62) is significant in the VQA task. Increasing the batch size from 64 to 1024 can boost the accuracy from 71.68 to 72.83 (1.15), but the improvement is smaller than that of PwCL. This also confirms the importance of a larger batch size in vision-language pre-training, which introduces more negative pairs. We also show the large improvements of our CAVL on VCR (3.45), NLVR2 (11.31), and RPG (10.43) in Tables 1, 2, and 3.
Furthermore, we compare the Averaged Pair-wise Similarity (APS) between the embeddings of each text-image pair during pre-training in Table 5. Specifically, we calculate the pair-wise dot product between linguistic embeddings E_i and visual embeddings F_i, i.e., \frac{1}{B}\sum_{i=1}^{B}(E_i \cdot F_i). We observe that adding the pair-wise contrastive loss increases the APS of each text-image pair, which verifies the effectiveness of our pair-wise contrastive pre-training in mitigating the semantic confusion between visual and linguistic embeddings during the pre-training process.
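For reference, a one-line sketch of the APS computation, assuming E and F_v are the (B, D) torch tensors of paired linguistic and visual embeddings:

```python
def averaged_pairwise_similarity(E, F_v):
    # APS = (1/B) * sum_i (E_i . F_i) over the B aligned text-image pairs
    return (E * F_v).sum(dim=1).mean()
```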
Effect of adapters. As shown in Table 6, we compare the performance of various models in terms of training parameters, fine-tuning costs, and accuracy on the test-dev and test-std sets. We observe that all CAVL-based models achieve better results than the baselines, which further demonstrates the effectiveness of contrastive visual-linguistic pre-training in eliminating the semantic confusion during the pre-training process. The proposed CAVL jointly learns alignments between the whole sentence and each image in the same batch to improve the pre-trained model's generalizability. Meanwhile, our CAVL with either adapter achieves performance comparable to baselines with large numbers of fine-tuning parameters while training far fewer parameters, which validates the efficiency of the proposed adapters in saving computation resources.
We also compare our CAVL with the two types of adapters against current multi-modal methods [24,40] in terms of the average fine-tuning cost, to evaluate how much time our CAVL saves during the fine-tuning phase. Both CAVL variants (Adapter I/II) achieve better results than previous work under both the test-dev and test-std settings while remarkably reducing the parameters and costs of fine-tuning. While Adapter I performs slightly better than Adapter II, Adapter II reduces the fine-tuning parameters and costs by a large margin, i.e., by 59.40% and 76.17%. This demonstrates the efficiency of our CAVL in the fine-tuning stage for learning effective representations for visual-linguistic downstream tasks.
Conclusion
In this work, we propose a simple but effective framework for learning contrastive and adaptive representations of vision and language, called CAVL, which involves contrastive pre-training and adaptive fine-tuning. The pair-wise contrastive loss is applied to mitigate the semantic confusion between language and vision during pre-training. Furthermore, we introduce two lightweight adapters that greatly reduce the computation required at the fine-tuning stage. Our CAVL achieves superior performance against baselines on six main vision-language downstream tasks. We conduct extensive experiments and ablation studies to demonstrate the efficiency of contrastive pre-training and adaptive fine-tuning.
Figure 1. Illustration of two main problems (semantic confusion and high fine-tuning cost) existing in vision-language models; we propose CAVL (contrastive pre-training and adaptive fine-tuning) to address these issues.
Figure 3. Adaptive fine-tuning with adapters in CAVL. Adapter I (left): this adaptation method freezes the pre-trained model and trains a shortcut block and an output block during fine-tuning. Adapter II (right): this adaptation method follows [13] and adds an adapter right before each LayerNorm layer in BERT. During fine-tuning, it freezes the feed-forward layers and attention modules while updating the parameters of the adapters and LayerNorm layers. Blue blocks denote the frozen parts; parameters in red blocks are updated during the fine-tuning phase.
Figure 4. Heatmap visualization of cosine similarities between image and text pre-trained representations. Alignments across image and text pairs are learned during pre-training.
Table 1. Comparison results on the VQA and VCR datasets.

Model           | VQA test-dev | VQA test-std | VCR Q→A (val/test) | VCR QA→R (val/test) | VCR Q→AR (val/test)
LXMERT [42]     | 72.42        | 72.54        | - / -              | - / -               | - / -
ViLBERT [27]    | 70.55        | 70.92        | 72.40 / 73.30      | 74.50 / 74.60       | 54.00 / 54.80
VisualBERT [24] | 70.08        | 71.00        | 70.80 / 71.60      | 73.20 / 73.20       | 52.20 / 52.40
VL-BERT [40]    | 71.72        | 72.18        | 73.80 / -          | 74.40 / -           | 55.20 / -
UNITER [7]      | 72.27        | 72.46        | - / 75.00          | - / 77.20           | - / 58.20
CVLP [38]       | 72.77        | 72.90        | - / -              | - / -               | - / -
CAVL (ours)     | 72.83        | 73.05        | 75.33 / 75.65      | 76.52 / 77.87       | 58.65 / 59.47
Table 2. Comparison results on the NLVR² dataset.

Model           | Dev   | Test-P | Test-U | Test-U (Cons)
LXMERT [42]     | -     | 74.45  | 76.20  | 42.10
VisualBERT [24] | 67.40 | 67.00  | 67.30  | 26.90
UNITER [7]      | 76.93 | 75.58  | -      | -
CVLP [38]       | -     | 76.20  | -      | -
CAVL (ours)     | 79.16 | 78.31  | 79.87  | 46.23
Table 3. Comparison results on the Flickr30K Entities dataset.
Table 5. Ablation study on pair-wise contrastive pre-training, adaptive fine-tuning, and batch size. MLM, NSP, PwCL, and AF denote Masked Language Modeling, Next Sentence Prediction, Pair-wise Contrastive Loss, and Adaptive Fine-tuning.

batch size | test-dev (↑) | test-std (↑) | APS (↑)
64         | 70.11±0.12   | 71.03±0.15   | 0.43±0.08
64         | 70.82±0.13   | 71.56±0.15   | 0.52±0.06
64         | 70.06±0.12   | 70.95±0.13   | 0.41±0.06
64         | 71.73±0.15   | 72.08±0.17   | 0.68±0.04
64         | 71.68±0.14   | 72.02±0.16   | 0.67±0.05
128        | 72.18±0.13   | 72.32±0.16   | 0.72±0.04
256        | 72.42±0.11   | 72.67±0.13   | 0.76±0.03
512        | 72.83±0.05   | 73.05±0.07   | 0.83±0.02
1024       | 72.88±0.08   | 72.96±0.06   | 0.81±0.02
Table 6. Ablation study on the variants of adapters proposed in our CAVL.

Model              | Fine-tune Params (M) ↓ | test-dev ↑ | test-std ↑ | Fine-tune Cost (h) ↓
VisualBERT [24]    | 113.90                 | 70.08      | 71.00      | 24.5
VL-BERT base [40]  | 115.04                 | 71.16      | -          | 25.6
VL-BERT large [40] | 342.55                 | 71.79      | 72.22      | 70.2
CAVL (Adapter I)   | 98.40                  | 72.83±0.05 | 73.05±0.06 | 10.0
CAVL (Adapter II)  | 46.70                  | 72.67±0.08 | 72.86±0.11 | 6.1
Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6077-6086, 2018.
Srikar Appalaraju, Bhavan Jasani, Bhargava Urala Kota, Yusheng Xie, and R. Manmatha. DocFormer: End-to-end transformer for document understanding. arXiv preprint arXiv:2106.11539, 2021.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. arXiv preprint arXiv:2005.14165, 2020.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pages 1597-1607, 2020.
Ting Chen, Simon Kornblith, Kevin Swersky, Mohammad Norouzi, and Geoffrey Hinton. Big self-supervised models are strong semi-supervised learners. In Advances in Neural Information Processing Systems, pages 22243-22255, 2020.
Xinlei Chen, Haoqi Fan, Ross Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. arXiv preprint arXiv:2003.04297, 2020.
Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. UNITER: Learning universal image-text representations. arXiv preprint arXiv:1909.11740, 2019.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
Haiwen Diao, Ying Zhang, Lin Ma, and Huchuan Lu. Similarity reasoning and filtration for image-text matching. In The AAAI Conference on Artificial Intelligence, 2021.
Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. Making the V in VQA matter: Elevating the role of image understanding in visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6904-6913, 2017.
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9729-9738, 2020.
Yicong Hong, Qi Wu, Yuankai Qi, Cristian Rodriguez-Opazo, and Stephen Gould. VLN BERT: A recurrent vision-and-language BERT for navigation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2021.
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for NLP. In International Conference on Machine Learning, pages 2790-2799, 2019.
Yuqi Huo, Manli Zhang, Guangzhen Liu, Haoyu Lu, Yizhao Gao, Guoxing Yang, Jingyuan Wen, Heng Zhang, Baogui Xu, Weihao Zheng, Zongzheng Xi, Yueqian Yang, Anwen Hu, Jinming Zhao, Ruichen Li, Yida Zhao, Liang Zhang, Yuqing Song, Xin Hong, Wanqing Cui, Danyang Hou, Yingyan Li, Junyi Li, Peiyu Liu, Zheng Gong, Chuhao Jin, Yuchong Sun, Shizhe Chen, Zhiwu Lu, Zhicheng Dou, Qin Jin, Yanyan Lan, Wayne Xin Zhao, Ruihua Song, and Ji-Rong Wen. WenLan: Bridging vision and language by large-scale multi-modal pre-training. arXiv preprint arXiv:2103.06561, 2021.
Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V. Le, Yun-Hsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. In International Conference on Machine Learning, 2021.
Aishwarya Kamath, Mannat Singh, Yann LeCun, Ishan Misra, Gabriel Synnaeve, and Nicolas Carion. MDETR: Modulated detection for end-to-end multi-modal understanding. arXiv preprint arXiv:2104.12763, 2021.
Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. Supervised contrastive learning. In Advances in Neural Information Processing Systems, pages 18661-18673, 2020.
Jin-Hwa Kim, Jaehyun Jun, and Byoung-Tak Zhang. Bilinear attention networks. In Advances in Neural Information Processing Systems, pages 1571-1581, 2018.
Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Ranjay Krishna, Yuke Zhu, Oliver Groth, Justin Johnson, Kenji Hata, Joshua Kravitz, Stephanie Chen, Yannis Kalantidis, Li-Jia Li, David A. Shamma, Michael S. Bernstein, and Fei-Fei Li. Visual Genome: Connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision, 123:32-73, 2017.
Kuang-Huei Lee, Xi Chen, Gang Hua, Houdong Hu, and Xiaodong He. Stacked cross attention for image-text matching. In European Conference on Computer Vision, pages 201-216, 2018.
Jie Lei, Linjie Li, Luowei Zhou, Zhe Gan, Tamara L. Berg, Mohit Bansal, and Jingjing Liu. Less is more: ClipBERT for video-and-language learning via sparse sampling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021.
Gen Li, Nan Duan, Yuejian Fang, Ming Gong, and Daxin Jiang. Unicoder-VL: A universal encoder for vision and language by cross-modal pre-training. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, pages 11336-11344, 2020.
Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. VisualBERT: A simple and performant baseline for vision and language. arXiv preprint arXiv:1908.03557, 2019.
Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, and Jianfeng Gao. Oscar: Object-semantics aligned pre-training for vision-language tasks. In European Conference on Computer Vision, 2020.
Tsung-Yi Lin, Michael Maire, Serge J. Belongie, Lubomir D. Bourdev, Ross B. Girshick, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. arXiv preprint arXiv:1405.0312, 2014.
Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. ViLBERT: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks. In Advances in Neural Information Processing Systems, pages 13-23, 2019.
Xiaopeng Lu, Tiancheng Zhao, and Kyusong Lee. VisualSparta: Sparse transformer fragment-level matching for large-scale text-to-image search. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2021.
Shentong Mo, Zhun Sun, and Chao Li. Siamese prototypical contrastive learning. In Proceedings of the British Machine Vision Conference, 2021.
Shentong Mo, Zhun Sun, and Chao Li. Rethinking prototypical contrastive learning through alignment, uniformity and correlation. In Proceedings of the British Machine Vision Conference, 2022.
Shentong Mo, Zhun Sun, and Chao Li. Multi-level contrastive learning for self-supervised vision transformers. In IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 2777-2786, 2023.
Jonas Pfeiffer, Ivan Vulić, Iryna Gurevych, and Sebastian Ruder. MAD-X: An adapter-based framework for multi-task cross-lingual transfer. 2020.
Di Qi, Lin Su, Jia Song, Edward Cui, Taroon Bharti, and Arun Sacheti. ImageBERT: Cross-modal pre-training with large-scale weak-supervised image-text data. arXiv preprint arXiv:2001.07966, 2020.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. arXiv preprint arXiv:2103.00020, 2021.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding by generative pre-training. 2019.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. 2019.
Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Faster R-CNN: Towards real-time object detection with region proposal networks. In Advances in Neural Information Processing Systems, pages 91-99, 2015.
Lei Shi, Kai Shuang, Shijie Geng, Peng Su, Zhengkai Jiang, Peng Gao, Zuohui Fu, Gerard de Melo, and Sen Su. Contrastive visual-linguistic pretraining. arXiv preprint arXiv:2007.13135, 2020.
Dandan Song, Siyi Ma, Zhanchen Sun, Sicheng Yang, and Lejian Liao. KVL-BERT: Knowledge enhanced visual-and-linguistic BERT for visual commonsense reasoning. arXiv preprint arXiv:2012.07000, 2020.
Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. VL-BERT: Pre-training of generic visual-linguistic representations. In International Conference on Learning Representations, 2020.
Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and Cordelia Schmid. VideoBERT: A joint model for video and language representation learning. arXiv preprint arXiv:1904.01766, 2019.
Hao Tan and Mohit Bansal. LXMERT: Learning cross-modality encoder representations from transformers. arXiv preprint arXiv:1908.07490, 2019.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008, 2017.
Haoran Wang, Ying Zhang, Zhong Ji, Yanwei Pang, and Lin Ma. Consensus-aware visual-semantic embedding for image-text matching. In European Conference on Computer Vision, pages 18-34, 2020.
Yaxiong Wang, Hao Yang, Xueming Qian, Lin Ma, Jing Lu, Biao Li, and Xin Fan. Position focused attention network for image-text matching. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pages 3792-3798, 2019.
Zihao Wang, Xihui Liu, Hongsheng Li, Lu Sheng, Junjie Yan, Xiaogang Wang, and Jing Shao. CAMP: Cross-modal adaptive message passing for text-image retrieval. In Proceedings of the IEEE International Conference on Computer Vision, pages 5764-5773, 2019.
Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. Transactions of the Association for Computational Linguistics, 2:67-78, 2014.
Fei Yu, Jiji Tang, Weichong Yin, Yu Sun, Hao Tian, Hua Wu, and Haifeng Wang. ERNIE-ViL: Knowledge enhanced vision-language representations through scene graph. arXiv preprint arXiv:2006.16934, 2020.
Fei Yu, Jiji Tang, Weichong Yin, Yu Sun, Hao Tian, Hua Wu, and Haifeng Wang. ERNIE-ViL: Knowledge enhanced vision-language representations through scene graph. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, 2021.
Rowan Zellers, Yonatan Bisk, Ali Farhadi, and Yejin Choi. From recognition to cognition: Visual commonsense reasoning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6720-6731, 2019.
Pengchuan Zhang, Xiujun Li, Xiaowei Hu, Jianwei Yang, Lei Zhang, Lijuan Wang, Yejin Choi, and Jianfeng Gao. VinVL: Revisiting visual representations in vision-language models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5579-5588, 2021.
Yuhao Zhang, Hang Jiang, Yasuhide Miura, Christopher D. Manning, and Curtis Langlotz. Contrastive learning of medical visual representations from paired images and text. arXiv preprint arXiv:2010.00747, 2020.
| [] |
[
"A Comprehensive Survey on Knowledge Distillation of Diffusion Models",
"A Comprehensive Survey on Knowledge Distillation of Diffusion Models"
] | [
"Weijian Luo [email protected] \nSchool of Mathematical Sciences\nPeking University Beijing\n100871China\n"
] | [
"School of Mathematical Sciences\nPeking University Beijing\n100871China"
] | [] | Diffusion Models (DMs), also referred to as score-based diffusion models, utilize neural networks to specify score functions. Unlike most other probabilistic models, DMs directly model the score functions, which makes them more flexible to parametrize and potentially highly expressive for probabilistic modeling. DMs can learn fine-grained knowledge, i.e., marginal score functions, of the underlying distribution. Therefore, a crucial research direction is to explore how to distill the knowledge of DMs and fully utilize their potential. Our objective is to provide a comprehensible overview of the modern approaches for distilling DMs, starting with an introduction to DMs and a discussion of the challenges involved in distilling them into neural vector fields. We also provide an overview of the existing works on distilling DMs into both stochastic and deterministic implicit generators. Finally, we review the accelerated diffusion sampling algorithms as a training-free method for distillation. Our tutorial is intended for individuals with a basic understanding of generative models who wish to apply DM's distillation or embark on a research project in this field.The optimal θ * in problem equation 5 is also called a maximum likelihood estimation (MLE) of the parameter θ. The term L(θ) = E p d log p θ (x) is the expected likelihood. Two important points about ARMs need to be emphasized. Firstly, ARMs explicitly model log-likelihood functions (equation 2 and equation 3), which limits the implementation's flexibility. Secondly, ARMs require a strict sequential order in their generating algorithm, which makes sampling from ARMs computationally inefficient. | 10.48550/arxiv.2304.04262 | [
"https://export.arxiv.org/pdf/2304.04262v1.pdf"
] | 258,049,177 | 2304.04262 | 773d045cca5cf3f67013ea4c7133ce64380d669d |
A Comprehensive Survey on Knowledge Distillation of Diffusion Models
Weijian Luo [email protected]
School of Mathematical Sciences
Peking University Beijing
100871China
A Comprehensive Survey on Knowledge Distillation of Diffusion Models
Diffusion Models (DMs), also referred to as score-based diffusion models, utilize neural networks to specify score functions. Unlike most other probabilistic models, DMs directly model the score functions, which makes them more flexible to parametrize and potentially highly expressive for probabilistic modeling. DMs can learn fine-grained knowledge, i.e., marginal score functions, of the underlying distribution. Therefore, a crucial research direction is to explore how to distill the knowledge of DMs and fully utilize their potential. Our objective is to provide a comprehensible overview of the modern approaches for distilling DMs, starting with an introduction to DMs and a discussion of the challenges involved in distilling them into neural vector fields. We also provide an overview of the existing works on distilling DMs into both stochastic and deterministic implicit generators. Finally, we review the accelerated diffusion sampling algorithms as a training-free method for distillation. Our tutorial is intended for individuals with a basic understanding of generative models who wish to apply DM's distillation or embark on a research project in this field.The optimal θ * in problem equation 5 is also called a maximum likelihood estimation (MLE) of the parameter θ. The term L(θ) = E p d log p θ (x) is the expected likelihood. Two important points about ARMs need to be emphasized. Firstly, ARMs explicitly model log-likelihood functions (equation 2 and equation 3), which limits the implementation's flexibility. Secondly, ARMs require a strict sequential order in their generating algorithm, which makes sampling from ARMs computationally inefficient.
Introduction
ARMs: Model Explicit Likelihood Functions
The objective of deep generative modeling is to train neural-parametrized models that can generate highly realistic data samples. Numerous deep generative models have been proposed to achieve this objective, each from a different perspective. In general, generative models aim to express and approximate certain sufficient characteristics of the underlying data distribution by minimizing probability divergences or metrics. The likelihood function, which is the log density of the underlying distribution, is a commonly used distributional characteristic. Auto-regressive models (ARMs) (Graves, 2013; Van Den Oord et al., 2016; Jozefowicz et al., 2016) are representative models that use neural networks to parametrize the log-likelihood function, i.e., the logarithm of the probability density function, and learn to match the underlying data's log-likelihood. ARMs are trained using KL divergence minimization. Let p_d denote the underlying data distribution, from which we only have access to consistent samples, i.e., x ∼ p_d. ARMs sum up a sequence of outputs of neural networks with strict orders to explicitly express the conditional factorization of the model's likelihood function.

Energy-based models (EBMs), by contrast, use neural networks to parametrize potential functions rather than density functions. Formally, the potential function E(x) of a distribution p satisfies

\frac{e^{E(x)}}{\int e^{E(x)}\,dx} = p(x). \tag{6}

The expected log-likelihood gradient of an EBM with potential E_\theta then takes the standard form

\frac{\partial}{\partial\theta}\,\mathbb{E}_{p_d}\big[\log p_\theta(x)\big] = \mathbb{E}_{x\sim p_d}\Big[\frac{\partial}{\partial\theta}E_\theta(x)\Big] - \mathbb{E}_{x\sim p_\theta}\Big[\frac{\partial}{\partial\theta}E_\theta(x)\Big]. \tag{13}
To calculate the expected log-likelihood gradient with respect to the parameter θ, we need to obtain consistent samples from the EBM-induced distribution x ∼ p_θ, which is only known up to normalization. Fortunately, several MCMC algorithms are capable of generating such samples, including those proposed by Robert et al. (1999), Hastings (1970), Roberts and Rosenthal (1998), Xifara et al. (2014), and Neal et al. (2011). By combining the gradient formula equation 8 with MCMC algorithms, it is possible to train EBMs using maximum likelihood estimation. The key distinction between energy-based models (EBMs) and autoregressive models (ARMs) lies in how they utilize neural networks. While EBMs employ unconstrained neural networks to model potential functions, ARMs use neural networks as components of explicit conditional densities. This difference in approach allows EBMs to tap into the full expressive power of neural networks by avoiding the constraints imposed by normalization requirements.
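To make the training procedure concrete, here is a hedged PyTorch sketch of one MLE gradient step for an EBM, following the gradient formula above; the energy network, the MCMC sampler producing x_model, and the optimizer are assumptions of the sketch.

```python
import torch

def ebm_mle_gradient_step(energy_net, x_data, x_model, optimizer):
    """One maximum-likelihood gradient step for an EBM p_theta ∝ exp(E_theta),
    cf. Eq. (13): raise E_theta on data samples and lower it on model samples.
    `x_model` is assumed to come from an MCMC sampler (e.g., Langevin dynamics)."""
    # Minimizing this loss follows the negative of the log-likelihood gradient:
    # grad = E_{p_d}[dE/dtheta] - E_{p_theta}[dE/dtheta].
    loss = energy_net(x_model).mean() - energy_net(x_data).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```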
SBMs and DMs: From Potentials to Scores
In the preceding sections, we introduced EBMs and their training and sampling methods. To approximate data potential functions, EBMs use neural networks and generate samples by running MCMC algorithms with the learned potentials. Among the various MCMC methods available, Langevin dynamics (LD), or Langevin MC (Roberts and Tweedie, 1996), is a preferred choice for its ease of implementation and good performance even under weak conditions. For a differentiable density function p(x), LD is defined through the stochastic differential equation
dX_t = \frac{1}{2}\nabla_{X_t}\log p(X_t)\,dt + dW_t, \quad p^{(0)} = p_0, \quad t \in [0, \infty). \tag{14}
Langevin dynamics (LD) is notable for two reasons. Firstly, as t → ∞, the marginal distribution of LD converges to p(x) regardless of the initial distribution p_0, given certain conditions on p(x). Secondly, simulating LD only requires the gradient of the potential function, i.e., the score function ∇_x log p(x). Hence, even if the distribution p(x) is not normalized, using the potential function log p(x) in LD still generates valid samples from the normalized distribution. Score-based models (SBMs) take inspiration from LD and use neural networks to train a neural score function S_θ that matches the underlying data distribution. If the neural score function is trained well enough to match the data score function S_d(x) := ∇_x log p_d(x), samples from the SBM can be obtained by replacing the score function ∇_x log p(x) with the SBM's neural score function S_θ(x) in the simulation of LD.
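The following is a minimal PyTorch sketch of this sampling procedure: an Euler discretization of equation 14 (unadjusted Langevin dynamics) in which score_fn can be either the exact score or an SBM's learned S_θ; the step size and iteration count are illustrative.

```python
import torch

def langevin_sample(score_fn, x0, step_size=1e-3, n_steps=1000):
    """Unadjusted Langevin dynamics sketch, an Euler discretization of Eq. (14):
    x <- x + (eps/2) * score(x) + sqrt(eps) * z, with z ~ N(0, I).
    `score_fn` is any score function, e.g., an SBM's S_theta."""
    x = x0.clone()
    for _ in range(n_steps):
        z = torch.randn_like(x)
        x = x + 0.5 * step_size * score_fn(x) + step_size ** 0.5 * z
    return x
```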
Now, let us turn our attention to the training strategies of SBMs. Since S_θ directly expresses the model's underlying score function, the model's potential function (and hence the density p_θ) becomes intractable. Therefore, the KL divergence commonly used for training EBMs and ARMs, as explained in previous sections, does not work for training SBMs. Instead, SBMs minimize the Fisher divergence, a probability divergence that only requires the model's score functions. Formally, the Fisher divergence between two distributions p(x) and q(x) is defined as

D_F(p, q) := \mathbb{E}_{x\sim p}\,\big\|\nabla_x \log p(x) - \nabla_x \log q(x)\big\|_2^2. \tag{15}

Let p_d denote the data distribution and S_d(x) = ∇_x log p_d(x) the data score function. The Fisher divergence between the data and the SBM-induced distribution p_θ writes

D_F(p_d, p_\theta) = \mathbb{E}_{p_d}\Big[\frac{1}{2}\big\|S_d(x) - S_\theta(x)\big\|_2^2\Big] \tag{16}

= \mathbb{E}_{p_d}\Big[\frac{1}{2}\|S_d(x)\|_2^2\Big] + \mathbb{E}_{p_d}\Big[\frac{1}{2}\|S_\theta(x)\|_2^2\Big] - \mathbb{E}_{p_d}\big[\langle S_d(x), S_\theta(x)\rangle\big]. \tag{17}
In practice, the first term of Fisher divergence equation 16 does not depend on the parameter θ and thus can be dropped. The third term is shown to be equivalent to a data-score-free form as
\mathbb{E}_{p_d}\big[\langle S_d(x), S_\theta(x)\rangle\big] = -\mathbb{E}_{p_d}\Big[\sum_{d=1}^{D}\frac{\partial s_\theta^{(d)}(x)}{\partial x^{(d)}}\Big].

Here the notation s_\theta^{(d)}(x) denotes the d-th component of the score network S_\theta(x), and x^{(d)} the d-th coordinate of x. The resulting objective is

\mathcal{L}_{\mathrm{SM}}(\theta) := \mathbb{E}_{p_d}\Big[\frac{1}{2}\|S_\theta(x)\|_2^2\Big] + \mathbb{E}_{p_d}\Big[\sum_{d=1}^{D}\frac{\partial s_\theta^{(d)}(x)}{\partial x^{(d)}}\Big]. \tag{18}
The optimization problem in equation 18 is known as Score Matching (SM) (Hyvärinen, 2005). However, evaluating the gradient term by differentiating a neural network with respect to the data can be memory-intensive, which poses challenges when working with high-dimensional data. To overcome this limitation, several approaches have been proposed to improve the efficiency of SM (Pang et al., 2020; Vincent, 2011). Among them, the seminal work of Vincent (2011) proposed the so-called denoising score matching (DSM) objective, which does not require taking the data gradient; instead, it uses

\mathcal{L}_{\mathrm{DSM}}(\theta) := \mathbb{E}_{x\sim p_d,\,\tilde{x}\sim p(\tilde{x}|x)}\,\big\|S_\theta(\tilde{x}) - \nabla_{\tilde{x}}\log p(\tilde{x}|x)\big\|_2^2. \tag{19}

The objective \mathcal{L}_{\mathrm{DSM}}(\theta) is built on a perturbation kernel p(\tilde{x}|x) that is efficient to sample from and has an explicit expression. A common choice for the perturbation kernel is a Gaussian distribution N(\tilde{x}; x, \sigma^2 I) with noise variance \sigma^2. By minimizing this objective, the score network matches the score of the perturbed data distribution \tilde{p}_d, which is close to the original data distribution if the perturbation kernel is not too different from the identity. However, using LD for sampling can be problematic for high-dimensional data, because the data concentrate around low-dimensional manifolds embedded in the high-dimensional space. As a solution, score-based diffusion models use multiple or continuously-indexed perturbation kernels to improve both learning and sampling, generating samples that are more accurate and representative of the original data distribution.
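As a concrete example, here is a hedged PyTorch sketch of the DSM objective equation 19 with the Gaussian perturbation kernel N(x̃; x, σ²I), for which ∇_x̃ log p(x̃|x) = −(x̃ − x)/σ²; the batch shape and σ are illustrative assumptions.

```python
import torch

def dsm_loss(score_net, x, sigma=0.1):
    """Denoising score matching sketch for the Gaussian kernel N(x_tilde; x, sigma^2 I),
    cf. Eq. (19). Since grad_{x_tilde} log p(x_tilde|x) = -(x_tilde - x)/sigma^2,
    the target reduces to (-noise / sigma). Assumes x has shape (B, D)."""
    noise = torch.randn_like(x)
    x_tilde = x + sigma * noise
    target = -(x_tilde - x) / sigma ** 2
    return ((score_net(x_tilde) - target) ** 2).sum(dim=-1).mean()
```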
Diffusion Models: Multi-Level SBMs
In contrast to SBMs that use a single score network, score-based diffusion models (DMs) (Song et al., 2020d) employ a more advanced approach by utilizing a multiple-level or continuous-indexed score network S θ (x, t). Additionally, instead of a single perturbation kernel, DMs use a family of conditional transition kernels induced by stochastic differential equations to perturb the data. Consider a forward diffusion SDE
dX_t = F(X_t, t)\,dt + G(t)\,dW_t, \quad X_0 \sim p_d^{(0)} = p_d, \tag{20}
where W_t is a Wiener process. Let p_t(x_t|x_0) denote the conditional transition kernel of the forward diffusion equation 20, and let p_d^{(t)} denote the marginal distribution at diffusion time t, initialized with p_d^{(0)} = p_d. Two special forward diffusions, the variance preserving (VP) diffusion and the variance exploding (VE) diffusion (Song et al., 2020d), are favored across the diffusion model literature.
VP Diffusion
The VP diffusion takes the form of
dX_t = -\frac{1}{2}\beta(t)X_t\,dt + \sqrt{\beta(t)}\,dW_t, \quad t \in [0, T], \tag{21}
where β(t) is a pre-defined schedule function. The conditional transition kernel of VP diffusion has the explicit expression

p(x_t|x_0) = \mathcal{N}\big(x_t;\, \sqrt{\alpha_t}\,x_0,\, (1-\alpha_t)I\big), \tag{22}

where \alpha_t = e^{-\int_0^t \beta(s)\,ds}. With formula equation 22, simulating a sample at time t is efficient, requiring only a scaling of an initial sample and the addition of a Gaussian noise:

X_t = \sqrt{\alpha_t}\,X_0 + \sqrt{1-\alpha_t}\,\epsilon, \quad X_0 \sim p_0. \tag{23}
Here \epsilon \sim \mathcal{N}(0, I) is a standard normal vector of the same size as X_0. With formula equation 22, obtaining X_t is cheap because we do not need to simulate the diffusion process sequentially. Another advantage of VP diffusion is that, under loose conditions, it transports an arbitrary initial distribution p_0 to a standard multivariate Gaussian distribution N(0, I). VP diffusion is perhaps the most widely used forward diffusion across the DM literature.
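A minimal PyTorch sketch of this one-step marginal sampling (equations 22-23), returning both X_t and the noise ε that the DSM objective will later regress:

```python
import torch

def vp_marginal_sample(x0, alpha_t):
    """Draw X_t from the VP transition kernel, Eqs. (22)-(23):
    X_t = sqrt(alpha_t) * X_0 + sqrt(1 - alpha_t) * eps, eps ~ N(0, I)."""
    eps = torch.randn_like(x0)
    x_t = alpha_t ** 0.5 * x0 + (1.0 - alpha_t) ** 0.5 * eps
    return x_t, eps
```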
VE Diffusion
The VE diffusion takes the form
dX_t = √(dσ^2(t)/dt) dW_t, t ∈ [0, T]. (24)
W_t is an independent Wiener process. The transition kernel of VE diffusion writes
p(x_t | x_0) = N(x_t; x_0, σ^2(t) I). (25)
Similar to VP diffusion, the marginal samples of VE diffusion are cheap to be drawn with
X_t = X_0 + σ(t) ε, X_0 ∼ p_0. (26)
Here ε ∼ N(0, I) is a standard Gaussian vector. Both the VE and VP diffusion processes have been successfully used in diffusion models for various tasks. However, in recent years, several new diffusion processes have been proposed that either improve the performance of DMs or are designed for specific tasks.
Training Method DMs minimize a weighted combination of DSM objectives with perturbation kernels p_t(·|·) at each time t. More precisely, the DM training objective writes
L^W_DSM(θ) = ∫_0^T w(t) L^{(t)}_DSM(θ) dt. (27)
L^{(t)}_DSM(θ) = E_{x_0 ∼ p_d^{(0)}, x_t|x_0 ∼ p_t(x_t|x_0)}[ ||S_θ(x_t, t) − ∇_{x_t} log p_t(x_t|x_0)||_2^2 ]. (28)
By minimizing objective equation 27, the continuously-indexed score network S_θ(x, t) is capable of matching the marginal score functions of the forward diffusion process equation 20. In some literature, the DSM objective equation 28 is reformulated as a noise-prediction objective that trains DMs by learning to predict the added noise. More precisely, taking the VP transition kernel equation 22 as an instance, if x_t and x_0 are obtained with the reparametrization technique equation 23, then the gradient term writes
∇_{x_t} log p_t(x_t|x_0) = ∇_{x_t}[ −(1/(2(1 − α_t))) ||x_t − √α_t x_0||_2^2 ] (29)
= −(1/(1 − α_t)) (x_t − √α_t x_0) = −(1/(1 − α_t)) √(1 − α_t) ε (30)
= −ε / √(1 − α_t). (31)
Combining the gradient term equation 29 with the DSM objective equation 28, the DSM objective for VP diffusion can be reformulated as
L_DSM(θ) = E_{x_0 ∼ p_d, ε ∼ N(0,I)}[ (w(t)/(1 − α_t)) ||√(1 − α_t) S_θ(x_t, t) + ε||_2^2 ] (32)
= E_{x_0 ∼ p_d, ε ∼ N(0,I)}[ (w(t)/(1 − α_t)) ||ε_θ(x_t, t) − ε||_2^2 ]. (33)
The DSM can thus be reformulated as a noise-prediction objective in which a neural network ε_θ(x_t, t) is trained to predict the added noise ε from the noised data sample x_t. This reformulation involves a rescaled version of the score network, ε_θ(x_t, t) := −√(1 − α_t) S_θ(x_t, t), and the reparametrization of the diffusion process as x_t = √α_t x_0 + √(1 − α_t) ε. A similar formulation can also be applied to the VE diffusion process, which is discussed in detail in the Appendix.
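A minimal sketch of one Monte Carlo step of the noise-prediction objective equation 33 follows, reusing the vp_alpha helper sketched earlier and folding the weight w(t)/(1 − α_t) into a constant, as in the widely used "simple" loss of Ho et al. (2020b); eps_net is an assumed network taking (x_t, t).

```python
import torch

def vp_noise_prediction_loss(eps_net, x0):
    # One Monte Carlo estimate of equation 33 with a unit weight.
    t = torch.rand(x0.shape[0], *([1] * (x0.dim() - 1)))      # t ~ U[0, 1]
    alpha_t = vp_alpha(t)                                     # helper from above
    eps = torch.randn_like(x0)
    x_t = alpha_t.sqrt() * x0 + (1.0 - alpha_t).sqrt() * eps  # equation 23
    return ((eps_net(x_t, t) - eps) ** 2).mean()              # predict the noise
```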
Sampling Strategy
The score network trained in DSM can be utilized in various applications, with one of the most direct applications being the design of a sampling strategy for approximating the underlying data distribution. The fundamental concept behind this mechanism is the existence of a reversed SDE equation 34 that has the same marginal distribution as the forward SDE equation 20,
dX_t = [F(X_t, t) − G^2(t) ∇_{x_t} log p^{(t)}(X_t)] dt + G(t) dW_t, t ∈ [T, 0], X_T ∼ p_d^{(T)}. (34)
Moreover, an ODE equation 35 is also found to share the same marginal distribution
dX_t = [F(X_t, t) − (1/2) G^2(t) ∇_{x_t} log p^{(t)}(X_t)] dt. (35)
Both the reversed SDE equation 34 and the ODE equation 35 rely on the true marginal score functions ∇_{x_t} log p_d^{(t)}(x_t). However, by replacing the true marginal score functions with the learned neural score functions S_θ(x, t), generative SDEs and ODEs of DMs can be obtained. Moreover, the concept of generative ODEs can be extended to neural continuous-time normalizing flow models under certain circumstances. By using the learned score functions S_θ(x, t), sampling from DMs can be achieved through the numerical solution of sampling SDEs or ODEs. Numerous practical algorithms that use advanced numerical techniques have been developed to improve the generative performance of DMs or enhance the sampling efficiency with minimal loss of performance (Song et al., 2020b; Bao et al., 2022c; Zhao et al., 2023).
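As an illustration of this mechanism, here is a minimal Euler sketch of sampling from the generative ODE equation 35 with a learned score network; the coefficients f, g and the step count are assumptions supplied by the caller.

```python
import torch

@torch.no_grad()
def ode_sample(score_net, x_T, f, g, n_steps=200, T=1.0):
    # Euler discretization of the probability-flow ODE (equation 35),
    # integrated backward from t = T to t = 0. f(x, t) and g(t) are the
    # drift and diffusion coefficients of the forward SDE (equation 20).
    x, dt = x_T, -T / n_steps            # negative step: backward in time
    for i in range(n_steps):
        t = T + i * dt
        x = x + (f(x, t) - 0.5 * g(t) ** 2 * score_net(x, t)) * dt
    return x
```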
Successes of DMs
Since the pioneering works by Sohl-Dickstein et al. (2015), Ho et al. (2020b), and Song et al. (2020c), diffusion models (DMs) have emerged as the leading approach for generative modeling, finding widespread use in various domains, including neural image synthesis and editing (Ramesh et al., 2022b; Saharia et al., 2022a; Rombach et al., 2022), audio and molecule synthesis (Hoogeboom et al., 2022; Chen et al., 2020), image segmentation (Baranchuk et al., 2021), and video or 3D object generation (Molad et al., 2023; Poole et al., 2022b). DMs have shown remarkable performance improvements over time, as seen in the steady trend of unconditional Fréchet Inception Distance (Heusel et al., 2017) (FID) reductions on datasets such as CIFAR10, from 25.32 (Song and Ermon, 2019) to 1.97 (Karras et al.).
Diffusion Distillation
The concept of knowledge distillation (Hinton et al., 2015; Oord et al., 2018), which aims to create smaller and more efficient models while maintaining accuracy, has shown great success in various research domains. In particular, distilling knowledge from pre-trained classifiers has resulted in models with comparable accuracy, reduced model size, and improved inference efficiency (Touvron et al., 2021). Given the success of diffusion models (DMs) in numerous applications, there is growing interest in distilling knowledge from these models to create smaller and more efficient versions. One of the key motivations for diffusion distillation is to significantly accelerate the sampling speed, which is currently hindered by the large number of neural function evaluations (NFEs) required. To improve the inference efficiency of DMs, researchers are exploring ways to distill the learned knowledge from DMs into efficient sampling mechanisms, such as a direct implicit generator or a few-step vector field. By doing so, they have been able to create student models with further improved inference efficiency at minimal performance loss. Some distilled student models require fewer than 10 neural function evaluations but still offer generative performance comparable to their larger counterparts.
Diffusion distillation also serves as a means to establish connections between DMs and other generative models, such as implicit generative models and normalizing flows. Through knowledge transfer between DMs and other models, researchers can study the micro-connections between them and explore their potential for future generative modeling research.
This paper provides a comprehensive review of existing research on diffusion distillation strategies. Our review is organized into three main categories: diffusion-to-field (D2F) distillation (Section 2), diffusion-to-generator (D2G) distillation (Section 3), and training-free (TF) distillation (Section 4). Each category contains studies that share similar settings and methodologies. In addition to our categorization, we also discuss broader topics in diffusion distillation throughout the rest of this survey.
Diffusion-to-Field Distillation
The D2F distillation approach aims to address the lack of efficiency in the deterministic sampling method of DMs by distilling the generative ODE equation 35 into another generative vector field that requires fewer NFEs to generate comparable samples. This approach can be categorized into two classes: output distillation and path distillation. Output distillation aims to teach a student vector field to replicate the output of the DM's deterministic sampling method. Path distillation, on the other hand, aims to produce a student ODE that has better path properties than the teacher ODE. Both output and path distillations can be used in combination to improve both the path properties and simulation efficiency of the teacher ODE.
Output Distillation
To begin with, we first recap the generative ODE equation 35
dX_t = [F(X_t, t) − (1/2) G^2(t) ∇_{X_t} log p^{(t)}(X_t)] dt, t ∈ [T, 0], p^{(T)} = p_T.
To make the discussion simpler, we consider the most naive Euler-Maruyama (EM) discretization method that solves the ODE with sequential updates
X_{t_i} = X_{t_{i+1}} + [F(X_{t_{i+1}}, t_{i+1}) − (1/2) G^2(t_{i+1}) ∇_{X_{t_{i+1}}} log p^{(t_{i+1})}(X_{t_{i+1}})] (t_i − t_{i+1}), i = N, ..., 1.
In the context of sampling from DMs, using EM methods directly as numerical solvers suffers from computational inefficiency: the EM discretization error increases significantly with the step size, leading to poorly generated samples. To address this issue, an alternative approach called output distillation has been proposed. It involves training a student neural network to learn the output of the ODE using larger step sizes. Specifically, assuming that the step size ∆t is not too small, a student continuously-indexed neural mapping S_φ^{(stu)}(x, t) is trained to approximate the change of the teacher ODE's output between times t and t − ∆t, which improves the computational efficiency of sampling from DMs by reducing the number of NFEs required for competitive sampling performance. More precisely, the student mapping approximates
∆X := X_{t−∆t} − X_t = ∫_t^{t−∆t} [F(X_s, s) − (1/2) G^2(s) ∇_{X_s} log p^{(s)}(X_s)] ds. (36)
The residual in equation 36 is tackled using numerical methods with sufficiently small step sizes. In order to overcome the computational inefficiency of this approach, a student network S_φ^{(stu)} is introduced as a distilled time-dependent vector field that approximates the non-linear residual in equation 36. By leveraging the expressive power of neural networks, empirical studies have shown that properly designed diffusion-to-field techniques can result in a new sampling ODE that requires only one NFE but still yields comparable generative performance in terms of Fréchet Inception Distance (FID). It is worth noting that, in the following sections, we use θ to denote the parameters of teacher models (DMs) and φ to represent those of student models unless otherwise specified. Luhman and Luhman (2021) propose a knowledge distillation (KD) strategy to distill a DDIM sampler into a Gaussian model that requires only one NFE when sampling. DDIM is a typical deterministic sampling strategy for VP diffusion models. Assume ε_θ(x, t) is the DM's noise-prediction network as discussed in equation 32; the DDIM sampler (Song et al., 2020a) is a deterministic sampler with the sequential updates
X_{t_{i−1}} = √α_{t_{i−1}} (X_{t_i} − √(1 − α_{t_i}) ε_θ(X_{t_i})) / √α_{t_i} + √(1 − α_{t_{i−1}}) ε_θ(X_{t_i}). (37)
DDIM is a deterministic sampling strategy that has been shown to significantly reduce the number of NFEs required for generating samples, with as few as 100 NFEs being sufficient for maintaining good generative performance. In contrast to the generative ODE method, DDIM is widely regarded as a superior deterministic sampling strategy. In their work, Luhman and Luhman (2021) propose to use a conditional Gaussian model as the student generative model
p_stu(x_0 | x_T) = N(x_0; f_φ(x_T), I). (38)
The neural network f_φ(·) has the same input and output dimensions as the data. To implement it, the authors choose the architecture of f_φ to be the same as that of the score network in the teacher DM. Let DDIM(·) denote the deterministic mapping induced by the DDIM sampler; they took
p_teacher(x_0 | x_T) = N(x_0; DDIM(x_T), I).
To train the student model, they propose to minimize the conditional KL divergence between the student model and the DDIM sampler
L(φ) = E_{x_T ∼ N(0,I)}[ D_KL( p_teacher(x_0 | x_T) || p_stu(x_0 | x_T) ) ] (39)
= E_{x_T ∼ N(0,I)}[ (1/2) ||f_φ(x_T) − DDIM(x_T)||_2^2 ]. (40)
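A minimal sketch of the KD objective equation 40 follows; ddim_mapping stands for the (expensive) deterministic teacher mapping DDIM(·) and is assumed to be given.

```python
import torch

def kd_loss(f_phi, ddim_mapping, batch_size, data_shape):
    # Equation 40: the one-step student f_phi regresses the final output
    # of the multi-step DDIM teacher mapping.
    x_T = torch.randn(batch_size, *data_shape)
    with torch.no_grad():
        target = ddim_mapping(x_T)       # hundreds of teacher NFEs
    # mean square error; equals the Gaussian KL up to a constant scaling
    return 0.5 * ((f_phi(x_T) - target) ** 2).mean()
```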
The student model sampling strategy proposed by Luhman and Luhman (2021) is simple: draw a Gaussian random variable x_T from a normal distribution with mean 0 and identity covariance matrix, then obtain the sample x_0 as the mean vector of the student model by passing x_T through the neural network f_φ. This results in a one-NFE sampling model with an FID of 9.39, while the teacher generative ODE has an FID of 4.16. Although this method provides a first step towards knowledge distillation for diffusion models, it is computationally inefficient: generating the final outputs of DDIM or another ODE sampler, which consists of hundreds of NFEs, is required to compute a single batch of training data. The progressive distillation (PD) strategy proposed by Salimans and Ho (2022) aims to train a student neural network that requires half the number of NFEs of the teacher model by learning the two-step prediction of the teacher DM's deterministic sampling strategy. The teacher diffusion model is discretized to N time-stamps, denoted by T = {t_i : i = 0, 1, ..., N − 1}, while T' = {t_0, t_2, t_4, ...} represents the N/2 even time-stamps of T. The student network is denoted by f_φ(x, t), and the update of the DDIM method from a time-stamp t_j to another time-stamp t_i in the teacher DM is denoted by DDIM(x, t_j, t_i) with i ≤ j. PD trains the student network by minimizing
L(φ) = E_{x_0 ∼ p_d, i ∼ Unif(T'), ε ∼ N(0,I)}[ ||f_φ(x̃, t_i) − DDIM(x̃, t_i, t_{i−2})|| ], (41)
where x̃ = √α_{t_i} x_0 + √(1 − α_{t_i}) ε is the forward diffused data at time t_i of VP diffusion.
By minimizing the PD objective equation 41, the student network learns to output the two-step prediction of the teacher model, so the total number of NFEs is halved. After the student model is trained to accurately predict the teacher model's two-step sampling strategy, it replaces the teacher model, and a new student model is trained to further reduce the number of sampling steps by half. The authors used the same UNet architecture as the teacher model's score network and the DDIM method as the initial teacher sampling strategy in their implementation of PD. Their results showed that successive PD rounds can reduce the required sampling to only 4 NFEs, making it 250 times more efficient than the teacher diffusion's ODE sampler, with a 5% drop in generative performance as measured by FID. Both PD and KD use the teacher network's architecture to distill a few-step sampling method by learning from the many-step teacher sampling method, and both are implemented by minimizing the L_2 error between the multi-step prediction of the teacher network and the single-step prediction of the student network. The key difference is that PD progressively reduces the number of required function evaluations, while KD directly trains a one-step student model for the final prediction. Thus, KD can be seen as an extreme PD method that reduces the teacher sampling strategy's full time-stamps to one in a single round.
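A minimal sketch of one PD training step in the spirit of equation 41 follows; teacher_ddim is an assumed one-step teacher DDIM update, ts is the list of teacher time-stamps, and vp_alpha is the schedule helper sketched earlier.

```python
import torch

def pd_loss(student, teacher_ddim, x0, ts):
    # Equation 41: the student learns the teacher's two-step DDIM jump
    # t_i -> t_{i-1} -> t_{i-2} in a single evaluation.
    i = int(torch.randint(2, len(ts), (1,)))
    a = vp_alpha(torch.as_tensor(ts[i]))
    x = a.sqrt() * x0 + (1.0 - a).sqrt() * torch.randn_like(x0)
    with torch.no_grad():
        target = teacher_ddim(teacher_ddim(x, ts[i], ts[i - 1]),
                              ts[i - 1], ts[i - 2])
    return ((student(x, ts[i]) - target) ** 2).mean()
```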
A two-stage distillation strategy is proposed by Meng et al. (2022) to address the challenge of distilling knowledge from classifier-free guided conditional diffusion models such as GLIDE (Nichol et al., 2021), DALL·E-2 (Ramesh et al., 2022a), Stable Diffusion (Rombach et al., 2021), and Imagen (Saharia et al., 2022b). The key challenge is to transfer knowledge from the teacher DMs while preserving the classifier-free guidance mechanism, which successfully trains a single DM to learn both conditional and unconditional distributions. The conditional knowledge is integrated into the DM through a conditional input, while the unconditional knowledge is learned by replacing the conditional input with a None input. In the first stage of their strategy, a student conditional diffusion model with a classifier-free guidance input is trained to learn from the teacher diffusion model. The student model is implemented as a neural network with learnable parameters, which takes in an input x and a conditional context input c; for simplicity, the notation c is dropped in the following discussion. The student model is trained to match the guided teacher output by minimizing the objective
L(φ_1) = E_{w ∼ p_w, t ∼ U[0,1], x ∼ p_d}[ λ(t) ||f_{φ_1}(x_t, t, w) − x̂_θ^w(x_t)||_2^2 ]. (42)
Here λ(t) represents a pre-defined weighting function.
x̂_θ^w(x_t) = (1 + w) x̂_{c,θ}(x_t) − w x̂_θ(x_t), x_t ∼ p_t(x_t | x_0), and p_w = U[w_min, w_max].
Note that the stage-one distillation only incorporates the classifier-free guidance through an additional input w for the student model; no reduction of NFEs or efficiency-related distillation is applied. The second stage employs the progressive distillation (PD) strategy proposed by Salimans and Ho (2022) to significantly reduce the number of diffusion steps of the previously trained student model with classifier-free guidance inputs. The two-stage distillation approach is used to distill both pixel-space and latent-space classifier-free guided conditional diffusion models for various tasks, including class-conditional generation, text-guided image generation, text-guided image-to-image translation, and image inpainting. In fact, the distilled students achieved even better Fréchet Inception Distance (FID) scores than their teachers in the experiment of distilling pixel-space class-conditional generation on the ImageNet-64x64 dataset.
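A minimal sketch of the guided prediction x̂_θ^w used as the stage-one target follows; x_pred_net is an assumed x-prediction network that accepts a context of None for the unconditional branch.

```python
def guided_prediction(x_pred_net, x_t, t, context, w):
    # Stage-one distillation target: the classifier-free guided prediction
    # hat_x_w = (1 + w) * hat_x(x_t | c) - w * hat_x(x_t | None).
    cond = x_pred_net(x_t, t, context)   # conditional branch
    uncond = x_pred_net(x_t, t, None)    # unconditional branch (dropped context)
    return (1.0 + w) * cond - w * uncond
```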
In recent work, Sun et al. (2022) proposed a feature space distillation method called Classifier-based Feature Distillation (CFD) for image data. The primary motivation behind this approach was to address the challenge of directly aligning pixels in the teacher's output and a few-step student model's output, which they found to be too difficult to learn. Instead, they trained the student networks to align with the multiple-step output of the teacher models in the feature space extracted by a pre-trained classifier. This approach follows a similar distillation strategy as the PD technique.
To achieve this, they proposed minimizing the KL divergence between the predicted probability distribution (after the Softmax function) of the student model's one-step outputs and the teacher model's multiple-step outputs. They also found that incorporating additional terms such as entropy and diversity regularizations improved the distillation performance. They employed the same diffusion models and student models as Salimans and Ho (2022) and used DenseNet-201 (Huang et al., 2016) as the classifier in their implementations. Their distillation approach resulted in a student model with FID 3.80 on CIFAR10 with only 4 NFEs, which is lower than the PD implementation of Salimans and Ho (2022).
The PD strategy was also applied by Huang et al. (2022) to distill fast text-to-speech (TTS) generation models based on a pre-trained diffusion TTS model. They introduced a variance predictor and a spectrogram denoiser to improve the student model's architecture, which was specifically designed for TTS applications.
In contrast to mimicking the output of the generative ODE of diffusion models, Song et al. (2023) propose to implement output distillation by minimizing the difference of the self-consistency function of the generative ODE. They randomly diffuse a real data sample and simulate a few steps of the generative ODE to obtain another noisy sample that lies on the same ODE path. They feed the two noisy samples into a student model and minimize the difference between the outputs in order to enforce the self-consistency of the generative ODE. They name the proposed model the consistency model (CM). CM can be viewed as another output distillation method, one that utilizes the self-consistency property of the generative ODE for distillation.
For output distillation, the student network is trained to minimize the difference between its output and the output of the teacher ODE at the same time points. The output distillation is particularly useful when the teacher's ODE has a relatively simple form, such as diagonal ODEs, or when the teacher's ODE is difficult to train. In these cases, the student network can be trained to efficiently mimic the teacher ODE's output with much fewer NFEs. The trained student network can then be used for efficient sampling from the DMs.
It is worth noting that the output distillation only considers the output of the ODE at discrete time points, and thus may not fully capture the path properties of the DMs. Therefore, it may not be suitable for applications where path properties are important, such as image synthesis and editing. For such applications, path distillation can be used instead, which will be discussed in the next section.
Path Distillation
Output distillation is a technique that helps improve the sampling strategy of DMs by allowing the student neural network to mimic the multiple-step output of teacher models. In contrast, path distillation aims to refine DMs' sampling strategy to potentially have better properties. Some researchers argue that the forward (and reverse) diffusion process creates a curve in data space that connects the data distribution and prior distribution. Thus, path distillation focuses on refining the diffusion generative SDE or ODE to a straight version that has a more efficient sampling path than the teacher models. The key difference between path distillation and output distillation is that path distillation is more concerned with refining an existing teacher model's sampling strategy, whereas output distillation focuses on teaching the student to learn the skipped output of the teacher model, without changing the sampling path and mechanism of the student model.
The Reflow method, introduced by Liu et al. (2022b), is a path distillation approach that aims to improve generative speed by modifying a pre-trained teacher neural ODE through a student model. The student model straightens the path of the teacher model by minimizing a time average of L 2 loss between its outputs and interpolations of data samples and corresponding outputs from the pre-trained model. More precisely, let p T be an initial distribution
L(φ) = E_{x_T ∼ p_T, x_0 ∼ p_teacher(x_0|x_T), t ∼ U[0,T]}[ ||x_T − x_0 − f_φ( (t/T) x_T + ((T − t)/T) x_0, t )||_2^2 ]. (43)
The teacher p_teacher(x_0|x_T) can be an arbitrary teacher model, regardless of whether it is an SDE or an ODE. In the Reflow distillation proposed by Liu et al. (2022b), if the teacher model is assumed to be a DDIM sampler, then p_teacher(x_0|x_T) = δ(x_0 = DDIM(x_T)). Reflow distillation has the advantage of being repeatable for several rounds, which can further straighten the teacher model's path. Additionally, Reflow can be used in conjunction with progressive distillation as discussed in Section 2: the Reflow strategy can be applied first to straighten the teacher model's path, followed by progressive distillation to make the student model more efficient. Finally, the authors provided numerical comparisons between the ODE paths of rectified and un-rectified models. In their work, Wu et al. (2022) applied the Reflow technique to the problem of 3D point cloud generation. They proposed a three-stage procedure that enables the construction of a generative ODE capable of producing high-quality point clouds. In the first training stage, they train a teacher generative ODE f_θ by minimizing the objective
L(θ) = E_{x_0 ∼ p_d, x_T ∼ N(0,I), t ∼ U[0,T]}[ ||f_θ(x_t, t) − (x_0 − x_T)||_2^2 ], (44)
where x_t = (t/T) x_0 + ((T − t)/T) x_T is the interpolated point at time t. In the second stage, they apply the Reflow strategy to further straighten the teacher model. In the third stage, they use a student model f_φ which finally distills the multiple-step teacher model into a single-step student model by minimizing
L(φ) = E_{x_T ∼ p_T}[ Dist( x_T + f_φ(x_T, T), x_0 ) ]. (45)
Here x_0 ∼ p_teacher(x_0|x_T) is obtained from the teacher model, and Dist(·,·) represents some distance function between two point clouds; in their implementation, they use the Chamfer distance. Lee et al. (2023) proposed an objective that results in a forward process with much smaller curvature and is therefore less sensitive to truncation errors. Chen and Lipman (2023), Li et al. (2023a), and Albergo and Vanden-Eijnden (2022) further study refinements of the diffusion model's generative path. Zheng et al. (2022) proposed another view of path distillation, which learns a mapping operator that can generate the path. Fan and Lee (2023) also propose a path distillation method: they refine the generative path by fine-tuning the teacher ODE to minimize an IPM along the forward path. Aiello et al. (2023) proposed to fine-tune the generative path by minimizing the MMD between each marginal distribution of the generative path and the data.
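To make the Reflow objective equation 43 concrete, here is a minimal sketch; teacher_sampler stands for an arbitrary teacher mapping from latents to samples (e.g., a DDIM mapping) and is an assumption.

```python
import torch

def reflow_loss(v_phi, teacher_sampler, batch_size, data_shape, T=1.0):
    # Equation 43: the student vector field v_phi regresses the constant
    # displacement x_T - x_0 along the straight line joining a latent x_T
    # to the teacher's output x_0.
    x_T = torch.randn(batch_size, *data_shape)
    with torch.no_grad():
        x_0 = teacher_sampler(x_T)                        # e.g. DDIM(x_T)
    t = torch.rand(batch_size, *([1] * len(data_shape))) * T
    x_t = (t / T) * x_T + ((T - t) / T) * x_0             # linear interpolant
    return ((v_phi(x_t, t) - (x_T - x_0)) ** 2).mean()
```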
The diffusion-to-field distillation remains an active area of research in the field of efficient diffusion models. The aim is to use the knowledge gained from diffusion models and generative SDE/ODE to distill a student model with a faster sampling speed while maintaining a comparable level of generative performance to that of the teacher models.
Diffusion-to-generator Distillation
In contrast to the diffusion-to-field distillation discussed earlier, diffusion-to-generator (D2G) distillation is another important category of distillation methods. The primary goal of D2G distillation is to transfer the distributional knowledge learned by the diffusion model into an efficient generator. Unlike D2F distillation, which focuses on learning student models with the same input and output dimensions, D2G distillation usually involves implicit generators that may not have the same latent dimension as the data space. Additionally, both deterministic and stochastic generators are considered as student generators, depending on the specific application. The training objective for D2G distillation typically takes a similar form as the diffusion model's training objective, as opposed to the commonly used mean square error-based objective in D2F distillation.
Distill Deterministic Generator
There is increasing attention on distilling deterministic generators (e.g. neural radiance fields) as student models for further applications of pre-trained large-scale diffusion models. More specifically, pre-trained text-to-image diffusion models have been found notably useful for learning neural radiance fields whose contents are related to a given text prompt. The neural radiance field (NeRF) (Mildenhall et al., 2020) is a kind of 3D object which uses a multi-layer perceptron (MLP) to map coordinates of a mesh grid to volume properties such as color and density. Given camera parameters such as the viewing angles, the rendering algorithm outputs a 2D image that is a view projection of the 3D NeRF scene. From a given view, the NeRF can thus be regarded as a deterministic 2D image generator whose MLP parameters are learnable.
The limited availability of data for constructing NeRFs has motivated researchers to explore distillation methods to obtain NeRFs with contents related to given text prompts. The pioneering work by Poole et al. (2022a) proposed a method called score distillation sampling (SDS) to distill a 2D text-to-image diffusion model into 3D NeRFs. Unlike traditional NeRF construction that requires images from multiple views of the target 3D object, text-driven construction of NeRFs lacks both the 3D object and the multiple views.
The SDS method optimizes the NeRF by minimizing the diffusion model's loss function on NeRF-rendered images from a fixed view. To avoid the computational expense of back-propagating through the diffusion model, the researchers propose to approximate the distillation gradient by omitting the UNet Jacobian term. Specifically, the rendered NeRF image from a fixed view with learnable parameters φ is represented by x = g(φ), and the forward diffusion sample is denoted as x_t = √α_t x + √(1 − α_t) ε as in equation 23. The trained text-conditional diffusion model is denoted by ε_θ(x, t, c), where t and c represent the time-stamp and the text prompt. SDS updates the parameter φ with the gradient
Grad(φ) = ∂L(φ)/∂φ = E_{t, ε, x=g(φ)}[ w(t) (ε_θ(x_t, t, c) − ε) ∂x/∂φ ]. (46)
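A minimal sketch of one SDS update in the spirit of equation 46 follows, with the UNet Jacobian omitted by detaching the residual; eps_net, render, and the VP schedule helper vp_alpha are assumptions for illustration.

```python
import torch

def sds_step(eps_net, render, c, w=1.0):
    # Equation 46 with the UNet Jacobian omitted: the residual is detached,
    # so gradients flow only through the rendered image x = g(phi) and
    # accumulate into whatever NeRF parameters render() depends on.
    x = render()                          # differentiable rendering of the NeRF
    t = torch.rand(())
    a = vp_alpha(t)                       # helper from the earlier sketch
    eps = torch.randn_like(x)
    x_t = a.sqrt() * x + (1.0 - a).sqrt() * eps
    with torch.no_grad():
        residual = w * (eps_net(x_t, t, c) - eps)
    # d/dphi of <residual, x> with residual detached equals residual * dx/dphi
    (residual * x).sum().backward()
```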
The work by Poole et al. (2022a) achieved remarkable results in the task of generating 3D NeRFs from text prompts, which has spurred further research on the distillation of diffusion models into deterministic generators. Several works have extended the SDS approach to other applications (Lin et al., 2022; Deng et al., 2022; Xu et al., 2022; Singer et al., 2023). In particular, follow-up work has conducted a thorough investigation of the tunable parameters of the SDS algorithm for text-to-3D generation. Meanwhile, Lin et al. (2022) have proposed a two-stage optimization strategy that further enhances the performance of SDS on high-resolution images: the first stage is similar to that of Poole et al. (2022a) and obtains a low-resolution NeRF, while the second stage up-scales the trained NeRF from the first stage to a higher resolution and fine-tunes it for better performance.
The successful application of distillation to deterministic generators, particularly in the domain of 3D generative modeling, has been widely acknowledged. However, the research in developing better-performing and more efficient distillation strategies is still largely unexplored, making it a hot research topic that demands further exploration.
Distill Stochastic Generator
In this section, we will discuss distillation strategies for stochastic generators, also known as implicit generative models. These models have been widely used in generative modeling and differ from deterministic generators in that they use neural transforms to map latent vectors to generate data samples with randomness. Stochastic generators have shown great success in the past decade (Goodfellow et al., 2014;Karras et al., 2019;Brock et al., 2018), with their advantages being fast inference speed, low inference memory, and lightweight models. Distilling diffusion models into stochastic generators has been motivated by the need for extreme inference efficiency.
The work by Luhman and Luhman (2021) can be interpreted as a type of score distillation for stochastic generators. Specifically, they employ a UNet, which has the same neural architecture as the pre-trained diffusion model, to map a latent vector to the mean of a Gaussian output. The dimensionality of their latent vector matches that of the output data, and they assume unit variance for the Gaussian conditional generation. To train the student model, they minimize the conditional KL divergence between the generator's output and a deterministic sampler for the diffusion model. Since they use a conditional Gaussian output, minimizing the KL divergence is equivalent to minimizing the mean square error between the model's mean and the sampler's output, as discussed in Section 2.
Accelerated Sampling Algorithms as Diffusion Distillation
The diffusion model has the unique characteristic of separating the training and sampling processes. During training, the diffusion model does not require sampling, and during sampling, there is a high level of flexibility to design and improve the process. Typically, the diffusion model is trained with all discrete or continuous noise levels, but sampling from it does not necessarily require querying all diffusion time levels. Recent research has shown that using a small subset of diffusion time levels can significantly accelerate the sampling process with much fewer NFEs. These accelerated sampling algorithms can be viewed as generalized distillations of diffusion models, in which a model trained with all time levels is deployed with only a few. We can further categorize these algorithms into training-free and training-based accelerating algorithms as two distinct categories of diffusion distillation.
To train a diffusion model, a neural score network is trained to match the marginal score functions of a data diffusion process. As Song et al. (2020d) pointed out, the reversed SDE equation 34 and ODE equation 35 share the same marginal distribution as the forward diffusion and can be used for sampling if simulated backward. Therefore, the reversed ODE or SDE serves as the starting point for sampling from diffusion models. Formally, the most naive simulation of reversed SDE uses an Euler-Maruyama discretization with a formula
X_{t−1} = X_t − [F(X_t, t) − G^2(t) S_θ(X_t, t)] ∆t + G(t) √∆t z_t, z_t ∼ N(0, I), t = T, ..., 1. (47)
Here S_θ(x, t) represents the trained score network, i.e. the teacher model. The simple sampler in equation 47 updates a batch of samples by simulating the reversed SDE, visiting all diffusion time levels sequentially. However, this all-levels sampling strategy suffers from poor computational efficiency. In the following sections, we introduce both training-based and training-free accelerated sampling algorithms, which aim to improve the sampling speed of diffusion models by utilizing a smaller subset of diffusion time levels.
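A minimal sketch of the Euler-Maruyama sampler equation 47 follows; the coefficients f and g of the forward SDE are assumed to be supplied by the caller.

```python
import torch

@torch.no_grad()
def reverse_sde_sample(score_net, x_T, f, g, n_steps=1000, T=1.0):
    # Euler-Maruyama simulation of the reversed SDE (equation 47),
    # stepping backward from t = T to t = 0 through every time level.
    x, dt = x_T, T / n_steps
    for i in range(n_steps, 0, -1):
        t = i * dt
        drift = f(x, t) - g(t) ** 2 * score_net(x, t)
        x = x - drift * dt + g(t) * (dt ** 0.5) * torch.randn_like(x)
    return x
```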
Training-based Accelerating Algorithms
The selection of appropriate diffusion time levels is a crucial problem for accelerating sampling algorithms. Previous works have shown that it is possible to construct more efficient samplers that have comparable performance to the naive sampler by using only a subset of diffusion time levels. Watson et al. (2022) addressed this problem by proposing to solve a dynamic programming problem with the Fréchet Inception Distance (FID) metric as the target. They learned a scheduler model to choose the subset of steps to be used in the sampling process, resulting in better efficiency.
To improve the generative performance of diffusion models, Bao et al. (2022a) proposed to train additional covariance networks. The idea is to capture the full covariance structure of the latent-space distribution, instead of assuming a diagonal covariance as in the original diffusion model. By doing so, the model can better capture the complex correlation structure of the data and generate more realistic samples. The authors achieved promising results on various datasets and showed that their method outperforms the baseline diffusion model with diagonal covariance.
The approach proposed by Kim et al. (2022) involves using a likelihood-ratio-estimation model, which is essentially a discriminator, to learn the difference between the ground-truth score function and the model's score function. This difference, obtained by differentiating through the learned discriminator, is then combined with the DM's score function to yield a corrected score function. The resulting refined DM is expected to offer improved generative performance even with fewer NFEs.
Training-free Accelerating Algorithms
The training-free algorithms aim to achieve comparable generative performance with far fewer diffusion time levels. The most distinctive feature of training-free accelerating algorithms is that they design faster sampling algorithms without training any new parametric models, relying only on inference queries to pre-trained DMs. Most such algorithms center around different numerical solvers for the generative SDE or ODE.
To begin with, we start with the denoising diffusion probabilistic models (DDPM) (Ho et al., 2020a). The DDPM implements a discretized version of VP diffusion and uses a discretized sampling scheme with
x_{t_{i−1}} = (1/√α_{t_i}) ( x_{t_i} − ((1 − α_{t_i})/√(1 − ᾱ_{t_i})) ε_θ(x_{t_i}, t_i) ) + σ_{t_i} z_i. (48)
Here α_{t_i} = 1 − β_{t_i} and ᾱ_{t_i} = ∏_{j=1}^{i} α_{t_j} ≈ e^{−∫_0^{t_i} β(s) ds} is a discretized implementation of the integral. σ_{t_i} is an arbitrary noise level and z_i is an independent Gaussian noise. The DDPM uses all 1000 diffusion levels (i.e. {t_1, ..., t_{1000}}) for sampling. Song et al. (2020a) re-formulated the derivation of DDPM under a non-Markovian forward diffusion model, resulting in a new family of samplers
x_{t_{i−1}} = √α_{t_{i−1}} ( (x_{t_i} − √(1 − α_{t_i}) ε_θ(x_{t_i}, t_i)) / √α_{t_i} ) + √(1 − α_{t_{i−1}} − σ_{t_i}^2) ε_θ(x_{t_i}, t_i) + σ_{t_i} z_{t_i}. (49)
The term α_{t_i} has the same meaning as in DDPM's sampling algorithm. The σ_{t_i} is a free hyper-parameter that controls the strength of randomness in the DDIM sampler. When σ_{t_i} = √((1 − α_{t_{i−1}})/(1 − α_{t_i})) √(1 − α_{t_i}/α_{t_{i−1}}), the DDIM sampler coincides with the DDPM sampler; the sampler is deterministic when σ_{t_i} = 0. The DDIM sampler requires the same pre-trained DM as the DDPM sampler and is shown to retain the generative performance while querying fewer diffusion time levels for sampling. The work of Bao et al. (2022b) showed that there is an optimal reverse variance σ_{t_i} in the DDIM sampler family that minimizes the KL divergence between the forward and reversed Markov chains of DDIM. Moreover, they derived an explicit expression for this optimal variance of the form
σ_n^{*2} = λ_n^2 + ( √(β̄_n/α_n) − √(β̄_{n−1} − λ_n^2) )^2 ( 1 − β̄_n E_{q_n(x_n)} ||∇_{x_n} log q_n(x_n)||^2 / d ). (50)
They claimed that the DDIM sampler with such optimal variance leads to improved sampling regardless of the pre-trained diffusion model. In practice, the optimal variance is estimated with the Monte Carlo method with the help of pre-trained DMs, via
Γ_n = (1/M) Σ_{m=1}^{M} ||s_n(x_{n,m})||^2 / d, x_{n,m} ∼_iid q_n(x_n), (51)
σ_n^2 = λ_n^2 + ( √(β̄_n/α_n) − √(β̄_{n−1} − λ_n^2) )^2 ( 1 − β̄_n Γ_n ). (52)
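A minimal sketch of the DDIM sampler equation 49 follows; ts is a decreasing subset of time-stamps and alpha is an assumed schedule lookup, so fewer entries in ts directly means fewer NFEs.

```python
import torch

@torch.no_grad()
def ddim_sample(eps_net, x, ts, alpha, sigma=0.0):
    # Equation 49; sigma = 0 recovers the deterministic DDIM sampler.
    for t_cur, t_prev in zip(ts[:-1], ts[1:]):
        a_cur, a_prev = alpha(t_cur), alpha(t_prev)
        eps = eps_net(x, t_cur)
        x0_hat = (x - (1 - a_cur) ** 0.5 * eps) / a_cur ** 0.5   # predicted x_0
        x = a_prev ** 0.5 * x0_hat + (1 - a_prev - sigma ** 2) ** 0.5 * eps
        if sigma > 0:
            x = x + sigma * torch.randn_like(x)
    return x
```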
Liu et al. (2022a), Zhang and Chen (2022), and Lu et al. (2022a) exploit the semi-linear structure of VP's reversed ODE
dX_t/dt = F(t) X_t − (1/2) G^2(t) ∇_{X_t} log p^{(t)}(X_t), (53)
where F(t) = −(1/2)β(t) and G(t) = √β(t) for VP diffusion. They applied an exponential integrator technique that further simplifies VP's reversed ODE as
x_t = (α_t/α_s) X_s − α_t ∫_s^t (dλ_τ/dτ) (σ_τ/α_τ) ε_θ(X_τ, τ) dτ, (54)
where λ_t := log(α_t/σ_t) is the log-SNR function. Lu et al. (2022a) further proposed a change-of-variable trick and a Taylor expansion of the simplified formula, obtaining higher-order solvers for the VP reversed ODE. Higher-order SDE solvers have been proposed by Jolicoeur-Martineau et al. (2021) as an alternative to EM discretization, and they have demonstrated improved sampling performance in terms of FID. Building on this work, Karras et al. (2022) suggested better neural preconditioning of DMs along with second-order Heun discretization for simulating the reversed ODE and SDE. Their method set a new record for the generative performance of DMs with a score of 1.79 FID.
Other research works have also explored the use of advanced numerical solvers for DMs. For example, Li et al. (2023b), Wizadwongsa and Suwajanakorn (2023), and Lu et al. (2022b) have investigated the potential of different numerical solvers to accelerate DMs' inference efficiency. The key idea of training-free accelerated sampling algorithms is to achieve faster sampling without additional training of parametric models. However, even with the best-performing training-free methods, more than 10 NFEs are typically required to achieve comparable generative performance. Zhao et al. (2023) propose a unified framework that introduces prediction-correction-type numerical solvers to analyze and design higher-order ODE solvers for diffusion models; their UniPC framework includes many well-studied solvers as special cases.
Conclusion
This paper provides a comprehensive review of existing techniques for knowledge distillation of diffusion models. Our review is organized into three categories: diffusion-to-field distillation, diffusion-to-generator distillation, and accelerating algorithms as diffusion distillation. For each category, we discuss several landmark methodologies and their applications. Given the growing popularity of diffusion models in recent years, knowledge distillation of diffusion models has become an increasingly important research area that can impact the efficient application of these models. Although significant progress has been made in these areas, there is still much room for improvement in the efficiency and effectiveness of knowledge distillation of diffusion models.
Figure 1: Forward and Reversed SDE of Diffusion Models. The figure is taken from Song et al. (2020d).
Figure 2: Knowledge Distillation Strategy proposed in Luhman and Luhman (2021).
Figure 3: Progressive Distillation Strategy proposed in Salimans and Ho (2022).
Figure 4: Reflow Strategy for path distillation. The figure is taken from Liu et al. (2022b).
Figure 5: Reflow Strategy for path distillation. The figure is taken from Liu et al. (2022b).
Figure 6: Score Distillation Sampling method. The figure is taken from Poole et al. (2022b).
Emanuele Aiello, Diego Valsesia, and Enrico Magli. Fast inference in denoising diffusion models via MMD finetuning. ArXiv, abs/2301.07969, 2023.
M. S. Albergo and Eric Vanden-Eijnden. Building normalizing flows with stochastic interpolants. ArXiv, abs/2209.15571, 2022.
Fan Bao, Chongxuan Li, Jiacheng Sun, Jun Zhu, and Bo Zhang. Estimating the optimal covariance with imperfect mean in diffusion probabilistic models. In International Conference on Machine Learning, 2022a.
Fan Bao, Chongxuan Li, Jun Zhu, and Bo Zhang. Analytic-DPM: an analytic estimate of the optimal reverse variance in diffusion probabilistic models. ArXiv, abs/2201.06503, 2022b.
Fan Bao, Chongxuan Li, Jun Zhu, and Bo Zhang. Analytic-DPM: an analytic estimate of the optimal reverse variance in diffusion probabilistic models. arXiv preprint arXiv:2201.06503, 2022c.
Dmitry Baranchuk, Ivan Rubachev, Andrey Voynov, Valentin Khrulkov, and Artem Babenko. Label-efficient semantic segmentation with diffusion models. arXiv preprint arXiv:2112.03126, 2021.
Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. ArXiv, abs/1809.11096, 2018.
Nanxin Chen, Yu Zhang, Heiga Zen, Ron J Weiss, Mohammad Norouzi, and William Chan. WaveGrad: Estimating gradients for waveform generation. arXiv preprint arXiv:2009.00713, 2020.
Ricky T. Q. Chen and Yaron Lipman. Riemannian flow matching on general geometries. 2023.
Congyue Deng, Chiyu Max Jiang, C. Qi, Xinchen Yan, Yin Zhou, Leonidas J. Guibas, and Drago Anguelov. NeRDi: Single-view NeRF synthesis with language-guided diffusion as general image priors. ArXiv, abs/2212.03267, 2022.
Prafulla Dhariwal and Alexander Nichol. Diffusion models beat GANs on image synthesis. Advances in Neural Information Processing Systems, 34:8780-8794, 2021.
Ying Fan and Kangwook Lee. Optimizing DDPM sampling with shortcut fine-tuning. ArXiv, abs/2301.13362, 2023.
Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron C. Courville, and Yoshua Bengio. Generative adversarial nets. In NIPS, 2014.
Alex Graves. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850, 2013.
W. Keith Hastings. Monte Carlo sampling methods using Markov chains and their applications. Biometrika, 57(1):97-109, 1970.
Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. Advances in Neural Information Processing Systems, 30, 2017.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
Jonathan Ho, Ajay Jain, and P. Abbeel. Denoising diffusion probabilistic models. ArXiv, abs/2006.11239, 2020a.
Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840-6851, 2020b.
Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J Fleet. Video diffusion models. arXiv preprint arXiv:2204.03458, 2022.
Emiel Hoogeboom, Víctor Garcia Satorras, Clément Vignac, and Max Welling. Equivariant diffusion for molecule generation in 3D. In International Conference on Machine Learning, pages 8867-8887. PMLR, 2022.
Gao Huang, Zhuang Liu, and Kilian Q. Weinberger. Densely connected convolutional networks. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2261-2269, 2016.
Rongjie Huang, Zhou Zhao, Huadai Liu, Jinglin Liu, Chenye Cui, and Yi Ren. ProDiff: Progressive fast diffusion model for high-quality text-to-speech. Proceedings of the 30th ACM International Conference on Multimedia, 2022.
Aapo Hyvärinen. Estimation of non-normalized statistical models by score matching. J. Mach. Learn. Res., 6:695-709, 2005.
Alexia Jolicoeur-Martineau, Ke Li, Remi Piche-Taillefer, Tal Kachman, and Ioannis Mitliagkas. Gotta go fast when generating data with score-based models. ArXiv, abs/2105.14080, 2021.
Rafal Jozefowicz, Oriol Vinyals, Mike Schuster, Noam Shazeer, and Yonghui Wu. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.
Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the design space of diffusion-based generative models. In Advances in Neural Information Processing Systems.
Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of StyleGAN. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 8107-8116, 2019.
Tero Karras, Miika Aittala, Timo Aila, and Samuli Laine. Elucidating the design space of diffusion-based generative models. ArXiv, abs/2206.00364, 2022.
Dongjun Kim, Yeongmin Kim, Wanmo Kang, and Il-Chul Moon. Refining generative process with discriminator guidance in score-based diffusion models. ArXiv, abs/2211.17091, 2022.
Yann LeCun, Sumit Chopra, Raia Hadsell, Aurelio Ranzato, and Fu Jie Huang. A tutorial on energy-based learning. 2006.
Sangyun Lee, Beomsu Kim, and Jong-Chul Ye. Minimizing trajectory curvature of ODE-based generative models. ArXiv, abs/2301.12003, 2023.
Lingxiao Li, Samuel Hurault, and Justin M. Solomon. Self-consistent velocity matching of probability flows. ArXiv, abs/2301.13737, 2023a.
Shengmeng Li, Luping Liu, Zenghao Chai, Runnan Li, and Xu Tan. ERA-Solver: Error-robust Adams solver for fast sampling of diffusion probabilistic models. ArXiv, abs/2301.12935, 2023b.
Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, and Tsung-Yi Lin. Magic3D: High-resolution text-to-3D content creation. ArXiv, abs/2211.10440, 2022.
Luping Liu, Yi Ren, Zhijie Lin, and Zhou Zhao. Pseudo numerical methods for diffusion models on manifolds. In International Conference on Learning Representations.
Luping Liu, Yi Ren, Zhijie Lin, and Zhou Zhao. Pseudo numerical methods for diffusion models on manifolds. ArXiv, abs/2202.09778, 2022a.
Xingchao Liu, Chengyue Gong, and Qiang Liu. Flow straight and fast: Learning to generate and transfer data with rectified flow. ArXiv, abs/2209.03003, 2022b.
Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. DPM-Solver: A fast ODE solver for diffusion probabilistic model sampling in around 10 steps. ArXiv, abs/2206.00927, 2022a.
Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu. DPM-Solver++: Fast solver for guided sampling of diffusion probabilistic models. ArXiv, abs/2211.01095, 2022b.
Eric Luhman and Troy Luhman. Knowledge distillation in iterative generative models for improved sampling speed. ArXiv, abs/2101.02388, 2021.
Chenlin Meng, Ruiqi Gao, Diederik P. Kingma, Stefano Ermon, Jonathan Ho, and Tim Salimans. On distillation of guided diffusion models. ArXiv, abs/2210.03142, 2022.
Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. In European Conference on Computer Vision, 2020.
Eyal Molad, Eliahu Horwitz, Dani Valevski, Alex Rav Acha, Yossi Matias, Yael Pritch, Yaniv Leviathan, and Yedid Hoshen. Dreamix: Video diffusion models are general video editors. arXiv preprint arXiv:2302.01329, 2023.
Radford M Neal et al. MCMC using Hamiltonian dynamics. Handbook of Markov Chain Monte Carlo, 2(11):2, 2011.
Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob McGrew, Ilya Sutskever, and Mark Chen. GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models. In International Conference on Machine Learning, 2021.
Alexander Quinn Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. In International Conference on Machine Learning, pages 8162-8171. PMLR, 2021.
Aaron Oord, Yazhe Li, Igor Babuschkin, Karen Simonyan, Oriol Vinyals, Koray Kavukcuoglu, George Driessche, Edward Lockhart, Luis Cobo, Florian Stimberg, et al. Parallel WaveNet: Fast high-fidelity speech synthesis. In International Conference on Machine Learning, pages 3918-3926. PMLR, 2018.
Tianyu Pang, Kun Xu, Chongxuan Li, Yang Song, Stefano Ermon, and Jun Zhu. Efficient learning of generative models via finite-difference score matching. ArXiv, abs/2007.03317, 2020.
Ben Poole, Ajay Jain, Jonathan T. Barron, and Ben Mildenhall. DreamFusion: Text-to-3D using 2D diffusion. ArXiv, abs/2209.14988, 2022a.
Ben Poole, Ajay Jain, Jonathan T Barron, and Ben Mildenhall. DreamFusion: Text-to-3D using 2D diffusion. arXiv preprint arXiv:2209.14988, 2022b.
Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with CLIP latents. ArXiv, abs/2204.06125, 2022a.
Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with CLIP latents. arXiv preprint arXiv:2204.06125, 2022b.
Christian P. Robert and George Casella. Monte Carlo Statistical Methods, volume 2. Springer, 1999.
Gareth O Roberts and Jeffrey S Rosenthal. Optimal scaling of discrete approximations to Langevin diffusions. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 60(1):255-268, 1998.
Gareth O Roberts and Richard L Tweedie. Exponential convergence of Langevin distributions and their discrete approximations. Bernoulli, pages 341-363, 1996.
Robin Rombach, A. Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 10674-10685, 2021.
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684-10695, 2022.
Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-to-image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487, 2022a.
Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L. Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, Seyedeh Sara Mahdavi, Raphael Gontijo Lopes, Tim Salimans, Jonathan Ho, David J. Fleet, and Mohammad Norouzi. Photorealistic text-to-image diffusion models with deep language understanding. ArXiv, abs/2205.11487, 2022b.
Tim Salimans and Jonathan Ho. Progressive distillation for fast sampling of diffusion models. ArXiv, abs/2202.00512, 2022.
Textto-4d dynamic scene generation. Uriel Singer, Shelly Sheynin, Adam Polyak, Oron Ashual, Iurii Makarov, Filippos Kokkinos, Naman Goyal, Andrea Vedaldi, Devi Parikh, Justin Johnson, Yaniv Taigman, abs/2301.11280ArXiv. Uriel Singer, Shelly Sheynin, Adam Polyak, Oron Ashual, Iurii Makarov, Filippos Kokkinos, Naman Goyal, Andrea Vedaldi, Devi Parikh, Justin Johnson, and Yaniv Taigman. Text- to-4d dynamic scene generation. ArXiv, abs/2301.11280, 2023.
Deep unsupervised learning using nonequilibrium thermodynamics. Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, Surya Ganguli, International Conference on Machine Learning. PMLRJascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, pages 2256-2265. PMLR, 2015.
Denoising diffusion implicit models. ArXiv, abs. Jiaming Song, Chenlin Meng, Stefano Ermon, Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. ArXiv, abs/2010.02502, 2020a.
. Jiaming Song, Chenlin Meng, Stefano Ermon, arXiv:2010.02502Denoising diffusion implicit models. arXiv preprintJiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020b.
Generative modeling by estimating gradients of the data distribution. Yang Song, Stefano Ermon, Advances in neural information processing systems. 32Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. Advances in neural information processing systems, 32, 2019.
Sliced score matching: A scalable approach to density and score estimation. Yang Song, Sahaj Garg, Jiaxin Shi, Stefano Ermon, Conference on Uncertainty in Artificial Intelligence. Yang Song, Sahaj Garg, Jiaxin Shi, and Stefano Ermon. Sliced score matching: A scalable approach to density and score estimation. In Conference on Uncertainty in Artificial Intelligence, 2019.
Score-based generative modeling through stochastic differential equations. Yang Song, Jascha Sohl-Dickstein, P Diederik, Abhishek Kingma, Stefano Kumar, Ben Ermon, Poole, arXiv:2011.13456arXiv preprintYang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020c.
Score-based generative modeling through stochastic differential equations. ArXiv, abs. Yang Song, Diederik P Jascha Narain Sohl-Dickstein, Abhishek Kingma, Stefano Kumar, Ben Ermon, Poole, Yang Song, Jascha Narain Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. ArXiv, abs/2011.13456, 2020d.
Consistency models. Yang Song, Prafulla Dhariwal, Mark Chen, Ilya Sutskever, Yang Song, Prafulla Dhariwal, Mark Chen, and Ilya Sutskever. Consistency models, 2023. URL https://arxiv.org/abs/2303.01469.
Accelerating diffusion sampling with classifier-based feature distillation. Wujie Sun, Defang Chen, Can Wang, Deshi Ye, Yan Feng, Chun Chen, abs/2211.12039ArXiv. Wujie Sun, Defang Chen, Can Wang, Deshi Ye, Yan Feng, and Chun Chen. Accelerating diffusion sampling with classifier-based feature distillation. ArXiv, abs/2211.12039, 2022.
Training data-efficient image transformers & distillation through attention. Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou, International conference on machine learning. PMLRHugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Hervé Jégou. Training data-efficient image transformers & distillation through attention. In International conference on machine learning, pages 10347-10357. PMLR, 2021.
Pixel recurrent neural networks. Aäron Van Den, Nal Oord, Koray Kalchbrenner, Kavukcuoglu, International conference on machine learning. PMLRAäron Van Den Oord, Nal Kalchbrenner, and Koray Kavukcuoglu. Pixel recurrent neural networks. In International conference on machine learning, pages 1747-1756. PMLR, 2016.
A connection between score matching and denoising autoencoders. Pascal Vincent, Neural Computation. 23Pascal Vincent. A connection between score matching and denoising autoencoders. Neural Computation, 23:1661-1674, 2011.
Score jacobian chaining: Lifting pretrained 2d diffusion models for 3d generation. Haochen Wang, Xiaodan Du, Jiahao Li, Raymond A Yeh, Greg Shakhnarovich, abs/2212.00774ArXiv. Haochen Wang, Xiaodan Du, Jiahao Li, Raymond A. Yeh, and Greg Shakhnarovich. Score jacobian chaining: Lifting pretrained 2d diffusion models for 3d generation. ArXiv, abs/2212.00774, 2022.
Learning fast samplers for diffusion models by differentiating through sample quality. Daniel Watson, William Chan, Jonathan Ho, Mohammad Norouzi, abs/2202.05830ArXiv. Daniel Watson, William Chan, Jonathan Ho, and Mohammad Norouzi. Learning fast samplers for diffusion models by differentiating through sample quality. ArXiv, abs/2202.05830, 2022.
Accelerating guided diffusion sampling with splitting numerical methods. ArXiv, abs/2301.11558. Suttisak Wizadwongsa, Supasorn Suwajanakorn, Suttisak Wizadwongsa and Supasorn Suwajanakorn. Accelerating guided diffusion sampling with splitting numerical methods. ArXiv, abs/2301.11558, 2023.
Fast point cloud generation with straight flows. Lemeng Wu, Dilin Wang, Chengyue Gong, Xingchao Liu, Yunyang Xiong, Rakesh Ranjan, Raghuraman Krishnamoorthi, Vikas Chandra, Qiang Liu, abs/2212.01747ArXiv. Lemeng Wu, Dilin Wang, Chengyue Gong, Xingchao Liu, Yunyang Xiong, Rakesh Ranjan, Raghuraman Krishnamoorthi, Vikas Chandra, and Qiang Liu. Fast point cloud generation with straight flows. ArXiv, abs/2212.01747, 2022.
Langevin diffusions and the metropolis-adjusted langevin algorithm. Tatiana Xifara, Chris Sherlock, Samuel Livingstone, Simon Byrne, Mark Girolami, Statistics & Probability Letters. 91Tatiana Xifara, Chris Sherlock, Samuel Livingstone, Simon Byrne, and Mark Girolami. Langevin diffusions and the metropolis-adjusted langevin algorithm. Statistics & Probability Letters, 91:14-19, 2014.
Neurallift-360: Lifting an in-the-wild 2d photo to a 3d object with 360°views. Dejia Xu, Yifan Jiang, Peihao Wang, Zhiwen Fan, Yi Wang, Zhangyang Wang, abs/2211.16431ArXiv. Dejia Xu, Yifan Jiang, Peihao Wang, Zhiwen Fan, Yi Wang, and Zhangyang Wang. Neurallift- 360: Lifting an in-the-wild 2d photo to a 3d object with 360°views. ArXiv, abs/2211.16431, 2022.
Fast sampling of diffusion models with exponential integrator. ArXiv, abs. Qinsheng Zhang, Yongxin Chen, Qinsheng Zhang and Yongxin Chen. Fast sampling of diffusion models with exponential integrator. ArXiv, abs/2204.13902, 2022.
Unipc: A unified predictorcorrector framework for fast sampling of diffusion models. Wenliang Zhao, Lujia Bai, Yongming Rao, Jie Zhou, Jiwen Lu, arXiv:2302.04867arXiv preprintWenliang Zhao, Lujia Bai, Yongming Rao, Jie Zhou, and Jiwen Lu. Unipc: A unified predictor- corrector framework for fast sampling of diffusion models. arXiv preprint arXiv:2302.04867, 2023.
Fast sampling of diffusion models via operator learning. Hongkai Zheng, Weili Nie, Arash Vahdat, Kamyar Azizzadenesheli, Anima Anandkumar, abs/2211.13449ArXiv. Hongkai Zheng, Weili Nie, Arash Vahdat, Kamyar Azizzadenesheli, and Anima Anandkumar. Fast sampling of diffusion models via operator learning. ArXiv, abs/2211.13449, 2022.
Filters, random fields and maximum entropy (frame): Towards a unified theory for texture modeling. Song-Chun, Ying Nian Zhu, David Wu, Mumford, International Journal of Computer Vision. 27Song-Chun Zhu, Ying Nian Wu, and David Mumford. Filters, random fields and maximum entropy (frame): Towards a unified theory for texture modeling. International Journal of Computer Vision, 27:107-126, 1998.
| [] |
[
"LOW-DEGREE PERMUTATION RATIONAL FUNCTIONS OVER FINITE FIELDS",
"LOW-DEGREE PERMUTATION RATIONAL FUNCTIONS OVER FINITE FIELDS"
] | [
"Zhiguo Ding ",
"Michael E Zieve "
] | [] | [] | We determine all degree-4 rational functions f(X) ∈ F_q(X) which permute P^1(F_q), and answer two questions of Ferraguti and Micheli about the number of such functions and the number of equivalence classes of such functions up to composing with degree-one rational functions. We also determine all degree-8 rational functions f(X) ∈ F_q(X) which permute P^1(F_q) in case q is sufficiently large, and do the same for degree 32 in case either q is odd or f(X) is a nonsquare. Further, for thousands of other positive integers n, for each sufficiently large q we determine all degree-n rational functions f(X) ∈ F_q(X) which permute P^1(F_q) but which are not compositions of lower-degree rational functions in F_q(X). Some of these results are proved by using a new Galois-theoretic characterization of additive (linearized) polynomials among all rational functions, which is of independent interest. | 10.4064/aa210521-12-11 | [
"https://export.arxiv.org/pdf/2010.15657v2.pdf"
] | 225,103,073 | 2010.15657 | a893563bd46d5d4b26fd8dcdab04358c9af1d203 |
LOW-DEGREE PERMUTATION RATIONAL FUNCTIONS OVER FINITE FIELDS
Zhiguo Ding
Michael E Zieve
LOW-DEGREE PERMUTATION RATIONAL FUNCTIONS OVER FINITE FIELDS
We determine all degree-4 rational functions f(X) ∈ F_q(X) which permute P^1(F_q), and answer two questions of Ferraguti and Micheli about the number of such functions and the number of equivalence classes of such functions up to composing with degree-one rational functions. We also determine all degree-8 rational functions f(X) ∈ F_q(X) which permute P^1(F_q) in case q is sufficiently large, and do the same for degree 32 in case either q is odd or f(X) is a nonsquare. Further, for thousands of other positive integers n, for each sufficiently large q we determine all degree-n rational functions f(X) ∈ F_q(X) which permute P^1(F_q) but which are not compositions of lower-degree rational functions in F_q(X). Some of these results are proved by using a new Galois-theoretic characterization of additive (linearized) polynomials among all rational functions, which is of independent interest.
Introduction
Let q be a power of a prime p. A permutation polynomial is a polynomial f (X) ∈ F q [X] for which the map α → f (α) is a permutation of F q . Such polynomials have been studied both for their own sake and for use in various applications. Much less work has been done on permutation rational functions, namely rational functions f (X) ∈ F q (X) which permute P 1 (F q ) := F q ∪ {∞}. However, the topic of permutation rational functions seems worthy of study, both because permutation rational functions have the same applications as permutation polynomials, and because of the construction in [28] which shows how to use permutation rational functions over F q to produce permutation polynomials over F q 2 . Surprisingly little is known about permutation rational functions: for instance, the recent paper [5] contains the first classification result in this subject, namely a classification of degree-3 permutation rational functions. In this paper we give a much simpler proof of this result, and also classify permutation rational functions in many other degrees, sometimes under the assumption that certain additional conditions hold. In particular, we answer [5, Problems 9.1 and 9.2].
Recall that the degree of a nonconstant rational function f(X) ∈ F_q(X) is max(deg a, deg b) for any coprime a, b ∈ F_q[X] such that f = a/b. The statements of our results use the following terminology:

Definition 1.1. We say that nonconstant f, g ∈ F_q(X) are equivalent if f = µ • g • ν for some degree-one µ, ν ∈ F_q(X).
Plainly if f, g ∈ F_q(X) are equivalent then f permutes P^1(F_q) if and only if g permutes P^1(F_q). For completeness, we begin with the following essentially immediate result.

Lemma 1.2. Every degree-one f(X) ∈ F_q(X) permutes P^1(F_q). A degree-two f(X) ∈ F_q(X) permutes P^1(F_q) if and only if q is even and f(X) is equivalent to X^2.
The following result is a more conceptual version of [5, Thms. 5.1 and 6.2]. Theorem 1.3. A degree-three f (X) ∈ F q (X) permutes P 1 (F q ) if and only if it is equivalent to one of the following:
(1) X^3, where q ≡ 2 (mod 3),
(2) ν^{-1} • X^3 • ν, where q ≡ 1 (mod 3) and for some δ ∈ F_{q^2} \ F_q we have ν(X) = (X − δ^q)/(X − δ) and ν^{-1}(X) = (δX − δ^q)/(X − 1),
(3) X^3 − αX, where 3 | q and either α = 0 or α is a nonsquare in F_q.
Remark. The functions ν, ν −1 in case (2) satisfy ν −1 • ν = X = ν • ν −1 , and also ν(P 1 (F q )) is the set Λ of (q + 1)-th roots of unity in F q 2 [28, Lemma 3.1]. Thus ν −1 • X n • ν permutes P 1 (F q ) if and only if X n permutes Λ, i.e., (n, q + 1) = 1. Moreover, ν −1 • X n • ν is in F q (X) [4]. These functions ν −1 • X n • ν are instances of the general class of Rédei functions; see [4]. We note that [5, Thms. 5.1 and 6.2] present the functions in (2) as rational functions whose coefficients satisfy certain conditions, and that from the presentation in [5] one would not expect these functions to have analogues in other degrees.
Remark. The functions in (1) and (3) of Theorem 1.3 are members of well-known classes of permutation polynomials. Specifically, X^n permutes F_q when (n, q − 1) = 1. An additive polynomial is a polynomial of the form Σ_{i=0}^{r} α_i X^{p^i} with α_i ∈ F_q and p := char(F_q); any such polynomial permutes F_q if and only if it has no nonzero roots in F_q. (Additive polynomials are sometimes called "linearized polynomials" or "p-polynomials".) Theorem 1.3 was proved in [5] via a long and complicated argument involving computer calculations of prime components of certain ideals, among other things. We give two very short non-computational proofs of Theorem 1.3 using different methods than [5]; we hope that the new understanding provided by these proofs will help readers adapt our methods to address further questions. We then treat the much more difficult case of degree-4 permutation rational functions, which requires different methods.

Theorem 1.4. A degree-four f(X) ∈ F_q(X) permutes P^1(F_q) if and only if one of the following holds:
(1) q is odd and f(X) is equivalent to (X^4 − 2αX^2 − 8βX + α^2)/(X^3 + αX + β) for some α, β ∈ F_q such that X^3 + αX + β is irreducible in F_q[X],
(2) q is even and f(X) is equivalent to X^4 + αX^2 + βX for some α, β ∈ F_q such that X^3 + αX + β has no roots in F_q^*,
(3) q ≤ 8 and f(X) is equivalent to a rational function in Table 1.
Moreover, f (X) is exceptional (cf. Definition 1.5) if and only if (1) or (2) holds.
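To make case (1) concrete, here is a minimal brute-force verification over a small prime field (a sketch under our own illustrative choices: the modulus p = 11, the helper names, and the exhaustive loop over (α, β) are not part of the theorem):

```python
# Check Theorem 1.4(1) over F_11: whenever X^3 + a*X + b is irreducible,
# f = (X^4 - 2aX^2 - 8bX + a^2)/(X^3 + aX + b) permutes P^1(F_11).
p = 11
INF = object()  # the extra point of P^1(F_p)

def cubic_is_irreducible(a, b):
    # a cubic is irreducible over F_p iff it has no root in F_p
    return all((x**3 + a*x + b) % p != 0 for x in range(p))

def f(x, a, b):
    if x is INF:
        return INF  # deg(numerator) > deg(denominator), so f fixes infinity
    num = (x**4 - 2*a*x*x - 8*b*x + a*a) % p
    den = (x**3 + a*x + b) % p
    return INF if den == 0 else num * pow(den, -1, p) % p

for a in range(p):
    for b in range(p):
        if cubic_is_irreducible(a, b):
            image = {f(x, a, b) for x in [INF] + list(range(p))}
            assert len(image) == p + 1  # f is a bijection of P^1(F_11)
print("every irreducible X^3 + aX + b over F_11 gives a permutation")
```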
Remark. The functions in (2) of Theorem 1.4 are additive polynomials. The functions in (1) are a new class of permutation rational functions (although the characteristic zero analogue of these functions appears in [11, Thm. 7.1]). At the end of Section 5 we will give a single form which combines (1) and (2). We will also determine precisely when two functions appearing in the conclusion of the above result are equivalent; in particular, for any fixed odd q we will show that any two functions in (1) are equivalent. We note that the function in (1) is 4 times the map on X-coordinates induced by the multiplication-by-2 endomorphism of the elliptic curve Y^2 = X^3 + αX + β, and as such it is a member of a large class of permutation rational functions which are coordinate projections of elliptic curve isogenies. We will address such functions in a subsequent paper, where we will give an alternate proof of Theorem 1.4 based on elliptic curve arguments.
Remark. After completing this research, we learned that Hou has independently and simultaneously studied degree-4 permutation rational functions [15]. Theorem 1.4 corrects and refines Hou's result, which shows that every degree-four permutation rational function either satisfies one of (1)-(3) of Theorem 1.4 or is equivalent to a function in (1). However, we note that this equivalence is not immediate, since to write down the equivalence one must show that the union of the images of P^1(F_q) under the two rational functions X^4/(X − 1) and X^3 + X^2 is all of P^1(F_q). This fact seems nontrivial and interesting for its own sake; we will explain it elsewhere. Also Hou shows that for q ≤ 7 any degree-four permutation rational function which does not satisfy (1) or (2) is equivalent to a function in Table 1. In Section 6 we answer two questions of Ferraguti and Micheli from [5] about the number of degree-4 permutation rational functions in F_q(X) and the number of equivalence classes of such functions; it does not seem to be possible to use Hou's result to answer these questions.
Definition 1.5. A rational function f(X) ∈ F_q(X) is exceptional if f(X) permutes P^1(F_{q^ℓ}) for infinitely many positive integers ℓ.
Since bijectivity of f(X) on P^1(F_{q^ℓ}) implies bijectivity of f(X) on P^1(F_q), we see that every exceptional rational function in F_q(X) permutes P^1(F_q). The following quantitative converse was proved in [13, Thm. 2.5]:
Lemma 1.6. If f(X) ∈ F_q(X) has degree n ≥ 2 and permutes P^1(F_q), where √q > 2(n − 2)^2 + 1, then f(X) is exceptional. This inequality holds in particular when q ≥ 4n^4.
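The passage to the simpler hypothesis is elementary: q ≥ 4n^4 gives √q ≥ 2n^2 > 2(n − 2)^2 + 1. A throwaway arithmetic check of this implication (the tested range of n is an arbitrary choice):

```python
# Confirm that q >= 4n^4 implies sqrt(q) > 2(n-2)^2 + 1 for small n.
for n in range(2, 10000):
    q = 4 * n**4                         # the smallest q allowed by q >= 4n^4
    assert q > (2 * (n - 2)**2 + 1)**2   # squared form of the inequality
print("q >= 4n^4 forces sqrt(q) > 2(n-2)^2 + 1 for 2 <= n < 10000")
```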
In light of Lemma 1.6, for any fixed n the study of degree-n permutation rational functions over F_q reduces to the study of exceptional rational functions for all but finitely many values of q. Our next result classifies exceptional rational functions of degree 8. Note that any rational function equivalent to an exceptional rational function is exceptional.

Theorem 1.7. If f(X) ∈ F_q(X) is exceptional and deg(f) = 8 then q is even and f(X) is equivalent to an additive polynomial. The same conclusion holds if f(X) ∈ F_q(X) is a degree-8 permutation rational function with q > 73^2.
The following result classifies exceptional rational functions of degree 32, modulo the classification of degree-16 exceptional rational functions in characteristic 2.
Theorem 1.8. A degree-32 rational function f(X) ∈ F_q(X) is exceptional if and only if q is even and f(X) is equivalent to either
(1) g(X^2) for some exceptional g ∈ F_q(X) of degree 16, or
(2) L_1 • µ • L_2 for some degree-one µ ∈ F_q(X) and some exceptional additive L_1, L_2 ∈ F_q[X].
Our final results address indecomposable exceptional rational functions, which are defined as follows. Definition 1.9. For any field K, a rational function f (X) ∈ K(X) of degree at least 2 is indecomposable if it cannot be written as g(h(X)) with g, h ∈ K(X) of degree at least 2.
The importance of indecomposability comes from the following classical result [6, Thm. 1]: Lemma 1.10. A rational function f (X) ∈ F q (X) of degree at least 2 is exceptional if and only if f = g 1 • g 2 • · · · • g r for some indecomposable exceptional g i ∈ F q (X).
In light of Lemma 1.10, in order to classify exceptional rational functions it suffices to classify indecomposable exceptional rational functions.
Theorem 1.11. If f(X) ∈ F_q(X) is an exceptional rational function of degree 128 then q is even. If in addition f(X) is indecomposable then f(X) is equivalent to an additive polynomial.
In Theorem 1.12 we will prove similar results for degrees that are not prime powers, which turns out to be an easier situation to address via our methods. In the case of prime power degree, the degrees 2, 3, 4, 8, 32, and 128 in the above results are the only degrees for which our method of proof yields a conclusion of the desired form. It seems conceivable that one might be able to classify prime-degree exceptional rational functions by further developing the approach used for polynomials in [9, Thm. 8.1] and [18, App.], but we do not pursue that problem here. However, with current techniques it seems quite difficult to classify degree-9 exceptional rational functions, for instance. We note that there exist indecomposable additive exceptional polynomials of any prescribed prime power degree, and there exist other types of indecomposable exceptional rational functions in many classes of prime power degrees. Theorem 8.3 of [5] asserts that there are no exceptional rational functions of degree 6 with nonzero derivative. By Lemmas 1.10 and 1.2, a degree-6 exceptional f(X) ∈ F_q(X) is indecomposable if and only if f′(X) ≠ 0, so the nontrivial portion of [5, Thm. 8.3] is the assertion that there are no indecomposable exceptional rational functions of degree 6. We prove the following vast generalization of this result:
Theorem 1.12. Suppose f(X) ∈ F_q(X) is an indecomposable exceptional rational function whose degree n satisfies n < 4096 and n is not a prime power. Then n ∈ {28, 45, 325, 351, 496, 784, 819, 1225, 1456, 2025, 3321}.
Remark. We caution the reader that the authors of [5] overlooked essentially all related results in the literature, including those in the papers they cited, and several of their "new" results are in fact known. In order to help readers of [5] avoid rediscovering known results, in Section 9 we correct several inaccuracies in [5] and provide the current state of knowledge on the topics addressed in [5].
In this paper we have chosen to use elementary arguments even when shorter arguments were possible if one used more advanced tools. We did this in order to make this paper accessible to the largest possible audience, since we hope to entice members of the permutation polynomial community to study permutation rational functions. This paper is organized as follows. In the next section we quickly review background material on exceptionality and monodromy groups. In Section 3 we prove Lemma 1.2 and Theorem 1.3. In Section 4 we prove a characterization of additive polynomials among all rational functions, and use it to describe the indecomposable exceptional rational functions of degree 8, 32, and 128. In Section 5 we prove Theorem 1.4 -which is the most difficult result in this paper, given what was known previously -and then we use this result in Section 6 to answer [5, Problems 9.1 and 9.2] about the number of degree-4 permutation rational functions, and the number of equivalence classes of such functions. In Section 7 we prove Theorems 1.7, 1.8, and 1.11, and finally in Section 8 we prove Theorem 1.12.
2. Background material
In this section we recall some basic facts and tools, giving proofs when we cannot find suitable references.
2.1. Separable rational functions.
Definition 2.1. For any field K, a rational function f (X) ∈ K(X) is separable if f (X) / ∈ K(X p ) where p := char(K).
Note that constant rational functions are not separable. We will often use without comment the following equivalent characterizations of separable rational functions:

Lemma 2.2. Let K be a field, let x be transcendental over K, and pick f(X) ∈ K(X) of degree n > 0. Then [K(x) : K(f(x))] = n, and the following are equivalent:
(1) f(X) is separable,
(2) f′(X) ≠ 0,
(3) the field extension K(x)/K(f(x)) is separable,
(4) the numerator of f(X) − t has no multiple roots in the algebraic closure of K(t), where t is transcendental over K.
Proof. Write f (X) = a(X)/b(X) with coprime a, b ∈ K[X], and put t := f (x) and G(X) := a(X) − tb(X). Then G(X) ∈ K[X, t] is a degree-n polynomial in X which is not divisible by any nonconstant polynomial in K[X].
Since G(X) is also a degree-1 polynomial in t, we see that G(X) is irreducible in K[X, t], so by Gauss's lemma it is irreducible in K(t)[X]. Since G(x) = 0, this shows that G(X) is a nonzero constant times the minimal polynomial of
x over K(t), so since K(t, x) = K(x) it follows that [K(x) : K(f (x))] = n.
Thus (3) does not hold if and only if G(X) has multiple roots in the algebraic closure of K(t), i.e., if and only if (4) does not hold; equivalently, G(X) and its derivative G′(X) (taken with respect to X) have a common root there, i.e., gcd(G(X), G′(X)) ≠ 1. Since gcd(G(X), G′(X)) divides G(X), which is irreducible in K(t)[X] of X-degree n > deg_X(G′), this happens just when G′(X) = 0. Writing p := char(K), since G′(X) = a′(X) − t b′(X) and t is transcendental over K, this holds if and only if a′(X) = b′(X) = 0, or equivalently a, b ∈ K[X^p], i.e., (1) does not hold. If a′(X) = b′(X) = 0 then plainly f′(X) = 0. Conversely, if f′(X) = 0 then, since f′(X) = (b(X)a′(X) − a(X)b′(X))/b(X)^2, it follows that b(X)a′(X) = a(X)b′(X); but then coprimality of a(X) and b(X) implies a(X) | a′(X), so that a′(X) = 0, and likewise b′(X) = 0, whence f(X) is not separable.
The definitions immediately imply
Lemma 2.3. If f(X) ∈ F_q(X) is indecomposable but not separable then f = µ • X^p, where p := char(F_q) and µ(X) ∈ F_q(X) has degree 1.
Every nonconstant f(X) ∈ F_q(X) has a unique expression as f(X) = g(X^{p^r}) where p := char(F_q), r ≥ 0, and g(X) ∈ F_q(X) is separable. Since X^{p^r} permutes F_{q^ℓ} for all ℓ, we see that f(X) is exceptional if and only if g(X) is exceptional. Thus the study of exceptional rational functions reduces to the separable case.
2.2. Monodromy groups. For any field K and any separable f(X) ∈ K(X) of degree n > 1, let x be transcendental over K, let Ω be the Galois closure of K(x)/K(t) where t := f(x), and let L be the algebraic closure of K in Ω. Then A := Gal(Ω/K(t)) and G := Gal(Ω/L(t)) are called the arithmetic monodromy group and the geometric monodromy group of f(X), respectively. Write S for the set of conjugates of x over K(t), and put A_1 := Gal(Ω/K(x)) and G_1 := Gal(Ω/L(x)). We will use the following classical result, in which K̄ denotes an algebraic closure of K:

Lemma 2.4. With notation as above, G and A are transitive subgroups of Sym(S) and G is a normal subgroup of A with A/G ≅ Gal(L/K), where in addition |S| = n. In particular, if K is finite then A/G is cyclic. Moreover,
(1) f(X) is indecomposable if and only if A is primitive (cf. Definition 2.5).
(2) If K is finite then the following are equivalent:
(a) f(X) is exceptional,
(b) if H(X, Y) ∈ K[X, Y] divides the numerator of f(X) − f(Y), and H(X, Y) is irreducible in K̄[X, Y], then H(X, Y) equals α(X − Y) for some α ∈ K^*,
(c) the one-point stabilizers A_1 and G_1 have a unique common orbit on S.

Definition 2.5. A transitive subgroup A of Sym(S) is primitive if the only partitions of S preserved by A are the partition into singletons and the partition {S}.

Proof. Assertion (2) is classical, so we prove only (1). By the Galois correspondence, A is primitive if and only if there are no fields strictly between Ω^A = K(t) and Ω^{A_1} = K(x). By Lüroth's theorem [22, Thm. 2], every field between K(t) and K(x) has the form K(h(x)) for some h(X) ∈ K(X), where since t ∈ K(h(x)) we have t = g(h(x)) with g ∈ K(X), so that f = g • h. Conversely, if f = g • h with g, h ∈ K(X) then [K(x) : K(h(x))] = deg(h) and [K(h(x)) : K(t)] = deg(g). Thus f(X) is decomposable if and only if there is a field strictly between K(t) and K(x).
Remark. The above proof of (1) is an algebraicization of the topological proof in [21,§II] of the analogous result over the complex numbers.
2.3. Degree-one rational functions. We will often use the following results without explicit comment.
Lemma 2.6. For any field K, any degree-one µ(X) ∈ K(X) induces a bijective function µ : P^1(K) → P^1(K).
Proof. For any β ∈ K, the numerator of µ(X) − β is a nonzero polynomial whose degree is 0 if β = µ(∞) and 1 otherwise. Thus in any case β has a unique µ-preimage in P 1 (K). Also the unique µ-preimage of ∞ is ∞ if µ(X) is a polynomial, and otherwise is the unique root of the denominator of µ(X).
Lemma 2.7. For any field K, any pairwise distinct α 1 , α 2 , α 3 ∈ P 1 (K), and any pairwise distinct β 1 , β 2 , β 3 ∈ P 1 (K), there is a unique degree-one µ(X) ∈ K(X) such that µ(α i ) = β i for each i.
Proof. In case α_1 = β_1 = ∞, this follows from the fact that there is a unique line through two points in K × K. To deduce the general case, pick degree-one ρ, τ ∈ K(X) with ρ(∞) = α_1 and τ(β_1) = ∞, and note that µ(α_i) = β_i if and only if ν := τ • µ • ρ maps the ρ-preimage of α_i to τ(β_i).
Corollary 2.8. For any field K and any degree-one µ(X) ∈ K(X), there is a unique degree-one µ^{-1}(X) ∈ K(X) such that µ^{-1} • µ = X = µ • µ^{-1}.

Proof. If µ^{-1} • µ = X then µ^{-1} maps µ(α) → α for each α ∈ {0, 1, ∞}.
Conversely, since deg(µ) = 1, the function µ : P 1 (K) → P 1 (K) is injective, so that µ(0), µ(1), and µ(∞) are pairwise distinct. By Lemma 2.7, there is a unique degree-one µ −1 (X) ∈ K(X) such that µ −1 (µ(α)) = α for each α ∈ {0, 1, ∞}. Then µ −1 •µ and µ•µ −1 are degree-one rational functions fixing 0, 1, ∞ and µ(0), µ(1), µ(∞), respectively, so each of these compositions agrees with X at three points and hence equals X.
Remark. Explicitly, if µ(X) = (αX + β)/(γX + δ) then µ −1 (X) = (δX − β)/(−γX + α).
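This formula is easy to sanity-check numerically; the sketch below (our own throwaway test, with p = 13 and arbitrary sample coefficients) confirms that the displayed µ^{-1} inverts µ at every point of P^1(F_p):

```python
# Check that mu^{-1}(X) = (dX - b)/(-cX + a) inverts mu(X) = (aX + b)/(cX + d)
# pointwise on P^1(F_13).  INF stands for the point at infinity.
p = 13
INF = object()

def moebius(a, b, c, d, x):
    # evaluate (a*x + b)/(c*x + d) on P^1(F_p)
    if x is INF:
        return (a * pow(c, -1, p)) % p if c % p else INF
    den = (c * x + d) % p
    if den == 0:
        return INF
    return (a * x + b) * pow(den, -1, p) % p

a, b, c, d = 2, 5, 7, 3                   # any coefficients with ad - bc != 0
assert (a * d - b * c) % p != 0
for x in [INF] + list(range(p)):
    y = moebius(a, b, c, d, x)
    z = moebius(d, (-b) % p, (-c) % p, a, y)   # apply the claimed inverse
    assert z is x if x is INF else z == x
print("mu^{-1} inverts mu at all", p + 1, "points of P^1(F_13)")
```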
2.4. Automorphisms of K(x). For any field K, write Γ K for the set of degree-one rational functions in K(X), and note that Γ K is a group under the operation of functional composition, by Corollary 2.8. The following two results are immediate. Lemma 2.9. Let K be a field, and let x be transcendental over K.
For each µ(X) ∈ Γ K , let σ µ : K(x) → K(x) map f (x) → f (µ −1 (x)). Then σ µ is in Aut K (K(x)), and the map µ(X) → σ µ is an isomorphism Γ K → Aut K (K(x)).
In light of the above result, we will identify Aut K (K(x)) with Γ K whenever convenient. We sometimes identify Γ K with PGL 2 (K) via the following result.
Lemma 2.10. For any field K, the map sending the matrix with rows (α, β) and (γ, δ) to (αX + β)/(γX + δ) induces a surjective homomorphism φ : GL_2(K) → Γ_K whose kernel consists of the constant multiples of the identity matrix, so that φ induces an isomorphism PGL_2(K) → Γ_K.
2.5. Fixed fields and Galois closures.
Lemma 2.11. Let K be a field, let x be transcendental over K, and let G be a finite subgroup of Aut_K(K(x)). If f(X) ∈ K(X) has degree |G| and G fixes f(x) then the fixed field K(x)^G equals K(f(x)).
Proof. By basic Galois theory and Lemma 2.2 we have [K(x) : K(x)^G] = |G| = deg(f) = [K(x) : K(f(x))]. Since K(x)^G ⊇ K(f(x)), it follows that K(x)^G = K(f(x)).
Lemma 2.12. Let K be a field, let x be transcendental over K, and let f (X) ∈ K(X) be a separable rational function of degree n. Let Ω be the Galois closure of K(x)/K(f (x)), and let L be the algebraic closure of K in Ω. If the geometric monodromy group G of f (X) has order n then Ω = L(x).
Proof. Writing t := f(x), we have [Ω : L(t)] = |G| = n = [L(x) : L(t)], which implies the result since Ω ⊇ L(x).
2.6. Branch points. Definition 2.13. For any algebraically closed field K and any nonconstant f (X) ∈ K(X) of degree n, a branch point of f (X) is an element β ∈ P 1 (K) which has fewer than n distinct f -preimages in P 1 (K).
Lemma 2.14. If β ∈ K is a branch point of f(X), then either β = f(∞) or β = f(γ) for some γ ∈ K such that f′(γ) = 0.
Any separable rational function has only finitely many branch points.
Proof. Write f(X) = a(X)/b(X) where a, b ∈ K[X] are coprime. If β ∈ K \ {f(∞)} is a branch point of f(X) then c(X) := a(X) − βb(X) is a degree-n polynomial having fewer than n distinct roots, so c(X) has a multiple root γ. Thus c′(γ) = 0, so that a′(γ) = βb′(γ), whence
f′(γ) = (b(γ)a′(γ) − a(γ)b′(γ))/b(γ)^2 = b′(γ)(βb(γ) − a(γ))/b(γ)^2 = b′(γ)(β − f(γ))/b(γ) = 0,
where we note that b(γ) ≠ 0 since a(γ)/b(γ) = β ≠ ∞. If f(X) is separable then f′(X) ≠ 0, so the numerator of f′(X) is a nonzero polynomial and hence has only finitely many roots.
Lemma 2.15. If f(X) ∈ F_q(X) \ F_q then the q-th power map permutes the branch points of f(X), and also permutes the f-preimages of any element of P^1(F_q).
Proof. This holds because the multiplicity of α as an f-preimage of f(α) equals the multiplicity of α^q as an f-preimage of f(α^q) = f(α)^q.
3. Degree at most 3
In this section we give quick elementary proofs of Lemma 1.2 and Theorem 1.3. We give two proofs of the latter result, one being a very short proof using basic Galois theory and the other a slightly longer proof using essentially nothing.
Proof of Lemma 1.2. Lemma 2.6 shows that degree-one rational functions are bijective. If q is even then clearly X^2 permutes P^1(F_q), so that also any function equivalent to X^2 is a permutation rational function. Conversely, we now suppose that f(X) ∈ F_q(X) is a degree-2 permutation rational function. Write α := f(0) and β := f(∞), so that α, β ∈ P^1(F_q), and α ≠ β by the permutation hypothesis. Letting µ(X) ∈ F_q(X) be a degree-one rational function which maps α and β to 0 and ∞, respectively, it follows that g(X) := µ(f(X)) permutes P^1(F_q) and fixes 0 and ∞. Thus g(X) = (X^2 + γX)/b(X) for some γ ∈ F_q and some nonzero b(X) ∈ F_q[X] of degree at most 1. By the permutation condition, the only zero of g(X) in F_q is 0, so we must have γ = 0. Likewise the only pole of g(X) in P^1(F_q) is ∞, so b(X) has no roots in F_q and thus b(X) is constant (since deg(b) ≤ 1). Hence g(X) = δX^2 for some δ ∈ F_q^*, so f(X) is equivalent to X^2. Since g(1) = g(−1), the permutation condition implies 1 = −1, so q is even.

Theorem 1.3 asserts that all degree-3 permutation rational functions are equivalent to X^3 or an additive polynomial or a member of the following class of rational functions which is not as widely known:
Definition 3.1. A Rédei function is a rational function f (X) ∈ F q (X) for which there exist two points in F q 2 \F q which each have a unique f -preimage in F q .
These were introduced in a non-conceptual way in [20]. The following result from [4] describes basic properties of Rédei functions. Here Λ is the group of (q + 1)-th roots of unity in F q 2 .
Lemma 3.2. A rational function f(X) ∈ F_q(X) of degree n > 0 is a Rédei function if and only if f = µ • X^n • ν for some degree-one µ, ν ∈ F_{q^2}(X) such that µ(Λ) = P^1(F_q) and ν(P^1(F_q)) = Λ. If f(X) ∈ F_q(X) is a Rédei function of degree n > 0, then the following are equivalent:
(1) f(X) permutes P^1(F_q),
(2) f(X) is exceptional,
(3) (n, q + 1) = 1.
A degree-n Rédei function is indecomposable if and only if n is prime. For any n and q, there is a unique equivalence class of degree-n Rédei functions in F q (X), and it includes
ν −1 • X n • ν where ν(X) = (X − δ q )/(X − δ) for any δ ∈ F q 2 \ F q .
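The criterion in (3) is easy to test by direct computation. In the sketch below (our own construction, not taken from [4]), F_{q^2} with q = p = 7 is modeled as F_p(δ) with δ^2 = 3, so that δ^q = −δ, and the function ν^{-1} • X^n • ν is evaluated on all of P^1(F_7):

```python
from math import gcd

p, c = 7, 3                     # illustrative: X^2 - 3 is irreducible over F_7
INF = object()                  # the point at infinity of P^1(F_p)

def mul(u, v):                  # multiply a + b*delta by d + e*delta, delta^2 = c
    (a, b), (d, e) = u, v
    return ((a * d + c * b * e) % p, (a * e + b * d) % p)

def inv(u):                     # invert via the norm a^2 - c*b^2, which lies in F_p^*
    a, b = u
    ni = pow((a * a - c * b * b) % p, -1, p)
    return (a * ni % p, (p - b) * ni % p)

def sub(u, v):
    return ((u[0] - v[0]) % p, (u[1] - v[1]) % p)

def power(u, n):
    r = (1, 0)
    for _ in range(n):
        r = mul(r, u)
    return r

delta, delta_q = (0, 1), (0, p - 1)   # delta^q = -delta since c is a nonsquare

def redei(x, n):
    # nu(x) = (x - delta^q)/(x - delta), raise to the n-th power, apply nu^{-1}
    t = (1, 0) if x is INF else mul(sub(x, delta_q), inv(sub(x, delta)))
    t = power(t, n)
    if t == (1, 0):
        return INF
    y = mul(sub(mul(delta, t), delta_q), inv(sub(t, (1, 0))))
    assert y[1] == 0              # the value lies back in F_p, as Lemma 3.2 predicts
    return y[0]

for n in range(1, 2 * p):
    image = {redei((x, 0), n) for x in range(p)} | {redei(INF, n)}
    assert (len(image) == p + 1) == (gcd(n, p + 1) == 1)
print("degree-n Redei function permutes P^1(F_7) iff gcd(n, 8) = 1")
```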
We now give two proofs of Theorem 1.3, one assuming familiarity with basic facts about Galois theory and degree-one rational functions, and the other assuming nothing.
Proof of Theorem 1.3. The remarks after Theorem 1.3 show that the functions in (1)-(3) permute P^1(F_q), so that also any function equivalent to one of these permutes P^1(F_q). Henceforth assume that f(X) ∈ F_q(X) has degree 3 and permutes P^1(F_q). We may assume q > 9, since the result is easy to verify via computer when q ≤ 9. Thus f(X) is exceptional by Lemma 1.6. If f(X) is inseparable then f(X) = h(X^p) with p := char(F_q) and h(X) ∈ F_q(X), so that p = 3 and deg(h) = 1, whence f(X) is equivalent to X^3. Henceforth assume f(X) is separable. By Lemma 2.4, the arithmetic and geometric monodromy groups A and G of f(X) are transitive subgroups of S_3 such that G is a proper subgroup of A, so that G = A_3 and A = S_3. Thus F_{q^2}(x)/F_{q^2}(f(x)) is Galois of degree 3, with Galois group generated by an order-3 automorphism of F_{q^2}(x) which fixes each element of F_{q^2} and maps x → µ(x) for some degree-one µ(X) ∈ F_{q^2}(X). It follows that µ(X) has order 3 under composition. Thus there is some degree-one ν ∈ F̄_q(X) such that θ(X) := ν^{-1} • µ • ν is either X + 1 (if 3 | q) or ωX with ω of order 3 (if 3 ∤ q). Since f̃(x) := f • ν(x) is fixed by the F̄_q-automorphism σ of F̄_q(x) which maps x → θ(x), and the order of σ is 3, which equals [F̄_q(x) : F̄_q(f̃(x))], it follows that the fixed field of σ is F̄_q(f̃(x)). Since plainly σ fixes g(x), where g(X) := X^3 − X if 3 | q and g(X) = X^3 if 3 ∤ q, it follows that there is a degree-one ρ(X) ∈ F̄_q(X) such that ρ • f̃(X) = g(X). If g(X) = X^3 − X then f(X) has a unique branch point, and this branch point has a unique f-preimage, so that both the branch point and its preimage must be in P^1(F_q). Up to equivalence over F_q, we may assume that both of these points are ∞, so that f is equivalent to a(X) • (X^3 − X) • b(X) for some degree-one a, b ∈ F̄_q[X], whence f(X) is equivalent to an additive polynomial over F_q. Henceforth assume 3 ∤ q, so that f(X) is equivalent over F̄_q to X^3, and thus f(X) has exactly two branch points, each of which has a unique preimage. If the preimages of the branch points are in P^1(F_q) then, up to equivalence, we may assume that the unique root of f(X) is 0 and the unique pole of f(X) is ∞, so that f(X) = γX^3 with γ ∈ F_q^*. Finally, suppose that δ ∈ F̄_q \ F_q is a preimage of a branch point of f(X). Since the q-th power map permutes the set of preimages of branch points, it follows that δ ∈ F_{q^2} and the preimages of branch points are δ and δ^q. Thus the branch points of f(X) are f(δ) and f(δ^q) = f(δ)^q, which are in F_{q^2} \ F_q. Hence f(X) is a Rédei function, so the conclusion follows from Lemma 3.2.
Our second proof of Theorem 1.3 uses the following lemma. Lemma 3.3. Let f (X) ∈ F q (X) be a separable exceptional rational function of degree 3. Then f (µ(X)) = f (X) for some degree-one µ(X) ∈ F q 2 (X) \ F q (X) having order 3 under composition.
Proof. Let G(X, Y) be the numerator of f(X) − f(Y), so that G(X, Y) ∈ F_q[X, Y] has X-degree 3 and Y-degree 3, and G(X, Y) is not divisible by any nonconstant element of F_q[X] or F_q[Y]. By Lemma 2.4, every irreducible factor of G(X, Y) in F_q[X, Y] which remains irreducible in F̄_q[X, Y] is a constant times X − Y. Note that X − Y divides G(X, Y), and write G(X, Y) = (X − Y)H(X, Y) where H(X, Y) ∈ F_q[X, Y] has X-degree 2 and Y-degree 2. Thus H(X, Y) must factor in F̄_q[X, Y] as the product of two irreducible polynomials H_1, H_2 ∈ F̄_q[X, Y] which each have X-degree 1 and Y-degree 1. Hence H_i(X, Y) is the numerator of X − µ_i(Y) for some degree-one µ_i(X) ∈ F̄_q(X). Let Γ_f be the set of all degree-one ν(X) ∈ F̄_q(X) for which f • ν = f, so that Γ_f is closed under composition, and also Γ_f is closed under the map ψ : ν(X) → ν^{(q)}(X) which raises all coefficients of ν(X) to the q-th power. Since Γ_f is also the set of degree-one ν(X) ∈ F̄_q(X) for which the numerator of X − ν(Y) divides G(X, Y), we have Γ_f = {X, µ_1(X), µ_2(X)}, and Lemma 2.4 implies Γ_f ∩ F_q(X) = {X}. Thus if µ_1(X) = X then, since µ_2^{(q)}(X) ∈ Γ_f, we must have µ_2^{(q)}(X) = µ_2(X), so that µ_2(X) ∈ F_q(X) and hence µ_2(X) = X. In this case the numerator of f(X) − f(Y) is (X − Y)^3, so that every element of F̄_q \ {f(∞)} has a unique f-preimage in P^1(F̄_q), and hence is a branch point of f(X), which contradicts separability by Lemma 2.14. Hence µ_1(X) ≠ X, so µ_1(X) ∉ F_q(X). Since Γ_f = {X, µ_1(X), µ_2(X)} is closed under ψ, it follows that ψ interchanges µ_1(X) and µ_2(X), so that µ_1(X) ∈ F_{q^2}(X). Since Γ_f is closed under composition, µ_1(X) has order 3 under composition.
Alternate proof of Theorem 1.3. Just as in the start of the first proof, it suffices to show that every separable exceptional f(X) ∈ F_q(X) of degree 3 is equivalent to a function in (1)-(3). By Lemma 3.3, there is a degree-one µ(X) ∈ F_{q^2}(X) \ F_q(X) which has order 3 under composition and satisfies f(µ(X)) = f(X). The numerator of µ(X) − X has degree 2 if µ(∞) ≠ ∞ and degree at most 1 otherwise, so the set ∆ of fixed points of µ(X) in P^1(F̄_q) has size 1 or 2. Since µ(X) has order 3 and f(µ(X)) = f(X), if β ∈ P^1(F̄_q) is not fixed by µ(X) then β, µ(β), and µ(µ(β)) are three distinct f-preimages of f(β), so they comprise all f-preimages of f(β), whence f(β) is not a branch point of f(X), and also f(β) ∉ f(∆). Hence the elements of ∆ are the only f-preimages of f(∆), so (since |∆| ≤ 2) each element of f(∆) is a branch point of f(X). Thus f(∆) is the set of branch points of f(X), and hence is preserved by the q-th power map. Moreover, if some δ ∈ ∆ is not in P^1(F_q) then, since f(δ^q) = f(δ)^q is in f(∆), we have ∆ = {δ, δ^q}; since δ^q has the same multiplicity as an f-preimage of f(δ^q) as does δ as an f-preimage of f(δ), while the elements of ∆ are the only f-preimages of f(∆), it follows that δ is the unique f-preimage of f(δ), so that f(δ) ∈ F_{q^2} \ F_q.

First assume µ(X) has a unique fixed point δ in P^1(F̄_q). Then δ ∈ P^1(F_q), so also f(δ) ∈ P^1(F_q). Pick degree-one ν, θ ∈ F_q(X) such that ν(δ) = ∞ = θ(f(δ)), and put g := θ • f • ν^{-1} and ρ := ν • µ • ν^{-1}. Then g • ρ = g, where g(X) is in F_q(X) and has degree 3, with ∞ being the unique g-preimage of ∞, so that g(X) is a polynomial. Also ρ(X) is a degree-one rational function in F_{q^2}(X) \ F_q(X) with order 3 under composition, and ∞ is the unique fixed point of ρ(X). Thus ρ(X) = X + γ for some γ ∈ F_{q^2} \ F_q, and we must have char(F_q) = 3 since ρ(X) has order 3. Since g(X) is a degree-3 polynomial satisfying g(X + γ) = g(X), we have g(X) = a(X) • (X^3 − γ^2 X) for some degree-one a ∈ F̄_q[X]. Since g(X) ∈ F_q[X], this implies γ^2 ∈ F_q, so that f(X) is equivalent (over F_q) to X^3 − αX, where α := γ^2 is a nonsquare in F_q.
Henceforth assume that |∆| = 2. Now suppose ∆ ⊆ P 1 (F q ), and pick ν ∈ F q (X) of degree one such that ν(∆) = {0, ∞}. Then g := f • ν −1 is a degree-3 rational function in F q (X), and ρ := ν • µ • ν −1 is a degree-one rational function in F q 2 (X) \ F q (X) with order 3 under composition, where ρ fixes 0 and ∞. Thus ρ(X) = ωX where ω ∈ F q 2 \ F q has order 3, so since g • ρ = g we have g = θ • X 3 for some degree-one θ ∈ F q (X), whence f (X) is equivalent to X 3 . Plainly ω ∈ F q 2 \ F q has order 3 just when q ≡ 2 (mod 3).
Finally, suppose ∆ contains an element outside P 1 (F q ), so that ∆ = {δ, δ q } for some δ ∈ F q 2 \ F q such that f (δ) ∈ F q 2 \ F q and δ is the unique f -preimage of f (δ), while also δ q is the unique f -preimage of f (δ) q . Put ν(X) := (X − δ q )/(X − δ) and θ(X) := (X − f (δ) q )/(X − f (δ)), so that g := θ • f • ν −1 has 0 as its unique root and ∞ as its unique pole, whence
g(X) = γX^3 for some γ ∈ F_{q^2}^*. Thus f(X) = θ^{-1} • γX^3 • ν = τ • f̃(X), where τ := θ^{-1} • γX • ν and f̃(X) := ν^{-1} • X^3 • ν. It is easy to check that ν(X) and θ(X) map P^1(F_q) bijectively onto the set Λ of (q + 1)-th roots of unity in F_{q^2}. Since f(X) permutes P^1(F_q), it follows that g(X) = γX^3 permutes Λ, so that γ ∈ Λ and (3, q + 1) = 1. Since ρ := ν • µ • ν^{-1} is a degree-one rational function in F_{q^2}(X) with order 3 under composition, and ρ fixes 0 and ∞, we know ρ(X) = ωX where ω ∈ F_{q^2}^* has order 3, so char(F_q) ≠ 3, which together with (3, q + 1) = 1 implies q ≡ 1 (mod 3). Thus τ(X) is a degree-one rational function in F_{q^2}(X) which permutes P^1(F_q), so that τ(X) takes values in P^1(F_q) at 0, 1, and ∞, whence τ(X) ∈ F_q(X). This implies f̃(X) = τ^{-1} • f is in F_q(X), so that f(X) is equivalent to f̃(X) = ν^{-1} • X^3 • ν.
Remark. Our proofs of Theorem 1.3 (and also the proof in [5]) rely on computer calculations to show that there are no non-exceptional degree-3 permutation rational functions in F q (X) when q ≤ 9. This can also be shown without a computer, as follows. For any f (X) ∈ F q (X) of degree 3, it is easy to show that the normal closure of F q (x)/F q (f (x)) has genus at most 1. We will show in a subsequent paper that there are no nonexceptional indecomposable permutation rational functions f (X) ∈ F q (X) (of any degree) for which the normal closure of F q (x)/F q (f (x)) has genus at most 1.
4. Additive polynomials
In this section we prove a Galois-theoretic characterization of additive polynomials among all rational functions, and use it to describe the indecomposable exceptional rational functions in F_q(X) of degree 2^r when r ∈ {3, 5, 7}. The results of this section are also used in our treatment of degree-4 exceptional rational functions in the next section.

Definition 4.1. A polynomial in F_q[X] is additive if it has the form Σ_{i=0}^{r} α_i X^{p^i} with α_i ∈ F_q, where p := char(F_q).

For each positive integer s, an additive polynomial in F_q[X] induces a homomorphism from the additive group of F_{q^s} to itself. Since a homomorphism from a finite group to itself is bijective if and only if it has trivial kernel, this yields the following characterization of exceptional additive polynomials.
Lemma 4.2. If f(X) ∈ F_q[X] is additive then the following are equivalent:
(1) f(X) is exceptional,
(2) f(X) permutes F_q,
(3) f(X) has no roots in F_q^*.
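In a non-prime field there are additive polynomials beyond the maps αX, and the equivalences above can be checked exhaustively. A minimal sketch (the model of F_9 as F_3[i] with i^2 = −1, and the parametrization a·X + b·X^3 of the additive polynomials of degree at most 3, are our own choices):

```python
# Check Lemma 4.2 over F_9: a*X + b*X^3 permutes F_9 iff it has no root in F_9^*.
p = 3

def add(u, v):
    return ((u[0] + v[0]) % p, (u[1] + v[1]) % p)

def mul(u, v):
    (a, b), (c, d) = u, v            # (a + b i)(c + d i) with i^2 = -1
    return ((a*c - b*d) % p, (a*d + b*c) % p)

def cube(u):                          # u^3 is the Frobenius of F_9, hence additive
    return mul(mul(u, u), u)

F9 = [(a, b) for a in range(p) for b in range(p)]

def L(u, a, b):                       # the additive polynomial a*X + b*X^3
    return add(mul(a, u), mul(b, cube(u)))

for a in F9:
    for b in F9:
        permutes = len({L(u, a, b) for u in F9}) == 9
        no_nonzero_root = all(L(u, a, b) != (0, 0) for u in F9 if u != (0, 0))
        assert permutes == no_nonzero_root
print("a*X + b*X^3 permutes F_9 exactly when it has no root in F_9^*")
```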
The main result of this section is the following Galois-theoretic characterization of additive polynomials.

Proposition 4.3. Let f(X) ∈ F_q(X) be a separable rational function of degree n, and let G be its geometric monodromy group. Then the following are equivalent:
(1) |G| = n and n is a power of p := char(F_q),
(2) f(X) is equivalent to an additive polynomial.

The proof uses the following classical fact about additive polynomials.

Lemma 4.4. If H is a finite subgroup of the additive group of a field K, then Π_{λ∈H}(X + λ) is an additive polynomial in K[X].

Proof of Proposition 4.3. It is well known that (2) implies (1), so we assume (1). Writing n := deg(f), we may assume n > 1 since otherwise the result is immediate. Write p := char(F_q), let x be transcendental over F_q, and put t := f(x). Let Ω be the Galois closure of F_q(x)/F_q(t), and let F_Q be the algebraic closure of F_q in Ω, where Q = q^k. Since n = |G|, Lemma 2.12 implies Ω = F_Q(x). Thus G is a subgroup of Γ := Aut_{F_Q}(F_Q(x)), and Lemma 2.9 shows that Γ consists of the maps h(x) → h(µ(x)) with µ(X) ∈ F_Q(X) of degree one. In particular, Γ has order Q^3 − Q, and one Sylow p-subgroup of Γ is the group U consisting of the maps h(x) → h(x + α) with α ∈ F_Q. Since G is a p-group contained in the finite group Γ, basic group theory shows that G is contained in a Sylow p-subgroup of Γ, and that any two Sylow p-subgroups are conjugate. Thus there is some σ ∈ Γ such that G̃ := σ^{-1}Gσ is contained in U. The elements of G̃ are thus h(x) → h(x + α) with α varying over an order-n subgroup H of F_Q. The fixed field F_Q(x)^{G̃} contains r(x) := Π_{τ∈G̃} τ(x) = Π_{λ∈H} (x + λ), where r(X) is additive by Lemma 4.4. Since deg(r) = |G̃|, by Lemma 2.11 we have
F_Q(x)^{G̃} = F_Q(r(x)). It follows that F_Q(x)^G = F_Q(σ(r(x))), so if σ maps h(x) to h(µ(x)) for some degree-one µ(X) ∈ F_Q(X) then F_Q(x)^G = F_Q(r(µ(x))). Since F_Q(x)^G = F_Q(t) = F_Q(f(x)), this implies f = ν • r • µ for some degree-one ν ∈ F_Q(X). Since ∞ is the unique element of P^1(F̄_q) which has a unique r-preimage, ν(∞) is the unique element of P^1(F̄_q) which has a unique f-preimage. But since f(X) ∈ F_q(X), also ν(∞)^q has a unique f-preimage, so that ν(∞)^q = ν(∞) and thus ν(∞) ∈ P^1(F_q). Likewise the unique f-preimage of ν(∞) is µ^{-1}(∞), which must equal its q-th power and hence lies in P^1(F_q). Thus there are degree-one ρ, θ ∈ F_q(X) such that ρ(ν(∞)) = ∞ and θ(∞) = µ^{-1}(∞). Since ∞ is fixed by the degree-one rational functions ρ • ν and µ • θ in F_Q(X), it follows that ρ • ν and µ • θ are degree-one polynomials. Since r(X) is additive, this implies that g := (ρ • ν) • r • (µ • θ)
is an additive polynomial plus a constant. But g(X) equals ρ • f • θ, so since ρ, f, θ ∈ F_q(X) it follows that g(X) = α + L(X) for some α ∈ F_q and some additive L(X) ∈ F_q[X]. Thus f = ρ^{-1} • (X + α) • L • θ^{-1} is equivalent to an additive polynomial.

Lemma 4.5. If K is an algebraically closed field with char(K) ≠ 2, and x is transcendental over K, then Aut_K(K(x)) has no subgroup isomorphic to (C_2)^3, and every subgroup of Aut_K(K(x)) isomorphic to (C_2)^2 is conjugate to ⟨σ, τ⟩ where σ(x) = −x and τ(x) = 1/x.

Proof. Let G be a subgroup of Γ := Aut_K(K(x)) such that G ≅ (C_2)^r with r ∈ {2, 3}. As in Lemma 2.9, we identify Γ with the group Γ_K of degree-one rational functions in K(X) under the operation of functional composition. Pick some σ ∈ G of order 2. Then σ ∈ K(X) is a degree-one rational function having order 2 under composition, so σ ≠ X. Since the numerator of σ − X has degree 2 if σ(∞) ≠ ∞ and degree at most 1 otherwise, σ has either one or two fixed points in P^1(K). If σ has just one fixed point then some conjugate of σ in Γ_K has ∞ as its unique fixed point, and hence equals X + α with α ∈ K^*, so the order of σ is char(K) and hence is not 2, a contradiction. Thus σ has two fixed points, so there exists µ ∈ Γ_K which maps these fixed points to 0 and ∞. Then σ̃ := µσµ^{-1} fixes 0 and ∞ and has order 2, so σ̃ = −X. Note that σ̃ is in G̃ := µGµ^{-1}, which is isomorphic to (C_2)^r. Since G̃ is abelian, each τ ∈ G̃ \ ⟨σ̃⟩ must commute with σ̃, and hence must permute the fixed points 0, ∞ of σ̃. If τ fixes 0 and ∞ then τ = αX with α ∈ K^*, and since τ has order at most 2 it follows that α ∈ {1, −1}, so τ ∈ ⟨σ̃⟩, a contradiction. Thus τ interchanges 0 and ∞, so τ = β/X with β ∈ K^*. Pick one such τ, and note that any ρ ∈ G̃ \ ⟨σ̃⟩ must have the form ρ(X) = γ/X with γ ∈ K^*, so that G̃ contains τρ = βγ^{-1}X, whence βγ^{-1} ∈ {1, −1} so that ρ ∈ ⟨σ̃, τ⟩. Thus G̃ = ⟨σ̃, τ⟩, so that r = 2. Finally, for ν := δX where δ^2 = β, we have ν^{-1}G̃ν = ⟨ν^{-1}σ̃ν, ν^{-1}τν⟩ = ⟨−X, 1/X⟩.

Proposition 4.6. If f(X) ∈ F_q(X) is an indecomposable exceptional rational function of degree 2^r with r ∈ {3, 5, 7} then q is even and f(X) is equivalent to an additive polynomial.
Proof. Since f(X) is indecomposable, it must be separable by Lemma 2.3. Let A and G be the arithmetic and geometric monodromy groups of f(X), so Lemma 2.4 implies that A is a primitive subgroup of S_n with n := 2^r, G is a transitive normal subgroup of A with cyclic quotient, and the one-point stabilizers A_1 and G_1 have a unique common orbit. By testing all normal subgroups G of all primitive subgroups A of S_n, we find that the only possibility is that G ≅ (C_2)^r. If char(F_q) = 2 then Proposition 4.3 implies that f(X) is equivalent to an additive polynomial. Henceforth assume char(F_q) > 2. Let x be transcendental over F_q, write t := f(x), let Ω be the Galois closure of F_q(x)/F_q(t), and let F_Q be the algebraic closure of F_q in Ω, where Q = q^k. Lemma 2.12 implies Ω = F_Q(x), so G is a subgroup of Aut_{F_Q}(F_Q(x)), which in turn embeds into Aut_{F̄_Q}(F̄_Q(x)). But G ≅ (C_2)^r with r > 2, contradicting Lemma 4.5.
Remark. We do not know whether Proposition 4.6 remains true for larger values of r. Our proof for r ≤ 7 does not by itself imply the result for r = 9 or r = 11, since in those cases there exist groups A and G satisfying the conditions used in the proof, but for which G is not (C 2 ) r .
5. Permutation rational functions of degree 4
In this section we prove Theorem 1.4.
Lemma 5.1. Suppose q is odd and f(X) ∈ F_q(X) is a separable degree-4 rational function whose geometric monodromy group G is isomorphic to (C_2)^2. Then f = µ • (X^2 + X^{-2}) • ν for some degree-one µ, ν ∈ F̄_q(X). The branch points of f(X) are µ(∞), µ(2), and µ(−2), each of which has exactly two f-preimages.
Proof. Since |G| = deg(f), if x is transcendental over F̄_q then the extension F̄_q(x)/F̄_q(f(x)) is Galois with Galois group G. By Lemma 4.5, there is some ρ ∈ Aut_{F̄_q}(F̄_q(x)) such that G̃ := ρ^{-1}Gρ equals ⟨σ, τ⟩, where σ(x) = −x and τ(x) = 1/x. Here ρ(x) = ν(x) for some degree-one ν ∈ F̄_q(X). The fixed field F̄_q(x)^{G̃} contains both f̃(x) := f(ν^{-1}(x)) and g(x), where g(X) := X^2 + X^{-2}. Since deg(f̃) = deg(g) = |G̃| = [F̄_q(x) : F̄_q(x)^{G̃}], Lemma 2.11 implies F̄_q(x)^{G̃} = F̄_q(g(x)). Thus f̃(X) = µ • g for some degree-one µ ∈ F̄_q(X), so that f = µ • g • ν. For any α ∈ P^1(F̄_q), the images of α under the four elements of G̃ all have the same image under g(X), so that if g(α) is a branch point of g(X) then some nonidentity element of G̃ fixes α, whence either α = −α (so α ∈ {0, ∞}) or α = 1/α (so α ∈ {1, −1}) or α = −1/α (so α ∈ {i, −i} where i^2 = −1). Since g(X) induces a surjective map P^1(F̄_q) → P^1(F̄_q), it follows that every branch point of g(X) is in g({0, ∞, 1, −1, i, −i}) = {∞, 2, −2}. Conversely it is clear that each of ∞, 2, and −2 has two g-preimages. Since deg(µ) = deg(ν) = 1, it follows that the branch points of f(X) are µ(∞), µ(2), and µ(−2), which are distinct and which each have two f-preimages.
Lemma 5.2. Suppose q is odd and f 1 , f 2 ∈ F q (X) are degree-4 exceptional rational functions. Then there exist exactly three degree-one µ ∈ F q (X) such that µ • f 1 (X) and f 2 (X) have the same branch points, and for each such µ(X) there is exactly one ν ∈ F q (X) for which µ • f 1 • ν = f 2 .
Proof. Note that f_1(X) is separable since char(F_q) ∤ deg(f_1). Let A and G be the arithmetic and geometric monodromy groups of f_1(X), so Lemma 2.4 implies A is a subgroup of S_4, and G is a transitive normal subgroup of A with cyclic quotient, and the one-point stabilizers A_1 and G_1 have a unique common orbit. It is easy to check that the only possibility is A = A_4 and G = (C_2)^2, so that [A : G] = 3. By Lemma 5.1, f_1(X) has exactly three branch points, each of which has exactly two f_1-preimages in P^1(F̄_q), and f_1 = ρ_1 • g • θ_1 for some degree-one ρ_1, θ_1 ∈ F̄_q(X), where g(X) := X^2 + X^{-2}. Since f_1(X) is exceptional, in particular f_1(X) permutes P^1(F_q). If a branch point of f_1(X) is in P^1(F_q) then it has two preimages in P^1(F̄_q) and exactly one preimage in P^1(F_q); but this is impossible since the set of f_1-preimages of any element of P^1(F_q) is preserved by the q-th power map. Hence no branch point of f_1(X) is in P^1(F_q). Since the q-th power map preserves the set of branch points of f_1(X), it follows that the three branch points of f_1(X) are α, α^q, α^{q^2} for some α ∈ F_{q^3} \ F_q. Likewise f_2 = ρ_2 • g • θ_2 and the branch points of f_2(X) are β, β^q, β^{q^2} for some β ∈ F_{q^3} \ F_q. Thus for any degree-one µ(X) ∈ F_q(X) such that µ • f_1(X) and f_2(X) have the same branch points, we must have µ(α) = β^{q^j} for some j ∈ {0, 1, 2}, and then µ(α^q) = µ(α)^q = β^{q^{j+1}} and likewise µ(α^{q^2}) = β^{q^{j+2}}. Conversely,
for each j ∈ {0, 1, 2} there is a unique degree-one µ_j(X) ∈ F̄_q(X) such that µ_j(α^{q^i}) = β^{q^{i+j}} for each i ∈ {0, 1, 2}. Write µ_j(X)^q = µ_j^{(q)}(X^q), where µ_j^{(q)}(X) is obtained from µ_j(X) by raising all coefficients to the q-th power. Then µ_j^{(q)}(α^{q^i}) = µ_j(α^{q^{i-1}})^q = β^{q^{i+j}}, so that µ_j^{(q)}(X) and µ_j(X) take the same values as one another at each of the three elements α^{q^i}. It follows that µ_j^{(q)}(X) = µ_j(X), so that µ_j(X) ∈ F_q(X). Here µ_j • f_1(X) and f_2(X) have the same branch points, so the three functions µ_j(X) comprise all degree-one µ(X) ∈ F_q(X) for which µ • f_1(X) and f_2(X) have the same branch points.
Fix j and put µ := µ_j. Let ∆ := {∞, 2, −2} be the set of branch points of g(X). The q-th power map transitively permutes the set of branch points of f_1(X), which is ρ_1(∆) since f_1(X) = ρ_1 • g • θ_1(X). Upon replacing ρ_1 and θ_1 by ρ_1^{(q^ℓ)} and θ_1^{(q^ℓ)} for a suitable ℓ, we may assume that ρ_1(∞) = α; note that this replacement preserves the identity f_1 = ρ_1 • g • θ_1, since f_1^{(q^ℓ)} = f_1 and g^{(q^ℓ)} = g. Further, since −g(iX) = g(X) when i^2 = −1, we may replace ρ_1 and θ_1 by ρ_1(−X) and iθ_1(X) if necessary in order to assume that ρ_1(2) = α^q. Likewise, we may assume that ρ_2(∞) = µ(α) and ρ_2(2) = µ(α^q), so that ρ_2(δ) = µ(ρ_1(δ)) for each δ ∈ ∆. Since ρ_2(X) and µ(ρ_1(X)) are degree-one rational functions which agree at three points, we have ρ_2(X) = µ(ρ_1(X)), whence
f_2 • θ_2^{-1} = ρ_2 • g = µ • f_1 • θ_1^{-1}.
Let S be the set of all degree-one η ∈ F̄_q(X) for which µ • f_1 • η = f_2. Then S contains θ := θ_1^{-1} • θ_2, and Gal(F̄_q(X)/F_q(X)) permutes S. Moreover, for any degree-one η ∈ F̄_q(X) we have η ∈ S if and only if f_1 • θ • η^{-1} = f_1, or equivalently θ • η^{-1} ∈ G. In particular, we have |S| = 4. If η(X) ∈ G lies in F_q(X) then, since f_1 • η = f_1, injectivity of f_1(X) on P^1(F_q) implies that η(X) fixes each element of P^1(F_q), so that η(X) has at least four fixed points, which implies η(X) = X. Since Gal(F̄_q(X)/F_q(X)) permutes G, and G is a four-element set with a unique fixed point under this map, it follows that the three nonidentity elements of G comprise a single orbit under this map, and are in F_{q^3}(X) \ F_q(X). For any η ∈ S \ {θ} we know that θ • η^{-1} is a nonidentity element of G, and hence is in F_{q^3}(X) \ F_q(X). Thus at least one of θ or η is not in F_q(X). Since Gal(F̄_q(X)/F_q(X)) permutes the four-element set S, it follows that the orbits of this action have sizes 1 and 3, and the size-1 orbit consists of the unique ν ∈ F_q(X) for which µ • f_1 • ν = f_2.
Proof of Theorem 1.4. Pick f (X) ∈ F q (X) of degree four. If f (X) is inseparable then char(F q ) = 2, and Lemma 1.2 implies f (X) permutes P 1 (F q ) if and only if f (X) is equivalent to X 2 •µ•X 2 for some degree-one µ ∈ F q (X). Since X 2 • µ = ν • X 2 for some degree-one ν ∈ F q (X), this says f (X) is equivalent to X 4 . Henceforth assume f (X) is separable. If f (X) permutes P 1 (F q ) but f (X) is not exceptional, then Proposition 1.6 implies q ≤ 81. For q ≤ 81, a computer search shows that Table 1 contains representatives for all equivalence classes of non-exceptional degree-4 bijective rational functions.
It remains to determine the exceptional functions f(X). Let A and G be the arithmetic and geometric monodromy groups of f(X), so Lemma 2.4 implies A is a subgroup of S_4, and G is a transitive normal subgroup of A with cyclic quotient, where the one-point stabilizers A_1 and G_1 have a unique common orbit. By inspection, the only possibility is A = A_4 and G = (C_2)^2. If q is even then Proposition 4.3 implies f(X) is equivalent to an additive polynomial, and by Lemma 4.2 this polynomial is exceptional just when it has no roots in F_q^*. Henceforth assume q is odd. By Lemma 5.2, there is at most one equivalence class of degree-4 exceptional rational functions over F_q, so it remains only to show that the functions in (1) of Theorem 1.4 are exceptional. Pick α, β ∈ F_q for which X^3 + αX + β is irreducible, and put

f(X) := (X^4 − 2αX^2 − 8βX + α^2)/(X^3 + αX + β).
Let γ_1, γ_2, γ_3 be the roots of X^3 + αX + β, so that γ_i ∈ F_{q^3} \ F_q. Then the numerator of f(X) − f(Y) is

(X − Y) · ∏_{i=1}^{3} (XY − γ_i(X + Y) − α − 2γ_i^2).
Since each of the above factors has Y-degree 1, and plainly the numerator of f(X) − f(Y) has no factor of the form Y − δ with δ ∈ F̄_q, we see that each factor is irreducible in F̄_q[X, Y]. But none of the factors other than X − Y is a constant multiple of a polynomial in F_q[X, Y], so that f(X) is exceptional by Lemma 2.4.
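As a quick computational cross-check (ours, not part of the paper's argument), the Python sketch below scans small odd primes q and all (α, β) for which X^3 + αX + β is irreducible over F_q (for a cubic, having no root in F_q suffices), and verifies that the resulting f(X) permutes P^1(F_q); since the denominator has no roots in F_q, f is finite on F_q and f(∞) = ∞.

# Verify over small prime fields that the functions in Theorem 1.4(1)
# permute P^1(F_q).  The primes below are arbitrary small choices.
for q in (3, 5, 7, 11):
    found = 0
    for a in range(q):
        for b in range(q):
            # the cubic is irreducible over F_q iff it has no root there
            if any((x**3 + a * x + b) % q == 0 for x in range(q)):
                continue
            found += 1
            num = lambda x: (x**4 - 2 * a * x**2 - 8 * b * x + a * a) % q
            den = lambda x: (x**3 + a * x + b) % q
            vals = {num(x) * pow(den(x), -1, q) % q for x in range(q)}
            assert len(vals) == q      # bijective on F_q, and f(inf) = inf
    print(f"q={q}: {found} irreducible cubics, all give permutations")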
Remark. The polynomials f (X) in case (1) of Theorem 1.4 have the unusual property that the branch points of g(X) := f (X)/4 are precisely the three elements of g −1 (∞) \ {∞}, namely the three roots of X 3 + αX + β.
Remark. Cases (1) and (2) of Theorem 1.4 may be combined into a single case which covers both even and odd characteristic. Namely, it is easy to deduce from Theorem 1.4 that a separable f (X) ∈ F q (X) of degree 4 is exceptional if and only if f (X) is equivalent to
(X^4 − 2αX^2 − 8βX + α^2)/(X^3 + αX + β) for some α, β ∈ F_q such that X^3 + αX + β is irreducible in F_q[X].
6. The number of degree-4 permutation rational functions
In this section we answer [5, Problems 9.1 and 9.2], which for each q ask for the number of equivalence classes of degree-4 permutation rational functions over F q , an explicit representative for each class, and the total number of degree-4 permutation rational functions over F q . If q is odd then any function in (1) of Theorem 1.4 represents the unique equivalence class. If q is even then a system of distinct representatives for the classes consists of X 4 , all polynomials X 4 +X 2 +αX with α ∈ F q \{β 3 + β : β ∈ F q }, and if q ≡ 4 (mod 6) then in addition X 4 + γX and X 4 + γ 2 X for a single prescribed non-cube γ ∈ F * q . The number of equivalence classes of non-exceptional degree-4 permutation rational functions f (X) ∈ F q (X) is
(1) 0 if q > 8, (2) 1 if q = 7, (3) 2 if q ∈ {2, 5}, (4) 3 if q ∈ {3, 8}, (5) 5 if q = 4.
Representatives for the distinct classes are all the entries in Table 1, using all possible values for α and ω, except that for each of the two choices of ω the third entry for q = 4 yields the same equivalence class.
Proof. The non-exceptional case follows from Theorem 1.4 and routine computations. The exceptional case for q odd follows from Theorem 1.4 and Lemma 5.2. Now suppose q is even. By Theorem 1.4, each exceptional f (X) ∈ F q (X) of degree 4 is equivalent to an additive polynomial L(X) with no roots in F * q . If L(X) is inseparable then L = h(X) • X 2 for some h(X) ∈ F q [X] of degree 2, and plainly h(0) = 0; since L(X) has no roots in F * q , also h(X) has no roots in F * q , so that h(X) is a monomial and thus L(X) is equivalent to X 4 . If L(X) is separable then, by composing on both sides with polynomials of the form δX with δ ∈ F * q , we see that L(X) is equivalent to either X 4 + X 2 + αX with α ∈ F * q or X 4 + γX with γ varying over a set of coset representatives for F * q /(F * q ) 3 . The condition that these polynomials have no roots in F * q says that α ∈ ∆ := F q \ {β 3 + β : β ∈ F q } and γ / ∈ (F * q ) 3 . Here |∆| = (q + 1)/3 by [23,Prop. 4.6]. It remains only to show that equivalent separable degree-4 monic additive polynomials f, g with degree-2 coefficient in {0, 1} are equal. So suppose that g = µ • f • ν for some degree-one µ, ν ∈ F q (X). Since g −1 (∞) = {∞} = f −1 (∞), and any element of F q has four distinct preimages under each of f (X) and g(X), we see that both µ(X) and ν(X) must fix ∞, and hence must be degree-one polynomials. If neither f (X) nor g(X) has a degree-2 term then the ratio of their degree-1 coefficients is a cube so f (X) = g(X). If at least one of f (X) or g(X) has a degree-2 term then equating coefficients of X 4 and X 2 shows that ν(X) and µ(X) are monic, and then since f (X) and g(X) are additive it follows that f (X) = g(X).
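The set ∆ in the preceding proof is easy to tabulate. Below is a small Python sketch (ours, not the paper's) implementing multiplication in F_{2^k} via carry-less multiplication modulo an irreducible polynomial of our choosing, and printing |∆| = |F_q \ {β^3 + β : β ∈ F_q}| for q = 2, 4, 8; for q ≡ 2 (mod 6) the size should equal (q + 1)/3 as cited from [23, Prop. 4.6].

# Count Delta = F_q \ {b^3 + b : b in F_q} for q = 2^k.
# Field elements are ints (bitmasks of coefficients over F_2); MODS maps
# k to an irreducible degree-k polynomial (our choice of representative).
MODS = {1: 0b10, 2: 0b111, 3: 0b1011}   # X, X^2+X+1, X^3+X+1

def gf_mul(a, b, k):
    poly, r = MODS[k], 0
    for i in range(k):                   # carry-less multiplication
        if (b >> i) & 1:
            r ^= a << i
    for i in range(2 * k - 2, k - 1, -1):  # reduce modulo poly
        if (r >> i) & 1:
            r ^= poly << (i - k)
    return r

for k in (1, 2, 3):
    q = 2 ** k
    image = {gf_mul(gf_mul(b, b, k), b, k) ^ b for b in range(q)}
    delta = set(range(q)) - image
    print(f"q={q}: |Delta| = {len(delta)}")   # prints 1, 1, 3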
Problem 9.2 in [5] asks for an explicit formula for the number of degree-4 permutation rational functions over F_q. To this end, let Γ be the group of degree-one rational functions over F_q under the operation of functional composition. Then the equivalence classes of nonconstant rational functions over F_q are precisely the orbits of Γ × Γ on the set F_q(X) \ F_q under the group action (µ(X), ν(X)) : g(X) → µ • g • ν^{-1}. By the orbit-stabilizer theorem, in order to compute the size of an equivalence class it suffices to compute the size of the stabilizer of any prescribed element of the class.

Proposition 6.2. If f(X) ∈ F_q(X) is a degree-4 permutation rational function, then the number of pairs (µ(X), ν(X)) of degree-one rational functions in F_q(X) for which µ • f • ν = f is as follows:
(1) 3 if q is odd and f(X) is exceptional, (2) q if q is even and f(X) is equivalent to an additive polynomial having a degree-2 term, (3) 3q if q ≡ 4 (mod 6) and f(X) is equivalent to X^4 + γX with γ ∈ F_q^*, (4) q^3 − q if q is even and f(X) is equivalent to X^4, (5) the stabilizer size listed in the corresponding entry of Table 1, if f(X) is equivalent to a rational function in Table 1.
Proof. If f(X) is non-exceptional, and hence is equivalent to a function in Table 1, then this is a simple computation. Now assume f(X) is exceptional. If q is odd then the result follows from Lemma 5.2 by putting f_1 = f_2 = f. Henceforth assume q is even, so that f(X) is equivalent to an additive polynomial having no roots in F_q^*. Since equivalent functions yield the same number of pairs (µ(X), ν(X)), we may assume that f(X) is a monic additive polynomial. If f(X) = X^4 then for any degree-one ν(X) ∈ F_q(X) there is a unique degree-one µ(X) ∈ F_q(X) such that µ • f • ν = f, so there are q^3 − q pairs (µ, ν). If f(X) ≠ X^4 then f^{-1}(∞) = {∞}, while any element of F̄_q has at least two distinct f-preimages in F̄_q. Thus if degree-one µ, ν ∈ F_q(X) satisfy µ • f • ν = f then both µ(X) and ν(X) must fix ∞, so that µ(X) = γX + δ and ν(X) = αX + β with α, γ ∈ F_q^* and β, δ ∈ F_q. By equating coefficients, we see that µ • f • ν equals f(X) if and only if δ = γf(β), γ = 1/α^4, and α^{4−j} = 1 for each j occurring as the degree of a term of f(X). This yields the stated formulas.
Remark. In case q is odd and f(X) is given by (1) of Theorem 1.4, one can explicitly describe the pairs (µ(X), ν(X)) with µ • f • ν = f. Let γ be a root of X^3 + αX + β, so that γ_1 := γ ∈ F_{q^3} \ F_q, and put γ_2 := γ^q and γ_3 := γ^{q^2}. It suffices to describe the pairs with µ(4γ_1) = 4γ_2, since the other pairs are then (µ • µ, ν • ν) and (X, X). Put
µ(X) := 4 · (−(3β + δ)X + 4α^2)/(3αX + 24β − 4δ),

where δ := γ_1^2 γ_2 + γ_2^2 γ_3 + γ_3^2 γ_1 and ε := γ_1 γ_2^2 + γ_2 γ_3^2 + γ_3 γ_1^2. Writing Tr for the trace map from F_{q^3} to F_q, let ν(X) be the rational function

((−(ε + 3β) + Tr((γ_1 − γ_2)^{(3q^2+2q+1)/2})) X + α^2 − α Tr((γ_1 − γ_2)^{(q^2+2q+1)/2})) / (3αX + δ + 3β + Tr((γ_1 − γ_2)^{(q^2+2q+3)/2})).
Then µ(X), ν(X) ∈ F q (X) satisfy µ(4γ 1 ) = 4γ 2 and µ • f • ν = f .
Formulas for the number of degree-4 permutation polynomials over F q follow immediately from the previous two results, via the orbit-stabilizer theorem.
Corollary 6.3. The number of degree-4 permutation rational functions over F q is as follows:
(1) (q^3 − q)^2/3 if q is odd and q > 7, (2) q(q − 1)(q + 2)(q^3 + 1)/3 if q is even and q > 8.

Remark. Of all the counting results in this section, we find that the most illuminating one is Proposition 6.2, about the size of the stabilizer of a permutation rational function under the stated group action.
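For the smallest field these counts can be brute-forced. The Python sketch below (ours) enumerates coprime pairs (P, Q) over F_2 with max(deg P, deg Q) = 4, which represent the distinct degree-4 rational functions over F_2, and counts those inducing a bijection of P^1(F_2); combining Propositions 6.1 and 6.2 via the orbit-stabilizer theorem predicts a total of 36 + 18 + 6 + 18 = 78 such functions.

# Brute-force count of degree-4 permutation rational functions over F_2.
# Polynomials over F_2 are ints whose bit i is the coefficient of X^i.
INF = "inf"

def pdeg(p):
    return p.bit_length() - 1

def pmod(a, b):
    db = pdeg(b)
    while a and pdeg(a) >= db:
        a ^= b << (pdeg(a) - db)
    return a

def pgcd(a, b):
    while b:
        a, b = b, pmod(a, b)
    return a

def peval(p, x):                       # evaluate at x in {0, 1}
    return p & 1 if x == 0 else bin(p).count("1") & 1

def fval(P, Q, x):                     # value of P/Q at x in P^1(F_2)
    if x == INF:
        dP, dQ = pdeg(P), pdeg(Q)
        return INF if dP > dQ else (0 if dP < dQ else 1)
    num, den = peval(P, x), peval(Q, x)
    return INF if den == 0 else num    # nonzero scalars are 1 in F_2

count = 0
for P in range(32):
    for Q in range(32):
        if max(pdeg(P), pdeg(Q)) != 4 or pgcd(P, Q) != 1:
            continue
        if len({fval(P, Q, x) for x in (0, 1, INF)}) == 3:
            count += 1
print(count)   # expected: 78 = 36 + 18 + 6 + 18 from Props. 6.1 and 6.2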
7. Exceptional rational functions of degrees 8, 32, and 128
In this section we prove Theorems 1.7, 1.8, and 1.11.
Proof of Theorem 1.7. It suffices to prove the first sentence of Theorem 1.7, since this implies the second sentence by Lemma 1.6. Let f(X) ∈ F_q(X) be exceptional of degree 8. If f(X) is indecomposable then the conclusion follows from Proposition 4.6. Thus we may assume f(X) = g(h(X)) with g, h ∈ F_q(X) of degree at least 2. Plainly g(X) and h(X) are exceptional. The degrees of g(X) and h(X) are 2 and 4 in some order, which by Lemma 1.2 implies that q is even and one of g(X) and h(X) is equivalent to X^2. It follows that f′(X) = 0, so that f(X) is inseparable and thus f(X) = u(X^2) with u(X) ∈ F_q(X) of degree 4, where again u(X) is exceptional. By Theorem 1.4, u = µ • L • ν where µ, ν ∈ F_q(X) have degree one and L(X) = X^4 + αX^2 + βX with α, β ∈ F_q, where L(X) has no roots in F_q^*. Then ν • X^2 = X^2 • ρ for some degree-one ρ(X) ∈ F_q(X), so that f(X) = µ • L • X^2 • ρ, whence f(X) is equivalent to L(X^2) as desired.
Proof of Theorem 1.8. Let f(X) ∈ F_q(X) be an exceptional rational function of degree 32. Proposition 4.6 implies the conclusion if f(X) is indecomposable, so we may assume that f = g • h with g, h ∈ F_q(X) both of degree at least 2. Then g(X) and h(X) are exceptional. If either g(X) or h(X) has degree 2 then Lemma 1.2 implies q is even and either g′(X) = 0 or h′(X) = 0, so that f′(X) = 0, whence f = u(X^2) where u(X) ∈ F_q(X) is exceptional. Finally, if {deg(g), deg(h)} = {4, 8} then Theorem 1.7 implies q is even, so Theorems 1.4 and 1.7 yield the conclusion.

Proof of Theorem 1.11. Let f(X) ∈ F_q(X) be an exceptional rational function of degree 128. Again we may assume f = g • h with g, h ∈ F_q(X) of degree at least 2. Then g, h are exceptional. Since one of g(X) and h(X) has degree 2^r with r ∈ {1, 3, 5}, the combination of Lemma 1.2 and Theorems 1.7 and 1.8 implies that q is even.

8. Indecomposable exceptional rational functions of degree less than 4096

Lemma 8.1. If f(X) ∈ F_q(X) is a separable exceptional rational function of degree n ≥ 5, then the arithmetic monodromy group of f(X) is not in {A_n, S_n}.

Proof. Let A and G be the arithmetic and geometric monodromy groups of f(X), and suppose that A ∈ {A_n, S_n}. Since n ≥ 5, we know that A_n is simple. Since G is transitive, we have |G| ≥ n. Next, G_0 := G ∩ A_n satisfies [G : G_0] ≤ [A : A_n] ≤ 2, so that |G| ≤ 2|G_0| and thus |G_0| ≥ |G|/2 ≥ n/2 > 1. Since G is normal in A, also G_0 is normal in A_n, so that G_0 = A_n. But the stabilizer of 1 in A_n has orbits {1} and {2, 3, . . . , n}, so these are also the orbits of the stabilizer of 1 in each of A and G. This contradicts exceptionality by Lemma 2.4.

Proof of Theorem 1.12. Suppose f(X) ∈ F_q(X) is an indecomposable exceptional rational function of degree n, where n < 4096 and n is not a prime power. Then f(X) is separable by Lemma 2.3. Let A and G be the arithmetic and geometric monodromy groups of f(X), so that A is a subgroup of S_n and G is a transitive normal subgroup of A with cyclic quotient. By Lemma 2.4, indecomposability of f(X) implies that A is primitive, and exceptionality implies that if A_1 and G_1 are the subgroups of elements of A and G which fix the element 1 of {1, 2, . . . , n} then A_1 and G_1 have a unique common orbit on {1, 2, . . . , n}. Since n > 1 (by indecomposability) and n is not a prime power, we have n ≥ 6, so Lemma 8.1 implies A ∉ {A_n, S_n}. Finally, for each candidate n, we use the following Magma code to test whether there are groups satisfying the conditions required of A and G:

for n in [2..4095] do
  if not IsPrimePower(n) then
    for A in PrimitiveGroups(n) do
      if A notin {Alt(n),Sym(n)} then
        A1 := Stabilizer(A,1);
        for GG in NormalSubgroups(A) do
          G := GG`subgroup;
          if #G gt 1 and G ne A and IsCyclic(A/G) and
             1 eq #{i : i in Orbits(Stabilizer(G,1)) | i in Orbits(A1)} and
             IsTransitive(G) then
            n; continue n;
          end if;
        end for;
      end if;
    end for;
  end if;
end for;
Remark. The reason Theorem 1.12 restricts to n < 4096 is that Magma's database of primitive groups consists of the primitive groups of degree less than 4096. If this database were extended to larger degrees then one could extend Theorem 1.12 to larger degrees via the same proof.
9. Corrections to the paper [5]

The paper [5] did not accurately describe the literature on this topic, which might hinder readers of [5] who would like to contribute to this topic. For the benefit of such readers, we now correct several inaccuracies in [5].
Although [5, p. 867] says "there is a lack of references that deal, in a compact and self-contained way, with the finite field theoretic framework", in fact this is done in [2,7,9,11,13,17,24] among many other sources.
Although the explicit production of all rational functions with prescribed monodromy groups is discussed in [5, p. 868], no references are given, suggesting that [5] was the first paper to do this in any situation. In fact, this was done previously in [9,12,14,16,18,25,26,27], via an assortment of powerful methods.
The authors of [5] assert that Theorem 3.2 of their paper is a generalization to rational functions of a result which had been proved previously only for polynomials. However, this is not the case -instead, [5,Thm. 3.2] is a weaker version of [2,Thms. 4 and 5], which appeared 50 years before [5]. In the intervening 50 years the latter result has been generalized in [7, Thm. 1 and Prop. 1], [9, General Exceptionality Lemma, p. 185], and [13, Thm. 1.1] to more general settings than rational functions. Although [5, p. 871] asserts that Cohen and Fried only proved this result for polynomials, in fact neither Cohen nor Fried has written any paper proving these results in just the polynomial case (the authors of [5] say their proofs follow ideas of Cohen, but the only paper of Cohen's they cite is a survey paper which does not contain any proofs, and the authors of [5] overlooked all historical comments in that survey paper, including the fact that the "new" rational function analogues in [5] were well known).
The result [5, Lemma 4.1] is a weaker version of a very special case of [13, Thm. 2.5]; we note that the proof of [5, Lemma 4.1] uses among other things a different result from [13].
The paper [5] neglects to mention that major progress towards classifying groups satisfying conditions (1)-(3) on [5, p. 872] was made in [8,9] and especially [11].
X) in K(t)[X], and G(X) is irreducible in K(t)[X], it follows that (3) does not hold if and only if gcd(G(X), G′(X)) = G(X). Since deg(G′) < deg(G), the latter condition says that G′(X) = 0, i.e., a′(X) = tb′(X), which in turn says that a′(X) = 0 = b′(X). Writing p := char(K), it follows that (3) does not hold if and only if a, b ∈ K[X^p], i.e., if and only if f(X) is not separable. Plainly if f(X) is not separable then f′(X) = 0.
(c) A 1 and G 1 have exactly one common orbit on S.
Definition 2.5. A subgroup A of S_n is primitive if A is transitive and there are no groups strictly between A and the stabilizer of 1 in A. Proof. By Lemma 2.2 we have [L(x) : L(t)] = n = [K(x) : K(t)], so since f(x) = t it follows that the numerator of f(X) − t is irreducible in L(t)[X]. Thus the first two sentences follow from basic Galois theory. Item (2) is a weaker version of [2, Thms. 4 and 5]. Although
Definition 4.1. An additive polynomial in F_q[X] is a polynomial of the form Σ_{i=0}^{r} α_i X^{p^i} with α_i ∈ F_q and p := char(F_q).
Proposition 4.3. If f(X) ∈ F_q(X) is separable with geometric monodromy group G then the following are equivalent: (1) |G| = deg(f), and deg(f) is a power of char(F_q); (2) f(X) is equivalent to an additive polynomial in F_q[X]. The proof uses the following easy classical result [19, Thm. 8 of Ch. 1].

Lemma 4.4. If f(X) ∈ F_q[X] is squarefree then the roots of f(X) form a group under addition if and only if f(X) is additive.
Remark. The polynomial case of Proposition 4.3 implies the main portions of [1, Thm. 3], [3, Thm. 1.1], and [25, Thm. 2.10(a)].
Proposition 6.1. The number of equivalence classes of exceptional f(X) ∈ F_q(X) of degree 4 is (1) 1 if q is odd, (2) (q + 4)/3 if q ≡ 2 (mod 6), (3) (q + 8)/3 if q ≡ 4 (mod 6).
Theorem 1.4 or is equivalent to one of several other classes of examples. Our result shows that Hou's extra classes of examples are superfluous, in the sense that they are repetitions of (1)-(3). In particular, Hou lists a family of examples over each finite field of characteristic 3, and by our result these are all equivalent to the functions in
must be equivalent to one of 19 specific functions, while our result replaces Hou's list of 19 functions by a list of 14 functions. Finally, the three permutations of P^1(F_8) in our Table 1 are counterexamples to Hou's result. Hou's proof is quite long and computational, involving among other things the computation of a polynomial in five variables having more than 100 terms. By contrast, our proof is short and conceptual, using a completely different approach based on Galois theory. Moreover, it does not seem to be quicker to deduce Theorem 1.4 from Hou's result than to prove Theorem 1.4 directly. We apply our Theorem 1.4 to answer [5, Problems 9.1
and 9.2] about the number of degree-4 permutation rational functions in
Table 1. Non-exceptional degree-4 permutation rational functions over F_q (the stabilizer size is defined in Proposition 6.2)

q   f(X)                               Conditions       Stabilizer size
8   (X^4 + αX^3 + X)/(X^2 + X + 1)     α^3 + α = 1      6
7   X^4 + 3X                                            3
5   (X^4 + X + 1)/(X^2 + 2)                             1
5   (X^4 + X^3 + 1)/(X^2 + 2)                           3
4   (X^4 + ωX)/(X^3 + ω^2)             ω^2 + ω = 1      6
4   (X^4 + X^2 + X)/(X^3 + ω)                           2
4   (X^4 + ωX^2 + X)/(X^3 + X + 1)                      2
3   X^4 − X^2 + X                                       1
3   (X^4 + X + 1)/(X^2 + 1)                             1
3   (X^4 + X^3 + 1)/(X^2 + 1)                           3
2   X^4 + X^3 + X                                       1
2   (X^4 + X^3 + X)/(X^2 + X + 1)                       2
The authors thank the referee for helpful comments. The second author thanks the National Science Foundation for support under grant DMS-1601844.
References
[1] A. Bremner and P. Morton, Polynomial relations in characteristic p, Quart. J. Math. Oxford (2) 29 (1978), 335-347.
[2] S. D. Cohen, The distribution of polynomials over finite fields, Acta Arith. 17 (1970), 255-271.
[3] S. D. Cohen, The factorable core of polynomials over finite fields, J. Austral. Math. Soc. (Ser. A) 49 (1990), 309-318.
[4] Z. Ding and M. E. Zieve, Rédei functions, in preparation.
[5] A. Ferraguti and G. Micheli, Full classification of permutation rational functions and complete rational functions of degree three over finite fields, Des. Codes Cryptogr. 88 (2020), 867-886.
[6] M. D. Fried, Arithmetical properties of function fields (II). The generalized Schur problem, Acta Arith. 25 (1974), 225-258.
[7] M. Fried, On a theorem of MacCluer, Acta Arith. 25 (1974), 121-126.
[8] M. Fried, Galois groups and complex multiplication, Trans. Amer. Math. Soc. 235 (1978), 141-163.
[9] M. D. Fried, R. Guralnick and J. Saxl, Schur covers and Carlitz's conjecture, Israel J. Math. 82 (1993), 157-225.
[10] R. M. Guralnick and P. Müller, Exceptional polynomials of affine type, J. Algebra 194 (1997), 429-454.
[11] R. M. Guralnick, P. Müller and J. Saxl, The rational function analogue of a question of Schur and exceptionality of permutation representations, Mem. Amer. Math. Soc. 162 (2003), no. 773, viii + 79 pp.
[12] R. M. Guralnick, J. E. Rosenberg and M. E. Zieve, A new family of exceptional polynomials in characteristic two, Annals of Math. 172 (2010), 1361-1390.
[13] R. M. Guralnick, T. J. Tucker and M. E. Zieve, Exceptional covers and bijections on rational points, Int. Math. Res. Notices 2007, no. 1, art. rnm004, 20 pp.
[14] R. M. Guralnick and M. E. Zieve, Polynomials with PSL(2) monodromy, Annals of Math. 172 (2010), 1315-1359.
[15] X.-D. Hou, Rational functions of degree four that permute the projective line over a finite field, Comm. Algebra 49 (2021), 3798-3809.
[16] A. A. Klyachko, Monodromy groups of polynomial mappings, in: Studies in Number Theory, 6, Izdat. Saratov. Univ., Saratov, 1975, 82-91 (in Russian).
[17] R. Lidl, G. L. Mullen and G. Turnwald, Dickson Polynomials, Pitman Monogr. Surveys Pure Appl. Math. 65, Longman Sci. & Tech., Harlow, 1993.
[18] P. Müller, A Weil-bound free proof of Schur's conjecture, Finite Fields Appl. 3 (1997), 25-32.
[19] O. Ore, On a special class of polynomials, Trans. Amer. Math. Soc. 35 (1933), 559-584.
[20] L. Rédei, Über eindeutig umkehrbare Polynome in endlichen Körpern, Acta Univ. Szeged. Sect. Sci. Math. 11 (1946), 85-92.
[21] J. F. Ritt, Prime and composite polynomials, Trans. Amer. Math. Soc. 23 (1922), 51-66.
[22] A. Schinzel, Polynomials with Special Regard to Reducibility, Cambridge Univ. Press, Cambridge, 2000.
[23] G. Turnwald, A new criterion for permutation polynomials, Finite Fields Appl. 1 (1995), 64-82.
[24] G. Turnwald, On Schur's conjecture, J. Austral. Math. Soc. (Ser. A) 58 (1995), 312-357.
[25] G. Turnwald, Some notes on monodromy groups of polynomials, in: Number Theory in Progress, vol. 1, de Gruyter, Berlin (1999), 539-552.
[26] O. Zariski, Sulle equazioni algebriche contenenti linearmente un parametro e risolubili per radicali, Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Natur. (5) 33 (1924), no. 2, 80-82.
[27] O. Zariski, Sopra una classe di equazioni algebriche contenenti linearmente un parametro e risolubili per radicali, Rend. Circolo Mat. Palermo 50 (1926), 196-218.
[28] M. E. Zieve, Permutation polynomials on F_q induced from Rédei function bijections on subgroups of F_q^*, Monatsh. Math., to appear. Available at arXiv:1310.0776v2, 7 Oct 2013.
| [] |
[
"TENSORIZED LSSVMS FOR MULTITASK REGRESSION",
"TENSORIZED LSSVMS FOR MULTITASK REGRESSION"
] | [
"Jiani Liu \nSchool of Information and Communication Engineering\nUniversity of Electronic Science and Technology of China\n610054ChengduChina\n",
"Qinghua Tao \nESAT-STADIUS\n3001Leuven, HeverleeKUBelgium\n",
"Ce Zhu \nSchool of Information and Communication Engineering\nUniversity of Electronic Science and Technology of China\n610054ChengduChina\n",
"Yipeng Liu \nSchool of Information and Communication Engineering\nUniversity of Electronic Science and Technology of China\n610054ChengduChina\n",
"Johan A K Suykens \nESAT-STADIUS\n3001Leuven, HeverleeKUBelgium\n"
] | [
"School of Information and Communication Engineering\nUniversity of Electronic Science and Technology of China\n610054ChengduChina",
"ESAT-STADIUS\n3001Leuven, HeverleeKUBelgium",
"School of Information and Communication Engineering\nUniversity of Electronic Science and Technology of China\n610054ChengduChina",
"School of Information and Communication Engineering\nUniversity of Electronic Science and Technology of China\n610054ChengduChina",
"ESAT-STADIUS\n3001Leuven, HeverleeKUBelgium"
] | [] | Multitask learning (MTL) can utilize the relatedness between multiple tasks for performance improvement. The advent of multimodal data allows tasks to be referenced by multiple indices. Highorder tensors are capable of providing efficient representations for such tasks, while preserving structural task-relations. In this paper, a new MTL method is proposed by leveraging low-rank tensor analysis and constructing tensorized Least Squares Support Vector Machines, namely the tLSSVM-MTL, where multilinear modelling and its nonlinear extensions can be flexibly exerted. We employ a high-order tensor for all the weights with each mode relating to an index and factorize it with CP decomposition, assigning a shared factor for all tasks and retaining task-specific latent factors along each index. Then an alternating algorithm is derived for the nonconvex optimization, where each resulting subproblem is solved by a linear system. Experimental results demonstrate promising performances of our tLSSVM-MTL. | 10.1109/icassp49357.2023.10094580 | [
"https://export.arxiv.org/pdf/2303.02451v1.pdf"
] | 257,365,712 | 2303.02451 | be9bbbddfe1039a79d47dbf08e56b2dda22e41d7 |
TENSORIZED LSSVMS FOR MULTITASK REGRESSION
Jiani Liu
School of Information and Communication Engineering
University of Electronic Science and Technology of China
610054ChengduChina
Qinghua Tao
ESAT-STADIUS
3001Leuven, HeverleeKUBelgium
Ce Zhu
School of Information and Communication Engineering
University of Electronic Science and Technology of China
610054ChengduChina
Yipeng Liu
School of Information and Communication Engineering
University of Electronic Science and Technology of China
610054ChengduChina
Johan A K Suykens
ESAT-STADIUS
3001Leuven, HeverleeKUBelgium
Keywords: Multitask learning, tensor regression, CP decomposition, LSSVM, shared factor
Multitask learning (MTL) can utilize the relatedness between multiple tasks for performance improvement. The advent of multimodal data allows tasks to be referenced by multiple indices. Highorder tensors are capable of providing efficient representations for such tasks, while preserving structural task-relations. In this paper, a new MTL method is proposed by leveraging low-rank tensor analysis and constructing tensorized Least Squares Support Vector Machines, namely the tLSSVM-MTL, where multilinear modelling and its nonlinear extensions can be flexibly exerted. We employ a high-order tensor for all the weights with each mode relating to an index and factorize it with CP decomposition, assigning a shared factor for all tasks and retaining task-specific latent factors along each index. Then an alternating algorithm is derived for the nonconvex optimization, where each resulting subproblem is solved by a linear system. Experimental results demonstrate promising performances of our tLSSVM-MTL.
Introduction
Multitask learning (MTL) relies on the exploitation of the coupling information across different tasks, so as to benefit the parameter estimation for each individual task [1,3,20]. MTL has been widely applied in many fields, such as social sciences [5,6,19], medical diagnosis [7,14], etc. Various MTL methods have been developed and have shown promising performance for related tasks. Among them, support vector machines (SVMs) have achieved great success [4]. Specifically, based on the minimization of regularization functionals, regularized MTL is proposed in [5,6] with kernels including a task-coupling parameter. An MTL method based on SVM+, an extension of SVM, is developed in [8] and compared with standard SVMs in [7] and regularized MTL in [14]. Moreover, the least squares SVM (LSSVM) [15] is also generalized for MTL [19], where the inequality constraints in SVMs are modified into equality ones and a linear system is solved in the dual instead of the typical quadratic programming. These SVM-based MTL methods were all formulated with the typical vector/matrix representations.
Tensors, a natural extension for vectors and matrices, provide a more effective way to preserve multimodal information and describe complex dependencies [9,10]. Different usages of tensor representations have been successfully applied to MTL [13,17,18,[21][22][23][24][25]. For instance, motivated by the multidimensional input, [25] proposed to factorize the weight tensor for each task into a sparse task-specific part and a low rank shared part. In [18], it formulates the input as a tensor and extracts its spatial and temporal latent factors, based on which a prediction model is built. It is also intriguing to encode the projection matrices of all classifiers into tensors and apply tensor nuclear norm constraints for task relations [21,23].
The aforementioned works are all set with a single index for the involved tasks. In practice, tasks can be referenced by multiple indices with physical meanings. Taking a multimodal data task for example, restaurant recommendations consider different aspects of rating (e.g., food and service) and customers. It naturally leads to T 1 ×T 2 tasks spanned by two indices, and thus a single index fails to preserve such information. Therefore, [13] considered tasks with multiple indices and imposed low Tucker rank regularization over the stacked coefficient tensor to explore task relations. In [13], the applied Tucker decomposition can suffer from a dimensionality curse if the tensor order increases. For rank minimization, a convex relaxation is used to handle the whole weight tensor in each iteration and thereby can be problematic for large-scale data. Two variants were later developed in [17,24] with different convex relaxations for Tucker rank minimization. Though nonconvex optimization was also considered in [13], it required adjusting several ranks within Tucker, making the tuning procedures rather complicated. Besides, they all considered multilinear modelling, while nonlinearity is highly desirable for well describing complex data and tasks.
In this paper, we develop a tensorized MTL method for regression by leveraging LSSVMs, namely the tLSSVM-MTL, which constructs a high-order weight tensor on LSSVMs and indexes the tasks along different modes into groups by multiple indices. Unlike [13,17,24], we factorize the constructed tensor into CP forms since the factors are easy to explain from subspace perspective, and enable all tasks to share a common latent factor and meanwhile retain taskspecific factors. In our method, both linear and nonlinear feature maps (or kernels) can be flexibly employed. For optimization, an alternating minimization strategy is proposed with each subproblem solved by a linear system in the dual. Numerical experiments show advantageous performances of our tLSSVM-MTL over matrix-based and existing tensor-based MTL methods.
The next section gives some preliminaries. Section 3 presents the modelling and optimization for our tLSSVM-MTL. Experimental results and conclusions are in Sections 4 and 5.
Preliminaries
Scalars, vectors, matrices, and tensors are represented as x, x, X, and X (lowercase, bold lowercase, bold uppercase, and calligraphic letters), respectively. For clarity, we denote the i-th row and the j-th column of a matrix X as X[i, :]^T = x_{i,:} and X[:, j] = x_{:,j}.
CP decomposition [2,12]. Given a tensor X ∈ R^{I_1×···×I_N}, CP decomposition factorizes the tensor into a summation of rank-one components as X = Σ_{k=1}^{K} u_k^1 • ··· • u_k^N, where K is the CP rank, indicating the smallest number of rank-one components required in this representation. We represent the CP decomposition as X = [[U^1, . . . , U^N]] with U^n = [u_1^n, . . . , u_K^n] for n = 1, . . . , N.
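To make the notation concrete, here is a minimal NumPy sketch (ours, not from the paper) that rebuilds a tensor from its CP factors [[U^1, . . . , U^N]] by summing the K rank-one outer products.

import numpy as np

def cp_to_tensor(factors):
    # factors: list of N arrays, the n-th of shape (I_n, K);
    # returns the full tensor X = sum_k u^1_k o ... o u^N_k.
    letters = [chr(ord("a") + n) for n in range(len(factors))]
    # einsum spec like 'az,bz,cz->abc' (z indexes the K rank-one terms)
    spec = ",".join(l + "z" for l in letters) + "->" + "".join(letters)
    return np.einsum(spec, *factors)

# toy usage: a rank-2 CP tensor of shape (3, 4, 5)
U = [np.random.randn(i, 2) for i in (3, 4, 5)]
print(cp_to_tensor(U).shape)   # (3, 4, 5)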
LSSVM. LSSVM [15] is a variant of SVMs [4] obtained by forming equality constraints. For regression with data {x_i, y_i}_{i=1}^{m}, the primal problem of LSSVM is given as:

min_{w,b,e} J(w, b, e) = (C/2) Σ_{i=1}^{m} (e_i)^2 + (1/2) w^T w
s.t. w^T φ(x_i) + b = y_i − e_i, i = 1, . . . , m,

where φ : R^d → R^{d_h} is the feature mapping function, w ∈ R^{d_h} and b ∈ R are the modelling coefficients, e_i denotes the point-wise regression error, and C > 0 is the regularization hyperparameter. In LSSVMs, the Lagrangian dual problem gives a linear system, instead of the quadratic programming in classic SVMs, making certain problems more tractable.
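The linear system referred to here is the standard LSSVM dual: with kernel matrix Ω (Ω_ij = k(x_i, x_j)), one solves [[0, 1^T], [1, Ω + I/C]] [b; α] = [0; y]. A minimal NumPy sketch of this well-known system (ours; the RBF kernel is an arbitrary choice):

import numpy as np

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_fit(X, y, C=10.0, kernel=rbf):
    m = len(y)
    M = np.zeros((m + 1, m + 1))
    M[0, 1:] = 1.0                      # first row/column: bias constraint
    M[1:, 0] = 1.0
    M[1:, 1:] = kernel(X, X) + np.eye(m) / C
    sol = np.linalg.solve(M, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]              # bias b, dual variables alpha

def lssvm_predict(Xtest, X, b, alpha, kernel=rbf):
    return kernel(Xtest, X) @ alpha + b

# toy usage
X = np.random.randn(40, 3); y = np.sin(X[:, 0]) + 0.1 * np.random.randn(40)
b, alpha = lssvm_fit(X, y)
print(lssvm_predict(X[:5], X, b, alpha))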
Tensorized LSSVMs for MTL
Tensorized Modelling
Assuming T tasks are involved with data {x t i ∈ R dt , y t i ∈ R} mt i=1 , T sets of parameters {w t , b t } T t=1
are thereby required for predictions in MTL. Here we focus on homogeneous attributes with d t = d. Thus, the complete weight matrix is W = [w 1 ; . . . ; w T ]. Instead of using a single index for these T tasks, multiple indices for an efficient and structured representation can be considered to construct a higher-order tensor [13]. In this paper, the weight tensor is constructed as W ∈ R d h ×T1×···T N and we factorize it into CP form for the structural relatedness across different tasks, such that: where L = [l :,1 ; · · · ; l :,K ] ∈ R d h ×K is the shared factor exploiting coupling information across tasks, U n = [u n :,1 , . . . , u n :,K ] ∈ R Tn×K corresponds to the n-th index with u n :,k = [u n 1,k , . . . , u n Tn,k ] . The task-specific coefficient is thus formulated as:
W = Σ_{k=1}^{K} l_{:,k} • u_{:,k}^1 • ··· • u_{:,k}^N = [[L, U^1, . . . , U^N]],   (1)

w_t = Σ_{k=1}^{K} l_{:,k} · u_{t_1,k}^1 ··· u_{t_N,k}^N.   (2)
Each task is now spanned by N indices, i.e., t = t 1 , . . . , t N with t n = 1, . . . , T n , n = 1, . . . , N , so that the total number of tasks is calculated by T = N n=1 T n . Fig. 1 gives a graphical illustration for a third-order case. It is explicit that {l :,1 , · · · , l :,K } learns the coupling information across tasks and is always involved in the prediction for each task. In contrast, the variation of u n tn,: affects a certain group of tasks relating to the index t n . For instance, for n = 1, t 1 = 1, the updating of u 1 1,: affects tasks in {t = 1, . . . , t N |t l = 1, · · · , T l , l = 1}. In other words, the correlations between tasks can be explored by splitting them into different modes (indices) with a high-order tensor, enabling structural captures of dependencies from multiple modes than using a single mode. In this way, CP rank K indicates the number of latent shared features l :,k in this representation. With the imposed low CP rank, the learned coefficients can be more compact in gaining informative modelling.
Then, our tensorized LSSVM for MTL regression, i.e., tLSSVM-MTL, is constructed in the primal form as:
min_{L, U^n, b_t, e_i^t} (C/2) Σ_{t=1}^{T} Σ_{i=1}^{m_t} (e_i^t)^2 + (1/2) tr(LL^T) + (1/2) Σ_{n=1}^{N} tr(U^n (U^n)^T)
s.t. (Σ_{k=1}^{K} l_{:,k} · u_{t_1,k}^1 ··· u_{t_N,k}^N)^T φ(x_i^t) + b_t = y_i^t − e_i^t, t = ⟨t_1, . . . , t_N⟩.   (3)
With the constructed tensor and the deployed factorization, our proposed tLSSVM-MTL extends existing LSSVMs to deal with multiple tasks referenced by multiple indices; the low CP rank factorization explicitly attains the shared factor L, which seeks common information, and the factors U^n, which maintain task-specific information, and together they boost the overall performance of all tasks.
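In code, recovering the coefficient vector of a task from the shared and task-specific factors as in (2) is a one-liner; a small sketch under our own naming:

import numpy as np

def task_weight(L, U_list, t_idx):
    # w_t = sum_k l_{:,k} * u^1_{t_1,k} * ... * u^N_{t_N,k}   (Eq. (2))
    # L: (d_h, K) shared factor; U_list: list of (T_n, K) factors;
    # t_idx: 0-based multi-index (t_1, ..., t_N) of the task.
    coeff = np.ones(L.shape[1])
    for U, tn in zip(U_list, t_idx):
        coeff = coeff * U[tn, :]    # elementwise product over the K terms
    return L @ coeff

# toy usage: T1 x T2 x T3 = 3 x 4 x 5 tasks, d_h = 10, K = 3
L = np.random.randn(10, 3)
Us = [np.random.randn(T, 3) for T in (3, 4, 5)]
print(task_weight(L, Us, (0, 2, 4)).shape)   # (10,)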
Optimization Algorithm
In (3), the product operations between the shared L and the task-specific U 1 , . . . , U N result in nonconvexity, but can be decoupled by block coordinate descents. We thus design an alternating updating strategy to optimize each factor iteratively, where each subproblem successfully degenerates to be convex by solving a linear system with Lagrangian duality.
1) Step L, b_t, e_i^t with fixed U^n. The primal problem with respect to L, b_t, e_i^t is given by
min_{L, b_t, e_i^t} (C/2) Σ_{t=1}^{T} Σ_{i=1}^{m_t} (e_i^t)^2 + (1/2) tr(LL^T)
s.t. (Σ_{k=1}^{K} l_{:,k} · u_{t,k})^T φ(x_i^t) + b_t = y_i^t − e_i^t,
where u_{t,k} := u_{t_1,k}^1 ··· u_{t_N,k}^N for t = ⟨t_1, . . . , t_N⟩, t_n = 1, . . . , T_n. With dual variables α_i^t ∈ R corresponding to each equality constraint, the Lagrangian function is obtained as
L(L, b_t, e_i^t) = (C/2) Σ_{t=1}^{T} Σ_{i=1}^{m_t} (e_i^t)^2 + (1/2) tr(LL^T) − Σ_{t=1}^{T} Σ_{i=1}^{m_t} α_i^t ((Lu_t)^T φ(x_i^t) + b_t − y_i^t + e_i^t),
with u_t := [u_{t,1}, . . . , u_{t,K}]^T ∈ R^K. Then, stationary point conditions are obtained as
∂L/∂L = 0 ⟹ L = Σ_{t=1}^{T} Σ_{i=1}^{m_t} α_i^t φ(x_i^t) u_t^T,
∂L/∂b = 0 ⟹ A^T α = 0, with b = [b_1, . . . , b_T]^T,
∂L/∂e = 0 ⟹ Ce = α,
∂L/∂α = 0 ⟹ Φw + Ab = y − e,

where A = blockdiag(1_{m_1}, · · · , 1_{m_T}) ∈ R^{m×T}, w = [(Lu_1)^T, · · · , (Lu_T)^T]^T ∈ R^{T d_h}, the task-specific feature mapping matrix is Φ_t = [φ(x_1^t), . . . , φ(x_{m_t}^t)]^T ∈ R^{m_t×d_h}, and Φ = blockdiag(Φ_1, · · · , Φ_T) ∈ R^{m×T d_h} for all T tasks. All outputs, regression errors, and dual variables are denoted as y = [y_1^1, y_2^1, . . . , y_{m_T}^T]^T ∈ R^m, e = [e_1^1, e_2^1, . . . , e_{m_T}^T]^T ∈ R^m, and α = [α_1^1, α_2^1, . . . , α_{m_T}^T]^T ∈ R^m, respectively. By eliminating L and e_i^t, a linear system is attained as:
[[0_{T×T}, A^T], [A, Q + (1/C) I_{m×m}]] [b; α] = [0_T; y],   (4)
where Q ∈ R^{m×m} is computed from the components of the tensor W and the kernel function k : R^d × R^d → R induced by φ(·), such that Q(j, j′) = ⟨u_t, u_q⟩ k(x_i^t, x_p^q), with j = Σ_{r=1}^{t−1} m_r + i, j′ = Σ_{r=1}^{q−1} m_r + p, i = 1, · · · , m_t, p = 1, · · · , m_q, where i, p index the samples in the involved tasks t and q, respectively. With the solution of the dual variables in (4), i.e., α̂, we can get the updated L = Σ_{t=1}^{T} Σ_{i=1}^{m_t} α̂_i^t φ(x_i^t) u_t^T.
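A compact NumPy sketch of this first subproblem (ours; the linear feature map φ(x) = x is assumed, and the names solve_step_L, task_of, and task_index are our own): given the stacked inputs, the task id of each sample, and the current factors, it assembles Q, solves system (4), and rebuilds L.

import numpy as np
import itertools

def solve_step_L(X, y, task_of, U_list, task_index, C):
    # X: (m, d) stacked inputs; y: (m,) targets; task_of: (m,) task id of
    # each sample; task_index maps a task id t to its multi-index.
    m, T = len(y), len(task_index)
    U = np.ones((T, len(U_list[0][0])))          # u_t vectors, shape (T, K)
    for n, Un in enumerate(U_list):
        U *= Un[[idx[n] for idx in task_index], :]
    Q = (X @ X.T) * (U @ U.T)[np.ix_(task_of, task_of)]   # Q(j, j') in (4)
    A = np.zeros((m, T)); A[np.arange(m), task_of] = 1.0
    M = np.zeros((T + m, T + m))
    M[:T, T:] = A.T
    M[T:, :T] = A
    M[T:, T:] = Q + np.eye(m) / C
    sol = np.linalg.solve(M, np.concatenate((np.zeros(T), y)))
    b, alpha = sol[:T], sol[T:]
    L = (X * alpha[:, None]).T @ U[task_of, :]   # L = sum_i alpha_i x_i u_t^T
    return L, b, alpha

# toy usage: 2 x 3 = 6 tasks, 5 samples each, d = 4, K = 3
task_index = list(itertools.product(range(2), range(3)))
U_list = [np.random.randn(2, 3), np.random.randn(3, 3)]
task_of = np.repeat(np.arange(6), 5)
X = np.random.randn(30, 4); y = np.random.randn(30)
L, b, alpha = solve_step_L(X, y, task_of, U_list, task_index, C=10.0)
print(L.shape, b.shape)   # (4, 3) (6,)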
2) Step U^n, b_t, e_i^t with fixed L. With fixed L, we alternate to optimize U^n, b_t, e_i^t. The corresponding primal problem is:

min_{u_{t_n,:}^n, b_t, e_i^t} (C/2) Σ_{t∈S_{t_n}} Σ_{i=1}^{m_t} (e_i^t)^2 + (1/2) ‖u_{t_n,:}^n‖_2^2
s.t. (u_{t_n,:}^n)^T z_i^t + b_t = y_i^t − e_i^t,

where z_i^t is calculated by (L^T φ(x_i^t)) ⊙ u_{t_1,:}^1 ⊙ ··· ⊙ u_{t_{n−1},:}^{n−1} ⊙ u_{t_{n+1},:}^{n+1} ⊙ ··· ⊙ u_{t_N,:}^N ∈ R^K (with ⊙ the elementwise product), and the involved tasks t are contained in the index set S_{t_n} = {⟨t_1, . . . , t_N⟩ | t_l = 1, . . . , T_l, l ≠ n} with cardinality |S_{t_n}| = Π_{l≠n} T_l. With dual variables λ_{t_n}, we have the Lagrangian function:
L(u_{t_n,:}^n, b_t, e_i^t) = (C/2) Σ_{t∈S_{t_n}} Σ_{i=1}^{m_t} (e_i^t)^2 + (1/2) ‖u_{t_n,:}^n‖_2^2 − Σ_{t∈S_{t_n}} Σ_{i=1}^{m_t} λ_i^t ((u_{t_n,:}^n)^T z_i^t + b_t − y_i^t + e_i^t),
where λ_{t_n} = {λ_i^t | t ∈ S_{t_n}, i = 1, . . . , m_t} ∈ R^{M_{t_n}} collects the dual variables of the constraints involved in optimizing u_{t_n,:}^n. Similarly, by deriving the stationary conditions and eliminating u_{t_n,:}^n and e_i^t therein, we get the linear system:

[[0_{|S_{t_n}|×|S_{t_n}|}, A_{t_n}^T], [A_{t_n}, Q_{t_n} + (1/C) I_{M_{t_n}}]] [b_{t_n}; λ_{t_n}] = [0_{|S_{t_n}|}; y_{t_n}],   (5)
where A_{t_n} = blockdiag(1_{m_t}) ∈ R^{M_{t_n}×|S_{t_n}|} with t ∈ S_{t_n}, and y_{t_n}, λ_{t_n} ∈ R^{M_{t_n}} and b_{t_n} ∈ R^{|S_{t_n}|} are vectors collecting the y_i^t, λ_i^t, and b_t involved in the equality constraints, respectively. Here, the matrix Q_{t_n} ∈ R^{M_{t_n}×M_{t_n}} is computed by Q_{t_n}(j, j′) = ⟨z_i^t, z_p^q⟩, where t, q ∈ S_{t_n}, i = 1, . . . , m_t, p = 1, · · · , m_q. The proposed alternating algorithm gives the final solutions after convergence. In this paper, we set the convergence condition for the factors U^n as Σ_n ‖U_{k+1}^n − U_k^n‖_F^2 / ‖U_k^n‖_F^2 < 10^{−3}. After optimization, the prediction for any given input x of the t-th task is obtained either with
• the expression 1) using the explicit feature map φ(·):
f_t(x) = (Lu_t)^T φ(x) + b_t,   (6)
• the expression 2) using kernel function k(·, ·):
f_t(x) = Σ_{q=1}^{T} Σ_{p=1}^{m_q} λ_p^q k(x, x_p^q) ⟨u_t, u_q⟩ + b_t.   (7)
Note that expression 1) is the primal representation, while expression 2) is not strictly the dual representation, due to the existence of parameters u t , u q in the primal. This is because the optimization algorithm alternates to update different factors of the tensor and the resulting Lagrangian dual forms correspond to each subproblem during iterations, not to the original nonconvex problem (3). Nonetheless, the problem can be efficiently resolved by sets of linear systems, and both expressions 1) and 2) consider correlations across tasks and task-specific information.
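For completeness, a sketch (ours) of the kernel-form prediction (7) and of the stopping rule above; here lam stacks the dual variables over all samples, Uinner = [⟨u_t, u_q⟩], and any kernel with the matrix signature used earlier (e.g., the rbf defined above) can be passed in.

import numpy as np

def predict_task(x, X, task_of, lam, Uinner, b, t, kernel):
    # f_t(x) = sum_{q,p} lam_p^q k(x, x_p^q) <u_t, u_q> + b_t   (Eq. (7))
    # X: (m, d) stacked training inputs; task_of: (m,) task ids;
    # lam: (m,) stacked dual variables; Uinner: (T, T) table of <u_t, u_q>.
    k_vec = kernel(x[None, :], X)[0]
    return float(np.sum(lam * k_vec * Uinner[t, task_of]) + b[t])

def factors_converged(U_new, U_old, tol=1e-3):
    # stopping rule: sum_n ||U^n_{k+1} - U^n_k||_F^2 / ||U^n_k||_F^2 < tol
    rel = sum(np.linalg.norm(Un - Uo) ** 2 / np.linalg.norm(Uo) ** 2
              for Un, Uo in zip(U_new, U_old))
    return rel < tol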
Numerical Experiments
We evaluate the performance of the proposed method on both synthetic and real-world data. Root mean square error (RMSE), Q^2, and the correlation between the prediction ŷ and the ground truth y are measured, where Q^2 is defined as 1 − ‖y − ŷ‖_F^2 / ‖y‖_F^2, and each iterative method is repeated 10 times for an average. Except for RMSE, a higher metric value indicates a better result. There are three hyperparameters to be tuned in our tLSSVM-MTL, i.e., K, C, and the kernel function; the hyperparameters in the compared methods are also tuned, where 5-fold cross-validation is used.
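The three reported metrics are a direct transcription in NumPy:

import numpy as np

def metrics(y, y_hat):
    rmse = float(np.sqrt(np.mean((y - y_hat) ** 2)))
    q2 = float(1.0 - np.sum((y - y_hat) ** 2) / np.sum(y ** 2))
    corr = float(np.corrcoef(y, y_hat)[0, 1])
    return rmse, q2, corr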
1) Simulated data
The simulated dataset is generated as follows: 1) the coefficient tensor is built via the CP form W = [[L, U^1, · · · , U^N]], where each entry is randomly generated from N(0, 1); 2) x_i^t, b_t and the noise e^t are drawn from N(0, 1); 3) the response is y^t = ȳ^t + σe^t with ȳ^t = X^t Σ_{k=1}^{K} (l_k · u_{t_1,k}^1 ··· u_{t_N,k}^N) + b_t 1_{m_t}, where σ is set by the signal-to-noise ratio (SNR). We set d = 100, N = 3, T_1 = 3, T_2 = 4, T_3 = 5 with T = 60 tasks, K = 3, and 60 training samples and 20 test samples for each task. This experiment mainly aims to validate the efficacy of our tensorized tLSSVM-MTL and the optimization behavior of the proposed algorithm; thus, the MTL-LSSVM counterpart is compared. Fig. 2 presents the performance evaluations on simulated data with different SNR levels, showing that the proposed tLSSVM-MTL consistently provides more accurate predictions across varied SNRs, with a slightly larger advantage at higher SNRs. Additionally, we plot the RMSE during the iterative updates of our method, where the RMSE sharply decreases and then converges to a small error. The results of this experiment verify the effectiveness of the proposed method.
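A sketch (ours) of this generator follows; the SNR-to-noise-scale conversion (SNR taken in dB) is our assumption, since the paper does not spell it out.

import numpy as np

def make_synthetic(snr_db, d=100, dims=(3, 4, 5), K=3, m_train=60, m_test=20):
    rng = np.random.default_rng(0)
    L = rng.standard_normal((d, K))
    Us = [rng.standard_normal((Tn, K)) for Tn in dims]
    tasks = []
    for idx in np.ndindex(*dims):                 # all 3*4*5 = 60 tasks
        coeff = np.ones(K)
        for U, tn in zip(Us, idx):
            coeff = coeff * U[tn, :]
        w = L @ coeff                             # task weight, Eq. (2)
        b = rng.standard_normal()
        X = rng.standard_normal((m_train + m_test, d))
        y_clean = X @ w + b
        sigma = np.std(y_clean) / (10 ** (snr_db / 20))   # our SNR convention
        y = y_clean + sigma * rng.standard_normal(len(y_clean))
        tasks.append((X[:m_train], y[:m_train], X[m_train:], y[m_train:]))
    return tasks

print(len(make_synthetic(10)))   # 60 tasks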
2) Real-world Data
Table 1 presents the prediction results of MTL-LSSVM, MLMTL-C, MLMTL-NC, and the proposed tLSSVM-MTL with both linear and RBF kernels, where the best results are in bold. The results show that our proposed method substantially improves the prediction accuracy in terms of all considered metrics. Our advantages are more prominent for the Restaurant & Consumer and CCDS datasets with RBF kernels, particularly on the Q^2 and Correlation metrics, which achieve significant improvements. In fact, these two datasets contain larger numbers of tasks, i.e., T = 414 and T = 35, and the multiple indices used are endowed with specific meanings in their real-world applications, enabling our model to learn the underlying structural information well.

In Table 1, we also compare the CPU time. We can see that the existing matrix-based MTL-LSSVM and MLMTL-C run faster, due to their convexity benefiting a simple optimization. When comparing with the nonconvex tensor-based MLMTL-NC, our method is more efficient, particularly for the Student Performance dataset, still showing the promising potential of our tensorized model and the designed iterative updates. Nevertheless, more efficient computations can be expected with further investigations.
Conclusion
In this paper, we proposed a novel method for MTL regression, which can be regarded as a tensorized generalization and also a multimodal extension of multitask LSSVMs. The proposed method considers multitasks with different indices in the constructed coefficient tensor, which is factorized with low CP rank into a common factor and taskspecific factors. In the proposed method, both multilinear and nonlinearity can be flexibly modelled either through feature mappings or kernel functions. In optimization, an alternating strategy is derived to update these factors by solving linear programming subproblems with Lagrangian duality. Experimental results on simulated and real-world data show our great potentials over the compared relevant methods. In future, different tensorization techniques and faster computations are promising to be extended to wider ranges of tasks.
Figure 1: An illustration of our tensorized representations.
Figure 2: Performance on simulated data with different SNRs.
Three datasets for MTL are employed: Restaurant & Consumer [16], Student performance², and Comprehensive Climate (CCDS). The Restaurant & Consumer dataset contains the rating scores of 138 consumers for different restaurants in 3 aspects, leading to 138 × 3 regression tasks. The Student performance dataset contains student grades in 3 periods and other attributes like sex and age, where we build 3 × 2 regression tasks by separating the data according to sex and grade period. The Comprehensive Climate Dataset (CCDS) gives monthly climate records of 17 variables in North America from 1990 to 2001 [11], where we select 5 locations and construct 5 × 17 regression tasks. MTL-LSSVM [19] and two tensor-based methods, i.e., Convex and Nonconvex Multilinear MTL (MLMTL-C and MLMTL-NC) [13], are compared.
Table 1: Performance comparison on real-world datasets.

Restaurant & Consumer
Metric            RMSE    Q^2       Correlation   CPU Time
MTL-LSSVM         0.65    41.83%    62.54%        0.45
MTL-LSSVM-rbf     0.65    41.90%    62.55%        0.51
MLMTL-C           0.65    40.42%    61.31%        0.45
MLMTL-NC          0.74    18.61%    56.12%        41.10
tLSSVM-MTL        0.61    45.41%    67.03%        22.86
tLSSVM-MTL-rbf    0.59    49.13%    69.54%        19.36

Student Performance
Metric            RMSE    Q^2       Correlation   CPU Time
MTL-LSSVM         2.99    93.55%    44.66%        0.03
MTL-LSSVM-rbf     2.49    95.56%    67.49%        0.04
MLMTL-C           3.11    93.03%    36.45%        3.21
MLMTL-NC          3.34    91.96%    21.51%        19.10
tLSSVM-MTL        2.99    93.54%    45.79%        0.72
tLSSVM-MTL-rbf    2.44    95.73%    68.59%        0.41

CCDS
Metric            RMSE    Q^2       Correlation   CPU Time
MTL-LSSVM         0.79    29.71%    55.50%        1.08
MTL-LSSVM-rbf     0.70    46.70%    68.36%        1.50
MLMTL-C           0.76    34.56%    58.79%        5.31
MLMTL-NC          0.83    24.04%    50.02%        29.44
tLSSVM-MTL        0.78    32.64%    58.03%        24.07
tLSSVM-MTL-rbf    0.65    54.50%    74.49%        22.01
https://archive.ics.uci.edu/ml/datasets/Student+Performance
References
[1] A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243-272, 2008.
[2] J. D. Carroll and J. Chang. Analysis of individual differences in multidimensional scaling via an n-way generalization of "Eckart-Young" decomposition. Psychometrika, 35(3):283-319, 1970.
[3] R. Caruana. Multitask learning. Machine Learning, 28(1):41-75, 1997.
[4] C. Cortes and V. Vapnik. Support-vector networks. Machine Learning, 20(3):273-297, 1995.
[5] T. Evgeniou, C. A. Micchelli, and M. Pontil. Learning multiple tasks with kernel methods. Journal of Machine Learning Research, 6:615-637, 2005.
[6] T. Evgeniou and M. Pontil. Regularized multi-task learning. In the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 109-117, 2004.
[7] L. Liang, F. Cai, and V. Cherkassky. Predictive learning with structured (grouped) data. Neural Networks, 22(5-6):766-773, 2009.
[8] L. Liang and V. Cherkassky. Connection between SVM+ and multi-task learning. In 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), pages 2048-2054. IEEE, 2008.
[9] J. Liu, C. Zhu, Z. Long, and Y. Liu. Tensor regression. Foundations and Trends in Machine Learning, 14(4):379-565, 2021.
[10] Y. Liu, J. Liu, Z. Long, and C. Zhu. Tensor Computation for Data Analysis. Springer, 2022.
[11] A. C. Lozano, H. Li, A. Niculescu-Mizil, Y. Liu, C. Perlich, J. Hosking, and N. Abe. Spatial-temporal causal modeling for climate change attribution. In the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 587-596, 2009.
[12] A. Phan, P. Tichavský, K. Sobolev, K. Sozykin, D. Ermilov, and A. Cichocki. Canonical polyadic tensor decomposition with low-rank factor matrices. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4690-4694. IEEE, 2021.
[13] B. Romera-Paredes, H. Aung, N. Bianchi-Berthouze, and M. Pontil. Multilinear multitask learning. In the International Conference on Machine Learning, pages 1444-1452, 2013.
[14] H. Shiao and V. Cherkassky. Implementation and comparison of SVM-based multi-task learning methods. In The 2012 International Joint Conference on Neural Networks (IJCNN), pages 1-7. IEEE, 2012.
[15] J. A. K. Suykens and J. Vandewalle. Least squares support vector machine classifiers. Neural Processing Letters, 9(3):293-300, 1999.
[16] B. Vargas-Govea, G. González-Serna, and R. Ponce-Medellín. Effects of relevant contextual features in the performance of a restaurant recommender system. ACM RecSys, 11(592):56, 2011.
[17] K. Wimalawarne, M. Sugiyama, and R. Tomioka. Multitask learning meets tensor factorization: task imputation via convex optimization. Advances in Neural Information Processing Systems, 27, 2014.
[18] J. Xu, J. Zhou, P. Tan, X. Liu, and L. Luo. Spatio-temporal multi-task learning via tensor decomposition. IEEE Transactions on Knowledge and Data Engineering, 33(6):2764-2775, 2019.
[19] S. Xu, X. An, X. Qiao, and L. Zhu. Multi-task least-squares support vector machines. Multimedia Tools and Applications, 71(2):699-715, 2014.
[20] Z. Xu and K. Kersting. Multi-task learning with task relations. In the IEEE International Conference on Data Mining, pages 884-893, 2011.
[21] Y. Yang and T. Hospedales. Deep multi-task representation learning: A tensor factorisation approach. In the International Conference on Learning Representations, 2017.
[22] Y. Zhang, Y. Zhang, and W. Wang. Deep multi-task learning via generalized tensor trace norm. arXiv preprint arXiv:2002.04799, 2020.
[23] Z. Zhang, Y. Xie, W. Zhang, Y. Tang, and Q. Tian. Tensor multi-task learning for person re-identification. IEEE Transactions on Image Processing, 29:2463-2477, 2019.
[24] Q. Zhao, X. Rui, Z. Han, and D. Meng. Multilinear multitask learning by rank-product regularization. IEEE Transactions on Neural Networks and Learning Systems, 31(4):1336-1350, 2020.
[25] Q. Zheng, Y. Wang, and P. Heng. Multitask feature learning meets robust tensor decomposition for EEG classification. IEEE Transactions on Cybernetics, 51(4):2242-2252, 2019.
| [] |
[
"QUANTUM-INSPIRED TENSOR NEURAL NETWORKS FOR OPTION PRICING A PREPRINT *",
"QUANTUM-INSPIRED TENSOR NEURAL NETWORKS FOR OPTION PRICING A PREPRINT *"
] | [
"Raj G Patel \nCentre for Social Innovation\nMultiverse Computing\n192 Spadina Ave, Suite 509M5T 2C2TorontoCanada\n\nUniversity of Toronto\nM5S 2E4TorontoOntarioCanada\n",
"Chia-Wei Hsing \nMultiverse Computing\nPaseo de Miramón 17020014San SebastiánSpain\n",
"Serkan Ş Ahin \nMultiverse Computing\nPaseo de Miramón 17020014San SebastiánSpain\n",
"Samuel Palmer \nCentre for Social Innovation\nMultiverse Computing\n192 Spadina Ave, Suite 509M5T 2C2TorontoCanada\n",
"Saeed S Jahromi \nMultiverse Computing\nPaseo de Miramón 17020014San SebastiánSpain\n\nDonostia International Physics Center\nPaseo Manuel de Lardizabal 4E-20018San SebastiánSpain\n",
"Shivam Sharma \nMultiverse Computing\nPaseo de Miramón 17020014San SebastiánSpain\n",
"Tomas Dominguez \nCentre for Social Innovation\nMultiverse Computing\n192 Spadina Ave, Suite 509M5T 2C2TorontoCanada\n\nUniversity of Toronto\nM5S 2E4TorontoOntarioCanada\n",
"Kris Tziritas \nCentre for Social Innovation\nMultiverse Computing\n192 Spadina Ave, Suite 509M5T 2C2TorontoCanada\n",
"Christophe Michel \nCrédit Agricole\n12, Place des Etats\n\nUnis -CS\n70052 -92547Montrouge CedexFrance\n",
"Vincent Porte \nCrédit Agricole\n12, Place des Etats\n\nUnis -CS\n70052 -92547Montrouge CedexFrance\n",
"Mustafa Abid \nCrédit Agricole\n12, Place des Etats\n\nUnis -CS\n70052 -92547Montrouge CedexFrance\n",
"Stéphane Aubert \nCrédit Agricole\n12, Place des Etats\n\nUnis -CS\n70052 -92547Montrouge CedexFrance\n",
"Pierre Castellani \nCrédit Agricole\n12, Place des Etats\n\nUnis -CS\n70052 -92547Montrouge CedexFrance\n",
"Samuel Mugel \nCentre for Social Innovation\nMultiverse Computing\n192 Spadina Ave, Suite 509M5T 2C2TorontoCanada\n",
"Román Orús \nMultiverse Computing\nPaseo de Miramón 17020014San SebastiánSpain\n\nDonostia International Physics Center\nPaseo Manuel de Lardizabal 4E-20018San SebastiánSpain\n\nIkerbasque Foundation for Science\nMaria Diaz de Haro 3E-48013BilbaoSpain\n"
] | [
"Centre for Social Innovation\nMultiverse Computing\n192 Spadina Ave, Suite 509M5T 2C2TorontoCanada",
"University of Toronto\nM5S 2E4TorontoOntarioCanada",
"Multiverse Computing\nPaseo de Miramón 17020014San SebastiánSpain",
"Multiverse Computing\nPaseo de Miramón 17020014San SebastiánSpain",
"Centre for Social Innovation\nMultiverse Computing\n192 Spadina Ave, Suite 509M5T 2C2TorontoCanada",
"Multiverse Computing\nPaseo de Miramón 17020014San SebastiánSpain",
"Donostia International Physics Center\nPaseo Manuel de Lardizabal 4E-20018San SebastiánSpain",
"Multiverse Computing\nPaseo de Miramón 17020014San SebastiánSpain",
"Centre for Social Innovation\nMultiverse Computing\n192 Spadina Ave, Suite 509M5T 2C2TorontoCanada",
"University of Toronto\nM5S 2E4TorontoOntarioCanada",
"Centre for Social Innovation\nMultiverse Computing\n192 Spadina Ave, Suite 509M5T 2C2TorontoCanada",
"Crédit Agricole\n12, Place des Etats",
"Unis -CS\n70052 -92547Montrouge CedexFrance",
"Crédit Agricole\n12, Place des Etats",
"Unis -CS\n70052 -92547Montrouge CedexFrance",
"Crédit Agricole\n12, Place des Etats",
"Unis -CS\n70052 -92547Montrouge CedexFrance",
"Crédit Agricole\n12, Place des Etats",
"Unis -CS\n70052 -92547Montrouge CedexFrance",
"Crédit Agricole\n12, Place des Etats",
"Unis -CS\n70052 -92547Montrouge CedexFrance",
"Centre for Social Innovation\nMultiverse Computing\n192 Spadina Ave, Suite 509M5T 2C2TorontoCanada",
"Multiverse Computing\nPaseo de Miramón 17020014San SebastiánSpain",
"Donostia International Physics Center\nPaseo Manuel de Lardizabal 4E-20018San SebastiánSpain",
"Ikerbasque Foundation for Science\nMaria Diaz de Haro 3E-48013BilbaoSpain"
] | [] | Recent advances in deep learning have enabled us to address the curse of dimensionality (COD) by solving problems in higher dimensions. A subset of such approaches of addressing the COD has led us to solving high-dimensional PDEs. This has resulted in opening doors to solving a variety of real-world problems ranging from mathematical finance to stochastic control for industrial applications. Although feasible, these deep learning methods are still constrained by training time and memory. Tackling these shortcomings, Tensor Neural Networks (TNN) demonstrate that they can provide significant parameter savings while attaining the same accuracy as compared to the classical Dense Neural Network (DNN). In addition, we also show how TNN can be trained faster than DNN for the same accuracy. Besides TNN, we also introduce Tensor Network Initializer (TNN Init), a weight initialization scheme that leads to faster convergence with smaller variance for an equivalent parameter count as compared to a DNN. We benchmark TNN and TNN Init by applying them to solve the parabolic PDE associated with the Heston model, which is widely used in financial pricing theory. * | 10.48550/arxiv.2212.14076 | [
"https://export.arxiv.org/pdf/2212.14076v1.pdf"
] | 255,341,119 | 2212.14076 | 87424ecec833142a5d7d05f92f7f00be744fa6c0 |
QUANTUM-INSPIRED TENSOR NEURAL NETWORKS FOR OPTION PRICING A PREPRINT *
January 2, 2023
Raj G Patel
Centre for Social Innovation
Multiverse Computing
192 Spadina Ave, Suite 509M5T 2C2TorontoCanada
University of Toronto
M5S 2E4TorontoOntarioCanada
Chia-Wei Hsing
Multiverse Computing
Paseo de Miramón 17020014San SebastiánSpain
Serkan Şahin
Multiverse Computing
Paseo de Miramón 17020014San SebastiánSpain
Samuel Palmer
Centre for Social Innovation
Multiverse Computing
192 Spadina Ave, Suite 509M5T 2C2TorontoCanada
Saeed S Jahromi
Multiverse Computing
Paseo de Miramón 17020014San SebastiánSpain
Donostia International Physics Center
Paseo Manuel de Lardizabal 4E-20018San SebastiánSpain
Shivam Sharma
Multiverse Computing
Paseo de Miramón 17020014San SebastiánSpain
Tomas Dominguez
Centre for Social Innovation
Multiverse Computing
192 Spadina Ave, Suite 509M5T 2C2TorontoCanada
University of Toronto
M5S 2E4TorontoOntarioCanada
Kris Tziritas
Centre for Social Innovation
Multiverse Computing
192 Spadina Ave, Suite 509M5T 2C2TorontoCanada
Christophe Michel
Crédit Agricole
12, Place des Etats
Unis -CS
70052 -92547Montrouge CedexFrance
Vincent Porte
Crédit Agricole
12, Place des Etats
Unis -CS
70052 -92547Montrouge CedexFrance
Mustafa Abid
Crédit Agricole
12, Place des Etats
Unis -CS
70052 -92547Montrouge CedexFrance
Stéphane Aubert
Crédit Agricole
12, Place des Etats
Unis -CS
70052 -92547Montrouge CedexFrance
Pierre Castellani
Crédit Agricole
12, Place des Etats
Unis -CS
70052 -92547Montrouge CedexFrance
Samuel Mugel
Centre for Social Innovation
Multiverse Computing
192 Spadina Ave, Suite 509M5T 2C2TorontoCanada
Román Orús
Multiverse Computing
Paseo de Miramón 17020014San SebastiánSpain
Donostia International Physics Center
Paseo Manuel de Lardizabal 4E-20018San SebastiánSpain
Ikerbasque Foundation for Science
Maria Diaz de Haro 3E-48013BilbaoSpain
QUANTUM-INSPIRED TENSOR NEURAL NETWORKS FOR OPTION PRICING A PREPRINT *
January 2, 2023
Recent advances in deep learning have enabled us to address the curse of dimensionality (COD) by solving problems in higher dimensions. A subset of such approaches of addressing the COD has led us to solving high-dimensional PDEs. This has resulted in opening doors to solving a variety of real-world problems ranging from mathematical finance to stochastic control for industrial applications. Although feasible, these deep learning methods are still constrained by training time and memory. Tackling these shortcomings, Tensor Neural Networks (TNN) demonstrate that they can provide significant parameter savings while attaining the same accuracy as compared to the classical Dense Neural Network (DNN). In addition, we also show how TNN can be trained faster than DNN for the same accuracy. Besides TNN, we also introduce Tensor Network Initializer (TNN Init), a weight initialization scheme that leads to faster convergence with smaller variance for an equivalent parameter count as compared to a DNN. We benchmark TNN and TNN Init by applying them to solve the parabolic PDE associated with the Heston model, which is widely used in financial pricing theory. *
Introduction
Partial Differential Equations (PDEs) are an indispensable tool for modeling a variety of problems in quantitative finance. Typical approaches for solving such PDEs, which are mostly parabolic, rely on classical mesh-based numerical methods or Monte Carlo approaches. However, scaling these approaches to higher dimensions has always been a challenge because of their dependence on spatio-temporal grids as well as on a large number of sample paths. As an alternative, recent advancements in deep learning, which leverage the compositional structure of neural networks, have enabled us to bypass some of these challenges by approximating the unknown solution using Dense Neural Networks (DNNs) [1,2,3,4]. This has opened up a wide array of possibilities in quantitative finance, where we can now consider all participating assets without making any provisional assumptions on their correlations. As a result, we can now consider a whole basket of assets, thereby allowing us to transform, extend and solve our problem in higher dimensions.
The basic idea of these approaches is to reformulate the high-dimensional PDEs as forward-backward stochastic differential equations (FBSDE) [5]. The solution of the corresponding FBSDE can be written as a deterministic function of time and the state process. Under suitable regularity assumptions, the FBSDE solution can represent the solution of the underlying parabolic PDE. Efficient methods for approximating the FBSDE solution with a DNN have been put forward recently in Refs. [6,7]. However, in spite of their apparent success, DNN approaches for solving PDEs are computationally expensive and limited by memory [8,9,10].
In this paper, we show how to overcome this problem by combining Tensor Networks (TNs) [11] with DNNs. TNs were originally proposed in physics to efficiently describe strongly-correlated structures. At the fundamental level, though, TNs are nothing but efficient descriptions of high-dimensional vectors and operators. Because of this, TNs have emerged as a promising tool in machine learning (ML) and optimization. In particular, TNs have proven successful in ML tasks such as classification [12,13,14,15,16,17,18], generative modeling [19,20,21] and sequence modeling [22]. Following the ideation from Refs. [8,23,24], we transform a DNN into what we call a Tensor Neural Network (TNN). In doing so, we observe enhanced training performance and reduced memory consumption. To validate the improvement, we perform an exhaustive search across all the DNNs with the same number of parameters as the TNN; none of them match the performance of the TNN. Our main test bed for benchmarking is the Heston model, widely used in financial pricing theory. This paper is organized as follows: in Section 2, we briefly review the concept of TNNs and show how one can incorporate them in a Neural Network (NN). In Section 3, we demonstrate how the TNN Init weight initialization scheme works. In Section 4, we then formulate the mathematical problem at hand. In Section 5, we show how a parabolic PDE mapped into SDEs can be solved using Neural Networks. We then present our results for the Heston model in Section 6, showing how TNN and TNN Init outperform DNN. Finally, in Section 7 we present our conclusions.
Tensorizing Neural Networks
A way of tensorizing Neural Networks is to replace the weight matrix of a dense layer by a Tensor Network [11,24]. In particular, we choose an MPO representation [11] of the weight matrix that is analogous to the Tensor-Train format [8], and we call this layer a TN layer. This representation, however, is not unique, and is determined by two additional parameters: the MPO bond dimension and the number of tensors in the MPO. In the simplest case, the MPO may consist of only two tensors, $W_1$ and $W_2$, as shown in Fig. 1. The MPO in the figure has bond dimension $\chi$ and physical dimension $d$ as the input and output dimension. The TN layer with such an MPO can be initialized in the same manner as a weight matrix of a dense layer.

Figure 1: The process of contracting a 2-node MPO and reshaping it into the weight matrix $W$ in each forward pass.
In the forward pass of the TN layer, we first contract the MPO along the bond index and then reshape the resulting rank-4 tensor into a matrix as shown in Fig. 1. This matrix is the corresponding weight matrix of a TN layer. The weight matrix can then be multiplied with the input vector. We apply an activation function to the resulting output vector, thereby finishing the forward pass. The weight matrix takes the form
$$W = \sum_{\alpha=1}^{\chi} \left(A_\alpha \otimes B_\alpha\right), \qquad W \in \mathbb{R}^{d^2 \times d^2}, \qquad (1)$$
where $W_1 = [A_1, A_2, \cdots, A_\chi]$, $A_\alpha \in \mathbb{R}^{d\times d}$ and $W_2 = [B_1, B_2, \cdots, B_\chi]$, $B_\alpha \in \mathbb{R}^{d\times d}$ are the two rank-3 weight tensors connected by a virtual bond $\alpha$ of dimension $\chi$. The resulting weight matrix $W$ is of dimension $d^2 \times d^2$, so it contains $d^4$ elements. Notice that these elements are not independent since they come from the TN structure with $2\chi d^2$ trainable parameters. So, if we initialized the MPO with bond dimension $\chi = d^2/2$, we would have the same number of parameters as a dense layer with $d^2$ neurons. Any choice where $\chi < d^2/2$ will result in a weight matrix $W$ comprising $d^4 - 2\chi d^2$ fewer parameters than the weight matrix of a dense layer, thus allowing for potential parameter savings. In principle, when $\chi = d^2$, we have sufficient degrees of freedom to construct an arbitrary $d^2 \times d^2$ matrix. Thus, we expect that by increasing the bond dimension, the TN layer behaves increasingly similarly to a dense layer, as shown empirically in [24].
The Kronecker products in Eq. (1) imply that there are correlations between the matrix elements of $W$, i.e., each element is a sum of products of elements of the tensors $A$ and $B$. The parameters to be trained are not the matrix elements of the weight matrix, but the elements of the individual tensors of the MPO. This can exhibit interesting training behavior and can lead to faster convergence of the loss function, as we show in Section 6.
By implementing the TN layer in this way with an ML library that supports automatic differentiation, such as TensorFlow or PyTorch, one can optimize the MPO weights in a similar fashion to those of dense layers in a DNN and train the TNN. As an alternative, one could work with an explicit TN layer without contraction of the MPO, including tensor optimizations as in other TN algorithms (e.g., variational), provided one can find a way to decompose the activation function. We observe, however, that for most interesting NN structures we do not actually need this option. Additionally, the TNN structure is not limited to a single TN layer: it can be extended to any number of TN layers or combinations of dense and TN layers. This provides flexibility when designing TNN architectures, which is favorable for the problems of interest.
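As an illustration, the following is a minimal sketch of such a TN layer in PyTorch. The class name `TNLayer`, the tanh activation, and the Gaussian initialization scale are our own choices and are not prescribed by the text; the contraction itself follows Fig. 1 and Eq. (1).

```python
import torch
import torch.nn as nn

class TNLayer(nn.Module):
    """Dense layer whose weight matrix is a 2-node MPO,
    W = sum_alpha A_alpha (x) B_alpha, contracted on the fly."""

    def __init__(self, d: int, chi: int):
        super().__init__()
        # Two rank-3 MPO cores of shape (chi, d, d); the trainable
        # parameter count is 2*chi*d^2 instead of d^4.
        self.A = nn.Parameter(torch.randn(chi, d, d) / d)
        self.B = nn.Parameter(torch.randn(chi, d, d) / d)
        self.d = d

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Contract the bond index alpha, then reshape the rank-4
        # tensor into a (d^2, d^2) weight matrix, as in Fig. 1.
        w4 = torch.einsum('aij,akl->ikjl', self.A, self.B)
        W = w4.reshape(self.d ** 2, self.d ** 2)
        return torch.tanh(x @ W)

# A TN layer with 16 neurons (d = 4) and bond dimension chi = 2.
layer = TNLayer(d=4, chi=2)
y = layer(torch.randn(100, 16))  # batch of 100 input vectors
```

Because autograd differentiates through the einsum contraction, the optimizer updates the MPO cores directly, exactly as described above.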
Tensor Network Initializer
To build upon the structural advantages of TNNs, we propose an alternate framework for initializing the weights. Under this framework, we initialize the weight matrices $W_1$ and $W_2$ similarly to how we initialized them while tensorizing neural networks, as shown in Fig. 1. However, under this scheme, once the MPO is contracted and reshaped into the weight matrix $W$ during the initialization stage, we only use and update the weights in $W$ during the learning process. To verify the empirical advantages of this approach, we show preliminary results in Section 6.1.
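A minimal sketch of this initializer in PyTorch is shown below. The helper name `tn_init_weight` is ours, and the Gaussian core initialization is an assumption; the key point is that the MPO is contracted once and only the resulting dense matrix is trained.

```python
import torch
import torch.nn as nn

def tn_init_weight(d: int, chi: int) -> torch.Tensor:
    """Contract a random 2-node MPO once and return the resulting
    (d^2, d^2) matrix; only this matrix is trained afterwards."""
    A = torch.randn(chi, d, d) / d
    B = torch.randn(chi, d, d) / d
    return torch.einsum('aij,akl->ikjl', A, B).reshape(d ** 2, d ** 2)

# TNN Init(16): an ordinary dense layer whose weights start from
# the contracted MPO instead of, e.g., Glorot uniform.
layer = nn.Linear(16, 16)
with torch.no_grad():
    layer.weight.copy_(tn_init_weight(d=4, chi=2))
# From here on, training updates layer.weight directly.
```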
Problem Formulation
Although Refs. [6,24] have shown results for solving the Black-Scholes-Barenblatt model with a squared payoff, one of the fundamental drawbacks of using the Black-Scholes-Barenblatt model is that it assumes the volatility of asset prices remains constant over time; however, it appears that volatility itself is a stochastic process. To overcome this issue, one of the widely adopted alternatives to the Black-Scholes model is a stochastic volatility model known as the Heston model.
Under this model, we aim to solve the following PDE:
$$\frac{\partial u}{\partial t} + \Big(r - \frac{1}{2}v\Big)\frac{\partial u}{\partial x} + \kappa(\theta - v)\frac{\partial u}{\partial v} + \frac{1}{2}v\frac{\partial^2 u}{\partial x^2} + \frac{1}{2}\eta^2 v\frac{\partial^2 u}{\partial v^2} + \rho\eta v\frac{\partial^2 u}{\partial x\,\partial v} = ru, \qquad u(T,x,v) = \varphi(x,v), \qquad (2)$$

where $\kappa, \theta, \eta > 0$ are positive constants, $r \in \mathbb{R}$ is an interest rate, $v$ is the variance factor of the asset and $u$ is the price process of the contingent claim with a payoff function $\varphi$ at time $T > 0$.
Using the Feynman-Kac formalism, this can be recast into a system of forward-backward stochastic differential equations. For the forward system, we have two one-dimensional $\mathbb{Q}$-Brownian motions $W^S$ and $W^v$ with correlation $\rho$, and an asset price process $S^c = (S^c_t)_{t\ge 0}$ which satisfies the stochastic differential equations for the Heston model

$$dS^c_t = rS^c_t\,dt + S^c_t\sqrt{v^c_t}\,dW^S_t, \qquad dv^c_t = \kappa(\theta - v^c_t)\,dt + \eta\sqrt{v^c_t}\,dW^v_t. \qquad (3)$$

Here, we add a $c$ as the superscript to denote the continuous nature of the equation, which we later drop when we perform discretization. In this model, the volatility of the asset $S$ is given by the stochastic process $(\sqrt{v^c_t})_{t\ge 0}$. Knowing that the underlying is a positive stochastic process, we introduce the semi-martingale $X^c = \log(S^c)$. Through a simple application of Ito's lemma we find that

$$dX^c_t = \sqrt{v^c_t}\,dW^S_t + \Big(r - \frac{1}{2}v^c_t\Big)dt. \qquad (4)$$
Comparing Eq. 4 and Eq. 3, we immediately observe the advantage of working with $X$ as opposed to $S$. The absence of the state variable $X$ on the right-hand side of the SDE describing its evolution leads us to fewer numerical errors than directly simulating $S$.
Following this, we consider a partition $\{t_n\}_{n=0}^{N_{dt}}$ of the interval $[0,T]$ with $\Delta t_n = t_{n+1} - t_n$, and we introduce a $\mathbb{Q}$-Brownian motion $Z$ independent of $W^X$ with $W^v_t \overset{d}{=} \rho W^X_t + \sqrt{1-\rho^2}\,Z_t$ under $\mathbb{Q}$ for each $t > 0$. If we slightly abuse notation and write $Y_n = Y_{t_n}$ for any continuous stochastic process $Y$ on $[0,T]$, then a simple application of the Euler discretization yields

$$X_{n+1} = X_n + \sqrt{v_n}\,\Delta W^X_n + \Big(r - \frac{1}{2}v_n\Big)\Delta t_n, \qquad v_{n+1} = v_n + \kappa(\theta - v_n)\Delta t_n + \eta\sqrt{v_n}\,\Delta W^v_n,$$

where $\Delta W^X_n = W^X_{n+1} - W^X_n$, $\Delta W^v_n = W^v_{n+1} - W^v_n$ and $\Delta Z_n = Z_{n+1} - Z_n$.
Given that we now use discretized versions of Eq. 3 and Eq. 4, we drop the $c$ indicating the continuous nature of the evolution. To overcome the issue of having an ill-defined square root of $v_n$, we truncate $v_n$ when it becomes negative, leading us to

$$X_{n+1} = X_n + \sqrt{v^+_n}\,\Delta W^X_n + \Big(r - \frac{1}{2}v^+_n\Big)\Delta t_n, \qquad v_{n+1} = v_n + \kappa(\theta - v^+_n)\Delta t_n + \eta\sqrt{v^+_n}\,\Delta W^v_n, \qquad (5)$$
where $v^+_n := \max\{v_n, 0\}$. Once we have the simulations, we use Feynman-Kac's formalism and Ito's Lemma to get the backward SDE of the option, which is as follows:

$$du^c_t = ru^c_t\,dt + \begin{pmatrix} \frac{\partial u^c_t}{\partial X^c_t} & \frac{\partial u^c_t}{\partial v^c_t} \end{pmatrix} \begin{pmatrix} \sqrt{v^c_t} & 0 \\ \eta\rho\sqrt{v^c_t} & \eta\sqrt{1-\rho^2}\sqrt{v^c_t} \end{pmatrix} \begin{pmatrix} dW^X_t \\ dZ_t \end{pmatrix} \qquad (6)$$
subject to the terminal condition $u^c_T = \varphi(X^c_T, v^c_T)$, where $c$ refers to the continuous nature of Eq. 6. Specifically, to price a European call option with strike $K$, we would simulate Eq. 6 with terminal condition $\varphi(X_T, v_T) = \max(e^{X_T} - K, 0)$. Similarly, to price a European put option with strike $K$, we would simulate Eq. 6 with terminal condition $\varphi(X_T, v_T) = \max(K - e^{X_T}, 0)$. To benchmark our approach, we use an analytical solution obtained through Fourier transformation techniques presented in [25]. This Fourier representation depends on the value $u_\omega(t, X_t, v_t)$ of the contingent claim paying $\varphi(X_T, v_T) = e^{i\omega X_T}$ for $\omega \in \mathbb{R}$, which may be computed using approaches presented in [26,27,28] to be

$$u_\omega(t, X_t, v_t) = e^{A_\omega(t) + B_\omega(t)v_t + C_\omega(t)X_t},$$

for

$$A_\omega(t) = \kappa\theta\left[r_-(T-t) - \frac{2}{\eta^2}\log\!\left(\frac{1 - r_- r_+^{-1} e^{-\gamma(T-t)}}{1 - r_- r_+^{-1}}\right)\right], \qquad B_\omega(t) = r_-\,\frac{1 - e^{-\gamma(T-t)}}{1 - r_- r_+^{-1} e^{-\gamma(T-t)}}, \qquad C_\omega(t) = i\omega,$$
where $r_\pm := \frac{\beta \pm \gamma}{\eta^2}$, $\alpha := \frac{1}{2}(\omega^2 + i\omega)$, $\beta := \kappa - \rho\eta i\omega$ and $\gamma := \sqrt{\beta^2 + 2\eta^2\alpha}$. The primary reason for selecting this problem was to benchmark our approach and test its robustness against a model with an analytical solution. However, the utility of our approach would be more evident for models where performing the integrals to obtain $u_\omega$ is not trivial, and the numerical approaches would succumb to a range of difficulties. Under such circumstances, our approach is indeed robust against varying model complexities, thereby equipping us with an interesting tool to price more complex and exotic products reliably.
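The truncated Euler scheme of Eq. (5) is straightforward to vectorize. The NumPy sketch below uses the base parameters of Sec. 6; the maturity T = 1 and the path counts are placeholder choices of ours, not values fixed by the text.

```python
import numpy as np

def heston_paths(M, N, T, x0, v0, r, kappa, theta, eta, rho, seed=0):
    """Simulate M paths of (X, v) with the truncated Euler scheme of
    Eq. (5); X = log S, and v is floored at zero inside the drift and
    diffusion terms."""
    rng = np.random.default_rng(seed)
    dt = T / N
    X = np.full((M, N + 1), float(x0))
    v = np.full((M, N + 1), float(v0))
    dWX = rng.normal(0.0, np.sqrt(dt), (M, N))
    dZ = rng.normal(0.0, np.sqrt(dt), (M, N))
    for n in range(N):
        vp = np.maximum(v[:, n], 0.0)                     # v_n^+
        dWv = rho * dWX[:, n] + np.sqrt(1 - rho**2) * dZ[:, n]
        X[:, n + 1] = X[:, n] + np.sqrt(vp) * dWX[:, n] + (r - 0.5 * vp) * dt
        v[:, n + 1] = v[:, n] + kappa * (theta - vp) * dt + eta * np.sqrt(vp) * dWv
    return X, v

# Base parameters from Sec. 6: S0 = 1, sqrt(v0) = 20%, kappa = 3,
# sqrt(theta) = 40%, eta = 1, rho = -0.5, r = 0; T = 1 is assumed.
X, v = heston_paths(M=100, N=500, T=1.0, x0=np.log(1.0), v0=0.04,
                    r=0.0, kappa=3.0, theta=0.16, eta=1.0, rho=-0.5)
```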
Neural Networks for PDE
To begin with, we use an Euler discretization with $N$ time steps to simulate $M$ paths $(X^j, v^j)_{j\le M}$ of $(X, v)$. Each path is a vector of size $N$,

$$(X^j, v^j) = (X^j_i, v^j_i)_{i\le N}, \qquad (7)$$

with $(X^j_i, v^j_i)$ corresponding to the value of $(X, v)$ at the $i$-th point, $t_i$, in the partition of the interval $[0,T]$. The simulation step has resulted in $NM$ vectors of triples $(X^j_i, v^j_i, t_i)$. These will be the inputs of a Neural Network whose task is to learn the value function

$$u_t = \mathbb{E}\left[e^{-r(T-t)}\,(e^{X_T} - K)^+ \,\middle|\, \mathcal{F}_t\right] \qquad (8)$$

at each point $t_i \in [0,T]$, under the base set of parameters as specified in Sec. 6. The Neural Network outputs $M$ paths

$$u^j(\theta) = \big(u^j_i(\theta)\big)_{i\le N}, \qquad (9)$$

with $u^j_i(\theta)$ being an estimator for $u_i$. Following this, the output $u^j$ of the Neural Network can be used to estimate the gradient

$$\nabla_{(X,v)}\, u^j \qquad (10)$$
by automatic differentiation.
We can now use the estimate (10) of the derivative term obtained from the Neural Network to obtain another estimate for the value process $u$. Indeed, an Euler discretization motivates the definition of the estimate $\tilde u^j(\theta) = (\tilde u^j_i(\theta))_{i\le N}$ with $\tilde u^j_0(\theta) = u^j_0(\theta)$ and

$$\tilde u^j_{i+1} = (1 + r\Delta t)\,\tilde u^j_i + \begin{pmatrix} \frac{\partial u^j_i}{\partial X_i} & \frac{\partial u^j_i}{\partial v_i} \end{pmatrix} \begin{pmatrix} \sqrt{v_i} & 0 \\ \eta\rho\sqrt{v_i} & \eta\sqrt{1-\rho^2}\sqrt{v_i} \end{pmatrix} \begin{pmatrix} \Delta W^X_i \\ \Delta Z_i \end{pmatrix} \qquad (11)$$

for $1 \le i \le N$. For each batch, we have now defined two estimators, $u^j(\theta)$ and $\tilde u^j(\theta)$, for the value process at points in the partition of the interval $[0,T]$.

We train the Neural Network by comparing these two estimators to one another, and also by comparing their terminal conditions to the one found using the payoff function $\varphi$ and the simulation of $(X, v)$. In other words, we strive to minimize the loss

$$\mathcal{L}(\theta) = \sum_{j=1}^{M}\sum_{i=1}^{N}\big|\tilde u^j_i(\theta) - u^j_i(\theta)\big|^2 + \sum_{j=1}^{M}\big|u^j_N(\theta) - \varphi(X^j_N)\big|^2. \qquad (13)$$

Notice that we could have included a term comparing $\tilde u^j_N(\theta)$ and $\varphi(X^j_N)$, but this is not necessary as the triangle inequality implies that this difference is already dominated by the loss function.
The goal was to find the solution of a given PDE and we aimed to approximate it using a neural network. In the process of doing so, we first had the PDE alongside an SDE of an underlying state with known initial condition. From this, we derived the SDE of the state variable modeled by the PDE using the Feynman-Kac formalism and a simple application of Ito's Lemma. This SDE has a known terminal condition. Once we have a system of FBSDE, we can use the approach described here to find or approximate the solution of the SDE which, in turn, can be used as an approximation for the solution of the PDE we are interested in. Having seen how learning would take place, we now look into the results about TNN and TNN Init by showcasing their advantages over DNN.
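The rollout of Eq. (11) and the loss of Eq. (13) can be written compactly with automatic differentiation. Below is a hedged PyTorch sketch; the network `net` (mapping a (t, X, v) triple to a price estimate), the tensor layout, and the use of plain squared error are our own simplifications rather than the exact setup of the paper.

```python
import torch

def bsde_loss(net, t, X, v, dWX, dZ, r, eta, rho, K, dt):
    """One evaluation of the loss in Eq. (13): query the network on all
    (t_i, X_i, v_i), differentiate w.r.t. (X, v) by autodiff, roll
    Eq. (11) forward, and compare the two estimators plus the payoff.
    Shapes (assumed): t is (N+1,); X, v are (M, N+1); dWX, dZ are (M, N)."""
    M, Np1 = X.shape
    inp = torch.stack([t.expand(M, Np1), X, v], dim=-1).requires_grad_(True)
    u = net(inp).squeeze(-1)                              # u_i^j(theta)
    g = torch.autograd.grad(u.sum(), inp, create_graph=True)[0]
    uX, uv = g[..., 1], g[..., 2]                         # du/dX, du/dv

    u_t = u[:, 0]                                         # tilde-u_0 = u_0
    loss = 0.0
    for i in range(Np1 - 1):
        s = torch.sqrt(torch.clamp(v[:, i], min=0.0))
        diffusion = (uX[:, i] * s * dWX[:, i]
                     + uv[:, i] * (eta * rho * s * dWX[:, i]
                                   + eta * (1 - rho ** 2) ** 0.5 * s * dZ[:, i]))
        u_t = (1 + r * dt) * u_t + diffusion              # Eq. (11)
        loss = loss + ((u_t - u[:, i + 1]) ** 2).sum()
    payoff = torch.clamp(torch.exp(X[:, -1]) - K, min=0.0)   # call payoff
    return (loss + ((u[:, -1] - payoff) ** 2).sum()) / M
```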
Results: Option Pricing
We can now apply the methodology described in Sec. 5 to benchmark the performance of DNN. To benchmark the TNN performance, we instead apply the methodology described in Sec. 2 alongside the learning framework described in Sec. 5.
For the model, unless otherwise specified, we use the base set of parameters: $r = 0$, $S_0 = 1$, $\sqrt{v_0} = 20\%$, $\kappa = 3$, $\sqrt{\theta} = 40\%$, $\eta = 1$ and $\rho = -0.5$. Furthermore, $Y_t$ and $Z_t$ represent $u(t, X_t)$ and $Du(t, X_t)$. We partition the time domain $[0,T]$ into $N = 500$ equally spaced intervals. For the loss, instead of using the mean squared error (MSE) which is classically used for regression problems, we use the log-cosh loss, which helps in speeding up the convergence as compared to the work in Ref. [1] and which is of the form $\frac{1}{N}\sum_{i=1}^{N}\ln(\cosh(\hat y_i - y_i))$ [24], where $\hat y_i$ is the NN estimate and $y_i$ is the target. To optimize the model weights, we use the Adam optimizer with batches of size 100 and a fixed learning rate of $10^{-3}$. Given the simplicity of the payoff structure, for all our experiments we use a 2-hidden-layer architecture. For simplicity, we only construct TN layers that are symmetric in each input and each output dimension. In practice, we choose the first layer in our NN to be a dense layer with neurons that match the input shape of the second TN layer. That is, a DNN(x, y) corresponds to a two-layer dense network with x neurons in layer 1 and y neurons in layer 2. On the other hand, a TNN(x) corresponds to a two-layer TNN architecture with the first being a dense layer with x neurons and the second layer a TN layer with x neurons.
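For concreteness, the two benchmark architectures and the log-cosh loss might be assembled as follows. The tanh activations and the input dimension of three (for t, X, v) are our assumptions, and `TNLayer` refers to the sketch in Section 2.

```python
import torch
import torch.nn as nn

def log_cosh(y_hat: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """(1/N) * sum_i ln(cosh(y_hat_i - y_i))."""
    return torch.log(torch.cosh(y_hat - y)).mean()

# DNN(4, 24): dense hidden layers with 4 and 24 neurons.
dnn = nn.Sequential(nn.Linear(3, 4), nn.Tanh(),
                    nn.Linear(4, 24), nn.Tanh(),
                    nn.Linear(24, 1))

# TNN(16): a dense layer with 16 neurons feeding a TN layer with
# 16 neurons (d = 4) and bond dimension chi = 2.
tnn = nn.Sequential(nn.Linear(3, 16), nn.Tanh(),
                    TNLayer(d=4, chi=2),   # from the Sec. 2 sketch
                    nn.Linear(16, 1))

optimizer = torch.optim.Adam(tnn.parameters(), lr=1e-3)
```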
Results
Dense Neural Network vs Tensor Neural Network
We now investigate the behavior of the loss and the option price at t = 0 for three different architectures: TNN(16), DNN(4,24) and DNN(16,16). Note that, in comparison with TNN(16), DNN(16,16) has the same number of neurons but more parameters, whereas DNN(4,24) has the same number of parameters but a different number of neurons. All three architectures achieve the same accuracy level upon convergence. So, although TNN(16) achieves the same accuracy as DNN(16,16) with fewer parameters, we find DNN(4,24) to be equally good in terms of accuracy and number of parameters. Hence, the number of parameters may not be used as a measure of compression without considering alternative DNN architectures with the same parameter count, which is a major drawback in the experiments performed in Ref. [8].
Moreover, we see in our experiments that the architectures differ in convergence speed. DNN(4,24) converges the fastest among all the DNN architectures with the same parameter count as that of TNN(16). However, we observe that the TNN architecture converges faster than DNN(4,24), as seen in Fig. 2. It also converges faster than DNN(16,16). Besides the estimate of the option price, as observed in Fig. 3, we can also look into the variance of the estimate. As observed in Fig. 3, TNN not only provides us with better estimates, it also adds stability to our results by reducing the variance as compared to the results of the dense architectures.
In summary, TNN not only allows for memory savings with respect to DNN for the same number of neurons, it also converges faster for the same number of parameters and neurons with a reduction in variance.
Dense Neural Network vs Tensor Neural Network Initializer
We now look into the loss behaviour and the option price at t = 0 for two different architectures, TNN INIT(16) and DNN(16,16). Here, TNN INIT(16) indicates a two-layer dense neural network where the weight matrix is initialized by contracting the MPO and reshaping it into the weight matrix W, which is then used during the learning process.

In comparison with TNN INIT(16), DNN(16,16) has the same number of neurons and parameters, which makes it an ideal architecture to compare against for the potential speed-up from using the TN Initializer approach referenced in Sec. 3. On comparing the two, we note that TNN INIT(16) indeed converges significantly faster than DNN(16,16) despite having the same number of parameters and neurons. This also comes with a significant reduction in variance over 100 runs, which adds to the increased stability that TNN INIT(·) provides when compared to a DNN(·) architecture with an identical number of neurons and parameters and Glorot uniform initialization.

Given that TNN INIT(16) outperforms DNN(16,16), a natural question that arises is how this fares against the TNN(16) architecture. However, before drawing that comparison, it is essential to know that TNN(16)'s parameter count is 45% of the parameter count of TNN INIT(16). Despite this shrinkage in parameter count, we observe that TNN(16) performs almost identically to TNN INIT(16), as shown in the left panel of Fig. 4, which adds to the potential of TNNs. To make a fair comparison between the TNN and TNN INIT architectures and complete the loop, we compare the performance of TNN(16) and TNN INIT(10), which have a similar parameter count. On comparing the two, we indeed see TNN outperforming the TNN INIT architecture, as observed in the right panel of Fig. 4, thereby concluding our comparison between the architectures.

To summarize, we establish that for a similar parameter count, TNN outperforms TNN INIT, which eventually outperforms the best-performing dense architecture with a similar parameter count. Given these advantages, this opens up an interesting avenue for future work within weight initialization for the architectural development of machine learning models.
Runtime Comparison
On comparing runtimes for different architectures via AWS EC2 G4dn instances on an Nvidia T4 GPU, we observe that all the analyzed TNNs converge significantly faster (up to twelve times) than equivalent DNNs with the same parameter count. Even the smallest TNN architecture (161 parameters) outperforms all the analyzed equivalent DNN architectures in terms of convergence speed, as shown in Fig. 7. A particular aspect of this result to look into is how it relates to the compute-efficient training trade-off. We know that larger models converge to lower errors in fewer gradient updates than smaller models. Furthermore, this increase in convergence speed via fewer gradient updates also outpaces the extra computational cost associated with using larger models [29], thereby justifying their advantage. This is exactly what we observe here, as the runtime to converge goes down as we increase the model size.
Bermudan Options
We now look into the preliminary results for pricing Bermudan Options using an extension of the classical Longstaff-Schwartz algorithm [30]. Under the classical approach, we estimate the conditional expected value using regression. However, the main difficulty there stems from high dimensions when trying to get the conditional expectation. In this section, we demonstrate the results of addressing the issues in the classical model using DNN [31] and how we can achieve better results using a TNN.
To understand why neural networks seem to be a good choice over classical regression-based approaches, we need to realize that finite basis functions cannot fully represent the conditional payoff, whereas we can leverage the approximation power of neural networks. Furthermore, with an increase in dimensionality, we have a growing number of basis functions to select from, which eventually results in sub-optimal results, from an accuracy and time perspective, in higher dimensions. Neural networks help here since we are replacing the basis functions and performing gradient descent instead of ordinary least squares regression. Instead of choosing the correct basis functions, we learn them, thereby making it easier to scale in higher dimensions.
Having realized why neural networks are better suited for applications in higher dimensions, we now look at the results for DNN and TNN for pricing Bermudan options in higher dimensions. Here, we price a call option on the maximum of $d$ assets in the Black-Scholes model with payoff $(\max_{i=1,\dots,d} S^i_T - K)^+$, where $d$ is 5 and 10, as investigated in [31,32]. For parameters, we use $S_0 = \{100, 100, \dots\} \in \mathbb{R}^d$, strike $K = 100$, dividend = 10%, risk-free rate = 5%, volatility = $\mathrm{diag}(0.2) \in \mathbb{R}^{d\times d}$, time to maturity = 3 years and $N = 9$ uniform exercise dates. For comparing the performance, we use DNN(16,16) and TNN(16). As noted earlier, in comparison with TNN(16), DNN(16,16) has the same number of neurons but more parameters. For the loss function, we use the standard mean squared error (MSE). For optimizing the model, we use the Adam optimizer with a fixed learning rate of $10^{-3}$, with Leaky ReLU as the choice of activation function. On an elementary level, TNN seems to outperform DNN in terms of minimizing the loss here, as observed in Fig. 8. This also corresponds to TNN outperforming DNN in terms of prices (for d = 5, we get 26.01 ± 0.18 for DNN and 26.13 ± 0.18 for TNN; for d = 10, we get 37.92 ± 0.27 for DNN and 38.14 ± 0.25 for TNN). The losses here are based on the performance of TNNs and DNNs at each exercise date; each exercise date is indicated by the discontinuous jumps, i.e., 1000 epochs per exercise date, that we observe in the loss plots. As observed, the loss over epochs for each exercise date is lower for TNNs than for DNNs, making them an interesting architecture to investigate. These, however, are rudimentary results and need more exploration to properly quantify the TNN advantage; a sketch of the neural-network recursion follows below.
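The following is a hedged sketch of this neural-network variant of the Longstaff-Schwartz recursion. The network width, the per-date optimizer settings, and the helper names are our own choices rather than the exact setup of [31]; only the Leaky ReLU activation and the MSE regression follow the text.

```python
import math
import torch
import torch.nn as nn

def nn_longstaff_schwartz(S, K, r, dt, epochs=1000):
    """Bermudan max-call by backward induction: at each exercise date,
    a small network regresses the discounted continuation value on the
    current asset prices, replacing the polynomial basis of [30].
    S has shape (paths M, assets d, exercise dates N+1)."""
    M, d, Np1 = S.shape
    disc = math.exp(-r * dt)
    payoff = lambda s: torch.clamp(s.max(dim=1).values - K, min=0.0)
    cash = payoff(S[:, :, -1])                    # value at maturity
    for n in range(Np1 - 2, 0, -1):
        net = nn.Sequential(nn.Linear(d, 16), nn.LeakyReLU(),
                            nn.Linear(16, 16), nn.LeakyReLU(),
                            nn.Linear(16, 1))
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        target = disc * cash                      # discounted cash flows
        for _ in range(epochs):                   # MSE regression step
            opt.zero_grad()
            loss = ((net(S[:, :, n]).squeeze(-1) - target) ** 2).mean()
            loss.backward()
            opt.step()
        cont = net(S[:, :, n]).squeeze(-1).detach()
        exercise = payoff(S[:, :, n])
        cash = torch.where(exercise > cont, exercise, target)
    return disc * cash.mean()                     # price estimate at t = 0
```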
Conclusions and Outlook
We have shown how we can leverage TNN to solve parabolic PDEs for pricing European options. As an extension, we can also tackle PDEs for early exercise options with N discrete time steps by stacking N feedforward neural networks. This can reduce the computational cost from exponential in the Longstaff-Schwartz method to quadratic. However, this is left for future exploration.
Under this regime, we addressed some of the shortcomings in the existing literature when quantifying the advantages of TNN by analyzing parameter savings and convergence speed. Empirically, we demonstrated that TNN provides significant parameter savings as compared to DNN while attaining the same accuracy with a smaller variance. We further illustrated that TNN achieves a speedup in training by comparing TNN against entire families of DNN architectures with similar parameter counts. Besides TNN, we also introduced TNN Initializer, a weight initialization scheme, that empirically outperformed DNN. Despite the absence of theoretical bounds, which can be an area of further analysis, the methodologies described here can be used to improve training from a memory and speed perspective for a wide variety of problems in machine learning. Quantifying the complexity of a problem and adapting the methodology to problems where this approach can provide a significant edge, can be an interesting avenue for future work.
Figure 2: Training loss over epochs for TNN(16) with bond dimension 2 (blue), the corresponding best DNN with equivalent parameters (orange) and the DNN with equivalent neurons (green). The plots illustrate the resulting mean ± standard deviation from 100 runs with different seeds.
Figure 3: Option price at $t_0$ over epochs for TNN(16) with bond dimension 2 (blue), the corresponding best DNN with equivalent parameters (orange) and the DNN with equivalent neurons (green). The plots illustrate the resulting mean ± standard deviation from 100 runs. The black dotted line indicates the analytical solution obtained from solving the Heston PDE using the Fourier transformation approach discussed at the end of Sec. 4.
Figure 4: (left panel) Training loss over epochs for TNN(16) with a bond dimension 2 and that of TNN Init(16) with a bond dimension 2; (right panel) training loss over epochs for TNN(16) with a bond dimension 2 and that of TNN Init(10) with a bond dimension 2. The plots indicate the resulting mean ± standard deviation from 100 runs.
Figure 5: Training loss over epochs for DNN(16,16) (orange) and DNN(16,16) with TN-Initializer (blue). The plots illustrate the resulting mean ± standard deviation from 100 runs.
Figure 6: Option price at $t_0$ over epochs for DNN(16,16) (orange) and DNN(16,16) with TN-Initializer (blue). The plots illustrate the resulting mean ± standard deviation from 100 runs. The black dotted line indicates the analytical solution obtained from solving the Heston PDE using the Fourier transformation approach discussed at the end of Sec. 4.
Figure 7: Time taken to converge for the Heston model for TNN and DNN architectures.
Figure 8: Loss evolution for Bermudan options with (left panel) d = 5 assets and (right panel) d = 10 assets.
Acknowledgements -We acknowledge the regular fruitful discussions with the technical teams both at Crédit Agricole and Multiverse Computing.
[1] Maziar Raissi. Forward-backward stochastic neural networks: Deep learning of high-dimensional partial differential equations. arXiv preprint arXiv:1804.07010v1, 2018.
[2] Christian Beck, Weinan E, and Arnulf Jentzen. Machine learning approximation algorithms for high-dimensional fully nonlinear partial differential equations and second-order backward stochastic differential equations. Journal of Nonlinear Science, 29(4):1563-1619, 2019.
[3] Jiequn Han, Arnulf Jentzen, and Weinan E. Solving high-dimensional partial differential equations using deep learning. Proceedings of the National Academy of Sciences, 115(34):8505-8510, 2018.
[4] Weinan E, Jiequn Han, and Arnulf Jentzen. Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations. Communications in Mathematics and Statistics, 5(4):349-380, 2017.
[5] Patrick Cheridito, H. M. Soner, Nizar Touzi, and Nicolas Victoir. Second-order backward stochastic differential equations and fully nonlinear parabolic PDEs. Communications on Pure and Applied Mathematics, 60(7):1081-1110, 2006.
[6] Maziar Raissi, Paris Perdikaris, and George Em Karniadakis. Physics informed deep learning (part I): Data-driven solutions of nonlinear partial differential equations. arXiv preprint arXiv:1711.10561, 2017.
[7] Maziar Raissi, Paris Perdikaris, and George Em Karniadakis. Physics informed deep learning (part II): Data-driven discovery of nonlinear partial differential equations. arXiv preprint arXiv:1711.10566, 2017.
[8] Alexander Novikov, Dmitrii Podoprikhin, Anton Osokin, and Dmitry P. Vetrov. Tensorizing neural networks. Advances in Neural Information Processing Systems, 28, 2015.
[9] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. International Conference on Learning Representations (ICLR), 2015.
[10] Jian Xue, Jinyu Li, and Yifan Gong. Restructuring of deep neural network acoustic models with singular value decomposition. Interspeech 2013, pages 2365-2369, 2013.
[11] Román Orús. A practical introduction to tensor networks: Matrix product states and projected entangled pair states. Annals of Physics, 349:117-158, 2014.
[12] Edwin Stoudenmire and David J. Schwab. Supervised learning with tensor networks. Advances in Neural Information Processing Systems 29, pages 4799-4807, 2016.
[13] Edwin Stoudenmire. Learning relevant features of data with multi-scale tensor networks. Quantum Science and Technology, 3(3):034003, 2018.
[14] Ivan Glasser, Nicola Pancotti, and J. Ignacio Cirac. Supervised learning with generalized tensor networks. arXiv preprint arXiv:1806.05964, 2018.
[15] Stavros Efthymiou, Jack Hidary, and Stefan Leichenauer. Tensor network for machine learning. arXiv preprint arXiv:1906.06329, 2019.
[16] Amandeep Bhatia, Mandeep Kaur Saggi, Ajay Kumar, and Sushma Jain. Matrix product state-based quantum classifier. Neural Computation, 31(7):1499-1517, 2019.
[17] Ding Liu, Shi-Ju Ran, Peter Wittek, Cheng Peng, Raul Blázquez García, Gang Su, and Maciej Lewenstein. Machine learning by unitary tensor network of hierarchical tree structure. New Journal of Physics, 21(7):073059, 2019.
[18] I. Glasser, N. Pancotti, and J. I. Cirac. From probabilistic graphical models to generalized tensor networks for supervised learning. IEEE Access, 8:68169-68182, 2020.
[19] Zhao-Yu Han, Jun Wang, Heng Fan, Lei Wang, and Pan Zhang. Unsupervised generative modeling using matrix product states. Physical Review X, 8:031012, 2018.
[20] Song Cheng, Lei Wang, Tao Xiang, and Pan Zhang. Tree tensor networks for generative modeling. Physical Review B, 99:155131, 2019.
[21] Zheng-Zhi Sun, Cheng Peng, Ding Liu, Shi-Ju Ran, and Gang Su. Generative tensor network classification model for supervised machine learning. Physical Review B, 101:075135, 2020.
[22] Tai-Danae Bradley, Edwin Miles Stoudenmire, and John Terilla. Modeling sequences with quantum states: A look under the hood. Machine Learning: Science and Technology, 2020.
[23] I. V. Oseledets. Tensor-train decomposition. SIAM Journal on Scientific Computing, 33(5):2295-2317, 2011.
[24] Raj Patel, Chia-Wei Hsing, Serkan Sahin, Saeed S. Jahromi, Samuel Palmer, Shivam Sharma, Christophe Michel, Vincent Porte, Mustafa Abid, Stephane Aubert, Pierre Castellani, Chi-Guhn Lee, Samuel Mugel, and Román Orús. Quantum-inspired tensor neural networks for partial differential equations. arXiv preprint arXiv:2208.02235, 2022.
[25] Darrell Duffie, Jun Pan, and Kenneth Singleton. Transform analysis and asset pricing for affine jump-diffusions. Econometrica, 68:1343-1376, 2000.
[26] Peter Carr and Dilip Madan. Option valuation using the fast Fourier transform. Journal of Computational Finance, 2, 2001.
[27] Ricardo Crisostomo. An analysis of the Heston stochastic volatility model: Implementation and calibration using Matlab. 2015.
[28] M. G. Kendall, Alan Stuart, and Keith Ord. Kendall's Advanced Theory of Statistics, Volume 1: Distribution Theory. Wiley, 6th edition, 1994.
[29] Zhuohan Li, Eric Wallace, Sheng Shen, Kevin Lin, Kurt Keutzer, Dan Klein, and Joseph E. Gonzalez. Train large, then compress: Rethinking model size for efficient training and inference of transformers. 2020.
[30] Francis Longstaff and Eduardo Schwartz. Valuing American options by simulation: A simple least-squares approach. Review of Financial Studies, 14:113-147, 2001.
[31] Bernard Lapeyre and Jérôme Lelong. Neural network regression for Bermudan option pricing. arXiv preprint arXiv:1907.06474, 2019.
[32] Calypso Herrera, Florian Krach, Pierre Ruyssen, and Josef Teichmann. Optimal stopping via randomized neural networks. 2021.
[33] Yangang Chen and Justin W. L. Wan. Deep neural network framework based on backward stochastic differential equations for pricing and hedging American options in high dimensions. 2019.
| [] |
[
"Bottomonium sequential suppression and strong heavy-quark potential in heavy-ion collisions",
"Bottomonium sequential suppression and strong heavy-quark potential in heavy-ion collisions"
] | [
"Liuyuan Wen \nDepartment of Physics\nTianjin University\n300350TianjinChina\n",
"Baoyi Chen \nDepartment of Physics\nTianjin University\n300350TianjinChina\n"
] | [
"Department of Physics\nTianjin University\n300350TianjinChina",
"Department of Physics\nTianjin University\n300350TianjinChina"
] | [] | We employ the time-dependent Schrödinger equation with different complex potentials to study the bottomonium sequential suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV and 5.02 TeV and Au-Au collisions at $\sqrt{s_{NN}}$ = 200 GeV. Both the color screening effect and the random scatterings with thermal partons are considered in the real and imaginary parts of the heavy-quark potentials. As the real part of the heavy-quark potential is between the free energy F(T, r) and the internal energy U(T, r) of heavy quarkonium, we parametrize different potentials with a function of F and U to evolve the bottomonium wave packages in the medium. We find that when the real part of the potential is close to U(T, r), it can explain well the pattern of bottomonium sequential suppression, where the nuclear modification factors satisfy the relation $R_{AA}$(1s) > $R_{AA}$(2s) > $R_{AA}$(3s) observed in experiments. In the other limit of F(T, r), bottomonium wave packages tend to expand due to the weak attractive force, which results in evident transitions from Υ(2s) to Υ(3s) components and does not satisfy the sequential suppression pattern. We suggest that the bottomonium sequential suppression can be a probe of the strong heavy-quark potential in the medium. | 10.1016/j.physletb.2023.137774 | [
"https://export.arxiv.org/pdf/2208.10050v2.pdf"
] | 251,719,568 | 2208.10050 | 109020e92d75c80dcc3e20d351481b011f144a4b |
Bottomonium sequential suppression and strong heavy-quark potential in heavy-ion collisions
Liuyuan Wen
Department of Physics
Tianjin University
300350TianjinChina
Baoyi Chen
Department of Physics
Tianjin University
300350TianjinChina
Bottomonium sequential suppression and strong heavy-quark potential in heavy-ion collisions
(Dated: March 10, 2023)
We employ the time-dependent Schrödinger equation with different complex potentials to study the bottomonium sequential suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV and 5.02 TeV and Au-Au collisions at $\sqrt{s_{NN}}$ = 200 GeV. Both the color screening effect and the random scatterings with thermal partons are considered in the real and imaginary parts of the heavy-quark potentials. As the real part of the heavy-quark potential is between the free energy F(T, r) and the internal energy U(T, r) of heavy quarkonium, we parametrize different potentials with a function of F and U to evolve the bottomonium wave packages in the medium. We find that when the real part of the potential is close to U(T, r), it can explain well the pattern of bottomonium sequential suppression, where the nuclear modification factors satisfy the relation $R_{AA}$(1s) > $R_{AA}$(2s) > $R_{AA}$(3s) observed in experiments. In the other limit of F(T, r), bottomonium wave packages tend to expand due to the weak attractive force, which results in evident transitions from Υ(2s) to Υ(3s) components and does not satisfy the sequential suppression pattern. We suggest that the bottomonium sequential suppression can be a probe of the strong heavy-quark potential in the medium.
I. INTRODUCTION
Relativistic heavy-ion collisions can generate an extremely hot deconfined medium consisting of quarks and gluons, called the Quark-Gluon Plasma (QGP) [1]. Light hadrons melt into partons at the deconfinement phase transition, while heavy quarkonium with large binding energy may survive in the medium up to a dissociation temperature $T_d$ [2-4]. The production and collective flows of heavy quarkonium are regarded as probes of the hot medium profile in heavy-ion collisions [5-12]. At temperatures below $T_d$, quarkonium bound states also suffer random scatterings with thermal partons. In transport models [13-19], the color screening effect and inelastic scatterings are considered in the collision cross section with thermal partons, while in potential models the effect of random scatterings is approximated as an imaginary potential acting on the quarkonium wave function [20-23]. Both the color screening effect and particle scatterings are included in the complex potentials of quarkonium. The inverse transition from color-octet to color-singlet states is ignored herein, which turns out to be less important for the nuclear modification factors of bottomonium [24]. There are potential models, such as the Schrödinger-Langevin equation, which describe the evolution of quarkonium wave functions with a stochastic potential directly [22]. Open quantum system approaches have also been developed to evolve the density matrix of the subsystem with the Lindblad equation formalism [25,26] and stochastic wave packages with the stochastic Schrödinger equation [27,28].
The sequential suppression pattern of bottomonium states has been observed in Pb-Pb collisions at the Large Hadron Collider (LHC) [29-31] and at the Relativistic Heavy-Ion Collider (RHIC) [32,33]. The dissociation of quarkonium is closely connected with the in-medium heavy quark potentials, so one can use the sequential suppression pattern to extract the in-medium potential. In this work, we take different complex potentials to evolve the bottomonium wave packages in the hot medium. The inner motion of the wave packages is described with the time-dependent Schrödinger equation. As the real part of the heavy quark potential is between two limits, the free energy F and the internal energy U, a parametrized potential consisting of F and U is taken in the Hamiltonian. We study the suppression of the bottomonium states Υ(1s, 2s, 3s) in Pb-Pb collisions at the LHC energies (2.76 TeV, 5.02 TeV) and in Au-Au collisions at the RHIC energy (200 GeV). The regeneration process from uncorrelated bottom and anti-bottom quarks in the QGP is neglected in the present framework [34,35]; it turns out to be less important compared with the primordial production.
Firstly, we will introduce the framework of the Schrödinger equation model. Then we parametrize the complex potentials based on data from lattice QCD calculations. The realistic spatial distribution of bottomonium produced in Pb-Pb collisions is proportional to the number of binary collisions, while the initial momentum distribution is extracted from pp collisions. After bottomonium wave packages move out of the hot medium, the final production of bottomonium states is obtained by projecting the wave packages onto the Υ(1s, 2s, 3s) states, which are defined as the eigenstates of the vacuum Cornell potential. The inclusive nuclear modification factors of each state are given after considering the feed-down process from higher to lower states.
II. THEORETICAL MODEL
In the bottomonium evolution, hot medium effects are included in the temperature-dependent complex potential. Neglecting the viscosity of the medium, both the real and imaginary potentials become central, so there is no mixing between states with different angular momenta in the evolution of the bottomonium wave packet. Due to the large mass of the bottom quark compared with the inner motion of bottomonium, the relativistic effect can be neglected. We separate the radial part of the Schrödinger equation for the bottomonium evolution [36],
i\hbar\,\frac{\partial}{\partial t}\psi(r,t) = \left[-\frac{\hbar^2}{2m_\mu}\frac{\partial^2}{\partial r^2} + V(r,T) + \frac{L(L+1)\hbar^2}{2m_\mu r^2}\right]\psi(r,t)    (1)
where r is the relative radius of the wave packet and t is the proper time in the local rest frame of bottomonium. The reduced mass is defined as m_\mu = m_1 m_2/(m_1 + m_2) = m_b/2, with the bottom quark mass taken to be m_b = 4.62 GeV. The wave packet \psi(r,t) = rR(r,t) is the product of r and the radial wave function R(r,t). It is regarded as a quantum superposition of the bottomonium eigenstates of the vacuum Cornell potential, \psi(r,t)/r = \sum_{nl} c_{nl}(t)\Phi_{nl}(r), where \Phi_{nl}(r) is the radial wave function of the eigenstate with radial and angular quantum numbers (n, l). The square of the coefficient, |c_{nl}(t)|^2, is the fraction of the corresponding eigenstate in the wave packet. It changes with time due to the complex heavy-quark potential V(r,T).
In the hot medium, the real part of the heavy quark potential is reduced by color screening. It lies between the free energy F(r,T) and the internal energy U(r,T). Some studies have indicated a strongly attractive potential for quarkonium in the medium [7,36,37]. We parametrize the free energy with the form [38],
F(r,T) = -\frac{\alpha}{r}\,e^{-m_D r} + \frac{\sigma}{m_D}\left(1 - e^{-m_D r}\right)    (2)
where m_D = T\sqrt{\frac{4\pi N_c}{3}\,\alpha\left(1 + \frac{N_f}{6}\right)} is the in-medium gluon Debye mass [38]. The values of \alpha and \sigma are determined in the Cornell potential from the masses of the bottomonium (1S, 1P) states, \alpha = \pi/12 and \sigma = 0.2 GeV^2, where the corresponding vacuum masses are m_{1S,1P} = (9.47, 9.79) GeV [3]. The color and flavor factors are taken as N_c = N_f = 3. The internal energy is obtained via the relation U(r,T) = F + T(-\partial F/\partial T),
U(r,T) = -\frac{\alpha}{r}(1 + m_D r)\,e^{-m_D r} + \frac{2\sigma}{m_D}\left[1 - e^{-m_D r}\right] - \sigma r\, e^{-m_D r}    (3)
In the following calculations, we parametrize the real potential as a combination of F and U, V_R = xF + (1 - x)U, where the parameter x \in [0, 1] interpolates between the internal energy (x = 0) and the free energy (x = 1). Any possible (r, T) dependence of the parameter x is neglected for now.
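To make the parametrization concrete, the following minimal sketch evaluates m_D, F, U and V_R = xF + (1 - x)U from Eqs. (2)-(3). It assumes natural units with r in GeV^-1 and T in GeV; the function names are ours, chosen for illustration.

```python
import numpy as np

# Parameters quoted in the text
ALPHA = np.pi / 12   # Cornell coupling
SIGMA = 0.2          # string tension, GeV^2
NC, NF = 3, 3        # color and flavor factors

def debye_mass(T):
    """In-medium gluon Debye mass m_D(T) used below Eq. (2)."""
    return T * np.sqrt(4 * np.pi * NC / 3 * ALPHA * (1 + NF / 6))

def free_energy(r, T):
    """Free energy F(r, T), Eq. (2); r in GeV^-1, T in GeV."""
    mD = debye_mass(T)
    return -ALPHA / r * np.exp(-mD * r) + SIGMA / mD * (1 - np.exp(-mD * r))

def internal_energy(r, T):
    """Internal energy U(r, T) = F + T(-dF/dT), Eq. (3)."""
    mD = debye_mass(T)
    return (-ALPHA / r * (1 + mD * r) * np.exp(-mD * r)
            + 2 * SIGMA / mD * (1 - np.exp(-mD * r))
            - SIGMA * r * np.exp(-mD * r))

def real_potential(r, T, x):
    """Parametrized real part V_R = x F + (1 - x) U, with x in [0, 1]."""
    return x * free_energy(r, T) + (1 - x) * internal_energy(r, T)
```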
FIG. 1. The imaginary part of the heavy quark potential scaled with the temperature, iV_I/T, as a function of the radius. Different uncertainties of the data points are partially considered with "Band 1" and "Band 2". The potential data are cited from [39].

Random scatterings off thermal partons contribute noise terms to the potential. The stochastic evolution of the wave packet with a noisy potential is well approximated by the evolution of the averaged wave function with a complex potential [36,40]. According to the results of lattice QCD calculations, the imaginary potential shows a near-linear dependence on the temperature. We therefore fit the lattice data points with a polynomial up to cubic terms in the radius [41]. Due to the large uncertainty in the data points, we perform two parametrizations of V_I for the following calculations. Firstly, we fit the central values of the data points at r < 1.2 fm; data points at very large radius are neglected due to their extremely large uncertainty. This line is then shifted upward to partially include the uncertainty in the data points [36]. The two lines are plotted as the lower and upper limits of "Band 1" in Fig. 1, and their expressions are,
V_I^{\rm upper}(r,T) = -iT\,(a_1\bar r^3 + a_2\bar r^2 + a_3\bar r),
V_I^{\rm lower}(r,T) = -iT\,(b_1\bar r^3 + b_2\bar r^2 + b_3\bar r),    (4)
where a_1 = 0.2096, a_2 = 0.1486, a_3 = 0.0854 and b_1 = 0.3605, b_2 = 0.2536, b_3 = 0.0909; i is the imaginary unit and \bar r \equiv r/{\rm fm} is a dimensionless variable. To account for a larger uncertainty in the data points, we also fit a line to all the data points and then take a one-standard-deviation error band around the fit. This gives the larger band labeled "Band 2" in Fig. 1. The larger uncertainty in V_I results in a significant uncertainty in the bottomonium R_AA. In the following calculations of bottomonium, the V_I labeled "Band 1" is employed first; in the last section, the V_I labeled "Band 2" is also considered to study the uncertainty of the model parameters.
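A minimal sketch of the "Band 1" parametrization of Eq. (4), using the fitted coefficients quoted above (the function name is ours):

```python
def imaginary_potential_band1(r_fm, T):
    """Upper and lower limits of the "Band 1" imaginary potential, Eq. (4).
    r_fm is the radius in fm (so that rbar = r/fm), T the temperature in GeV."""
    a1, a2, a3 = 0.2096, 0.1486, 0.0854   # upper-limit coefficients
    b1, b2, b3 = 0.3605, 0.2536, 0.0909   # lower-limit coefficients
    upper = -1j * T * (a1 * r_fm**3 + a2 * r_fm**2 + a3 * r_fm)
    lower = -1j * T * (b1 * r_fm**3 + b2 * r_fm**2 + b3 * r_fm)
    return upper, lower
```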
The time-dependent Schrödinger equation with complex potentials can be solved numerically with the Crank-Nicolson method, where the wave packet is evolved directly on the coordinate grid instead of being projected onto a series of eigenstates. The discrete form of the radial Schrödinger equation becomes (in natural units, \hbar = c = 1),
T^{n+1}_{j,k}\,\psi^{n+1}_k = V^n_j.    (5)
Here j and k are the row and column indices of the matrix T, respectively, and a sum over k is implied. The non-zero elements of the matrix are,
T^{n+1}_{j,j} = 2 + 2a + bV^{n+1}_j, \quad T^{n+1}_{j,j+1} = T^{n+1}_{j+1,j} = -a,
V^n_j = a\psi^n_{j-1} + (2 - 2a - bV^n_j)\psi^n_j + a\psi^n_{j+1},    (6)
where the parameters are a = i\Delta t/(2m_\mu(\Delta r)^2) and b = i\Delta t, with \Delta t = 0.001 fm/c and \Delta r = 0.03 fm the time and radial steps. The subscript j and the superscript n label the coordinate and the time, r_j = j\cdot\Delta r and t_n = n\cdot\Delta t, respectively. As the potential depends on the temperature of the medium, which changes with time and coordinate, the Hamiltonian is time dependent.
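Because T is tridiagonal, each Crank-Nicolson step of Eqs. (5)-(6) amounts to one banded linear solve. The following sketch (our own illustration, not the original code) performs a single step with SciPy's banded solver:

```python
import numpy as np
from scipy.linalg import solve_banded

def crank_nicolson_step(psi, V_now, V_next, dt, dr, m_mu):
    """One Crank-Nicolson step of Eqs. (5)-(6) for the radial wave packet.
    psi, V_now, V_next are complex arrays on the radial grid (hbar = c = 1)."""
    a = 1j * dt / (2 * m_mu * dr**2)
    b = 1j * dt
    n = len(psi)
    # Right-hand side: V^n_j = a psi_{j-1} + (2 - 2a - b V^n_j) psi_j + a psi_{j+1}
    rhs = (2 - 2 * a - b * V_now) * psi
    rhs[1:] += a * psi[:-1]
    rhs[:-1] += a * psi[1:]
    # Tridiagonal matrix T^{n+1}: diagonal 2 + 2a + b V^{n+1}_j, off-diagonals -a
    ab = np.zeros((3, n), dtype=complex)
    ab[0, 1:] = -a                       # superdiagonal
    ab[1, :] = 2 + 2 * a + b * V_next    # diagonal
    ab[2, :-1] = -a                      # subdiagonal
    return solve_banded((1, 1), ab, rhs)
```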
At each time step, we calculate the inverse of the matrix T to obtain the wave packet at the next time step. As the local temperature of the medium only changes appreciably over a time scale larger than ~0.1 fm/c, we can approximately reuse the same matrix over this time scale, updating the matrix and its inverse after a period of 100\Delta t. This approximation significantly reduces the numerical computation time. The fractions of the bottomonium eigenstates in the wave packet of the bottom dipole are obtained by projecting \psi(r,t) onto the wave functions of the corresponding eigenstates \Phi_{nl}. In Pb-Pb collisions, bottom pairs are produced in hard parton scatterings, where the initial wave functions of the bottom pairs are close to a delta function in coordinate space at the time scale \tau \sim 1/(2m_b). As the bulk medium is described with the hydrodynamic equations from \tau_0 \sim 0.6 fm/c [42], when the medium reaches local equilibrium, we start evolving the Schrödinger equation from \tau \geq \tau_0 and assume that the bottom pairs have evolved into bottomonium eigenstates on the time scale \tau \sim \tau_0. In the Schrödinger equation, the initial conditions of the wave packets at \tau = \tau_0 are taken as the bottomonium vacuum eigenstates. Mixing between different eigenstates in the wave packet is induced by the in-medium potential, which corresponds to transitions between different eigenstates. Multiple wave packets generated at different positions move along different trajectories in the hot medium and experience different temperature profiles. We randomly generate a large set of wave packets representing the primordially produced bottomonium and evolve them event-by-event with the Schrödinger equation. The ensemble-averaged final fractions of the bottomonium eigenstates in the wave packets are obtained and used to calculate the direct and inclusive nuclear modification factors R_AA(1s, 2s, 3s).
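The projection step can be written as a simple overlap integral on the radial grid. A sketch, assuming the eigenstates are supplied as a (hypothetical) dictionary of pre-computed radial wave functions:

```python
import numpy as np

def eigenstate_fractions(psi, eigenstates, dr):
    """Project the evolved wave packet onto vacuum eigenstates to get |c_nl|^2.
    psi: complex array, psi(r_j) = r_j R(r_j, t) on a uniform radial grid;
    eigenstates: dict mapping (n, l) -> array phi_nl(r_j), normalized so that
    sum(|phi_nl|^2) * dr = 1 (an illustrative container, not the original API)."""
    return {(n, l): abs(np.sum(np.conj(phi) * psi) * dr) ** 2
            for (n, l), phi in eigenstates.items()}
```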
III. INITIAL CONDITIONS AND HOT MEDIUM EVOLUTIONS
In proton-proton (pp) collisions, experiments have measured the production cross sections of Υ(1s, 2s, 3s).
The inclusive cross sections \sigma_{exp} of Υ(1s, 1p, 2s, 2p, 3s), which include the feed-down processes, are listed in Table I. The direct cross sections without feed-down are connected to the experimental data through the branching ratios from the Particle Data Group [43]; they satisfy the relation \sigma_{exp}(1s) = \sigma_{direct}(1s) + \sum_{nl}\sigma_{direct}(nl)B_{nl\to 1s}, where the radial and angular quantum numbers (n, l) run over the states (1p, 2s, 2p, 3s). In the calculations, we treat \chi_{b0,b1,b2}(1p) as a single 1p state, with the branching ratio B_{1p\to 1s} taken as the average of the three branching ratios B(\chi_{b0,b1,b2}(1p)\to 1s); the potential for \chi_{b0,b1,b2}(1p) is also the same in the Schrödinger equation. Similarly, \chi_b(2p) represents the sum of the \chi_{b0,b1,b2}(2p) states. For the inclusive cross sections of the excited states, we use the analogous relation \sigma_{exp}(2s) = \sigma_{direct}(2s) + \sum_{nl}\sigma_{direct}(nl)B_{nl\to 2s}, where (n, l) now runs over the states above Υ(2s). With this method we extract the direct cross sections of the bottomonium states listed in Table I. In the initial conditions of the Schrödinger equation, the numbers of wave packets initialized as the different bottomonium eigenstates at t = \tau_0 satisfy the ratios \sigma^{1s}_{direct} : \sigma^{1p}_{direct} : \sigma^{2s}_{direct} : \sigma^{2p}_{direct} : \sigma^{3s}_{direct} in Table I. After initializing the wave function of a bottom pair as a bottomonium eigenstate at t = \tau_0, the relative motion of the wave packet is governed by the Schrödinger equation, while its center moves in the medium with a constant total momentum p_T. We assume that the momentum distribution of the wave packets follows the bottomonium momentum distribution measured in pp collisions. The normalized initial transverse momentum distribution of Υ(1s) in 5.02 TeV pp collisions is parametrized with the form,
\frac{dN^\Upsilon_{pp}}{d\phi\, p_T\, dp_T} = \frac{(n-1)}{\pi(n-2)\langle p_T^2\rangle_{pp}}\left[1 + \frac{p_T^2}{(n-2)\langle p_T^2\rangle_{pp}}\right]^{-n}    (7)
where the parameter is fitted to be n = 2.5, and the mean squared transverse momentum at central rapidity is estimated to be \langle p_T^2\rangle_{pp} = (80, 55, 28) (GeV/c)^2 at 5.02 TeV, 2.76 TeV and 200 GeV, respectively, based on measurements in pp collisions [47,48,52]. The spatial distribution of the wave packets is proportional to the number of binary collisions in the overlap region of the two nuclei. We assume the same distribution for the bottomonium excited states as for the ground state. Cold nuclear matter effects, such as the shadowing effect, also change the initial distributions of the wave packets. We take the EPS09 NLO model to calculate the shadowing factor R_S(x_T) of bottomonium [53]. We randomly generate the wave packets based on the spatial distribution,
\frac{dN^\Upsilon_{AA}}{dx_T} \propto T_A(x_T + b/2)\,T_B(x_T - b/2)\,R_S(x_T)    (8)
where T_{A(B)} is the thickness function and b is the impact parameter. The centers of the wave packets move in the hot medium with constant momenta satisfying the distribution of Eq. (7). For the hot medium evolution, we use the (2+1)-dimensional ideal hydrodynamic model developed by the Tsinghua group. This hydrodynamic model has been applied in transport models to explain well both charmonium and bottomonium observables at RHIC [7,54] and LHC [55-57] energies. In √s_NN = 5.02 TeV Pb-Pb collisions, the maximum initial temperature at the center of the fireball is determined to be T_0(x_T = 0) = 510 MeV at the time \tau_0 = 0.6 fm/c at central rapidity. The spatial distribution of the initial energy density is obtained with the optical Glauber initial condition. The equations of state of the QGP and the hadron gas are taken as an ideal gas and a hadron resonance gas, respectively. The phase transition between the two phases is a first-order transition with the critical temperature determined to be T_c = 165 MeV by the bag model. The wave packets start evolving with the parametrized complex potentials from \tau \geq \tau_0 along different trajectories. When the local temperature along a trajectory drops below T_c, the in-medium potential is replaced with the Cornell potential, and the fractions of the bottomonium eigenstates in the wave packet no longer change. With this approximation, we have neglected hadron gas effects, which are believed to be small for the tightly bound bottomonium.
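The two sampling steps of the initial conditions can be illustrated as follows. The cumulative distribution of Eq. (7) inverts analytically, so p_T can be drawn by inverse transform, while the transverse position of Eq. (8) can be drawn by rejection sampling. In this sketch the thickness and shadowing functions are placeholder callables (e.g. nuclear thickness functions built from a Woods-Saxon density); none of the names come from the original code.

```python
import numpy as np

def sample_pt(mean_pt2, n=2.5, size=1, rng=None):
    """Sample p_T from Eq. (7) by inverse transform:
        p_T = sqrt(c * ((1 - u)**(-1/(n-1)) - 1)),  c = (n - 2) <p_T^2>_pp."""
    rng = rng or np.random.default_rng()
    c = (n - 2) * mean_pt2
    u = rng.uniform(size=size)
    return np.sqrt(c * ((1.0 - u) ** (-1.0 / (n - 1.0)) - 1.0))

def sample_xt(thickness_A, thickness_B, shadowing, b, n_samples,
              x_max=10.0, rng=None):
    """Rejection-sample transverse positions from Eq. (8). thickness_A/B and
    shadowing are user-supplied callables (placeholders here), in fm^-2."""
    rng = rng or np.random.default_rng()
    grid = np.linspace(-x_max, x_max, 201)
    xs, ys = np.meshgrid(grid, grid)
    w_max = (thickness_A(xs + b / 2, ys) * thickness_B(xs - b / 2, ys)
             * shadowing(xs, ys)).max()       # crude bound for rejection
    samples = []
    while len(samples) < n_samples:
        x, y = rng.uniform(-x_max, x_max, 2)
        w = (thickness_A(x + b / 2, y) * thickness_B(x - b / 2, y)
             * shadowing(x, y))
        if rng.uniform() < w / w_max:
            samples.append((x, y))
    return np.array(samples)

# e.g. <p_T^2>_pp = 80 (GeV/c)^2 at 5.02 TeV central rapidity
pt_values = sample_pt(80.0, size=10000)
```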
IV. OBSERVABLES
At each time step, the dynamical evolution of the wave packet is described by the Schrödinger equation, where the hot medium effects and the cold nuclear matter effects are encoded in the complex potential and in the initial conditions, respectively. The wave packets move with constant velocities in the medium. The final fractions of the bottomonium eigenstates in a wave packet are calculated when its center moves out of the QGP. After an ensemble average over a large set of wave packets generated with different initial positions and total momenta, we obtain the mean fraction \langle|c_{nl}(t)|^2\rangle_{en} of the bottomonium eigenstate labeled by the quantum numbers (n, l) in the wave packet,
\langle |c_{nl}(t)|^2\rangle_{en} = \frac{\int dx_\Upsilon\, dp_\Upsilon\, |c_{nl}(t, x_\Upsilon, p_\Upsilon)|^2\, \frac{dN^\Upsilon_{AA}}{dx_\Upsilon dp_\Upsilon}}{\int dx_\Upsilon\, dp_\Upsilon\, \frac{dN^\Upsilon_{AA}}{dx_\Upsilon dp_\Upsilon}}    (9)
where x_\Upsilon and p_\Upsilon are the position and total momentum of the center of a wave packet, respectively, and dN^\Upsilon_{AA}/dx_\Upsilon dp_\Upsilon represents the distribution of wave packets in phase space. After the evolution of the Schrödinger equation, the excited states decay into the ground state and contribute to the inclusive R_AA of Υ(1s) and Υ(2s),
R_{AA}(1s) = \frac{\sum_{nl}\langle|c_{nl}(t)|^2\rangle_{en}\, f^{nl}_{pp}\, B_{nl\to 1s}}{\sum_{nl}\langle|c_{nl}(t_0)|^2\rangle_{en}\, f^{nl}_{pp}\, B_{nl\to 1s}}    (10)
where B_{nl\to 1s} is the branching ratio of the eigenstate (n, l) decaying into the 1s state; the values of the branching ratios are taken from the Particle Data Group. In the calculation of the inclusive R_AA(1s), all the decay channels \chi_b(1p)\to\Upsilon(1s), \Upsilon(2s)\to\Upsilon(1s), \chi_b(2p)\to\Upsilon(1s) and \Upsilon(3s)\to\Upsilon(1s) are included in Eq. (10). For the inclusive R_AA(2s), the decay contributions from \chi_b(2p) and \Upsilon(3s) are considered. There are no higher states decaying into \Upsilon(3s).
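A minimal sketch of the feed-down combination of Eq. (10); the dictionary keys are illustrative state labels, not identifiers from the original code:

```python
def inclusive_raa_1s(frac_final, frac_init, f_pp, branch_to_1s):
    """Inclusive R_AA(1s) following Eq. (10). All arguments are dicts keyed
    by state labels such as '1s', '1p', '2s', '2p', '3s': frac_final and
    frac_init are the ensemble-averaged |c_nl|^2 at the end and start of the
    evolution, f_pp the direct pp production fractions, and branch_to_1s the
    branching ratios B_{nl->1s} (with B_{1s->1s} = 1)."""
    num = sum(frac_final[s] * f_pp[s] * branch_to_1s[s] for s in f_pp)
    den = sum(frac_init[s] * f_pp[s] * branch_to_1s[s] for s in f_pp)
    return num / den
```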
V. NUMERICAL RESULTS
In heavy-ion collisions, we take different complex potentials in the Schrödinger equation to calculate the nuclear modification factors of bottomonium at LHC and RHIC energies. The real part of the potential is parametrized as a function of F and U, while the imaginary potential is parametrized with a polynomial. Different uncertainties in the imaginary potential are considered with "Band 1" and "Band 2" in Fig. 1. In this section, the V_I with the smaller uncertainty is used in the calculations.
In Fig. 2, the suppression of the bottomonium R_AA as a function of the number of participants N_p is mainly induced by the imaginary potential and the color screening effect. In the case of V_R = U, the strong heavy quark potential can confine the wave packet, and the theoretical calculations show a clear pattern of sequential suppression, R^{1s}_{AA} > R^{2s}_{AA} > R^{3s}_{AA}, in Fig. 2. The bands of the theoretical results come from the uncertainty in the imaginary potential (V_I^{upper} and V_I^{lower}). An excited state such as Υ(3s) has a larger geometric size than the ground state; as V_I increases with the distance and the temperature, the bottomonium excited states suffer stronger dissociation than the ground state. However, when the heavy quark potential is close to the free energy, as in the cases of V_R = F and V_R = 0.2U + 0.8F, the attractive force in the wave packet becomes weak and the wave packet tends to expand outward, so that the ground-state component tends to transition into the excited states. With the weak heavy quark potential V_R = F in Fig. 2, the theoretical band of R^{1s}_{AA} lies below the experimental data, and the sequential suppression pattern of Υ(2s, 3s) is no longer evident.
For the p_T-differential R_AA in Fig. 3, there is a clear sequential suppression pattern in R_AA(1s, 2s, 3s) with V_R = U, and the theoretical bands can qualitatively explain the experimental data for Υ(1s, 2s, 3s). The slight increase of R_AA(p_T) with p_T arises because wave packets with larger momenta tend to escape from the hot medium in a shorter time, so their suppression is weaker. With a weak heavy quark potential (V_R = F), on the other hand, the quantum transitions in the wave packet make R^{3s}_{AA} larger than the value for Υ(2s). This is due to the expansion of the wave packet, which increases its overlap with the Υ(3s) wave function. The uncertainty in the imaginary potential is partially included in R_AA as a band; it does not change the conclusions about the sequential suppression and the strong heavy quark potential. Therefore, the sequential suppression pattern of bottomonium indicates a strong heavy quark potential in the hot medium. The exact r- and T-dependence of the parameter x in the parametrization of V_R is beyond the scope of the present work and is left for future studies.
FIG. 3. The nuclear modification factors of the bottomonium states Υ(1s, 2s, 3s) as a function of p_T in minimum-bias √s_NN = 5.02 TeV Pb-Pb collisions. The complex potentials are taken with the real part parametrized as V_R = xF + (1 - x)U, with x = (0, 0.5, 0.8, 1) in the respective panels. The bands of the theoretical results correspond to the uncertainty of the imaginary potential fitted above. The experimental data are cited from the ALICE [58] and CMS [59] Collaborations.

Next, we turn to the experimental data on bottomonium in √s_NN = 2.76 TeV Pb-Pb collisions. The experimental data in Fig. 4 are cited from the CMS Collaboration. At √s_NN = 2.76 TeV, the production cross sections of Υ(1s, 2s) are extracted to be dσ_Υ/dy = (30.3, 10) nb, respectively, based on the measurements in central-rapidity pp collisions [30,60], while the production cross section of Υ(3s) is extracted with the same ratio σ(Υ(2s))/σ(Υ(3s)) used in the 5.02 TeV collisions, which gives a differential cross section dσ_{Υ(3s)}/dy = 3.58 nb at central rapidity. For the p-wave states, the cross sections at 2.76 TeV are obtained with the same ratios σ(Υ(1p, 2p))/σ(Υ(1s)) used at 5.02 TeV. The direct cross sections before feed-down are then obtained with the same approach as in the 5.02 TeV case. The shadowing modification of the bottomonium production is calculated with the EPS09 NLO model, as at 5.02 TeV. The hot medium evolution is also updated with a new input energy density profile, where the maximum local temperature at the center of the medium is determined to be T_0(x_T = 0|b = 0) = 484 MeV.
In Fig. 4, the nuclear modification factors of the three bottomonium states Υ(1s, 2s, 3s) are calculated with different complex potentials, with the real part taken as the free energy and the internal energy, V_R = F and U. Both potentials seem able to explain the data for the ground and first excited states Υ(1s, 2s). However, the relation between R_AA(2s) and R_AA(3s) becomes very different in the weak (V_R = F) and strong (V_R = U) binding scenarios. With the weak heavy-quark potential, the bottomonium wave packets tend to expand outward in the hot medium; this results in transitions from 2s to 3s components and yields R_AA(3s) \geq R_AA(2s). When taking the strong potential V_R = U, the theoretical results show a clear pattern of sequential suppression among Υ(1s, 2s, 3s). More experimental data, especially on Υ(2s, 3s), can help to reveal the in-medium heavy-quark potential.
Finally, we turn to the bottomonium suppression in √s_NN = 200 GeV Au-Au collisions. At central rapidity in pp collisions, the bottomonium differential cross sections are fitted to be dσ_{Υ(1s,2s,3s)}/dy = (2.35, 0.77, 0.27) nb based on the Υ(1s, 2s) measurements from the STAR Collaboration, while the Υ(3s) cross section is scaled with the same ratio σ(Υ(2s))/σ(Υ(3s)) as at 5.02 TeV. The medium temperatures at the RHIC energy are much lower than at the LHC energies; the maximum temperature in central Au-Au collisions is determined to be T_0(x_T = 0|b = 0) = 390 MeV. The shadowing factor on bottomonium is also much smaller at the RHIC energy. With the weak and strong potentials, we calculate R_AA(1s, 2s, 3s) as functions of N_p and p_T in Fig. 5. Although the theoretical bands with either choice of V_R can explain the Υ(1s) data, the pattern of sequential suppression in Υ(2s, 3s) is only seen in the case of V_R = U. Employing different imaginary potentials does not change the suppression pattern of the three states Υ(1s, 2s, 3s). For the excited states, the theoretical bands slightly underestimate the experimental data but still stay within the uncertainty of the data points. This is due to the imaginary potential, which results in strong suppression of the excited states in the temperature region reached at the RHIC energy.
VI. UNCERTAINTY DISCUSSION
Now we discuss the sensitivity of the results to the parameters. As the final bottomonium yield is dominated by primordial production, the initial p_T distribution of bottomonium, Eq. (7), largely cancels between the numerator and denominator of R_AA(p_T); besides, R_AA shows a weak dependence on p_T in a fixed centrality. Therefore, R_AA depends only weakly on the choices of n and \langle p_T^2\rangle_{pp} in the initial p_T distribution. The rapidity-differential cross section dσ/dy used in both the numerator and denominator of R_AA also cancels. Another important ingredient is the imaginary potential: the degree of bottomonium suppression depends on its magnitude, and since the imaginary potential increases with the radius, the bottomonium excited states suffer stronger suppression than the ground state. In the previous sections, the bottomonium R_AA was calculated by taking the V_I labeled "Band 1". When we instead take the V_I with the one-standard-deviation error band ("Band 2"), the corresponding R_AA values are calculated and plotted in Figs. 6-7. As one can see, the uncertainty in R_AA becomes very large, preventing any solid conclusions; nevertheless, the sequential suppression pattern of Υ(1s, 2s, 3s) can still be seen when we take a strong heavy quark potential. This means the conclusion of this work is not changed by different choices of the imaginary potential.
VII. SUMMARY
In this work, we employ the time-dependent Schrödinger equation with different complex potentials to study the sequential suppression of the bottomonium states Υ(1s, 2s, 3s) in √s_NN = (5.02, 2.76) TeV Pb-Pb collisions and 200 GeV Au-Au collisions. The imaginary part of the heavy quark potential is fitted to lattice QCD calculations, where the uncertainty of the data points is taken into account with a band, while the real part of the potential lies between the free energy F and the internal energy U and is parametrized as a function of F and U. With different real potentials, the attractive force within the bottomonium wave packet becomes different. We find that the sequential suppression pattern of Υ(1s, 2s, 3s) is well explained when the real potential is strong and close to U, which can confine the wave packet in the hot medium. This conclusion is not changed by different choices of the imaginary potential. Therefore, the sequential suppression pattern of the bottomonium states is suggested as a probe of the strong in-medium heavy quark potential in heavy-ion collisions.
FIG. 2. The nuclear modification factors of the bottomonium states Υ(1s, 2s, 3s) as a function of the number of participants N_p in √s_NN = 5.02 TeV Pb-Pb collisions. The complex potentials are taken with the real part parametrized as V_R = xF + (1 - x)U, with x = (0, 0.5, 0.8, 1) in the respective panels. The bands of the theoretical results correspond to the uncertainty of the imaginary potential fitted above. The experimental data are cited from the ALICE [58] and CMS [59] Collaborations.

FIG. 4. The nuclear modification factors of the bottomonium states Υ(1s, 2s, 3s) as functions of N_p and p_T in √s_NN = 2.76 TeV Pb-Pb collisions. R_AA(p_T) is for minimum bias. The real part of the potential is taken as V_R = U and V_R = F, respectively. The bands of the theoretical results correspond to the uncertainty of the imaginary potential. The experimental data are cited from the CMS Collaboration [30].

FIG. 5. The nuclear modification factors of the bottomonium states Υ(1s, 2s, 3s) as functions of N_p and p_T in √s_NN = 200 GeV Au-Au collisions. R_AA(p_T) is for the 0-60% centrality class. The real part of the potential is taken as V_R = F and V_R = U, respectively. The bands of the theoretical results correspond to the uncertainty of the imaginary potential. The experimental data are cited from the STAR Collaboration [61,62].

FIG. 6. The nuclear modification factors of bottomonium Υ(1s, 2s, 3s) in √s_NN = 5.02 TeV Pb-Pb collisions. The imaginary potential labeled "Band 2" in Fig. 1 is taken. Other parameters are the same as in the previous figures.

FIG. 7. The nuclear modification factors of the bottomonium states Υ(1s, 2s, 3s) in √s_NN = 2.76 TeV Pb-Pb and 200 GeV Au-Au collisions. The imaginary potential labeled "Band 2" in Fig. 1 is taken. Other parameters are the same as in the previous figures.
TABLE I. Bottomonium production cross sections measured in √s_NN = 5.02 TeV pp collisions. σ_exp and σ_direct represent the cross sections with and without the feed-down process. The values of σ_exp are cited from [44-51].

State           Υ(1s)   χ_b(1p)   Υ(2s)   χ_b(2p)   Υ(3s)
σ_exp (nb)      57.6    33.51     19      29.42     6.8
σ_direct (nb)   37.97   44.2      18.27   37.68     8.21
[1] A. Bazavov, T. Bhattacharya, M. Cheng, C. DeTar, H. T. Ding, S. Gottlieb, R. Gupta, P. Hegde, U. M. Heller, F. Karsch, et al., Phys. Rev. D 85, 054503 (2012).
[2] T. Matsui and H. Satz, Phys. Lett. B 178, 416-422 (1986).
[3] H. Satz, J. Phys. G 32, R25 (2006).
[4] J. Zhao, K. Zhou, S. Chen and P. Zhuang, Prog. Part. Nucl. Phys. 114, 103801 (2020).
[5] B. Chen, M. Hu, H. Zhang and J. Zhao, Phys. Lett. B 802, 135271 (2020).
[6] J. Zhao, B. Chen and P. Zhuang, Phys. Rev. C 105, no.3, 034902 (2022).
[7] Y. Liu, B. Chen, N. Xu and P. Zhuang, Phys. Lett. B 697, 32-36 (2011).
[8] X. Du and R. Rapp, JHEP 03, 015 (2019).
[9] M. Strickland, Phys. Rev. Lett. 107, 132301 (2011).
[10] M. Strickland and D. Bazow, Nucl. Phys. A 879, 25-58 (2012).
[11] N. Brambilla, M. Á. Escobedo, M. Strickland, A. Vairo, P. Vander Griend and J. H. Weber, Phys. Rev. D 104, no.9, 094049 (2021).
[12] X. Yao, Int. J. Mod. Phys. A 36, no.20, 2130010 (2021).
[13] L. Yan, P. Zhuang and N. Xu, Phys. Rev. Lett. 97, 232301 (2006).
[14] B. Chen, T. Guo, Y. Liu and P. Zhuang, Phys. Lett. B 765, 323-327 (2017).
[15] X. Zhao and R. Rapp, Phys. Lett. B 664, 253-257 (2008).
[16] X. Du and R. Rapp, Nucl. Phys. A 943, 147-158 (2015).
[17] X. Yao, W. Ke, Y. Xu, S. A. Bass and B. Müller, JHEP 01, 046 (2021).
[18] X. Yao and T. Mehen, JHEP 02, 062 (2021).
[19] J. Zhao and P. Zhuang, Phys. Rev. C 105, no.6, 064907 (2022).
[20] J. P. Blaizot and M. A. Escobedo, Phys. Rev. D 98, no.7, 074007 (2018).
[21] J. P. Blaizot and M. Á. Escobedo, Phys. Rev. D 104, no.5, 054034 (2021).
[22] R. Katz and P. B. Gossiaux, Annals Phys. 368, 267-295 (2016).
[23] P. B. Gossiaux and R. Katz, Nucl. Phys. A 956, 737-740 (2016).
[24] Y. Akamatsu, Phys. Rev. D 91, no.5, 056002 (2015).
[25] N. Brambilla, M. A. Escobedo, J. Soto and A. Vairo, Phys. Rev. D 96, no.3, 034021 (2017).
[26] N. Brambilla, M. Á. Escobedo, M. Strickland, A. Vairo, P. Vander Griend and J. H. Weber, JHEP 05, 136 (2021).
[27] Y. Akamatsu, M. Asakawa, S. Kajimoto and A. Rothkopf, JHEP 07, 029 (2018).
[28] Z. Xie and B. Chen, arXiv:2205.13302 [nucl-th].
[29] A. M. Sirunyan et al. [CMS], Phys. Lett. B 790, 270-293 (2019).
[30] V. Khachatryan et al. [CMS], Phys. Lett. B 770, 357-379 (2017).
[31] B. B. Abelev et al. [ALICE], Phys. Lett. B 738, 361-372 (2014).
[32] A. Adare et al. [PHENIX], Phys. Rev. C 91, no.2, 024913 (2015).
[33] L. Adamczyk et al. [STAR], Phys. Lett. B 735, 127-137 (2014) [erratum: Phys. Lett. B 743, 537-541 (2015)].
[34] B. Chen and J. Zhao, Phys. Lett. B 772, 819-824 (2017).
[35] J. P. Blaizot, D. De Boni, P. Faccioli and G. Garberoglio, Nucl. Phys. A 946, 49-88 (2016).
[36] L. Wen, X. Du, S. Shi and B. Chen, Chin. Phys. C 46, 114102 (2022).
[37] X. Du, S. Y. F. Liu and R. Rapp, Phys. Lett. B 796, 20-25 (2019).
[38] A. Islam and M. Strickland, JHEP 21, 235 (2020).
[39] Y. Burnier and A. Rothkopf, Phys. Rev. D 95, no.5, 054511 (2017).
[40] A. Islam and M. Strickland, Phys. Lett. B 811, 135949 (2020).
[41] S. Shi, K. Zhou, J. Zhao, S. Mukherjee and P. Zhuang, Phys. Rev. D 105, no.1, 1 (2022).
[42] W. Zhao, H. J. Xu and H. Song, Eur. Phys. J. C 77, no.9, 645 (2017).
[43] M. Tanabashi et al. [Particle Data Group], Phys. Rev. D 98, no.3, 030001 (2018).
[44] S. Chatrchyan et al. [CMS], Phys. Lett. B 727, 101-125 (2013).
[45] G. Aad et al. [ATLAS], Phys. Rev. D 87, no.5, 052004 (2013).
[46] V. Khachatryan et al. [CMS], Phys. Rev. D 83, 112004 (2011).
[47] R. Aaij et al. [LHCb], Eur. Phys. J. C 72, 2025 (2012).
[48] R. Aaij et al. [LHCb], Eur. Phys. J. C 74, no.4, 2835 (2014).
[49] R. Aaij et al. [LHCb], Eur. Phys. J. C 74, no.10, 3092 (2014).
[50] J. Adam et al. [ALICE], Eur. Phys. J. C 76, no.4, 184 (2016).
[51] R. Aaij et al. [LHCb], JHEP 06, 064 (2013).
[52] D. Acosta et al. [CDF], Phys. Rev. Lett. 88, 161802 (2002).
[53] K. J. Eskola, H. Paukkunen and C. A. Salgado, JHEP 04, 065 (2009) [arXiv:0902.4154 [hep-ph]].
[54] Y. Liu, Z. Qu, N. Xu and P. Zhuang, J. Phys. G 37, 075110 (2010).
[55] B. Chen, Y. Liu, K. Zhou and P. Zhuang, Phys. Lett. B 726, 725-728 (2013).
[56] B. Chen, Chin. Phys. C 43, no.12, 124101 (2019).
[57] W. Shi, W. Zha and B. Chen, Phys. Lett. B 777, 399-405 (2018).
[58] S. Acharya et al. [ALICE], Phys. Lett. B 790, 89-101 (2019).
[59] A. M. Sirunyan et al. [CMS], Phys. Rev. Lett. 120, no.14, 142301 (2018).
[60] X. Du, R. Rapp and M. He, Phys. Rev. C 96, no.5, 054901 (2017).
[61] Z. Ye [STAR], Nucl. Phys. A 967, 600-603 (2017).
[
"The Academia Sinica Systems of Voice Conversion for VCC2020",
"The Academia Sinica Systems of Voice Conversion for VCC2020"
] | [
"Yu-Huai Peng \nInstitute of Information Science\nAcademia Sinica\nTaiwan\n",
"Cheng-Hung Hu \nResearch Center for Information Technology Innovation\nAcademia Sinica\nTaiwan\n",
"Alexander Kang \nResearch Center for Information Technology Innovation\nAcademia Sinica\nTaiwan\n",
"Hung-Shin Lee \nInstitute of Information Science\nAcademia Sinica\nTaiwan\n",
"Pin-Yuan Chen \nInstitute of Information Science\nAcademia Sinica\nTaiwan\n",
"Yu Tsao \nResearch Center for Information Technology Innovation\nAcademia Sinica\nTaiwan\n",
"Hsin-Min Wang \nInstitute of Information Science\nAcademia Sinica\nTaiwan\n"
] | [
"Institute of Information Science\nAcademia Sinica\nTaiwan",
"Research Center for Information Technology Innovation\nAcademia Sinica\nTaiwan",
"Research Center for Information Technology Innovation\nAcademia Sinica\nTaiwan",
"Institute of Information Science\nAcademia Sinica\nTaiwan",
"Institute of Information Science\nAcademia Sinica\nTaiwan",
"Research Center for Information Technology Innovation\nAcademia Sinica\nTaiwan",
"Institute of Information Science\nAcademia Sinica\nTaiwan"
] | [] | This paper describes the Academia Sinica systems for the two tasks of Voice Conversion Challenge 2020, namely voice conversion within the same language (Task 1) and cross-lingual voice conversion (Task 2). For both tasks, we followed the cascaded ASR+TTS structure, using phonetic tokens as the TTS input instead of the text or characters. For Task 1, we used the international phonetic alphabet (IPA) as the input of the TTS model. For Task 2, we used unsupervised phonetic symbols extracted by the vector-quantized variational autoencoder (VQ-VAE). In the evaluation, the listening test showed that our systems performed well in the VCC2020 challenge. | 10.21437/vcc_bc.2020-28 | [
"https://arxiv.org/pdf/2010.02669v1.pdf"
] | 222,140,910 | 2010.02669 | 4618c18f331738497a530fd31ac8eb1582b0a263 |
The Academia Sinica Systems of Voice Conversion for VCC2020
Yu-Huai Peng
Institute of Information Science
Academia Sinica
Taiwan
Cheng-Hung Hu
Research Center for Information Technology Innovation
Academia Sinica
Taiwan
Alexander Kang
Research Center for Information Technology Innovation
Academia Sinica
Taiwan
Hung-Shin Lee
Institute of Information Science
Academia Sinica
Taiwan
Pin-Yuan Chen
Institute of Information Science
Academia Sinica
Taiwan
Yu Tsao
Research Center for Information Technology Innovation
Academia Sinica
Taiwan
Hsin-Min Wang
Institute of Information Science
Academia Sinica
Taiwan
Index Terms: voice conversion challenge, IPA, ASR, TTS, VQVAE, Transformer
This paper describes the Academia Sinica systems for the two tasks of Voice Conversion Challenge 2020, namely voice conversion within the same language (Task 1) and cross-lingual voice conversion (Task 2). For both tasks, we followed the cascaded ASR+TTS structure, using phonetic tokens as the TTS input instead of the text or characters. For Task 1, we used the international phonetic alphabet (IPA) as the input of the TTS model. For Task 2, we used unsupervised phonetic symbols extracted by the vector-quantized variational autoencoder (VQ-VAE). In the evaluation, the listening test showed that our systems performed well in the VCC2020 challenge.
Introduction
Voice conversion (VC) is a means of converting one voice to another. It is a technique that modifies the speech waveform by converting non-linguistic information while retaining linguistic information. While there is a wide variety of types and applications of VC, the most typical one is speaker voice conversion, which converts speaker identity information while retaining linguistic information. To advance VC technology, the voice conversion challenge (VCC) has been held since 2016, and the VCC2020 challenge is the third in the series.
There are two tasks in VCC2020. The first task (Task 1) is VC within the same language, i.e., mono-lingual VC. The speech utterances of 4 source and 4 target speakers (consisting of both female and male speakers) from fixed corpora are used as training data. Each speaker utters a sentence set consisting of 70 sentences in English; only 20 sentences are parallel and the other 50 sentences are nonparallel between the source and target speakers. The second task (Task 2) is cross-lingual VC. The training data include the speech utterances of 6 target speakers (consisting of both female and male speakers) from fixed corpora and the speech utterances of the source speakers in the first task. Each target speaker utters another sentence set consisting of around 70 sentences in a different language: 2 target speakers utter in Finnish, 2 in German, and 2 in Mandarin. Other voices of the same source speakers in English are provided later as test data, consisting of around 25 sentences per speaker. Each participant needs to generate converted voices from them using the 16 conversion systems developed for the first task or the 24 conversion systems for the second task.
In this paper, we describe our systems for both tasks in VCC2020. For more detailed information about VCC2020, please refer to the official website 1 and [1].
System Descriptions
We implemented two VC systems for VCC2020, one for Task 1 and the other for Task 2.
Task 1: voice conversion within the same language
For the first task, we built the VC system with the Kaldi ASR [2] and ESPNet-TTS (Tacotron2 TTS) [3] toolkits, as shown in Figure 1. The two models were trained independently. Finally, we used the Parallel WaveGAN [4] vocoder to generate waveforms to enhance naturalness and similarity. We will describe each model in the following subsections.
Kaldi ASR
To train the ASR model, we followed the Kaldi recipe 2 of the LibriSpeech corpus [5]. The training process of our ASR system was divided into two parts: data processing and acoustic modeling.
For data processing of the LibriSpeech corpus, we first extracted two sets of MFCC features with different resolutions. The 13-dimensional MFCCs were used to train the GMM-based acoustic models, and the 40-dimensional MFCCs were used to train the i-vector extractor and the NN-based acoustic models. Then, we used the CMU pronunciation dictionary [6] to convert the English word transcriptions into the CMU pronunciation format, and mapped the CMU pronunciation symbols to the corresponding IPA symbols [7].
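The CMU-to-IPA conversion is a table lookup after stripping lexical stress digits. The sketch below shows the idea with a small illustrative excerpt of the table; it is not the full mapping used in the system.

```python
# Illustrative excerpt of a CMU-phone-to-IPA table (standard ARPAbet mappings)
CMU_TO_IPA = {
    "AA": "ɑ", "AE": "æ", "AH": "ʌ", "IY": "i", "UW": "u",
    "B": "b", "CH": "tʃ", "D": "d", "K": "k", "P": "p",
    "S": "s", "SH": "ʃ", "TH": "θ",
}

def cmu_to_ipa(cmu_phones):
    """Map a CMU phone sequence to IPA symbols, stripping stress digits."""
    return [CMU_TO_IPA.get(p.rstrip("012"), p) for p in cmu_phones]

# "speech" in CMU format: ['S', 'P', 'IY1', 'CH'] -> ['s', 'p', 'i', 'tʃ']
print(cmu_to_ipa(["S", "P", "IY1", "CH"]))
```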
For acoustic modeling, the training set of the LibriSpeech corpus was first used to train the GMM-based acoustic models and the i-vector extractor. Following the model structure and training steps of the recipe, we created the alignment and lattice based on the GMM-based acoustic models, performed the data cleanup procedure, and extracted the 400-dimensional i-vectors to train the NN-based acoustic models. For the NNbased acoustic models, we selected the "chain" model structure (i.e., TDNN-F) [8,9,10]. The 40-dimensional MFCCs and the 400-dimensional i-vectors were concatenated as the input of TDNN-F.
Tacotron2 TTS
To train the Tacotron2 TTS model [11], we followed the ESPNet recipe of the LibriTTS corpus 3 [12]. First, we extracted the 80-dimensional Mel-spectrogram and 512-dimensional x-vector from each utterance in the LibriTTS corpus. The speaker model used to extract the x-vectors was pre-trained with the Kaldi toolkit. As with the training of the Kaldi ASR system, the English word transcriptions were converted into IPA symbols.
Following the model structure and training steps of the recipe, we obtained the multi-speaker Tacotron2 TTS model, which converts an IPA symbol sequence to the 80-dimensional Mel-spectrogram conditioned on a 512-dimensional x-vector. Lastly, we finetuned the Tacotron2 TTS model with the training data and the average x-vector of the target speaker to obtain the speaker-dependent Tacotron2 TTS model. Note that utterance-dependent x-vectors were used to train the multi-speaker Tacotron2 TTS model, while the average x-vector of the target speaker was used to finetune the speaker-dependent Tacotron2 TTS model. In our preliminary experiments, this combination produced the best performance.
ParallelWaveGAN vocoder
For waveform synthesis, we used ParallelWaveGAN [4] as the vocoder. To train the ParallelWaveGAN, similar to the ASR+TTS/ParallelWaveGAN baseline system (T22) [13] (which uses the cascaded seq-to-seq ASR+TTS (Transformer) model for VC and ParallelWaveGAN as the vocoder), we followed the open-source ParallelWaveGAN recipe 4 . We combined the VCTK corpus [14] and the training set of VCC2020, and extracted the 80-dimensional Mel-spectrogram as the input.
Conversion
In the conversion phase, the 40-dimensional MFCCs and 400dimensional i-vectors of each input utterance were first extracted for the Kaldi ASR system to output the IPA symbol sequence. Then, the Tacotron2 TTS model of the target speaker was used to convert the IPA sequence to the 80-dimensional Mel-spectrogram of the target speaker. Finally, the Parallel-WaveGAN vocoder was used to convert the 80-dimensional Mel-spectrogram of the target speaker to the waveform.
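The whole Task 1 conversion is a composition of the three models. A minimal sketch, in which every callable is a placeholder standing in for the components described above (none of the names come from the actual code):

```python
def convert_task1(wav, extract_features, asr_decode, tts_synthesize, vocode):
    """Task 1 conversion pipeline as a composition of the three models."""
    mfcc, ivector = extract_features(wav)      # 40-dim MFCCs + 400-dim i-vector
    ipa_sequence = asr_decode(mfcc, ivector)   # Kaldi ASR -> IPA symbol sequence
    mel = tts_synthesize(ipa_sequence)         # target-speaker Tacotron2 -> 80-dim Mel
    return vocode(mel)                         # ParallelWaveGAN -> target waveform
```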
Task 2: cross-lingual voice conversion
For the second task, we built the VC system with an unsupervised phonetic symbol extractor and a Transformer TTS model [15], as shown in Figure 2. Because the ASR model trained on the English corpus could not deal well with non-English input speech, we applied a variational autoencoder (VAE) based method in our system to extract the phoneme-like (or character-like) speech representations. Many studies [16-20] have shown that VAE-based methods have the ability to decompose spectral features into speaker codes and phonetic codes. Therefore, we applied the VQVAE [17] structure in our system to extract the character-like phonetic symbol sequence as the input of the Transformer TTS model. Note that we replaced the VQVAE decoder with the Transformer TTS model, because the former can only generate output with the same length as the input, while the latter can model the duration. Consequently, our system can be regarded as a seq-to-seq system.

Figure 2: The flow chart of our system for Task 2.
VQVAE-based phonetic symbol extractor
Through the vector-quantization mechanism, VQVAE can quantize the latent vector representation obtained from the encoder into a discrete phonetic representation. The discrete phonetic representation is the index of the codeword closest to the latent vector in the codebook of the vector quantizer. In this way, the 80-dimensional Mel-spectrogram of an utterance is converted to a phoneme-like phonetic symbol sequence.
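The quantization step is a nearest-codeword lookup. A minimal sketch in NumPy, assuming 128-dimensional latent vectors as in Table 1 (the function name is ours):

```python
import numpy as np

def quantize(z_e, codebook):
    """Nearest-codeword lookup of the vector quantizer. z_e: (T, 128) latent
    vectors from the encoder; codebook: (K, 128) codewords. Returns the
    phonetic symbol IDs and the quantized vectors z_q."""
    # squared distances between every frame and every codeword
    d = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    ids = d.argmin(axis=1)     # index of the closest codeword per frame
    return ids, codebook[ids]
```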
As shown in Figure 2 (cf. Step 1), to train the VQVAE, we used the W-GAN mechanism with a gradient penalty to make the VQVAE perform better. The loss function L_G used to update the generator of the VQVAE is as follows,
L_G = p(x|z_q(x), y) + \|sg[z_e(x)] - e\|_2^2 + \|z_e(x) - sg[e]\|_2^2 - E(D(G(z_q(x), y))),    (1)
where x denotes the input feature, y denotes the speaker code, z_e denotes the latent vector representation, e denotes the codeword closest to the latent vector representation, z_q(x) = z_e(x) + sg[e - z_e(x)] denotes the quantized discrete phonetic representation, sg denotes the stop-gradient operator, G denotes the decoder, and D denotes the discriminator. The loss function L_D used to update the discriminator is as follows,
L_D = -E(D(x)) + E(D(G(z_q(x), y))) + gp(x, G(z_q(x), y)),    (2)
where gp is the gradient penalty. We used the ResNet architecture to form the encoder, the decoder, and the discriminator. In the ResNet architecture, as shown in Tables 1 to 3, there are 4 res-layers, each containing 3 residual blocks. In the residual block of the decoder, we first concatenated the input quantized latent vector representation of size B × 128 × T with the speaker code of size B × 128 × T along the time axis as the new input feature, and additionally applied skip-connections, which output skip features of size B × 80 × T early. After training the VQVAE, we only retained the encoder and vector quantizer to extract the phonetic symbol sequence from the input speech in the subsequent steps.
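A sketch of the training losses of Eqs. (1)-(2) in PyTorch, assuming Mel inputs of shape (B, 80, T). The reconstruction term is written here as an L2 loss, one common realization of the likelihood term in Eq. (1); encoder, quantize, G and D are placeholder modules, not the original implementation.

```python
import torch

def gradient_penalty(D, real, fake, y):
    """WGAN gradient penalty gp(x, G(z_q(x), y)) on interpolated samples."""
    eps = torch.rand(real.size(0), 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    d_out = D(x_hat, y)
    grads = torch.autograd.grad(d_out.sum(), x_hat, create_graph=True)[0]
    return ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()

def vqvae_wgan_losses(x, y, encoder, quantize, G, D, beta=1.0):
    """Generator and discriminator losses following Eqs. (1)-(2)."""
    z_e = encoder(x)
    z_q, e = quantize(z_e)        # straight-through quantized output and codeword
    x_rec = G(z_q, y)
    loss_G = (torch.mean((x_rec - x) ** 2)                    # reconstruction
              + torch.mean((z_e.detach() - e) ** 2)           # codebook term, sg[z_e]
              + beta * torch.mean((z_e - e.detach()) ** 2)    # commitment term, sg[e]
              - D(x_rec, y).mean())                           # adversarial term
    loss_D = (-D(x, y).mean() + D(x_rec.detach(), y).mean()
              + gradient_penalty(D, x, x_rec.detach(), y))
    return loss_G, loss_D
```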
Transformer TTS
To train the Transformer TTS model, we followed the ESPNet recipe of the LibriTTS corpus, but replaced the training corpus with a combination of the VCTK corpus and the VCC2020 training set. First, we extracted the 80-dimensional Mel-spectrogram and 512-dimensional x-vector from each utterance in the training corpus. The speaker model used to extract the x-vectors was pre-trained with the Kaldi toolkit. The phoneme-like phonetic symbol sequence of each training utterance was extracted using the VQVAE encoder and vector quantizer.
Following the model structure and training steps of the recipe, we obtained the multi-speaker Transformer TTS model, which converts the phoneme-like phonetic symbol sequence to the 80-dimensional Mel-spectrogram conditioned on a 512-dimensional x-vector.
Conversion
In the conversion phase, the 80-dimensional Mel-spectrogram of each source utterance was first passed to the VQVAE encoder and vector quantizer to generate the phoneme-like phonetic symbol sequence. Then, the Transformer TTS model of the target speaker was used to convert the phoneme-like phonetic symbol sequence to the 80-dimensional Mel-spectrogram of the target speaker. Finally, the ParallelWaveGAN vocoder was used to convert the 80-dimensional Mel-spectrogram of the target speaker to the waveform.
Experiment Results
As stated in the final report of VCC2020 [1], all submitted systems were grouped according to their performance. The systems in each group did not differ significantly in performance.
According to the evaluation results of VCC2020 [1], our system for Task 1 ranked in the fifth group in terms of naturalness (31 systems were ranked and divided into 18 groups). In terms of similarity to the target speaker, our system for Task 1 ranked in the first group (31 systems were divided into 9 groups). For Task 2, our system ranked in the fifth group in terms of naturalness (28 systems were divided into 15 groups) and in the sixth group in terms of similarity to the target speaker (28 systems were divided into 13 groups). For Task 1, we presented a new ASR+TTS system. In recent studies, ASR+TTS systems have achieved good performance, so we tried to make some improvements on the basis of the baseline ASR+TTS system (T22) [13]. We built our ASR system based on IPA symbols. Our goal was not only to use this ASR+TTS system to accomplish Task 1, but also to apply the same ASR+TTS system to Task 2. However, we found that the model could not achieve consistent performance for some VC pairs in the cross-lingual VC task. One possible reason is that we did not have enough training data to train our ASR+TTS system for the cross-lingual VC task. It turned out that our ASR+TTS system performed as well as the baseline ASR+TTS system (T22) in the mono-lingual VC task. Note that the baseline ASR+TTS system is a cascade of seq-to-seq ASR and Transformer TTS models implemented using the end-to-end speech processing toolkit "ESPNet" [3] 5. According to the evaluation results of VCC2020, in Task 1, our system roughly ranked in the top 30% in terms of naturalness and similarity.
For Task 2, we modified the traditional VQVAE VC system and replaced the decoder with a Transformer TTS model. In our preliminary experiments, we found that replacing the decoder with the Transformer TTS model could improve the naturalness, while the similarity remained almost the same. This result is in line with the VCC2020 evaluation results. Comparing our system with the two VQVAE-based systems (T19 and T20) in [1], we can see that in naturalness, our system is comparable to T20 but better than T19, while in similarity, our system is worse than T19 but better than T20. In addition, comparing our system with the VAE-based baseline system (T16), i.e., the CycleVAE VC system with ParallelWaveGAN as the vocoder [21,22], we can see that our system is better than T16 in naturalness, but worse than T16 in similarity. In the naturalness test, our system was ranked in the fifth group with a MOS score of 3.00, while the baseline T16 system was ranked in the ninth group with a MOS score of 2.56. In the similarity test, our system was ranked in the sixth group with a score of 2.41, while the baseline T16 system was ranked in the fourth group with a score of 2.69. In the challenge, our MOS score in Task 2 was in the upper range, about the top 30%; however, our system ranked in the middle in terms of similarity. In terms of ranking, our Task 2 system was not as good as our Task 1 system. There are two possible reasons. First, we did not optimize the VQVAE encoder to suit the task. Second, the vanilla VQVAE model we used has its own performance limitations. We will try to improve our system on these two issues in the future.
Conclusions
Ideally, the ASR+TTS model can perfectly retain the linguistic information and synthesize it, together with new speaker identity information, into the target speech. However, according to our preliminary experiments, the ASR+TTS model only performed well in the mono-lingual VC task, not in the cross-lingual VC task. Therefore, we built an alternative VQVAE+TTS model for Task 2, expecting the encoder of the VQVAE model to take over the role of the ASR. The difference between the VQVAE encoder and the ASR is that the output of the ASR is a sequence of human-defined symbols, such as words and phones (in IPA or phonetic posteriorgrams), while the output of the VQVAE encoder is a sequence of tokens automatically learned by the machine. The codewords in the codebook learned by the VQ part of the VQVAE can be regarded as the recognition units of an ASR model. According to our experiments, the VQVAE+TTS model did achieve better performance than the ASR+TTS model in the cross-lingual task. However, as discussed in Section 3, there are still some problems to be solved.
Figure 1: The flow chart of our system for Task 1.
1 http://www.vc-challenge.org/
4 https://github.com/kan-bayashi/ParallelWaveGAN
Table 1: The architecture and specifications of the Encoder, where res1 to res4 denote 4 ResNet-based layers, and B and T represent the batch size and temporal length, respectively.

Layer    | Feature Size  | Activation  | Normalization
input    | B × 80 × T    | -           | -
res1     | B × 256 × T   | leaky ReLU  | layer norm.
res2     | B × 128 × T   | leaky ReLU  | layer norm.
res3     | B × 128 × T   | leaky ReLU  | layer norm.
res4     | B × 128 × T   | leaky ReLU  | layer norm.
conv1d   | B × 128 × T   | None        | None
Table 2: The architecture and specifications of the Decoder, where res1 to res4 denote 4 ResNet-based layers, skip-sum denotes the summation of all skip features, and B and T represent the batch size and temporal length, respectively.

Layer    | Feature Size  | Activation  | Normalization
input    | B × 128 × T   | -           | -
res1     | B × 128 × T   | GLU         | layer norm.
res2     | B × 128 × T   | GLU         | layer norm.
res3     | B × 256 × T   | GLU         | layer norm.
res4     | B × 80 × T    | GLU         | layer norm.
skip-sum | B × 80 × T    | None        | None
conv1d   | B × 80 × T    | GLU         | layer norm.
conv1d   | B × 80 × T    | None        | None
Table 3: The architecture and specifications of the Discriminator, where res1 to res4 denote 4 ResNet-based layers, and B and T represent the batch size and temporal length, respectively.

Layer    | Feature Size  | Activation  | Normalization
input    | B × 80 × T    | -           | -
res1     | B × 256 × T   | leaky ReLU  | layer norm.
res2     | B × 128 × T   | leaky ReLU  | layer norm.
res3     | B × 64 × T    | leaky ReLU  | layer norm.
res4     | B × 32 × T    | leaky ReLU  | layer norm.
conv1d   | B × 1 × T     | None        | None
2 https://github.com/kaldi-asr/kaldi/tree/master/egs/librispeech
3 https://github.com/espnet/espnet/tree/master/egs/libritts/tts1
5 https://github.com/espnet/espnet/tree/master/egs/vcc20
[1] Y. Zhao, W.-C. Huang, X. Tian, J. Yamagishi, R. K. Das, T. Kinnunen, Z. Ling, and T. Toda, "Voice conversion challenge 2020: Intra-lingual semi-parallel and cross-lingual voice conversion," in Proc. ISCA Joint Workshop for the Blizzard Challenge and Voice Conversion Challenge 2020, 2020.
[2] D. Povey, A. Ghoshal, G. Boulianne, L. Burget, O. Glembek, N. Goel, M. Hannemann, P. Motlíček, Y. Qian, P. Schwarz, J. Silovský, G. Stemmer, and K. Veselý, "The Kaldi speech recognition toolkit," in Proc. ASRU, 2011.
[3] T. Hayashi, R. Yamamoto, K. Inoue, T. Yoshimura, S. Watanabe, T. Toda, K. Takeda, Y. Zhang, and X. Tan, "ESPnet-TTS: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit," in Proc. ICASSP, 2020.
[4] R. Yamamoto, E. Song, and J. M. Kim, "Parallel WaveGAN: A fast waveform generation model based on generative adversarial networks with multi-resolution spectrogram," in Proc. ICASSP, 2020.
[5] V. Panayotov, G. Chen, D. Povey, and S. Khudanpur, "LibriSpeech: An ASR corpus based on public domain audio books," in Proc. ICASSP, 2015.
[6] Carnegie Mellon University, CMU pronouncing dictionary. [Online]. Available: http://www.speech.cs.cmu.edu/cgi-bin/cmudict
[7] International Phonetic Association, Handbook of the International Phonetic Association: A Guide to the Use of the International Phonetic Alphabet, 1999.
[8] K. Vesely, A. Ghoshal, L. Burget, and D. Povey, "Sequence-discriminative training of deep neural networks," in Proc. Interspeech, 2013.
[9] D. Povey, V. Peddinti, D. Galvez, P. Ghahrmani, V. Manohar, X. Na, Y. Wang, and S. Khudanpur, "Purely sequence-trained neural networks for ASR based on lattice-free MMI," in Proc. Interspeech, 2016.
[10] D. Povey, G. Cheng, Y. Wang, K. Li, H. Xu, M. Yarmohamadi, and S. Khudanpur, "Semi-orthogonal low-rank matrix factorization for deep neural networks," in Proc. Interspeech, 2018.
[11] J. Shen, R. Pang, R. J. Weiss, M. Schuster, N. Jaitly, Z. Yang, Z. Chen, Y. Zhang, Y. Wang, R. Skerrv-Ryan, R. A. Saurous, Y. Agiomvrgiannakis, and Y. Wu, "Natural TTS synthesis by conditioning WaveNet on Mel spectrogram predictions," in Proc. ICASSP, 2018.
[12] H. Zen, V. Dang, R. Clark, Y. Zhang, R. J. Weiss, Y. Jia, Z. Chen, and Y. Wu, "LibriTTS: A corpus derived from LibriSpeech for text-to-speech," 2019.
[13] W.-C. Huang, T. Hayashi, S. Watanabe, and T. Toda, "The sequence-to-sequence baseline for the voice conversion challenge 2020: Cascading ASR and TTS," in Proc. ISCA Joint Workshop for the Blizzard Challenge and Voice Conversion Challenge 2020, 2020.
[14] C. Veaux, J. Yamagishi, and K. MacDonald, "CSTR VCTK corpus: English multi-speaker corpus for CSTR voice cloning toolkit," 2012. [Online]. Available: https://doi.org/10.7488/ds/1994
[15] N. Li, S. Liu, Y. Liu, S. Zhao, and M. Liu, "Neural speech synthesis with Transformer network," in Proc. AAAI Conference on Artificial Intelligence, 2019.
[16] C. C. Hsu, H. T. Hwang, Y. C. Wu, Y. Tsao, and H. M. Wang, "Voice conversion from non-parallel corpora using variational auto-encoder," in Proc. APSIPA ASC, 2016.
[17] A. van den Oord, O. Vinyals, and K. Kavukcuoglu, "Neural discrete representation learning," in Proc. NIPS, 2017.
[18] C. C. Hsu, H. T. Hwang, Y. C. Wu, Y. Tsao, and H. M. Wang, "Voice conversion from unaligned corpora using variational autoencoding Wasserstein generative adversarial networks," in Proc. Interspeech, 2017.
[19] K. Qian, Y. Zhang, S. Chang, X. Yang, and M. Hasegawa-Johnson, "AUTOVC: Zero-shot voice style transfer with only autoencoder loss," in Proc. ICML, 2019.
[20] W. C. Huang, H. Luo, H. T. Hwang, C. C. Lo, Y. H. Peng, Y. Tsao, and H. M. Wang, "Unsupervised representation disentanglement using cross domain features and adversarial learning in variational autoencoder based voice conversion," IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 4, no. 4, pp. 468-479, 2020.
[21] P. L. Tobing, Y.-C. Wu, T. Hayashi, K. Kobayashi, and T. Toda, "Non-parallel voice conversion with cyclic variational autoencoder," in Proc. Interspeech, 2019.
[22] P. L. Tobing, Y.-C. Wu, and T. Toda, "The baseline system of voice conversion challenge 2020 with cyclic variational autoencoder and parallel WaveGAN," in Proc. ISCA Joint Workshop for the Blizzard Challenge and Voice Conversion Challenge 2020, 2020.
Fast Differentiable Matrix Square Root and Inverse Square Root

Yue Song, Member, IEEE, Nicu Sebe, Senior Member, IEEE, and Wei Wang, Member, IEEE
Abstract: Computing the matrix square root and its inverse in a differentiable manner is important in a variety of computer vision tasks. Previous methods either adopt the Singular Value Decomposition (SVD) to explicitly factorize the matrix or use the Newton-Schulz iteration (NS iteration) to derive the approximate solution. However, both methods are not computationally efficient enough in either the forward pass or the backward pass. In this paper, we propose two more efficient variants to compute the differentiable matrix square root and the inverse square root. For the forward propagation, one method is to use Matrix Taylor Polynomial (MTP), and the other method is to use Matrix Padé Approximants (MPA). The backward gradient is computed by iteratively solving the continuous-time Lyapunov equation using the matrix sign function. A series of numerical tests show that both methods yield considerable speed-up compared with the SVD or the NS iteration. Moreover, we validate the effectiveness of our methods in several real-world applications, including de-correlated batch normalization, second-order vision transformer, global covariance pooling for large-scale and fine-grained recognition, attentive covariance pooling for video recognition, and neural style transfer. The experiments demonstrate that our methods can also achieve competitive and even slightly better performances. Code is available at https://github.com/KingJamesSong/FastDifferentiableMatSqrt.

Index Terms: Differentiable Matrix Decomposition, Decorrelated Batch Normalization, Global Covariance Pooling, Neural Style Transfer
INTRODUCTION
Consider a positive semi-definite matrix $\mathbf{A}$. The principal square root $\mathbf{A}^{\frac{1}{2}}$ and the inverse square root $\mathbf{A}^{-\frac{1}{2}}$ are of practical interest mainly because certain desired spectral properties can be obtained by such transformations. An exemplary illustration is given in Fig. 1. As can be seen, the matrix square root can shrink/stretch the feature variances along the directions of the principal components, which serves as an effective spectral normalization for covariance matrices. The inverse square root, on the other hand, can be used to whiten the data, i.e., make the data have unit variance in each dimension. These appealing spectral properties are very useful in many computer vision applications. In Global Covariance Pooling (GCP) [1], [2], [3], [4] and other related high-order representation methods [5], [6], the matrix square root is often used to normalize the high-order feature, which benefits classification tasks such as general visual recognition [2], [3], [5], fine-grained visual categorization [7], and video action recognition [6]. The inverse square root is used as the whitening transform to eliminate feature correlation, which is widely applied in decorrelated Batch Normalization (BN) [8], [9], [10] and other related models that involve the whitening transform [11], [12]. In the field of neural style transfer, both the matrix square root and its inverse are adopted to perform the successive Whitening and Coloring Transform (WCT) to transfer the style information with better generation fidelity [13], [14], [15].
Fig. 1: Exemplary visualization of the matrix square root and its inverse. Given the original data $\mathbf{X}\in\mathbb{R}^{2\times n}$, the matrix square root performs an effective spectral normalization by stretching the data along the axis of small variances and squeezing the data in the direction with large variances, while the inverse square root transforms the data into the uncorrelated structure that has unit variance in all directions.

To compute the matrix square root, the standard method is via Singular Value Decomposition (SVD). Given the symmetric matrix $\mathbf{A}$, its matrix square root is computed as:

$$\mathbf{A}^{\frac{1}{2}} = (\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{T})^{\frac{1}{2}} = \mathbf{U}\mathbf{\Lambda}^{\frac{1}{2}}\mathbf{U}^{T} \quad (1)$$
where U is the eigenvector matrix, and Λ is the diagonal eigenvalue matrix. As derived by Ionescu et al. [16], the partial derivative of the eigendecomposition is calculated as:
$$\frac{\partial l}{\partial \mathbf{A}} = \mathbf{U}\left(\mathbf{K}^{T}\odot\left(\mathbf{U}^{T}\frac{\partial l}{\partial \mathbf{U}}\right) + \left(\frac{\partial l}{\partial \mathbf{\Lambda}}\right)_{\mathrm{diag}}\right)\mathbf{U}^{T} \quad (2)$$
where $l$ is the loss function, $\odot$ denotes the element-wise product, and $(\cdot)_{\mathrm{diag}}$ represents the operation of setting the off-diagonal entries to zero. Despite the long-studied theories and well-developed algorithms of the SVD, there exist two obstacles when integrating it into deep learning frameworks.
One issue is the back-propagation instability. For the matrix $\mathbf{K}$ defined in eq. (2), its off-diagonal entry is $K_{ij}=1/(\lambda_i-\lambda_j)$, where $\lambda_i$ and $\lambda_j$ are the involved eigenvalues. When the two eigenvalues are close and small, the gradient is very likely to explode, i.e., $K_{ij}\to\infty$. This issue has been solved by methods that use approximation techniques to estimate the gradients [4], [17], [18]. The other problem is the expensive time cost of the forward eigendecomposition. As the SVD is not well supported by GPUs [19], performing the eigendecomposition on deep learning platforms is rather time-consuming, and incorporating the SVD into deep models could add an extra burden to the training process. Particularly for batched matrices, modern deep learning frameworks, such as TensorFlow and PyTorch, provide limited optimization for matrix decompositions within a mini-batch: they inevitably use a for-loop to conduct the SVD one matrix after another. However, how to efficiently perform the SVD in the context of deep learning has not been touched by the research community.
To avoid explicit eigendecomposition, one commonly used alternative is the Newton-Schulz iteration (NS iteration) [20], [21], which modifies the ordinary Newton iteration by replacing the matrix inverse while preserving the quadratic convergence. Compared with the SVD, the NS iteration is rich in matrix multiplications and more GPU-friendly, so this technique has been widely used to approximate the matrix square root in different applications [1], [3], [9]. The forward computation relies on the following coupled iterations:
$$\mathbf{Y}_{k+1} = \frac{1}{2}\mathbf{Y}_k(3\mathbf{I} - \mathbf{Z}_k\mathbf{Y}_k),\qquad \mathbf{Z}_{k+1} = \frac{1}{2}(3\mathbf{I} - \mathbf{Z}_k\mathbf{Y}_k)\mathbf{Z}_k \quad (3)$$
where $\mathbf{Y}_k$ and $\mathbf{Z}_k$ converge to $\mathbf{A}^{\frac{1}{2}}$ and $\mathbf{A}^{-\frac{1}{2}}$, respectively. Since the NS iteration only converges locally (i.e., when $\|\mathbf{A}\|_2<1$), we need to pre-normalize the initial matrix and post-compensate the resultant approximation as $\mathbf{Y}_0=\frac{\mathbf{A}}{\|\mathbf{A}\|_F}$ and $\mathbf{A}^{\frac{1}{2}}=\sqrt{\|\mathbf{A}\|_F}\,\mathbf{Y}_k$. Each forward iteration involves 3 matrix multiplications, which is more efficient than the forward pass of the SVD. However, the backward pass of the NS iteration takes 14 matrix multiplications per iteration. Considering that the NS iteration often takes 5 iterations to achieve reasonable performances [3], [9], its backward pass is much more time-consuming than the backward algorithm of the SVD. The speed improvement could be larger if a more efficient backward algorithm were developed.
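To make the forward recursion concrete, the following is a minimal PyTorch sketch of the coupled NS iteration in eq. (3), including the pre-normalization and post-compensation described above; the function name and the default iteration count are illustrative choices of ours, not the authors' released API.

```python
import torch

def ns_iteration_sqrt(A: torch.Tensor, num_iters: int = 5):
    """Approximate A^(1/2) and A^(-1/2) of a positive definite matrix A."""
    norm = torch.norm(A, p='fro')
    I = torch.eye(A.shape[0], dtype=A.dtype, device=A.device)
    Y = A / norm                 # pre-normalization so the iteration converges
    Z = I.clone()
    for _ in range(num_iters):
        T = 0.5 * (3.0 * I - Z @ Y)
        Y, Z = Y @ T, T @ Z      # Y_k -> A^(1/2), Z_k -> A^(-1/2) (normalized A)
    return torch.sqrt(norm) * Y, Z / torch.sqrt(norm)   # post-compensation
```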
To address the drawbacks of the SVD and the NS iteration, i.e., the low efficiency in either the forward or the backward pass, we derive two methods that are efficient in both directions to compute the differentiable matrix square root and its inverse. In the forward pass (FP), we propose using the Matrix Taylor Polynomial (MTP) and the Matrix Padé Approximants (MPA) to approximate the matrix square root. The former approach is slightly faster, but the latter is more numerically accurate. Both methods yield considerable speed-up compared with the SVD or the NS iteration in the forward computation, and both can also be used to approximate the inverse square root without any additional computational cost. For the backward pass (BP), we consider the gradient function as a Lyapunov equation and propose an iterative solution using the matrix sign function. The backward pass costs fewer matrix multiplications and is more computationally efficient than the NS iteration. Our proposed iterative Lyapunov solver applies to both the matrix square root and the inverse square root; the only difference is that deriving the gradient of the inverse square root requires 3 more matrix multiplications than computing that of the matrix square root.
Through a series of numerical tests, we show that the proposed MTP-Lya and MPA-Lya deliver consistent speed improvements for different batch sizes, matrix dimensions, and hyper-parameters (e.g., degrees of the power series to match and iteration times). Moreover, our proposed MPA-Lya consistently gives a better approximation of the matrix square root and its inverse than the NS iteration. Besides the numerical tests, we conduct extensive experiments in a number of computer vision applications, including decorrelated batch normalization, second-order vision transformer, global covariance pooling for large-scale and fine-grained image recognition, attentive global covariance pooling for video action recognition, and neural style transfer. Our methods achieve competitive performances against the SVD and the NS iteration with the least amount of time overhead. Our MPA is suitable in use cases where high precision is needed, while our MTP works in applications where accuracy is less demanded but efficiency is more important. The contributions of the paper are twofold:
• We propose two fast methods that compute the differentiable matrix square root and the inverse square root. The forward propagation relies on the matrix Taylor polynomial or matrix Padé approximant, while an iterative backward gradient solver is derived from the Lyapunov equation using the matrix sign function.
• Our proposed algorithms are validated by a series of numerical tests and several real-world computer vision applications. The experimental results demonstrate that our methods have a faster calculation speed and also have very competitive performances.
This paper is an expanded version of [22]. In the conference paper [22], the proposed fast algorithms only apply to the matrix square root $\mathbf{A}^{\frac{1}{2}}$. For the application of the inverse square root $\mathbf{A}^{-\frac{1}{2}}$, we had to solve a linear system or compute a matrix inverse. However, both techniques are not GPU-efficient enough and could add extra computational burdens to the training. In this extended manuscript, we target this drawback and extend our algorithm to the case of the inverse square root, which avoids the expensive computation and allows for faster calculation in more application scenarios. Compared with computing the matrix square root, computing the inverse square root consumes the same time complexity in the FP and requires 3 more matrix multiplications in the BP. The paper thus presents a complete solution to the efficiency issue of the differentiable spectral layer. Besides the algorithm extension, our method is validated in more computer vision applications: global covariance pooling for image/video recognition and neural style transfer. We also shed light on the peculiar incompatibility of the NS iteration and the Lyapunov solver discussed in Sec. 5.7.3.
The rest of the paper is organized as follows: Sec. 2 describes the computational methods and applications of differentiable matrix square root and its inverse. Sec. 3 introduces our method that computes the end-to-end matrix square root, and Sec. 4 presents the extension of our method to the inverse square root. Sec. 5 provides the experimental results, the ablation studies, and some in-depth analysis. Finally, Sec. 6 summarizes the conclusions.
RELATED WORK
In this section, we recap the previous approaches that compute the differentiable matrix square root and the inverse square root, followed by a discussion on the usage in some applications of deep learning and computer vision.
Computational Methods
Ionescu et al. [16], [23] first formulate the theory of matrix back-propagation, making it possible to integrate a spectral meta-layer into neural networks. Existing approaches that compute the differentiable matrix square root and its inverse are mainly based on the SVD or NS iteration. The SVD calculates the accurate solution but suffers from backward instability and expensive time cost, whereas the NS iteration computes the approximate solution but is more GPU-friendly. For the backward algorithm of SVD, several methods have been proposed to resolve this gradient explosion issue [4], [17], [18], [24], [25]. Wang et al. [17] propose to apply Power Iteration (PI) to approximate the SVD gradient. Recently, Song et al. [4] propose to rely on Padé approximants to closely estimate the backward gradient of SVD.
To avoid explicit eigendecomposition, Lin et al. [1] propose to substitute the SVD with the NS iteration. Following this work, Li et al. [2] and Huang et al. [8] adopt the NS iteration in the tasks of global covariance pooling and decorrelated batch normalization, respectively. For the backward pass of the differentiable matrix square root, Lin et al. [1] also suggest viewing the gradient function as a Lyapunov equation. However, their proposed exact solution is infeasible to compute in practice, and the suggested Bartels-Stewart algorithm [26] requires explicit eigendecomposition or Schur decomposition, which is again not GPU-friendly. By contrast, our proposed iterative solution using the matrix sign function is more computationally efficient and achieves comparable performances against the Bartels-Stewart algorithm (see the ablation study in Sec. 5.7.3).
Applications
Global Covariance Pooling
One successful application of the differentiable matrix square root is the Global Covariance Pooling (GCP), which is a meta-layer inserted before the FC layer of deep models to compute the matrix square root of the feature covariance. Equipped with the GCP meta-layers, existing deep models have achieved state-of-the-art performances on both generic and fine-grained visual recognition [1], [2], [3], [4], [7], [27], [28], [29]. Inspired by recent advances of transformers [30], Xie et al. [5] integrate the GCP meta-layer into the vision transformer [31] to exploit the second-order statistics of the high-level visual tokens, which solves the issue that vision transformers need pre-training on ultra-large-scale datasets. More recently, Gao et al. [6] propose an attentive and temporal-based GCP model for video action recognition.
Decorrelated Batch Normalization
Another line of research proposes to use ZCA whitening, which applies the inverse square root of the covariance to whiten the feature, as an alternative scheme for the standard batch normalization [32]. The whitening procedure, a.k.a decorrelated batch normalization, does not only standardize the feature but also eliminates the data correlation. The decorrelated batch normalization can improve both the optimization efficiency and generalization ability of deep neural networks [8], [9], [10], [11], [12], [33], [34], [35], [36].
Whitening and Coloring Transform
The WCT [13] is also an active research field where the differentiable matrix square root and its inverse are widely used. In general, the WCT successively performs the whitening transform (using the inverse square root) and the coloring transform (using the matrix square root) on the multi-scale features to preserve the content of the current image while carrying the style of another image. During the past few years, the WCT methods have achieved remarkable progress in universal style transfer [13], [37], [38], domain adaptation [15], [39], and image translation [14], [40].
Besides the three main applications discussed above, there are still some minor applications, such as semantic segmentation [41] and super-resolution [42].

FAST DIFFERENTIABLE MATRIX SQUARE ROOT

Table 1 summarizes the notation we will use from now on. This section presents the forward pass and the backward propagation of our fast differentiable matrix square root. For the inverse square root, we introduce the derivation in Sec. 4.
Forward Pass
Matrix Taylor Polynomial
We begin with motivating the Taylor series for the scalar case. Consider the following power series:
$$(1-z)^{\frac{1}{2}} = 1 - \sum_{k=1}^{\infty}\left|\binom{\frac{1}{2}}{k}\right| z^{k} \quad (4)$$
where $\left|\binom{\frac{1}{2}}{k}\right|$ denotes the absolute value of the binomial coefficients that involve fractions, and the series converges when $|z|<1$ according to the Cauchy root test. For the matrix case, the power series can be similarly defined by:
$$(\mathbf{I}-\mathbf{Z})^{\frac{1}{2}} = \mathbf{I} - \sum_{k=1}^{\infty}\left|\binom{\frac{1}{2}}{k}\right| \mathbf{Z}^{k} \quad (5)$$
where $\mathbf{I}$ is the identity matrix. Substituting $\mathbf{Z}$ with $(\mathbf{I}-\mathbf{A})$, we obtain:
$$\mathbf{A}^{\frac{1}{2}} = \mathbf{I} - \sum_{k=1}^{\infty}\left|\binom{\frac{1}{2}}{k}\right|(\mathbf{I}-\mathbf{A})^{k} \quad (6)$$
Similar to the scalar case, the matrix power series converges only if $\|\mathbf{I}-\mathbf{A}\|_p<1$, where $\|\cdot\|_p$ denotes any vector-induced matrix norm. To circumvent this issue, we can first pre-normalize the matrix $\mathbf{A}$ by dividing by $\|\mathbf{A}\|_F$. This guarantees the convergence, as $\|\mathbf{I}-\frac{\mathbf{A}}{\|\mathbf{A}\|_F}\|_p<1$ is always satisfied. Afterwards, the matrix square root $\mathbf{A}^{\frac{1}{2}}$ is post-compensated by multiplying by $\sqrt{\|\mathbf{A}\|_F}$. Integrated with these two operations, eq. (6) can be re-formulated as:
$$\mathbf{A}^{\frac{1}{2}} = \sqrt{\|\mathbf{A}\|_F}\cdot\left(\mathbf{I} - \sum_{k=1}^{\infty}\left|\binom{\frac{1}{2}}{k}\right|\left(\mathbf{I}-\frac{\mathbf{A}}{\|\mathbf{A}\|_F}\right)^{k}\right) \quad (7)$$
Truncating the series to a certain degree $K$ yields the MTP approximation of the matrix square root; for the MTP of degree $K$, $K{-}1$ matrix multiplications are needed. The MTP enjoys fast calculation, but it converges uniformly and sometimes suffers from the so-called "hump phenomenon": the intermediate terms of the series grow quickly but cancel each other in the summation, which results in a large approximation error, and expanding the series to a higher degree does not solve this issue. The MPA, which adopts two polynomials of smaller degree to construct a rational approximation, is able to avoid this caveat. To visually illustrate this impact, we depict the approximation of the scalar square root in Fig. 2. The Padé approximants consistently deliver a better approximation than the NS iteration and the Taylor polynomial. In particular, when the input is close to the convergence boundary ($z{=}1$), where the NS iteration and Taylor polynomial suffer from a larger approximation error, our Padé approximants still present a reasonable estimation. This superior property also generalizes to the matrix case.
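As a concrete illustration, here is a hedged sketch of the truncated MTP forward pass of eq. (7) in PyTorch; the helper name and the default degree (11, the value chosen in the experiments) are ours, and the binomial coefficients are generated by a simple recurrence rather than a lookup table.

```python
import torch

def mtp_sqrt(A: torch.Tensor, degree: int = 11):
    """Matrix Taylor Polynomial approximation of A^(1/2), eq. (7)."""
    norm = torch.norm(A, p='fro')
    I = torch.eye(A.shape[0], dtype=A.dtype, device=A.device)
    Z = I - A / norm                        # matrix whose powers are accumulated
    S, P, coef = I.clone(), I.clone(), 0.5  # coef = |binom(1/2, 1)|
    for k in range(1, degree + 1):
        P = P @ Z                           # P = Z^k
        S = S - coef * P                    # subtract |binom(1/2, k)| * Z^k
        coef = coef * (k - 0.5) / (k + 1)   # recurrence for |binom(1/2, k+1)|
    return torch.sqrt(norm) * S             # post-compensation
```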
Matrix Padé Approximant
The MPA is computed as the fraction of two polynomials: a numerator polynomial of degree $M$ and a denominator polynomial of degree $N$, whose coefficients are determined by matching them to the power series of degree $M{+}N$:
$$\frac{1-\sum_{m=1}^{M}p_m z^{m}}{1-\sum_{n=1}^{N}q_n z^{n}} = 1 - \sum_{k=1}^{M+N}\left|\binom{\frac{1}{2}}{k}\right| z^{k} \quad (8)$$
where the scalar coefficients $p_m$ and $q_n$ also apply to the matrix case. This matching gives rise to a system of linear equations:
$$-\left|\binom{\frac{1}{2}}{1}\right| - q_1 = -p_1,\quad -\left|\binom{\frac{1}{2}}{2}\right| + \left|\binom{\frac{1}{2}}{1}\right|q_1 - q_2 = -p_2,\quad \cdots,\quad -\left|\binom{\frac{1}{2}}{M}\right| + \left|\binom{\frac{1}{2}}{M-1}\right|q_1 + \cdots - q_M = -p_M \quad (9)$$
Solving these equations directly determines the coefficients. We give the Python-like pseudo-code in Fig. 3. The numerator and denominator polynomials of the MPA are then given by:
$$\mathbf{P}_M = \mathbf{I} - \sum_{m=1}^{M}p_m\left(\mathbf{I}-\frac{\mathbf{A}}{\|\mathbf{A}\|_F}\right)^{m},\qquad \mathbf{Q}_N = \mathbf{I} - \sum_{n=1}^{N}q_n\left(\mathbf{I}-\frac{\mathbf{A}}{\|\mathbf{A}\|_F}\right)^{n} \quad (10)$$
Then the MPA for approximating the matrix square root is computed as:
$$\mathbf{A}^{\frac{1}{2}} = \sqrt{\|\mathbf{A}\|_F}\,\mathbf{Q}_N^{-1}\mathbf{P}_M \quad (11)$$
Compared with the MTP, the MPA trades off half of the matrix multiplications for one matrix inverse, which slightly increases the computational cost but converges more quickly and delivers better approximation abilities. Moreover, we note that the explicit matrix inverse can be avoided, as eq. (11) can be more efficiently and numerically stably computed by solving the linear system $\mathbf{Q}_N\mathbf{A}^{\frac{1}{2}}=\sqrt{\|\mathbf{A}\|_F}\,\mathbf{P}_M$. According to Van et al. [43], diagonal Padé approximants (i.e., $\mathbf{P}_M$ and $\mathbf{Q}_N$ have the same degree) usually yield a better approximation than the non-diagonal ones. Therefore, to match the MPA and MTP of the same degree, we set $M{=}N{=}\frac{K-1}{2}$. Table 2 summarizes the forward computational complexity. As suggested in Li et al. [3] and Huang et al. [9], the iteration times of the NS iteration are often set to 5 such that reasonable performances can be achieved. That is, for the same complexity as the NS iteration, our MTP and MPA can match the power series up to degree 16. However, as illustrated in Fig. 4, our MPA achieves better accuracy than the NS iteration even at degree 8. This observation implies that our MPA is a better option in terms of both accuracy and speed.
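The sketch below illustrates one possible MPA forward pass in PyTorch. Instead of hard-coding the coefficients of eq. (9), it derives them on the fly from the Taylor coefficients of $(1-z)^{\frac{1}{2}}$ via the standard Padé linear system, and it solves $\mathbf{Q}_N\mathbf{X}=\mathbf{P}_M$ rather than inverting $\mathbf{Q}_N$; all names and the sign convention of the coefficients are our own, not the authors' released code.

```python
import torch

def pade_coefficients(M: int, N: int):
    # Taylor coefficients c_k of (1 - z)^(1/2): c_0 = 1, c_k = -|binom(1/2, k)|.
    c, coef = [1.0], 0.5
    for k in range(1, M + N + 1):
        c.append(-coef)
        coef = coef * (k - 0.5) / (k + 1)
    c = torch.tensor(c, dtype=torch.float64)
    # Denominator coefficients b_1..b_N (with b_0 = 1) solve
    # sum_n b_n * c_{k-n} = -c_k for k = M+1 .. M+N.
    G = torch.stack([torch.stack([c[M + i - j] for j in range(1, N + 1)])
                     for i in range(1, N + 1)])
    b = torch.linalg.solve(G, -c[M + 1:M + N + 1])
    b = torch.cat([torch.ones(1, dtype=torch.float64), b])
    # Numerator coefficients a_0..a_M via the convolution of c and b.
    a = torch.stack([sum(b[n] * c[m - n] for n in range(min(m, N) + 1))
                     for m in range(M + 1)])
    return a, b

def mpa_sqrt(A: torch.Tensor, M: int = 5, N: int = 5):
    """Matrix Pade Approximant of A^(1/2), eq. (11)."""
    a, b = pade_coefficients(M, N)
    a, b = a.to(A.dtype), b.to(A.dtype)
    norm = torch.norm(A, p='fro')
    I = torch.eye(A.shape[0], dtype=A.dtype, device=A.device)
    Z = I - A / norm
    P, Q, Zk = a[0] * I, b[0] * I, I
    for k in range(1, max(M, N) + 1):
        Zk = Zk @ Z
        if k <= M:
            P = P + a[k] * Zk
        if k <= N:
            Q = Q + b[k] * Zk
    # Solve Q X = P instead of forming the explicit inverse (numerically stabler).
    return torch.sqrt(norm) * torch.linalg.solve(Q, P)
```

As a quick sanity check, the $[1/1]$ case of this construction reproduces the classical Padé approximant $(1-\frac{3z}{4})/(1-\frac{z}{4})$ of $\sqrt{1-z}$.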
Backward Pass
Though one can manually derive the gradients of the MPA and MTP, their backward algorithms are computationally expensive as they involve matrix powers up to degree $K$, where $K$ can be arbitrarily large. Relying on the AutoGrad package of deep learning frameworks can be both time- and memory-consuming, since the gradients of intermediate variables would be computed and the matrix inverse of the MPA is involved. To attain a more efficient backward algorithm, we propose to iteratively solve the gradient equation using the matrix sign function. Given the matrix $\mathbf{A}$ and its square root $\mathbf{A}^{\frac{1}{2}}$, since $\mathbf{A}^{\frac{1}{2}}\mathbf{A}^{\frac{1}{2}}=\mathbf{A}$, a perturbation on $\mathbf{A}$ leads to:

$$\mathbf{A}^{\frac{1}{2}}\,\mathrm{d}\mathbf{A}^{\frac{1}{2}} + \mathrm{d}\mathbf{A}^{\frac{1}{2}}\,\mathbf{A}^{\frac{1}{2}} = \mathrm{d}\mathbf{A} \quad (12)$$
Using the chain rule, the gradient function of the matrix square root satisfies:
$$\mathbf{A}^{\frac{1}{2}}\frac{\partial l}{\partial \mathbf{A}} + \frac{\partial l}{\partial \mathbf{A}}\mathbf{A}^{\frac{1}{2}} = \frac{\partial l}{\partial \mathbf{A}^{\frac{1}{2}}} \quad (13)$$
As pointed out by Lin et al. [1], eq. (13) actually defines the continuous-time Lyapunov equation ($\mathbf{B}\mathbf{X}+\mathbf{X}\mathbf{B}=\mathbf{C}$), a special case of the Sylvester equation ($\mathbf{B}\mathbf{X}+\mathbf{X}\mathbf{D}=\mathbf{C}$). The closed-form solution is given by:
$$\mathrm{vec}\left(\frac{\partial l}{\partial \mathbf{A}}\right) = \left(\mathbf{A}^{\frac{1}{2}}\otimes\mathbf{I} + \mathbf{I}\otimes\mathbf{A}^{\frac{1}{2}}\right)^{-1}\mathrm{vec}\left(\frac{\partial l}{\partial \mathbf{A}^{\frac{1}{2}}}\right) \quad (14)$$
where $\mathrm{vec}(\cdot)$ denotes unrolling a matrix into a vector, and $\otimes$ is the Kronecker product. Although the closed-form solution exists theoretically, it cannot be computed in practice due to the huge memory consumption of the Kronecker product. Supposing that both $\mathbf{A}^{\frac{1}{2}}$ and $\mathbf{I}$ are of size $256\times256$, the Kronecker product $\mathbf{A}^{\frac{1}{2}}\otimes\mathbf{I}$ would take the dimension of $256^2\times256^2$, which is infeasible to compute or store. Another approach to solve eq. (13) is via the Bartels-Stewart algorithm [26]. However, it requires explicit eigendecomposition or Schur decomposition, which is not GPU-friendly and computationally expensive.
To attain a GPU-friendly gradient solver, we propose to use the matrix sign function and iteratively solve the Lyapunov equation. Solving the Sylvester equation via matrix sign function has been long studied in the literature of numerical analysis [44], [45], [46]. One notable line of research is using the family of Newton iterations. Consider the following continuous Lyapunov function:
$$\mathbf{B}\mathbf{X} + \mathbf{X}\mathbf{B} = \mathbf{C} \quad (15)$$
where $\mathbf{B}$ refers to $\mathbf{A}^{\frac{1}{2}}$ in eq. (13), $\mathbf{C}$ represents $\frac{\partial l}{\partial \mathbf{A}^{\frac{1}{2}}}$, and $\mathbf{X}$ denotes the sought solution $\frac{\partial l}{\partial \mathbf{A}}$. Eq. (15) can be represented by the following block matrix using a Jordan decomposition:
$$\mathbf{H} = \begin{bmatrix}\mathbf{B} & \mathbf{C}\\ \mathbf{0} & -\mathbf{B}\end{bmatrix} = \begin{bmatrix}\mathbf{I} & \mathbf{X}\\ \mathbf{0} & \mathbf{I}\end{bmatrix}\begin{bmatrix}\mathbf{B} & \mathbf{0}\\ \mathbf{0} & -\mathbf{B}\end{bmatrix}\begin{bmatrix}\mathbf{I} & \mathbf{X}\\ \mathbf{0} & \mathbf{I}\end{bmatrix}^{-1} \quad (16)$$
The matrix sign function is invariant to the Jordan canonical form or spectral decomposition. This property allows the use of Newton's iterations for iteratively solving the Lyapunov function. Specifically, we have:

Lemma 1 (Matrix Sign Function [21]). For a given matrix $\mathbf{H}$ with no eigenvalues on the imaginary axis, its sign function has the following properties: 1) $\mathrm{sign}(\mathbf{H})^2=\mathbf{I}$; 2) if $\mathbf{H}$ has the Jordan decomposition $\mathbf{H}=\mathbf{T}\mathbf{M}\mathbf{T}^{-1}$, then its sign function satisfies $\mathrm{sign}(\mathbf{H})=\mathbf{T}\,\mathrm{sign}(\mathbf{M})\mathbf{T}^{-1}$.
We give the complete proof in the Supplementary Material. Lemma 1.1 shows that $\mathrm{sign}(\mathbf{H})$ is a matrix square root of the identity matrix, which indicates the possibility of using Newton's root-finding method to derive the solution [21]. Here we also adopt the Newton-Schulz iteration, i.e., the modified inverse-free and multiplication-rich Newton iteration, to iteratively compute $\mathrm{sign}(\mathbf{H})$. This leads to the coupled iteration:
$$\mathbf{B}_{k+1} = \frac{1}{2}\mathbf{B}_k(3\mathbf{I} - \mathbf{B}_k^2),\qquad \mathbf{C}_{k+1} = \frac{1}{2}\left(-\mathbf{B}_k^2\mathbf{C}_k + \mathbf{B}_k\mathbf{C}_k\mathbf{B}_k + \mathbf{C}_k(3\mathbf{I} - \mathbf{B}_k^2)\right) \quad (17)$$
The equation above defines two coupled iterations for solving the Lyapunov equation. Since the NS iteration converges only locally, i.e., when $\|\mathbf{H}_k^2-\mathbf{I}\|<1$, we divide $\mathbf{H}_0$ by $\|\mathbf{B}\|_F$ to meet the convergence condition. This normalization defines the initialization $\mathbf{B}_0=\frac{\mathbf{B}}{\|\mathbf{B}\|_F}$ and $\mathbf{C}_0=\frac{\mathbf{C}}{\|\mathbf{B}\|_F}$. Relying on Lemma 1.2, the sign function of eq. (16) can also be calculated as:
$$\mathrm{sign}(\mathbf{H}) = \mathrm{sign}\begin{bmatrix}\mathbf{B} & \mathbf{C}\\ \mathbf{0} & -\mathbf{B}\end{bmatrix} = \begin{bmatrix}\mathbf{I} & 2\mathbf{X}\\ \mathbf{0} & -\mathbf{I}\end{bmatrix} \quad (18)$$
As indicated above, the iterations in eq. (17) have the convergence:
$$\lim_{k\to\infty}\mathbf{B}_k = \mathbf{I},\qquad \lim_{k\to\infty}\mathbf{C}_k = 2\mathbf{X} \quad (19)$$
After iterating $k$ times, we get the approximate solution $\mathbf{X}=\frac{1}{2}\mathbf{C}_k$. Instead of fixing the iteration times, one can also set a termination criterion by checking the convergence $\|\mathbf{B}_k-\mathbf{I}\|_F<\tau$, where $\tau$ is a pre-defined tolerance. Table 3 compares the backward computation complexity of the iterative Lyapunov solver and the NS iteration. Our proposed Lyapunov solver spends fewer matrix multiplications and is thus more efficient than the NS iteration. Even if we iterate the Lyapunov solver more times (e.g., 7 or 8), it still costs less time than the backward calculation of the NS iteration that iterates 5 times.
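A minimal PyTorch sketch of this backward routine, assuming the forward pass already produced $\mathbf{A}^{\frac{1}{2}}$, could look as follows; the fixed iteration count of 8 mirrors the termination choice discussed in the ablation study, and the names are ours.

```python
import torch

def lyapunov_solve(sqrt_A: torch.Tensor, grad_sqrt: torch.Tensor, num_iters: int = 8):
    """Solve B X + X B = C with B = A^(1/2), C = dl/dA^(1/2); returns dl/dA."""
    norm = torch.norm(sqrt_A, p='fro')
    I = torch.eye(sqrt_A.shape[0], dtype=sqrt_A.dtype, device=sqrt_A.device)
    B, C = sqrt_A / norm, grad_sqrt / norm   # normalization for local convergence
    for _ in range(num_iters):
        B2 = B @ B
        C = 0.5 * (-B2 @ C + B @ C @ B + C @ (3.0 * I - B2))  # eq. (17)
        B = 0.5 * B @ (3.0 * I - B2)
    return 0.5 * C                           # X = lim C_k / 2
```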
FAST DIFFERENTIABLE INVERSE SQUARE ROOT
In this section, we introduce the extension of our algorithm to the inverse square root.
Forward Pass
Matrix Taylor Polynomial
To derive the MTP of inverse square root, we need to match to the following power series:
$$(1-z)^{-\frac{1}{2}} = 1 + \sum_{k=1}^{\infty}\left|\binom{-\frac{1}{2}}{k}\right| z^{k} \quad (20)$$
Similar to the procedure for the matrix square root in eqs. (5) and (6), the MTP approximation can be computed as:
$$\mathbf{A}^{-\frac{1}{2}} = \mathbf{I} + \sum_{k=1}^{\infty}\left|\binom{-\frac{1}{2}}{k}\right|(\mathbf{I}-\mathbf{A})^{k} \quad (21)$$
Instead of the post-normalization of the matrix square root by multiplying by $\sqrt{\|\mathbf{A}\|_F}$ as done in eq. (7), we need to divide by $\sqrt{\|\mathbf{A}\|_F}$ for the inverse square root:
$$\mathbf{A}^{-\frac{1}{2}} = \frac{1}{\sqrt{\|\mathbf{A}\|_F}}\cdot\left(\mathbf{I} + \sum_{k=1}^{\infty}\left|\binom{-\frac{1}{2}}{k}\right|\left(\mathbf{I}-\frac{\mathbf{A}}{\|\mathbf{A}\|_F}\right)^{k}\right) \quad (22)$$
Compared with the MTP of the matrix square root of the same degree, the MTP of the inverse square root consumes the same computational complexity.
Matrix Padé Approximant
The matrix square root $\mathbf{A}^{\frac{1}{2}}$ of our MPA is calculated as $\sqrt{\|\mathbf{A}\|_F}\,\mathbf{Q}_N^{-1}\mathbf{P}_M$.
For the inverse square root, we can directly compute the inverse as:
$$\mathbf{A}^{-\frac{1}{2}} = \left(\sqrt{\|\mathbf{A}\|_F}\,\mathbf{Q}_N^{-1}\mathbf{P}_M\right)^{-1} = \frac{1}{\sqrt{\|\mathbf{A}\|_F}}\,\mathbf{P}_M^{-1}\mathbf{Q}_N \quad (23)$$
The extension to the inverse square root comes for free, as it does not require additional computation: for both the matrix square root and the inverse square root, the matrix polynomials $\mathbf{Q}_N$ and $\mathbf{P}_M$ need to be computed first, and then one matrix inverse (or solving a linear system) is required. Another approach to derive the MPA for the inverse square root is to match the power series in eq. (20) and construct the MPA again. The matching is calculated as:
$$\frac{1+\sum_{m=1}^{M}r_m z^{m}}{1+\sum_{n=1}^{N}s_n z^{n}} = 1 + \sum_{k=1}^{M+N}\left|\binom{-\frac{1}{2}}{k}\right| z^{k} \quad (24)$$
where $r_m$ and $s_n$ denote the new Padé coefficients. Then the matrix polynomials are computed as:
$$\mathbf{R}_M = \mathbf{I} + \sum_{m=1}^{M}r_m\left(\mathbf{I}-\frac{\mathbf{A}}{\|\mathbf{A}\|_F}\right)^{m},\qquad \mathbf{S}_N = \mathbf{I} + \sum_{n=1}^{N}s_n\left(\mathbf{I}-\frac{\mathbf{A}}{\|\mathbf{A}\|_F}\right)^{n} \quad (25)$$
The MPA for approximating the inverse square root is calculated as:
$$\mathbf{A}^{-\frac{1}{2}} = \frac{1}{\sqrt{\|\mathbf{A}\|_F}}\,\mathbf{S}_N^{-1}\mathbf{R}_M \quad (26)$$
This method for deriving MPA also leads to the same complexity. Notice that these two different computation methods are equivalent to each other. Specifically, we have:
Proposition 1. The diagonal MPA $\frac{1}{\sqrt{\|\mathbf{A}\|_F}}\mathbf{S}_N^{-1}\mathbf{R}_M$ is equivalent to the diagonal MPA $\frac{1}{\sqrt{\|\mathbf{A}\|_F}}\mathbf{P}_M^{-1}\mathbf{Q}_N$, and the relations $p_m=-s_n$ and $q_n=-r_m$ hold for any $m=n$.
We give the detailed proof in the Supplementary Material. Since the two sets of MPA are equivalent, we adopt the implementation of the inverse square root in eq. (23) throughout our experiments, as it shares the same $\mathbf{P}_M$ and $\mathbf{Q}_N$ with the matrix square root.
Backward Pass
For the inverse square root, we can also rely on the iterative Lyapunov solver for the gradient computation. Consider the following relation:
$$\mathbf{A}^{\frac{1}{2}}\mathbf{A}^{-\frac{1}{2}} = \mathbf{I} \quad (27)$$
A perturbation on both sides leads to:
$$\mathrm{d}\mathbf{A}^{\frac{1}{2}}\,\mathbf{A}^{-\frac{1}{2}} + \mathbf{A}^{\frac{1}{2}}\,\mathrm{d}\mathbf{A}^{-\frac{1}{2}} = \mathrm{d}\mathbf{I} \quad (28)$$
Using the chain rule, we obtain the gradient equation after some rearrangement:
$$\frac{\partial l}{\partial \mathbf{A}^{\frac{1}{2}}} = -\mathbf{A}^{-\frac{1}{2}}\frac{\partial l}{\partial \mathbf{A}^{-\frac{1}{2}}}\mathbf{A}^{-\frac{1}{2}} \quad (29)$$
Injecting this equation into eq. (13) leads to the reformulation:
$$\mathbf{A}^{\frac{1}{2}}\frac{\partial l}{\partial \mathbf{A}} + \frac{\partial l}{\partial \mathbf{A}}\mathbf{A}^{\frac{1}{2}} = -\mathbf{A}^{-\frac{1}{2}}\frac{\partial l}{\partial \mathbf{A}^{-\frac{1}{2}}}\mathbf{A}^{-\frac{1}{2}},\qquad \mathbf{A}^{-\frac{1}{2}}\frac{\partial l}{\partial \mathbf{A}} + \frac{\partial l}{\partial \mathbf{A}}\mathbf{A}^{-\frac{1}{2}} = -\mathbf{A}^{-1}\frac{\partial l}{\partial \mathbf{A}^{-\frac{1}{2}}}\mathbf{A}^{-1} \quad (30)$$
As can be seen, the gradient function now resembles the continuous Lyapunov equation again. The only difference with eq. (13) is the r.h.s. term, which can be easily computed as $-(\mathbf{A}^{-\frac{1}{2}})^2\frac{\partial l}{\partial \mathbf{A}^{-\frac{1}{2}}}(\mathbf{A}^{-\frac{1}{2}})^2$ with 3 matrix multiplications.
For the new iterative solver of the Lyapunov equation BX+XB=C, we have the following initialization:
$$\mathbf{B}_0 = \frac{\mathbf{A}^{-\frac{1}{2}}}{\|\mathbf{A}^{-\frac{1}{2}}\|_F},\qquad \mathbf{C}_0 = \frac{-\mathbf{A}^{-1}\frac{\partial l}{\partial \mathbf{A}^{-\frac{1}{2}}}\mathbf{A}^{-1}}{\|\mathbf{A}^{-\frac{1}{2}}\|_F} \quad (31)$$
Then we use the coupled NS iteration to compute the gradient $\frac{\partial l}{\partial \mathbf{A}}=\frac{1}{2}\mathbf{C}_k$. Table 3 presents the complexity of the backward algorithms. Compared with the gradient of the matrix square root, this extension marginally increases the computational complexity by 3 more matrix multiplications, which is more efficient than a matrix inverse or solving a linear system.
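Assuming the `lyapunov_solve` helper sketched in Sec. 3.2 is available, the backward pass of the inverse square root only needs the 3 extra multiplications for the r.h.s. of eq. (30) before reusing the same solver:

```python
import torch

def inv_sqrt_backward(inv_sqrt_A: torch.Tensor, grad_inv_sqrt: torch.Tensor,
                      num_iters: int = 8):
    """Gradient dl/dA given A^(-1/2) and dl/dA^(-1/2), eqs. (30)-(31)."""
    inv_A = inv_sqrt_A @ inv_sqrt_A          # A^(-1) = (A^(-1/2))^2: 1 matmul
    rhs = -inv_A @ grad_inv_sqrt @ inv_A     # r.h.s. of eq. (30): 2 more matmuls
    # B = A^(-1/2), C = rhs; the solver normalizes both by ||B||_F as in eq. (31).
    return lyapunov_solve(inv_sqrt_A, rhs, num_iters)
```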
EXPERIMENTS
In the experimental section, we first perform a series of numerical tests to compare our proposed method with the SVD and the NS iteration. Subsequently, we evaluate our methods in several real-world applications, including decorrelated batch normalization, second-order vision transformer, global covariance pooling for image/video recognition, and neural style transfer. The implementation details are given in the Supplementary Material.
Baselines
In the numerical tests, we compare our two methods against the SVD and the NS iteration. For the various computer vision experiments, our methods are compared with more differentiable SVD baselines, each of which has its own gradient computation. These methods include (1) Power Iteration (PI), (2) SVD-PI [17], (3) SVD-Taylor [4], [18], and (4) SVD-Padé [4]. A detailed description of the baseline methods is given in the Supplementary Material.
Numerical Tests
To comprehensively evaluate the numerical performance and stability, we compare the speed and error for inputs of different batch sizes, matrices in various dimensions, different iteration times of the backward pass, and different polynomial degrees of the forward pass. In each of the following tests, the comparison is based on 10,000 random covariance matrices, and the matrix size is consistently 64×64 unless explicitly specified. The error is measured by calculating the Mean Absolute Error (MAE) and Normalized Root Mean Square Error (NRMSE) of the matrix square root computed by the approximate methods (NS iteration, MTP, and MPA) against the accurate method (SVD).
For our algorithm of the fast inverse square root, since the theory behind the algorithm is in essence the same as for the matrix square root, the two are expected to have similar numerical properties; the difference mainly lies in the forward error and the backward speed. Therefore, we conduct the FP error analysis and the BP speed analysis for the inverse square root in Sec. 5.2.1 and Sec. 5.2.2, respectively. For the error analysis, we compute the error of the whitening transform as $\|\sigma(\mathbf{A}^{-\frac{1}{2}}\mathbf{X})-\mathbf{I}\|_F$, where $\sigma(\cdot)$ denotes the extracted eigenvalues. In the other numerical tests, we only evaluate the properties of the algorithm for the matrix square root.
Forward Error versus Speed
Both the NS iteration and our methods have a hyper-parameter to tune in the forward pass, i.e., the iteration times for the NS iteration and the polynomial degrees for our MPA and MTP. To validate the impact, we measure the speed and error of both the matrix square root and its inverse for different hyper-parameters. The degrees of our MPA and MTP vary from 6 to 18, and the iteration times of the NS iteration range from 3 to 7. As can be observed from Fig. 4, our MTP has the least computational time, and our MPA consumes slightly more time than the MTP but provides a closer approximation. Moreover, the curve of our MPA consistently lies below that of the NS iteration, demonstrating that our MPA is a better choice in terms of both speed and accuracy.

Fig. 5: Our Lyapunov solver is more efficient than the NS iteration, as fewer matrix multiplications are involved. Our solver for the inverse square root only slightly increases the computational cost.

Backward Speed versus Iteration

As shown in Table 3, our Lyapunov solver is much more efficient than the NS iteration. Compared with the NS iteration run for 5 times, our Lyapunov solver still has an advantage even when we iterate 8 times. Moreover, the extension of our Lyapunov solver to the inverse square root only marginally increases the computational cost and is still much faster than the NS iteration.

Fig. 6: Speed comparison for each method versus different batch sizes. Our methods are more batch-efficient than the SVD or NS iteration.

Speed versus Batch Size
In certain applications such as covariance pooling and instance whitening, the input could be batched matrices instead of a single matrix. To compare the speed for batched input, we conduct another numerical test. The hyper-parameter choices follow our experimental settings in decorrelated batch normalization. As seen in Fig. 6, our MPA-Lya and MTP-Lya are consistently more efficient than the NS iteration and SVD. To give a concrete example, when the batch size is 64, our MPA-Lya is 2.58X faster than NS iteration and 27.25X faster than SVD, while our MTP-Lya is 5.82X faster than the NS iteration and 61.32X faster than SVD.
As discussed before, the current SVD implementation adopts a for-loop to compute each matrix one by one within the mini-batch. This accounts for why the time consumption of SVD grows almost linearly with the batch size. For the NS iteration, the backward pass is not as batch-friendly as our Lyapunov solver. The gradient calculation requires measuring the trace and handling the multiplication for each matrix in the batch, which has to be accomplished ineluctably by a for-loop. Our backward pass can be more efficiently implemented by batched matrix multiplication.
Speed and Error versus Matrix Dimension
In the last numerical test, we compare the speed and error for matrices of different dimensions. The hyper-parameter settings also follow our experiments on ZCA whitening. As seen from Fig. 7 (left), our proposed MPA-Lya and MTP-Lya consistently outperform the others in terms of speed. In particular, when the matrix size is very small (<32), the NS iteration does not hold a speed advantage over the SVD. By contrast, our proposed methods still have competitive speed against the SVD. Fig. 7 (right) presents the approximation error using the metrics MAE and NRMSE. Both metrics agree well with each other and demonstrate that our MPA-Lya always yields a better approximation than the NS iteration, whereas our MTP-Lya gives a worse estimation but has the least time consumption, which can be considered a trade-off between speed and accuracy.
Decorrelated Batch Normalization
As a substitute for ordinary BN, decorrelated BN [8] applies the ZCA whitening transform to eliminate the correlation of the data. Consider the reshaped feature map $\mathbf{X}\in\mathbb{R}^{C\times BHW}$. The whitening procedure first computes its sample covariance as:

$$\mathbf{A} = (\mathbf{X}-\mu(\mathbf{X}))(\mathbf{X}-\mu(\mathbf{X}))^{T} + \epsilon\mathbf{I} \quad (32)$$

where $\mathbf{A}\in\mathbb{R}^{C\times C}$, $\mu(\mathbf{X})$ is the mean of $\mathbf{X}$, and $\epsilon$ is a small constant to make the covariance strictly positive definite.
Afterwards, the inverse square root is calculated to whiten the feature map:
$$\mathbf{X}_{whitened} = \mathbf{A}^{-\frac{1}{2}}\mathbf{X} \quad (33)$$
By doing so, the eigenvalues of $\mathbf{X}$ are all ones, i.e., the feature is uncorrelated. During the training process, the training statistics are stored for the inference phase. We insert the decorrelated BN layer after the first convolutional layer of ResNet [47], and the proposed methods and other baselines are used to compute $\mathbf{A}^{-\frac{1}{2}}$. Table 4 displays the speed and validation error on CIFAR10 and CIFAR100 [48]. The ordinary SVD with gradient clipping (SVD-Clip) is inferior to the other SVD baselines, and the SVD computation on GPU is slower than that on CPU. Our MTP-Lya is 1.16X faster than the NS iteration and 1.32X faster than SVD-Padé, and our MPA-Lya is 1.14X and 1.30X faster, respectively. Furthermore, our MPA-Lya achieves state-of-the-art performances across datasets and models. Our MTP-Lya has comparable performances on ResNet-18 but slightly falls behind on ResNet-50; we conjecture this is mainly because the relatively large approximation error of the MTP might matter little for the small model but can hurt the large one. On CIFAR100 with ResNet-50, our MPA-Lya slightly falls behind the NS iteration in the average validation error. As a larger and deeper model, ResNet-50 is likely to have worse-conditioned matrices than ResNet-18. Since our MPA involves solving a linear system, processing a very ill-conditioned matrix could lead to some round-off errors. In this case, the NS iteration might have a chance to slightly outperform our MPA-Lya. However, this is a rare situation; our MPA-Lya beats the NS iteration in most of the following experiments.
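For illustration, the whitening step of eqs. (32)-(33) can be sketched as follows; `inv_sqrt_fn` stands for any differentiable inverse square root routine (e.g., the MPA-based one), `eps` plays the role of the small constant $\epsilon$, and applying the whitening to the centered feature is our reading of the procedure.

```python
import torch

def zca_whiten(X: torch.Tensor, inv_sqrt_fn, eps: float = 1e-5):
    """X: (C, BHW) reshaped feature map; returns the whitened feature, eq. (33)."""
    Xc = X - X.mean(dim=1, keepdim=True)     # subtract the mean mu(X)
    A = Xc @ Xc.t() + eps * torch.eye(X.shape[0], dtype=X.dtype, device=X.device)
    return inv_sqrt_fn(A) @ Xc               # A^(-1/2) X, eq. (33)
```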
Global Covariance Pooling
For the application of global covariance pooling, we evaluate our method in three different tasks, including large-scale visual recognition, fine-grained visual categorization, and video action recognition. Since the GCP method requires a very accurate matrix square root [4], our MTP-Lya cannot achieve reasonable performances due to its relatively large approximation error. Therefore, we do not take it into account for comparison throughout the GCP experiments.

Large-scale Visual Recognition

Different from standard CNNs, the covariance square root of the last convolutional feature is used as the global representation. Considering the final convolutional feature $\mathbf{X}\in\mathbb{R}^{B\times C\times HW}$, a GCP meta-layer first computes the sample covariance as:

$$\mathbf{P} = \mathbf{X}\bar{\mathbf{I}}\mathbf{X}^{T},\qquad \bar{\mathbf{I}} = \frac{1}{N}\left(\mathbf{I} - \frac{1}{N}\mathbf{1}\mathbf{1}^{T}\right) \quad (34)$$

where $\bar{\mathbf{I}}$ represents the centering matrix, $\mathbf{I}$ denotes the identity matrix, and $\mathbf{1}$ is a column vector whose values are all ones. Afterwards, the matrix square root is conducted for normalization:

$$\mathbf{Q} = \mathbf{P}^{\frac{1}{2}} = (\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{T})^{\frac{1}{2}} = \mathbf{U}\mathbf{\Lambda}^{\frac{1}{2}}\mathbf{U}^{T} \quad (35)$$
where the normalized covariance matrix $\mathbf{Q}$ is fed to the FC layer. Our method is applied to calculate $\mathbf{Q}$. Table 5 presents the speed comparison and the validation error of GCP ResNet-50 [47] models on ImageNet [49]. Our MPA-Lya not only achieves very competitive performance but also has the least time consumption. The speed of our method is about 21X faster than the SVD and 1.5X faster than the NS iteration.

Fine-grained Visual Recognition

In line with other GCP works [2], [3], [4], after training on ImageNet, the model is subsequently fine-tuned on each fine-grained dataset. Table 6 compares the time consumption and validation accuracy on three commonly used fine-grained benchmarks, namely Caltech University Birds (Birds) [50], FGVC Aircrafts (Aircrafts) [51], and Stanford Cars (Cars) [52]. As can be observed, our MPA-Lya consumes 50% less time than the NS iteration and is about 8X faster than the SVD. Moreover, the performance of our method is slightly better than the other baselines on Birds [50] and Aircrafts [51], and the evaluation result on Cars [52] is also comparable.

Fig. 9: Architecture of the temporal-attentive GCP network for video action recognition [6]. The channel and spatial attention is used to make the covariance more attentive.
Video Action Recognition
Besides image recognition, the GCP methods can also be used for the task of video recognition [6]. Fig. 9 displays the overview of the temporal-attentive GCP model for video action recognition. The temporal covariance is computed in a sliding-window manner by involving both intra- and inter-frame correlations. Supposing the kernel size of the sliding window is 3, the temporal covariance is computed as:
$$\mathrm{Temp.Cov.}(\mathbf{X}_l) = \underbrace{\mathbf{X}_{l-1}\mathbf{X}_{l-1}^{T} + \mathbf{X}_{l}\mathbf{X}_{l}^{T} + \mathbf{X}_{l+1}\mathbf{X}_{l+1}^{T}}_{\text{intra-frame covariance}} + \underbrace{\mathbf{X}_{l-1}\mathbf{X}_{l}^{T} + \mathbf{X}_{l}\mathbf{X}_{l-1}^{T} + \cdots + \mathbf{X}_{l+1}\mathbf{X}_{l}^{T}}_{\text{inter-frame covariance}} \quad (36)$$
Finally, the matrix square root of the attentive temporal-based covariance is computed and passed to the FC layer; the spectral methods are used to compute the matrix square root of the attentive covariance $\mathrm{Temp.Cov.}(\mathbf{X}_l)$. We present the validation accuracy and time cost for video action recognition in Table 7. For the computation speed, our MPA-Lya is about 1.74X faster than the NS iteration and about 10.82X faster than the SVD. Furthermore, our MPA-Lya achieves the best performance on HMDB51, while the result on UCF101 is also very competitive.
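To make eq. (36) concrete, a small sketch of the sliding-window temporal covariance (window size 3) is given below; the tensor layout (T frames, each of shape (C, N)) is an illustrative assumption of ours.

```python
import torch

def temporal_covariance(X: torch.Tensor, l: int) -> torch.Tensor:
    """X: (T, C, N) frame features; returns Temp.Cov.(X_l) of eq. (36)."""
    window = X[l - 1:l + 2]                  # frames l-1, l, l+1
    cov = X.new_zeros(X.shape[1], X.shape[1])
    for Xi in window:                        # intra-frame terms (i == j) and
        for Xj in window:                    # inter-frame terms (i != j)
            cov = cov + Xi @ Xj.transpose(0, 1)
    return cov
```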
Table 7: Validation accuracy and time cost on HMDB51 [53] and UCF101 [54] with backbone TEA R50 [55]. The covariance matrix is of size 16×128×128, and the time consumption is measured for computing the matrix square root (BP+FP).

To sum up, our MPA-Lya has demonstrated its general applicability in GCP models for different tasks. In particular, it brings considerable speed improvements without sacrificing performance, which could be beneficial for faster training and inference. In certain experiments such as fine-grained classification, the approximate methods (MPA-Lya and NS iteration) can marginally outperform the accurate SVD. This phenomenon has been similarly observed in related studies [3], [4], [9]; one likely reason is that the SVD does not have gradients as healthy as those of the approximate methods, which might negatively influence the optimization process and consequently degrade the performance.

Fig. 10: The architecture overview of our model for neural style transfer. Two encoders take the style and content images as input and generate the multi-scale content/style features. A decoder absorbs the features and performs the WCT process at 5 different scales, outputting a pair of images that exchange their styles. Finally, a discriminator is adopted to tell apart the authenticity of the images.
Neural Style Transfer
We adopt the WCT process in the network architecture proposed by Cho et al. [14] for neural style transfer. Fig. 10 displays the overview of the model. The WCT performs successive whitening and coloring transforms on the content and style features. Consider the reshaped content feature $\mathbf{X}_c\in\mathbb{R}^{B\times C\times HW}$ and the style feature $\mathbf{X}_s\in\mathbb{R}^{B\times C\times HW}$. The style information is first removed from the content as:
$$\mathbf{X}_c^{whitened} = \left((\mathbf{X}_c-\mu(\mathbf{X}_c))(\mathbf{X}_c-\mu(\mathbf{X}_c))^{T}\right)^{-\frac{1}{2}}\mathbf{X}_c \quad (37)$$
Then we extract the desired style information from the style feature $\mathbf{X}_s$ and transfer it to the whitened content feature:
$$\mathbf{X}_c^{colored} = \left((\mathbf{X}_s-\mu(\mathbf{X}_s))(\mathbf{X}_s-\mu(\mathbf{X}_s))^{T}\right)^{\frac{1}{2}}\mathbf{X}_c^{whitened} \quad (38)$$
The resultant feature $\mathbf{X}_c^{colored}$ is compensated with the mean of the style feature and combined with the original content feature:
$$\mathbf{X} = \alpha\left(\mathbf{X}_c^{colored} + \mu(\mathbf{X}_s)\right) + (1-\alpha)\mathbf{X}_c \quad (39)$$
where $\alpha$ is a weight bounded in $[0,1]$ to control the strength of the style transfer. In this experiment, both the matrix square root and the inverse square root are computed. Table 8 presents the quantitative evaluation using the LPIPS [56] score and user preference. The speed of our MPA-Lya and MTP-Lya is significantly faster than that of the other methods. Specifically, our MTP-Lya is 2.3X faster than the NS iteration and 10.9X faster than the SVD, while our MPA-Lya consumes 1.4X less time than the NS iteration and 6.4X less time than the SVD. Moreover, our MPA-Lya achieves the best LPIPS score and user preference, and the performance of our MTP-Lya is also very competitive. Fig. 11 displays an exemplary visual comparison. Our methods can effectively transfer the style information and preserve the original content, leading to transferred images with a more coherent style and better visual appeal. We give detailed evaluation results on each subset and more visual examples in the Supplementary Material.
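A compact sketch of the WCT pipeline of eqs. (37)-(39) is shown below; `sqrt_fn` and `inv_sqrt_fn` stand for the matrix square root and inverse square root routines discussed above, and the (C, HW)-shaped single-image features are an illustrative convention.

```python
import torch

def wct(Xc: torch.Tensor, Xs: torch.Tensor, sqrt_fn, inv_sqrt_fn, alpha: float = 0.6):
    """Whitening and Coloring Transform on content (Xc) and style (Xs) features."""
    mc, ms = Xc.mean(dim=1, keepdim=True), Xs.mean(dim=1, keepdim=True)
    cov_c = (Xc - mc) @ (Xc - mc).t()            # content covariance
    cov_s = (Xs - ms) @ (Xs - ms).t()            # style covariance
    whitened = inv_sqrt_fn(cov_c) @ Xc           # eq. (37): remove the style
    colored = sqrt_fn(cov_s) @ whitened          # eq. (38): inject the new style
    return alpha * (colored + ms) + (1.0 - alpha) * Xc   # eq. (39)
```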
Second-order Vision Transformer
The ordinary vision transformer [31] attaches an empty class token to the sequence of visual tokens and only uses the class token for prediction, which may not exploit the rich semantics embedded in the visual tokens. Instead, the Second-order Vision Transformer (So-ViT) [5] proposes to leverage the high-level visual tokens to assist the task of classification:

$$\mathbf{y} = \mathrm{FC}(\mathbf{c}) + \mathrm{FC}\left((\mathbf{X}\mathbf{X}^{T})^{\frac{1}{2}}\right) \quad (40)$$

where $\mathbf{c}$ is the output class token, $\mathbf{X}$ denotes the visual tokens, and $\mathbf{y}$ is the combined class prediction. We show the model overview in Fig. 12. Equipped with the covariance pooling layer, So-ViT removes the need for pre-training on ultra-large-scale datasets and achieves competitive performance even when trained from scratch. To reduce the computational budget, So-ViT further proposes to use Power Iteration (PI) to approximate the dominant eigenvector. We use our methods to compute the matrix square root of the covariance $\mathbf{X}\mathbf{X}^{T}$.

Fig. 12: The scheme of So-ViT [5]. The covariance square root of the visual tokens is computed to assist the classification. In the original vision transformer [31], only the class token is utilized for class predictions.

Table 9: Validation top-1/top-5 accuracy of the second-order vision transformer on ImageNet [49]. The covariance is of size 64×48×48, where 64 is the mini-batch size. The time cost is measured for computing the matrix square root (BP+FP).

Table 9 compares the speed and performances on three So-ViT architectures with different depths. Our proposed methods significantly outperform the SVD and NS iteration in terms of speed. To be more specific, our MPA-Lya is 3.19X faster than the NS iteration and 25.63X faster than SVD-Padé, and our MTP-Lya is 4.34X faster than the NS iteration and 34.85X faster than SVD-Padé. For So-ViT-7 and So-ViT-10, our MPA-Lya achieves the best evaluation results and even slightly outperforms the SVD-based methods. Moreover, on the So-ViT-14 model, where the performances are saturated, our method converges faster and spends fewer training epochs. The performance of our MTP-Lya is also on par with the other methods. The PI suggested in So-ViT only computes the dominant eigenpair but neglects the rest; in spite of its fast speed, its performance is not comparable with the other methods.
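For illustration, eq. (40) could be realized by a head of the following form; the layer names, the token projection, and the single-sample shapes are hypothetical simplifications of ours, with `sqrt_fn` being a matrix square root routine such as the MPA from Sec. 3.

```python
import torch
import torch.nn as nn

class SecondOrderHead(nn.Module):
    """Combine the class-token and covariance branches as in eq. (40)."""
    def __init__(self, embed_dim: int, proj_dim: int, num_classes: int, sqrt_fn):
        super().__init__()
        self.proj = nn.Linear(embed_dim, proj_dim)       # shrink token dimension
        self.fc_cls = nn.Linear(embed_dim, num_classes)  # class-token branch
        self.fc_cov = nn.Linear(proj_dim * proj_dim, num_classes)
        self.sqrt_fn = sqrt_fn

    def forward(self, cls_token, tokens):
        X = self.proj(tokens).transpose(0, 1)    # (proj_dim, num_tokens)
        Q = self.sqrt_fn(X @ X.t())              # (X X^T)^(1/2) of eq. (40)
        return self.fc_cls(cls_token) + self.fc_cov(Q.flatten())
```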
Ablation Studies
We conduct three ablation studies to illustrate the impact of the degree of the power series in the forward pass, the termination criterion during the back-propagation, and the possibility of combining our Lyapunov solver with the SVD and the NS iteration.

Degree of Power Series to Match for Forward Pass

Table 10 displays the performance of our MPA-Lya for different degrees of the power series. As we use more terms of the power series, the approximation error gets smaller and the performance improves steadily from degree [3,3] to [5,5]. When the degree of our MPA is increased from [5,5] to [6,6], there are only marginal improvements. We hence set the forward degrees as [5,5] for our MPA and as 11 for our MTP as a trade-off between speed and accuracy.

Termination Criterion for Backward Pass

Table 11 compares the performance of backward algorithms with different termination criteria, as well as the exact solution computed by the Bartels-Stewart algorithm (BS algorithm) [26]. Since the NS iteration has the property of quadratic convergence, the errors $\|\mathbf{B}_k-\mathbf{I}\|_F$ and $\|0.5\mathbf{C}_k-\mathbf{X}\|_F$ decrease at a larger rate for more iteration times. When we iterate more than 7 times, the error becomes sufficiently negligible, i.e., the NS iteration almost converges. Moreover, from 8 iterations to 9 iterations, there are no obvious performance improvements. We thus terminate the iteration after 8 steps. The exact gradient calculated by the BS algorithm does not yield the best results; it only achieves the least fluctuation on ResNet-50, and the other results are inferior to our iterative solver. This is because the formulation of our Lyapunov equation is based on the assumption that the accurate matrix square root is computed, whereas in practice we only compute an approximate one in the forward pass. In this case, calculating the accurate gradient of the approximate matrix square root might not necessarily work better than the approximate gradient of the approximate matrix square root.

Lyapunov Solver as a General Backward Algorithm
We note that our proposed iterative Lyapunov solver is a general backward algorithm for computing the matrix square root. That is to say, it should also be compatible with the SVD and the NS iteration as the forward pass.
For the NS-Lya, our previous conference paper [22] shows that the NS iteration used in [2], [21] cannot converge on any dataset. In this extended manuscript, we found that the underlying reason is the inconsistency between the FP and the BP. The NS iteration of [2], [21] is a coupled iteration that uses two variables $\mathbf{Y}_k$ and $\mathbf{Z}_k$ to compute the matrix square root. For the BP algorithm, the NS iteration is defined to compute the matrix sign and only uses one variable $\mathbf{Y}_k$. The term $\mathbf{Z}_k$ is not involved in the BP, and we have no control over the gradient back-propagating through it, which results in the non-convergence of the model. To resolve this issue, we propose to change the forward coupled NS iteration to a variant that uses one variable:
$$\mathbf{Z}_{k+1} = \frac{1}{2}\left(3\mathbf{Z}_k - \mathbf{Z}_k^{3}\frac{\mathbf{A}}{\|\mathbf{A}\|_F}\right) \quad (41)$$
where $\mathbf{Z}_{k+1}$ converges to the inverse square root $\mathbf{A}^{-\frac{1}{2}}$. This variant of the NS iteration is often used to directly compute the inverse square root [9], [58]. $\mathbf{Z}_0$ is initialized with $\mathbf{I}$, and the post-compensation is calculated as $\bar{\mathbf{Z}}_k=\frac{1}{\sqrt{\|\mathbf{A}\|_F}}\mathbf{Z}_k$.
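The one-variable variant is straightforward to implement; a hedged PyTorch sketch of eq. (41) follows, with our own naming and iteration count.

```python
import torch

def ns_iteration_inv_sqrt(A: torch.Tensor, num_iters: int = 5):
    """One-variable NS iteration for A^(-1/2), eq. (41)."""
    norm = torch.norm(A, p='fro')
    An = A / norm
    Z = torch.eye(A.shape[0], dtype=A.dtype, device=A.device)   # Z_0 = I
    for _ in range(num_iters):
        Z = 0.5 * (3.0 * Z - Z @ Z @ Z @ An)    # eq. (41)
    return Z / torch.sqrt(norm)                 # post-compensation
```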
Although the modified NS iteration uses only one variable, we note that it is an equivalent representation of the previous NS iteration. More formally, we have:

Proposition 2. The one-variable NS iteration of [9], [58] is equivalent to the two-variable NS iteration of [1], [2], [21].
We give the proof in the Supplementary Material. The modified forward NS iteration is compatible with our iterative Lyapunov solver. Table 12 compares the performance of different methods that use the Lyapunov solver as the backward algorithm. Both the SVD-Lya and NS-Lya achieve competitive performances.
CONCLUSION
In this paper, we propose two fast methods to compute the differentiable matrix square root and the inverse square root. In the forward pass, the MTP and MPA are applied to approximate the matrix square root, while an iterative Lyapunov solver is proposed to solve the gradient function for back-propagation. A number of numerical tests and computer vision applications demonstrate that our methods achieve both fast speed and competitive performances.
Algorithm 1: FP of the fast differentiable matrix square root and the inverse square root.

if FP method is MTP then
    if Matrix Square Root then
        $\mathbf{A}^{\frac{1}{2}} \leftarrow \mathbf{I}-\sum_{k=1}^{K}\left|\binom{1/2}{k}\right|(\mathbf{I}-\frac{\mathbf{A}}{\|\mathbf{A}\|_F})^{k}$
    else
        $\mathbf{A}^{-\frac{1}{2}} \leftarrow \mathbf{I}+\sum_{k=1}^{K}\left|\binom{-1/2}{k}\right|(\mathbf{I}-\frac{\mathbf{A}}{\|\mathbf{A}\|_F})^{k}$
    end
else // FP method is MPA
    $M \leftarrow \frac{K-1}{2}$, $N \leftarrow \frac{K-1}{2}$;
    $\mathbf{P}_M \leftarrow \mathbf{I}-\sum_{m=1}^{M}p_m(\mathbf{I}-\frac{\mathbf{A}}{\|\mathbf{A}\|_F})^{m}$;
    $\mathbf{Q}_N \leftarrow \mathbf{I}-\sum_{n=1}^{N}q_n(\mathbf{I}-\frac{\mathbf{A}}{\|\mathbf{A}\|_F})^{n}$;
    if Matrix Square Root then
        $\mathbf{A}^{\frac{1}{2}} \leftarrow \mathbf{Q}_N^{-1}\mathbf{P}_M$
    else
        $\mathbf{A}^{-\frac{1}{2}} \leftarrow \mathbf{P}_M^{-1}\mathbf{Q}_N$
    end
end
if Matrix Square Root then
    Post-compensate $\mathbf{A}^{\frac{1}{2}} \leftarrow \sqrt{\|\mathbf{A}\|_F}\cdot\mathbf{A}^{\frac{1}{2}}$
else
    Post-compensate $\mathbf{A}^{-\frac{1}{2}} \leftarrow \frac{1}{\sqrt{\|\mathbf{A}\|_F}}\cdot\mathbf{A}^{-\frac{1}{2}}$
end
APPENDIX B THEORETICAL DERIVATION AND PROOF
B.1 Iterative Lyapunov Function Solver
Lemma 1 (Matrix Sign Function [21]). For a given matrix $\mathbf{H}$ with no eigenvalues on the imaginary axis, its sign function has the following properties: 1) $\mathrm{sign}(\mathbf{H})^2=\mathbf{I}$; 2) if $\mathbf{H}$ has the Jordan decomposition $\mathbf{H}=\mathbf{T}\mathbf{M}\mathbf{T}^{-1}$, then its sign function satisfies $\mathrm{sign}(\mathbf{H})=\mathbf{T}\,\mathrm{sign}(\mathbf{M})\mathbf{T}^{-1}$.
Proof. The first property is easy to prove. Consider the SVD $\mathbf{U}\mathbf{S}\mathbf{V}^{T}=\mathbf{H}$. As the sign depends on the positiveness of the eigenvalues, the square of the sign function is computed as:
$$\mathrm{sign}(\mathbf{H})^{2} = \mathrm{sign}(\mathbf{S})^{2} \quad (42)$$
Since all eigenvalues are real, we have sign(S)^2 = I, and the first property is proved. An alternative definition of the matrix sign function is given by:
Backward pass (iterative Lyapunov solver):
    if Matrix Square Root then
        B_0 ← A^{1/2}, C_0 ← ∂l/∂A^{1/2}, i ← 0;
    else
        B_0 ← A^{−1/2}, C_0 ← −A^{−1} (∂l/∂A^{−1/2}) A^{−1}, i ← 0;
    end
    Normalize B_0 ← B_0/||B_0||_F, C_0 ← C_0/||B_0||_F;
    while i < T do // coupled iteration
        B_{k+1} ← (1/2) B_k (3I − B_k^2);
        C_{k+1} ← (1/2) ( −B_k^2 C_k + B_k C_k B_k + C_k (3I − B_k^2) );
        i ← i + 1;
    end
    ∂l/∂A ← (1/2) C_k;

sign(H) = H (H^2)^{−1/2}    (43)
Injecting the Jordan decomposition H = T M T^{−1} into the above equation leads to
sign(H) = T M T^{−1} (T M^2 T^{−1})^{−1/2} = T M T^{−1} · T sign(M) M^{−1} T^{−1} = T sign(M) T^{−1}    (44)
This proves the second property.
We now show in detail how to derive the iterative solver for the matrix sign function. Lemma 1.1 shows that sign(H) is a matrix square root of the identity matrix. We use the Newton-Schulz iteration to compute sign(H) as:
H_{k+1} = (1/2) H_k (3I − H_k^2)
        = (1/2) [ B_k(3I − B_k^2)    3C_k − B_k(B_k C_k − C_k B_k) − C_k B_k^2 ]
                [ 0                  −B_k(3I − B_k^2)                          ]    (45)
Lemma 1.2 indicates an alternative approach to compute the sign function as:
sign(H) = sign( [ B  C ; 0  −B ] )
        = [ I  X ; 0  I ] · sign( [ B  0 ; 0  −B ] ) · [ I  X ; 0  I ]^{−1}
        = [ I  X ; 0  I ] · [ I  0 ; 0  −I ] · [ I  −X ; 0  I ]
        = [ I  2X ; 0  −I ]    (46)
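The block-matrix identity in eq. (46) can be sanity-checked numerically. The sketch below uses SciPy's signm and solve_sylvester purely as references (they are not part of the proposed method) to confirm that the top-right block of sign(H) equals 2X with B X + X B = C; the random test matrices are our own choices.

    import numpy as np
    from scipy.linalg import signm, solve_sylvester

    rng = np.random.default_rng(2)
    n = 6
    M = rng.standard_normal((n, n))
    B = M @ M.T + n * np.eye(n)               # plays the (SPD) square-root block
    C = rng.standard_normal((n, n))           # plays the incoming gradient

    H = np.block([[B, C], [np.zeros((n, n)), -B]])
    S = signm(H)                              # reference matrix sign function

    X = solve_sylvester(B, B, C)              # solves B X + X B = C
    print(np.linalg.norm(S[:n, n:] - 2 * X))      # ~0: top-right block equals 2X
    print(np.linalg.norm(S[:n, :n] - np.eye(n)))  # ~0: top-left block equals I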
Equations (45) and (46) together define the coupled iteration and its convergence.

B.2 Equivalence of Two Sets of MPA

Proposition 1. The diagonal MPA (1/√||A||_F) S_N^{−1} R_M is equivalent to the diagonal MPA (1/√||A||_F) P_M^{−1} Q_N, and the relations p_m = −s_n and q_n = −r_m hold for any m = n.

Proof. Though Padé approximants are derived from a finite Taylor series, they are asymptotic to their infinite Taylor series [43]. Let f(z) = (1 − z)^{1/2} and f(z)^{−1} = (1 − z)^{−1/2}. We have the relations:
(1 + Σ_{m=1}^{M} r_m z^m) / (1 + Σ_{n=1}^{N} s_n z^n) = f(z)^{−1} + R(z^{M+N+1}),
(1 − Σ_{m=1}^{M} p_m z^m) / (1 − Σ_{n=1}^{N} q_n z^n) = f(z) + R(z^{M+N+1})    (47)
where R(z^{M+N+1}) denotes the discarded higher-order terms. Since f(z) = 1/f(z)^{−1}, we have:
(1 + Σ_{m=1}^{M} r_m z^m) / (1 + Σ_{n=1}^{N} s_n z^n) = (1 − Σ_{n=1}^{N} q_n z^n) / (1 − Σ_{m=1}^{M} p_m z^m).    (48)
Now we have two sets of Padé approximants on both sides. Since the numerator and denominator of a Padé approximant are relatively prime to each other by definition [59], the two sets of Padé approximants are equivalent and we have:
p_m = −s_n,  q_n = −r_m    (49)
Generalized to the matrix case, this leads to:
P_M = S_N,  Q_N = R_M.    (50)
Therefore, we also have S_N^{−1} R_M = P_M^{−1} Q_N.
The two sets of MPA are thus the same representation when m = n.
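Proposition 1 can also be checked numerically. The sketch below (our own illustration, again using scipy.interpolate.pade) builds the [5, 5] approximants of (1 − z)^{1/2} and (1 − z)^{−1/2} and confirms, up to floating-point error, that the numerator coefficients of one match the denominator coefficients of the other.

    import numpy as np
    from scipy.interpolate import pade
    from scipy.special import binom

    deg = 5
    ks = np.arange(2 * deg + 1)
    t_sqrt = binom(0.5, ks) * (-1.0) ** ks     # Taylor series of (1 - z)^{1/2}
    t_isqrt = binom(-0.5, ks) * (-1.0) ** ks   # Taylor series of (1 - z)^{-1/2}

    p_f, q_f = pade(t_sqrt, deg)               # P_M / Q_N approximates f(z)
    p_i, q_i = pade(t_isqrt, deg)              # R_M / S_N approximates f(z)^{-1}

    # Proposition 1: the numerator of one equals the denominator of the other
    print(np.allclose(p_f.coeffs, q_i.coeffs))  # expected: True
    print(np.allclose(q_f.coeffs, p_i.coeffs))  # expected: True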
B.3 Equivalence of Newton-Schulz Iteration
Proposition 2. The one-variable NS iteration of [9], [58] is equivalent to the two-variable NS iteration of [1], [2], [21].
Proof. For the two-variable NS iteration, the coupled iteration is computed as:
Y_{k+1} = (1/2) Y_k (3I − Z_k Y_k),  Z_{k+1} = (1/2) (3I − Z_k Y_k) Z_k    (51)
where Y_k and Z_k converge to A^{1/2} and A^{−1/2}, respectively. The two variables are initialized as Y_0 = A/||A||_F and Z_0 = I. Since the two variables satisfy the relation Z_k^{−1} Y_k = A/||A||_F, we can replace Y_k in eq. (51) with Z_k A/||A||_F:

Z_{k+1} = (1/2) (3I − Z_k^2 A/||A||_F) Z_k    (52)
Notice that A and Z_k share the same eigenspace, so their matrix product commutes, i.e., A Z_k = Z_k A. Therefore, the above equation can be further simplified as:
Z_{k+1} = (1/2) ( 3 Z_k − Z_k^3 A/||A||_F )    (53)
As indicated above, the two seemingly different NS iterations are in essence equivalent.
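The equivalence is easy to witness in code: running both iterations side by side on the same normalized input keeps the Z iterates identical at every step. The sketch below is our own illustration with an arbitrary SPD test matrix.

    import numpy as np

    rng = np.random.default_rng(3)
    G = rng.standard_normal((8, 8))
    A = G @ G.T + 8 * np.eye(8)
    I = np.eye(8)
    A_n = A / np.linalg.norm(A, 'fro')

    Y, Z2 = A_n.copy(), I.copy()   # two-variable iteration, eq. (51)
    Z1 = I.copy()                  # one-variable iteration, eq. (53)
    for _ in range(8):
        T = 0.5 * (3.0 * I - Z2 @ Y)
        Y, Z2 = Y @ T, T @ Z2
        Z1 = 0.5 * (3.0 * Z1 - Z1 @ Z1 @ Z1 @ A_n)
        print(np.linalg.norm(Z1 - Z2))   # ~0 at every step: the Z iterates coincide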
APPENDIX C BASELINES
In the experiment section, we compare our two proposed methods with the following baselines:

• Power Iteration (PI). It is suggested in the original So-ViT to compute only the dominant eigenpair.
• SVD-PI [17], which uses PI to compute the gradients of the SVD.
• SVD-Taylor [4], [18], which applies the Taylor polynomial to approximate the gradients.
• SVD-Padé [4], which closely approximates the SVD gradients using Padé approximants. Notice that our MTP/MPA used in the FP is fundamentally different from the Taylor polynomial or Padé approximants used in the BP of SVD-Padé. For our method, we use the Matrix Taylor Polynomial (MTP) and Matrix Padé Approximants (MPA) to derive the matrix square root in the FP. For SVD-Padé, scalar Taylor polynomials and scalar Padé approximants are used to approximate the gradient 1/(λ_i − λ_j) in the BP. That is to say, their aim is to use the technique to compute the gradient, and this does not involve back-propagation through the Taylor polynomial or Padé approximants.
• NS iteration [20], [21], which uses the Newton-Schulz iteration to compute the matrix square root. It has been widely applied in different tasks, including covariance pooling [3] and ZCA whitening [8]. We note that although [9] and [21] use different forms of the NS iteration, the two representations are equivalent to each other (see the proof in the paper).
The modified NS iteration in [9] just replaces Y_k with Z_k A and re-formulates the iteration using one variable. The computational complexity stays the same.
As the ordinary differentiable SVD suffers from the gradient explosion issue and easily causes the program to fail, we do not include it in the comparison.
Unlike previous methods such as the SVD and NS iteration, our MPA-Lya/MTP-Lya does not have consistent FP and BP algorithms. However, we do not think this brings any caveat to stability or performance. Our MTP and MPA do not need a coupled iteration in the FP and always have the gradient back-propagating through A^{1/2} or A^{−1/2} in the BP, which guarantees training stability. Moreover, our ablation study implies that our BP Lyapunov solver approximates the real gradient very well (i.e., ||B_k − I||_F < 3e−7 and ||0.5C_k − X||_F < 7e−6). Also, our extensive experiments demonstrate superior performance. In light of these experimental results, we argue that as long as the BP algorithm is accurate enough, the inconsistency between the BP and FP is not an issue.
APPENDIX D EXPERIMENTAL SETTINGS
All the source code is implemented in PyTorch. For the SVD methods, the forward eigendecomposition is performed on the CPU using the official PyTorch function TORCH.SVD, which calls LAPACK's routine gesdd that uses the divide-and-conquer algorithm for fast calculation. All the numerical tests are conducted on a single workstation equipped with a Tesla K40 GPU and a 6-core Intel(R) Xeon(R) CPU @ 2.20GHz.
For our method throughout all the experiments, in the forward pass we match the MTP to the power series of degree 11 and set the degree of both the numerator and denominator of our MPA to 5. We keep iterating 8 times for our backward Lyapunov solver. Now we turn to the implementation details for each experiment in the paper.

D.1 Decorrelated Batch Normalization

Suggested by [29], we truncate the Taylor polynomial to degree 20 for SVD-Taylor. To make the Padé approximant match the same degree as the Taylor polynomial, we set the degree of both the numerator and denominator to 10 for SVD-Padé. For SVD-PI, the number of iterations is also set to 20. For the NS iteration, following the setting in [3], [8], we set the number of iterations to 5. The other experimental settings follow the implementation in [18]. We use a workstation equipped with a Tesla K40 GPU and a 6-core Intel(R) Xeon(R) CPU @ 2.20GHz for training. Notice that in our previous conference paper, we first calculate the matrix square root A^{1/2} and then compute X_whitened by solving the linear system A^{1/2} X_whitened = X. Thanks to the algorithm extension to the inverse square root, in this paper we directly compute A^{−1/2}.
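To make the ZCA use case concrete, the sketch below shows decorrelated-BN-style whitening where the inverse square root is applied directly to the mini-batch covariance, as described above. It reuses the one-variable NS iteration only as a stand-in for the proposed solver; all names, shapes, and the eps regularizer are illustrative assumptions.

    import numpy as np

    def inv_sqrt(A, iters=8):
        # one-variable Newton-Schulz iteration for A^{-1/2}, cf. eq. (41)
        norm_a = np.linalg.norm(A, 'fro')
        Z, A_n = np.eye(A.shape[0]), A / norm_a
        for _ in range(iters):
            Z = 0.5 * (3.0 * Z - Z @ Z @ Z @ A_n)
        return Z / np.sqrt(norm_a)

    def zca_whiten(X, eps=1e-5):
        """ZCA-whiten a (d, n) mini-batch so the output covariance is ~identity."""
        Xc = X - X.mean(axis=1, keepdims=True)
        d, n = Xc.shape
        cov = Xc @ Xc.T / n + eps * np.eye(d)
        return inv_sqrt(cov) @ Xc             # direct A^{-1/2} X, no linear solve

    X = np.random.default_rng(4).standard_normal((16, 256))
    W = zca_whiten(X)
    print(np.linalg.norm(W @ W.T / 256 - np.eye(16)))   # ~0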
D.2 Second-order Vision Transformer
We use 8 Tesla G40 GPUs for distributed training with the NVIDIA Apex mixed-precision trainer. Except for the spectral layer, which uses single precision (i.e., float32), the other layers use half precision (i.e., float16) to accelerate training. Other implementation details follow the experimental settings of the original So-ViT [5]. Following the covariance pooling experiment for CNNs [4], the Taylor polynomial is truncated to degree 100 for SVD-Taylor, and the degrees of both the numerator and denominator of the Padé approximants are set to 50 for SVD-Padé. The number of iterations of SVD-PI is set to 100. In the covariance pooling experiment, more terms of the Taylor series are used because the covariance pooling meta-layer requires a more accurate gradient estimation [4].
For the SVD-based methods, double precision is usually required to ensure an effective numerical representation of the eigenvalues. Using a lower precision would make the model fail to converge at the beginning of training [4]. This is particularly severe for vision transformers, which are known to be slow and hard to converge in the early training stage. One may consider casting the tensor into double precision (64 bits) to alleviate this issue. However, this triggers much larger gradients and introduces round-off errors when the gradient is passed to the previous layer in half precision (16 bits). To avoid this caveat, we first apply the NS iteration to train the network for 50 epochs, then switch to the corresponding SVD method and continue training until the end. This hybrid approach avoids the non-convergence of the SVD methods at the beginning of the training phase.
D.3 Global Covariance Pooling
For the experiment on large-scale and fine-grained image recognition, we refer to [4] for all the experimental settings. In the video action recognition experiment [6], the number of NS iterations is set to 5. Other implementation details are unchanged.
D.4 Neural Style Transfer
For the loss functions, we follow the settings in [14] and use the cycle-consistent reconstruction loss in both the latent and pixel spaces. Images are resized to 216×216 before being passed to the network, and the model is trained for 100,000 iterations with a batch size of 4. Table 13 and Fig. 14 present the detailed quantitative evaluation and more visual comparisons, respectively. As suggested in [13], [38], we use the LPIPS [56] score and user preference as evaluation metrics. For the LPIPS metric, we compute the score between each pair of transferred image and content image. A higher LPIPS score implies that the image carries less content information but more style information. For the user study, we randomly select 100 images from each dataset and ask 20 volunteers to vote for the image that better characterizes the style information. In cases where a volunteer thinks none of the images correctly carries the style, he/she can abstain and not vote for any of them.
APPENDIX E COMPARISON OF LYAPUNOV SOLVER AGAINST IMPLICIT FUNCTION AND AUTOMATIC DIFFERENTIATION
Besides our proposed custom Lyapunov gradient solver, one may consider alternative gradient computation schemes, such as reverse-mode automatic differentiation (RMAD) and implicit functions (IF). For the RMAD, the backward pass takes roughly the same operation costs as the forward pass. Considering that our MPA uses two sets of matrix power polynomials and one matrix inverse, using RMAD for the gradient computation would be less efficient than the Lyapunov solver, which only involves matrix multiplications. Moreover, the gradients of some intermediate variables of the MPA would be calculated in the RMAD, which would further increase unnecessary memory costs. For the IF, the implicit function for the matrix square root can be defined via the constraint f(A, A^{1/2}) = 0. The memory usage of the IF should be small, since only the gradient of f is introduced in the computation. However, the time cost can be high due to the function gradient evaluations ∂f/∂A and ∂f/∂A^{1/2} as well as the matrix inverse computation. Table 14 compares the speed and memory consumption. Our Lyapunov solver outperforms both schemes in terms of speed and memory. The memory usage of the IF is competitive, which also meets our expectation. In general, our Lyapunov-based solver can be viewed as a well-optimized RMAD compiler with the least memory and time consumption.
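As a concrete sketch of how the Lyapunov solver slots in as a custom backward, the following minimal PyTorch illustration (our own, not the authors' released code) pairs an NS-iteration forward with the coupled Lyapunov iteration in the backward; the class name, iteration counts, and the test matrix are assumptions.

    import torch

    class NSMatrixSqrt(torch.autograd.Function):
        """NS-iteration forward, iterative Lyapunov solver backward (sketch)."""

        @staticmethod
        def forward(ctx, A, iters=8):
            I = torch.eye(A.shape[-1], dtype=A.dtype, device=A.device)
            norm_a = torch.norm(A, p='fro')
            Y, Z = A / norm_a, I.clone()
            for _ in range(iters):          # coupled NS iteration for (A/||A||_F)^{1/2}
                T = 0.5 * (3.0 * I - Z @ Y)
                Y, Z = Y @ T, T @ Z
            sqrt_a = torch.sqrt(norm_a) * Y
            ctx.save_for_backward(sqrt_a)
            return sqrt_a

        @staticmethod
        def backward(ctx, grad_out):
            (sqrt_a,) = ctx.saved_tensors
            I = torch.eye(sqrt_a.shape[-1], dtype=sqrt_a.dtype, device=sqrt_a.device)
            norm_b = torch.norm(sqrt_a, p='fro')
            B, C = sqrt_a / norm_b, grad_out / norm_b
            for _ in range(8):              # solves B X + X B = C; C_k -> 2X
                B2 = B @ B
                C = 0.5 * (-B2 @ C + B @ C @ B + C @ (3.0 * I - B2))
                B = 0.5 * B @ (3.0 * I - B2)
            return 0.5 * C, None

    # example usage on a random SPD matrix
    G = torch.randn(6, 6, dtype=torch.double)
    A = (G @ G.T + 6 * torch.eye(6, dtype=torch.double)).requires_grad_(True)
    NSMatrixSqrt.apply(A).sum().backward()
    print(A.grad.norm())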
APPENDIX F STABILITY OF PADÉ APPROXIMANTS
When spurious poles [60], [61] are present, Padé approximants are very likely to suffer from the well-known defects of instability. Spurious poles mean that when the approximated function has very close poles and zeros, the corresponding Padé approximants will also have close poles and zeros. Consequently, the Padé approximants become very unstable in the region of defects (i.e., when the input is in the neighborhood of such poles and zeros). Generalized to the matrix case, spurious poles can occur when the determinant of the matrix denominator is zero (i.e., det(Q_N) = 0).
However, in our case, the approximated function for the matrix square root is (1 − z)^{1/2} for |z| < 1, which only has one zero at z = 1 and does not have any poles. For the inverse square root, the approximated function (1 − z)^{−1/2} has one pole but does not have any zeros. Therefore, spurious poles do not exist in our approximation and our Padé approximants have no defects. We now briefly prove this claim for the matrix square root; the proof for the inverse square root is analogous and omitted for conciseness. Consider the denominator of our Padé approximants:
Q_N = I − Σ_{n=1}^{N} q_n (I − A/||A||_F)^n    (54)
Its determinant is calculated as:
det(Q_N) = Π_i ( 1 − Σ_{n=1}^{N} q_n (1 − λ_i/√(Σ_i λ_i^2))^n )    (55)
Let x_i denote (1 − λ_i/√(Σ_i λ_i^2)); then x_i lies in the range [0, 1]. Substituting the coefficients q_n of our [5,5] MPA, we have:
f(x_i) = 1 − 2.25 x_i + 1.75 x_i^2 − 0.54675 x_i^3 + 0.05859375 x_i^4 − 0.0009765625 x_i^5;  det(Q_N) = Π_i f(x_i).    (56)
The polynomial f(x_i) does not have any zero in the range x ∈ [0, 1]; its minimum is 0.0108672, attained at x = 1. This implies that det(Q_N) ≠ 0 always holds for any Q_N, i.e., our Padé approximants do not have any pole. Accordingly, there are no spurious poles or defects, and our MPA is deemed stable. Throughout our experiments, we did not encounter any instability issue with our MPA.
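The positivity of f on [0, 1] is easy to verify numerically; the one-off check below is our own illustration using the coefficients printed above.

    import numpy as np

    x = np.linspace(0.0, 1.0, 1_000_001)
    f = (1 - 2.25 * x + 1.75 * x**2 - 0.54675 * x**3
         + 0.05859375 * x**4 - 0.0009765625 * x**5)
    print(f.min(), x[np.argmin(f)])   # ~0.0108672 at x = 1.0, so f > 0 on [0, 1]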
Fig. 1: Exemplary visualization of the matrix square root and its inverse. Given the original data X ∈ R^{2×n}, the matrix square root performs an effective spectral normalization by stretching the data along the axes of small variance and squeezing the data in the directions of large variance, while the inverse square root transforms the data into an uncorrelated structure that has unit variance in all directions.

Fig. 2: The function (1 − z)^{1/2} in the range |z| < 1 and its approximations, including the Taylor polynomial, the Newton-Schulz iteration, and Padé approximants. The Padé approximant consistently achieves a better estimation than the other approximation schemes for any possible input value.

Fig. 3: Python-like pseudo-codes for the Padé coefficients. The coefficients q_n of the denominator polynomial Σ_{n=1}^{N} q_n z^n and p_m of the numerator polynomial Σ_{m=1}^{M} p_m z^m are pre-computed by matching to the corresponding Taylor series; given the scalar power series of eq. (4), the coefficients of an [M, N] scalar Padé approximant are matched to the series of degree M + N + 1.

Fig. 4: The comparison of speed and error in the FP for the matrix square root (left) and the inverse square root (right). Our MPA computes a more accurate and faster solution than the NS iteration, and our MTP enjoys the fastest calculation speed.

Fig. 5: The speed comparison in the backward pass. The figure compares the speed of our backward Lyapunov solver and the NS iteration versus different iteration times; the result is coherent with the complexity analysis.

Fig. 7: The speed comparison (left) and the error comparison (middle and right) for matrices of different dimensions. Our MPA-Lya is consistently faster and more accurate than the NS iteration across matrix dimensions. Since the SVD is accurate by default, the other approximate methods are compared against the SVD to measure the error.

Fig. 8: Overview of the GCP network [2], [3], [4] for large-scale and fine-grained visual recognition; the figure displays the architecture of a typical GCP network.

Fig. 11: Visual examples of the neural style transfer on the Artworks [57] dataset. Our methods generate sharper images with more coherent styles and better visual appeal. The red rectangles indicate regions with subtle details.

Fig. 13: The architecture changes of the ResNet models in the ZCA whitening experiment. The decorrelated batch normalization layer is inserted after the first convolutional layer; the kernel sizes, the stride of the first convolutional layer, and the stride of the first ResNet block are changed correspondingly.
TABLE 1: Summary of mathematical notation and symbols.
A^p       | Matrix p-th power.
I         | Identity matrix.
||·||_F   | Matrix Frobenius norm.
(n k)     | Binomial coefficient, calculated as n!/(k!(n−k)!).
vec(·)    | Unrolling a matrix into a vector.
⊗         | Matrix Kronecker product.
sign(A)   | Matrix sign function, calculated as A(A^2)^{−1/2}.
∂l/∂A     | Partial derivative of the loss l w.r.t. the matrix A.
TABLE 2: Comparison of forward operations. For the matrix square root and its inverse, our MPA/MTP consumes the same complexity. The cost of 1 NS iteration is about that of an MTP of 4 degrees and about that of an MPA of 2 degrees.
Op.       | MTP | MPA     | NS iteration
Mat. Mul. | K−1 | (K−1)/2 | 3 × #iters
Mat. Inv. | 0   | 1       | 0
TABLE 3: Comparison of backward operations. For the inverse square root, our Lyapunov solver uses marginally 3 more matrix multiplications. The cost of 1 NS iteration is about that of 2 iterations of the Lyapunov solver.
Op.       | Lya (Mat. Sqrt.) | Lya (Inv. Sqrt.) | NS iteration
Mat. Mul. | 6 × #iters       | 3 + 6 × #iters   | 4 + 10 × #iters
Mat. Inv. | 0                | 0                | 0
TABLE 4: Validation error of ZCA whitening methods. The covariance matrix is of size 1×64×64.
TABLE 5: Comparison of validation accuracy (%) on ImageNet [49] with ResNet-50 [47]. The covariance is of size 256×256×256, and the time consumption is measured for computing the matrix square root (FP+BP).
Methods      | Time (ms) | Top-1 Acc. | Top-5 Acc.
SVD-Taylor   | 2349.12   | 77.09      | 93.33
SVD-Padé     | 2335.56   | 77.33      | 93.49
NS iteration | 164.43    | 77.19      | 93.40
Our MPA-Lya  | 110.61    | 77.13      | 93.45
TABLE 6: Comparison of validation accuracy on fine-grained benchmarks with ResNet-50 [47]. The covariance is of size 10×64×64, and the time consumption is measured for computing the matrix square root (FP+BP).
Methods      | Time (ms) | Birds | Aircrafts | Cars
SVD-Taylor   | 32.13     | 86.9  | 89.9      | 92.3
SVD-Padé     | 31.54     | 87.2  | 90.5      | 92.8
NS iteration | 5.79      | 87.3  | 89.5      | 91.7
Our MPA-Lya  | 3.89      | 87.8  | 91.0      | 92.5
TABLE 7: Validation top-1/top-5 accuracy (%) on HMDB51.
TABLE 8: The LPIPS [56] score and user preference (%) on the Artworks [57] dataset. The covariance is of size 4×256×256. We measure the time consumption of the whitening and coloring transform, which is conducted 10 times to exchange the style and content features at different network depths.
Methods      | Time (ms) | LPIPS [56] (↑) | Preference (↑)
SVD-Taylor   | 447.12    | 0.5276         | 16.25
SVD-Padé     | 445.23    | 0.5422         | 19.25
NS iteration | 94.37     | 0.5578         | 17.00
Our MPA-Lya  | 69.23     | 0.5615         | 24.75
Our MTP-Lya  | 40.97     | 0.5489         | 18.50
TABLE 10: Performance of our MPA-Lya versus different degrees of power series to match.
Degrees | Time (ms) | ResNet-18 CIFAR10 (mean±std / min) | ResNet-18 CIFAR100 (mean±std / min) | ResNet-50 CIFAR100 (mean±std / min)
[3, 3]  | 0.80 | 4.64±0.11 / 4.54 | 21.35±0.18 / 21.20 | 20.14±0.43 / 19.56
[4, 4]  | 0.86 | 4.55±0.08 / 4.51 | 21.26±0.22 / 21.03 | 19.87±0.29 / 19.64
[6, 6]  | 0.98 | 4.45±0.07 / 4.33 | 21.09±0.14 / 21.04 | 19.51±0.24 / 19.26
[5, 5]  | 0.93 | 4.39±0.09 / 4.25 | 21.11±0.12 / 20.95 | 19.55±0.20 / 19.24
TABLE 11: Performance of our MPA-Lya versus different iteration times. The residual errors ||B_k − I||_F and ||0.5C_k − X||_F are measured on 10,000 randomly sampled matrices.
Methods | Time (ms) | ||B_k − I||_F | ||0.5C_k − X||_F | ResNet-18 CIFAR10 (mean±std / min) | ResNet-18 CIFAR100 (mean±std / min) | ResNet-50 CIFAR100 (mean±std / min)
TABLE 12: Performance comparison of SVD-Lya and NS-Lya.
Methods | Time (ms) | ResNet-18 CIFAR10 (mean±std / min) | ResNet-18 CIFAR100 (mean±std / min) | ResNet-50 CIFAR100 (mean±std / min)
SVD-Lya | 4.47 | 4.45±0.16 / 4.20 | 21.24±0.24 / 21.02 | 19.41±0.11 / 19.26
NS-Lya  | 2.88 | 4.51±0.14 / 4.34 | 21.16±0.17 / 20.94 | 19.65±0.35 / 19.39
MPA-Lya | 2.61 | 4.39±0.09 / 4.25 | 21.11±0.12 / 20.95 | 19.55±0.20 / 19.24
MTP-Lya | 2.46 | 4.49±0.13 / 4.31 | 21.42±0.21 / 21.24 | 20.55±0.37 / 20.12
TABLE 13: The detailed LPIPS [56] score and user preference (%) on each subset of the Artworks dataset.
Methods      | LPIPS [56] Score (↑): Cezanne / Monet / Vangogh / Ukiyoe / Average | User Preference (↑): Cezanne / Monet / Vangogh / Ukiyoe / Average
SVD-Taylor   | 0.4937 / 0.4820 / 0.6074 / 0.5274 / 0.5276 | 15 / 16 / 25 / 9 / 16.25
SVD-Padé     | 0.6179 / 0.4783 / 0.5307 / 0.5419 / 0.5422 | 28 / 13 / 15 / 21 / 19.25
NS iteration | 0.5328 / 0.5329 / 0.5386 / 0.6270 / 0.5578 | 11 / 18 / 21 / 18 / 17.00
Our MPA-Lya  | 0.6332 / 0.5291 / 0.4511 / 0.6325 / 0.5615 | 25 / 29 / 18 / 27 / 24.75
Our MTP-Lya  | 0.6080 / 0.4826 / 0.4796 / 0.6253 / 0.5489 | 17 / 21 / 17 / 19 / 18.50

Fig. 14: More exemplary visualizations on the Artworks [57] dataset. Our methods generate sharper images with more coherent styles and better visual appeal. The red rectangles indicate regions with subtle details.
TABLE 14: Backward time and memory comparison for batched matrices of size 64×64×64. We use MPA for the forward pass, and the evaluation is averaged over 1,000 randomly generated matrices.
Method   | Speed (ms) | Memory (MB)
Lyapunov | 2.19       | 1.99
RMAD     | 5.69       | 3.08
IF       | 4.71       | 2.03
REFERENCES
[1] T.-Y. Lin and S. Maji, "Improved bilinear pooling with CNNs," BMVC, 2017.
[2] P. Li, J. Xie, Q. Wang, and W. Zuo, "Is second-order information helpful for large-scale visual recognition?" ICCV, 2017.
[3] P. Li, J. Xie, Q. Wang, and Z. Gao, "Towards faster training of global covariance pooling networks by iterative matrix square root normalization," CVPR, 2018.
[4] Y. Song, N. Sebe, and W. Wang, "Why approximate matrix square root outperforms accurate SVD in global covariance pooling?" ICCV, 2021.
[5] J. Xie, R. Zeng, Q. Wang, Z. Zhou, and P. Li, "So-ViT: Mind visual tokens for vision transformer," arXiv preprint arXiv:2104.10935, 2021.
[6] Z. Gao, Q. Wang, B. Zhang, Q. Hu, and P. Li, "Temporal-attentive covariance pooling networks for video recognition," NeurIPS, 2021.
[7] Y. Song, N. Sebe, and W. Wang, "On the eigenvalues of global covariance pooling for fine-grained visual recognition," IEEE TPAMI, 2022.
[8] L. Huang, D. Yang, B. Lang, and J. Deng, "Decorrelated batch normalization," CVPR, 2018.
[9] L. Huang, Y. Zhou, F. Zhu, L. Liu, and L. Shao, "Iterative normalization: Beyond standardization towards efficient whitening," CVPR, 2019.
[10] L. Huang, L. Zhao, Y. Zhou, F. Zhu, L. Liu, and L. Shao, "An investigation into the stochasticity of batch whitening," CVPR, 2020.
[11] A. Siarohin, E. Sangineto, and N. Sebe, "Whitening and coloring batch transform for GANs," ICLR, 2018.
[12] A. Ermolov, A. Siarohin, E. Sangineto, and N. Sebe, "Whitening for self-supervised representation learning," ICML, 2021.
[13] Y. Li, C. Fang, J. Yang, Z. Wang, X. Lu, and M.-H. Yang, "Universal style transfer via feature transforms," NeurIPS, 2017.
[14] W. Cho, S. Choi, D. K. Park, I. Shin, and J. Choo, "Image-to-image translation via group-wise deep whitening-and-coloring transformation," CVPR, 2019.
[15] S. Choi, S. Jung, H. Yun, J. T. Kim, S. Kim, and J. Choo, "RobustNet: Improving domain generalization in urban-scene segmentation via instance selective whitening," CVPR, 2021.
[16] C. Ionescu, O. Vantzos, and C. Sminchisescu, "Training deep networks with structured layers by matrix backpropagation," arXiv preprint arXiv:1509.07838, 2015.
[17] W. Wang, Z. Dang, Y. Hu, P. Fua, and M. Salzmann, "Backpropagation-friendly eigendecomposition," NeurIPS, 2019.
[19] S. Lahabar and P. Narayanan, "Singular value decomposition on GPU using CUDA," 2009 IEEE International Symposium on Parallel & Distributed Processing, 2009, pp. 1-10.
[20] G. Schulz, "Iterative Berechnung der reziproken Matrix," ZAMM - Journal of Applied Mathematics and Mechanics / Zeitschrift für Angewandte Mathematik und Mechanik, vol. 13, no. 1, pp. 57-59, 1933.
[21] N. J. Higham, Functions of Matrices: Theory and Computation. SIAM, 2008.
[22] Y. Song, N. Sebe, and W. Wang, "Fast differentiable matrix square root," ICLR, 2022.
[23] C. Ionescu, O. Vantzos, and C. Sminchisescu, "Matrix backpropagation for deep networks with structured layers," ICCV, 2015.
[24] Z. Dang, K. M. Yi, Y. Hu, F. Wang, P. Fua, and M. Salzmann, "Eigendecomposition-free training of deep networks with zero eigenvalue-based losses," ECCV, 2018.
[25] Z. Dang, K. Yi, F. Wang, Y. Hu, P. Fua, and M. Salzmann, "Eigendecomposition-free training of deep networks for linear least-square problems," TPAMI, 2020.
[26] R. H. Bartels and G. W. Stewart, "Solution of the matrix equation AX + XB = C [F4]," Communications of the ACM, vol. 15, no. 9, pp. 820-826, 1972.
[27] T.-Y. Lin, A. RoyChowdhury, and S. Maji, "Bilinear CNN models for fine-grained visual recognition," ICCV, 2015.
[28] Q. Wang, P. Li, Q. Hu, P. Zhu, and W. Zuo, "Deep global generalized Gaussian networks," CVPR, 2019.
[29] Q. Wang, J. Xie, W. Zuo, L. Zhang, and P. Li, "Deep CNNs meet global covariance pooling: Better representation and generalization," TPAMI, 2020.
[30] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," NeurIPS, 2017.
[31] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly et al., "An image is worth 16x16 words: Transformers for image recognition at scale," ICLR, 2020.
[32] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," ICML, 2015.
[33] X. Pan, X. Zhan, J. Shi, X. Tang, and P. Luo, "Switchable whitening for deep representation learning," ICCV, 2019.
[34] L. Huang, Y. Zhou, L. Liu, F. Zhu, and L. Shao, "Group whitening: Balancing learning efficiency and representational capacity," CVPR, 2021.
[35] S. Zhang, E. Nezhadarya, H. Fashandi, J. Liu, D. Graham, and M. Shah, "Stochastic whitening batch normalization," CVPR, 2021.
[36] Y. Cho, H. Cho, Y. Kim, and J. Kim, "Improving generalization of batch whitening by convolutional unit optimization," ICCV, 2021.
[37] Y. Li, M.-Y. Liu, X. Li, M.-H. Yang, and J. Kautz, "A closed-form solution to photorealistic image stylization," ECCV, 2018.
[38] Z. Wang, L. Zhao, H. Chen, L. Qiu, Q. Mo, S. Lin, W. Xing, and D. Lu, "Diversified arbitrary style transfer via deep feature perturbation," CVPR, 2020.
[39] A. Abramov, C. Bayer, and C. Heller, "Keep it simple: Image statistics matching for domain adaptation," arXiv preprint arXiv:2005.12551, 2020.
[40] D. Ulyanov, A. Vedaldi, and V. Lempitsky, "Improved texture networks: Maximizing quality and diversity in feed-forward stylization and texture synthesis," CVPR, 2017.
[41] Q. Sun, Z. Zhang, and P. Li, "Second-order encoding networks for semantic segmentation," Neurocomputing, 2021.
[42] T. Dai, J. Cai, Y. Zhang, S.-T. Xia, and L. Zhang, "Second-order attention network for single image super-resolution," CVPR, 2019.
[43] W. Van Assche, "Padé and Hermite-Padé approximation and orthogonality," arXiv preprint math/0609094, 2006.
[44] J. D. Roberts, "Linear model reduction and solution of the algebraic Riccati equation by use of the sign function," International Journal of Control, vol. 32, no. 4, pp. 677-687, 1980.
[45] C. S. Kenney and A. J. Laub, "The matrix sign function," IEEE Transactions on Automatic Control, vol. 40, no. 8, pp. 1330-1348, 1995.
[46] P. Benner, E. S. Quintana-Ortí, and G. Quintana-Ortí, "Solving stable Sylvester equations via rational iterative schemes," Journal of Scientific Computing, vol. 28, no. 1, pp. 51-83, 2006.
[47] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," CVPR, 2016.
[48] A. Krizhevsky, "Learning multiple layers of features from tiny images," Master's thesis, University of Toronto, 2009.
[49] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, "ImageNet: A large-scale hierarchical image database," CVPR, 2009.
[50] P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona, "Caltech-UCSD Birds 200," California Institute of Technology, Tech. Rep. CNS-TR-2010-001, 2010.
[51] S. Maji, E. Rahtu, J. Kannala, M. Blaschko, and A. Vedaldi, "Fine-grained visual classification of aircraft," arXiv preprint arXiv:1306.5151, 2013.
[52] J. Krause, M. Stark, J. Deng, and L. Fei-Fei, "3D object representations for fine-grained categorization," 4th International IEEE Workshop on 3D Representation and Recognition (3dRR-13), Sydney, Australia, 2013.
[53] H. Kuehne, H. Jhuang, E. Garrote, T. Poggio, and T. Serre, "HMDB: A large video database for human motion recognition," ICCV, 2011.
[54] K. Soomro, A. R. Zamir, and M. Shah, "UCF101: A dataset of 101 human actions classes from videos in the wild," arXiv preprint arXiv:1212.0402, 2012.
[55] Y. Li, B. Ji, X. Shi, J. Zhang, B. Kang, and L. Wang, "TEA: Temporal excitation and aggregation for action recognition," CVPR, 2020.
[56] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang, "The unreasonable effectiveness of deep features as a perceptual metric," CVPR, 2018.
[57] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, "Image-to-image translation with conditional adversarial networks," CVPR, 2017.
[58] D. A. Bini, N. J. Higham, and B. Meini, "Algorithms for the matrix pth root," Numerical Algorithms, vol. 39, no. 4, pp. 349-378, 2005.
[59] G. A. Baker and J. L. Gammel, The Padé Approximant in Theoretical Physics. Academic Press, 1970.
[60] H. Stahl, "Spurious poles in Padé approximation," Journal of Computational and Applied Mathematics, vol. 99, no. 1-2, pp. 511-527, 1998.
[61] G. A. Baker, "Defects and the convergence of Padé approximants," Acta Applicandae Mathematica, vol. 61, no. 1, pp. 37-52, 2000.
Code: https://github.com/KingJamesSong/FastDifferentiableMatSqrt
X-ray Emission from the Interstellar and Circumgalactic Medium of Elliptical Galaxies based on MACER simulations

Aditi Vijayan (Shanghai Astronomical Observatory, Chinese Academy of Sciences, Shanghai 200030, People's Republic of China; Research School of Astronomy and Astrophysics, Australian National University, Canberra, ACT 2601, Australia), Bocheng Zhu (Shanghai Astronomical Observatory, Chinese Academy of Sciences; University of Chinese Academy of Sciences, 19A Yuquan Road, Beijing 100049, People's Republic of China), Miao Li (Zhejiang University), Feng Yuan (Shanghai Astronomical Observatory, Chinese Academy of Sciences; University of Chinese Academy of Sciences), and Luis C. Ho (Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871, People's Republic of China; Department of Astronomy, School of Physics, Peking University)

MNRAS, preprint compiled 25 October 2022 using the MNRAS LaTeX style file v3.0.

Keywords: Active galactic nuclei (16); Circumgalactic medium (1879); X-ray observatories (1819); Elliptical galaxies (456); Hydrodynamical simulations (767)

ABSTRACT
The interstellar medium (ISM) and circumgalactic medium (CGM) around galaxies are linked to several physical processes that drive galaxy evolution. For example, the X-ray emission from the CGM gas around ellipticals has been linked to the AGN feedback occurring in the host. Upcoming telescopes such as HUBS, with ∼1 eV resolution, can provide deep insights into the hot gas properties of such galaxies and thus constrain these processes. In this project, we discuss the X-ray emission of the ISM and CGM of elliptical galaxies simulated using the MACER code. We generate X-ray emission data from MACER simulations with various feedback models and produce mock observations for an instrument with high spectral resolution, a necessary step in selecting sources for future observations with planned missions such as HUBS. More importantly, we establish connections between the physics of AGN and stellar feedback and the emission spectra from the ISM and CGM, to investigate the possibility of using observations to constrain feedback models. We fit the X-ray spectra from these simulations with standard fitting procedures and compare the retrieved physical properties with their counterparts from the simulations, to understand whether future high-resolution observations can reliably reveal the properties of the gas in the galaxies.
INTRODUCTION
Feedback processes occurring in a galaxy as a result of the activity of the central Active Galactic Nucleus (AGN) are known to affect the evolution of the host galaxy. These processes are driven by the interaction of the radiation, wind, and jet outputs from the AGN with the gas in the galaxy. Energetically, a large fraction (∼10%) of the mass accreted onto the supermassive black hole (SMBH) is converted into energy for powering the AGN. Because this energy is significantly larger than the binding energy of the gas in the galaxy, an AGN strongly impacts the gas distribution of the host galaxy (Fabian 2012).
Several observations indicate the effect an AGN has on its host. For relatively larger objects, such as clusters and groups, scalings between the black hole mass (M•) and the halo temperature are explained only by invoking AGN feedback as a source of additional heating of the gas (Bogdán et al. 2018; Gaspari et al. 2019; Lakhchaura et al. 2019). In early-type galaxies (ETGs), which are the focus of this paper, observations have established a steep and tight correlation between the total halo X-ray luminosity (L_X) and the X-ray temperature, T_X, which is hard to explain using the self-similarity argument alone (Boroson et al. 2011; Kim & Fabbiano 2013, 2015; Babyk et al. 2018). Further, it has been established that the hot atmospheres of ETGs cannot be created through the action of stellar feedback alone (Goulding et al. 2016; Werner et al. 2019). These clues suggest that AGN feedback plays a significant role in moulding the properties of the hot atmospheres around ETGs. In numerical simulations, AGN feedback is implemented via two modes. The nomenclature for these modes is diverse in the literature; throughout this work, we refer to them as "cold" and "hot" modes, following Yuan et al. (2018b). The cold mode, also referred to as the "quasar mode" or "radiative mode", operates in AGNs which are powered by a cold accretion flow when the accretion rate is above ∼2% of the Eddington rate (Ṁ_Edd ≡ 10 L_Edd/c^2). The main outputs in this mode are radiation and wind (Morganti 2017; Yuan et al. 2018b).
The hot mode, also called the kinetic, jet, radio, or mechanical mode, operates in AGNs which are powered by a hot accretion flow when the mass accretion rate is below ∼2% Ṁ_Edd. The outputs of the AGN in this mode are radiation, wind, and jet (Yuan et al. 2018b). It has been suggested that the wind launched in this mode may prevent the gaseous atmosphere from cooling and forming stars, thereby maintaining the quiescent state of the galaxy (Yao et al. 2021; see also Zhu et al. 2022 for details), or even inducing quenching of the galaxy (Weinberger et al. 2017). Observationally, this mode has been identified in clusters from the presence of cavities in the X-ray emission which are filled with radio emission.
Unfortunately, works focused on simulating AGN feedback are quite diverse, in the sense that different works adopt different models and not all of the above-mentioned AGN physics has been properly incorporated (Naab & Ostriker 2017). Some simulations successfully reproduce key observations such as the presence of buoyant cavities (Gaspari et al. 2012, 2014) and the M•-σ relation (Sijacki et al. 2007; Booth & Schaye 2011; Teyssier et al. 2011; Choi et al. 2015). One aspect where tension persists between simulations and observations is the X-ray emission properties of systems hosting an AGN.
Diffuse soft X-ray emission (≲2 keV) originates from hot gas within and around a galaxy. Such diffuse emission has been observed and studied around star-forming galaxies (Yamasaki et al. 2009; Anderson & Bregman 2011; Dai et al. 2012; Anderson et al. 2016; Bogdán et al. 2013a,b, 2017; Lopez et al. 2020) as well as more massive galaxies (Anderson et al. 2015; Kim & Fabbiano 2015). The emission is usually characterised by the total X-ray luminosity (L_X) and the temperature of the emitting gas, T_X, estimated by fitting the X-ray spectrum with emission models. Simulations of early-type galaxies (ETGs) (Gaspari et al. 2012, 2014) report a lower than expected L_X and a break in the L_X-T_X relation, neither of which is corroborated by observations (Babyk et al. 2018). Choi et al. (2015) analysed separately the effects of the two feedback mechanisms and found that their quasar (thermal) mode of feedback overestimates L_X by nearly two orders of magnitude. Cosmological simulations such as IllustrisTNG (Truong et al. 2020b) and EAGLE (Schaye et al. 2015) also report brighter than observed X-ray emission as a result of their respective AGN feedback recipes (Davies et al. 2019).
Apart from global X-ray properties such as L_X, the X-ray emission spectrum carries a wealth of information about the temperature and the chemical abundance of the hot emitting gas. Usually, spectral fitting models, based on simplifying assumptions about the underlying emitting plasma, are used to derive the temperature from the spectrum (T_spec). Studies of simulated emission from clusters have found that the spectrum-derived temperature T_spec may differ from the emission-weighted temperature (T_ew), leading to an underestimation of up to ∼20% (Mazzotta et al. 2004; Rasia et al. 2005; Vikhlinin 2006). Spectrum-derived temperature estimates of the diffuse gas around galaxies also depend on whether the fitting models use single- or multi-temperature components (Li & Wang 2013; Wu et al. 2020).
Previous X-ray studies of the diffuse gas around isolated galaxies have been limited by the spectral resolution, the sensitivity, and the size of the field of view of the telescopes. Upcoming X-ray missions such as the Hot Universe Baryon Survey (HUBS) are specifically designed to study the hot, diffuse gas around galaxies, groups, and clusters (Cui et al. 2020). With respect to AGN feedback occurring in ETGs, this presents an opportunity to study the observational effects of the hot and cold feedback mechanisms on the circumgalactic medium (CGM) around such galaxies. Further, a comparison with observations can indicate how well various AGN feedback recipes, implemented in different simulations, emulate the physical processes occurring in nature, and whether some of the discrepancies with respect to observations can be removed.
In this project, we aim to understand the X-ray properties of the ISM within and the CGM around isolated elliptical galaxies. Specifically, we analyse the relationship between an estimate of the X-ray temperature obtained from the emission spectrum and the physical temperature of the gas in the galaxy. We analyse results from the high-resolution simulations presented in Yuan et al. (2018b), which follow the evolution of an isolated elliptical galaxy (i.e., not considering galaxy mergers and cosmological inflow onto the galaxy) based on the MACER code. The two main features of the model are: 1) the black hole accretion rate is precisely determined, because the inner boundary of the simulation is typically ten times smaller than the Bondi radius of the accretion flow; and 2) state-of-the-art AGN physics is incorporated into the code, including the wind and radiation as functions of the mass accretion rate. We briefly introduce the key components of the model in Section 2. In Section 3, we discuss the major results of our analysis, and we present the major conclusions in Section 4.
MODELS
In this section we give a brief summary of the MACER simulations presented in Yuan et al. (2018b); readers are also referred to Yuan et al. (2018a) for an overview of the MACER code. MACER is a set of 2D, axisymmetric simulations which focus on the evolution of a single elliptical galaxy, with the inner and outer boundaries being ∼2 pc and 500 kpc, respectively. A resolution as high as ∼0.3 pc is achieved at the inner boundary. The Bondi radius of the accretion flow is typically ∼15 pc (Yao et al. 2021), several times larger than the inner boundary of the simulation, so the outer region of the accretion flow is well resolved. Once the accretion rate at the inner boundary of our simulation domain is calculated, we can safely combine it with the accretion physics as a subgrid model and precisely calculate the mass accretion rate of the AGN at the black hole horizon. This is crucial for determining the strength of the AGN feedback.
According to the value of the mass accretion rate, the accretion is divided into "cold" and "hot" modes, with the boundary at ∼2% Ṁ_Edd (Yuan & Narayan 2014). Radiation and wind are present in both modes, while a jet is perhaps present only in the hot mode. The properties of the wind in the cold mode, including the velocity and mass flux as a function of AGN luminosity, are taken from observations (Gofford et al. 2015). Wind in the hot mode has been intensively studied in recent years by magnetohydrodynamical numerical simulations (Yuan et al. 2012; Narayan et al. 2012; Yuan et al. 2015; Yang et al. 2021). On the observational side, we are accumulating more and more evidence for winds from hot accretion flows, including the supermassive black hole in our Galactic center (Ma et al. 2019) and low-luminosity AGNs (Cheung et al. 2016; Park et al. 2019; Shi et al. 2021, 2022). In particular, Shi et al. (2021, 2022) have found the most direct evidence for winds in two prototypical low-luminosity AGNs by detecting blue-shifted emission lines. Still, the properties of the hot wind are not yet well constrained, and we therefore adopt the wind properties from the GRMHD simulations of Yuan et al. (2015).
Wind and radiation are injected at the inner boundary of the simulation domain as boundary conditions. Their energy and momentum interactions with the gas in the host galaxy are calculated self-consistently in the code. In addition to AGN feedback, the code also includes star formation and stellar feedback. We follow the evolution of the galaxy for ∼2.5 Gyr, up to the present redshift.
We consider four different models in the present work, as listed in Table 1, with different AGN feedback models and galaxy properties. The first, "Fiducial" model is taken directly from Yuan et al. (2018b), in which the AGN feedback physics mentioned above has been properly incorporated; specifically, both the "hot" and "cold" modes are used according to the accretion rate. In the "Only Cold" ("Only Hot") model, irrespective of the accretion rate, we always adopt the physics of the cold (hot) mode for the descriptions of wind and radiation. In the "σ300" model, the AGN physics is identical to the Fiducial model, but we change the value of the velocity dispersion of the galaxy from 200 km s−1 to 300 km s−1. Using these four models, we mimic the effects of different AGN feedback physics and galaxy sizes. One main caveat of these models is that we do not consider the effects of cosmological inflow; it is expected to affect the physical properties of the CGM of the galaxy, and we will consider this effect in future work.
RESULTS
Density and Temperature Distribution
We first discuss the temperature and density distributions of the gas in the different models. Figures 1 and 2 show the radial distributions of the density and the emission-weighted temperature in the four models. Each curve in every panel represents one time step in the simulation.

As expected, the density profiles decline outward radially. Up to ∼200 kpc, the profiles are similar to those expected in hydrostatic equilibrium. The profiles for the "Only Hot" and "Only Cold" models are flat within 1 kpc and drop by five orders of magnitude beyond this radius. The "Fiducial" and "σ300" models have monotonically declining profiles up to 200 kpc. We also note that the density profiles in the inner region (r < 10 kpc) of the "Only Hot" and "Only Cold" models vary significantly in time, by up to nearly four orders of magnitude. In the outer regions, r > 10 kpc, this feature is reversed: in the "Only Hot" and "Only Cold" models there is hardly any variation in the density profiles, while for the "Fiducial" and "σ300" models there is greater variation. We define the emission-weighted temperature as
T_ew = ∫ n^2 Λ(T) T dV / ∫ n^2 Λ(T) dV ,    (1)
where Λ(T) is the X-ray emissivity between 0.3 and 5.0 keV, T is the gas temperature, and n is the gas number density. The radial profiles for the four different models are shown in Figure 2. As in Figure 1, each curve represents a different time step in the evolution. Unlike the density profiles, the temperature profiles vary across the radial extent, from 0.1 to 3 keV. The general trends of density and temperature of the "Fiducial" and "σ300" models shown in the above two figures are not difficult to understand, as they are similar to the profiles expected in hydrostatic equilibrium. However, for the "Only Cold" and "Only Hot" models, we can see that the density (temperature) rapidly increases (decreases) inward within ∼1 kpc. As shown in Yuan et al. (2018b), the main energy input from the AGN is by wind. The significantly smaller temperature (and thus larger density) in the "Only Cold" and "Only Hot" models compared to the Fiducial model arises because the energy input from the wind is much weaker in the former two cases. For the "Only Cold" model, compared to the Fiducial model, the wind power remains the same when the accretion is in the cold mode; but when the accretion rate is low and the accretion is in the hot mode, the wind described by the "cold-mode physics" is weaker than that described by the "hot-mode physics". Similarly, for the "Only Hot" model, the wind power remains the same when the accretion rate is low; but when the accretion rate is high, the wind described by the "hot-mode physics" is weaker than that described by the "cold-mode physics". The density and temperature distributions of the gas determine the X-ray luminosity of the system. We show the soft X-ray luminosity integrated over the whole galaxy plotted against the simulation time in Figure 3. To obtain the luminosity from the simulation data, we use APEC emissivity tables between 0.3 and 5.0 keV. We show the luminosity from the entire simulation domain (left panel) as well as that from the ISM, r < 10 kpc. For all the models, the stochasticity in the density and temperature profiles translates into temporal variations in the luminosity.
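For concreteness, eq. (1) amounts to the following weighted average over simulation cells. This is a schematic sketch only: the array names and the emissivity callable are our own assumptions, not the authors' pipeline.

    import numpy as np

    def emission_weighted_temperature(n_gas, T, dV, emissivity):
        """Eq. (1): emission-weighted temperature over simulation cells.

        n_gas, T, dV : per-cell number density, temperature, and volume arrays;
        emissivity   : callable Lambda(T) for the 0.3-5.0 keV band.
        """
        w = n_gas**2 * emissivity(T) * dV     # per-cell X-ray weight
        return np.sum(w * T) / np.sum(w)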
Given the high stochasticity of the density and temperature profiles of the "Only Hot" and "Only Cold" models in the ISM, there is significant variation in the total luminosity from this region. Further, these models have higher luminosity because, on average, they have higher density values, especially in the ISM of the galaxy.
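For concreteness, Equation 1 reduces to a weighted average over simulation cells. A minimal numpy sketch is given below; the array names are ours, and Lambda is assumed to be a callable emissivity table for the 0.3−5.0 keV band.

```python
# Hedged sketch of Eq. (1). Inputs are 1D per-cell arrays: gas temperature
# T [K], number density n [cm^-3], cell volume dV [cm^3], and an emissivity
# function Lambda(T) for the 0.3-5.0 keV band.
import numpy as np

def emission_weighted_temperature(T, n, dV, Lambda):
    w = n**2 * Lambda(T) * dV          # per-cell emission weight
    return np.sum(w * T) / np.sum(w)
```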
Extracting Pure Emission Spectra
We use pyAtomDB (Foster & Heuer 2020) to generate emission spectra from a parcel of gas over a wide range of temperature, density, and metallicity. pyAtomDB (Heuer et al. 2021) is a database for modelling the X-ray emission from collisionally-ionised plasma. Users have the option to convolve the generated spectra with an instrument response to produce a realistic spectrum. In this Section, we discuss the spectrum-derived temperature, T_spec, for low- and high-spectral-resolution instruments.
We use the density, temperature, and metallicity information from each cell in the ISM region (r < 10 kpc) of the simulation domain as an input for pyAtomDB and generate a spectrum for every cell. For the low-resolution T_spec, we set the instrument response in the pyAtomDB session to that of ACIS (on the Chandra telescope, having a resolution of ∼130 eV), and for the high-resolution T_spec, we use the response for the HUBS telescope (see footnote 3). We then add up the spectra emitted by each cell to produce a single spectrum for the entire simulation domain. Figure 4 shows the high-resolution spectra for the four models after 1.25 Gyr of evolution. As expected, the emission decreases with increasing energy.
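In practice, the per-cell spectrum summation can be sketched as follows. The pyAtomDB calls approximate its CIESession interface, so exact signatures should be checked against the pyAtomDB documentation; the response files and the `cells` iterable are placeholders.

```python
# Hedged sketch: sum per-cell APEC spectra into one spectrum for the domain.
# The CIESession API (set_response, set_abund, return_spectrum) is assumed;
# "response.rmf"/"response.arf" and `cells` are placeholders.
import pyatomdb

sess = pyatomdb.spectrum.CIESession()
sess.set_response("response.rmf", "response.arf")

total_spec = 0.0
for T_keV, n_e, n_H, Z, vol in cells:        # per-cell (T, n_e, n_H, Z, V)
    sess.set_abund(range(1, 31), Z)          # scale metal abundances by Z
    spec = sess.return_spectrum(T_keV)       # ph cm^3 s^-1 bin^-1
    total_spec = total_spec + spec * n_e * n_H * vol   # emission measure
```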
We extract the temperature of the X-ray emitting gas from the spectra, which is an observable quantity obtained by fitting models to the observed spectrum. We follow the recipe used by observers to extract a spectral temperature, T_spec. We fit the spectra using a 1-T fitting model, similar to the fitting methods described in Truong et al. (2020b), but with some minor modifications described below. The 1-T model assumes that the observed spectrum is the result of emission from gas at a single temperature. Using temperature as a free parameter, we fit the simulation spectrum for every time step of the simulation. We compare the simulation spectrum with a set of ideal spectra generated using pyAtomDB. To estimate the best-fit temperature, we minimise a statistical quantity, called the Wasserstein distance (Rubner et al. 1998), between the simulation spectrum and the ideal spectrum generated at each temperature. The temperature corresponding to the minimum distance is taken to be the best-fit temperature, which we denote as T_spec.
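The 1-T fit can be realised as a grid search. The sketch below uses scipy's wasserstein_distance; the temperature grid and spectra containers are our own assumptions.

```python
# Hedged sketch: scan a temperature grid and pick the ideal spectrum
# closest to the simulated one in Wasserstein distance.
import numpy as np
from scipy.stats import wasserstein_distance

def fit_tspec(energies, sim_spec, model_spectra, temps):
    """Return the grid temperature whose ideal spectrum is closest to
    sim_spec; model_spectra[i] is the ideal spectrum at temps[i]."""
    dists = [wasserstein_distance(energies, energies,
                                  u_weights=sim_spec, v_weights=model)
             for model in model_spectra]
    return temps[int(np.argmin(dists))]
```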
In Figure 4, we show the simulation spectrum from the ISM as well as the corresponding spectrum from the best-fit single-temperature model after ∼1.25 Gyr of evolution. The solid curves in each panel represent the spectrum from the simulations, while the dashed curves are the best-fit spectra obtained using the procedure outlined above, with the fitting temperature in the bottom-left corner. The spectra obtained from the 1-T model provide a reasonable fit to the simulation data, even though the model is quite simplistic. However, it is not a perfect fit: e.g., it misses low-temperature emission (<1.0 keV) in the Fiducial model and relatively high-energy emission (>1.6 keV) in the σ300 model. We note that the spectra corresponding to the σ300, "Only Cold", and "Only Hot" models show a similar hump-like feature around 0.8−1.0 keV, while the Fiducial run does not. Note that the fitting temperatures are different for all four models.
Comparing Spectral Temperature with Emission-Weighted Temperature
The physical interpretation of T_spec is that it represents the emission-weighted temperature of the gas. To assess how accurate this interpretation is, we compare T_spec with the emission-weighted temperature, T_ew, calculated using Equation 1. From the simulation data, we know the emission-weighted temperature exactly. For each time step, we now have a pair of temperatures, T_ew and T_spec. Though both temperatures are obtained from the simulation data, T_spec is the temperature derived from the spectra, while T_ew is the actual temperature of the gas. Such a comparison between these two different temperatures is not possible with observations. Figures 5 and 6 show scatter plots between T_ew and T_spec for the ISM and the CGM (r > 10 kpc), respectively. The spectrum from the ISM (CGM) is obtained by including only those cells that lie within (outside) 10 kpc (see footnote 4). We obtain T_spec from the spectra for high-resolution (magenta points) and low-resolution (navy points) instruments, for several time steps. If the two temperatures were identical, they would fall on the solid black line, which represents the equality T_spec = T_ew. In the figures, we also show lines corresponding to 2 T_ew and 4 T_ew. While there exist some time steps when the predicted T_spec is equal or close to the physical T_ew, the equality does not hold for most time steps during the simulation. Overall, we note that for both the inner and outer CGM, the predicted T_spec overestimates T_ew by up to a factor of 4. There are considerable differences in the T_spec-T_ew distribution between the ISM and the CGM. For the CGM, the T_spec values predicted by the lower-resolution instrument are close to their higher-resolution counterparts, as the navy and magenta points overlap for several time steps, while for the ISM the two predict different T_spec.
To understand the source of the discrepancy between T_spec and T_ew, we show the luminosity-weighted temperature-density histograms of two time steps in Figure 7. We have chosen these particular time steps from Figure 5, where they are shown as black and teal squares in the panel for the Fiducial run. The teal square corresponds to a time step (0.275 Gyr) for which T_spec > T_ew, while for the black square (1.64 Gyr) the two temperatures are nearly identical. For Figure 7, we distribute the gas within 0.1−10 kpc into several temperature and density bins, weighted by the soft X-ray luminosity. The gas distribution corresponding to the time step for which T_spec > T_ew (teal point in Figure 5) has a wide temperature range (10^3 K < T < 10^10 K). The luminosity is dominated by gas at 10^{7−10} K, and therefore the spectral temperature is ∼2 × 10^7 K (≈2.0 keV). However, the galaxy hosts a significant amount of mass at much lower temperatures. While this lower-temperature gas (T < 10^6 K) does not contribute significantly to the total luminosity, it contributes towards lowering T_ew. As a result, T_spec is greater than T_ew.
The right panel of Figure 7 corresponds to a time step for which T_spec predicts T_ew correctly (both equal to ∼1.9 keV). At this time step, the luminosity distribution in the temperature-density plane is relatively narrow, and though there is mass at higher temperatures (>10^6 K), it does not dominate the emission. As a result, the two temperatures are close to each other. The reason there is a copious amount of extremely hot gas at 0.275 Gyr is that it corresponds to a local peak in the luminosity. We conclude that T_spec, estimated using the single-temperature spectral fitting model, reliably predicts T_ew only when the gas in the galaxy possesses a narrow temperature distribution.
Fitting Log Normal Model to Spectra
In the previous Section, we showed that the discrepancy between T_spec and T_ew is considerable if the actual temperature distribution of the underlying gas spans several orders of magnitude. In such a case, fitting the spectrum with a single-temperature emission model is not entirely physical. It has been suggested that a log-normal temperature distribution is better suited to represent gas that occupies a wide range in temperature (Vijayan & Li 2022). To assess this, we re-fit the spectrum at the time step, represented by the teal square in Figure 5, at which there is a large discrepancy between T_spec and T_ew.
As indicated in the left panel of Figure 7, the X-ray emitting gas lies between ∼10^{7−10} K. To obtain a spectrum from a log-normal distribution, we construct a box with dimensions identical to those of the simulation domain. The values of the density and metallicity fields are the same as those of the simulation box. For every cell in the box, we draw a temperature value from a log-normal number distribution with a peak temperature of ∼2 × 10^7 K and a width of 0.4 dex in temperature. We choose the peak temperature from the mass-weighted temperature distribution of the gas, and the width of the log-normal is identical to that used in the analysis of Vijayan & Li (2022). We follow the procedure described in Section 3.2 to extract the spectrum from the log-normal temperature distribution.
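The temperature assignment itself is a one-liner; a minimal sketch (the peak and width follow the values quoted above, and ncells is an assumed cell count) is:

```python
# Hedged sketch: per-cell temperatures from a log-normal distribution with
# peak 2e7 K and width 0.4 dex; ncells is a placeholder.
import numpy as np

rng = np.random.default_rng(0)
log_T = rng.normal(loc=np.log10(2.0e7), scale=0.4, size=ncells)
T_cells = 10.0**log_T              # temperatures in K, one per cell
```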
In Figure 8, we show the spectrum from the simulation (solid curve), along with the single-temperature fit and the spectrum from the log-normal temperature distribution (dashed curves). We note that for this particular time step, T_spec from the single-temperature fit is ∼3 keV, while T_ew ∼ 0.3 keV. As can be seen from Figure 8, the single-temperature fit is unable to reproduce the simulation spectrum at both high and low energies. However, the spectrum generated from the log-normal distribution reproduces the simulation spectrum well across the entire energy range, because it is able to capture the wide temperature distribution of the gas.
Mock HUBS Spectrum
In Figure 9, we show the high-resolution spectra for the Fiducial and "Only Cold" models at the same time step, fit by nearly identical single-temperature models (T_spec ∼ 17 MK). Despite the identical fitting temperature, the spectra are not identical. The differences are especially stark in the 0.7−1.0 and 1.6 keV ranges. The features in the 0.7−1.0 keV range are associated with Fe-line complexes, which are very sensitive to the gas temperature (Böhringer & Werner 2010). To explore whether such features in the spectrum can be used as a diagnostic for discriminating between AGN models and galaxy sizes, we estimate the fractional 0.7−1.2 keV line-width, which is the ratio of the counts in 0.7−1.2 keV to the counts in the full energy range (0.3−2.0 keV), for all the models. To understand how this quantity changes with radius, we show it as a function of radius in Figure 10. The binning in radius is narrower (10 kpc wide) for r < 60 kpc and wider (100 kpc) for larger radii. This is because the temperature and density profiles show less variation at larger radii (Figures 2 and 1). The line-width ratios have been averaged over all the time steps.
The fractional line-width for σ300 shows a trend vastly different from the other three models. Unlike the other three models, at radii close to the centre the fractional line-width is relatively small. It peaks at around ∼20 kpc, falls off up to 60 kpc, and increases thereafter. The "Only Hot" and "Only Cold" models show trends similar to each other. Their fractional line-width is strongest close to the centre, which has the highest density and temperature values, and it flattens out at larger radii. The Fiducial fractional line-width remains nearly flat over the entire radial domain. From this figure, we conclude that the fractional line-width should be able to distinguish between the different AGN feedback models and galaxy sizes.
Simulating HUBS Emission
We convert the 2D spherical data from the simulations into 3D Cartesian data using the assumption of axisymmetry. We do this conversion in order to use the python package pyXSIM for modelling X-ray emission from the simulation data set. Because this 2D-to-3D conversion is computationally expensive, we use a much lower resolution for the Cartesian data. Using a distance of 15 Mpc and an exposure time of 300 ks, we generate a mock HUBS image of the galaxy for the Fiducial model. We excise the innermost 600 pc of the galaxy, as it is excessively bright, and produce X-ray emission from the gas within 300 kpc of the galaxy. We provide arbitrary RA and Dec values and turn off contributions from the various backgrounds and foregrounds. We select a time step (1.25 Gyr, halfway through the simulation) and generate a mock image based on the underlying density and temperature distribution at this time step. The resulting image from this synthetic observation is shown in Figure 11. As expected, the region close to the centre of the galaxy is the brightest, and the brightness decreases radially outwards. This is expected, since most of the volume of the outer CGM is filled with low-density gas (top-right panels of Figures 1 and 2). We therefore expect the X-ray brightness to fall off sharply away from the centre. At the scale shown in Figure 11, the distinguishing features in the gas density and temperature profiles are smoothed out.
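The mock-observation pipeline can be sketched with pyXSIM and SOXS. The calls below approximate their documented interfaces; the dataset name, sky coordinates, collecting area, absorption column, and the "hubs" instrument key are all assumptions, and signatures should be checked against the packages' documentation.

```python
# Hedged sketch of the mock HUBS observation: 300 ks exposure, gas within
# 300 kpc, source placed at a distance of ~15 Mpc (z ~ 0.0035).
import yt
import pyxsim
import soxs

ds = yt.load("fiducial_3d.h5")                    # hypothetical 3D data file
sp = ds.sphere("c", (300.0, "kpc"))               # gas within 300 kpc

# APEC thermal emission between 0.3 and 2.0 keV
src = pyxsim.CIESourceModel("apec", 0.3, 2.0, 1000, ("gas", "metallicity"))
pyxsim.make_photons("photons", sp, 0.0035,
                    (3000.0, "cm**2"), (300.0, "ks"), src)
pyxsim.project_photons("photons", "events", "z", (45.0, 30.0),
                       absorb_model="tbabs", nH=0.02)

events = pyxsim.EventList("events.h5")
events.write_to_simput("fiducial", overwrite=True)
soxs.instrument_simulator("fiducial_simput.fits", "mock_hubs.fits",
                          (300.0, "ks"), "hubs",  # instrument key assumed
                          (45.0, 30.0), overwrite=True)
```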
We repeat the process and produce mock images for all the models discussed in this paper. For each of these runs, we radially average the surface brightness profile and obtain a radial intensity plot. We divide the pixels of the mock image into 100 equally spaced annular regions and sum up the total number of photon counts in each region. We divide the total photon count by the area of the annular region to normalise it and plot it against the physical distance along the radius of the galaxy. We show the annulus-averaged brightness in Figure 12. Though the T_spec values for the "Only Cold" and "Only Hot" models are similar (18 and 14 MK, respectively), their radial profiles have different slopes, indicating that radial surface brightness profiles might hold clues to the mode of accretion taking place in the galaxy's AGN.
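The annulus averaging can be done directly with histograms; the sketch below (function and argument names ours) bins pixels by radius and divides summed counts by the annulus area in pixels.

```python
# Sketch: annulus-averaged surface brightness from a 2D counts image.
import numpy as np

def radial_profile(image, center, nbins=100):
    y, x = np.indices(image.shape)
    r = np.hypot(x - center[0], y - center[1])
    edges = np.linspace(0.0, r.max(), nbins + 1)
    counts, _ = np.histogram(r, bins=edges, weights=image)  # counts/annulus
    npix, _ = np.histogram(r, bins=edges)                   # pixels/annulus
    return 0.5 * (edges[1:] + edges[:-1]), counts / np.maximum(npix, 1)
```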
DISCUSSION & CONCLUSIONS
Comparison with Other Works
We have undertaken a systematic study of the relationship between two estimates of gas temperature, viz. T_spec and T_ew. While T_spec is obtained from spectral analysis of the X-ray emission from hot gas in the ISM and the CGM around elliptical galaxies, T_ew is estimated directly from the simulation data. The properties of the hot gas are intrinsically related to the properties of the SMBH hosted by the galaxy, and the correlations between the two have been studied via observations (Lakhchaura et al. 2018; Gaspari et al. 2019; Lakhchaura et al. 2019) and simulations (Gaspari et al. 2014; Truong et al. 2020b,a). While these works focus on quantities such as the black hole mass, the total X-ray luminosity, etc., our work aims to understand the spectral properties of the emission. Such a study is critical in the context of the upcoming X-ray telescope HUBS, which has a large FoV (1° × 1°) and ∼2 eV spectral resolution (compared to ∼100 eV for Chandra).
The high resolution of the X-ray spectrum will provide abundant information about the temperature distribution of the hot gas. In this respect, it is critical to evaluate the relationship between the extracted spectral temperature and the actual gas temperature. Mazzotta et al. (2004) address the discrepancy between the Chandra spectral temperatures of galaxy clusters and their physical equivalents from simulations. They find that the spectroscopic temperature obtained from X-ray observations is always lower than the emission-weighted temperature in the cluster. In our analysis, we find that T_spec may be higher than T_ew by a factor of ∼4 (Figures 5 and 6).
Conclusions
We have analysed 2D axisymmetric simulations of the evolution of an elliptical galaxy under the influence of feedback from the SMBH at its centre. We are interested in the X-ray emission from the hot (T > 10^6 K) diffuse gas in the ISM and the CGM of such a galaxy. We explore four sets of simulations representing different forms of feedback ("Fiducial", "Only Hot", and "Only Cold") and a different galaxy size ("σ300"). We follow the galaxy evolution for a period of ∼2 Gyr, over which the SMBH undergoes several outburst phases resulting in radially declining density and temperature profiles (Figures 1 and 2). Because of the stochasticity of the outbursts, the soft X-ray luminosity (0.3−5.0 keV) varies considerably over the simulation time period (Figure 3). We use pyAtomDB to estimate the spectral temperature of the gas in the simulation for low and high spectral resolutions and compare it with the emission-weighted temperature.
Our main conclusions are as follows:
(i) The spectral temperature, estimated using spectral analysis, differs from the emission-weighted temperature by a factor of a few.
(ii) The low-resolution (∼130 eV) and high-resolution (∼2 eV) instruments produce nearly the same predictions for T_spec.
(iii) The differences between T_spec and T_ew arise because the spectral fitting model (a single-temperature fit) is not able to capture the gas distribution accurately. Using a more physically motivated model, such as the log-normal model, can potentially alleviate the discrepancies between the two temperatures (Figure 8).
(iv) Even if T spec is similar for different models, the underlying gas properties might be different. Such differences appear only upon analysis of the full spectra ( Figure 9).
(v) The ratio of counts between 0.7−1.2 keV, a range corresponding to Fe-line emission, can potentially be a diagnostic tool for discriminating between different accretion models (Figure 10).
(vi) The surface brightness maps could also hold clues about the exact mode of accretion taking place within the galaxy, as the radial profiles of surface brightness possess different slopes for the various runs (Figure 12).
APPENDIX A: METALLICITY DISTRIBUTION
Apart from the density and temperature of the gas, its metallicity also affects the emission spectrum. In Figure A1, we show the radially averaged metallicity profiles for the four models. As in Figures 2 and 1, each curve represents a different time step, and the black dashed curve is the temporal average. Similar to the density and temperature profiles, there is significant variation in the region close to the centre, while at larger radii the metallicity drops by nearly two orders of magnitude and does not vary much with time.
APPENDIX B: COMPARING 2D AND 3D DATA
We have relied on generating 3D data from the axisymmetric 2D simulations for the purpose of producing mock images and the radial surface brightness profiles in Section 3.6. In this Section, we compare the 3D and the original 2D data sets for the Fiducial run.
In the 2D simulations, the resolution is very high close to the centre (∼0.5 pc) and decreases to ∼10 kpc at larger radii. As we are interested in the diffuse emission from the CGM, for the 3D data conversion we use a uniform grid size of 750 pc across the domain. In the inner regions of the 2D data, where the resolution is finer than 750 pc, we average the quantities over multiple cells of the 2D data set for the corresponding spatial coordinates in the 3D data set. For the rest of the cells, we choose the value of the quantity at the nearest neighbour to the corresponding spatial location in the 3D data set.
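A minimal sketch of the nearest-neighbour lookup is given below; array names are ours, r2d and th2d are the sorted 2D grid coordinates, and the insertion index from searchsorted is used as an approximate nearest cell.

```python
# Hedged sketch of the 2D (r, theta) -> 3D Cartesian nearest-neighbour map.
import numpy as np

def to_cartesian(r2d, th2d, q2d, ngrid, dx):
    half = 0.5 * ngrid * dx
    axes = np.linspace(-half, half, ngrid)
    x, y, z = np.meshgrid(axes, axes, axes, indexing="ij")
    r = np.sqrt(x**2 + y**2 + z**2)
    th = np.arccos(np.clip(z / np.maximum(r, 1e-12), -1.0, 1.0))
    ir = np.clip(np.searchsorted(r2d, r), 0, len(r2d) - 1)   # radial cell
    it = np.clip(np.searchsorted(th2d, th), 0, len(th2d) - 1)  # polar cell
    return q2d[ir, it]   # quantity sampled onto the Cartesian grid
```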
To check whether the conversion between the 2D and 3D data is faithful, we compare the radially averaged temperature (left) and density (right) profiles for a particular time step of the Fiducial run in Figure B1. The dip in the profiles at around ∼1 kpc indicates the location of the first cell from the centre in the 3D data set. Beyond this radius, the profiles from the 2D spherical data and the 3D Cartesian data are identical.

This paper has been typeset from a TeX/LaTeX file prepared by the author.
Figure 1. The radial variation of number density (n_H) for the four models. Each curve represents a different time in the simulation.
Figure 2. Identical to Figure 1, but instead of number density, we show the emission-weighted temperature, defined by Equation 1.
Figure 3. Temporal variation of the total soft (0.3−5.0 keV) X-ray luminosity for the four runs, calculated for r < 300 kpc and r < 10 kpc.
Figure 4. The simulation spectra after 1.25 Gyr of evolution, which roughly marks the midway point of the simulation for the different AGN models, and their comparison with the spectra produced by the 1-T model. The solid lines represent the simulation spectra, generated using pyAtomDB, as described in the text. The best-fit spectra are shown by the dashed curves. The spectra are generated using the gas between 0.1−10.0 kpc, that is, the ISM of the galaxy. The number indicates the fitting temperature, T_spec.
Figure 5. The distribution of T_spec versus the emission-weighted temperature, T_ew, indicating the discrepancy between the two quantities for the various models. Each star corresponds to a different time step in the simulation. We show T_spec generated for low- (∼130 eV) and high- (∼2 eV) spectral-resolution instruments. For reference, we show curves corresponding to T_spec = T_ew (black, solid), T_spec = 2 T_ew (grey, dashed), and T_spec = 4 T_ew (blue, dashed). We have used the region between 0.1−10 kpc for estimating the two temperatures. The black and teal points indicate time steps for which T_spec > T_ew and T_spec ∼ T_ew, respectively. We show the luminosity-weighted temperature-density histograms for these time steps in Figure 7. Though there are time steps in all the models for which T_spec accurately predicts T_ew, this is not always the case.
Figure 6. Identical to Figure 5, but for the region between 10−300 kpc.
Figure 7. The temperature-density distribution of luminosity for two time steps of the Fiducial run. The left and right panels correspond to the teal and black squares in Figure 5, respectively.
Figure 8. Comparison of the spectrum from the log-normal model of the temperature distribution, the Fiducial model at the same time step, and the corresponding single-temperature fit.
Figure 9. The spectra from the Fiducial and "Only Cold" simulations between 0.1 and 10 kpc. The spectra have been normalised by the respective total luminosity.
Figure 10. The ratio of the counts between 0.7−1.2 keV and the total counts for 0.2−2.0 keV for the different runs.
Figure 11. Mock image of the Fiducial run using HUBS. This is a 300 ks observation. The galaxy is at a distance of 15 Mpc.
Figure 12. Surface brightness profiles of the mock images for the various runs, corresponding to the time step half-way through the simulation at ∼1 Gyr.
Figure A1. Radial dependence of the metallicity.
Figure B1. Comparison of the density and temperature profiles from the Cartesian and spherical data.
Table 1. Model description.
Name      Features
Fiducial  Identical to the one described in Yuan et al. (2018b).
OnlyCold  Adopting the cold-mode AGN physics no matter what the value of the accretion rate.
OnlyHot   Adopting the hot-mode AGN physics no matter what the value of the accretion rate.
σ300      The galaxy velocity dispersion is set to 300 km s−1, as compared to 200 km s−1 in the Fiducial run.
1 Jets may also be present in some cases when the accretion is in the cold mode, as hinted by the existence of radio-loud quasars. This is still an unsolved problem. A jet has not been included in the hot mode in Yuan et al. (2018b), and it is being added into the code in Guo et al. (2022, in preparation).
2 This energy range is used for estimating the emission-weighted temperature for a Chandra-like low-spectral-resolution instrument.
3 HUBS is an upcoming X-ray telescope designed specifically to observe the hot gas around galaxies, having a 1° × 1° FoV and a spectral resolution of 2 eV (Cui et al. 2020). The energy range of HUBS is 0.5−2.0 keV; we therefore show the spectra in this energy range.
4 Actual spectra from the ISM region will contain a contribution from the intervening gas between the observer and the CGM. However, from the radial surface brightness plot (Figure 12), we estimate that this contribution is negligible and therefore ignore it in our analysis.
ACKNOWLEDGEMENTS
We thank Drs. Wei Cui and Jiangtao Li for helpful discussions and comments. AV, BZ, and FY are supported in part by the Natural Science Foundation of China (grants 12133008, 12192220, and 12192223) and the China Manned Space Project (No. CMS-CSST-2021-B02). LCH was supported by the National Science Foundation of China (11721303, 11991052, 12011540375) and the China Manned Space Project (CMS-CSST-2021-A04, CMS-CSST-2021-A06). The analysis presented in this paper was done using the High Performance Computing Resource in the Core Facility for Advanced Research Computing at Shanghai Astronomical Observatory. AV would like to thank the staff maintaining the facility for their support.

DATA AVAILABILITY
The data underlying this paper will be shared on reasonable request to the corresponding author.
REFERENCES
Anderson M. E., Bregman J. N., 2011, ApJ, 737, 22
Anderson M. E., Gaspari M., White S. D. M., Wang W., Dai X., 2015, MNRAS, 449, 3806
Anderson M. E., Churazov E., Bregman J. N., 2016, MNRAS, 455, 227
Babyk I. V., McNamara B. R., Nulsen P. E. J., Hogan M. T., Vantyghem A. N., Russell H. R., Pulido F. A., Edge A. C., 2018, ApJ, 857, 32
Bogdán Á., Forman W. R., Kraft R. P., Jones C., 2013a, ApJ, 772, 98
Bogdán Á., Forman W. R., Kraft R. P., Jones C., 2013b, ApJ, 772, 98
Bogdán Á., Bourdin H., Forman W. R., Kraft R. P., Vogelsberger M., Hernquist L., Springel V., 2017, ApJ, 850, 98
Bogdán Á., Lovisari L., Volonteri M., Dubois Y., 2018, ApJ, 852, 131
Böhringer H., Werner N., 2010, A&ARv, 18, 127
Booth C. M., Schaye J., 2011, MNRAS, 413, 1158
Boroson B., Kim D.-W., Fabbiano G., 2011, ApJ, 729, 12
Cheung E., et al., 2016, Nature, 533, 504
Choi E., Ostriker J. P., Naab T., Oser L., Moster B. P., 2015, MNRAS, 449, 4105
Cui W., et al., 2020, Journal of Low Temperature Physics, 199, 502
Dai X., Anderson M. E., Bregman J. N., Miller J. M., 2012, ApJ, 755, 107
Davies J. J., Crain R. A., McCarthy I. G., Oppenheimer B. D., Schaye J., Schaller M., McAlpine S., 2019, MNRAS, 485, 3783
Fabian A. C., 2012, ARA&A, 50, 455
Foster A. R., Heuer K., 2020, Atoms, 8, 49
Gaspari M., Brighenti F., Temi P., 2012, MNRAS, 424, 190
Gaspari M., Brighenti F., Temi P., Ettori S., 2014, ApJ, 783, L10
Gaspari M., et al., 2019, ApJ, 884, 169
Gofford J., Reeves J. N., McLaughlin D. E., Braito V., Turner T. J., Tombesi F., Cappi M., 2015, MNRAS, 451, 4169
Goulding A. D., et al., 2016, ApJ, 826, 167
Heuer K., Foster A. R., Smith R., 2021, ApJ, 908, 3
Kim D.-W., Fabbiano G., 2013, ApJ, 776, 116
Kim D.-W., Fabbiano G., 2015, ApJ, 812, 127
Lakhchaura K., et al., 2018, MNRAS, 481, 4472
Lakhchaura K., Truong N., Werner N., 2019, MNRAS, 488, L134
Li J.-T., Wang Q. D., 2013, MNRAS, 428, 2085
Lopez L. A., Mathur S., Nguyen D. D., Thompson T. A., Olivier G. M., 2020, ApJ, 904, 152
Ma R.-Y., Roberts S. R., Li Y.-P., Wang Q. D., 2019, MNRAS, 483, 5614
Mazzotta P., Rasia E., Moscardini L., Tormen G., 2004, MNRAS, 354, 10
Morganti R., 2017, Frontiers in Astronomy and Space Sciences, 4, 42
Naab T., Ostriker J. P., 2017, ARA&A, 55, 59
Narayan R., Sądowski A., Penna R. F., Kulkarni A. K., 2012, MNRAS, 426, 3241
Park J., Hada K., Kino M., Nakamura M., Ro H., Trippe S., 2019, ApJ, 871, 257
Rasia E., Mazzotta P., Borgani S., Moscardini L., Dolag K., Tormen G., Diaferio A., Murante G., 2005, ApJ, 618, L1
Rubner Y., Tomasi C., Guibas L., 1998, in Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271), pp 59-66
Schaye J., et al., 2015, MNRAS, 446, 521
Shi F., Li Z., Yuan F., Zhu B., 2021, Nature Astronomy, 5, 928
Shi F., Zhu B., Li Z., Yuan F., 2022, ApJ, 926, 209
Sijacki D., Springel V., Di Matteo T., Hernquist L., 2007, MNRAS, 380, 877
Teyssier R., Moore B., Martizzi D., Dubois Y., Mayer L., 2011, MNRAS, 414, 195
Truong N., Pillepich A., Werner N., 2020a, arXiv e-prints, p. arXiv:2009.06634
Truong N., et al., 2020b, MNRAS, 494, 549
Vijayan A., Li M., 2022, MNRAS, 510, 568
Vikhlinin A., 2006, ApJ, 640, 710
Wang Q. D., et al., 2013, Science, 341, 981
Weinberger R., et al., 2017, MNRAS, 465, 3291
Werner N., McNamara B. R., Churazov E., Scannapieco E., 2019, Space Sci. Rev., 215, 5
Wu X., Mo H., Li C., Lim S., 2020, ApJ, 903, 26
Yamasaki N. Y., Sato K., Mitsuishi I., Ohashi T., 2009, PASJ, 61, S291
Yang H., Yuan F., Yuan Y.-F., White C. J., 2021, ApJ, 914, 131
Yao Z., Yuan F., Ostriker J. P., 2021, MNRAS, 501, 398
Yuan F., Narayan R., 2014, ARA&A, 52, 529
Yuan F., Bu D., Wu M., 2012, ApJ, 761, 130
Yuan F., Gan Z., Narayan R., Sadowski A., Bu D., Bai X.-N., 2015, ApJ, 804, 101
Yuan F., Ostriker J. P., Yoon D., Li Y.-P., Ciotti L., Gan Z.-M., Ho L. C., Guo F., 2018a, arXiv e-prints, p. arXiv:1807.05488
Yuan F., Yoon D., Li Y.-P., Gan Z.-M., Ho L. C., Guo F., 2018b, ApJ, 857, 121
[
"Leveraging Local Patch Differences in Multi-Object Scenes for Generative Adversarial Attacks",
"Leveraging Local Patch Differences in Multi-Object Scenes for Generative Adversarial Attacks"
] | [
"Abhishek Aich \nUniversity of California\nRiversideUSA\n",
"Shasha Li \nUniversity of California\nRiversideUSA\n",
"Chengyu Song csong@cs. \nUniversity of California\nRiversideUSA\n",
"M Salman Asif sasif@ece. \nUniversity of California\nRiversideUSA\n",
"Srikanth V Krishnamurthy \nUniversity of California\nRiversideUSA\n",
"Amit K Roy-Chowdhury [email protected] \nUniversity of California\nRiversideUSA\n"
] | [
"University of California\nRiversideUSA",
"University of California\nRiversideUSA",
"University of California\nRiversideUSA",
"University of California\nRiversideUSA",
"University of California\nRiversideUSA",
"University of California\nRiversideUSA"
] | [] | State-of-the-art generative model-based attacks against image classifiers overwhelmingly focus on single-object (i.e., single dominant object) images. Different from such settings, we tackle a more practical problem of generating adversarial perturbations using multi-object (i.e., multiple dominant objects) images as they are representative of most real-world scenes. Our goal is to design an attack strategy that can learn from such natural scenes by leveraging the local patch differences that occur inherently in such images (e.g. difference between the local patch on the object 'person' and the object 'bike' in a traffic scene).Our key idea is to misclassify an adversarial multi-object image by confusing the victim classifier for each local patch in the image. Based on this, we propose a novel generative attack (called Local Patch Difference or LPD-Attack) where a novel contrastive loss function uses the aforesaid local differences in feature space of multi-object scenes to optimize the perturbation generator. Through various experiments across diverse victim convolutional neural networks, we show that our approach outperforms baseline generative attacks with highly transferable perturbations when evaluated under different white-box and black-box settings. | 10.1109/wacv56688.2023.00136 | [
"https://export.arxiv.org/pdf/2209.09883v2.pdf"
] | 252,383,149 | 2209.09883 | bbbc8f16605b5056426264229fb152929405f41e |
Leveraging Local Patch Differences in Multi-Object Scenes for Generative Adversarial Attacks
Abhishek Aich
University of California
RiversideUSA
Shasha Li
University of California
RiversideUSA
Chengyu Song csong@cs.
University of California
RiversideUSA
M Salman Asif sasif@ece.
University of California
RiversideUSA
Srikanth V Krishnamurthy
University of California
RiversideUSA
Amit K Roy-Chowdhury [email protected]
University of California
RiversideUSA
Leveraging Local Patch Differences in Multi-Object Scenes for Generative Adversarial Attacks
State-of-the-art generative model-based attacks against image classifiers overwhelmingly focus on single-object (i.e., single dominant object) images. Different from such settings, we tackle a more practical problem of generating adversarial perturbations using multi-object (i.e., multiple dominant objects) images as they are representative of most real-world scenes. Our goal is to design an attack strategy that can learn from such natural scenes by leveraging the local patch differences that occur inherently in such images (e.g. difference between the local patch on the object 'person' and the object 'bike' in a traffic scene).Our key idea is to misclassify an adversarial multi-object image by confusing the victim classifier for each local patch in the image. Based on this, we propose a novel generative attack (called Local Patch Difference or LPD-Attack) where a novel contrastive loss function uses the aforesaid local differences in feature space of multi-object scenes to optimize the perturbation generator. Through various experiments across diverse victim convolutional neural networks, we show that our approach outperforms baseline generative attacks with highly transferable perturbations when evaluated under different white-box and black-box settings.
Introduction
Understanding and exposing security vulnerabilities of deep neural networks (DNNs) has been an important recent focus of the computer vision research community [1-4]. DNNs have been extremely effective in recognition and classification systems like pedestrian recognition [5-7] and health-care applications [8,9]. Images of real-world scenes usually consist of multiple objects. Such scenes are often analyzed by classifiers which predict all the object labels present in such images for downstream tasks such as object annotation [10-15]. Since DNNs are known to be vulnerable to adversarial attacks, it is important to understand the vulnerabilities of such multi-object classifiers. For example, scenes monitored by drones can be attacked by adversaries where all object labels detected are changed for misinterpretation at the user end [16]. Investigating such scenarios where the multi-object classifiers fail is important in order to design robust and secure real-world systems.

Figure 1: Proposed attack LPD-Attack: We aim to create perturbations using multi-object images. To do this, our proposed attack LPD-Attack leverages the rich local differences between the patches of features extracted from multi-object images, e.g., the local feature patch of 'person's head' will be different from the local feature patch of 'bike's tire' or 'bike's engine'. LPD-Attack leverages these differences to misalign (repel) a query patch (η) from the perturbed image feature with the corresponding patch (η−) from the clean image feature, while aligning (attracting) it with non-corresponding patches at different locations (η+).
Adversarial attacks can be broadly classified into instance-driven approaches that are image- (i.e., instance-) specific [17-19] and distribution-driven or generative model-based approaches (e.g., GAP [20], CDA [21], and TDA [22]). Generative model attacks learn to craft perturbations with a generative model via training on a data distribution against a surrogate classifier. Victim classification attributes (e.g., model architecture, data distribution, etc.) are generally unknown to attackers in practical cases. Hence, attackers aim to create strong transferable perturbations. Generative attacks provide a distinct advantage over instance-driven attacks in the transferability of perturbations to unseen models [21] as well as in time complexity [20,21,23-25]. Our work focuses on generative attacks that learn to create perturbations using multi-object images and disrupt all labels predicted by victim classifiers. For example, in Figure 1, we aim to change the labels associated with the image (i.e., 'person' and 'bike') to labels whose objects do not exist in the input image (e.g., 'car', 'dog'), with imperceptible perturbations. Existing generative model attacks (see Table 1) typically attempt to perturb images with a single dominant object in them, which are analyzed by single-label classifiers. Using such single-object attacks on multi-object images would require independent object binary segmentation masks to focus on every single object in order to perturb them. This makes these attacks inefficient and impractical, as an attacker cannot assume to have object binary masks for every possible distribution on the victim end.
The focus of this paper is to learn to create perturbations on multi-object images that can disrupt the output of various victim (multi-object or single-object) classifiers for all labels of the input image, without any need for independent attacks on individual objects. To this end, we propose a novel attack method that utilizes the local differences between patches in the multi-object image. As multi-object images generally contain multiple dominant objects, it is highly likely that the majority of the sampled patches come from different objects. Based on these "inherent local differences" in multi-object images, we propose a method that exploits this property to train a perturbation generator.
Our core idea is: if an object is to be misclassified, a patch over the object should also be misclassified (in other words, made ambiguous to the victim model). To create this misclassification, we exploit the rich local patch differences provided by multi-object images and train a perturbation generator using a novel contrastive learning loss. More specifically, given an image with multiple objects (e.g., 'bike' and 'person' in Figure 1), we aim to use the local difference between the feature patch on the object 'bike's tire' and the feature patch on the object 'person's head'. Assuming the clean and perturbed images are the same size (e.g., 224 × 224), our proposed contrastive strategy misaligns a query patch from the feature map of the perturbed image (say, a patch from 'person's head') with the patch at the corresponding (same) location on the feature map of the clean image, while simultaneously aligning it with patches at non-corresponding (different) locations (say, patches from 'bike's tire' and 'bike's engine') on the feature map of the clean image. Our intuition is: we want the feature patch on 'person's head' in the perturbed image to change to some random features in order to create ambiguity and eventually confuse the victim classifier.
Unique to multi-object images, this location information is readily available in them due to the spatial arrangement of objects, without the need for any kind of labels or segmentation maps. Further, local patches (on average) differ from each other even if they belong to the same object, e.g. the shape of the engine of a bike will differ from the shape of the tyre.
Our approach is fundamentally different from prior single-label image-based generative attack approaches [20,21,26], which do not use any of the aforesaid local differences in the feature maps of clean and perturbed images. Specifically, we use the approach of contrastive learning, where the perturbation generator learns to disassociate corresponding signals of clean and perturbed image features, in contrast to other non-corresponding signals. In our case, these corresponding signals are patches at the same spatial location in the clean and perturbed image features, while non-corresponding signals are patches at different spatial locations in the clean image features. The contrastive learning approach has been extensively used in unsupervised learning [27-30] for various image downstream tasks. We demonstrate its benefits in optimizing perturbation-generating models for highly potent adversarial attacks. We refer to our attack approach as the Local Patch Difference attack, or LPD-Attack (see Figure 2). LPD-Attack uses our novel local-patch contrasting approach and learns to create strong imperceptible perturbations on multi-object images.
To validate our approach, we evaluate the perturbations generated by LPD-Attack in different challenging scenarios. For example, if a perturbation generator is trained on the Pascal-VOC [31] dataset with a Res152 [32] Pascal-VOC pre-trained multi-object classifier as the surrogate, then, from the attacker's perspective, we show that LPD-Attack crafts highly transferable perturbations under the following settings (ordered from least to most realistic):
• Setting 1. white-box: victim classifier is seen, victim data dataset is seen, victim task is seen (e.g. Res152 multi-object classifier, Pascal-VOC dataset, multi-object classification task)
• Setting 2. black-box: victim classifier is unseen, victim data dataset is seen, victim task is seen (e.g. VGG19 multi-object classifier, Pascal-VOC dataset, multi-object classification task)
• Setting 3. strict black-box: victim classifier is unseen, victim data dataset is unseen, victim task is seen (e.g. VGG19 multi-object classifier, MS-COCO [33] dataset, multi-object classification task)
• Setting 4. extreme black-box: victim classifier is unseen, victim dataset is unseen, victim task is unseen (e.g. VGG16 single-label classifier, ImageNet [34] dataset, single-label classification task)

'Setting 4' is especially useful to test the strength of the perturbations crafted by different attacks, because it presents the real-world use case for attackers where all victim attributes, such as classifier architecture, data distribution, and task, are unseen. To summarize, we make the following contributions:
1. New practical problem. We tackle a new problem of learning to craft perturbations for multi-object data distributions, the situation in most real-life scenes, using generative model-based attacks to disrupt classifier decisions. To the best of our knowledge, this is the first work to consider creating generative attacks using multi-object images.
2. Novel attack framework. To this end, we propose a novel generative model-based attack approach namely LPD-Attack, where the perturbation generator is trained using a contrastive loss that uses rich local patch differences of multi-object image features.
3. Extensive experiments. Through extensive experiments on two multi-object benchmarks, we show that LPD-Attack has overall better attack transferability and outperforms its baselines under aforementioned settings (see Table 2 and Table 3).
Related Work
Adversarial attacks on image classifiers. Most existing state-of-the-art adversarial attack works [17,18,20,21,23,24,26,35-41] have been designed to attack single-object classifiers. Among these attacks, instance- (or image-) driven perturbations [17,35-37,42] have been extensively explored to showcase the various shortcomings of single-object classifiers [43]. Instance-driven attacks are characterized by their method of computing perturbations only on corresponding clean images. This results in perturbations being computed for each image individually, without using knowledge from other images [21]. The current literature on instance-driven approaches broadly consists of methods that use gradient ascent on the images [17,19,42,44] or those that generate adversarial examples using optimization-based methods [18,40] for attacking single-object classifiers. Attacks on multi-object classifiers using instance-driven approaches have been proposed in [45-47]. [46] proposed a method to create multi-object adversarial examples by optimizing a linear programming problem. [45] proposed a method to exploit label-ranking relationships to attack multi-object ranking algorithms. More recently, [47] presented a method to disrupt the top-k labels of multi-object classifiers. Although effective for perturbing single images, instance-driven approaches are inefficient when it comes to attacking a large dataset of images, as the perturbations have to be generated by iterating over these images individually multiple times [21,24]. Different from [45-47], LPD-Attack falls under the category of generative model-based adversarial attacks (which we discuss next), which are distribution-driven approaches. Such approaches train a generative network over a large number of images to create perturbations. Once the model is trained, it can be used to perturb multiple images simultaneously.

Generative model-based adversarial attacks. To address the shortcomings of instance-driven approaches, generative model-based or distribution-driven attack approaches [20-25,48] have been explored recently for learning perturbations on single-object images. For example, GAP [20] presents a distribution-driven attack that trains a generative model for creating adversarial examples by utilizing the cross-entropy loss. Recently, CDA [21] proposed a generative network that is trained using a relativistic cross-entropy loss function. Both GAP [20] and CDA [21] rely on the final classification layer of the surrogate model to train the perturbation generator, which has been shown to yield inferior transferability of perturbations to unknown models. Different from these, [22] presented an attack methodology to enhance the transferability of perturbations using feature separation loss functions (e.g., mean square error loss). However, their attack requires a manual selection of a specific mid-layer for every model against which the generator is to be trained. In contrast to these aforementioned works, LPD-Attack is designed to learn to craft imperceptible adversarial perturbations using multi-object images. Rather than focusing on the feature map globally, we take a more fine-grained approach of (feature map) patch contrasting via a novel contrastive loss. More specifically, LPD-Attack uses the local feature differences at multiple mid-level layers and uses an InfoNCE loss [49]-based framework to create highly effectual perturbations. We summarize the differences of LPD-Attack with the aforementioned generative attack methods in Table 1.

Table 1: Characteristic comparison. Better than prior generative attacks [20-22], LPD-Attack is a generative attack method designed for "multi-object" images. Here, CE(·): Cross-Entropy loss, MSE(·): Mean-Square Error loss, f: surrogate classifier used for training the perturbation generator G_θ(·) (weights θ), x: clean image, x_δ: perturbed image, δ: perturbation, f^ℓ: output from a specific pre-defined layer ℓ, and t: misclassification label depending on the type of attack (targeted or untargeted). The proposed loss (L_G + L_LPCL) is detailed in Section 3.

DD Attack    Venue         Image type     G_θ(·) loss
GAP [20]     CVPR 2018     single-object  CE(f(x_δ), t)
CDA [21]     NeurIPS 2019  single-object  CE(f(x_δ) − f(x), t)
TDA [22]     NeurIPS 2021  single-object  MSE(f^ℓ(x_δ), f^ℓ(x))
LPD-Attack   Ours          multi-object   L_G + L_LPCL
Proposed Attack Methodology
Here, we explain our proposed generative adversarial attack, LPD-Attack, which learns from multi-object images. It includes training the perturbation generator with a novel local patch contrastive learning loss that uses local regions of features extracted from clean and perturbed images. We start with the notation and the problem statement.
Problem Formulation
Notations. Let C be the total number of classes and N be the number of training samples in a dataset T. We define T = {(x^(1), y^(1)), ..., (x^(N), y^(N))}, where x^(i) ∈ R^{H×W×Z} and y^(i) = [y^(i)_1, ..., y^(i)_C] ∈ Y ⊆ {0,1}^C are the ith image (with height H, width W, and channels Z) and ground-truth label vector, respectively. For an example data point x^(i) and class c, y^(i)_c = 1 (or = 0) indicates the presence (or absence) of an object from class c in x^(i). We define a surrogate multi-object classifier trained on T as f(·), which is utilized to train a perturbation generator G_θ(·) (parameterized by weights θ). In further discussions and Figure 2, we drop the superscript i for ease of exposition.
Problem Statement. Given a clean multi-object image x from the data distribution T containing multiple dominant objects and the victim classifier g(·), we aim to flip all labels of x within an allowable perturbation budget ϵ defined by an ℓ∞ norm. Specifically, the objective is to craft a perturbation δ such that the prediction of g(·) for all labels y associated with x is changed. Mathematically, this can be represented as y_δ ≠ y, where y = g(x) and y_δ = g(x + δ), with ∥δ∥∞ ≤ ϵ.

Figure 2: Our proposed LPD-Attack framework (top) aims to learn from multi-object images using a contrastive learning mechanism (L_LPCL) to maximize the difference between corresponding patches at the same locations while minimizing the difference between non-corresponding patches at distinct locations, computed on features extracted from clean and perturbed images. This results in highly effective and transferable perturbations for input clean images during inference (bottom-left).
Proposed Approach: LPD-Attack
Our proposed framework is presented in Figure 2. It contains a perturbation generator G_θ(·) that is trained to craft imperceptible perturbations δ on x. G_θ(·) is trained against a surrogate pre-trained multi-object classifier f(·). More precisely, f(·) acts as a discriminator against which the generator G_θ(·) is trained (f(·) remains fixed, or frozen). During training, G_θ(·) takes x as input and generates an unbounded perturbed image G_θ(x) = x̃_δ. This unbounded perturbed image x̃_δ is clipped to be within a pre-defined perturbation budget ϵ on x under the ℓ∞ norm using the projection operator P(·). The perturbed image is then estimated as x_δ = P(x̃_δ). To compute the generator loss, x_δ is sent to the discriminator f(·) to be misclassified. At multiple (L) mid-layers of f(·), we compute the features of the clean image {f_k(x)}_{k=1}^{L} and of the perturbed image {f_k(x_δ)}_{k=1}^{L}, where f_k(x), f_k(x_δ) ∈ R^{h_k×w_k×c_k}. Here, h_k × w_k denotes the spatial size of the kth layer feature map with c_k channels. The effectiveness of using mid-level features to craft powerful perturbations has been extensively studied in [24,26,50-53]. Therefore, we leverage these mid-level features of f(·) and define our generative model loss via two functions. The first is a global loss L_G that compares the extracted features directly as follows:
$\mathcal{L}_{G} = \dfrac{1}{L}\sum_{k=1}^{L} \mathrm{dist}\big(f_{k}(x),\, f_{k}(x_{\delta})\big) \qquad (1)$
Here, dist(·) can be any distance-measuring function, e.g., the mean square error. Different from prior generative attacks, which only compare perturbed and clean images globally, our proposed LPCL loss leverages the local differences between patches of the multiple objects in the input image to disrupt the victim classifier's decisions. We expand on the details of LPCL next.
Contrasting Patches of Multi-Object Images
Motivation. We make the observation that, due to the existence of multiple objects in a multi-object image x, we can utilize the local feature patches of f_k(x) (and f_k(x_δ)). The local patches of the input clean image belong to individual dominant objects and thus prompt the multi-object classifier to output their respective associated labels. Therefore, for each object in a perturbed image to be misclassified, each patch within its feature map should look different to the classifier than the corresponding patch at the same location in the feature map of the clean image. To create this difference, we use the feature patches at non-corresponding locations to create ambiguity for the victim classifier and prompt incorrect decisions on the overall perturbed image. This location-wise contrasting of the clean and perturbed image features at the local level provides stronger supervision for training the perturbation generator G_θ(·). Proposed contrasting loss (L_LPCL). To misclassify the perturbed image x_δ, we need to maximize the difference between its features and those of the clean image x. We propose to achieve this by misaligning corresponding clean-perturbed image feature patches at a specific location to maximize the difference at a local level. This misalignment is enabled by utilizing the other patches of the clean image features at non-corresponding locations. We start by computing the features of the clean and perturbed images from the surrogate model f(·) as {f_k(x) ∈ R^{h_k×w_k×c_k}}_{k=1}^{L} and {f_k(x_δ) ∈ R^{h_k×w_k×c_k}}_{k=1}^{L}, respectively. We convert these feature maps into tensors D_k and D̃_k, respectively, of size v_k × c_k (where v_k = h_k w_k). Next, we choose a query vector η^q_k ∈ R^{c_k} from the qth spatial location of D̃_k, and the vector η^−_k at the corresponding spatial location of D_k, which we call η^q_k's negative. Then, from R other (different) locations of D_k, we choose a collection of positives denoted by η^+_k ∈ R^{R×c_k}. The L_LPCL loss is now defined as an (R+1)-way classification objective, with logits representing the similarity between the query η^q_k and the set [η^−_k, η^+_{k1}, η^+_{k2}, ..., η^+_{kR}], as follows:
L_LPCL = −(1/L) Σ_{k=1}^{L} log [ exp(sim(η_k^q, η_k^−)) / ( exp(sim(η_k^q, η_k^−)) + Σ_{r=1}^{R} exp(sim(η_k^q, η_{kr}^+)) ) ]    (2)
where sim(η_a, η_b) = η_a^⊤ η_b / τ returns the similarity between two vectors, ⊤ denotes the transpose operation, and τ is a scaling parameter. We set τ = 0.07 following [27]. This loss encodes our idea that, if a feature patch of the perturbed image is to be disrupted, it should obtain a low similarity score with the corresponding (same-location) "negative" feature patch of the clean image, and high similarity scores with the "positive" patches from non-corresponding locations. Note that "patch" does not correspond to "object": it is possible that (1) a group of patches belongs to one object and (2) one patch contains parts of multiple objects. The only requirement for the R positive patches used in L_LPCL to operate properly is that they contain feature values different from those in the query feature patch η_k^q. This requirement is easily fulfilled when we sample them from locations that are non-overlapping with each other and different from the location of η_k^q.
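The following is a minimal PyTorch-style sketch of Eq. (2) for a single layer k; the function name lpcl_layer is our own. Three simplifications relative to the text are worth flagging: we let every spatial location act as a query and average the loss over them, we sample the R positive locations uniformly at random without explicitly excluding overlaps, and we ℓ2-normalize features before the dot product, a common contrastive-learning convention that the text does not state explicitly.

```python
import torch
import torch.nn.functional as F

def lpcl_layer(d_clean, d_pert, R=128, tau=0.07):
    """One-layer LPCL of Eq. (2).

    d_clean: D_k, shape (v, c) -- flattened clean feature map.
    d_pert:  the perturbed counterpart of D_k, same shape.
    Queries come from the perturbed features; the same-location clean patch
    is the negative, and R clean patches from other locations are positives.
    """
    v, _ = d_clean.shape
    q = F.normalize(d_pert, dim=1)                        # queries (v, c)
    k = F.normalize(d_clean, dim=1)                       # keys    (v, c)

    neg_logit = (q * k).sum(dim=1, keepdim=True) / tau    # sim with eta^-
    idx = torch.randint(0, v, (v, R), device=d_clean.device)
    pos = k[idx]                                          # (v, R, c)
    pos_logit = torch.einsum('vc,vrc->vr', q, pos) / tau  # sims with eta^+

    logits = torch.cat([neg_logit, pos_logit], dim=1)     # (v, R + 1)
    labels = torch.zeros(v, dtype=torch.long, device=logits.device)
    # cross-entropy with label 0 is exactly the -log softmax term of Eq. (2)
    return F.cross_entropy(logits, labels)
```

As described next, this quantity is maximized for untargeted attacks, which pushes each perturbed patch away from its same-location clean counterpart and toward the non-corresponding clean patches.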
Final Objective
Our final learning objective trains the generator over x_δ both globally, with the L_G objective, and locally, using our proposed contrasting loss L_LPCL. The loss is computed over L mid-level layers of f(·) as L = L_G + L_LPCL. Note that we maximize L for an untargeted attack, with L_G set as the mean squared error loss. For a targeted attack, we minimize L, with L_G set as the binary cross-entropy loss so that the perturbed image is classified to the target label. The whole training procedure is summarized in Algorithm 1. During testing, we simply input the test image to the trained generator to create a perturbed image, with the aim of fooling the victim classifier on all the associated labels.
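Putting the pieces together, the following hypothetical training step mirrors Algorithm 1 for the untargeted case, reusing the project_linf, global_loss, and lpcl_layer sketches above. The extract_features helper is an assumed hook-based utility, not the API of any particular library, and for brevity the batch and spatial dimensions are pooled together when sampling patches.

```python
import torch

def extract_features(model, x, layer_names):
    """Assumed helper: collect mid-layer activations via forward hooks."""
    feats, handles = {}, []
    for name, module in model.named_modules():
        if name in layer_names:
            handles.append(module.register_forward_hook(
                lambda _m, _inp, out, n=name: feats.__setitem__(n, out)))
    model(x)
    for h in handles:
        h.remove()
    return [feats[n] for n in layer_names]

def train_step(gen, f, x, eps, optimizer, layer_names, R=128):
    x_delta = project_linf(gen(x), x, eps)               # x_delta = P(G(x))
    feats_c = extract_features(f, x, layer_names)        # {f_k(x)}
    feats_p = extract_features(f, x_delta, layer_names)  # {f_k(x_delta)}
    flat = lambda t: t.permute(0, 2, 3, 1).reshape(-1, t.shape[1])
    L_total = global_loss(feats_c, feats_p) + sum(
        lpcl_layer(flat(fc), flat(fp), R=R)
        for fc, fp in zip(feats_c, feats_p)) / len(layer_names)
    optimizer.zero_grad()
    (-L_total).backward()   # gradient ascent on L = L_G + L_LPCL (untargeted)
    optimizer.step()
```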
Experiments and Results
Here, we discuss the strength of LPD-Attack under the diverse attack Settings 1-4 (as described in Section 1), presented in Table 2 and Table 3. Furthermore, we analyze the strength of LPD-Attack in the most realistic attack setting in Table 4 and Table 5, as well as in easier variations in Table 6. We also perform an ablation analysis of LPD-Attack in Figure 3 and show examples of perturbed images and attention shift in Figure 4 to validate our method. Unless otherwise stated, the perturbation budget is set to ℓ∞ ≤ 10. We provide details of the implementation, the baselines (GAP [20], CDA [21], TDA [22]), and additional experiments in the Supplementary Material.
Training Datasets. We employ the widely used and publicly available Pascal-VOC [31] and MS-COCO [33] datasets. For Pascal-VOC, we use trainval from 'VOC2007' and 'VOC2012' as our training set, and evaluations are carried out on the 'VOC2007 test' set. For MS-COCO, we use train2017 as our training set and val2017 for evaluations.
Inference Metrics. We evaluate the attacks on multi-object classifiers using accuracy on the test set, as defined for multi-object classification in [54,55]. For attacks on single-object classifiers in Table 4 and Table 5, we use top-1 accuracy on the test set. For all untargeted attacks, a lower score indicates a better attack; for targeted attacks, a higher score indicates a better attack. Best results are in red and second best in blue; accuracy on clean images is provided in gray for reference.
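For reference, one standard example-based definition of multi-label accuracy, the Jaccard-style score of [54], is sketched below; we include it as our reading of the metric, since the exact variant used is the one defined in those references.

```python
import numpy as np

def multilabel_accuracy(y_true, y_pred):
    """Example-based multi-label accuracy: mean over samples of
    |Y intersect Z| / |Y union Z|, for binary arrays of shape
    (num_samples, num_labels)."""
    inter = np.logical_and(y_true, y_pred).sum(axis=1)
    union = np.logical_or(y_true, y_pred).sum(axis=1)
    return float(np.mean(inter / np.maximum(union, 1)))
```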
Victim Models and Attack Settings. To attack the victim models, we first train the perturbation generators G_θ(·) for the baselines and LPD-Attack on Pascal-VOC and MS-COCO, on their respective train sets, against a surrogate multi-object classifier f(·). We choose f(·) to be (Pascal-VOC or MS-COCO) pre-trained multi-object classifiers Res152 [32], Dense169 [56], and VGG19 [57]. As discussed in Section 1, we then evaluate the trained G_θ(·) under the following four settings. First, for Setting 1 (white-box), we attack the surrogate multi-object classifier f(·) on the test set of the same multi-object distribution used during training. Second, for Setting 2 (black-box), we attack multi-object classifiers different from the surrogate model, also on the test set of the same multi-object distribution used during training. Third, for Setting 3 (strict black-box), we attack multi-object classifiers on the test set of a multi-object distribution different from the one used during training. Finally, following [24], for Setting 4 (extreme black-box), we attack various single-object classifiers for CIFAR10 [58], CIFAR100 [58], STL-10 [59], and SVHN [60] (coarse-grained tasks), CUB-200-2011 [61], Stanford Cars [62], and FGVC Aircraft [63] (fine-grained tasks), and ImageNet [64] models on their respective test sets. The pre-trained victim models for the coarse-grained tasks are available in [65], for the fine-grained tasks (Res50 [32] and SENet154 [66]) in [67], and for the ImageNet task in [68]. Briefly, the coarse-grained single-object classification task is to distinguish labels like 'cats vs. dogs', whereas the fine-grained single-object classification task is to distinguish difficult labels like species of cats (e.g., 'tiger vs. panther'). Analyzing attacks on such diverse tasks after learning perturbations from multi-object images shows the transferability of the perturbations, which is important for real-world attacks.
Quantitative Results
We evaluated LPD-Attack against the baselines under four different attack scenarios. We summarize them in Table 2 and Table 3 and discuss them below.
Observation 1. The proposed method LPD-Attack has the overall best performance. We outperform the prior best SOTA method TDA [22] in 10 out of 12 cases, demonstrating the efficacy of our proposed method. For example, on Pascal-VOC we outperform TDA by a margin of about 10% (ours: 55.88%, TDA: 65.08%), and on MS-COCO by a margin of about 3.5% (ours: 46.73%, TDA: 51.00%). Furthermore, TDA carries an expensive computational overhead (discussed by the authors themselves in Section 4.6 under "Limitations"): the attacker needs to incur high time complexity (by training the generator separately for each possible mid-layer) to search for the most effective mid-layer of the surrogate model in order to optimize the generator. Through our results, especially on the ImageNet dataset, we show that TDA's manually selected specific layer is highly sensitive to the training data distribution, as the results on ImageNet degrade drastically if the generator is trained on datasets different from ImageNet (in this case, Pascal-VOC and MS-COCO). In contrast, since we select a group of layers, we do not need this laborious, time- and resource-consuming analysis.
Observation 2. SOTA methods tend to overfit more to the attacker's training data distribution than the proposed method. The aforementioned four attack scenarios (after the generator is trained on Pascal-VOC and MS-COCO) show that, as the victim data distribution starts to vary (e.g., ImageNet, STL-10, FGVC Aircraft classification), there is a large performance drop in the prior attacks due to weaker transferability of perturbations. For example, TDA shows comparable performance when the victim distribution is similar to the attacker's training distribution (see Table 6), but shows surprisingly low attack results (a 20% difference) when the victim distribution changes to single-object classification tasks like ImageNet, STL-10, and FGVC Aircraft (see Table 4). This clearly demonstrates that prior works tend to overfit to the attacker's training distribution and perform poorly when there is no overlap in the victim's data distribution and type of classification task. Our proposed method LPD-Attack alleviates this issue and shows better transferability of perturbations. We attribute our method's better mitigation of the overfitting issue to its unique strategy of comparing local feature patches rather than just global differences.
Observation 3. As attack scenarios become more difficult and realistic, the proposed method performs much better than the SOTA baselines. White-box attacks (Setting 1) are the easiest and least realistic attacks, whereas extreme black-box attacks (Setting 4) are the most difficult but most realistic (the attacker has no knowledge of the victim model or task). We observe that as the difficulty level of the attack increases, TDA-crafted perturbations perform increasingly worse than the proposed LPD-Attack. For example, although LPD-Attack and TDA show comparable performance in white-box attacks, LPD-Attack outperforms TDA by a large margin of 18% in extreme black-box attacks (see Table 5 and Table 4). This implies that existing attacks perform poorly in real-world use cases, whereas LPD-Attack poses a greater threat to the victim model than prior SOTA attacks.
Targeted attacks. We performed a white-box targeted attack on Dense169 with the target label set to 'person' (i.e., all perturbed images should output the label 'person'). We observed that GAP [20] and CDA [21] result in accuracies of 34.58% and 34.86%, whereas LPD-Attack achieves 35.00% (perturbation bound ℓ∞ ≤ 16).
Ablation Study
We perform an ablation analysis of LPD-Attack with respect to the loss objectives in Figure 3(a), the impact of the number of patches R in Figure 3(b), and the impact of the number of layers L utilized from the surrogate model f(·) to train G_θ(·) in Figure 3(c). From Figure 3(a), we observe the impact of the components of our loss objective when G_θ(·) was trained against Res152 on Pascal-VOC, both for white-box (tested against Pascal-VOC) and strict black-box (tested against MS-COCO) attacks. The perturbations are most effective when both the global loss L_G and the local loss L_LPCL are utilized. Next, from Figure 3(b), we observe that the best performance is obtained with R = 256 patches (note that we use R = 128 for a slightly better training time-accuracy trade-off). Finally, we analyze the impact of using multiple mid-level features from f(·) and observe that L = 4 yields the best attacks, as it allows the use of diverse features to learn the perturbations. This also shows that we do not need to manually choose a specific layer for better attacks, as in the case of TDA [22]; an average choice of a group of layers creates effective attacks.
Qualitative Results
We visualize examples of perturbed images and the shift in attention (using CAM [69]) from clean to misclassified images on Pascal-VOC and MS-COCO in Figure 4 for the Res152 multi-object classifier. It can be observed that LPD-Attack shifts the focus of the victim classifier to irrelevant regions, leading to highly successful attacks.
Conclusion
In this paper, we tackle the novel problem of altering the decisions of victim classifiers by learning to craft perturbations on multi-object images. To this end, we proposed a novel generative adversarial attack framework (LPD-Attack) that trains perturbation generators by exploiting the local differences in multi-object image features. LPD-Attack achieves high attack rates both in white-box and in different practical black-box settings. For example, when we learn to craft perturbations on Pascal-VOC and mount a black-box attack on ImageNet, LPD-Attack outperforms existing attacks by ~25 percentage points. In future work, we will explore black-box multi-object targeted attacks for multi-object images, as well as video generative models [70,71] for adversarial attacks on video classifiers.
Acknowledgement. This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Agreement No. HR00112090096.
Supplementary material for "Leveraging Local Patch Differences in Multi-Object Scenes for Generative Adversarial Attacks"
Baselines. We use three state-of-the-art generative attack methods (GAP [20], CDA [21], and TDA [22]) as our baselines. We replace the cross-entropy loss in GAP [20] and CDA [21] with a binary cross-entropy loss to adapt them to the multi-object surrogate classifier.
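As a minimal sketch of this adaptation (our reading of it, not the baselines' released code), the surrogate's logits are scored against the multi-hot label vector with a per-label binary cross-entropy:

```python
import torch.nn.functional as F

def multilabel_disc_loss(logits, multi_hot_targets):
    """Per-label binary cross-entropy replacing softmax cross-entropy."""
    return F.binary_cross_entropy_with_logits(logits, multi_hot_targets.float())
```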
Implementation Details. Following [21], we use the ResNet architecture introduced in [72] as the generator network for G_θ(·), except that we replace the ReLU [73] activation with the Fused Leaky ReLU [74] activation to stabilize training (negative slope 0.2, scale √2). We use the Adam optimizer [75] with a learning rate of 0.0001, a batch size of 32, and exponential decay rates of 0.5 and 0.999. All images are resized to 224×224 and normalized with the mean and standard deviation before being fed to the generator. Further, similar to the Gaussian smoothing in CDA [21], in order to make the perturbations more transferable, we clamp the perturbed image between 0 and 1. This clamping trick helps increase the transferability of the perturbations. For a fair comparison, we apply this strategy to all attacks. Perturbation generators are trained for 20 epochs. We use PyTorch [76] in all experiments. Training time is 1 hour for the Pascal-VOC dataset and 10 hours for the MS-COCO dataset on two NVIDIA GeForce RTX 3090 GPUs. For all experiments, the number of patches for L_LPCL is set to 128.
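For illustration, a hypothetical inference-time pipeline matching the description above; the eps value assumes the ℓ∞ ≤ 10 budget is expressed in 8-bit pixel units for images scaled to [0, 1], project_linf is the earlier sketch, and the mean/standard-deviation normalization step is omitted for brevity:

```python
import torch
from torchvision import transforms

preprocess = transforms.Compose([transforms.Resize((224, 224)),
                                 transforms.ToTensor()])  # PIL image -> [0, 1]

def perturb(gen, pil_image, eps=10.0 / 255.0):
    x = preprocess(pil_image).unsqueeze(0)      # (1, 3, 224, 224)
    with torch.no_grad():
        x_delta = project_linf(gen(x), x, eps)  # l_inf projection P(.)
    return torch.clamp(x_delta, 0.0, 1.0)       # the "clamping trick"
```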
White-box, black-box, and strict black-box attacks. We analyze the white-box and black-box attack (attack within the same distribution as the adversary) performance of LPD-Attack in Table 7(a). This attack tests the strength of perturbations on the same type of task (i.e., multi-object classification) as used for training. Our proposed method mounts a stronger attack than GAP [20] and CDA [21]. In comparison to TDA [22], our attack shows comparable performance in most cases (where TDA [22] does better, the difference is very small), even though we do not need to manually choose a specific layer for each classifier to train the perturbation generator. Choosing a particular mid-layer for every classifier does not always guarantee better transferability of perturbations. A similar observation can be made in Table 7(b).
Figure 2: Framework overview.
Algorithm 1: LPD-Attack Training Algorithm
Input: clean images x from distribution T; perturbation ℓ∞ bound ϵ; surrogate classifier f(·); learning rate α
Output: perturbation generator G_θ(·)'s weights θ
/* Large-scale training of G_θ(·) */
1: Randomly initialize θ
2: Load and freeze the multi-object classifier f(·) trained on T
3: while not done do
4:   Input x to f(·) and get the L mid-layer features {f_k(x)}_{k=1}^{L}
5:   Generate the unbounded perturbed image x̃_δ = G_θ(x)
6:   Project it within bound ϵ using P(·) to obtain x_δ
7:   Input x_δ to f(·) and get the L mid-layer features {f_k(x_δ)}_{k=1}^{L}
8:   Compute the loss L = L_G + L_LPCL (Eqs. 1 and 2)
9:   Update θ with respect to L using Adam:
10:    θ ← θ − α ∇_θ L(θ)
Figure 3: Ablation analysis of LPD-Attack. (a): G_θ(·) trained on Pascal-VOC against Res152, strict black-box attacks on MS-COCO; (b), (c): G_θ(·) trained on Pascal-VOC against Dense169 for all cases; the perturbation bound was set to ℓ∞ ≤ 10.
Table 2: Average results when G_θ(·) is trained with Pascal-VOC. We summarize the attack capability of prior generative attack works under various victim scenarios with training data as Pascal-VOC. Results are averaged over three surrogate classifiers for all methods.

Attack Setting         Victim Details                                        GAP [20]  CDA [21]  TDA [22]  LPD-Attack
Setting 1 (easy)       Pascal-VOC (victim model = surrogate model)           55.22     54.79     53.73     52.69
Setting 2              Pascal-VOC (victim model ≠ surrogate model)           56.24     55.86     55.32     54.37
Setting 3              MS-COCO                                               40.86     40.51     39.79     38.69
Setting 4 (difficult)  CIFAR(10,100), STL-10, SVHN (coarse-grained tasks)    83.96     82.72     83.13     70.72
Setting 4 (difficult)  CUB-200, Stanford Cars, FGVC Aircraft (fine-grained)  90.50     90.21     88.61     73.72
Setting 4 (difficult)  ImageNet                                              73.05     72.33     69.91     45.12
Table 3: Average results when G_θ(·) is trained with MS-COCO. We summarize the attack capability of prior generative attack works under various victim scenarios with training data as MS-COCO. Results are averaged over three surrogate classifiers for all methods.

Attack Setting         Victim Details                                        GAP [20]  CDA [21]  TDA [22]  LPD-Attack
Setting 1 (easy)       MS-COCO (victim model = surrogate model)              41.09     39.96     34.31     34.91
Setting 2              MS-COCO (victim model ≠ surrogate model)              41.05     41.17     37.08     36.97
Setting 3              Pascal-VOC                                            56.03     55.63     51.84     52.06
Setting 4 (difficult)  CIFAR(10,100), STL-10, SVHN (coarse-grained tasks)    84.07     81.52     70.40     65.53
Setting 4 (difficult)  CUB-200, Stanford Cars, FGVC Aircraft (fine-grained)  90.64     89.98     74.88     63.39
Setting 4 (difficult)  ImageNet                                              73.25     71.94     42.37     27.51
Table 4: Setting 4 attack comparison when G_θ(·) is trained with Pascal-VOC. Perturbations are created on the test set of each task. f(·): Res152. The "No attack" row gives accuracy on clean images.

(a) Coarse-grained tasks (all victim models from [65])
Method        CIFAR10   CIFAR100  STL-10    SVHN
No attack     93.79%    74.28%    77.60%    96.03%
GAP [20]      92.94%    72.56%    74.33%    96.01%
CDA [21]      91.97%    72.18%    70.99%    95.74%
TDA [22]      92.49%    70.80%    73.31%    95.93%
LPD-Attack    76.61%    47.51%    70.49%    88.27%

(b) Fine-grained tasks
              CUB-200-2011          Stanford Cars         FGVC Aircraft
Method        Res50     SENet154    Res50     SENet154    Res50     SENet154
No attack     87.35%    86.81%      94.35%    93.36%      92.23%    92.05%
GAP [20]      86.24%    86.40%      93.79%    93.09%      91.69%    91.78%
CDA [21]      85.90%    86.11%      93.28%    92.69%      91.36%    91.90%
TDA [22]      83.93%    82.33%      92.92%    91.79%      90.04%    90.64%
LPD-Attack    59.34%    76.58%      77.35%    81.98%      73.78%    73.27%

(c) ImageNet task (on the ImageNet validation set, 50k samples)
Method        VGG16     VGG19     Res50     Res152    Dense121  Dense169
No attack     70.15%    70.95%    74.60%    77.34%    74.21%    75.74%
GAP [20]      69.19%    70.23%    73.71%    76.62%    73.36%    75.21%
CDA [21]      68.20%    69.41%    72.67%    75.95%    72.93%    74.79%
TDA [22]      65.60%    66.28%    70.47%    74.35%    70.11%    72.62%
LPD-Attack    32.24%    35.05%    48.53%    50.54%    49.99%    54.37%
Table 5: Setting 4 attack comparison when G_θ(·) is trained with MS-COCO. Perturbations are created on the test set of each task. f(·): Dense169. The "No attack" row gives accuracy on clean images.

(a) Coarse-grained tasks (all victim models from [65])
Method        CIFAR10   CIFAR100  STL-10    SVHN
No attack     93.79%    74.28%    77.60%    96.03%
GAP [20]      93.12%    72.72%    74.78%    95.65%
CDA [21]      90.77%    69.20%    70.31%    95.79%
TDA [22]      76.37%    40.35%    72.19%    92.67%
LPD-Attack    66.16%    35.12%    70.28%    90.56%

(b) Fine-grained tasks
              CUB-200-2011          Stanford Cars         FGVC Aircraft
Method        Res50     SENet154    Res50     SENet154    Res50     SENet154
No attack     87.35%    86.81%      94.35%    93.36%      92.23%    92.05%
GAP [20]      86.69%    86.33%      94.12%    93.10%      91.84%    91.78%
CDA [21]      85.57%    86.04%      93.10%    92.71%      91.15%    91.30%
TDA [22]      60.30%    70.04%      76.21%    80.48%      81.07%    81.19%
LPD-Attack    22.25%    74.77%      64.98%    81.31%      60.37%    76.66%

(c) ImageNet task (on the ImageNet validation set, 50k samples)
Method        VGG16     VGG19     Res50     Res152    Dense121  Dense169
No attack     70.15%    70.95%    74.60%    77.34%    74.21%    75.74%
GAP [20]      69.32%    70.39%    73.89%    76.75%    73.75%    75.38%
CDA [21]      67.24%    68.45%    72.17%    75.69%    73.12%    74.96%
TDA [22]      31.59%    33.11%    45.74%    58.15%    46.11%    39.49%
LPD-Attack    20.60%    23.60%    30.42%    37.07%    29.50%    23.88%
[Figure 3 panels: (a) loss analysis, accuracy (%) for L_G, L_LPCL, and L under white-box and strict black-box attacks; (b) impact of the number of patches R in L_LPCL (16, 32, 64, 128, 256, 512); (c) impact of the number of layers L (1-4).]
Table 6: Generative attack comparison when G_θ(·) is trained with Pascal-VOC. Gray-colored cells represent the Setting 1 attacks. f(·) in both Table 6(a) and Table 6(b) is pre-trained on Pascal-VOC. The "No attack" row gives accuracy on clean images.

(a) Setting 1 and Setting 2 attacks (Pascal-VOC trained victim models)
f(·)        Method        Res152    VGG19     Dense169
--          No attack     83.12%    83.18%    83.73%
Res152      GAP [20]      58.78%    48.52%    61.31%
Res152      CDA [21]      58.62%    48.69%    60.93%
Res152      TDA [22]      58.45%    48.19%    61.16%
Res152      LPD-Attack    57.22%    46.07%    59.63%
VGG19       GAP [20]      58.88%    45.60%    61.32%
VGG19       CDA [21]      58.18%    45.26%    60.73%
VGG19       TDA [22]      57.47%    42.61%    59.39%
VGG19       LPD-Attack    57.84%    42.62%    59.66%
Dense169    GAP [20]      58.83%    48.58%    61.29%
Dense169    CDA [21]      58.39%    48.25%    60.48%
Dense169    TDA [22]      58.04%    47.66%    60.12%
Dense169    LPD-Attack    57.21%    45.82%    58.23%

(b) Setting 3 attacks (MS-COCO trained victim models)
f(·)        Method        Res152    VGG19     Dense169
--          No attack     67.95%    66.49%    67.60%
Res152      GAP [20]      44.91%    34.70%    44.15%
Res152      CDA [21]      44.99%    34.89%    44.35%
Res152      TDA [22]      44.45%    34.46%    43.88%
Res152      LPD-Attack    42.36%    32.16%    42.37%
VGG19       GAP [20]      45.02%    31.10%    44.14%
VGG19       CDA [21]      43.41%    30.94%    43.31%
VGG19       TDA [22]      43.22%    27.74%    42.23%
VGG19       LPD-Attack    43.12%    28.31%    42.54%
Dense169    GAP [20]      44.88%    34.72%    44.12%
Dense169    CDA [21]      44.50%    34.42%    43.82%
Dense169    TDA [22]      44.21%    34.30%    43.58%
Dense169    LPD-Attack    43.09%    32.40%    41.86%

Figure 4: Illustration of perturbed images and attention shift. Row 1: clean images; Row 2: CAM [69] attention map on clean images; Row 3: perturbed images (ℓ∞ ≤ 10); Row 4: CAM [69] attention map on perturbed images. G_θ(·) was trained against Res152 for both datasets; examples are visualized on test sets with attention maps extracted from Res152.
Table 7: Generative attack comparison when G_θ(·) is trained with MS-COCO. Gray-colored cells represent the white-box attacks. f(·) in both Table 7(a) and Table 7(b) is pre-trained on MS-COCO. The "No attack" row gives accuracy on clean images.

(a) Setting 1 and Setting 2 attacks (MS-COCO trained victim models)
f(·)        Method        Res152    VGG19     Dense169
--          No attack     67.95%    66.49%    67.60%
Res152      GAP [20]      44.98%    34.61%    43.91%
Res152      CDA [21]      45.00%    34.91%    44.36%
Res152      TDA [22]      39.60%    29.41%    39.66%
Res152      LPD-Attack    41.02%    30.18%    40.33%
VGG19       GAP [20]      44.83%    34.67%    44.10%
VGG19       CDA [21]      44.41%    30.63%    43.53%
VGG19       TDA [22]      39.81%    23.04%    38.96%
VGG19       LPD-Attack    40.09%    24.23%    39.11%
Dense169    GAP [20]      44.55%    34.28%    43.61%
Dense169    CDA [21]      44.92%    34.86%    44.24%
Dense169    TDA [22]      42.69%    31.96%    40.30%
Dense169    LPD-Attack    41.96%    30.19%    39.47%

(b) Setting 3 attacks (Pascal-VOC trained victim models)
f(·)        Method        Res152    VGG19     Dense169
--          No attack     83.12%    83.18%    83.73%
Res152      GAP [20]      58.80%    48.67%    60.80%
Res152      CDA [21]      58.67%    48.66%    60.92%
Res152      TDA [22]      54.05%    43.29%    57.55%
Res152      LPD-Attack    55.76%    43.88%    58.15%
VGG19       GAP [20]      59.09%    48.61%    61.17%
VGG19       CDA [21]      58.44%    45.20%    60.35%
VGG19       TDA [22]      55.06%    38.41%    57.49%
VGG19       LPD-Attack    55.24%    40.71%    58.10%
Dense169    GAP [20]      58.45%    48.18%    60.47%
Dense169    CDA [21]      58.69%    48.68%    61.04%
Dense169    TDA [22]      56.65%    45.52%    58.54%
Dense169    LPD-Attack    56.08%    43.46%    57.23%
References
[1] Zhaoyin Jia et al. Object detection neural networks, June 11, 2019. US Patent 10,318,827.
[2] Jiajun Zhu et al. Method and system for hierarchical human/crowd behavior detection, June 25, 2020. US Patent 10,572,717.
[3] Bogdan Georgescu et al. Method and system for anatomical object detection using marginal space deep neural networks, June 15, 2017. US Patent 9,730,643.
[4] Ranjie Duan, Xingjun Ma, Yisen Wang, James Bailey, A. Kai Qin, and Yun Yang. Adversarial camouflage: Hiding physical-world attacks with natural styles. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 1000-1008. IEEE, 2020.
[5] Mate Szarvas, Akira Yoshizawa, Munetaka Yamamoto, and Jun Ogata. Pedestrian detection with convolutional neural networks. In IEEE Proceedings. Intelligent Vehicles Symposium, 2005, pages 224-229. IEEE, 2005.
[6] Jonathan Lwowski, Prasanna Kolar, Patrick Benavidez, Paul Rad, John J. Prevost, and Mo Jamshidi. Pedestrian detection system for smart communities using deep convolutional neural networks. In 2017 12th System of Systems Engineering Conference (SoSE), pages 1-6. IEEE, 2017.
[7] Abhishek Aich, Meng Zheng, Srikrishna Karanam, Terrence Chen, Amit K. Roy-Chowdhury, and Ziyan Wu. Spatio-temporal representation factorization for video-based person re-identification. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 152-162, October 2021.
[8] Eugene Shkolyar, Xiao Jia, Timothy C. Chang, Dharati Trivedi, Kathleen E. Mach, Max Q.-H. Meng, Lei Xing, and Joseph C. Liao. Augmented bladder tumor detection using deep learning. European Urology, 76(6):714-718, 2019.
[9] Angel Cruz-Roa, Hannah Gilmore, Ajay Basavanhally, Michael Feldman, Shridar Ganesan, Natalie N. C. Shih, John Tomaszewski, Fabio A. González, and Anant Madabhushi. Accurate and reproducible invasive breast cancer detection in whole-slide images: A deep learning approach for quantifying tumor extent. Scientific Reports, 7(1):1-14, 2017.
[10] Matthew R. Boutell, Jiebo Luo, Xipeng Shen, and Christopher M. Brown. Learning multi-label scene classification. Pattern Recognition, 37(9):1757-1771, 2004.
[11] Min-Ling Zhang and Zhi-Hua Zhou. ML-kNN: A lazy learning approach to multi-label learning. Pattern Recognition, 40(7):2038-2048, 2007.
[12] Jesse Read, Bernhard Pfahringer, Geoff Holmes, and Eibe Frank. Classifier chains for multi-label classification. Machine Learning, page 333, 2011.
[13] Shang-Fu Chen, Yi-Chen Chen, Chih-Kuan Yeh, and Yu-Chiang Wang. Order-free RNN with visual attention for multi-label classification. In Proceedings of the AAAI Conference on Artificial Intelligence. AAAI, 2018.
[14] Erin L. Allwein, Robert E. Schapire, and Yoram Singer. Reducing multiclass to binary: A unifying approach for margin classifiers. Journal of Machine Learning Research, 1(Dec):113-141, 2000.
[15] Zhao-Min Chen, Xiu-Shen Wei, Peng Wang, and Yanwen Guo. Multi-label image recognition with graph convolutional networks. In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). IEEE, 2019.
[16] Ashley Varghese, Jayavardhana Gubbi, Akshaya Ramaswamy, and P. Balamuralidhar. ChangeNet: A deep learning architecture for visual change detection. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops. Springer, 2018.
[17] Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
[18] Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39-57. IEEE, 2017.
[19] Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. DeepFool: A simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2574-2582. IEEE, 2016.
[20] Omid Poursaeed, Isay Katsman, Bicheng Gao, and Serge Belongie. Generative adversarial perturbations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4422-4431. IEEE, 2018.
[21] Muzammal Naseer, Salman H. Khan, Muhammad Haris Khan, Fahad Shahbaz Khan, and Fatih Porikli. Cross-domain transferability of adversarial perturbations. arXiv preprint arXiv:1905.11736, 2019.
[22] Mathieu Salzmann et al. Learning transferable adversarial perturbations. Advances in Neural Information Processing Systems, 34, 2021.
[23] Chaowei Xiao, Bo Li, Jun-Yan Zhu, Warren He, Mingyan Liu, and Dawn Song. Generating adversarial examples with adversarial networks. arXiv preprint arXiv:1801.02610, 2018.
[24] Qilong Zhang, Xiaodan Li, Yuefeng Chen, Jingkuan Song, Lianli Gao, Yuan He, and Hui Xue. Beyond ImageNet attack: Towards crafting adversarial examples for black-box domains. In International Conference on Learning Representations (ICLR), 2022.
[25] Konda Reddy Mopuri, Utkarsh Ojha, Utsav Garg, and R. Venkatesh Babu. NAG: Network for adversary generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 742-751, 2018.
[26] Krishna Kanth Nakka and Mathieu Salzmann. Indirect local attacks for context-aware semantic segmentation networks. In European Conference on Computer Vision, pages 611-628. Springer, 2020.
[27] Taesung Park, Alexei A. Efros, Richard Zhang, and Jun-Yan Zhu. Contrastive learning for unpaired image-to-image translation. In European Conference on Computer Vision, pages 319-345. Springer, 2020.
[28] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9729-9738. IEEE, 2020.
[29] Zhirong Wu, Yuanjun Xiong, Stella X. Yu, and Dahua Lin. Unsupervised feature learning via non-parametric instance discrimination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3733-3742. IEEE, 2018.
[30] Alex Andonian, Taesung Park, Bryan Russell, Phillip Isola, Jun-Yan Zhu, and Richard Zhang. Contrastive feature loss for image prediction. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 1934-1943. IEEE, 2021.
[31] Mark Everingham, Luc Van Gool, Christopher K. I. Williams, John Winn, and Andrew Zisserman. The Pascal Visual Object Classes (VOC) challenge. International Journal of Computer Vision, 88(2):303-338, 2010.
[32] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770-778. IEEE, 2016.
[33] Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. Microsoft COCO: Common objects in context. In European Conference on Computer Vision, pages 740-755. Springer, 2014.
[34] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25, 2012.
[35] Alhussein Fawzi, Omar Fawzi, and Pascal Frossard. Analysis of classifiers' robustness to adversarial perturbations. Machine Learning, pages 481-508, 2018.
[36] Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236, 2016.
[37] Anh Nguyen, Jason Yosinski, and Jeff Clune. Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 427-436. IEEE, 2015.
[38] Aishan Liu, Xianglong Liu, Jiaxin Fan, Yuqing Ma, Anlan Zhang, Huiyuan Xie, and Dacheng Tao. Perceptual-sensitive GAN for generating adversarial patches. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 1028-1035. AAAI, 2019.
[39] Jiangfan Han, Xiaoyi Dong, Ruimao Zhang, Dongdong Chen, Weiming Zhang, Nenghai Yu, Ping Luo, and Xiaogang Wang. Once a MAN: Towards multi-target attack via learning multi-target adversarial network once. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 5158-5167. IEEE, 2019.
[40] Chaowei Xiao, Jun-Yan Zhu, Bo Li, Warren He, Mingyan Liu, and Dawn Song. Spatially transformed adversarial examples. arXiv preprint arXiv:1801.02612, 2018.
[41] Shasha Li, Abhishek Aich, Shitong Zhu, Salman Asif, Chengyu Song, Amit Roy-Chowdhury, and Srikanth Krishnamurthy. Adversarial attacks on black box video classifiers: Leveraging the power of geometric transformations. Advances in Neural Information Processing Systems, 34, 2021.
[42] Yanbo Fan, Baoyuan Wu, Tuanhui Li, Yong Zhang, Mingyang Li, Zhifeng Li, and Yujiu Yang. Sparse adversarial attack via perturbation factorization. In Computer Vision - ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXII, pages 35-50. Springer, 2020.
[43] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
[44] Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li. Boosting adversarial attacks with momentum. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9185-9193. IEEE, 2018.
[45] Qingquan Song, Haifeng Jin, Xiao Huang, and Xia Hu. Multi-label adversarial perturbations. In 2018 IEEE International Conference on Data Mining (ICDM), pages 1242-1247. IEEE, 2018.
[46] Nan Zhou, Wenjian Luo, Xin Lin, Peilan Xu, and Zhenya Zhang. Generating multi-label adversarial examples by linear programming. In 2020 International Joint Conference on Neural Networks (IJCNN), pages 1-8. IEEE, 2020.
[47] Shu Hu, Lipeng Ke, Xin Wang, and Siwei Lyu. TkML-AP: Adversarial attacks to top-k multi-label learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 7649-7657. IEEE, 2021.
[48] Shaohao Lu, Yuqiao Xian, Ke Yan, Yi Hu, Xing Sun, Xiaowei Guo, Feiyue Huang, and Wei-Shi Zheng. Discriminator-free generative adversarial attack. In Proceedings of the 29th ACM International Conference on Multimedia, pages 1544-1552. ACM, 2021.
[49] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748, 2018.
[50] Wen Zhou, Xin Hou, Yongjun Chen, Mengyun Tang, Xiangqi Huang, Xiang Gan, and Yong Yang. Transferable adversarial perturbations. In Proceedings of the European Conference on Computer Vision (ECCV), pages 452-467. Springer, 2018.
[51] Nathan Inkawhich, Kevin J. Liang, Binghui Wang, Matthew Inkawhich, Lawrence Carin, and Yiran Chen. Perturbing across the feature hierarchy to improve standard and strict blackbox attack transferability. In NeurIPS, 2020.
[52] Nathan Inkawhich, Wei Wen, Hai Helen Li, and Yiran Chen. Feature space perturbations yield more transferable adversarial examples. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7066-7074. IEEE, 2019.
[53] Yuezun Li, Ming-Ching Chang, Pu Sun, Honggang Qi, Junyu Dong, and Siwei Lyu. TransRPN: Towards the transferable adversarial perturbations using region proposal networks and beyond. Computer Vision and Image Understanding, 213:103302, 2021.
[54] Shantanu Godbole and Sunita Sarawagi. Discriminative methods for multi-labeled classification. In Pacific-Asia Conference on Knowledge Discovery and Data Mining, pages 22-30. Springer, 2004.
[55] Mohammad S. Sorower. A literature survey on algorithms for multi-label learning. Oregon State University, Corvallis, 18:1-25, 2010.
[56] Gao Huang, Zhuang Liu, Laurens van der Maaten, and Kilian Q. Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4700-4708. IEEE, 2017.
[57] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[58] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.
[59] Adam Coates, Andrew Ng, and Honglak Lee. An analysis of single-layer networks in unsupervised feature learning. In Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pages 215-223. JMLR Workshop and Conference Proceedings, 2011.
[60] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y. Ng. Reading digits in natural images with unsupervised feature learning. 2011.
[61] Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The Caltech-UCSD Birds-200-2011 dataset. 2011.
[62] Jonathan Krause, Jia Deng, Michael Stark, and Li Fei-Fei. Collecting a large-scale dataset of fine-grained cars. 2013.
[63] Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew Blaschko, and Andrea Vedaldi. Fine-grained visual classification of aircraft. arXiv preprint arXiv:1306.5151, 2013.
[64] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248-255. IEEE, 2009.
[65] Aaron Chen. Coarse-grain models and pre-trained weights. GitHub link, 2022.
[66] Jie Hu, Li Shen, and Gang Sun. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 7132-7141. IEEE, 2018.
[67] Alibaba-AAIG. Fine-grain models and pre-trained weights. GitHub link, 2022.
[68] PyTorch. ImageNet models and pre-trained weights. PyTorch, 2022.
[69] Bolei Zhou, Aditya Khosla, Agata Lapedriza, Aude Oliva, and Antonio Torralba. Learning deep features for discriminative localization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2921-2929. IEEE, 2016.
[70] Abhishek Aich, Akash Gupta, Rameswar Panda, Rakib Hyder, M. Salman Asif, and Amit K. Roy-Chowdhury. Non-adversarial video synthesis with learned priors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
[71] Akash Gupta, Abhishek Aich, and Amit K. Roy-Chowdhury. ALANET: Adaptive latent attention network for joint video deblurring and interpolation. In Proceedings of the 28th ACM International Conference on Multimedia, pages 256-264, 2020.
[72] Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, pages 694-711. Springer, 2016.
[73] Vinod Nair and Geoffrey E. Hinton. Rectified linear units improve restricted Boltzmann machines. In ICML, 2010.
[74] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4401-4410. IEEE, 2019.
[75] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
[76] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. PyTorch: An imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pages 8026-8037. NeurIPS, 2019.
| [] |
[
"Quantum algorithm for simulating real time evolution of lattice Hamiltonians",
"Quantum algorithm for simulating real time evolution of lattice Hamiltonians",
"Quantum algorithm for simulating real time evolution of lattice Hamiltonians",
"Quantum algorithm for simulating real time evolution of lattice Hamiltonians"
] | [
"Jeongwan Haah [email protected] ",
"Matthew B Hastings [email protected] ",
"Robin Kothari [email protected] ",
"Hao Guang [email protected] ",
"Low ",
"Jeongwan Haah [email protected] ",
"Matthew B Hastings [email protected] ",
"Robin Kothari [email protected] ",
"Hao Guang [email protected] ",
"Low "
] | [] | [] | We study the problem of simulating the time evolution of a lattice Hamiltonian, where the qubits are laid out on a lattice and the Hamiltonian only includes geometrically local interactions (i.e., a qubit may only interact with qubits in its vicinity). This class of Hamiltonians is very general and is believed to capture fundamental interactions of physics.Our algorithm simulates the time evolution of such a Hamiltonian on n qubits for time T up to error using O(nT polylog(nT / )) gates with depth O(T polylog(nT / )). Our algorithm is the first simulation algorithm that achieves gate cost quasilinear in nT and polylogarithmic in 1/ . Our algorithm also readily generalizes to time-dependent Hamiltonians and yields an algorithm with similar gate count for any piecewise slowly varying time-dependent bounded local Hamiltonian.We also prove a matching lower bound on the gate count of such a simulation, showing that any quantum algorithm that can simulate a piecewise constant bounded local Hamiltonian in one dimension to constant error requires Ω(nT ) gates in the worst case. The lower bound holds even if we only require the output state to be correct on local measurements. To our best knowledge, this is the first nontrivial lower bound on the gate complexity of the simulation problem.Our algorithm is based on a decomposition of the time-evolution unitary into a product of small unitaries using Lieb-Robinson bounds. In the appendix, we prove a Lieb-Robinson bound tailored to Hamiltonians with small commutators between local terms, giving zero Lieb-Robinson velocity in the limit of commuting Hamiltonians. This improves the performance of our algorithm when the Hamiltonian is close to commuting. | 10.1137/18m1231511 | [
"https://arxiv.org/pdf/1801.03922v4.pdf"
] | 53,064,154 | 1801.03922 | f2e375082282da2a7c1a28806eb91ea72f7ff076 |
Quantum algorithm for simulating real time evolution of lattice Hamiltonians
Jeongwan Haah [email protected]
Matthew B Hastings [email protected]
Robin Kothari [email protected]
Hao Guang [email protected]
Low
Quantum algorithm for simulating real time evolution of lattice Hamiltonians
8 February 2020 ‡
We study the problem of simulating the time evolution of a lattice Hamiltonian, where the qubits are laid out on a lattice and the Hamiltonian only includes geometrically local interactions (i.e., a qubit may only interact with qubits in its vicinity). This class of Hamiltonians is very general and is believed to capture fundamental interactions of physics. Our algorithm simulates the time evolution of such a Hamiltonian on n qubits for time T up to error ε using O(nT polylog(nT/ε)) gates with depth O(T polylog(nT/ε)). Our algorithm is the first simulation algorithm that achieves gate cost quasilinear in nT and polylogarithmic in 1/ε. Our algorithm also readily generalizes to time-dependent Hamiltonians and yields an algorithm with similar gate count for any piecewise slowly varying time-dependent bounded local Hamiltonian. We also prove a matching lower bound on the gate count of such a simulation, showing that any quantum algorithm that can simulate a piecewise constant bounded local Hamiltonian in one dimension to constant error requires Ω(nT) gates in the worst case. The lower bound holds even if we only require the output state to be correct on local measurements. To our best knowledge, this is the first nontrivial lower bound on the gate complexity of the simulation problem. Our algorithm is based on a decomposition of the time-evolution unitary into a product of small unitaries using Lieb-Robinson bounds. In the appendix, we prove a Lieb-Robinson bound tailored to Hamiltonians with small commutators between local terms, giving zero Lieb-Robinson velocity in the limit of commuting Hamiltonians. This improves the performance of our algorithm when the Hamiltonian is close to commuting.
Introduction
Background The problem of simulating the time evolution of a quantum system is perhaps the most important application of quantum computers. Indeed, this was the reason Feynman proposed quantum computing [Fey82], and it remains an important practical application since a significant fraction of the world's supercomputing power is used to solve instances of this problem that arise in materials science, condensed matter physics, high energy physics, and chemistry [Nat16].
All known classical algorithms (i.e., algorithms that run on traditional non-quantum computers) for this problem run in exponential time. On the other hand, from the early days of quantum computing [Fey82,Llo96] it was known that quantum computers can solve this problem in polynomial time. More precisely, when formalized as a decision problem, the problem of simulating the time evolution of a quantum system is in the complexity class BQP, the class of problems solved by a quantum computer to bounded error in polynomial time. Furthermore, the problem is complete for BQP [Fey85,Chi04,Nag10], which means we do not expect there to be efficient classical algorithms for the problem, since that would imply BPP = BQP, which in turn would imply polynomial-time algorithms for problems such as integer factorization and discrete log [Sho97].
Hamiltonian simulation problem The Hamiltonian simulation problem is a standard formalization of the problem of simulating the time evolution of a quantum system. In this problem, we assume the quantum system whose time evolution we wish to simulate consists of n qubits and we want to simulate its time evolution for time T, in the sense that we are provided with the initial state |ψ(0)⟩ and we want to compute the state of the system at time T, |ψ(T)⟩. The goal of an efficient simulation is to solve the problem in time polynomial in n and T.
The state of a system of n qubits can be described by a complex vector of dimension 2^n of unit norm. Since we are studying quantum algorithms for the problem, we are given the input as an n-qubit quantum state, and have to output an n-qubit quantum state. The relation between the output state at time T and the initial state at time 0 is given by the Schrödinger equation
i (d/dt)|ψ(t)⟩ = H(t)|ψ(t)⟩,    (1)
where the Hamiltonian H, a 2^n × 2^n complex Hermitian matrix, has entries which may also be functions of time. The Hamiltonian captures the interaction between the constituents of the system and governs time dynamics. In the special case where the Hamiltonian is independent of time, the Schrödinger equation can be solved to yield |ψ(T)⟩ = e^{−iHT}|ψ(0)⟩. More formally, the input to the Hamiltonian simulation problem consists of a Hamiltonian H (or H(t) in the time-dependent case), a time T, and an error parameter ε. The goal is to output a quantum circuit that approximates the unitary matrix that performs the time evolution above (e.g., for time-independent Hamiltonians, the quantum circuit should approximate the unitary e^{−iHT}). The notion of approximation used is the spectral norm distance between the ideal unitary U and the one V performed by the circuit. This implies that the circuit as a quantum channel is close to the ideal one in the completely bounded trace norm distance:
sup_ρ (1/2) ‖(V ⊗ I)ρ(V ⊗ I)† − (U ⊗ I)ρ(U ⊗ I)†‖_tr ≤ ‖V − U‖,

where ‖A‖_tr = Tr √(A†A) is the trace norm, ‖·‖ is the spectral norm, I is the identity on an ancilla system, and ρ ranges over all density matrices on the joint system. The inequality is easily seen by using the triangle inequality of the trace norm and the inequality ‖AB‖_tr ≤ ‖A‖_tr ‖B‖ for any A, B [Wat18, Eq. 1.175].
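For small systems, this error metric is easy to evaluate numerically. The following is a minimal sketch (assuming numpy/scipy; the toy 2-qubit Hamiltonian and the crude first-order Trotter approximation V are illustrative choices, not constructions from this paper):

    import numpy as np
    from scipy.linalg import expm

    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    H = np.kron(X, X) + 0.5 * np.kron(Z, np.eye(2))  # toy 2-qubit Hamiltonian

    T = 1.0
    U = expm(-1j * T * H)  # ideal time evolution e^{-iHT}

    # A crude first-order Trotter approximation V of U, purely for illustration.
    V = expm(-1j * T * np.kron(X, X)) @ expm(-1j * 0.5 * T * np.kron(Z, np.eye(2)))

    err = np.linalg.norm(U - V, ord=2)  # spectral norm ||U - V||
    print(err)  # also upper-bounds the completely bounded trace norm distance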
The cost of a quantum circuit is measured by the number of gates used in the circuit, where the gates come from some fixed universal gate set. Note that it is important to describe how the Hamiltonian in the input is specified, since it is a matrix of size 2^n × 2^n. This will be made precise when talking about specific classes of Hamiltonians that we would like to simulate.
Geometrically local Hamiltonian simulation The most general class of Hamiltonians that is commonly studied in the literature is the class of sparse Hamiltonians [ATS03, Chi04, BACS07, BCC + 14, BCC + 15, BCK15,LC17b,LC16,LC17a]. A Hamiltonian on n qubits is sparse if it has only poly(n) nonzero entries in any row or column. For such Hamiltonians, we assume we have an efficient algorithm to compute the nonzero entries in each row or column, and the input Hamiltonian is specified by an oracle that can be queried for this information. In this model, recent quantum algorithms have achieved optimal complexity in terms of the queries made to the oracle [LC17b].
A very important special case of this type of Hamiltonian is a "local Hamiltonian." Confusingly, this term is used to describe two different kinds of Hamiltonians in the literature. We distinguish these two definitions of "local" by referring to them as "non-geometrically local" and "geometrically local" in this introduction. A non-geometrically local Hamiltonian is a Hamiltonian H(t) that can be written as a sum of polynomially many terms H_j(t), each of which acts nontrivially on only k qubits at a time (i.e., the matrix acts as identity on all other qubits). A geometrically local Hamiltonian is similar, except that each term H_j(t) must act on k adjacent qubits. Since we refer to "adjacent" qubits, the geometry of how the qubits are laid out in space must be specified. In this paper we will deal with qubits laid out in a D-dimensional lattice in Euclidean space. That is, qubits are located at points in Z^D and are adjacent if they are close in Euclidean distance. In this paper, we consider D as constant so asymptotic expressions will not show any dependence on D.
(However, we briefly discuss a strategy for large D in Appendix B.)
Geometrically local Hamiltonians are central objects in physics. Indeed, fundamental forces of Nature (strong, weak, and electromagnetic interactions) are modeled by geometrically local Hamiltonians on lattices [KS75]. From a practical perspective, geometrically local Hamiltonians capture a large fraction of the condensed matter systems we are interested in. From now on, we will exclusively use the term "local" to refer to geometrically local Hamiltonians.
Prior best algorithms To describe the known algorithms for this problem, we need to formally specify the problem. Although our results apply to very general time-dependent Hamiltonians, while comparing to previous work we assume the simpler case where the Hamiltonian is time independent. We assume our n qubits are laid out in a D-dimensional lattice Λ in R^D, where D = O(1), and every unit ball contains O(1) qubits. We assume our Hamiltonian H is given as a sum of terms H = Σ_{X⊆Λ} h_X, where each h_X only acts nontrivially on qubits in X (and acts as identity on the qubits in Λ \ X), such that h_X = 0 if diam(X) > 1, which enforces geometric locality. (More formally, we rescale the metric in such a way that diam(X) > 1 implies h_X = 0.) We normalize the Hamiltonian by requiring ‖h_X‖ ≤ 1.
We consider a quantum circuit simulating the time evolution due to such a Hamiltonian efficient if it uses poly(n, T, 1/ε) gates. To get some intuition for what we should hope for, notice that in the real world, time evolution takes time T and uses n qubits. Regarding "Nature" as a quantum simulator, we might expect that there is a quantum circuit that uses O(n) qubits, O(T) circuit depth, and O(nT) total gates to solve the problem. It is also reasonable to allow logarithmic overhead in the simulation since such overheads are common even when one classical system simulates the time evolution of another (e.g., when one kind of Turing machine simulates another).
However, previous algorithms for this problem fall short of this expectation. The best Hamiltonian simulation algorithms for sparse Hamiltonians [BCC+15, LC17b, LC16] have query complexity O(nT polylog(nT/ε)), but the assumed oracle for the entries requires O(n) gates to implement, yielding an algorithm that uses O(n^2 T polylog(nT/ε)) gates. This was also observed in a recent paper of Childs, Maslov, Nam, Ross, and Su [CMN+17], who noted that for T = n, all the sparse Hamiltonian simulation algorithms had gate cost proportional to n^3 (or worse). A standard application of high-order Lie-Trotter-Suzuki expansions [Tro59, Suz91, Llo96, BACS07] yields gate complexity O(n^2 T (nT/ε)^δ) for any fixed δ > 0. It has been argued [JLP14, Sec. 4.3] that this in fact yields an algorithm with gate complexity O(nT (nT/ε)^δ) for any fixed δ > 0. We believe this analysis is correct, but perhaps some details need to be filled in to make the analysis rigorous. In any case, this algorithm still performs worse than desired, and in particular does not have polylogarithmic dependence on 1/ε.
Results
We exhibit a quantum algorithm that simulates the time evolution due to a time-dependent lattice Hamiltonian with a circuit that uses O(nT polylog(nT/ε)) geometrically local 2-qubit gates (i.e., the gates in our circuit also respect the geometry of the qubits), with depth O(T polylog(nT/ε)), using only polylog(nT/ε) ancilla qubits. We then also prove a matching lower bound, showing that no quantum algorithm can do better (up to logarithmic factors), even if we relax the output requirement significantly. We now describe our results more formally.
Algorithmic results We consider a more general version of the problem with time-dependent Hamiltonians. In this case we will have H(t) = Σ_{X⊆Λ} h_X(t) with the locality and norm conditions as before. However, now the operators h_X(t) are functions of time and we need to impose some reasonable constraints on the entries to obtain polynomial-time algorithms.
First we need to be able to compute the entries of our Hamiltonian efficiently at a given time t. We say that a function α : [0, T] ∋ t ↦ α(t) ∈ R is efficiently computable if there is an algorithm that outputs α(t) to precision ε for any given input t specified to precision ε in running time polylog(T/ε). Note that any complex-valued analytic function on a nonzero neighborhood of a closed real interval in the complex plane is efficiently computable (see Appendix D). We will assume that each entry in a local term h_X(t) is efficiently computable.
In addition to being able to compute the entries of the Hamiltonian, we require that the entries do not change wildly with time; otherwise, a sample of entries at discrete times may not predict the behavior of the entries at other times. We say that a function α on the interval [0, T] (T ≥ 1) is piecewise slowly varying if there are M = O(T) intervals [t_{j−1}, t_j] with 0 = t_0 < t_1 < ··· < t_M = T such that dα(t)/dt exists and is bounded by 1/(t_j − t_{j−1}) for t ∈ (t_{j−1}, t_j). In particular, a function is piecewise slowly varying if it is a sum of O(T) pieces, each of which has derivative at most O(1). We will assume that each entry in a term h_X(t) is piecewise slowly varying.
We are now ready to state our main result, which is proved in Section 2.

Theorem 1. Let H(t) = Σ_{X⊆Λ} h_X(t) be a time-dependent Hamiltonian on a lattice Λ of n qubits, embedded in the Euclidean metric space R^D. Assume that every unit ball contains O(1) qubits and h_X = 0 if diam(X) > 1. Also assume that every local term h_X(t) is efficiently computable (e.g., analytic), piecewise slowly varying on the time domain [0, T], and has ‖h_X(t)‖ ≤ 1 for any X and t.
Then, there exists a quantum algorithm that can approximate the time evolution of H for time T to accuracy ε using O(Tn polylog(Tn/ε)) 2-qubit local gates, and has depth O(T polylog(Tn/ε)).
Our algorithm uses O(1) ancillas per system qubit on which H is defined. The ancillas are interspersed with the system qubits, and all the gates respect the locality of the lattice.
Lower bounds
We also prove a lower bound on the gate complexity of simulating the time evolution of a time-dependent lattice Hamiltonian. This lower bound matches, up to logarithmic factors, the gate complexity of the algorithm presented in Theorem 1. Note that unlike previous lower bounds on Hamiltonian simulation [BACS07, BCC + 14, BCK15], which prove lower bounds on query complexity, this is a lower bound on the number of gates required to approximately implement the time-evolution unitary. To our best knowledge, this is the first nontrivial lower bound on the gate complexity of the simulation problem. For concreteness, we focus on a 1-dimensional time-dependent local Hamiltonian in this section, although the lower bound extends to other constant dimensions with minor modifications. The lower bounds are proved in Section 3.
Before stating the result formally, let us precisely define the class of Hamiltonians for which we prove the lower bound. We say a Hamiltonian H(t) acting on n qubits is a "piecewise constant 1D Hamiltonian" if H(t) = Σ_{j=1}^{n−1} H_j(t), where H_j(t) is only supported on qubits j and j + 1 with max_t ‖H_j(t)‖ = O(1), and there is a time slicing
0 = t_0 < t_1 < ··· < t_M = T, where t_m − t_{m−1} ≤ 1 and M = O(T), such that H(t) is time-independent within each time slice.
For such Hamiltonians, the time evolution operator for time T can be simulated with error at most ε using Theorem 1 with O(Tn polylog(Tn/ε)) 2-qubit local gates (i.e., the 2-qubit gates only act on adjacent qubits). In particular, for any constant error, the simulation only requires O(Tn) 2-qubit local gates. We prove a matching lower bound, where the lower bound even holds against circuits that may use non-geometrically local (i.e., acting on non-adjacent qubits) 2-qubit gates from a possibly infinite gate set and unlimited ancilla qubits.
Theorem 2. For any integers n and T ≤ 4^n, there exists a piecewise constant bounded 1D Hamiltonian H(t) on n qubits, such that any quantum circuit that approximates the time evolution due to H(t) for time T to constant error must use Ω(Tn) 2-qubit gates. The quantum circuit may use unlimited ancilla qubits and the gates may be non-local and come from a possibly infinite gate set.
Note that this lower bound only holds for T ≤ 4^n, because any unitary on n qubits can be implemented with O(4^n) 2-qubit local gates [BBC+95, Kni95].
We can also strengthen our lower bound to work in the situation where we are only interested in measuring a local observable at the end of the simulation. The simulation algorithm presented in Theorem 1 provides a strong guarantee: the output state is ε-close to the ideal output state in trace distance. Trace distance captures distinguishability with respect to arbitrary measurements, but for some applications it might be sufficient for the output state to be close to the ideal state with respect to local measurements only. We show that even in this limited measurement setting, it is not possible to speed up our algorithm in general. In fact, our lower bound works even if the only local measurement performed is a computational basis measurement on the first output qubit.
Theorem 3. For any integers n and T such that 1 ≤ n ≤ T ≤ 2^n, there exists a piecewise constant bounded 1D Hamiltonian H(t) on n qubits, such that any quantum circuit that approximates the time evolution due to H(t) for time T to constant error on any local observable must use Ω(Tn) 2-qubit gates. If T ≤ n, we have a lower bound of Ω(T^2) gates. (The quantum circuit may use unlimited ancilla qubits and the gates may be non-local and come from a possibly infinite gate set.)
Note that the fact that we get a weaker lower bound of Ω(T^2) when T ≤ n is not a limitation, but reflects the fact that small time evolutions are actually easier to simulate when the measurement is local. To see this, consider first simulating the time evolution using the algorithm in Theorem 1. This yields a circuit with O(Tn) 2-qubit local gates. But if we only want the output of a local measurement after time T, qubits that are far away from the measured qubits cannot affect the output, since the circuit only consists of 2-qubit local gates. Hence we can simply remove all gates that are farther from the measured qubits than the depth of the circuit, O(T). We are then left with a circuit that uses O(T^2) gates, matching the lower bound in Theorem 3.
Techniques
Algorithm Our algorithm is based on a decomposition of the time evolution unitary using Lieb-Robinson bounds [LR72,Has04,NS06,HK06,Has10], that was made explicit by Osborne [Osb06] (see also Michalakis [Mic12, Sec. III]), which when combined with recent advances in Hamiltonian simulation [BCC + 15, LC17b, LC16], yields Theorem 1.
Lieb-Robinson bounds are theorems that informally state that information travels at a constant speed in geometrically local Hamiltonians. For intuition, consider a 1-dimensional lattice of qubits and a geometrically local Hamiltonian that is evolved for a short amount of time. If the time is too short, no information about the first qubit can be transmitted to the last qubit. Lieb-Robinson bounds make this intuition precise, and show that the qubit at position n is only affected by the qubits and operators at position 1 after time Ω(n). Note that if this were a small-depth unitary circuit of geometrically local 2-qubit gates, such a statement would follow using a "lightcone" argument. In other words, after one layer of geometrically local 2-qubit gates, the influence of qubit 1 can only have spread to qubit 2. Similarly, after k layers of 2-qubit gates, the influence of qubit 1 can only have spread up to qubit k + 1. The fact that this extends to geometrically local Hamiltonians is nontrivial, and is only approximately true. See Lemma 5 for a formal statement of a Lieb-Robinson bound.
We use these ideas to chop up the large unitary that performs time evolution for the full Hamiltonian H into many smaller unitaries that perform time evolution for a small portion of the Hamiltonian. Quantitatively, we break Hamiltonian simulation for H for time O(1) into O(n/log(nT/ε)) pieces, each of which is a Hamiltonian simulation problem for a Hamiltonian on an instance of size O(log(nT/ε)) to exponentially small error. At this point we can use any Hamiltonian simulation algorithm for the smaller piece as long as it has polynomial gate cost and has exponentially good dependence on ε. While Hamiltonian simulation algorithms based on product formulas do not have error dependence that is polylog(1/ε), recent Hamiltonian simulation algorithms, such as [BCC+14, BCC+15, BCK15, LC17b, LC16], have polylog(1/ε) scaling. Thus our result importantly uses the recent advances in Hamiltonian simulation with improved error scaling.
Lower bound As noted before, we lower bound the gate complexity (or total number of gates) required for Hamiltonian simulation, which is different from prior work which proved lower bounds on the query complexity of Hamiltonian simulation. As such, our techniques are completely different from those used in prior work. Informally, our lower bounds are based on a refined circuit-size hierarchy theorem for quantum circuits, although we are technically comparing two different resources in two different models, which are simulation time for Hamiltonians versus gate cost for circuits.
As a simple motivating example, consider circuit-size hierarchy theorems for classical or quantum circuits more generally. Abstractly, a hierarchy theorem generally states that a computational model with X amount of a resource (e.g., time, space, gates) can do more if given more of the same resource. For example, it can be shown that for every G ≤ 2^n/n, there exists a Boolean function on n bits that cannot be computed by a circuit of size G, but can be computed by a circuit of size G + O(n). We show similar hierarchy theorems for quantum circuit size, except that we show that the circuit of larger size that computes the function actually comes from a weaker family of circuits. Informally, we are able to show that there are functions that can be computed by a larger circuit that uses only geometrically local 2-qubit gates from a fixed universal gate set but cannot be computed by a smaller circuit, even if we allow the smaller circuit access to unlimited ancilla qubits and non-geometrically local 2-qubit gates from an infinite gate set. We then leverage this asymmetric circuit-size hierarchy theorem to show that there is a Hamiltonian whose evolution for time T cannot be simulated by a circuit of size nT, by embedding the result of any quantum circuit with geometrically local 2-qubit gates into a piecewise constant Hamiltonian with time proportional to the depth of the circuit.
Organization of the paper
In Section 2 we analyze our algorithm and prove Theorem 1. In Section 3 we prove the lower bounds of Theorems 2 and 3. We conclude in Section 4 with an extended discussion on fermionic systems; we remark that there exists an embedding of systems with fermions into systems of qubits and hence our algorithm is applicable. This paper has several appendices. In Appendix A we report numerical results for system sizes at which our algorithm becomes competitive in terms of actual quantum gate count. In Appendix B we remark how to tailor our algorithm if there is spatial modulation of interaction strength. We also remark how to reduce the complexity of our algorithm on spatial dimension D. Appendix C contains a new Lieb-Robinson bound for Hamiltonians that are close to commuting. This appendix can be read independently and provides a self-contained proof of a Lieb-Robinson bound that we use in the main theorem. Appendix D explains why analytic functions are efficiently computable; they often arise when time-dependent Hamiltonians are considered. Appendix E summarizes what quantum signal processing is and contains certain optimization techniques that are used in our numerical benchmark of Appendix A.
Algorithm and analysis
In this section we establish our main algorithmic result, restated below for convenience:

Theorem 1. Let H(t) = Σ_{X⊆Λ} h_X(t) be a time-dependent Hamiltonian on a lattice Λ of n qubits, embedded in the Euclidean metric space R^D. Assume that every unit ball contains O(1) qubits and h_X = 0 if diam(X) > 1. Also assume that every local term h_X(t) is efficiently computable (e.g., analytic), piecewise slowly varying on the time domain [0, T], and has ‖h_X(t)‖ ≤ 1 for any X and t. Then, there exists a quantum algorithm that can approximate the time evolution of H for time T to accuracy ε using O(Tn polylog(Tn/ε)) 2-qubit local gates, and has depth O(T polylog(Tn/ε)).
The algorithm is depicted in Figure 1. Before showing why this algorithm works, we provide a high-level overview of the algorithm and the structure of the proof. Since a time evolution unitary U(T; 0) is equal to U(T = t_M; t_{M−1}) U(t_{M−1}; t_{M−2}) ··· U(t_2; t_1) U(t_1; t_0 = 0), we will focus on a time evolution operator U(t; 0), where t = O(1), generated by a slowly varying bounded Hamiltonian. The number M of time slices is chosen to be O(T). The key idea, as shown in Figure 1, is that the time-evolution operator e^{−itH} due to the full Hamiltonian Σ_{X⊆Λ} h_X can be approximately written as a product
e^{−itH} ≈ e^{−itH_A} e^{+itH_Y} e^{−itH_{Y∪B}}.    (2)
Here A ∪ B = Λ, and we think of A and B as large regions, but Y as a small subset of A. The error in the approximation is exponentially small in the diameter of Y. This is formally proved in Lemma 6, which is supported by Lemma 4 and Lemma 5. Applying this twice, using

e^{−itH_{Y∪B}} ≈ e^{−itH_B} e^{+itH_Z} e^{−itH_{Y∪Z}},    (3)

leads to a symmetric approximation as depicted at the bottom left of Figure 1. This procedure can then be repeated for the large operators supported on A and B to reduce the size of all the operators involved, leading to the pattern in Figure 1 (a). This reduces the problem of implementing the time-evolution operator for H into the problem of implementing smaller time-evolution operators, which can be implemented using known quantum algorithms. We now establish the lemmas needed to prove the result.

Figure 1: Decomposition of the time evolution operator for time t = O(1). Time runs upwards. Each block represents the forward time evolution, e^{−itH}, if the arrow is upward, and the backward time evolution, e^{+itH}, if the arrow is downward. Here, H is the sum of local terms in the Hamiltonian supported completely within the block. The overlap has size ℓ. (a) shows a one-dimensional setting, but a generalization to higher D dimensions is readily achieved by regarding each block as a (D − 1)-dimensional hyperplane so that the problem reduces to lower dimensions. (b) shows a two-dimensional setting. The approximation error from the depicted decomposition is ε = O(e^{−µℓ} L^D/ℓ), where L is the linear system size, ℓ is the width of the overlap between blocks, and µ > 0 is a constant that depends only on the locality of the Hamiltonian. One can use any algorithm to further decompose the resulting "small" unitaries on O(log(L/ε)) qubits into elementary gates. To achieve a gate count that is linear (up to logarithmic factors) in spacetime volume, the algorithm for simulating the blocks needs to be polynomial in the block size and polylogarithmic in accuracy.
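As a sanity check on Eq. (2), the following sketch (assuming numpy/scipy; the chain Hamiltonian, the region sizes, and the time are arbitrary illustrative choices) builds the exact and decomposed evolutions for a small chain and prints the spectral-norm error:

    import numpy as np
    from scipy.linalg import expm

    n, t = 8, 0.5
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)

    def op(paulis):  # embed single-site operators {site: matrix} into n qubits
        M = np.array([[1.0 + 0j]])
        for j in range(n):
            M = np.kron(M, paulis.get(j, np.eye(2)))
        return M

    def H_region(sites):  # sum of terms h_j = Z_j Z_{j+1} + X_j inside `sites`
        H = np.zeros((2**n, 2**n), dtype=complex)
        for j in range(n - 1):
            if j in sites and j + 1 in sites:
                H += op({j: Z, j + 1: Z}) + op({j: X})
        return H

    A, Y, B = range(0, 4), range(2, 4), range(4, 8)
    U_full = expm(-1j * t * H_region(range(n)))
    U_dec = (expm(-1j * t * H_region(A)) @ expm(+1j * t * H_region(Y))
             @ expm(-1j * t * H_region(sorted(set(Y) | set(B)))))
    print(np.linalg.norm(U_full - U_dec, ord=2))

Enlarging the overlap region Y (together with A) should decrease the printed error roughly exponentially, in line with Lemma 6.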
Lemma 4. Let A_t and B_t be continuous time-dependent Hermitian operators, and let U^A_t and U^B_t with U^A_0 = U^B_0 = 1 be the corresponding time evolution unitaries. Then the following hold:

(i) W_t = (U^B_t)† U^A_t is the unique solution of i∂_t W_t = (U^B_t)†(A_t − B_t)U^B_t W_t and W_0 = 1.

(ii) If ‖A_s − B_s‖ ≤ δ for all s ∈ [0, t], then ‖U^A_t − U^B_t‖ ≤ tδ.
Proof. (i) Differentiate. The solution to the ordinary differential equation is unique. (ii) Apply Jensen's inequality for ‖·‖ (implied by the triangle inequality for ‖·‖) to the equation W_t − W_0 = ∫_0^t ds ∂_s W_s. Then, invoke (i) and the unitary invariance of ‖·‖.

For any two sites x, y we denote by dist(x, y) the distance between the two sites, and for any two sets X, Y of sites we write dist(X, Y) = min_{x∈X, y∈Y} dist(x, y). The diameter of a set X is diam(X) = max_{x,x′∈X} dist(x, x′).
Lemma 5 (Lieb-Robinson bound [LR72, Has04, NS06, HK06]). Let H = Σ_X h_X be a local Hamiltonian, let O_X be any operator supported on X, and put ℓ = dist(X, Λ \ Ω). Then

‖(U^H_t)† O_X U^H_t − (U^{H_Ω}_t)† O_X U^{H_Ω}_t‖ ≤ |X| ‖O_X‖ (2ζ_0|t|)^ℓ / ℓ!,    (4)

where ζ_0 = max_{p∈Λ} Σ_{Z∋p} |Z| ‖h_Z‖ = O(1). In particular, there are constants v_LR > 0, called the Lieb-Robinson velocity, and µ > 0, such that for ℓ ≥ v_LR|t|, we have

‖(U^H_t)† O_X U^H_t − (U^{H_Ω}_t)† O_X U^{H_Ω}_t‖ ≤ O(|X| ‖O_X‖ exp(−µℓ)).    (5)
Proof. See Appendix C.4.
We are considering strictly local interactions (as in Theorem 1), where h_X = 0 if diam(X) > 1, but similar results hold with milder locality conditions such as ‖h_X‖ ≤ e^{−diam(X)} [LR72, Has04, NS06, HK06, Has10]; see Appendix C for a detailed proof. Below we will only use the result that the error is at most O(e^{−µℓ}) for some µ > 0 and fixed t. For slower decaying interactions, the bound is weaker and the overlap size ℓ in Figure 1 will have to be larger.
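For a sense of scale, the right-hand side of Eq. (4) is easy to tabulate; with the illustrative values below (ζ_0 and t are hypothetical), the bound peaks near ℓ ≈ 2ζ_0|t| and then decays faster than exponentially:

    import math

    zeta0, t = 2.0, 1.0
    for ell in range(1, 13):
        print(ell, (2 * zeta0 * abs(t)) ** ell / math.factorial(ell))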
The Lieb-Robinson bound implies the following decomposition.
Lemma 6. Let H = Σ_X h_X be a local Hamiltonian (as in Theorem 1, or a more general definition for which Lemma 5 still holds). Then there is a constant µ > 0 such that for any disjoint regions A, B, C, and for constant t, we have

‖U^{H_{A∪B}}_t (U^{H_B}_t)† U^{H_{B∪C}}_t − U^{H_{A∪B∪C}}_t‖ ≤ O(e^{−µ dist(A,C)}) Σ_{X: bd(AB,C)} ‖h_X‖,    (6)

where X : bd(AB, C) means that X ⊆ A ∪ B ∪ C but X ⊈ A ∪ B and X ⊈ C.
A similar technique of proof has appeared in [Osb06, Mic12]. In these references one approximates U^{H_{A∪B}}_t by U^{H_A}_t U^{H_B}_t V_t, where V_t is generated by some time-dependent Hermitian operator of small support.
Proof. We omit "∪" for the union of disjoint sets. The following identity is trivial but important:
U^{H_{ABC}}_t = U^{H_{AB}+H_C}_t W_t,   where W_t := (U^{H_{AB}+H_C}_t)† U^{H_{ABC}}_t.    (7)
By Lemma 4 (i), W_t is generated by the Hamiltonian

(U^{H_{AB}+H_C}_t)† (H_{ABC} − H_{AB} − H_C) U^{H_{AB}+H_C}_t = (U^{H_{AB}+H_C}_t)† H_bd U^{H_{AB}+H_C}_t,

where H_bd := H_{ABC} − H_{AB} − H_C collects the boundary terms.
Applying Lemma 5 (and Eq. (5) in particular) with O_X = H_bd, we have

‖(U^{H_{AB}+H_C}_t)† H_bd U^{H_{AB}+H_C}_t − (U^{H_B+H_C}_t)† H_bd U^{H_B+H_C}_t‖ ≤ O(‖H_bd‖ exp(−µℓ)) =: δ,    (8)

for some µ > 0, where ℓ is the distance between the support of the boundary terms H_bd and A. Since H_bd contains terms that cross between AB and C, this distance is at least dist(A, C) minus 2. Using this and ‖H_bd‖ ≤ Σ_{X: bd(AB,C)} ‖h_X‖, we get δ = O(e^{−µ dist(A,C)} Σ_{X: bd(AB,C)} ‖h_X‖). We can now use Lemma 4 (i) again to compute the unitary generated by the second term in (8),
(U^{H_B+H_C}_t)† H_bd U^{H_B+H_C}_t. The unitary generated is (U^{H_B+H_C}_t)† U^{H_bd+H_B+H_C}_t = (U^{H_B+H_C}_t)† U^{H_{BC}}_t, where we used the fact that H_{ABC} − H_{AB} + H_B = H_{BC}. This operator can be thought of as the "interaction picture" time-evolution operator of the second term in (8). This is our simplification of the "patching" unitary. Applying Lemma 4 (ii), we get

‖W_t − (U^{H_B+H_C}_t)† U^{H_{BC}}_t‖ ≤ tδ = O(δ).
The left-hand side can be further simplified as

‖W_t − (U^{H_B+H_C}_t)† U^{H_{BC}}_t‖ = ‖(U^{H_{AB}+H_C}_t)† U^{H_{ABC}}_t − (U^{H_B+H_C}_t)† U^{H_{BC}}_t‖
= ‖U^{H_{ABC}}_t − (U^{H_{AB}+H_C}_t)(U^{H_B+H_C}_t)† U^{H_{BC}}_t‖
= ‖U^{H_{ABC}}_t − (U^{H_{AB}}_t U^{H_C}_t)(U^{H_B}_t U^{H_C}_t)† U^{H_{BC}}_t‖
= ‖U^{H_{ABC}}_t − U^{H_{AB}}_t (U^{H_B}_t)† U^{H_{BC}}_t‖.
This yields the desired inequality in the statement of the lemma.
Proof of Theorem 1. The circuit for simulating the Hamiltonian is described in Figure 1. For longer times, apply the decomposition to each factor of

U(T = t_M; t_{M−1}) U(t_{M−1}; t_{M−2}) ··· U(t_2; t_1) U(t_1; t_0 = 0).

For not too large system sizes L, it may be reasonable to use a brute-force method to decompose the block unitaries into elementary gates [KSV02, Chap. 8].
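The overall bookkeeping in one dimension can be sketched as follows (a schematic only: the constant c is a placeholder, and the schedule merely lists, in order, which forward and backward block unitaries to synthesize by iterating Lemma 6):

    import math

    def decomposition_schedule(n, T, eps, c=4):
        # overlap width O(log(nT/eps)); c is a hypothetical constant
        ell = max(1, math.ceil(c * math.log(n * T / eps)))
        blocks = []  # ordered product, leftmost factor first: (start, stop, direction)
        k = 0
        while (k + 2) * ell < n:
            blocks.append((k * ell, (k + 2) * ell, +1))        # forward, size 2*ell
            blocks.append(((k + 1) * ell, (k + 2) * ell, -1))  # backward, the overlap
            k += 1
        blocks.append((k * ell, n, +1))                        # final forward block
        slices = math.ceil(T)  # repeat the schedule once per unit-time slice
        return ell, slices, blocks

    ell, slices, blocks = decomposition_schedule(n=10000, T=10, eps=1e-3)
    print(ell, slices, len(blocks))

Each listed block is then handed to a small-system simulation algorithm with the error budget divided evenly among the blocks.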
Optimality
In this section we prove a lower bound on the gate complexity of the problem of simulating the time evolution of a time-dependent local Hamiltonian. (Recall that throughout this paper we use local to mean geometrically local.)
Lower bound proofs
We now prove Theorem 2 and Theorem 3, starting with Theorem 3. This lower bound follows from the following three steps. First, in Lemma 7, we observe that for every depth-T quantum circuit on n qubits that uses local 2-qubit gates, there exists a piecewise constant bounded Hamiltonian H(t) such that time evolution due to H(t) for time T is equal to applying the quantum circuit. Then, in Lemma 8 we show that the number of distinct Boolean functions on n bits computed by such quantum circuits is at least exponential in Ω(T n), where we say a quantum circuit has computed a Boolean function if its first output qubit is equal to the value of the Boolean function with high probability. Finally, in Lemma 9 we observe that the maximum number of Boolean functions that can be computed (to constant error) by the class of quantum circuits with G arbitrary non-local 2-qubit gates from any (possibly infinite) gate set is exponential in O(G log n). Since we want this class of circuits to be able to simulate all piecewise constant bounded 1D Hamiltonians for time T , we must have G = Ω(T n).
Lemma 7. Let U be a depth-T quantum circuit on n qubits that uses local 2-qubit gates from any gate set. Then there exists a piecewise constant bounded 1D Hamiltonian H(t) such that the time evolution due to H(t) for time T exactly equals U .
Proof. We first prove the claim for a depth-1 quantum circuit. This yields a Hamiltonian H(t) that is defined for t ∈ [0, 1], whose time evolution for unit time equals the given depth-1 circuit. Then we can apply the same argument to each layer of the circuit, obtaining Hamiltonians valid for times t ∈ [1, 2], and so on, until t ∈ [T − 1, T ]. This yields a Hamiltonian H(t) defined for all time t ∈ [0, T ] whose time evolution for time T equals the given unitary. If the individual terms in a given time interval have bounded spectral norm, then so will the Hamiltonian defined for the full time duration.
For a depth-1 circuit with local 2-qubit gates, since the gates act on disjoint qubits we only need to solve the problem for one 2-qubit unitary and sum the resulting Hamiltonians. Consider a unitary U_j that acts on qubits j and j + 1. By choosing H_j = i log U_j, we can ensure that e^{−iH_j} = U_j and ‖H_j‖ = O(1).
The overall Hamiltonian is now piecewise constant with T time slices.
Note that it is also possible to use a similar construction to obtain a Hamiltonian that is continuous (instead of being piecewise constant) with a constant upper bound on the norm of the first derivative of the Hamiltonian. One way to achieve this is to make the Hamiltonian zero at integer values of time; that is, H(t) is not identically zero, but is zero for t ∈ Z ⊂ R. Concretely, let g(τ) = 6τ(1 − τ) be a real function, which satisfies g(0) = g(1) = 0 and ∫_0^1 g(τ) dτ = 1. We then let H_j(t) = i g(t − v + 1) log U_j for a single two-qubit unitary U_j that is in the v-th layer of the circuit, where v = 1, 2, ..., T.
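A minimal numerical check of this construction (assuming numpy/scipy; the particular 2-qubit gate is an arbitrary example):

    import numpy as np
    from scipy.linalg import expm, logm

    theta = 0.3
    CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                     [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
    RZ = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])
    U = CNOT @ np.kron(RZ, np.eye(2))  # some 2-qubit gate U_j

    H = 1j * logm(U)  # principal matrix logarithm
    print(np.linalg.norm(H - H.conj().T))     # ~0: H is Hermitian
    print(np.linalg.norm(expm(-1j * H) - U))  # ~0: unit-time evolution recovers U_j
    print(np.linalg.norm(H, ord=2))           # at most pi, so ||H_j|| = O(1)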
Lemma 8. For any integers n and T such that 2 ≤ n ≤ T ≤ 2^n, the number of distinct Boolean functions f : {0, 1}^n → {0, 1} that can be computed by depth-T quantum circuits on n qubits that use local 2-qubit gates from a finite gate set is at least 2^{Ω(Tn)}.
Proof. We first divide the n qubits into groups of k = log_2 T qubits, which is possible since T ≤ 2^n. On these k qubits, we will show that it is possible to compute 2^{Ω(T)} distinct Boolean functions with a depth-T circuit that uses local 2-qubit gates. One way to do this is to consider all Boolean functions on k′ < k bits. Any Boolean function f_x that evaluates to f_x(x) = 1 on exactly one input x of size k′ can be computed with a circuit of O(k′) gates and O(k′) depth using only 2-qubit gates and 1 ancilla qubit in addition to one output qubit [BBC+95, Corollary 7.4]. An arbitrary Boolean function f : {0, 1}^{k′} → {0, 1} is a sum of such functions: f = ⋁_{x∈f^{−1}(1)} f_x = ⊕_{x∈f^{−1}(1)} f_x. Implementing all f_x for x ∈ f^{−1}(1) in serial using a common output qubit, we obtain a circuit for the full function f. Since f^{−1}(1) consists of at most 2^{k′} bit strings, this will yield a circuit of size O(2^{k′}) and depth O(2^{k′}). Note that each of the 2-qubit gates may be made local without changing these expressions by more than a log factor in the exponent: O(k′) local pairwise SWAP gates suffice to move any two target qubits next to each other. By choosing k′ = k − Θ(log k), we can compute all Boolean functions on k′ bits with depth at most T. Since there are 2^{2^{k′}} = 2^{Ω(T)} distinct Boolean functions on k′ bits, we have shown that circuits with depth T using k = log_2 T qubits can compute at least 2^{Ω(T)} distinct Boolean functions.
We can compute 2^{Ω(T)} distinct Boolean functions on each of the n/k blocks of k qubits to obtain (2^{Ω(T)})^{n/k} = 2^{Ω(Tn)} distinct Boolean functions with n/k outputs; i.e., we have computed a function {0, 1}^n → {0, 1}^{n/k}. Since we want to obtain a single-output Boolean function, as the overall goal is to prove lower bounds against simulation algorithms correct on local measurements, we combine these Boolean functions into one. We do this by computing the parity of all the outputs of these n/k Boolean functions using CNOT gates. Computing the parity uses at most n 2-qubit local gates and has depth n. The circuit now has depth T + n ≤ 2T, and by rescaling T we can make this circuit have depth T while retaining the lower bound of 2^{Ω(Tn)} distinct Boolean functions.
Unfortunately, after taking the parity of these n/k functions, it is not true that the resulting functions are all distinct. For example, the parity of functions f(x) and g(y) is a new function f(x) ⊕ g(y), which also happens to be the parity of the functions ¬f(x) and ¬g(y). To avoid this overcounting of functions, we do not use all possible functions on k′ bits in the argument above, but only all those functions that map the all-zeros input to 0. This only halves the total number of functions, which does not change the asymptotic expressions above. With this additional constraint, it is easy to see that if f(x) ⊕ g(y) = f′(x) ⊕ g′(y), this implies that f and f′ are the same, by fixing y to be the all-zeros input, and similarly that g and g′ are the same.
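The decomposition into point functions is purely combinatorial and can be illustrated classically (a toy sketch; the chosen set f^{-1}(1) is arbitrary):

    from itertools import product

    def point_function(x):  # f_x(y) = 1 iff y == x
        return lambda y: int(y == x)

    def build_f(ones):  # f as the XOR of point functions over f^{-1}(1)
        fs = [point_function(x) for x in ones]
        return lambda y: sum(g(y) for g in fs) % 2

    kp = 3
    ones = [(0, 1, 1), (1, 0, 0)]  # an arbitrary choice of f^{-1}(1)
    f = build_f(ones)
    print([f(y) for y in product((0, 1), repeat=kp)])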
We say a quantum circuit U computes a Boolean function f : {0, 1}^n → {0, 1} with high probability if measuring the first output qubit of U|x_1 x_2 ··· x_n 0 ··· 0⟩ yields f(x) with probability at least 2/3.

Lemma 9. The number of Boolean functions f : {0, 1}^n → {0, 1} that can be computed with high probability by quantum circuits with unlimited ancilla qubits using G non-local 2-qubit gates from any gate set is at most 2^{O(G log n)}.
Proof. First we note that if a circuit U with G arbitrary 2-qubit gates from any gate set computes a Boolean function with probability at least 2/3, then there is another circuit with O(G) gates from a finite 2-qubit non-local gate set that computes the same function with probability at least 2/3. We do this by first boosting the success probability of the original circuit using an ancilla qubit to a constant larger than 2/3 and then invoking the Solovay-Kitaev theorem [NC00] to approximate each gate in this circuit to error O(1/G) with a circuit from a finite gate set of 2-qubit gates. This increases the circuit size to O(G) gates. Since each gate has error O(1/G), the overall error is only a constant, and the new circuit computes the Boolean function f with high probability.
We now have to show that the number of Boolean functions on n bits computed by a circuit with O(G) non-local 2-qubit gates from a finite gate set is at most 2^{O(G log n)}. To do so, we simply show that the total number of distinct circuits with O(G) non-local 2-qubit gates from a finite gate set is at most 2^{O(G log n)}.
First observe that a circuit with O(G) gates can only use O(G) ancilla qubits, since each 2-qubit gate can interact with at most 2 new ancilla qubits. Furthermore, the depth of a circuit cannot be larger than the number of gates in the circuit. Let us now upper bound the total number of quantum circuits of this form. Each such circuit can be specified by listing the location of each gate and which gate it is from the finite gate set. Specifying the latter only needs a constant number of bits since the gate set is finite, and the location can be specified using the gate's depth and the labels of the two qubits it acts on. The depth only requires O(log G) bits to specify, and since there are at most n + O(G) qubits, this only needs O(log n + log G) bits to specify. In total, since there are O(G) gates, the entire circuit can be specified with O(G log n) bits. Finally, since any such circuit can be specified with O(G log n) bits, there can only be 2^{O(G log n)} such circuits.
Proof of Theorem 3. Suppose that any piecewise constant bounded 1D Hamiltonian on n qubits can be simulated for time T using G 2-qubit non-local gates from any (possibly infinite) gate set. Then using Lemma 7 and Lemma 8, we can compute at least 2^{Ω(Tn)} distinct Boolean functions using such Hamiltonians. By assumption, each of these Boolean functions can be approximately computed by a circuit with G gates. Now invoking Lemma 9, we know that such circuits can compute at most 2^{O(G log n)} n-bit Boolean functions. Hence we must have G log n = Ω(Tn), which yields G = Ω(Tn).
The proof for T ≤ n follows in a black-box manner from the first part of the theorem statement by only using T out of the n available qubits. In this case the first part of the theorem applies and yields a lower bound of Ω(T 2 ).
We now prove Theorem 2, which follows a similar outline. The first step is identical, and we can reuse Lemma 7. For the next step, instead of counting distinct Boolean functions, we count the total number of "distinct" unitaries. Unlike Boolean functions on n bits, there are infinitely many unitaries on n qubits. Hence we count unitaries that are "distinguishable." Formally, we say U and V are distinguishable if there is a state |ψ such that U |ψ and V |ψ have trace distance, say, 0.1. In Lemma 10 we show that the number of distinguishable unitaries computed by quantum circuits on n qubits with depth T is exponential in Ω(T n). As before, we then show that the maximum number of distinguishable unitaries that can be computed (to constant error) by the class of quantum circuits with G arbitrary non-local 2-qubit gates from any (possibly infinite) gate set is exponential in O(G log n).
Lemma 10. For any integers n, T such that 4 ≤ n ≤ T ≤ 4^n, there exists a set of unitaries of cardinality 2^{Ω(Tn)} such that every unitary in the set can be computed by a depth-T quantum circuit on n qubits that uses local 2-qubit gates from a finite gate set, and any U ≠ V from this set are distinguishable.
Proof. We divide the n qubits into groups of k = log_4 T qubits, which is possible since T ≤ 4^n. On these k qubits, we will compute 2^{Ω(T)} distinguishable unitaries with a depth-T circuit that uses local 2-qubit gates. We can do this by considering a maximal set of distinguishable unitaries on k′ qubits. More precisely, on k′ qubits there exist 2^{Ω(4^{k′})} unitaries such that each pair of unitaries is at least distance 0.1 apart in spectral norm; see e.g. [Sza97]. (This follows from the fact that in the group of d × d unitaries with the metric induced by the operator norm, a ball of radius 0.1 has volume exponentially small in d^2.) If ‖U_1|ψ⟩ − U_2|ψ⟩‖_2 ≥ 0.1, then the trace distance between the two states (1/√2)(|0⟩|0⟩ + |1⟩U_j|ψ⟩) (j = 1, 2) is ≥ 0.1, and hence the controlled unitaries |0⟩⟨0| ⊗ I + |1⟩⟨1| ⊗ U_j (j = 1, 2) are distinguishable. Therefore, on k′ qubits there exist 2^{Ω(4^{k′})} unitaries that are pairwise distinguishable. We know that any unitary on k′ qubits can be exactly written as a product of O(4^{k′}) arbitrary 2-qubit gates [BBC+95, Kni95]. As described in the proof of Lemma 8, making these gates local and from a finite gate set only adds polynomial factors in k′. By choosing k′ = k − Θ(log k), we can compute this set of 2^{Ω(4^{k′})} = 2^{Ω(T)} distinguishable unitaries with depth at most T.
If U and V are distinguishable, then so are U ⊗ X and V ⊗ Y for any unitaries X, Y, since the distinguisher can simply ignore the second register. Hence, if we have two sets {U_i}_{i=1}^q and {V_j}_{j=1}^q of distinguishable unitaries, the set {U_i ⊗ V_j}_{i,j=1}^q consists of q^2 distinguishable unitaries. Since we can compute 2^{Ω(T)} distinguishable unitaries on each of the n/k blocks of k qubits, we can compute (2^{Ω(T)})^{n/k} = 2^{Ω(Tn)} unitaries on all n qubits.
Lemma 11. Let S be a set of pairwise distinguishable unitaries. If any unitary in S can be computed by a quantum circuit with G non-local 2-qubit gates from any gate set, then |S| = 2^{O(G log n)}.
Proof. This proof is essentially the same as that of Lemma 9. We first observe that if a circuit over an arbitrary gate set computes a unitary U, we can approximate it to error less than 0.04 using the Solovay-Kitaev theorem, increasing the circuit size to O(G) gates. Importantly, since the unitaries are a distance 0.1 apart (see the construction in Lemma 10), one circuit cannot simultaneously approximate two different unitaries to error 0.04. Then exactly the same counting argument as in Lemma 9 shows there can only be 2^{O(G log n)} such circuits.
Proof of Theorem 2. Suppose that any piecewise constant bounded local Hamiltonian on n qubits could be simulated for time T using G 2-qubit non-local gates from any (possibly infinite) gate set. Then using Lemma 7 and Lemma 10, we can produce a set S of distinguishable unitaries of size 2^{Ω(Tn)}. By assumption, each of these unitaries can be approximately computed by a circuit with G non-local 2-qubit gates. Now invoking Lemma 11, we know that such circuits can approximate at most 2^{O(G log n)} distinguishable unitaries on n qubits. Hence we must have G log n = Ω(Tn), which yields G = Ω(Tn).
Discussion
We have only analyzed local Hamiltonians on (hyper)cubic lattices embedded in some Euclidean space, but Lieb-Robinson bounds with exponential dependence on the separation distance hold more generally. However, on more general graphs, it may be more difficult to find an appropriate decomposition that gives a small error; this must be analyzed for each graph. One advantage of the method here is that the accuracy improves for smaller Lieb-Robinson velocity. This can occur if the terms in the Hamiltonian have a small commutator (see Appendix C).
The decomposition based on Lieb-Robinson bounds looks very similar to higher-order Lie-Trotter-Suzuki formulas. The difference is in the fact that the overlap is chosen to be larger and larger (though very slowly) as the simulated spacetime volume increases. If we want an algorithm that does not use any ancilla qubits, similar to algorithms based on Lie-Trotter-Suzuki formulas, then we can simulate the small blocks from Lieb-Robinson bounds by high-order Suzuki formulas [Suz91, BACS07], where the accuracy dependence is polynomial (power-law) of arbitrarily small exponent a > 0. This combination results in an algorithm of total gate complexity e^{O(1/a)} · O(Tn(Tn/ε)^a), similar to what is claimed to be achievable in Ref. [JLP14, Sec. 4.3]. However, a polylogarithmic dependence on the factor Tn/ε is not possible in this approach for any choice of a, due to the exponential prefactor.
Application to fermions, which represent common physical particles such as electrons, is straightforward but worth mentioning. While we have previously focused on Hamiltonians that act on qubits, we now consider Hamiltonians on n sites that are occupied by fermions, and describe their well-known reduction to qubit Hamiltonians. Each term in a fermion Hamiltonian is a product of some number of fermion operators c_j and their Hermitian conjugates, indexed by the site j ∈ {0, 1, ..., n − 1}, e.g. c_0 c†_1. These fermion operators are defined by the anti-commutation relations
{c_j, c_k} := c_j c_k + c_k c_j = 0,   {c†_j, c†_k} = 0,   {c_j, c†_k} = δ_{jk} 1.
Since Hamiltonian terms always have fermion parity even (this is a basic assumption on any physical Hamiltonian), Lieb-Robinson bounds hold without any change. It is often convenient to additionally represent fermion operators by (real) Majorana fermion operators γ j where j ∈ {0, 1, · · · 2n − 1}, defined by the linear relations
c_p = (γ_{2p} + iγ_{2p+1})/2,   c†_p = (γ_{2p} − iγ_{2p+1})/2,
from which we may infer that Majorana operators are self-inverse and satisfy the anti-commutation relations
{γ_j, γ_k} = 2δ_{jk} 1.
Given the block decomposition based on the Lieb-Robinson bound, we should implement each small block using polylog(Tn/ε) 2-qubit gates. The Jordan-Wigner transformation, a representation of Majorana operators in terms of the Clifford algebra, is a first method one may consider:
γ_{2j−1} → σ^z_1 ⊗ ··· ⊗ σ^z_{j−1} ⊗ σ^x_j,    (9)
γ_{2j} → σ^z_1 ⊗ ··· ⊗ σ^z_{j−1} ⊗ σ^y_j,    (10)
and the right-hand side is a tensor product of the 2 × 2 Pauli matrices that satisfy
[σ^j, σ^k] = 2iε_{jkl} σ^l,   {σ^j, σ^k} = 2δ_{jk} 1,
where ε_{jkl} is the anti-symmetric Levi-Civita symbol. Often, the tensor factor of σ^z operators preceding σ^x or σ^y is called a Jordan-Wigner string. In one spatial dimension, the above representation, where the ordering of γ is the same as the chain's direction, gives a local qubit Hamiltonian, since in any term the Jordan-Wigner strings cancel. The ordering of fermions is thus very important. (Under periodic boundary conditions, at most one block may be nonlocal; however, we can circumvent the problem by regarding the periodic chain, a circle, as a double line of finite length whose end points are glued:
[−1, +1] × {↑, ↓} /{(−1, ↑) ≡ (−1, ↓), (+1, ↑) ≡ (+1, ↓)}.
This trick doubles the density of qubits in the system, but is applicable in any dimension for periodic boundary conditions.)
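A small numerical check of this representation (assuming numpy; the system size and the 0-based Majorana indexing follow the convention defined above) verifies the anticommutation relations {γ_j, γ_k} = 2δ_{jk} 1:

    import numpy as np

    n = 3
    I2 = np.eye(2, dtype=complex)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)

    def string(site, last):  # sigma^z on sites 0..site-1, `last` on `site`
        ops = [sz] * site + [last] + [I2] * (n - site - 1)
        M = np.array([[1.0 + 0j]])
        for o in ops:
            M = np.kron(M, o)
        return M

    gammas = []
    for j in range(n):
        gammas.append(string(j, sx))  # gamma_{2j}
        gammas.append(string(j, sy))  # gamma_{2j+1}

    ok = all(np.allclose(g @ h + h @ g, 2 * np.eye(2**n) * (i == k))
             for i, g in enumerate(gammas) for k, h in enumerate(gammas))
    print(ok)  # True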
In higher dimensions with fermions, a naive ordering of fermion operators turns most of the small blocks into nonlocal operators under the Jordan-Wigner transformation. However, fortunately, there is a way to get around this, at a modest cost, by introducing auxiliary fermions and letting them mediate the interaction of a target Hamiltonian [VC05]. The auxiliary fermions are "frozen," during the entire simulation, by an auxiliary Hamiltonian that commutes with the target Hamiltonian. With a specific ordering of the fermions, one can represent all the new interaction terms as local qubit operators. The key is that if c_j c_k is a fermion coupling whose Jordan-Wigner strings do not cancel, we instead simulate c_j c_k γ_1 γ_2, where γ_{1,2} are auxiliary, such that the Jordan-Wigner strings of γ_1, γ_2 cancel those of c_j, c_k, respectively. The auxiliary γ's may be "reused" for other interaction terms if the interaction term involves fermions that are close in a given ordering of fermions. (Ref. [VC05] explains this manipulation for quadratic Hamiltonian terms, but it is straightforward that any higher-order terms can be treated similarly. They also manipulate the Hamiltonian for the auxiliary γ to make it local after the Jordan-Wigner transformation, but for our simulation purposes it suffices to initialize the corresponding auxiliary qubits.) In this approach, if we insert O(1) auxiliary fermions per fermion in the target system, the gate and depth complexity is the same as if there were no fermions. Note that we can make the density of auxiliary fermions arbitrarily small by increasing the simulation complexity as follows. Divide the system into non-overlapping blocks of diameter ℓ, which is e.g. O(log n), that form a hypercubic lattice. (These blocks have nothing to do with our decomposition by Lieb-Robinson bounds.) Put O(1) auxiliary fermions per block, and order all the fermions lexicographically so that all the fermions in a block lie within a consecutive segment of length O(ℓ^D) in the ordering. Interaction terms within a block have Jordan-Wigner strings of length at most O(ℓ^D), and so do the inter-block terms using the prescription of [VC05]. The gate and depth complexity of this modified approach has poly(ℓ) overhead.
A Heisenberg model benchmark
We may gain some intuition for applying Lieb-Robinson bounds to quantum simulation from a concrete example. The Heisenberg model offers one useful benchmark [CMN+17] for the performance of various quantum simulation algorithms. In the case of 1D nearest-neighbor interactions with open boundary conditions, where each spin is subject to an inhomogeneous magnetic field, its Hamiltonian is
H = Σ_{j=0}^{n−1} h_j,   h_j = X_j X_{j+1} + Y_j Y_{j+1} + Z_j Z_{j+1} + z_j Z_j,    (11)

where {X, Y, Z} are the single-qubit Pauli operators. Though this Hamiltonian may be diagonalized in closed form without the field term (z_j = 0), in the presence of non-uniform z_j this model can in general only be treated numerically.
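For small n, Eq. (11) can be built directly as a dense matrix, e.g. to cross-check a simulation circuit (a sketch assuming numpy; the seed and system size are arbitrary):

    import numpy as np

    n = 6
    rng = np.random.default_rng(0)
    z = rng.uniform(-1, 1, size=n)  # inhomogeneous fields z_j
    paulis = {"X": np.array([[0, 1], [1, 0]], dtype=complex),
              "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
              "Z": np.array([[1, 0], [0, -1]], dtype=complex)}

    def two_site(P, j):  # P_j P_{j+1} embedded into the n-qubit space
        M = np.array([[1.0 + 0j]])
        for s in range(n):
            M = np.kron(M, paulis[P] if s in (j, j + 1) else np.eye(2))
        return M

    def one_site(P, j):
        M = np.array([[1.0 + 0j]])
        for s in range(n):
            M = np.kron(M, paulis[P] if s == j else np.eye(2))
        return M

    H = sum(two_site(P, j) for j in range(n - 1) for P in "XYZ")
    H = H + sum(z[j] * one_site("Z", j) for j in range(n))
    print(H.shape)  # (2**n, 2**n)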
To recap, there are two sources of error in the entire quantum simulation algorithm for implementing the time evolution e^{−iTH}. One is from the decomposition of the full time-evolution operator into blocks using Lieb-Robinson bounds; the other is from the approximate simulation of each block. Before proceeding, we require estimates of the Lieb-Robinson contribution to the error, ε_LR. By rescaling H and 1/T by the same constant factor to ensure that ‖h_j‖ ≤ 1, this may be rigorously upper-bounded through Lemma 5. However, it is also reasonable to numerically obtain better constant factors in the scaling of ε_LR. Though simulation of the entire system of size n is classically intractable, ε_LR can be obtained by classically simulating small blocks of size O(log(n)), which is within the realm of feasibility. The decomposition (m = 1) is
exp(−itH) ≈ exp(−it Σ_{j<b} h_j) exp(+it Σ_{j=a}^{b−1} h_j) exp(−it Σ_{j≥a} h_j),    (12)

so there are ℓ = b − a + 1 spins in the overlap. We computed the error for a wide range of t up to ℓ = 9, and observed that the error is almost independent of the position of the overlap, and is also exponentially small in ℓ. Note that the best fit ε_LR = α^{tβ+γℓ+γ′} in Figure 2 may be solved for ℓ = O(t + log(1/ε_LR)), and is consistent with Lemma 6.
Using the recursive decomposition into blocks shown in Figure 1, we now simulate m/2 blocks of size ℓ and m/2 blocks of size 2ℓ, both for time t, and each with error ε/(3m). Holding ℓ constant, we may use the fit approximation of ε_LR to simultaneously solve for the number of blocks m = 2Tn/(ℓt) and the evolution time t of each block such that the Lieb-Robinson error contribution is ε_LR = ε/(3m). Note that we may also invert the ordering of sequential stacks in Eq. (12) to merge blocks of size 2ℓ. For instance,
e^{−itH} e^{−itH} ≈ [e^{−it Σ_{j<b} h_j} e^{+it Σ_{j=a}^{b−1} h_j} e^{−it Σ_{j≥a} h_j}] [e^{−it Σ_{j≥a} h_j} e^{+it Σ_{j=a}^{b−1} h_j} e^{−it Σ_{j<b} h_j}]
= e^{−it Σ_{j<b} h_j} e^{+it Σ_{j=a}^{b−1} h_j} e^{−i2t Σ_{j≥a} h_j} e^{+it Σ_{j=a}^{b−1} h_j} e^{−it Σ_{j<b} h_j}.    (13)
Excluding boundary cases, this leads to fewer blocks, m = 3Tn/(2ℓt). Specifically, we may alternatively simulate 2m/3 blocks of size ℓ for time t and m/3 blocks of size 2ℓ for time 2t, each with error ε/(3m). Depending on the simulation algorithm used for each block, this may be slightly more efficient.
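Schematically, this parameter selection is a one-dimensional search (a sketch only: the error model eps_LR below is a hypothetical stand-in for the numerically fitted model, not the paper's fit):

    import math

    def eps_LR(t, ell):  # hypothetical fitted error model, exponentially small in ell
        return math.exp(1.0 * t - 2.0 * ell)

    def choose_t(n, T, eps, ell):
        for t in [x / 10 for x in range(50, 0, -1)]:  # scan t from 5.0 down to 0.1
            m = 2 * T * n / (ell * t)                 # number of blocks
            if eps_LR(t, ell) <= eps / (3 * m):       # per-block budget eps/(3m)
                return t, m
        return None

    print(choose_t(n=50, T=50, eps=1e-3, ell=9))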
Similar to the benchmark in [CMN+17], we obtain explicit gate counts in the Clifford+T basis in Figure 3 for simulating e^{−iTH} with T = n, error ε = 10^{−3}, and z_j ∈ [−1, 1] chosen uniformly at random. We implement each block with the combined quantum signal processing [LC17b] and qubitization [LC16] simulation algorithm. An outline of this algorithm together with certain minor circuit optimizations is discussed in Appendix E. Furthermore, the remaining error budget of ε/3 is reserved for approximating arbitrary single-qubit rotations in the algorithm with Clifford+T gates [KMM13].
B Further algorithmic improvements

B.1 Inhomogeneous interaction strength
We can adapt the decomposition of the time-evolution unitary based on Lieb-Robinson bounds when there is inhomogeneity in interaction strength across the lattice. For this section, we do not assume that ‖h_X‖ ≤ 1 for all X ⊆ Λ. Instead, suppose there is one term h_{X_0} in the Hamiltonian with ‖h_{X_0}‖ = J ≫ 1 while all the other terms h_X have ‖h_X‖ ≤ 1. The prescription above says that we would have to divide the time step into J pieces, and simulate each time slice. However, a more careful inspection of the algorithm analysis tells us that one does not have to subdivide the time step for the entire system. For clarity in presentation, let us focus on a one-dimensional chain where the strong term h_{X_0} is at the middle of the chain. We then introduce a cut as in Figure 1 (a) at h_{X_0}. The purpose is to put the strong term into H_bd so that the truncation error in Eq. (8) is manifestly at most linear in J. Since the truncation error is exponential in ℓ, the factor of J in the error can be suppressed by increasing ℓ by O(log J). After we confine the strong term in a block of size 2ℓ_0 = O(log(JLT/ε)) in the middle of the chain, the rest of the blocks can be chosen to have size O(log(LT/ε)) and do not contain any strong term, and hence the time step t can be as large as it would have been without the strong term. For the block with Hamiltonian H′ that contains the strong term h_{X_0}, we can simply simulate H′/J for time Jt.
B.2 Reducing number of layers in higher dimensions
Although we have treated the spatial dimension D as a constant, the number of layers for unit time evolution is 3^D, which grows rather quickly in D, if we use the hyperplane decomposition as above. We can reduce this number by considering a different tessellation of the lattice.
To be concrete, let us explain the idea in two dimensions. Imagine a tiling of the two-dimensional plane using hexagons of diameter, say, 10ℓ. It is important that this hexagonal tiling is 3-colorable; one can assign one of three colors, red, green, and blue, to each of the hexagons such that no two neighboring hexagons have the same color.
Let R, G, B be the unions of red, green, and blue hexagons, respectively. Each of R, G, B consists of well-separated hexagons. Here, being well separated means that for any color c, the ℓ-neighborhood of a cell of color c does not intersect with any other cell of color c. Suppose we had implemented the time evolution U(R ∪ G) for the Hamiltonian H_{R∪G}. Consider the ℓ-neighborhood B^+ of B. B^+ ∩ (R ∪ G) consists of separated "rings" of radius 6ℓ and thickness ℓ. (See Fig. 4.) We can now apply Lemma 6 to R ∪ G and B^+ to complete the unit time evolution for the entire system R ∪ G ∪ B. The unitaries needed in addition to U(R ∪ G) are the backward time-evolution operator on (R ∪ G) ∩ B^+, which is a collection of disjoint unitaries on the rings, and the forward time-evolution operator on B^+, which is a collection of disjoint unitaries on enlarged hexagons.
The time evolution U(R ∪ G) is constructed in a similar way. We consider an ℓ-neighborhood of G within R ∪ G. The enlarged part G^+ ∩ R consists of line segments of thickness ℓ, and it is clear that R \ G^+ is ℓ-away from G. We can again apply Lemma 6.
In summary, the algorithm under the 3-colored tessellation is: (i) forward-evolve the blocks in R, (ii) backward-evolve the blocks in R ∩ G⁺, (iii) forward-evolve the blocks in G⁺ ∩ (R ∪ G), (iv) backward-evolve the blocks in (R ∪ G) ∩ B⁺, and (v) forward-evolve the blocks in B⁺.
In general, if the layout of qubits allows an α-colorable tessellation, where α = 2, 3, 4, …, such that the cells of the same color are well separated, then we can decompose unit time evolution into 2α − 1 layers by considering fattened cells of the tessellation, as in the sketch below. The proof is by induction. When α = 2, it is clear. For larger α, we implement the forward evolution on the union A of α − 1 colors using 2α − 3 layers by the induction hypothesis, and finish the evolution by backward evolution on A ∩ B⁺, where B is the union of the last color and B⁺ is the ℓ-neighborhood of B, and then forward evolution on B⁺. This results in 2α − 1 layers in total.
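The induction is mechanical enough to write down. The following sketch (a hypothetical helper, plain Python) enumerates the 2α − 1 layers symbolically; the region labels are schematic, and each fattened region should be read as restricted to the union of colors handled so far, as in steps (i)-(v) above.

```python
def layer_schedule(colors):
    """Return the 2*len(colors) - 1 layers of the inductive decomposition.

    Each layer is a (direction, region) pair, with regions written as
    symbolic strings: '+' marks an l-fattened set and '&' an intersection.
    """
    if len(colors) == 1:
        return [("forward", colors[0])]
    *rest, last = colors
    A = "(" + " u ".join(rest) + ")"
    schedule = layer_schedule(rest)                        # 2(a-1) - 1 layers
    schedule.append(("backward", A + " & " + last + "+"))  # undo the overlap
    schedule.append(("forward", last + "+"))               # evolve fattened last color
    return schedule

for step in layer_schedule(["R", "G", "B"]):
    print(step)  # 5 = 2*3 - 1 layers, matching steps (i)-(v) above
```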
A regular D-dimensional lattice can be covered with a (D + 1)-colorable tessellation. One such coloring scheme is obtained from any triangulation of R^D by coloring each 0-cell with color "0", each 1-cell with color "1", and so on, and finally fattening them. This is a well known fact; see for example [BPT10].
For three dimensions, there exists a more "uniform" 4-colorable tessellation. Consider the body-centered cubic (BCC) lattice, spanned by basis vectors (2, 0, 0), (0, 2, 0), and (1, 1, 1). Color each BCC lattice point p = (x, y, z) by the rule c = x + y + z mod 4. The Voronoi tessellation associated with this colored BCC lattice is a valid 4-colored tessellation for our purpose, as can be seen as follows. For c = 0, 1, 2, 3, the sublattice of color c is spanned by (2 − c, −2 − c, −c), (4 − c, 4 − c, 4 − c), (−c, −c, 4 − c). It is easy to check that the shortest distance between two distinct lattice points of the same color is 2√2 ≈ 2.828, by, for example, an exhaustive search in the box {−5, …, 5}³ ⊂ Z³. By definition of the Voronoi tessellation, a point q ∈ R³ belongs to a cell C of color c if and only if the closest lattice point to q is unique and has color c. Since C is convex, a point q on the boundary of C is farthest from the central lattice point of C if q is equidistant from the four sublattices. Indeed, (1, 0, 1/2) is equidistant, at distance (1/2)√5 ≈ 1.118, from the distinctly colored points (0, 0, 0), (1, −1, 1), (2, 0, 0), (1, 1, 1), which define a tetrahedron inside which there is no other lattice point. Hence, each cell is contained in a ball of radius (1/2)√5, which is smaller than √2, half of the minimum distance between distinct points of the same color. Therefore, the cells of the same color are separated.
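The exhaustive search mentioned above is easy to reproduce. The sketch below verifies in plain Python that the shortest distance between distinct same-colored BCC lattice points in the box {−5, …, 5}³ is 2√2; the parity characterization of the BCC lattice used here is equivalent to the span of the stated basis vectors.

```python
import itertools, math

def color(p):
    # Coloring rule from the text: c = x + y + z mod 4.
    return sum(p) % 4

# BCC lattice points in {-5,...,5}^3: integer points whose coordinates all
# share the same parity (equivalently, the span of (2,0,0),(0,2,0),(1,1,1)).
pts = [p for p in itertools.product(range(-5, 6), repeat=3)
       if p[0] % 2 == p[1] % 2 == p[2] % 2]

best = min(math.dist(p, q)
           for p, q in itertools.combinations(pts, 2)
           if color(p) == color(q))
print(best, 2 * math.sqrt(2))  # both are ~2.8284
```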
C Lieb-Robinson Bounds with Bounded Commutators
C.1 Introduction and Assumptions
One advantage of the method described in this paper is that if the Lieb-Robinson velocity is small, then the method becomes more accurate. One case in which this occurs is if the terms in the Hamiltonian have a small commutator with each other. This section considers the Lieb-Robinson velocity under such a small commutator assumption. Note that Trotter-Suzuki methods also improve in accuracy if the Hamiltonian terms have a small commutator [CMN + 17], but many other time-simulation methods do not.
Bounds on the Lieb-Robinson velocity with bounded commutators were first considered in Ref. [PSHKMK10]. Our results generalize their work in several ways. The previous work considered strictly local Hamiltonians with a bound on the commutator of two terms (but without any bound on the norm of a term), under the further assumption that the Hamiltonian was a sum of two different strictly local Hamiltonians H = H_0 + H_1, such that H_0 and H_1 were each a sum of exactly commuting terms, with the commutator bound applying to the commutator of a term in H_0 with a term in H_1. We consider strictly local interactions with a bound on the commutator without any bound on the norm of a term, but without requiring the further assumption that the Hamiltonian can be decomposed as a sum of two exactly commuting Hamiltonians. Also, we do not require any bound on higher-order commutators as was used in [PSHKMK10]; however, we do find in some cases tighter bounds when we assume such higher-order bounds. Additionally, we consider exponentially decaying interactions, rather than only strictly local interactions. In this case, instead of giving a bound on just the commutator of two terms, we find it necessary to give a bound on the norm of the terms as well as on the commutator. This is required to express the exponential decay appropriately. The exponential decay and commutator bounds that we consider are as follows: consider a Hamiltonian H = Σ_X h_X, where each X represents some set of sites and each h_X a Hermitian operator supported on X. The sum is over all possible subsets of the lattice. The terms h_X obey a commutator condition: there exists 0 ≤ η ≤ 1 such that for all X and Y
‖[h_X, h_Y]‖ ≤ 2η ‖h_X‖ · ‖h_Y‖.  (14)
The exponential decay is imposed as follows. Introduce a metric dist(x, y) between pairs of sites x, y. As before, for any two sets X, Y of sites, we write dist(X, Y ) = min x∈X,y∈Y dist(x, y). For any set X of sites, its diameter diam(X) is defined to be max x,y∈X dist(x, y). Assume that there are constants ζ, µ > 0 such that for any site x,
Σ_{X∋x} ‖h_X‖ |X|² exp(µ diam(X)) ≤ ζ < ∞  (15)
where |X| denotes the cardinality of the set X. The assumption (15) will not be used until Appendix C.3; we will indicate where it is used, as many of the early bounds do not use this assumption and only use Eq. (14). This assumption is slightly stronger than previous exponential decay assumptions such as in Ref. [HK06], as we have |X|² rather than |X|. The reason for this will become clear later.
Note that we do not impose ‖h_X‖ ≤ 1 in this appendix; the strength of interaction is bounded only through Eq. (15). Throughout, we write O(t; J) := exp(iJt) O exp(−iJt) for any operator O and any Hermitian operator J; if J is the full Hamiltonian H of the system, we omit H and write O(t) = O(t; H). Most of the appendix is devoted to the exponential decay case. In Appendix C.6, we consider the strictly local case. The main result that we will prove under the exponential decay assumptions is:
Lemma 12. Assume that assumptions (14) and (15) hold. Then, for any operator A supported on a set X and any operator B supported on a set Y, we have
‖[A(t), B]‖ ≤ (2/√(2η)) ‖A‖ · ‖B‖ [exp(√(8η) ζ|t|) − 1] Σ_{x∈X} exp(−µ dist(x, Y)).  (16)
Let v_LR be chosen greater than ζ√(8η)/µ. Then, for large t, at fixed dist(X, Y)/t ≥ v_LR, for bounded |X|, the above bound tends to zero, giving a Lieb-Robinson velocity proportional to √η.
A Lieb-Robinson velocity proportional to √η might initially be surprising: one might hope to have a bound proportional to η. However, one can see that this is the best possible under these assumptions. Consider any local Hamiltonian H = Σ_X h⁰_X (without a commutator condition) with ‖h⁰_X‖ of order unity and Lieb-Robinson velocity v⁰_LR also of order unity. Now, consider a new Hamiltonian H′ = Σ_X h_X with h_X = 1 + √η h⁰_X; here 1 simply denotes the identity operator. Then, the commutator of any two terms is proportional to η, while the norm of the terms is still of order 1 and the Lieb-Robinson velocity is proportional to √η. If the reader does not like adding the identity to define h_X as a way of ensuring ‖h_X‖ ∼ 1, one could instead add some other exactly commuting terms of norm 1 which act on some additional degrees of freedom.
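A quick numeric check of this scaling argument, using random 4 × 4 Hermitian matrices in place of lattice terms (an illustration only, not the lattice setting of Lemma 12): shifting each term by the identity keeps the norms of order 1 while all pairwise commutators scale as η.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(d):
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    h = (a + a.conj().T) / 2
    return h / np.linalg.norm(h, 2)       # normalize to spectral norm 1

def snorm(m):
    return np.linalg.norm(m, 2)           # spectral (operator) norm

eta = 1e-2
h0x, h0y = rand_herm(4), rand_herm(4)
hx = np.eye(4) + np.sqrt(eta) * h0x       # h_X = 1 + sqrt(eta) h0_X
hy = np.eye(4) + np.sqrt(eta) * h0y

print(snorm(hx), snorm(hy))               # both ~1, independent of eta
print(snorm(hx @ hy - hy @ hx) / eta)     # O(1): commutator proportional to eta
```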
In the proof below, we use notation as if the Hamiltonian were time-independent. However, Lemma 12 is valid (replacing the exponential with a time-ordered exponential) if the Hamiltonian is a piecewise continuous function of time, provided that assumptions (14) and (15) hold for all time.
C.2 Bound on Commutator Assuming Eq. (14)
We wish to bound
‖[exp(iHt) A_X exp(−iHt), B_Y]‖,  (17)
where A X is supported on X and B Y is supported on Y . In what follows, we will drop the subscripts X, Y from A X and B Y to avoid overly cluttering the notation. Define
C_B(X, t) = sup_{A∈A_X : ‖A‖≤1} ‖[A(t), B]‖,  (18)
D_B(X, t) = ‖[h_X(t), B]‖  (19)
where A_X is the algebra of operators supported on X. For any two sets X and Y of sites we will write X ∼ Y in place of X ∩ Y ≠ ∅. Also, the notation Z_1 ∼ Z_2 ∼ ⋯ ∼ Z_m will mean that Z_j ∼ Z_{j+1} for every j = 1, …, m − 1. This does not necessarily mean that Z_j ∼ Z_k for j + 1 < k. For any set X, define I_X as
I_X = Σ_{Z : Z∼X} h_Z.  (20)
Let dt be a small positive real number. We consider a finite system so that H has finite operator norm. The quantities O(dt 2 ) and O(1/n) in Eqs. (21) to (23) will have a hidden dependence on system size, but after taking a limit dt → 0 (equivalent to n → ∞) the bounds in the subsequent equations will not depend on system size and as a result the Lieb-Robinson velocity will not depend on system size.
‖[A(t + dt), B]‖ = ‖[A(dt), B(−t)]‖
 = ‖[exp(iI_X dt) A exp(−iI_X dt), B(−t)]‖ + O(dt²)
 = ‖[A, exp(−iI_X dt) B(−t) exp(iI_X dt)]‖ + O(dt²)
 = ‖[A, B(−t) − i dt [I_X, B(−t)]]‖ + O(dt²)
 ≤ ‖[A, B(−t)]‖ + 2 dt ‖A‖ · ‖[I_X, B(−t)]‖ + O(dt²)
 ≤ ‖[A, B(−t)]‖ + 2 dt ‖A‖ Σ_{Z:Z∼X} ‖[h_Z, B(−t)]‖ + O(dt²)  (21)
By definitions of C B (X, t) and D B (X, t), it follows that
C_B(X, t + dt) ≤ C_B(X, t) + 2 dt Σ_{Z:Z∼X} D_B(Z, t) + O(dt²).  (22)
Hence, for any positive integer n,
C_B(X, t) ≤ C_B(X, 0) + (|t|/n) Σ_{j=0}^{n} Σ_{Z:Z∼X} 2 D_B(Z, tj/n) + O(1/n).  (23)
For finite operator norm of H, the above expression converges to an integral as n → ∞, so
C_B(X, t) ≤ C_B(X, 0) + Σ_{Z:Z∼X} 2 ∫_0^{|t|} D_B(Z, s) ds.  (24)
Also, we have
‖[h_X(t + dt), B]‖ = ‖[h_X(dt), B(−t)]‖  (25)
 = ‖[h_X, B(−t)] + i dt [[I_X, h_X], B(−t)]‖ + O(dt²)
 ≤ ‖[h_X, B(−t)]‖ + dt Σ_{Z:Z∼X} ‖[[h_Z, h_X], B(−t)]‖.
By definitions of C B (X, t) and D B (X, t) and assumption (14), it follows that
D_B(X, t + dt) ≤ D_B(X, t) + 2 dt Σ_{Z:Z∼X} η ‖h_Z‖ · ‖h_X‖ C_B(Z ∪ X, t).  (26)
Hence,
D_B(X, t) ≤ D_B(X, 0) + Σ_{Z:Z∼X} 2η ∫_0^{|t|} ‖h_Z‖ · ‖h_X‖ C_B(Z ∪ X, s) ds.  (27)
Note that for an arbitrary set Z of sites and any t
C_B(Z, 0) ≤ 2‖B‖ δ_{Z∼Y},  C_B(Z, t) ≤ 2‖B‖,  (28)
D_B(Z, 0) ≤ 2‖B‖ ‖h_Z‖ δ_{Z∼Y},  D_B(Z, t) ≤ 2‖B‖ ‖h_Z‖
where δ P = 1 if the predicate P is true and δ P = 0 otherwise. Thus we may rewrite Eqs. (24) and (27) as
C_B(X, t) ≤ 2‖B‖ δ_{X∼Y} + Σ_{Z:X∼Z} 2 ∫_0^{|t|} ds D_B(Z, s),  (29)
D_B(X, t) ≤ 2‖B‖ ‖h_X‖ δ_{X∼Y} + Σ_{Z:X∼Z} 2η ‖h_X‖ · ‖h_Z‖ ∫_0^{|t|} ds C_B(X ∪ Z, s) δ_{X≁Y}.  (30)
Since X and Y are arbitrary, we may use Eq. (30) in Eq. (29); for any sets Z j−2 , Z j−1 we have:
[C_B(Z_{j−2} ∪ Z_{j−1}, s_{j−1}) / (2‖B‖)] δ_{Z_{j−2}≁Y} ≤ δ_{Z_{j−1}∼Y}  (31)
 + (2 s_{j−1}) Σ_{Z_j : (Z_{j−2}∪Z_{j−1})∼Z_j∼Y} ‖h_{Z_j}‖
 + η · 2² ∫_0^{s_{j−1}} ds_j ∫_0^{s_j} ds_{j+1} Σ_{Z_j, Z_{j+1} : (Z_{j−2}∪Z_{j−1})∼Z_j∼Z_{j+1}} ‖h_{Z_j}‖ ‖h_{Z_{j+1}}‖ [C_B(Z_j ∪ Z_{j+1}, s_{j+1}) / (2‖B‖)] δ_{Z_j≁Y}
where s_{j−1} ≥ 0. This is our fundamental recursive inequality. Note that we have only the constraint that Z_{j−1} ∼ Y in the second line rather than the slightly looser constraint that (Z_{j−2} ∪ Z_{j−1}) ∼ Y. This is due to the constraint δ_{Z_{j−2}≁Y} in the first line. Assuming X ∩ Y = ∅ and setting Z_{−1} = ∅, Z_0 = X, we have
C_B(X, t)/(2‖B‖) ≤ (2|t|) Σ_{Z_1 : X∼Z_1∼Y} ‖h_{Z_1}‖ + η [(2|t|)²/2!] Σ_{Z_1,Z_2 : X∼Z_1∼Z_2∼Y} ‖h_{Z_1}‖ ‖h_{Z_2}‖  (32)
 + η [(2|t|)³/3!] Σ_{Z_1,Z_2 : X∼Z_1∼Z_2} ‖h_{Z_1}‖ ‖h_{Z_2}‖ Σ_{Z_3 : (Z_1∪Z_2)∼Z_3∼Y} ‖h_{Z_3}‖
 + η² [(2|t|)⁴/4!] Σ_{Z_1,Z_2 : X∼Z_1∼Z_2} ‖h_{Z_1}‖ ‖h_{Z_2}‖ Σ_{Z_3,Z_4 : (Z_1∪Z_2)∼Z_3∼Z_4∼Y} ‖h_{Z_3}‖ ‖h_{Z_4}‖ + ⋯
That is,
C_B(X, t)/(2‖B‖) ≤ Σ_{k≥1} [(2|t|)^k/k!] η^{⌊k/2⌋} Σ_{Z_1,Z_2,…,Z_k} (Π_{j=1}^k ‖h_{Z_j}‖) (Π_{j odd} δ_{(Z_{j−2}∪Z_{j−1})∼Z_j}) (Π_{j even} δ_{Z_{j−1}∼Z_j}) δ_{Z_k∼Y},
where the notation δ_{S∼T} for two sets S, T is an indicator function that is 1 if S ∩ T ≠ ∅ and 0 otherwise.
C.3 Lieb-Robinson Velocity
We now use assumption (15), especially the following consequences.
Proposition 13. For arbitrary sets P, Q, R, S of sites
Σ_{Q : P∼Q∼S} ‖h_Q‖ ≤ ζ Σ_{p∈P} e^{−µ dist(p,S)},  (33)
Σ_{Q : P∼Q} ‖h_Q‖ |Q|² e^{−µ dist(Q,S)} ≤ ζ Σ_{p∈P} e^{−µ dist(p,S)},  (34)
Σ_{Q,R : P∼Q∼R∼S} ‖h_Q‖ ‖h_R‖ ≤ ζ² Σ_{p∈P} e^{−µ dist(p,S)},  (35)
Σ_{Q,R : P∼Q∼R} |Q ∪ R| ‖h_Q‖ ‖h_R‖ e^{−µ dist(Q∪R,S)} ≤ 2ζ² Σ_{p∈P} e^{−µ dist(p,S)}.  (36)
Here, Σ_{p∈P} e^{−µ dist(p,S)} ≤ |P| e^{−µ dist(P,S)}.
Proof. (33): Observe that Σ_{Q : P∼Q∼S} ≤ Σ_{p∈P} Σ_{Q : p∈Q∼S} whenever the summand is nonnegative. Using assumption (15) and diam(Q) ≥ dist(p, S) when p ∈ Q ∼ S, we have the upper bound Σ_p Σ_{Q : p∈Q∼S} ‖h_Q‖ e^{µ diam(Q) − µ dist(p,S)} ≤ ζ Σ_p e^{−µ dist(p,S)}. (34): Similarly, we use that diam(Q) + dist(Q, S) ≥ dist(p, S) when p ∈ Q, by the triangle inequality of the metric. (35): Use (33) for the sum over R and then use (34) for the sum over Q. (36): Use |Q ∪ R| ≤ |Q| · |R| since Q ∼ R. We then separately bound two cases, where (i) dist(Q ∪ R, S) = dist(R, S) and (ii) dist(Q ∪ R, S) = dist(Q, S); the sum of (i) and (ii) is an upper bound on the original sum. For case (i), we use (34) for the sum over R to obtain the upper bound ζ Σ_{Q : P∼Q} |Q|² ‖h_Q‖ e^{−µ dist(Q,S)}, and then use (34) again for the sum over Q to obtain the upper bound ζ² Σ_{p∈P} e^{−µ dist(p,S)}. For case (ii), we use either (33) or (34) to sum over R, which gives ζ Σ_{Q : P∼Q} |Q|² ‖h_Q‖ e^{−µ dist(Q,S)}, and then use (34).
Now, we can use Eq. (33) for the innermost sum in any odd-k-th line of Eq. (32), and Eq. (35) for that in any even-k-th line. Once the innermost sum is bounded, we use Eq. (36) ⌊(k − 1)/2⌋ times. This is the point at which the dependence on |X|² in assumption (15) is necessary. Therefore, the k-th line is bounded by
[(2|t|)^k/k!] η^{⌊k/2⌋} 2^{⌊(k−1)/2⌋} ζ^k Σ_{x∈X} exp(−µ dist(x, Y)).  (37)
Summing over k we find that
C_B(X, t) ≤ (2/√(2η)) ‖B‖ [exp(√(8η) ζ|t|) − 1] Σ_{x∈X} exp(−µ dist(x, Y)),  (38)
so that Lemma 12 follows.
C.4 Proof of Lemma 5
A slight modification of the argument of the previous section proves Lemma 5. Recall that H_Ω = Σ_{Z : Z⊆Ω} h_Z, and it suffices to assume X ⊂ Ω. Then,
‖A_X(t; H_Ω) − A_X(t; H)‖ = ‖∫_0^t ds ∂_s [(U^{H_Ω}_s U^H_{t−s})† A_X U^{H_Ω}_s U^H_{t−s}]‖
 ≤ ∫_0^{|t|} ds ‖(U^H_{t−s})† [H − H_Ω, (U^{H_Ω}_s)† A_X U^{H_Ω}_s] U^H_{t−s}‖  (39)
 ≤ ∫_0^{|t|} ds Σ_{Y : Y∼Ω^c} ‖[A_X(s; H_Ω), h_Y]‖
(In the last inequality the sum over Y could be further restricted to those with Y ∼ Ω, but the present bound will be enough.) The last line can be bounded by multiplying Eq. (32) by 2‖B‖ = 2‖h_Y‖ and summing over Y such that Y ∼ Ω^c. The innermost sum of Eq. (32) is modified to
Σ_{Z_k : (Z_{k−2}∪Z_{k−1})∼Z_k} ‖h_{Z_k}‖ Σ_{Y : Z_k∼Y∼Ω^c} ‖h_Y‖  if k is odd,  (40)
Σ_{Z_{k−1},Z_k : (Z_{k−3}∪Z_{k−2})∼Z_{k−1}∼Z_k} ‖h_{Z_{k−1}}‖ ‖h_{Z_k}‖ Σ_{Y : Z_k∼Y∼Ω^c} ‖h_Y‖  if k is even.
The sum over Y here is bounded by Eq. (33), and the remaining sum is bounded by applying Eq. (34) once or twice. The net effect of the modification is that there is an extra factor of ζ, and that the distance is now measured to Ω^c instead of to Y as before. We conclude that
‖A_X(t; H) − A_X(t; H_Ω)‖ ≤ (2ζ|t|/√(2η)) ‖A‖ [exp(√(8η) ζ|t|) − 1] Σ_{x∈X} exp(−µ dist(x, Ω^c)).  (41)
Since η ≤ 1, this proves a variant of Lemma 5 where the locality assumption is given by Eq. (15). In Lemma 5 we assumed strictly local interactions, such that h_X = 0 if diam(X) > 1, in a D-dimensional lattice. This strict locality condition implies Eq. (15) with arbitrary µ > 0 (ζ can be estimated as a function of µ), and hence the bound is sufficient for Theorem 1. However, one can prove a stronger bound for strictly local interactions. Since D_B(X, t) ≤ ‖h_X‖ C_B(X, t), assuming X ≁ Y, we have
C_B(X, t) ≤ C_B(X, 0) + 2 Σ_{Z : X∼Z} ‖h_Z‖ ∫_0^{|t|} ds C_B(Z, s)  (42)
 ≤ Σ_{k=1}^{ℓ−1} [(2|t|)^k/k!] Σ_{Z_1,…,Z_k : linked} (Π_{j=1}^k ‖h_{Z_j}‖) C_B(Z_k, 0) + 2^ℓ ∫_{∆_ℓ} d^ℓ t⃗ Σ_{Z_1,…,Z_ℓ : linked} (Π_{j=1}^ℓ ‖h_{Z_j}‖) C_B(Z_ℓ, t_ℓ)  (43)
 ≤ 2‖B‖ [(2|t|)^ℓ/ℓ!] Σ_{Z_1,…,Z_ℓ : linked} Π_{j=1}^ℓ ‖h_{Z_j}‖   (ℓ = dist(X, Y))  (44)
where the factor |t|^k/k! is ∫_{∆_k} d^k t⃗ = ∫_0^{|t|} dt_1 ∫_0^{t_1} dt_2 ⋯ ∫_0^{t_{k−1}} dt_k (the volume of a k-dimensional simplex ∆_k), "linked" means that X ∼ Z_1 ∼ Z_2 ∼ ⋯ ∼ Z_k, and t_ℓ is the last component of t⃗. Here, Eq. (43) is valid for any integer ℓ ≥ 1, but in Eq. (44) we set ℓ to the specific value. Eq. (44) follows from Eq. (28) and the trivial bound C_B(Z_ℓ, t_ℓ) ≤ 2‖B‖; Eq. (28) forces Z_k ∼ Y, but due to the locality there is no nonzero "link" from X to Y using ℓ − 1 segments or fewer, so the first ℓ − 1 terms of Eq. (43) are zero.
Similarly to Proposition 13, we see for arbitrary sets P, Q, S of sites
Σ_{Q : P∼Q∼S} ‖h_Q‖ ≤ ζ_0 Σ_{p∈P} δ_{dist(p,S)≤1},  (45)
Σ_{Q : P∼Q} |Q| ‖h_Q‖ δ_{dist(Q,S)≤d} ≤ ζ_0 Σ_{p∈P} δ_{dist(p,S)≤d+1}  (46)
where ζ_0 = max_{x∈Λ} Σ_{Q∋x} |Q| ‖h_Q‖. Note that ζ_0 is bounded by the product of the number of nonzero Hamiltonian terms that may act on a site p and the number of sites in a ball of diameter 1. We conclude that
C_B(X, t) ≤ 2‖B‖ Σ_{x∈X} (2ζ_0|t|)^ℓ/ℓ!,  (47)
‖[A_X(t), B_Y]‖ ≤ 2‖A‖ · ‖B‖ |X| (2ζ_0|t|)^ℓ/ℓ!,  where ℓ = dist(X, Y).  (48)
By manipulation analogous to Eq. (39), we also have
‖A_X(t; H) − A_X(t; H_Ω)‖ ≤ |X| ‖A_X‖ (2ζ_0|t|)^ℓ/ℓ!,  where ℓ = dist(X, Λ \ Ω).  (49)
This completes the proof of Lemma 5.
C.5 Higher Order Commutators
Finally, let us remark that even better bounds can be proven if one assumes a bound on higher-order commutators. For example, if we assume that
‖[[h_X, h_Y], h_Z]‖ ≤ η′ ‖h_X‖ · ‖h_Y‖ · ‖h_Z‖,  (50)
we can prove a better bound for sufficiently small η′. This is done by a straightforward generalization of the above results: in addition to the quantities C_B(X, t) and D_B(X, t), define also a quantity E_B(X, Y, t) as
E_B(X, Y, t) = ‖[[h_X(t), h_Y(t)], B]‖.  (51)
Then, just as we bound C B (X, t + dt) − C B (X, t) in terms of D B (X, t) above, we also bound D B (X, t + dt) − D B (X, t) in terms of E B (X, Y, t) summed over sets Y that intersect X. Then, we bound E B (X, Y, t + dt) − E B (X, Y, t) using Eq. (50). Extensions to even higher commutators follow similarly. For such higher order commutators, a natural assumption to replace Eq. (15) is that
Σ_{X∋x} ‖h_X‖ |X|^β exp(µ diam(X)) ≤ ζ < ∞  (52)
where β is the order of the commutator that we bound (i.e., β = 2 for Eq. (14), β = 3 for Eq. (50), and so on).
C.6 Strictly Local Hamiltonians
In this subsection, we consider strictly local interactions. We assume that
H = Σ_X h_X,  (53)
where h X is supported on set X which we assume obeys diam(X) ≤ 1.
Note that any bounded range interaction (for example, with terms supported on sets of diameter at most R for some given R) can be written in this form by re-scaling the metric by a constant. Further, we assume a bound on the commutator
‖[h_X, h_Y]‖ ≤ 2K  (55)
for some constant K.
We make no assumption on the bound on the norm of h X . However, we emphasize that this does not mean that we are considering unbounded operators. Rather, we assume that all h X have bounded norm (which, in conjunction with an assumption of finite system size, means that H has bounded norm allowing us to use various analytic estimates similar to those above), but all bounds will be uniform in the bound on the norm of h X as well as the system size.
Define C B (X, t) and D B (X, t) as above. Let S denote the collection of sets Z such that diam(Z) ≤ 1. We use Eq. (24) from before:
C_B(X, t) ≤ C_B(X, 0) + Σ_{Z∈S : Z∼X} 2 ∫_0^{|t|} D_B(Z, s) ds,  (56)
where we explicitly write the requirement that Z ∈ S, as well as a version of Eq. (27) that follows from Eq. (55):
D_B(X, t) ≤ D_B(X, 0) + Σ_{Z∈S : Z∼X} 2K ∫_0^{|t|} C_B(Z ∪ X, s) ds.  (57)
We write X ∼∼ Z if there exists Y ∈ S such that X ∼ Y ∼ Z; this is equivalent to requiring that dist(X, Z) ≤ 1. We write !X ∼∼ Z if it is not the case that X ∼∼ Z. Thus,
dist(X, Y) > 1 ⟹ C_B(X, t) ≤ 4K Σ_{Z_1,Z_2∈S : X∼Z_1∼Z_2} ∫_0^{|t|} C_B(Z_1 ∪ Z_2, s)(|t| − s) ds,  (58)
dist(X, Y) ≤ 1 ⟹ C_B(X, t) ≤ 2‖B‖.
Here the first case follows by combining Eqs. (56) and (57) with the condition that D_B(Z, 0) = 0 for Z ≁ Y. The substitution of D_B(Z, s) in Eq. (56) by Eq. (57) gives a double integral, but we can change the order of integration to obtain the (|t| − s) factor. The second case is trivially true for any choice of X.
Thus, iterating Eq. (58) we find that for dist(X, Y ) > 1 we have
C_B(X, t) ≤ 4K Σ_{Z_1,Z_2∈S : X∼Z_1∼Z_2} ∫_0^{|t|} C_B(Z_1 ∪ Z_2, s_1)(|t| − s_1) ds_1
 ≤ 8K‖B‖ Σ_{Z_1,Z_2∈S : X∼Z_1∼Z_2∼∼Y} ∫_0^{|t|} (|t| − s_1) ds_1
  + (4K)² Σ_{Z_1,Z_2,Z_3,Z_4∈S : X∼Z_1∼Z_2, (Z_1∪Z_2)∼Z_3∼Z_4, !Z_2∼∼Y} ∫_0^{|t|} (|t| − s_1) ∫_0^{|s_1|} C_B(Z_3 ∪ Z_4, s_2) |s_1 − s_2| ds_2 ds_1
 ≤ 8K‖B‖ Σ_{Z_1,Z_2∈S : X∼Z_1∼Z_2∼∼Y} ∫_0^{|t|} (|t| − s_1) ds_1
  + 2 · (4K)² ‖B‖ Σ_{Z_1,Z_2,Z_3,Z_4∈S : X∼Z_1∼Z_2, (Z_1∪Z_2)∼Z_3∼Z_4∼∼Y, !Z_2∼∼Y} ∫_0^{|t|} (|t| − s_1) ∫_0^{|s_1|} |s_1 − s_2| ds_2 ds_1
  + (4K)³ Σ_{Z_1,…,Z_6∈S : X∼Z_1∼Z_2, (Z_1∪Z_2)∼Z_3∼Z_4, (Z_3∪Z_4)∼Z_5∼Z_6, !Z_2∼∼Y, !Z_4∼∼Y} ∫_0^{|t|} (|t| − s_1) ∫_0^{|s_1|} |s_1 − s_2| ∫_0^{|s_2|} C_B(Z_5 ∪ Z_6, s_3) |s_3 − s_2| ds_3 ds_2 ds_1
 ≤ ⋯
We continue recursing in this fashion, using Eq. (58) to substitute for C_B. Thus, we find that C_B(X, t) is bounded by the sum, over k > 0 and over sequences X ∼ Z_1 ∼ Z_2, (Z_1 ∪ Z_2) ∼ Z_3 ∼ Z_4, ⋯ ∼ Z_{2k} ∼∼ Y with Z_j ∈ S, such that for no j < k do we have Z_{2j} ∼∼ Y, of 2‖B‖ (4K)^k |t|^{2k}/(2k)!. Since all terms in the sum are positive, this is bounded by the sum over all sequences X ∼ Z_1 ∼ … ∼ Z_{2k} ∼∼ Y of 2‖B‖ (4K)^k |t|^{2k}/(2k)!, i.e., we may remove the restriction !Z_{2j} ∼∼ Y. Further, we may relax the restriction (Z_{2j−1} ∪ Z_{2j}) ∼ Z_{2j+1} to Z_{2j} ∼∼ Z_{2j+1}.
For any graph with bounded degree d, this gives a Lieb-Robinson velocity v LR bounded by a constant times d √ K. Indeed, the sum is bounded by a sum over both even and odd length sequences
X ∼ Z_1 ∼ Z_2 ∼∼ Z_3 ∼ Z_4 ∼∼ Z_5 ⋯ ∼ Z_n ∼∼ Y, with any n ≥ dist(X, Y), of 2‖B‖ (2√K)^n |t|^n/n!.
This is the same sum as appears in the Lieb-Robinson bound for strictly local interactions whose strength is bounded in norm by √K. Remark: As in the proof of the usual Lieb-Robinson bound, the convergence of the sum of the first n terms of the sequence to C_B(X, t) as n → ∞ can be established by bounding the remainder term, which is proportional to a sum of C_B(Z_{2k−1} ∪ Z_{2k}, s_k), using C_B ≤ 2‖B‖. In fact, since the sum over sequences X ∼ Z_1 ∼ ⋯ ∼ Z_{2k} ∼∼ Y is empty if 3k + 1 < dist(X, Y), the remainder for the sum of the first ℓ = dist(X, Y) terms is so small that we can conclude v_LR = O(d√K) without dealing with an infinite series. This was the proof method in Appendix C.4. One disadvantage of this simpler argument is that the resulting bound on the Lieb-Robinson velocity is slightly worse than what is obtainable by the infinite series, particularly if the interaction graph is expanding.
D Analytic functions are efficiently computable
Here we briefly review a polynomial approximation scheme [Tre12] for analytic functions, i.e., those that have convergent power series representations.7 For ρ > 1, define E_ρ to be the Bernstein ellipse, the image of the circle {z ∈ C : |z| = ρ} under the map z ↦ (z + z^{−1})/2. The Bernstein ellipse always encloses [−1, 1], and collapses to [−1, 1] ⊂ C as ρ → 1. It is useful to introduce the Chebyshev polynomials (of the first kind) T_m of degree m ≥ 0, defined by the equation
T_m((z + z^{−1})/2) = (z^m + z^{−m})/2.  (59)
Picking z on the unit circle, we see that T_m : [−1, 1] ∋ cos θ ↦ cos mθ ∈ [−1, 1].
Lemma 14. Let f be an analytic function on the interior of E_ρ for some ρ > 1, and assume sup_{z∈E_ρ} |f(z)| = M < ∞. Then f admits an approximate polynomial expansion in Chebyshev polynomials such that
max_{x∈[−1,1]} |f(x) − Σ_{j=0}^J a_j T_j(x)| ≤ [2M/(ρ − 1)] ρ^{−J}.  (60)
Proof. The series expansion is a disguised cosine series:
f(cos θ) = Σ_{j=0}^∞ a_j T_j(cos θ) = Σ_{j=0}^∞ a_j cos(jθ).
It is important that θ ↦ f(cos θ) is a periodic smooth function, and therefore its Fourier series converges to the function value. The coefficients can be read off as a_j = [(2 − δ_{j0})/(2π)] ∫_0^{2π} f(cos θ) cos(jθ) dθ. The assumption of analyticity means that z ↦ f((z + z^{−1})/2) is analytic on the annulus {z ∈ C : ρ^{−1} < |z| < ρ}. Thus, we can write the above integral as a contour integral along the circle C_ρ = {z ∈ C : |z| = ρ}, and obtain the bound |a_j| ≤ 2M ρ^{−j} for j > 0, using the symmetry z ↔ z^{−1} of the integrand. For j = 0, we know |a_0| ≤ M. Since |T_j(x)| ≤ 1 for x ∈ [−1, 1], this completes the proof.
Therefore, a function on [−1, 1] that is analytic over E_ρ can be computed to accuracy ε by evaluating a polynomial of degree O(log(1/ε)).
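As a numeric illustration of the geometric convergence in Lemma 14, the sketch below interpolates f = exp at Chebyshev points with NumPy. Interpolation at Chebyshev points is a standard stand-in for the truncated Chebyshev series; for analytic functions its error exceeds the series truncation error by at most a small constant factor.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f = np.exp                     # entire, hence analytic on every Bernstein ellipse
xs = np.linspace(-1, 1, 2001)

for J in (2, 4, 8, 16):
    coeffs = C.chebinterpolate(f, J)          # degree-J Chebyshev interpolant
    err = np.max(np.abs(f(xs) - C.chebval(xs, coeffs)))
    print(J, err)              # the error shrinks geometrically in J, cf. Eq. (60)
```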
E Hamiltonian simulation by quantum signal processing and Qubitization
In this section, we outline Hamiltonian simulation of e^{−itH} by quantum signal processing [LC17b] and qubitization [LC16] in three steps, and introduce a simple situational circuit optimization for constant-factor improvements in gate costs. First, one assumes that the Hamiltonian H acting on register s is encoded in a certain standard form:
(⟨G|_a ⊗ 1_s) O (|G⟩_a ⊗ 1_s) = H/α.  (65)
Here, we assume access to a unitary oracle O that acts jointly on registers a and s, and a unitary oracle G that prepares the state G|0⟩_a = |G⟩_a such that Eq. (65) is satisfied with some normalization constant α ≥ ‖H‖. Note that α represents the quality of the encoding; a smaller α leads to fewer overall queries to O and G. Second, the qubitization algorithm queries O and G to construct a unitary W, the qubiterate, with eigenphases θ_λ = sin^{−1}(λ/α) directly related to the eigenvalues of the Hamiltonian, H|λ⟩ = λ|λ⟩. In the case where O² = 1_as, this is accomplished by a reflection about |G⟩_a:
W = −i ((2|G⟩⟨G|_a − 1_a) ⊗ 1_s) O.  (66)
For every eigenstate |λ , the normalized states
|G_λ⟩ = |G⟩_a |λ⟩_s,  |G_λ^⊥⟩ ∝ (1 − |G_λ⟩⟨G_λ|) W |G_λ⟩,  (67)
|G_{λ±}⟩ = (1/√2)(|G_λ⟩ ± i|G_λ^⊥⟩)
are eigenstates of W with eigenvalues W|G_{λ±}⟩ = ∓e^{±iθ_λ} |G_{λ±}⟩, θ_λ = sin^{−1}(λ/α).
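A small numeric check of this eigenphase relation, assuming a single-qubit Hamiltonian encoded with a standard select/prepare pair (matrix conventions, such as placing the ancilla register first, are ours): the eigenvalue angles of W built per Eq. (66) match θ_λ = sin^{−1}(λ/α) with the signs and shifts stated above.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

alphas, paulis = [0.6, 0.8], [X, Z]       # H = 0.6 X + 0.8 Z, so alpha = 1.4
alpha = sum(alphas)
H = sum(a * P for a, P in zip(alphas, paulis))

# Select oracle O = sum_j |j><j|_a (x) P_j and prepare-state projector for |G>_a.
O = np.block([[paulis[0], np.zeros((2, 2))],
              [np.zeros((2, 2)), paulis[1]]])
g = np.array([np.sqrt(a / alpha) for a in alphas])
Gproj = np.kron(np.outer(g, g), I2)       # |G><G|_a (x) 1_s

W = -1j * (2 * Gproj - np.eye(4)) @ O     # the qubiterate, Eq. (66)

lam = np.linalg.eigvalsh(H)
predicted = sorted(np.angle(np.concatenate(
    [[-np.exp(1j * np.arcsin(l / alpha)),
      np.exp(-1j * np.arcsin(l / alpha))] for l in lam])))
print(np.round(predicted, 6))
print(np.round(sorted(np.angle(np.linalg.eigvals(W))), 6))  # should agree
```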
Third, the quantum signal processing algorithm queries the qubiterate to approximate a unitary V which has the same eigenstates |G_{λ±}⟩, but with eigenphases transformed as ∓e^{±iθ_λ} → e^{−iαt sin θ_λ} = e^{−itλ}.
That is, V always has both |G_λ⟩ and |G_λ^⊥⟩ in an eigenspace. Therefore, the time evolution by e^{−itH} is accomplished as follows:
V |G⟩_a |λ⟩_s = V (|G_{λ+}⟩ + |G_{λ−}⟩)/√2 = e^{−itλ} |G⟩_a |λ⟩_s  (70)
⟹ (⟨G|_a ⊗ 1_s) V (|G⟩_a ⊗ 1_s) = e^{−itH}.
The transformation from W to V is accomplished by a unitary sequence V_ϕ⃗ such that ⟨+|_b (V_ϕ⃗)_{bas} |+⟩_b ≈ V, where b is a single-qubit ancilla and |±⟩ satisfy X|±⟩ = ±|±⟩. V_ϕ⃗ is a product of controlled-W operations interspersed with single-qubit rotations, parameterized by ϕ⃗ ∈ R^N with N even, defined as follows:
V_ϕ⃗ = V†_{π+ϕ_N} V_{ϕ_{N−1}} V†_{π+ϕ_{N−2}} V_{ϕ_{N−3}} ⋯ V†_{π+ϕ_2} V_{ϕ_1},  (71)
V_ϕ = (e^{−iϕZ/2} ⊗ 1_as)(|+⟩⟨+|_b ⊗ 1_as + |−⟩⟨−|_b ⊗ W)(e^{iϕZ/2} ⊗ 1_as).  (72)
For every eigenstate W|θ⟩_as = e^{iθ}|θ⟩_as, the operator V_ϕ on a given |θ⟩_as introduces a phase kickback to the ancilla register b. The net action on the ancilla b, given |θ⟩_as, is e^{−iϕZ/2} e^{iθ/2} e^{−iθX/2} e^{iϕZ/2} = e^{iθ/2} e^{−iθP_ϕ/2}, where P_ϕ = X cos ϕ + Y sin ϕ.
Thus, by multiplying out the single-qubit rotations, (⟨+|_b ⟨θ|_as) V_ϕ⃗ (|+⟩_b |θ⟩_as) = ⟨+|_b e^{−iθP_{ϕ_N}/2} e^{−iθP_{ϕ_{N−1}}/2} ⋯ e^{−iθP_{ϕ_1}/2} |+⟩_b
= ⟨+|_b Σ_{k=0}^{N/2} [(c^1_k 1 + i c^Z_k Z) cos kθ + i (c^X_k X + c^Y_k Y) sin kθ] |+⟩_b = Σ_{k=0}^{N/2} [c^1_k cos kθ + i c^X_k sin kθ],  (73)
where {c^1, c^X, c^Y, c^Z} are real coefficients determined by ϕ⃗. (For the second equality, it is useful to work out the N = 2 case.) We would like Eq. (73) to be e^{−iαt sin θ}, or a good approximation thereof. The following Jacobi-Anger expansion suits this purpose:
e^{−iαt sin θ} = J_0(αt) + 2 Σ_{k even, k>0}^∞ J_k(αt) cos kθ − 2i Σ_{k odd, k>0}^∞ J_k(αt) sin kθ  (74)
where J_k is the Bessel function of the first kind. Remark that the function e^{−iθ} ↦ e^{−iαt sin θ} sends both e^{−iθ} and −e^{iθ} = e^{i(θ+π)} to the same value, and the same property holds in the right-hand side of Eq. (74) term by term. If we keep the series up to order N/2, the truncation error of this approximation is at most 2 Σ_{k>N/2} |J_k(αt)|. In principle, the angles ϕ⃗ that generate the desired coefficients {c^1, c^X} may be precomputed by a classical polynomial-time algorithm [LYC16, Haa19], given α ≥ ‖H‖ and t. This ultimately leads to an approximation of e^{−iαt sin θ} with error
ε = ‖(⟨+|_b ⟨G|_a ⊗ 1_s) V_ϕ⃗ (|+⟩_b |G⟩_a ⊗ 1_s) − e^{−itH}‖ ≤ 16 Σ_{k=q}^∞ |J_k(αt)| ≤ 32 (αt)^q/(2^q q!),  q = N/2 + 1.  (75)
(The extra factor of 8 in the estimate is mainly because we cannot guarantee that the truncated series is exactly implemented by the unitary V_ϕ⃗.) When evaluating gate counts of this simulation algorithm, we will use placeholder values for ϕ⃗. The algorithm requires a post-selection; however, the success probability is 1 − O(ε) since the post-selected operator is ε-close to a unitary.
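The choice of q from Eq. (75) is a one-line loop. The sketch below (using SciPy's Bessel functions; the values of αt and ε are arbitrary) also evaluates the actual Jacobi-Anger tail, which typically sits far below the stated bound.

```python
import math
from scipy.special import jv

alpha_t, eps = 10.0, 1e-3

# Smallest q with 32 (alpha t)^q / (2^q q!) <= eps, as in Eq. (75);
# the query count is then N = 2(q - 1).
q = 1
while 32 * alpha_t**q / (2**q * math.factorial(q)) > eps:
    q += 1

tail = 2 * sum(abs(jv(k, alpha_t)) for k in range(q, q + 200))
print(q, tail)   # the true truncation error 2 * sum_{k>=q} |J_k| is tiny
```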
E.1 Encoding coefficients in reflections
With these three steps, the remaining task is to construct the oracles O and G that encode the desired Hamiltonian. For a general Hamiltonian represented as a linear combination of M Pauli operators P_j, H = Σ_{j=0}^{M−1} α_j P_j with α_j > 0 and α = Σ_{j=0}^{M−1} α_j, the standard choice is O = Σ_{j=0}^{M−1} |j⟩⟨j|_a ⊗ P_j and |G⟩_a = Σ_{j=0}^{M−1} √(α_j/α) |j⟩_a. It is easy to verify that (⟨G|_a ⊗ 1_s) O (|G⟩_a ⊗ 1_s) = H/α and O² = 1_as, as desired. The gate complexities of O and G are asymptotically similar: the control logic for O may be constructed using O(M) NOT, CNOT, and Toffoli gates [CMN+17], and the creation of an arbitrary M-dimensional quantum state requires O(M) CNOT gates and arbitrary single-qubit rotations [SBM06]. Each single-qubit rotation may then be approximated to error ε by a sequence of O(log 1/ε) Clifford+T gates [KMM13]; if the overall algorithm uses N single-qubit rotations, a triangle inequality bounds the total error by Nε. However, if many coefficients α_j of H are identical, the use of arbitrary state preparation is excessively costly. For instance, in the extreme case where all α_j are identical and M is a power of 2, log₂ M Hadamard gates suffice to prepare |G⟩_a; when M is not a power of two, a uniform superposition over all states up to M may still be prepared with cost O(log M) by combining integer arithmetic with amplitude amplification. Or, we can add extra Hamiltonian terms proportional to 1 to make the number of terms a power of 2; this shifts the energy levels of the Hamiltonian and gives a time-dependent global phase factor to the time-evolution unitary. In another case, where only a few α_j differ, one may exploit a unary representation of the control logic [PKS+18] to accelerate the preparation of |G⟩ at the cost of additional space overhead.
Rather than encoding coefficient information in the state |G⟩, one simple alternative approach is to encode coefficient information by replacing the operators P_j with exponentials U_j = P_j e^{−iP_j cos^{−1}(α_j)} and taking a linear combination of (U_j + U_j†)/2 = α_j P_j. However, as U_j² ≠ 1 in general, the downside of this approach is that O² ≠ 1_as, which violates the prerequisite for the simple qubitization circuit of Eq. (66).
We present a simple modification that allows us to encode coefficient information in unitary operators whilst maintaining the condition O² = 1_as. Consider the two-qubit circuit acting on register c:
Q_j = (1 ⊗ e^{iβ_jX}) SWAP (1 ⊗ e^{−iβ_jX}),  Q_j² = 1,  ⟨00| Q_j |00⟩ = cos²β_j
where β j > 0. Thus if we define
O = Σ_{j=0}^{M−1} |j⟩⟨j|_a ⊗ Q_j ⊗ P_j,  |G⟩_ac = Σ_{j=0}^{M−1} (1/√M) |j⟩_a |00⟩_c,  (79)
then O² = 1_acs and this encodes the Hamiltonian (⟨G|_ac ⊗ 1_s) O (|G⟩_ac ⊗ 1_s) = (1/M) Σ_{j=0}^{M−1} cos²(β_j) P_j.
This construction can be advantageous in situations, such as in Eq. (11), where most coefficients are 1 and only a few are less than one; whenever β_j = 0, we replace Q_j with the identity operator.
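The two claimed identities for Q_j are easy to verify numerically. A minimal check with explicit 4 × 4 matrices (the qubit ordering within register c is our convention):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

def Qj(beta):
    rx = np.cos(beta) * np.eye(2) + 1j * np.sin(beta) * X   # e^{i beta X}
    u = np.kron(np.eye(2), rx)
    # (1 (x) e^{i beta X}) SWAP (1 (x) e^{-i beta X})
    return u @ SWAP @ u.conj().T

Q = Qj(0.7)
print(np.allclose(Q @ Q, np.eye(4)))          # Q_j^2 = 1
print(Q[0, 0].real, np.cos(0.7) ** 2)         # <00|Q_j|00> = cos^2(beta_j)
```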
The decomposition of the time-evolution unitary in Figure 1 is the result of iterated applications of Lemma 6. For a one-dimensional chain with open boundary conditions, let L be the length of the chain, so there are O(L) qubits. Take two blocks A and Y ∪ B of the chain such that their intersection Y has length ℓ ≪ L and their union is the whole chain. (See Fig. 1.) Applying Lemma 6, we approximate the full unitary by the composition of forward time evolution on Y ∪ B, backward time evolution on Y, and forward time evolution on A. This incurs approximation error δ in spectral norm. Every block unitary in this decomposition is a time-evolution operator with respect to the sum of Hamiltonian terms within the block, and we can recursively apply the decomposition for large blocks (size ≫ ℓ). We end up with a layout of small unitaries as shown in Figure 1 (a). The error from this decomposition is O(δL/ℓ), which is exponentially small in ℓ for t = O(1). Going to higher dimensions D > 1, we first decompose the full time evolution into unitaries on O(L/ℓ) slabs of size L^{D−1} × 2ℓ. (Since L ≫ ℓ, each slab looks like a hyperplane of codimension 1.) This entails error O(e^{−µℓ} L^D/ℓ) since the boundary term has norm at most O(L^{D−1}). For each slab, the decomposition into O(L/ℓ) blocks of size L^{D−2} × 2ℓ × 2ℓ, which look like hyperplanes of codimension 2, gives error O(e^{−µℓ}(ℓL^{D−2})(L/ℓ)). Summing up all the (thickened) hyperplanes, we get O(e^{−µℓ} L^D/ℓ) for the second round of decomposition. After D rounds of the decomposition the total error is O(e^{−µℓ} D L^D/ℓ), and we are left with O((L/ℓ)^D) blocks of unitaries for t = O(1).
It remains to implement the unitaries on m = O(TL^D/ℓ^D) blocks of O(ℓ^D) qubits, where ℓ = O(log(TL/ε)), to accuracy ε/m. All block unitaries are time evolutions under block Hamiltonians, and can be implemented using any known Hamiltonian simulation algorithm. For a time-independent Hamiltonian, if we use an algorithm that is polynomial in the spacetime volume and polylogarithmic in the accuracy, such as those based on signal processing [LC17b, LC16] or linear combinations of unitaries [BCC+14, BCC+15, BCK15], then the overall gate complexity is O(TL^D polylog(TL/ε)) = O(Tn polylog(Tn/ε)), where the exponent in the polylog factor depends on the choice of the algorithm.6 Similarly, for a slowly varying time-dependent Hamiltonian, we achieve the same gate complexity by using any time-dependent Hamiltonian simulation algorithm that is polynomial in the spacetime volume and polylogarithmic in the accuracy. For example, the fractional-queries algorithm [BCC+14] or the Taylor series approach [BCC+15, LW18, KSB18] possesses these properties.
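For intuition, the single-cut (m = 1) decomposition error tested in Figure 2 can be reproduced with exact matrix exponentials on a few qubits. The sketch below assumes the Heisenberg model of Eq. (11) takes the standard form Σ_j (X_jX_{j+1} + Y_jY_{j+1} + Z_jZ_{j+1} + h_jZ_j) with open boundaries; this form is an assumption for illustration, not a quotation of Eq. (11).

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

def op(n, factors):
    """Tensor product over n sites; factors maps site -> 2x2 matrix."""
    m = np.eye(1, dtype=complex)
    for j in range(n):
        m = np.kron(m, factors.get(j, np.eye(2)))
    return m

def ham(n, sites, h):
    """Heisenberg chain restricted to a set of sites (open boundaries)."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for j in sites:
        if j + 1 in sites:
            for P in (X, Y, Z):
                H += op(n, {j: P, j + 1: P})
        H += h[j] * op(n, {j: Z})
    return H

n, t = 8, 1.0
h = rng.uniform(-1, 1, size=n)
U = lambda s: expm(-1j * t * ham(n, set(s), h))
for l in (1, 2, 3):                          # overlap size
    A = range(0, n // 2 + l)                 # left block, contains overlap Y
    Yov = range(n // 2, n // 2 + l)          # the overlap Y
    YB = range(n // 2, n)                    # right block, contains Y
    err = np.linalg.norm(U(A) @ U(Yov).conj().T @ U(YB) - U(range(n)), 2)
    print(l, err)                            # should drop exponentially in l
```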
Figure 2: Numerical test of the m = 1 decomposition of the real-time evolution operator based on Lieb-Robinson bounds. The Hamiltonian is the antiferromagnetic one-dimensional Heisenberg model of Eq. (11) on up to 11 spins with open boundary conditions. The error of the decomposition in Eq. (12) is almost independent of the position a of the overlap within the system and is exponentially small in the overlap size ℓ. All lines plotted are best fits of ε_LR to a model exponentially decaying in ℓ.
One source of error comes from the decomposition of e^{−iTH} using Lieb-Robinson bounds into m = O(Tn/ℓ) blocks, and is bounded from above by m ε_LR = O(m e^{−µℓ}) for some µ > 0. The other is from approximate simulations of the block unitaries using known algorithms such as [BCC+15, LC16]. If each block is simulated up to error ε′, then the total error of the algorithm is at most m(ε_LR + ε′). Thus, we need ℓ = O(log(Tn/ε)).
Figure 3: T-gate counts for simulating the Heisenberg model of Eq. (11) for time T = n, error ε = 10^{−3}, and h_j ∈ [−1, 1] chosen uniformly at random, using the Lieb-Robinson decomposition with overlap sizes ℓ = 7, 8, 9. Plotted for reference is the complexity Õ(n³) of simulating the entire n-site system using quantum signal processing (QSP) [LC17b] without decomposing into blocks. Also plotted, with data from [CMN+17], are the gate counts optimized over a numerical simulation of Lie-Trotter-Suzuki product formulas of various orders and step sizes.
Figure 4: Hexagonal tiling of the 2D plane with 3 colors. The grey regions ("rings") that encircle blue hexagons represent the intersection of the ℓ-neighborhood B⁺ of B and R ∪ G.
1. This is sometimes referred to as "real time evolution", to distinguish it from "imaginary time evolution", which we will not discuss in this paper.
2. There are some physical situations where we do care about more general Hamiltonians. Even though the system we are given may be described by a geometrically local Hamiltonian, it is sometimes computationally advantageous to represent the given system with a non-geometrically local (or sparse) Hamiltonian.
3. The condition on the Hamiltonian, that each term be slowly varying with respect to time, is only needed to use the Hamiltonian simulation algorithms of [BCC+14, BCC+15, LW18, KSB18]. If there is a Hamiltonian simulation algorithm whose gate count is polylogarithmic in accuracy and polynomial in system size and evolution time regardless of the time derivative, then our results give an algorithm that is independent of the time derivative as well.
4. Strictly speaking, the Lieb-Robinson velocity is defined to be the infimum of any v_LR such that Eq. (5) holds.
5. We say a unitary U_t is generated by H_t if U_t is the solution to i∂_t U_t = H_t U_t with U_0 = 1.
6. If we use the quantum signal processing based algorithms [LC17b, LC16] to implement the blocks of size O(ℓ^D), then we need O(log ℓ) ancilla qubits per block. Thus, if we do not mind implementing them all in series, the number of ancillas needed is O(log log(Tn/ε)), which is much smaller than what would be needed if the quantum signal processing algorithm were directly used to simulate the full system.
7. An example of a smooth but non-analytic function is exp(−z^{−2}). This function fails to be analytic at z = 0: it is infinitely differentiable at z = 0, with all derivatives zero, hence its Taylor series is identically zero, but the function is not identically zero around z = 0. If a real function is analytic at x ∈ R, then its power series converges in an open neighborhood of x in the complex plane (analytic continuation).
References
[AT03] Dorit Aharonov and Amnon Ta-Shma. Adiabatic quantum state generation and statistical zero knowledge. In Proceedings of the 35th ACM Symposium on Theory of Computing, pages 20-29, 2003. arXiv:quant-ph/0301023, doi:10.1145/780542.780546.
[BACS07] Dominic W. Berry, Graeme Ahokas, Richard Cleve, and Barry C. Sanders. Efficient quantum algorithms for simulating sparse Hamiltonians. Communications in Mathematical Physics, 270(2):359-371, 2007. arXiv:quant-ph/0508139, doi:10.1007/s00220-006-0150-x.
[BBC+95] Adriano Barenco, Charles H. Bennett, Richard Cleve, David P. DiVincenzo, Norman Margolus, Peter Shor, Tycho Sleator, John A. Smolin, and Harald Weinfurter. Elementary gates for quantum computation. Phys. Rev. A, 52:3457-3467, 1995. arXiv:quant-ph/9503016, doi:10.1103/PhysRevA.52.3457.
[BCC+14] Dominic W. Berry, Andrew M. Childs, Richard Cleve, Robin Kothari, and Rolando D. Somma. Exponential improvement in precision for simulating sparse Hamiltonians. In Proceedings of the 46th ACM Symposium on Theory of Computing, pages 283-292, 2014. arXiv:1312.1414, doi:10.1145/2591796.2591854.
[BCC+15] Dominic W. Berry, Andrew M. Childs, Richard Cleve, Robin Kothari, and Rolando D. Somma. Simulating Hamiltonian dynamics with a truncated Taylor series. Phys. Rev. Lett., 114:090502, 2015. arXiv:1412.4687, doi:10.1103/PhysRevLett.114.090502.
[BCK15] Dominic W. Berry, Andrew M. Childs, and Robin Kothari. Hamiltonian simulation with nearly optimal dependence on all parameters. In 2015 IEEE 56th Annual Symposium on Foundations of Computer Science, pages 792-809, 2015. arXiv:1501.01715, doi:10.1109/FOCS.2015.54.
[BPT10] Sergey Bravyi, David Poulin, and Barbara Terhal. Tradeoffs for reliable quantum information storage in 2D systems. Phys. Rev. Lett., 104:050503, 2010. arXiv:0909.5200, doi:10.1103/PhysRevLett.104.050503.
[Chi04] Andrew M. Childs. Quantum information processing in continuous time. PhD thesis, Massachusetts Institute of Technology, 2004.
[CMN+17] Andrew M. Childs, Dmitri Maslov, Yunseong Nam, Neil J. Ross, and Yuan Su. Toward the first quantum simulation with quantum speedup. 2017. arXiv:1711.10980v1.
[Fey82] Richard P. Feynman. Simulating physics with computers. International Journal of Theoretical Physics, 21(6-7):467-488, 1982.
[Fey85] Richard P. Feynman. Quantum mechanical computers. Optics News, 11(2):11-20, 1985.
[Haa19] Jeongwan Haah. Product decomposition of periodic functions in quantum signal processing. Quantum, 3:190, 2019. arXiv:1806.10236, doi:10.22331/q-2019-10-07-190.
[Has04] M. B. Hastings. Lieb-Schultz-Mattis in higher dimensions. Phys. Rev. B, 69:104431, 2004. arXiv:cond-mat/0305505, doi:10.1103/PhysRevB.69.104431.
[Has10] M. B. Hastings. Locality in quantum systems. 2010. arXiv:1008.5137.
[HK06] Matthew B. Hastings and Tohru Koma. Spectral gap and exponential decay of correlations. Commun. Math. Phys., 265:781-804, 2006. arXiv:math-ph/0507008, doi:10.1007/s00220-006-0030-4.
[JLP14] Stephen P. Jordan, Keith S. M. Lee, and John Preskill. Quantum computation of scattering in scalar quantum field theories. Quantum Information and Computation, 14(11&12):1014-1080, 2014. arXiv:1112.4833v1.
[KMM13] Vadym Kliuchnikov, Dmitri Maslov, and Michele Mosca. Asymptotically optimal approximation of single qubit unitaries by Clifford and T circuits using a constant number of ancillary qubits. Phys. Rev. Lett., 110:190502, 2013. doi:10.1103/PhysRevLett.110.190502.
[Kni95] Emanuel Knill. Approximation by quantum circuits. 1995. arXiv:quant-ph/9508006.
[KS75] John Kogut and Leonard Susskind. Hamiltonian formulation of Wilson's lattice gauge theories. Phys. Rev. D, 11:395-408, 1975. doi:10.1103/PhysRevD.11.395.
[KSB18] Maria Kieferova, Artur Scherer, and Dominic Berry. Simulating the dynamics of time-dependent Hamiltonians with a truncated Dyson series. 2018. arXiv:1805.00582.
[KSV02] A. Yu. Kitaev, A. H. Shen, and M. N. Vyalyi. Classical and Quantum Computation, volume GSM 47. American Mathematical Society, 2002.
[LC16] Guang Hao Low and Isaac L. Chuang. Hamiltonian simulation by qubitization. Quantum, 3:163, 2019. arXiv:1610.06546, doi:10.22331/q-2019-07-12-163.
[LC17a] Guang Hao Low and Isaac L. Chuang. Hamiltonian simulation by uniform spectral amplification. 2017. arXiv:1707.05391.
[LC17b] Guang Hao Low and Isaac L. Chuang. Optimal Hamiltonian simulation by quantum signal processing. Phys. Rev. Lett., 118:010501, 2017. arXiv:1606.02685v2, doi:10.1103/PhysRevLett.118.010501.
[Llo96] Seth Lloyd. Universal quantum simulators. Science, 273(5278):1073-1078, 1996. doi:10.1126/science.273.5278.1073.
[LR72] Elliott H. Lieb and Derek W. Robinson. The finite group velocity of quantum spin systems. Communications in Mathematical Physics, 28(3):251-257, 1972. doi:10.1007/BF01645779.
[LW18] Guang Hao Low and Nathan Wiebe. Hamiltonian simulation in the interaction picture. 2018. arXiv:1805.00675.
[LYC16] Guang Hao Low, Theodore J. Yoder, and Isaac L. Chuang. Methodology of resonant equiangular composite quantum gates. Phys. Rev. X, 6:041067, 2016. arXiv:1603.03996, doi:10.1103/PhysRevX.6.041067.
[Mic12] Spyridon Michalakis. Stability of the area law for the entropy of entanglement. 2012. arXiv:1206.6900.
[Nag10] Daniel Nagaj. Fast universal quantum computation with railroad-switch local Hamiltonians. Journal of Mathematical Physics, 51:062201, 2010. arXiv:0908.4219, doi:10.1063/1.3384661.
[NC00] Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press, 2000.
[NS06] Bruno Nachtergaele and Robert Sims. Lieb-Robinson bounds and the exponential clustering theorem. Commun. Math. Phys., 265:119-130, 2006. arXiv:math-ph/0506030v3, doi:10.1007/s00220-006-1556-1.
[Osb06] Tobias J. Osborne. The dynamics of 1D quantum spin systems can be approximated efficiently. Phys. Rev. Lett., 97:157202, 2006. arXiv:quant-ph/0508031, doi:10.1103/PhysRevLett.97.157202.
[PKS+18] David Poulin, Alexei Kitaev, Damian S. Steiger, Matthew B. Hastings, and Matthias Troyer. Fast quantum algorithm for spectral properties. Phys. Rev. Lett., 121:010501, 2018. arXiv:1711.11025, doi:10.1103/PhysRevLett.121.010501.
[PSHKMK10] Isabeau Prémont-Schwarz, Alioscia Hamma, Israel Klich, and Fotini Markopoulou-Kalamara. Lieb-Robinson bounds for commutator-bounded operators. Phys. Rev. A, 81(4):040102, 2010. arXiv:0912.4544v2, doi:10.1103/PhysRevA.81.040102.
[SBM06] Vivek V. Shende, Stephen S. Bullock, and Igor L. Markov. Synthesis of quantum-logic circuits. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 25(6):1000-1010, 2006. arXiv:quant-ph/0406176, doi:10.1109/TCAD.2005.855930.
[Sho97] Peter W. Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM Journal on Computing, 26(5):1484-1509, 1997. Preliminary version in FOCS 1994. arXiv:quant-ph/9508027.
[Suz91] Masuo Suzuki. General theory of fractal path integrals with applications to many-body theories and statistical physics. Journal of Mathematical Physics, 32(2):400-407, 1991. doi:10.1063/1.529425.
[Sza97] Stanislaw J. Szarek. Metric entropy of homogeneous spaces and Finsler geometry of classical Lie groups. 1997. arXiv:math/9701204.
[Tre12] Lloyd N. Trefethen. Approximation Theory and Approximation Practice. Society for Industrial and Applied Mathematics, 2012.
[Tro59] H. F. Trotter. On the product of semi-groups of operators. Proceedings of the American Mathematical Society, 10(4):545-551, 1959. doi:10.1090/s0002-9939-1959-0108732-6.
[VC05] F. Verstraete and J. I. Cirac. Mapping local Hamiltonians of fermions to local Hamiltonians of spins. J. Stat. Mech., 0509:P09012, 2005. arXiv:cond-mat/0508353v3, doi:10.1088/1742-5468/2005/09/P09012.
[Wat18] John Watrous. The Theory of Quantum Information. Cambridge University Press, 2018.
UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction
December 7, 2018
Leland McInnes
John Healy
James Melville
Tutte Institute for Mathematics and Computing
UMAP (Uniform Manifold Approximation and Projection) is a novel manifold learning technique for dimension reduction. UMAP is constructed from a theoretical framework based in Riemannian geometry and algebraic topology. The result is a practical scalable algorithm that applies to real world data. The UMAP algorithm is competitive with t-SNE for visualization quality, and arguably preserves more of the global structure with superior run time performance. Furthermore, UMAP has no computational restrictions on embedding dimension, making it viable as a general purpose dimension reduction technique for machine learning.
1 Introduction
Dimension reduction seeks to produce a low dimensional representation of high dimensional data that preserves relevant structure (relevance often being application dependent). Dimension reduction is an important problem in data science for both visualization, and a potential pre-processing step for machine learning. Dimension reduction plays an important role in data science, being a fundamental technique in both visualisation and as pre-processing for machine learning. Dimension reduction techniques are being applied in a broadening range of fields and on ever increasing sizes of datasets. It is thus desirable to have an algorithm that is both scalable to massive data and able to cope with the diversity of data available. Dimension reduction algorithms tend to fall into two categories: those that seek to preserve the distance structure within the data, and those that favor the preservation of local distances over global distance. Algorithms such as PCA [22], MDS [23], and Sammon mapping [41] fall into the former category, while t-SNE [50, 49], Isomap [47], LargeVis [45], Laplacian eigenmaps [5, 6] and diffusion maps [14] all fall into the latter category.
In this paper we introduce a novel manifold learning technique for dimension reduction. We provide a sound mathematical theory grounding the technique and a practical scalable algorithm that applies to real world data. UMAP (Uniform Manifold Approximation and Projection) builds upon mathematical foundations related to the work of Belkin and Niyogi on Laplacian eigenmaps. We seek to address the issue of uniform data distributions on manifolds through a combination of Riemannian geometry and the work of David Spivak [43] in category theoretic approaches to geometric realization of fuzzy simplicial sets. t-SNE is the current state-of-the-art for dimension reduction for visualization. Our algorithm is competitive with t-SNE for visualization quality and arguably preserves more of the global structure with superior run time performance. Furthermore, UMAP's topological foundations allow it to scale to significantly larger data set sizes than are feasible for t-SNE. Finally, UMAP has no computational restrictions on embedding dimension, making it viable as a general purpose dimension reduction technique for machine learning.
Based upon preliminary releases of a software implementation, UMAP has already found widespread use in the fields of bioinformatics [4, 15, 37, 2, 36, 13], materials science [27, 19], and machine learning [8, 20, 17, 38] among others.
This paper is laid out as follows. In Section 2 we describe the theory underlying the algorithm. Section 2 is necessary to understand both the theory underlying why UMAP works and the motivation for the choices made in developing the algorithm. A reader without a background (or interest) in topological data analysis, category theory, or the theoretical underpinnings of UMAP should skip over this section and proceed directly to Section 3. That being said, we feel that strong theory and mathematically justified algorithmic decisions are of particular importance in the field of unsupervised learning. This is, at least partially, due to the plethora of proposed objective functions within the area. In Section 3 we provide a more computational description of UMAP. Section 3 should provide readers less familiar with topological data analysis with a better foundation for understanding the theory described in Section 2. Appendix C contrasts UMAP against the more familiar algorithms t-SNE and LargeVis, describing all these algorithms in similar language. This section should assist readers already familiar with those techniques to quickly gain an understanding of the UMAP algorithm, if not its theoretical underpinnings.
In Section 4 we discuss implementing the UMAP algorithm. This includes a more detailed algorithmic description, and discussion of the hyper-parameters involved and their practical effects.
In Section 5 we provide practical results on real world datasets as well as scaling experiments to demonstrate the algorithm's performance in real world scenarios as compared with other dimension reduction algorithms.
In Section 6 we discuss relative weaknesses of the algorithm, and applications for which UMAP may not be the best choice.
Finally, in Section 7 we detail a number of potential extensions of UMAP that are made possible by its construction upon solid mathematical foundations. These avenues for further development include semi-supervised learning, metric learning and heterogeneous data embedding.

2 Theoretical Foundations for UMAP

The theoretical foundations for UMAP are largely based in manifold theory and topological data analysis. Much of the theory is most easily explained in the language of topology and category theory. Readers may consult [31], [40] and [32] for background. Readers more interested in practical computational aspects of the algorithm, and not necessarily the theoretical motivation for the computations involved, may wish to skip this section.
At a high level, UMAP uses local manifold approximations and patches together their local fuzzy simplicial set representations to construct a topological representation of the high dimensional data. Given some low dimensional representation of the data, a similar process can be used to construct an equivalent topological representation. UMAP then optimizes the layout of the data representation in the low dimensional space, to minimize the cross-entropy between the two topological representations. The construction of fuzzy topological representations can be broken down into two problems: approximating a manifold on which the data is assumed to lie; and constructing a fuzzy simplicial set representation of the approximated manifold. In explaining the algorithm we will first discuss the method of approximating the manifold for the source data. Next we will discuss how to construct a fuzzy simplicial set structure from the manifold approximation. Finally, we will discuss the construction of the fuzzy simplicial set associated to a low dimensional representation (where the manifold is simply R^d), and how to optimize the representation with respect to our objective function.
2.1 Uniform distribution of data on a manifold and geodesic approximation
The first step of our algorithm is to approximate the manifold we assume the data lies on. The manifold may be known apriori (as simply R^n) or may need to be inferred from the data. Suppose the manifold is not known in advance and we wish to approximate geodesic distance on it. Let the input data be X = {X_1, . . . , X_N}. As in the work of Belkin and Niyogi on Laplacian eigenmaps [5, 6], for theoretical reasons it is beneficial to assume the data is uniformly distributed on the manifold. In practice, real world data is rarely so nicely behaved. However, if we assume that the manifold has a Riemannian metric not inherited from the ambient space, we can find a metric such that the data is approximately uniformly distributed with regard to that metric.
Formally, let M be the manifold we assume the data to lie on, and let g be the Riemannian metric on M. Thus, for each point p ∈ M we have g_p, an inner product on the tangent space T_pM.

Lemma 1. Let (M, g) be a Riemannian manifold in an ambient R^n, and let p ∈ M be a point. If g is locally constant about p in an open neighbourhood U such that g is a constant diagonal matrix in ambient coordinates, then in a ball B ⊆ U centered at p with volume $\frac{\pi^{n/2}}{\Gamma(n/2+1)}$ with respect to g, the geodesic distance from p to any point q ∈ B is $\frac{1}{r} d_{\mathbb{R}^n}(p, q)$, where r is the radius of the ball in the ambient space and $d_{\mathbb{R}^n}$ is the existing metric on the ambient space.
See Appendix A of the supplementary materials for a proof of Lemma 1. If we assume the data to be uniformly distributed on M (with respect to g) then any ball of fixed volume should contain approximately the same number of points of X regardless of where on the manifold it is centered. Conversely, a ball centered at X_i that contains exactly the k-nearest-neighbors of X_i should have fixed volume regardless of the choice of X_i ∈ X. Under Lemma 1 it follows that we can approximate geodesic distance from X_i to its neighbors by normalising distances with respect to the distance to the k-th nearest neighbor of X_i.
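As an illustrative sketch (ours, using scikit-learn's NearestNeighbors for the neighbor queries), the normalization of Lemma 1 amounts to the following:

import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_geodesic_estimates(X, k=15):
    """For each point, approximate geodesic distances to its k nearest
    neighbors by rescaling ambient distances by the k-th neighbor distance."""
    # Query k+1 neighbors since each point is its own nearest neighbor.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dists, idx = nn.kneighbors(X)
    dists, idx = dists[:, 1:], idx[:, 1:]  # drop the self-distances
    r = dists[:, -1]                       # distance to the k-th neighbor
    # Under Lemma 1, geodesic distance from X_i is ambient distance / r_i.
    normalized = dists / r[:, None]
    return normalized, idx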
In essence, by creating a custom distance for each X_i, we can ensure the validity of the assumption of uniform distribution on the manifold. The cost is that we now have an independent notion of distance for each and every X_i, and these notions of distance may not be compatible. We have a family of discrete metric spaces (one for each X_i) that we wish to merge into a consistent global structure.
This can be done in a natural way by converting the metric spaces into fuzzy simplicial sets.
2.2 Fuzzy topological representation
We will convert to fuzzy topological representations as a means to merge the incompatible local views of the data. The topological structure of choice is that of simplicial sets. For more details on simplicial sets we refer the reader to [21], [32], [39], or [18]. Our approach draws heavily upon the work of Michael Barr [3] and David Spivak in [43], and many of the definitions and theorems below are drawn or adapted from those sources.
To start we will review the definitions for simplicial sets. Simplicial sets provide a combinatorial approach to the study of topological spaces.
They are related to the simpler notion of simplicial complexes - which construct topological spaces by gluing together simple building blocks called simplices - but are more general. Simplicial sets are most easily defined purely abstractly in the language of category theory.
Definition 1. The category ∆ has as objects the finite order sets [n] = {1, . . . , n}, with morphisms given by (non-strictly) order-preserving maps.
Definition 2. A simplicial set is a functor from ∆^op to Sets, the category of sets.
Given a simplicial set X : ∆^op → Sets, it is common to denote the set X([n]) as X_n and refer to the elements of the set as the n-simplices of X. The simplest possible examples of simplicial sets are the standard simplices ∆^n, defined as the representable functors hom_∆(·, [n]). It follows from the Yoneda lemma that there is a natural correspondence between n-simplices of X and morphisms ∆^n → X in the category of simplicial sets, and it is often helpful to think in these terms.
Thus for each x ∈ X_n we have a corresponding morphism x : ∆^n → X. By the density theorem, and employing a minor abuse of notation, we then have

$$\mathrm{colim}_{x \in X_n} \Delta^n \cong X.$$
There is a standard covariant functor | · | : ∆ → Top mapping from the category ∆ to the category of topological spaces that sends [n] to the standard n-simplex |∆^n| ⊂ R^{n+1} defined as

$$|\Delta^n| \triangleq \left\{ (t_0, \dots, t_n) \in \mathbb{R}^{n+1} \;\middle|\; \sum_{i=0}^{n} t_i = 1,\ t_i \geq 0 \right\}$$
with the standard subspace topology. If X : ∆^op → Sets is a simplicial set then we can construct the realization of X (denoted |X|) as the colimit $|X| = \mathrm{colim}_{x \in X_n} |\Delta^n|$ and thus associate a topological space with a given simplicial set. Conversely, given a topological space Y we can construct an associated simplicial set S(Y), called the singular set of Y, by defining

$$S(Y) : [n] \mapsto \hom_{\mathbf{Top}}(|\Delta^n|, Y).$$
It is a standard result of classical homotopy theory that the realization and singular set functors form an adjunction, and provide the standard means of translating between topological spaces and simplicial sets. Our goal will be to adapt these powerful classical results to the case of finite metric spaces. We draw significant inspiration from Spivak, specifically [43], where he extends the classical theory of singular sets and topological realization to fuzzy singular sets and metric realization. To develop this theory here we will first outline a categorical presentation of fuzzy sets, due to [3], that will make extending classical simplicial sets to fuzzy simplicial sets most natural.
Classically a fuzzy set [55] is defined in terms of a carrier set A and a map µ : A → [0, 1] called the membership function. One is to interpret the value µ(x) for x ∈ A to be the membership strength of x to the set A. Thus membership of a set is no longer a bi-valent true or false property as in classical set theory, but a fuzzy property taking values in the unit interval. We wish to formalize this in terms of category theory.
Let I be the unit interval (0, 1] ⊆ R with topology given by intervals of the form [0, a) for a ∈ (0, 1]. The category of open sets (with morphisms given by inclusions) can be imbued with a Grothendieck topology in the natural way for any poset category.
Definition 3. A presheaf P on I is a functor from I^op to Sets. A fuzzy set is a presheaf on I such that all maps P(a ≤ b) are injections.
Presheaves on I form a category with morphisms given by natural transformations. We can thus form a category of fuzzy sets by simply restricting to the sub-category of presheaves that are fuzzy sets. We note that such presheaves are trivially sheaves under the Grothendieck topology on I. As one might expect, limits (including products) of such sheaves are well defined, but care must be taken to define colimits (and coproducts) of sheaves. To link to the classical approach to fuzzy sets one can think of a section P([0, a)) as the set of all elements with membership strength at least a. We can now define the category of fuzzy sets.
Definition 4. The category Fuzz of fuzzy sets is the full subcategory of sheaves on I spanned by fuzzy sets.
With this categorical presentation in hand, defining fuzzy simplicial sets is simply a matter of considering presheaves of ∆ valued in the category of fuzzy sets rather than the category of sets.
Definition 5. The category of fuzzy simplicial sets sFuzz is the category with objects given by functors from ∆^op to Fuzz, and morphisms given by natural transformations.
Alternatively, a fuzzy simplicial set can be viewed as a sheaf over ∆ × I, where ∆ is given the trivial topology and ∆ × I has the product topology. We will use ∆^n_{<a} to denote the sheaf given by the representable functor of the object ([n], [0, a)). The importance of this fuzzy (sheafified) version of simplicial sets is their relationship to metric spaces. We begin by considering the larger category of extended-pseudo-metric spaces.
Definition 6. An extended-pseudo-metric space (X, d) is a set X and a map d : X × X → R_{≥0} ∪ {∞} such that

1. d(x, y) ≥ 0, and x = y implies d(x, y) = 0;
2. d(x, y) = d(y, x); and
3. d(x, z) ≤ d(x, y) + d(y, z) or d(x, z) = ∞.
The category of extended-pseudo-metric spaces EPMet has as objects extended-pseudo-metric spaces and non-expansive maps as morphisms. We denote the subcategory of finite extended-pseudo-metric spaces FinEPMet. The choice of non-expansive maps in Definition 6 is due to Spivak, but we note that it closely mirrors the work of Carlsson and Memoli in [12] on topological methods for clustering as applied to finite metric spaces.
This choice is significant, since pure isometries are too strict and do not provide large enough Hom-sets.
In [43] Spivak constructs a pair of adjoint functors, Real and Sing, between the categories sFuzz and EPMet.
These functors are the natural extension of the classical realization and singular set functors from algebraic topology. The functor Real is defined in terms of standard fuzzy simplices ∆^n_{<a} as

$$\mathrm{Real}(\Delta^n_{<a}) \triangleq \left\{ (t_0, \dots, t_n) \in \mathbb{R}^{n+1} \;\middle|\; \sum_{i=0}^{n} t_i = -\log(a),\ t_i \geq 0 \right\},$$
similarly to the classical realization functor | · |. The metric on Real(∆^n_{<a}) is simply inherited from R^{n+1}. A morphism ∆^n_{<a} → ∆^m_{<b} exists only if a ≤ b, and is determined by a ∆ morphism σ : [n] → [m].
The action of Real on such a morphism is given by the map

$$(x_0, x_1, \dots, x_n) \mapsto \frac{\log(b)}{\log(a)} \left( \sum_{i_0 \in \sigma^{-1}(0)} x_{i_0},\ \sum_{i_0 \in \sigma^{-1}(1)} x_{i_0},\ \dots,\ \sum_{i_0 \in \sigma^{-1}(m)} x_{i_0} \right).$$
Such a map is clearly non-expansive since 0 ≤ a ≤ b ≤ 1 implies that log(b)/ log(a) ≤ 1.
We then extend this to a general simplicial set X via colimits, defining

$$\mathrm{Real}(X) \triangleq \mathrm{colim}_{\Delta^n_{<a} \to X} \mathrm{Real}(\Delta^n_{<a}).$$
Since the functor Real preserves colimits, it follows that there exists a right adjoint functor. Again, analogously to the classical case, we find the right adjoint, denoted Sing, is defined for an extended-pseudo-metric space Y in terms of its action on the category ∆ × I:

$$\mathrm{Sing}(Y) : ([n], [0, a)) \mapsto \hom_{\mathbf{EPMet}}(\mathrm{Real}(\Delta^n_{<a}), Y).$$
For our case we are only interested in finite metric spaces. To correspond with this we consider the subcategory of bounded fuzzy simplicial sets Fin-sFuzz. We therefore use the analogous adjoint pair FinReal and FinSing. Formally, we define the finite fuzzy realization functor as follows.
Definition 7. Define the functor FinReal : Fin-sFuzz → FinEPMet by setting

$$\mathrm{FinReal}(\Delta^n_{<a}) \triangleq (\{x_1, x_2, \dots, x_n\}, d_a), \quad \text{where } d_a(x_i, x_j) = \begin{cases} -\log(a) & \text{if } i \neq j, \\ 0 & \text{otherwise}, \end{cases}$$

and then defining

$$\mathrm{FinReal}(X) \triangleq \mathrm{colim}_{\Delta^n_{<a} \to X} \mathrm{FinReal}(\Delta^n_{<a}).$$
Similar to Spivak's construction, the action of FinReal on a map ∆^n_{<a} → ∆^m_{<b}, where a ≤ b, defined by σ : ∆^n → ∆^m, is given by

$$(\{x_1, x_2, \dots, x_n\}, d_a) \mapsto (\{x_{\sigma(1)}, x_{\sigma(2)}, \dots, x_{\sigma(n)}\}, d_b),$$

which is a non-expansive map since a ≤ b implies d_a ≥ d_b.
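Although the machinery is abstract, the correspondence it sets up is computationally simple: a fuzzy 1-simplex with membership strength a is realized as an edge of length -log(a), and a distance d corresponds to a membership strength exp(-d). A minimal numeric illustration (ours, not drawn from the reference implementation):

import numpy as np

def membership_to_distance(a):
    # FinReal sends a fuzzy simplex with membership strength a to a
    # metric space with pairwise distances -log(a).
    return -np.log(a)

def distance_to_membership(d):
    # FinSing (restricted to the 1-skeleton) inverts this: an edge of
    # length d acquires membership strength exp(-d).
    return np.exp(-d)

strengths = np.array([1.0, 0.5, 0.1])
d = membership_to_distance(strengths)
assert np.allclose(distance_to_membership(d), strengths)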
Since FinReal preserves colimits it admits a right adjoint, the fuzzy singular set functor FinSing. We can then define the (finite) fuzzy singular set functor in terms of the action of its image on ∆ × I, analogously to Sing. We then have the following theorem: the functors FinReal and FinSing form an adjunction, with FinReal left adjoint to FinSing (see Appendix B). With the necessary theoretical background in place, the means to handle the family of incompatible metric spaces described above becomes clear. Each metric space in the family can be translated into a fuzzy simplicial set via the fuzzy singular set functor, distilling the topological information while still retaining metric information in the fuzzy structure. Ironing out the incompatibilities of the resulting family of fuzzy simplicial sets can be done by simply taking a (fuzzy) union across the entire family. The result is a single fuzzy simplicial set which captures the relevant topological and underlying metric structure of the manifold M.
It should be noted, however, that the fuzzy singular set functor applies to extended-pseudo-metric spaces, which are a relaxation of traditional metric spaces. The results of Lemma 1 only provide accurate approximations of geodesic distance local to X_i for distances measured from X_i - the geodesic distances between other pairs of points within the neighborhood of X_i are not well defined. In deference to this lack of information we define distances between X_j and X_k in the extended-pseudo-metric space local to X_i (where i ≠ j and i ≠ k) to be infinite (local neighborhoods of X_j and X_k will provide suitable approximations).
For real data it is safe to assume that the manifold M is locally connected. In practice this can be realized by measuring distance in the extended-pseudo-metric space local to X_i as geodesic distance beyond the nearest neighbor of X_i. Since this sets the distance to the nearest neighbor to be equal to 0, this is only possible in the more relaxed setting of extended-pseudo-metric spaces. It ensures, however, that each 0-simplex is the face of some 1-simplex with fuzzy membership strength 1, meaning that the resulting topological structure derived from the manifold is locally connected. We note that this has a similar practical effect to the truncated similarity approach of Lee and Verleysen [26], but derives naturally from the assumption of local connectivity of the manifold.
Combining all of the above we can define the fuzzy topological representation of a dataset.
Definition 9. Let X = {X_1, . . . , X_N} be a dataset in R^n. Let {(X, d_i)}_{i=1...N} be a family of extended-pseudo-metric spaces with common carrier set X such that

$$d_i(X_j, X_k) = \begin{cases} d_M(X_j, X_k) - \rho & \text{if } i = j \text{ or } i = k, \\ \infty & \text{otherwise}, \end{cases}$$

where ρ is the distance to the nearest neighbor of X_i and d_M is geodesic distance on the manifold M, either known apriori, or approximated as per Lemma 1. The fuzzy topological representation of X is

$$\bigcup_{i=1}^{N} \mathrm{FinSing}((X, d_i)).$$
The (fuzzy set) union provides the means to merge together the different metric spaces. This provides a single fuzzy simplicial set as the global representation of the manifold formed by patching together the many local representations.
Given the ability to construct such topological structures, either from a known manifold, or by learning the metric structure of the manifold, we can perform dimension reduction by simply finding low dimensional representations that closely match the topological structure of the source data. We now consider the task of finding such a low dimensional representation.
2.3 Optimizing a low dimensional representation
Let Y = {Y_1, . . . , Y_N} ⊆ R^d be a low dimensional (d ≪ n) representation of X such that Y_i represents the source data point X_i. In contrast to the source data, where we want to estimate a manifold on which the data is uniformly distributed, we know the manifold for Y is R^d itself. Therefore we know the manifold and manifold metric apriori, and can compute the fuzzy topological representation directly. Of note, we still want to incorporate the distance to the nearest neighbor as per the local connectedness requirement. This can be achieved by supplying a parameter that defines the expected distance between nearest neighbors in the embedded space.
Given fuzzy simplicial set representations of X and Y, a means of comparison is required. If we consider only the 1-skeleton of the fuzzy simplicial sets we can describe each as a fuzzy graph, or, more specifically, a fuzzy set of edges. To compare two fuzzy sets we will make use of fuzzy set cross entropy. For these purposes we will revert to classical fuzzy set notation. That is, a fuzzy set is given by a reference set A and a membership strength function µ : A → [0, 1]. Comparable fuzzy sets have the same reference set. Given a sheaf representation P we can translate to classical fuzzy sets by setting $A = \bigcup_{a \in (0,1]} P([0, a))$ and $\mu(x) = \sup\{a \in (0, 1] \mid x \in P([0, a))\}$.
Definition 10. The cross entropy C of two fuzzy sets (A, µ) and (A, ν) is defined as

$$C((A, \mu), (A, \nu)) \triangleq \sum_{a \in A} \left( \mu(a) \log\left(\frac{\mu(a)}{\nu(a)}\right) + (1 - \mu(a)) \log\left(\frac{1 - \mu(a)}{1 - \nu(a)}\right) \right).$$
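As a concrete sketch (ours), for membership strengths stored as NumPy arrays over a common reference set, with a small constant guarding the logarithms:

import numpy as np

def fuzzy_set_cross_entropy(mu, nu, eps=1e-12):
    """Cross entropy of two fuzzy sets over the same reference set,
    given as arrays of membership strengths in [0, 1]."""
    mu = np.clip(mu, eps, 1 - eps)
    nu = np.clip(nu, eps, 1 - eps)
    return np.sum(mu * np.log(mu / nu)
                  + (1 - mu) * np.log((1 - mu) / (1 - nu)))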
Similar to t-SNE, we can optimize the embedding Y with respect to fuzzy set cross entropy C by using stochastic gradient descent. However, this requires a differentiable fuzzy singular set functor. If the expected minimum distance between points is zero the fuzzy singular set functor is differentiable for these purposes; however for any non-zero value we need to make a differentiable approximation (chosen from a suitable family of differentiable functions).
This completes the algorithm: by using manifold approximation and patching together local fuzzy simplicial set representations we construct a topological representation of the high dimensional data. We then optimize the layout of data in a low dimensional space to minimize the error between the two topological representations.
We note that in this case we restricted attention to comparisons of the 1-skeleton of the fuzzy simplicial sets. One can extend this to ℓ-skeleta by defining a cost function C_ℓ as

$$C_\ell(X, Y) = \sum_{i=1}^{\ell} \lambda_i C(X_i, Y_i),$$
where X_i denotes the fuzzy set of i-simplices of X and the λ_i are suitably chosen real valued weights. While such an approach will capture the overall topological structure more accurately, it comes at non-negligible computational cost due to the increasingly large numbers of higher dimensional simplices. For this reason current implementations restrict to the 1-skeleton at this time.
3 A Computational View of UMAP
To understand what computations the UMAP algorithm is actually making from a practical point of view, a less theoretical and more computational description may be helpful for the reader. This description of the algorithm lacks the motivation for a number of the choices made. For that motivation please see Section 2.
The theoretical description of the algorithm works in terms of fuzzy simplicial sets. Computationally this is only tractable for the 1-skeleton, which can ultimately be described as a weighted graph.
This means that, from a practical computational perspective, UMAP can ultimately be described in terms of the construction of, and operations on, weighted graphs. In particular this situates UMAP in the class of k-neighbour based graph learning algorithms such as Laplacian Eigenmaps, Isomap and t-SNE.
As with other k-neighbour graph based algorithms, UMAP can be described in two phases. In the first phase a particular weighted k-neighbour graph is constructed. In the second phase a low dimensional layout of this graph is computed. The differences between all algorithms in this class amount to specific details in how the graph is constructed and the layout is computed. The theoretical basis for UMAP as described in Section 2 provides novel approaches to both of these phases.
Finally, since t-SNE is not usually described as a graph based algorithm, a direct comparison of UMAP with t-SNE, using the similarity/probability notation commonly used to express the equations of t-SNE, is given in Appendix C.
3.1 Graph Construction
The first phase of UMAP can be thought of as the construction of a weighted k-neighbour graph. Let X = {x_1, . . . , x_N} be the input dataset, with a metric (or dissimilarity measure) d : X × X → R_{≥0}. Given an input hyper-parameter k, for each x_i we compute the set {x_{i_1}, . . . , x_{i_k}} of the k nearest neighbors of x_i under the metric d. This computation can be performed via any nearest neighbour or approximate nearest neighbour search algorithm. For the purposes of our UMAP implementation we prefer to use the nearest neighbor descent algorithm of [16].
For each x_i we will define ρ_i and σ_i. Let

$$\rho_i = \min\{ d(x_i, x_{i_j}) \mid 1 \leq j \leq k,\ d(x_i, x_{i_j}) > 0 \},$$

and set σ_i to be the value such that

$$\sum_{j=1}^{k} \exp\left( \frac{-\max(0,\ d(x_i, x_{i_j}) - \rho_i)}{\sigma_i} \right) = \log_2(k).$$
We can now define a weighted directed graph Ḡ = (V, E, w). The vertices V of Ḡ are simply the set X. We can then form the set of directed edges

$$E = \{ (x_i, x_{i_j}) \mid 1 \leq j \leq k,\ 1 \leq i \leq N \},$$

and define the weight function w by setting

$$w((x_i, x_{i_j})) = \exp\left( \frac{-\max(0,\ d(x_i, x_{i_j}) - \rho_i)}{\sigma_i} \right).$$
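A simplified sketch of this calibration (ours; the reference implementation uses a more carefully tuned binary search with interpolation of the smoothed distances):

import numpy as np

def smooth_knn_calibration(knn_dists, n_iter=64):
    """Given a (k,)-array of positive distances from x_i to its k nearest
    neighbors, return rho_i, sigma_i, and the edge weights defined above."""
    rho = knn_dists[knn_dists > 0].min()   # distance to nearest neighbor
    target = np.log2(len(knn_dists))
    lo, hi, sigma = 0.0, np.inf, 1.0
    for _ in range(n_iter):                # bisection: the sum grows with sigma
        psum = np.exp(-np.maximum(knn_dists - rho, 0.0) / sigma).sum()
        if psum > target:
            hi = sigma
            sigma = (lo + hi) / 2.0
        else:
            lo = sigma
            sigma = sigma * 2.0 if np.isinf(hi) else (lo + hi) / 2.0
    weights = np.exp(-np.maximum(knn_dists - rho, 0.0) / sigma)
    return rho, sigma, weights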
Let A be the weighted adjacency matrix of Ḡ, and consider the symmetric matrix

$$B = A + A^\top - A \circ A^\top,$$

where ∘ is the Hadamard (or pointwise) product. The UMAP graph G is then an undirected weighted graph whose adjacency matrix is given by B.
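Computationally, this symmetrization is a one-line operation on the sparse adjacency matrix; a sketch using scipy.sparse, applying the probabilistic t-conorm a + b - ab element-wise:

import scipy.sparse as sp

def fuzzy_symmetrize(A):
    """Combine the directed weights A and A^T with the probabilistic
    t-conorm, applied element-wise: B = A + A^T - A . A^T."""
    A = sp.csr_matrix(A)
    T = A.T
    return A + T - A.multiply(T)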
3.2 Graph Layout
In practice UMAP uses a force directed graph layout algorithm in low dimensional space. A force directed graph layout utilizes a set of attractive forces applied along edges and a set of repulsive forces applied among vertices. Any force directed layout algorithm requires a description of both the attractive and repulsive forces. The algorithm proceeds by iteratively applying attractive and repulsive forces at each edge or vertex. Convergence is guaranteed by slowly decreasing the attractive and repulsive forces, in a similar fashion to that used in simulated annealing.
In UMAP the attractive force between two vertices i and j at coordinates y_i and y_j respectively, is determined by:

$$\frac{-2ab\, \|y_i - y_j\|_2^{2(b-1)}}{1 + \|y_i - y_j\|_2^2}\, w((x_i, x_j))\, (y_i - y_j),$$
where a and b are hyper-parameters. Repulsive forces are computed via sampling due to computational constraints. Thus, whenever an attractive force is applied to an edge, one of that edge's vertices is repulsed by a sampling of other vertices. The repulsive force is given by

$$\frac{b}{\left(\epsilon + \|y_i - y_j\|_2^2\right)\left(1 + \|y_i - y_j\|_2^2\right)}\, (1 - w((x_i, x_j)))\, (y_i - y_j).$$
Here ε is a small number to prevent division by zero (0.001 in the current implementation). The algorithm can be initialized randomly, but in practice, since the symmetric Laplacian of the graph G is a discrete approximation of the Laplace-Beltrami operator of the manifold, we can use a spectral layout to initialize the embedding. This provides both faster convergence and greater stability within the algorithm.
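As an illustration, a single stochastic update applying these two forces might look as follows (a simplified sketch of our own; the reference implementation additionally clamps gradient magnitudes and specializes the arithmetic for speed):

import numpy as np

def attractive_step(y_i, y_j, w_ij, a, b, alpha):
    # One attractive update along edge (i, j): y_i moves toward y_j.
    diff = y_i - y_j
    d2 = max(np.dot(diff, diff), 1e-12)  # clamp to avoid 0 ** negative
    force = (-2.0 * a * b * d2 ** (b - 1.0)) / (1.0 + d2) * w_ij * diff
    return y_i + alpha * force

def repulsive_step(y_i, y_c, w_ic, b, alpha, eps=0.001):
    # One repulsive update against a (negatively) sampled vertex y_c.
    diff = y_i - y_c
    d2 = np.dot(diff, diff)
    force = b / ((eps + d2) * (1.0 + d2)) * (1.0 - w_ic) * diff
    return y_i + alpha * force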
4 Implementation and Hyper-parameters
Having completed a theoretical description of the approach, we now turn our attention to the practical realization of this theory. We begin by providing a more detailed description of the algorithm as implemented, and then discuss a few implementation specific details. We conclude this section with a discussion of the hyper-parameters for the algorithm and their practical effects.
4.1 Algorithm description
In overview the UMAP algorithm is relatively straightforward (see Algorithm 1).
When performing a fuzzy union over local fuzzy simplicial sets we have found it most effective to work with the probabilistic t-conorm (as one would expect if treating membership strengths as a probability that the simplex exists). The individual functions for constructing the local fuzzy simplicial sets, determining the spectral embedding, and optimizing the embedding with regard to fuzzy set cross entropy, are described in more detail below.

Algorithm 1 UMAP algorithm
function UMAP(X, n, d, min-dist, n-epochs)
    for all x ∈ X do
        fs-set[x] ← LocalFuzzySimplicialSet(X, x, n)
    top-rep ← ⋃_{x∈X} fs-set[x]    ▷ We recommend the probabilistic t-conorm
    Y ← SpectralEmbedding(top-rep, d)
    Y ← OptimizeEmbedding(top-rep, Y, min-dist, n-epochs)
    return Y
Algorithm 2 describes the construction of local fuzzy simplicial sets. To represent fuzzy simplicial sets we work with the fuzzy set images of [0] and [1] (i.e. the 1-skeleton), which we denote as fs-set_0 and fs-set_1. One can work with higher order simplices as well, but the current implementation does not. We can construct the fuzzy simplicial set local to a given point x by finding the n nearest neighbors, generating the appropriate normalised distance on the manifold, and then converting the finite metric space to a simplicial set via the functor FinSing, which translates into exponential of the negative distance in this case.
Rather than directly using the distance to the n-th nearest neighbor as the normalization, we use a smoothed version of knn-distance that fixes the cardinality of the fuzzy set of 1-simplices to a fixed value. We selected log_2(n) for this purpose based on empirical experiments. This is described briefly in Algorithm 3.
Spectral embedding is performed by considering the 1-skeleton of the global fuzzy topological representation as a weighted graph and using standard spectral methods on its normalized Laplacian.

Algorithm 2 Constructing a local fuzzy simplicial set
function LocalFuzzySimplicialSet(X, x, n)
    knn, knn-dists ← ApproxNearestNeighbors(X, x, n)
    ρ ← knn-dists[1]    ▷ Distance to nearest neighbor
    σ ← SmoothKNNDist(knn-dists, n, ρ)    ▷ Smooth approximator to knn-distance
    fs-set_0 ← X
    fs-set_1 ← {([x, y], 0) | y ∈ X}
    for all y ∈ knn do
        d_{x,y} ← max{0, dist(x, y) − ρ}/σ
        fs-set_1 ← fs-set_1 ∪ ([x, y], exp(−d_{x,y}))
    return fs-set
The first sum depends only on µ, which takes fixed values during the optimization; thus the minimization of cross entropy depends only on the second sum, so we seek to minimize

$$- \sum_{a \in A} \left( \mu(a) \log(\nu(a)) + (1 - \mu(a)) \log(1 - \nu(a)) \right).$$
Following both [45] and [33], we take a sampling based approach to the optimization. We sample 1-simplices with probability µ(a) and update according to the value of ν(a), which handles the term µ(a) log(ν(a)). The term (1 − µ(a)) log(1 − ν(a)) requires negative sampling - rather than computing this over all potential simplices we randomly sample potential 1-simplices and assume them to be a negative example (i.e. with membership strength 0) and update according to the value of 1 − ν(a). In contrast to [45], the above formulation provides a vertex sampling distribution of
$$P(x_i) = \frac{\sum_{\{a \in A \mid d_0(a) = x_i\}} (1 - \mu(a))}{\sum_{\{b \in A \mid d_0(b) \neq x_i\}} (1 - \mu(b))}$$
for negative samples, which can be reasonably approximated by a uniform distribution for sufficiently large data sets. It therefore only remains to find a differentiable approximation to ν(a) for a given 1-simplex a so that gradient descent can be applied for optimization. This is done as follows:
Definition 11. Define Φ : R^d × R^d → [0, 1], a smooth approximation of the membership strength of a 1-simplex between two points in R^d, as

$$\Phi(x, y) = \left( 1 + a \left( \|x - y\|_2^2 \right)^b \right)^{-1},$$
where a and b are chosen by non-linear least squares fitting of Ψ : R^d × R^d → [0, 1], where

$$\Psi(x, y) \triangleq \begin{cases} 1 & \text{if } \|x - y\|_2 \leq \text{min-dist}, \\ \exp(-(\|x - y\|_2 - \text{min-dist})) & \text{otherwise}. \end{cases}$$
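For instance, a and b can be obtained with scipy.optimize.curve_fit; the sketch below (ours, with a hypothetical spread parameter controlling the sampling range, mirroring but not reproducing the reference implementation) fits Φ to Ψ on a grid of distances:

import numpy as np
from scipy.optimize import curve_fit

def fit_phi(min_dist, spread=1.0):
    """Fit the parameters a, b of Phi(d) = (1 + a d^(2b))^-1 to the
    piecewise target Psi defined by min_dist."""
    xv = np.linspace(0, spread * 3, 300)
    yv = np.where(xv <= min_dist, 1.0,
                  np.exp(-(xv - min_dist) / spread))
    phi = lambda d, a, b: 1.0 / (1.0 + a * d ** (2 * b))
    (a, b), _ = curve_fit(phi, xv, yv)
    return a, b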
The optimization process is now executed by stochastic gradient descent as given by Algorithm 5.
Algorithm 5 Optimizing the embedding
function OptimizeEmbedding(top-rep, Y, min-dist, n-epochs)
    α ← 1.0
    Fit Φ from Ψ defined by min-dist
    for e ← 1, . . . , n-epochs do
        for all ([a, b], p) ∈ top-rep_1 do
            if Random() ≤ p then    ▷ Sample simplex with probability p
                y_a ← y_a + α · ∇(log(Φ))(y_a, y_b)
                for i ← 1, . . . , n-neg-samples do
                    c ← random sample from Y
                    y_a ← y_a + α · ∇(log(1 − Φ))(y_a, y_c)
        α ← 1.0 − e/n-epochs
    return Y

This completes the UMAP algorithm.
4.2 Implementation
Practical implementation of this algorithm requires (approximate) k-nearest-neighbor calculation and efficient optimization via stochastic gradient descent.
Efficient approximate k-nearest-neighbor computation can be achieved via the Nearest-Neighbor-Descent algorithm of [16]. The error intrinsic in a dimension reduction technique means that such approximation is more than adequate for these purposes. While no theoretical complexity bounds have been established for Nearest-Neighbor-Descent, the authors of the original paper report an empirical complexity of O(N^1.14). A further benefit of Nearest-Neighbor-Descent is its generality; it works with any valid dissimilarity measure, and is efficient even for high dimensional data.
In optimizing the embedding under the provided objective function, we follow the work of [45], making use of probabilistic edge sampling and negative sampling [33].
This provides a very efficient approximate stochastic gradient descent algorithm, since there is no normalization requirement. Furthermore, since the normalized Laplacian of the fuzzy graph representation of the input data is a discrete approximation of the Laplace-Beltrami operator of the manifold (see [5, 6]), we can provide a suitable initialization for stochastic gradient descent by using the eigenvectors of the normalized Laplacian. The amount of optimization work required will scale with the number of edges in the fuzzy graph (assuming a fixed negative sampling rate), resulting in a complexity of O(kN).
Combining these techniques results in highly efficient embeddings, which we will discuss in Section 5. The overall complexity is bounded by the approximate nearest neighbor search complexity and, as mentioned above, is empirically approximately O(N^1.14). A reference implementation can be found at https://github.com/lmcinnes/umap, and an R implementation can be found at https://github.com/jlmelville/uwot. While our reference implementation is single core for simplicity, it should be noted that both Nearest-Neighbor-Descent and SGD can be parallelised. A parallel multi-core implementation of UMAP is therefore achievable.
4.3 Hyper-parameters
As described in Algorithm 1, the UMAP algorithm takes four hyper-parameters:
1. n, the number of neighbors to consider when approximating the local metric;
2. d, the target embedding dimension;
3. min-dist, the desired separation between close points in the embedding space; and

4. n-epochs, the number of training epochs to use when optimizing the low dimensional representation.

The effects of the parameters d and n-epochs are largely self-evident, and will not be discussed in further detail here. In contrast the effects of the number of neighbors n and of min-dist are less clear.
One can interpret the number of neighbors n as the local scale at which to approximate the manifold as roughly flat, with the manifold estimation averaging over the n neighbors. Manifold features that occur at a smaller scale than within the n nearest-neighbors of points will be lost, while large scale manifold features that cannot be seen by patching together locally flat charts at the scale of n nearest-neighbors may not be well detected. Thus n represents some degree of trade-off between fine grained and large scale manifold features - smaller values will ensure detailed manifold structure is accurately captured (at a loss of the "big picture" view of the manifold), while larger values will capture large scale manifold structures, but at a loss of fine detail structure, which will get averaged out in the local approximations. With smaller n values the manifold tends to be broken into many small connected components (care needs to be taken with the spectral embedding for initialization in such cases).
In contrast min-dist is a hyper-parameter directly affecting the output, as it controls the fuzzy simplicial set construction from the low dimensional representation. It acts in lieu of the distance to the nearest neighbor used to ensure local connectivity. In essence this determines how closely points can be packed together in the low dimensional representation. Low values of min-dist will result in potentially densely packed regions, but will likely more faithfully represent the manifold structure. Increasing the value of min-dist will force the embedding to spread points out more, assisting visualization (and avoiding potential overplotting issues). We view min-dist as an essentially aesthetic parameter, governing the appearance of the embedding, and it is thus more important when using UMAP for visualization.
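In the reference Python implementation (the umap-learn package) these hyper-parameters appear directly as constructor arguments; a typical invocation, using scikit-learn's small digits dataset purely as an example, looks like:

import umap
from sklearn.datasets import load_digits

X = load_digits().data

embedding = umap.UMAP(
    n_neighbors=15,    # n: size of the local neighborhood
    n_components=2,    # d: target embedding dimension
    min_dist=0.1,      # minimum separation of points in the embedding
    n_epochs=200,      # optimization epochs
).fit_transform(X)

print(embedding.shape)  # (1797, 2)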
In Figure 1 we provide examples of the effects of varying the hyperparameters for a toy dataset. The data is uniform random samples from a 3-dimensional color-cube, allowing for easy visualization of the original 3-dimensional coordinates in the embedding space by using the corresponding RGB colour. Since the data fills a 3-dimensional cube there is no local manifold structure, and hence for such data we expect larger n values to be more useful. Low values will interpret the noise from random sampling as fine scale manifold structure, producing potentially spurious structure (see the discussion of the constellation effect in Section 6).

Figure 1: Variation of UMAP hyperparameters n and min-dist result in different embeddings. The data is uniform random samples from a 3-dimensional color-cube, allowing for easy visualization of the original 3-dimensional coordinates in the embedding space by using the corresponding RGB colour. Low values of n spuriously interpret structure from the random sampling noise; see Section 6 for further discussion of this phenomenon.
5 Practical Efficacy
While the strong mathematical foundations of UMAP were the motivation for its development, the algorithm must ultimately be judged by its practical efficacy. In this section we examine the fidelity and performance of low dimensional embeddings of multiple diverse real world data sets under UMAP. The following datasets were considered:

Pen digits [1, 10] is a set of 1797 grayscale images of digits entered using a digitiser tablet. Each image is an 8x8 image which we treat as a single 64 dimensional vector, assumed to be in Euclidean vector space.

COIL 20 [34] is a set of 1440 greyscale images consisting of 20 objects under 72 different rotations spanning 360 degrees. Each image is a 128x128 image which we treat as a single 16384 dimensional vector for the purposes of computing distance between images.

COIL 100 [35] is a set of 7200 colour images consisting of 100 objects under 72 different rotations spanning 360 degrees. Each image consists of 3 128x128 intensity matrices (one for each color channel). We treat this as a single 49152 dimensional vector for the purposes of computing distance between images.

Mouse scRNA-seq [11] is profiled gene expression data for 20,921 cells from an adult mouse. Each sample consists of a vector of 26,774 measurements.

Statlog (Shuttle) [28] is a NASA dataset consisting of various data associated to the positions of radiators in the space shuttle, including a timestamp. The dataset has 58000 points in a 9 dimensional feature space.

MNIST [25] is a dataset of 28x28 pixel grayscale images of handwritten digits. There are 10 digit classes (0 through 9) and 70000 total images. This is treated as 70000 different 784 dimensional vectors.

F-MNIST [53] or Fashion MNIST is a dataset of 28x28 pixel grayscale images of fashion items (clothing, footwear and bags). There are 10 classes and 70000 total images. As with MNIST this is treated as 70000 different 784 dimensional vectors.

Flow cytometry [42, 9] is a dataset of flow cytometry measurements of CDT4 cells comprised of 1,000,000 samples, each with 17 measurements.

GoogleNews word vectors [33] is a dataset of 3 million words and phrases derived from a sample of Google News documents and embedded into a 300 dimensional space via word2vec.
For all the datasets except GoogleNews we use Euclidean distance between vectors. For GoogleNews, as per [33], we use cosine distance (or angular distance in t-SNE, which does not support non-metric distances, in contrast to UMAP).
5.1 Qualitative Comparison of Multiple Algorithms
We compare a number of algorithms - UMAP, t-SNE [51, 49], LargeVis [45], Laplacian Eigenmaps [6], and Principal Component Analysis [22] - on the COIL20 [34], MNIST [25], Fashion-MNIST [53], and GoogleNews [33] datasets. The Isomap algorithm was also tested, but failed to complete in any reasonable time for any of the datasets larger than COIL20. The Multicore t-SNE package [48] was used for t-SNE. The reference implementation [44] was used for LargeVis. The scikit-learn [10] implementations were used for Laplacian Eigenmaps and PCA. Where possible we attempted to tune parameters for each algorithm to give good embeddings.
Historically t-SNE and LargeVis have offered a dramatic improvement in finding and preserving local structure in the data. This can be seen qualitatively by comparing their embeddings to those generated by Laplacian Eigenmaps and PCA in Figure 2. We claim that the quality of embeddings produced by UMAP is comparable to t-SNE when reducing to two or three dimensions. For example, Figure 2 shows both UMAP and t-SNE embeddings of the COIL20, MNIST, Fashion MNIST, and Google News datasets. While the precise embeddings are different, UMAP distinguishes the same structures as t-SNE and LargeVis.
It can be argued that UMAP has captured more of the global and topological structure of the datasets than t-SNE [4]. More of the loops in the COIL20 dataset are kept intact, including the intertwined loops. Similarly, the global relationships among different digits in the MNIST digits dataset are more clearly captured, with 1 (red) and 0 (dark red) at far corners of the embedding space, and 4, 7, 9 (yellow, sea-green, and violet) and 3, 5, 8 (orange, chartreuse, and blue) separated as distinct clumps of similar digits. In the Fashion MNIST dataset the distinction between clothing (dark red, yellow, orange, vermilion) and footwear (chartreuse, sea-green, and violet) is made more clear. Finally, while both t-SNE and UMAP capture groups of similar word vectors, the UMAP embedding arguably evidences a clearer global structure among the various word clusters.
Figure 2: A comparison of several dimension reduction algorithms. We note that UMAP successfully reflects much of the large scale global structure that is well represented by Laplacian Eigenmaps and PCA (particularly for MNIST and Fashion-MNIST), while also preserving the local fine structure similar to t-SNE and LargeVis.

5.2 Embedding Stability

Since UMAP makes use of both stochastic approximate nearest neighbor search, and stochastic gradient descent with negative sampling for optimization, the resulting embedding is necessarily different from run to run, and under sub-sampling of the data. This is potentially a concern for a variety of use cases, so establishing some measure of how stable UMAP embeddings are, particularly under sub-sampling, is of interest. In this subsection we compare the stability under subsampling of UMAP, LargeVis and t-SNE (the three stochastic dimension reduction techniques considered).
To measure the stability of an embedding we make use of the normalized Procrustes distance to measure the distance between two potentially comparable distributions. Given two datasets X = {x_1, . . . , x_N} and Y = {y_1, . . . , y_N} such that x_i corresponds to y_i, we can define the Procrustes distance between the datasets, d_P(X, Y), in the following manner. Determine Y' = {y'_1, . . . , y'_N}, the optimal translation, uniform scaling, and rotation of Y that minimizes the squared error $\sum_{i=1}^{N} (x_i - y'_i)^2$, and define

$$d_P(X, Y) = \sqrt{\sum_{i=1}^{N} (x_i - y'_i)^2}.$$
Since any measure that makes use of distances in the embedding space is potentially sensitive to the extent or scale of the embedding, we normalize the data before computing the Procrustes distance by dividing by the average norm of the embedded dataset. In Figure 3 we visualize the results of using Procrustes alignment of embedding of sub-samples for both UMAP and t-SNE, demonstrating how Procrustes distance can measure the stability of the overall structure of the embedding. Given a measure of distance between different embeddings we can examine stability under sub-sampling by considering the normalized Procrustes distance between the embedding of a sub-sample, and the corresponding sub-sample of an embedding of the full dataset. As the size of the sub-sample increases the average distance per point between the sub-sampled embeddings should decrease, potentially toward some asymptote of maximal agreement under repeated runs. Ideally this asymptotic value would be zero error, but for stochastic embeddings such as UMAP and t-SNE this is not achievable.
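A sketch of this measure (ours, built on scipy.spatial.procrustes, which applies the optimal translation, scaling, and rotation internally):

import numpy as np
from scipy.spatial import procrustes

def procrustes_error_per_point(X_emb, Y_emb):
    """Average Procrustes distance per point between two embeddings of
    the same data (rows of X_emb and Y_emb correspond)."""
    mtx1, mtx2, _ = procrustes(X_emb, Y_emb)
    # Rescale so distances are relative to the average embedded norm,
    # mirroring the normalization described above.
    scale = np.linalg.norm(mtx1, axis=1).mean()
    d = np.linalg.norm(mtx1 - mtx2, axis=1) / scale
    return d.mean()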
We performed an empirical comparison of algorithms with respect to stability using the Flow Cytometry dataset, due to its large size, interesting structure, and low ambient dimensionality (aiding runtime performance for t-SNE). We note that for a dataset this large we found it necessary to increase the default n_iter value for t-SNE from 1000 to 1500 to ensure better convergence. While this had an impact on the runtime, it significantly improved the Procrustes distance results by providing more stable and consistent embeddings. Figure 4 provides a comparison between UMAP and t-SNE, demonstrating that UMAP has significantly more stable results than t-SNE. In particular, after sub-sampling on 5% of the million data points, the per point error for UMAP was already below any value achieved by t-SNE.
5.3 Computational Performance Comparisons
Benchmarks against the real world datasets were performed on a Macbook Pro with a 3.1 GHz Intel Core i7 and 8GB of RAM for Table 1, and on a server with Intel Xeon E5-2697v4 processors and 512GB of RAM for the large scale benchmarking in Subsections 5.3.1, 5.3.2, and 5.3.3.
For t-SNE we chose MulticoreTSNE [48], which we believe to be the fastest extant implementation of Barnes-Hut t-SNE at this time, even when run in single core mode. It should be noted that MulticoreTSNE is a heavily optimized implementation written in C++ based on Van der Maaten's bhtsne [49] code.
As a fast alternative approach to t-SNE we also consider the FIt-SNE algorithm [30]. We used the reference implementation [29], which, like MulticoreTSNE, is an optimized C++ implementation. We also note that FIt-SNE makes use of multiple cores.

Figure 4: Comparison of average Procrustes distance per point for t-SNE, LargeVis and UMAP over a variety of sizes of subsamples from the full Flow Cytometry dataset. UMAP sub-sample embeddings are very close to the full embedding even for subsamples of 5% of the full dataset, outperforming the results of t-SNE and LargeVis even when they use the full Flow Cytometry dataset.
LargeVis [45] was benchmarked using the reference implementation [44]. It was run with default parameters including use of 8 threads on the 4-core machine.
The only exceptions were small datasets, where we explicitly set the -samples parameter to n_samples/100, as per the recommended values in the documentation of the reference implementation.
The Isomap [46] and Laplacian Eigenmaps [6] implementations in scikit-learn [10] were used. We suspect the Laplacian Eigenmaps implementation may not be well optimized for large datasets, but did not find a better performing implementation that provided comparable quality results. Isomap failed to complete for the Shuttle, Fashion-MNIST, MNIST and GoogleNews datasets, while Laplacian Eigenmaps failed to run for the GoogleNews dataset.
To allow a broader range of algorithms to run, some of the datasets were subsampled or had their dimension reduced by PCA. The Flow Cytometry dataset was benchmarked on a 10% sample and the GoogleNews dataset was subsampled down to 200,000 data points. Finally, the Mouse scRNA dataset was reduced to 1,000 dimensions via PCA.
Timings were performed for the COIL20 [34], COIL100 [35], Shuttle [28], MNIST [25], Fashion-MNIST [53], and GoogleNews [33] datasets. Results can be seen in Table 1. UMAP consistently performs faster than any of the other algorithms, aside from on the very small Pendigits dataset, where Laplacian Eigenmaps and Isomap have a small edge.
5.3.1 Scaling with Embedding Dimension
UMAP is significantly more performant than t-SNE when embedding into dimensions larger than 2. This is particularly important when the intention is to use the low dimensional representation for further machine learning tasks such as clustering or anomaly detection rather than merely for visualization. The computational performance of UMAP is far superior to that of t-SNE, even for very small embedding dimensions of 6 or 8 (see Figure 5). This is largely due to the fact that UMAP does not require global normalisation. This allows the algorithm to work without the need for space trees - such as the quad-trees and oct-trees that t-SNE uses [49]. Such space trees scale exponentially in dimension, resulting in t-SNE's relatively poor scaling with respect to embedding dimension.

Table 1: Runtime of several dimension reduction algorithms (UMAP, FIt-SNE, t-SNE, LargeVis, Laplacian Eigenmaps, and Isomap) on a variety of datasets. To allow a broader range of algorithms to run, some of the datasets were subsampled or had their dimension reduced by PCA. The Flow Cytometry dataset was benchmarked on a 10% sample and the GoogleNews dataset was subsampled down to 200,000 data points. Finally, the Mouse scRNA dataset was reduced to 1,000 dimensions via PCA. The fastest runtime for each dataset has been bolded.
By contrast, we see that UMAP consistently scales well in embedding dimension, making the algorithm practical for a wider range of applications beyond visualization.
Figure 5: (a) A comparison of run time for UMAP, t-SNE and LargeVis with respect to embedding dimension on the Pen digits dataset. We see that t-SNE scales worse than exponentially, while UMAP and LargeVis scale linearly with a slope so slight as to be undetectable at this scale. (b) Detail of scaling for embedding dimension of six or less. We can see that UMAP and LargeVis are essentially flat. In practice they appear to scale linearly, but the slope is essentially undetectable at this scale.
5.3.2 Scaling with Ambient Dimension
Through a combination of the local-connectivity constraint and the approximate nearest neighbor search, UMAP can perform effective dimension reduction even for very high dimensional data (see Figure 9 for an example of UMAP operating directly on 1.8 million dimensional data). This stands in contrast to many other manifold learning techniques, including t-SNE and LargeVis, for which it is generally recommended to reduce the dimension with PCA before applying these techniques (see [50] for example).
To compare runtime performance scaling with respect to the ambient dimension of the data we chose to use the Mouse scRNA dataset, which is high dimensional, but is also amenable to the use of PCA to reduce the dimension of the data as a pre-processing step without losing too much of the important structure. We compare the performance of UMAP, FIt-SNE, MulticoreTSNE, and LargeVis on PCA reductions of the Mouse scRNA dataset to varying dimensionalities, and on the original dataset, in Figure 6.
While all the implementations tested show a significant increase in runtime with increasing dimension, UMAP is dramatically more efficient for large ambient dimensions, easily scaling to run on the original unreduced dataset. The ability to run manifold learning on raw source data, rather than dimension reduced data that may have lost important manifold structure in the pre-processing, is a significant advantage.
Since UMAP scales well with ambient dimension the python implementation also supports input in sparse matrix format, allowing scaling to extremely high dimensional data, such as the integer data shown in Figures 9 and 10.
5.3.3 Scaling with the Number of Samples
For dataset size performance comparisons we chose to compare UMAP with FIt-SNE [30], a version of t-SNE that uses approximate nearest neighbor search and a Fourier interpolation optimisation approach; MulticoreTSNE [48], which we believe to be the fastest extant implementation of Barnes-Hut t-SNE; and LargeVis [45]. It should be noted that FIt-SNE, MulticoreTSNE, and LargeVis are all heavily optimized implementations written in C++. In contrast our UMAP implementation was written in Python - making use of the numba [24] library for performance. MulticoreTSNE and LargeVis were run in single threaded mode to make fair comparisons to our single threaded UMAP implementation.
We benchmarked all four implementations using subsamples of the GoogleNews dataset. The results can be seen in Figure 7. This demonstrates that UMAP has superior scaling performance in comparison to Barnes-Hut t-SNE, even when Barnes-Hut t-SNE is given multiple cores. Asymptotic scaling of UMAP is comparable to that of FIt-SNE (and LargeVis). On this dataset UMAP demonstrated somewhat faster absolute performance compared to FIt-SNE, and was dramatically faster than LargeVis. The UMAP embedding of the full GoogleNews dataset of 3 million word vectors, as seen in Figure 8, was completed in around 200 minutes, as compared with several days required for MulticoreTSNE, even using multiple cores.
To scale even further we were inspired by the work of John Williamson on embedding integers [52], as represented by (sparse) binary vectors of their prime divisibility. This allows the generation of arbitrarily large, extremely high dimension datasets that still have meaningful structure to be explored. In Figures 9 and 10 we show an embedding of 30,000,000 data samples from an ambient space of approximately 1.8 million dimensions. This computation took approximately 2 weeks on a large memory SMP. Note that despite the high ambient dimension, and vast amount of data, UMAP is still able to find and display interesting structure. In Figure 11 we show local regions of the embedding, demonstrating the fine detail structure that was captured.
6 Weaknesses
While we believe UMAP to be a very effective algorithm for both visualization and dimension reduction, most algorithms must make trade-offs and UMAP is no exception. In this section we will briefly discuss those areas or use cases where UMAP is less effective, and suggest potential alternatives. For a number of use cases the interpretability of the reduced dimension results is of critical importance. Similarly to most non-linear dimension reduction techniques (including t-SNE and Isomap), UMAP lacks the strong interpretability of Principal Component Analysis (PCA) and related techniques such as Non-Negative Matrix Factorization (NMF). In particular the dimensions of the UMAP embedding space have no specific meaning, unlike PCA, where the dimensions are the directions of greatest variance in the source data. Furthermore, since UMAP is based on the distance between observations rather than the source features, it does not have an equivalent of the factor loadings that linear techniques such as PCA, or Factor Analysis, can provide. If strong interpretability is critical we therefore recommend linear techniques such as PCA and NMF.
One of the core assumptions of UMAP is that there exists manifold structure in the data. Because of this UMAP can tend to find manifold structure within the noise of a dataset - similar to the way the human mind finds structured constellations among the stars. As more data is sampled, the amount of structure evident from noise will tend to decrease and UMAP becomes more robust; however, care must be taken with small sample sizes of noisy data, or data with only large scale manifold structure. Detecting when a spurious embedding has occurred is a topic of further research.
UMAP is derived from the axiom that local distance is of more importance than long range distances (similar to techniques like t-SNE and LargeVis). UMAP therefore concerns itself primarily with accurately representing local structure. While we believe that UMAP can capture more global structure than these other techniques, it remains true that if global structure is of primary interest then UMAP may not be the best choice for dimension reduction.
Finally, to improve the computational efficiency of the algorithm a number of approximations are made. This can have an impact on the results of UMAP for small (less than 500 samples) dataset sizes. In particular the use of approximate nearest neighbor algorithms, and the negative sampling used in optimization, can result in suboptimal embeddings. For this reason we encourage users to take care with particularly small datasets. A slower but exact implementation of UMAP for small datasets is a future project.
7 Future Work
Having established both relevant mathematical theory and a concrete implementation, there still remains significant scope for future developments of UMAP.
Making use of the fuzzy simplicial set representation of data, UMAP can potentially be extended to support (semi-)supervised dimension reduction, and dimension reduction for datasets with heterogeneous data types. Each data type (or prediction variable in the supervised case) can be seen as an alternative view of the underlying structure, each with a different associated metric - for example categorical data may use Jaccard or Dice distance, while ordinal data might use Manhattan distance. Each view and metric can be used to independently generate fuzzy simplicial sets, which can then be intersected together to create a single fuzzy simplicial set for embedding. Extending UMAP to work with mixed data types would vastly increase the range of datasets to which it can be applied. Use cases for (semi-)supervised dimension reduction include semi-supervised clustering, and interactive labelling tools. The computational framework established for UMAP allows for the potential development of techniques to add new unseen data points into an existing embedding, and to generate high dimensional representations of arbitrary points in the embedded space. Furthermore, the combination of supervision and the addition of new samples to an existing embedding provides avenues for metric learning.
The addition of new samples to an existing embedding would allow UMAP to be used as a feature engineering tool as part of a general machine learning pipeline for either clustering or classification tasks. Pulling points back to the original high dimensional space from the embedded space would potentially allow UMAP to be used as a generative model similar to some use cases for autoencoders. Finally, there are many use cases for metric learning; see [54] or [7] for example.
There also remains significant scope to develop techniques to both detect and mitigate against potentially spurious embeddings, particularly for small data cases. The addition of such techniques would make UMAP far more robust as a tool for exploratory data analysis, a common use case when reducing to two dimensions for visualization purposes.
Experimental versions of some of this work are already available in the referenced implementations.
8 Conclusions
We have developed a general purpose dimension reduction technique that is grounded in strong mathematical foundations. The algorithm implementing this technique is demonstrably faster than t-SNE and provides better scaling. This allows us to generate high quality embeddings of larger data sets than had previously been attainable. The use and effectiveness of UMAP in various scientific fields demonstrates the strength of the algorithm.
A Proof of Lemma 1

If B is contained in U, then g is constant in B and hence det(g) is constant and can be brought outside the integral. Thus, the volume of B is

$$\sqrt{\det(g)} \int_B dx^1 \wedge \cdots \wedge dx^n = \sqrt{\det(g)}\, \frac{\pi^{n/2} r^n}{\Gamma(n/2 + 1)},$$
where r is the radius of the ball in the ambient R^n. If we fix the volume of the ball to be $\frac{\pi^{n/2}}{\Gamma(n/2+1)}$ we arrive at the requirement that

$$\det(g) = \frac{1}{r^{2n}}.$$
Now, since g is assumed to be diagonal with constant entries, we can solve for g itself as

$$g_{ij} = \begin{cases} \frac{1}{r^2} & \text{if } i = j, \\ 0 & \text{otherwise}. \end{cases} \tag{2}$$
e geodesic distance on M under g from p to q (where p, q ∈ B) is de ned as
$$\inf_{c \in C} \int_a^b \sqrt{g(\dot{c}(t), \dot{c}(t))}\, dt,$$
where $C$ is the class of smooth curves $c$ on $\mathcal{M}$ such that $c(a) = p$ and $c(b) = q$, and $\dot{c}$ denotes the first derivative of $c$ on $\mathcal{M}$. Given that $g$ is as defined in (2) we see that this can be simplified to
$$\inf_{c \in C} \int_a^b \sqrt{\tfrac{1}{r^2} \langle \dot{c}(t), \dot{c}(t) \rangle}\, dt = \frac{1}{r} \inf_{c \in C} \int_a^b \sqrt{\langle \dot{c}(t), \dot{c}(t) \rangle}\, dt = \frac{1}{r} d_{\mathbb{R}^n}(p, q).$$
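As a quick numerical illustration of Lemma 1 (a hypothetical check, not part of the paper's code), the length of the straight chord under the constant metric $g = I/r^2$ reproduces the $\frac{1}{r} d_{\mathbb{R}^n}(p, q)$ scaling:

import numpy as np

def metric_length(p, q, r, steps=1000):
    """Length of the straight chord from p to q under g = I / r**2."""
    seg = (q - p) / steps                       # uniform partition of the chord
    return steps * np.sqrt((seg @ seg) / r**2)  # sum of sqrt(g(dc, dc)) over segments

p, q, r = np.zeros(3), np.array([1.0, 2.0, 2.0]), 0.5
assert np.isclose(metric_length(p, q, r), np.linalg.norm(q - p) / r)  # both equal 6.0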
C From t-SNE to UMAP
As an aid to implementation of UMAP and to illuminate the algorithmic similarities with t-SNE and LargeVis, here we review the main equations used in those methods, and then present the equivalent UMAP expressions in a notation which may be more familiar to users of those other methods.
In what follows we are concerned with defining similarities between two objects $i$ and $j$ in the high dimensional input space $X$ and low dimensional embedded space $Y$. These are normalized and symmetrized in various ways. In a typical implementation, these pair-wise quantities are stored and manipulated as (potentially sparse) matrices.
Quantities with the subscript $ij$ are symmetric, i.e. $v_{ij} = v_{ji}$. Extending the conditional probability notation used in t-SNE, $j|i$ indicates an asymmetric similarity, i.e. $v_{j|i} \neq v_{i|j}$. t-SNE defines input probabilities in three stages. First, for each pair of points, $i$ and $j$, in $X$, a pair-wise similarity, $v_{ij}$, is calculated, Gaussian with respect to the Euclidean distance between $x_i$ and $x_j$:
$$v_{j|i} = \exp(-\|x_i - x_j\|_2^2 / 2\sigma_i^2) \tag{5}$$

where $\sigma_i^2$ is the variance of the Gaussian. Second, the similarities are converted into $N$ conditional probability distributions by normalization:

$$p_{j|i} = \frac{v_{j|i}}{\sum_{k \neq i} v_{k|i}} \tag{6}$$

$\sigma_i$ is chosen by searching for a value such that the perplexity of the probability distribution $p_{\cdot|i}$ matches a user-specified value. Third, these probability distributions are symmetrized and then further normalized over the entire matrix of values to give:

$$p_{ij} = \frac{p_{j|i} + p_{i|j}}{2N} \tag{7}$$

Similarities between pairs of points in the output space $Y$ are defined using a Student t-distribution with one degree of freedom on the squared Euclidean distance:

$$w_{ij} = \left(1 + \|y_i - y_j\|_2^2\right)^{-1} \tag{8}$$

followed by the matrix-wise normalization, to form $q_{ij}$:

$$q_{ij} = \frac{w_{ij}}{\sum_{k \neq l} w_{kl}} \tag{9}$$

The t-SNE cost is the Kullback-Leibler divergence between the two probability distributions:

$$C_{t\text{-}SNE} = \sum_{i \neq j} p_{ij} \log \frac{p_{ij}}{q_{ij}} \tag{10}$$

this can be expanded into constant and non-constant contributions:

$$C_{t\text{-}SNE} = \sum_{i \neq j} p_{ij} \log p_{ij} - \sum_{i \neq j} p_{ij} \log q_{ij} \tag{11}$$

Because both $p_{ij}$ and $q_{ij}$ require calculations over all pairs of points, improving the efficiency of t-SNE algorithms has involved separate strategies for approximating these quantities. Similarities in the high dimensions are effectively zero outside of the nearest neighbors of each point due to the calibration of the $p_{j|i}$ values to reproduce a desired perplexity. Therefore an approximation used in Barnes-Hut t-SNE is to only calculate $v_{j|i}$ for $n$ nearest neighbors of $i$, where $n$ is a multiple of the user-selected perplexity, and to assume $v_{j|i} = 0$ for all other $j$. Because the low dimensional coordinates change with each iteration, a different approach is used to approximate $q_{ij}$. In Barnes-Hut t-SNE and related methods this usually involves grouping together points whose contributions can be approximated as a single point.

LargeVis uses a similar approach to Barnes-Hut t-SNE when approximating $p_{ij}$, but further improves efficiency by only requiring approximate nearest neighbors for each point. For the low dimensional coordinates, it abandons normalization of $w_{ij}$ entirely. Rather than use the Kullback-Leibler divergence, it optimizes a likelihood function, and hence is maximized, not minimized:

$$C_{LV} = \sum_{i \neq j} p_{ij} \log w_{ij} + \gamma \sum_{i \neq j} \log (1 - w_{ij}) \tag{12}$$

$p_{ij}$ and $w_{ij}$ are defined as in Barnes-Hut t-SNE (apart from the use of approximate nearest neighbors for $p_{ij}$) and $\gamma$ is a user-chosen positive constant which weights the strength of the repulsive contributions (second term) relative to the attractive contribution (first term). Note also that the first term resembles the optimizable part of the Kullback-Leibler divergence but using $w_{ij}$ instead of $q_{ij}$. Abandoning calculation of $q_{ij}$ is a crucial change, because the LargeVis cost function is amenable to optimization via stochastic gradient descent.
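For reference, a dense-matrix sketch of the t-SNE input pipeline of Eqs. (5)-(7) follows (hypothetical helper, for illustration only; real implementations restrict the computation to nearest neighbors and calibrate each sigma by perplexity rather than taking it as given):

import numpy as np

def tsne_input_probabilities(d2, sigma):
    """d2: (N, N) squared Euclidean distances; sigma: (N,) bandwidths."""
    v = np.exp(-d2 / (2.0 * sigma[:, None] ** 2))   # Eq. (5)
    np.fill_diagonal(v, 0.0)
    p_cond = v / v.sum(axis=1, keepdims=True)       # Eq. (6): each row sums to 1
    n = d2.shape[0]
    return (p_cond + p_cond.T) / (2.0 * n)          # Eq. (7): symmetric, sums to 1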
Ignoring specific definitions of $v_{ij}$ and $w_{ij}$, the UMAP cost function, the cross entropy, is:
$$C_{UMAP} = \sum_{i \neq j} \left[ v_{ij} \log \frac{v_{ij}}{w_{ij}} + (1 - v_{ij}) \log \frac{1 - v_{ij}}{1 - w_{ij}} \right] \tag{13}$$
Like the Kullback-Leibler divergence, this can be arranged into two constant contributions (those containing v ij only) and two optimizable contributions (containing w ij ):
$$C_{UMAP} = \sum_{i \neq j} \Big[ v_{ij} \log v_{ij} + (1 - v_{ij}) \log (1 - v_{ij}) - v_{ij} \log w_{ij} - (1 - v_{ij}) \log (1 - w_{ij}) \Big] \tag{14}$$
Ignoring the two constant terms, the UMAP cost function has a very similar form to that of LargeVis, but without a $\gamma$ term to weight the repulsive component of the cost function, and without requiring matrix-wise normalization in the high dimensional space. The cost function for UMAP can therefore be optimized (in this case, minimized) with stochastic gradient descent in the same way as LargeVis.
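A direct transcription of Eq. (13) for dense arrays is given below (a sketch for illustration; it is not the stochastic, negative-sampling optimizer the implementation actually uses, and the epsilon clipping is an assumption to keep the logarithms finite):

import numpy as np

def umap_cross_entropy(v, w, eps=1e-12):
    """Fuzzy set cross entropy of Eq. (13) for dense v, w with entries in [0, 1]."""
    w = np.clip(w, eps, 1.0 - eps)
    attract = v * np.log(np.clip(v, eps, 1.0) / w)
    repel = (1.0 - v) * np.log(np.clip(1.0 - v, eps, 1.0) / (1.0 - w))
    return float(np.sum(attract + repel))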
Although the above discussion places UMAP in the same family of methods as t-SNE and LargeVis, it does not use the same definitions for $v_{ij}$ and $w_{ij}$. Using the notation established above, we now provide the equivalent expressions for the UMAP similarities. In the high dimensional space, the similarities $v_{j|i}$ are the local fuzzy simplicial set memberships, based on the smooth nearest neighbors distances:
$$v_{j|i} = \exp[-(d(x_i, x_j) - \rho_i)/\sigma_i] \tag{15}$$
As with LargeVis, $v_{j|i}$ is calculated only for $n$ approximate nearest neighbors and $v_{j|i} = 0$ for all other $j$. $d(x_i, x_j)$ is the distance between $x_i$ and $x_j$, which UMAP does not require to be Euclidean. $\rho_i$ is the distance to the nearest neighbor of $i$. $\sigma_i$ is the normalizing factor, which is chosen by Algorithm 3 and plays a similar role to the perplexity-based calibration of $\sigma_i$ in t-SNE. Calculation of $v_{j|i}$ with Equation 15 corresponds to Algorithm 2.
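A one-point sketch of Eq. (15) in Python (hypothetical helper names; the clamp of the exponent at zero, so that the nearest neighbor gets membership 1, is an assumption modeled on the reference implementation):

import numpy as np

def membership_strengths(knn_dists, rho_i, sigma_i):
    """Eq. (15) for one point i, given its k-nearest-neighbor distances."""
    return np.exp(-np.maximum(knn_dists - rho_i, 0.0) / sigma_i)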
Symmetrization is carried out by fuzzy set union using the probabilistic t-conorm and can be expressed as:
$$v_{ij} = v_{j|i} + v_{i|j} - v_{j|i} v_{i|j} \tag{16}$$
Equation 16 corresponds to forming top-rep in Algorithm 1. Unlike t-SNE, further normalization is not carried out. The low dimensional similarities are given by:
$$w_{ij} = \left(1 + a \|y_i - y_j\|_2^{2b}\right)^{-1} \tag{17}$$
where $a$ and $b$ are user-defined positive values. The procedure for finding them is given in Definition 11. Use of this procedure with the default values in the UMAP implementation results in $a \approx 1.929$ and $b \approx 0.7915$. Setting $a = 1$ and $b = 1$ results in the Student t-distribution used in t-SNE.
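The remaining pieces can be sketched in a few lines of Python: the fuzzy union of Eq. (16), the low dimensional similarity of Eq. (17), and a least-squares fit for a and b in the spirit of Definition 11. The target curve below (flat out to min_dist, then exponential decay) is an assumption modeled on the reference implementation, not a quotation of it:

import numpy as np
from scipy.optimize import curve_fit

def fuzzy_union(v):
    """Eq. (16) for a dense array of memberships (probabilistic t-conorm)."""
    return v + v.T - v * v.T

def low_dim_similarity(dist, a, b):
    """Eq. (17) as a function of embedding-space distance."""
    return 1.0 / (1.0 + a * dist ** (2.0 * b))

def find_ab(min_dist=0.1, spread=1.0):
    """Fit a, b so Eq. (17) approximates the desired offset-exponential curve."""
    x = np.linspace(0.0, 3.0 * spread, 300)
    target = np.where(x < min_dist, 1.0, np.exp(-(x - min_dist) / spread))
    (a, b), _ = curve_fit(low_dim_similarity, x, target)
    return a, b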
Definition 8. Define the functor FinSing : FinEPMet → Fin-sFuzz by $\mathrm{FinSing}(Y) : ([n], [0, a)) \mapsto \hom_{\mathrm{FinEPMet}}(\mathrm{FinReal}(\Delta^n_{<a}), Y)$.

Theorem 1. The functors FinReal : Fin-sFuzz → FinEPMet and FinSing : FinEPMet → Fin-sFuzz form an adjunction with FinReal the left adjoint and FinSing the right adjoint. The proof of this is by construction. Appendix B provides a full proof of the theorem.
Algorithm 3 Compute the normalizing factor for distances σ

function SmoothKNNDist(knn-dists, n, ρ)
    Binary search for σ such that $\sum_{i=1}^{n} \exp(-(\text{knn-dists}_i - \rho)/\sigma) = \log_2(n)$
    return σ

methods on the symmetric normalized Laplacian. This process is described in Algorithm 4.

Algorithm 4 Spectral embedding for initialization

function SpectralEmbed(top-rep, d)
    A ← 1-skeleton of top-rep expressed as a weighted adjacency matrix
    D ← degree matrix for the graph A
    L ← $D^{-1/2}(D - A)D^{-1/2}$
    evec ← Eigenvectors of L (sorted)
    Y ← evec[1..d + 1]    (0-based indexing assumed)
    return Y

The final major component of UMAP is the optimization of the embedding through minimization of the fuzzy set cross entropy. Recall that fuzzy set cross entropy, with respect to given membership functions μ and ν, is given by

$$C((A, \mu), (A, \nu)) = \sum_{a \in A} \left( \mu(a) \log \frac{\mu(a)}{\nu(a)} + (1 - \mu(a)) \log \frac{1 - \mu(a)}{1 - \nu(a)} \right).$$
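The binary search in Algorithm 3 can be sketched as follows (a minimal Python sketch with hypothetical names; the reference implementation additionally bounds σ from below and handles degenerate neighbor distances):

import numpy as np

def smooth_knn_dist(knn_dists, rho, n_iter=64, tol=1e-5):
    """Find sigma so the smoothed memberships of the k neighbors sum to log2(k)."""
    target = np.log2(len(knn_dists))
    lo, hi = 0.0, np.inf
    sigma = 1.0
    for _ in range(n_iter):
        total = np.sum(np.exp(-np.maximum(knn_dists - rho, 0.0) / sigma))
        if abs(total - target) < tol:
            break
        if total > target:            # memberships too large: shrink sigma
            hi = sigma
            sigma = (lo + hi) / 2.0
        else:                         # memberships too small: grow sigma
            lo = sigma
            sigma = sigma * 2.0 if np.isinf(hi) else (lo + hi) / 2.0
    return sigma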
Figure 3: Procrustes based alignment of a 10% subsample (red) against the full dataset (blue) for the flow cytometry dataset for both UMAP and t-SNE.
Figure 5: Scaling performance with respect to embedding dimension of UMAP, t-SNE and LargeVis on the Pen digits dataset.
Figure 6: Runtime performance scaling of UMAP, t-SNE, FIt-SNE and LargeVis with respect to the ambient dimension of the data. As the ambient dimension increases beyond a few thousand dimensions the computational cost of t-SNE, FIt-SNE, and LargeVis all increase dramatically, while UMAP continues to perform well into the tens-of-thousands of dimensions.
Figure 7: Runtime performance scaling of t-SNE and UMAP on various sized subsamples of the full Google News dataset. The lower t-SNE line is the wall clock runtime for Multicore t-SNE using 8 cores.
Figure 8: Visualization of the full 3 million word vectors from the GoogleNews dataset as embedded by UMAP.
Figure 9: Visualization of 30,000,000 integers as represented by binary vectors of prime divisibility, colored by density of points.
Figure 10: Visualization of 30,000,000 integers as represented by binary vectors of prime divisibility, colored by integer value of the point (larger values are green or yellow, smaller values are blue or purple).

Figure 11: Zooming in on various regions of the integer embedding reveals further layers of fine structure have been preserved.
B Proof that FinReal and FinSing are adjoint

Theorem 2. The functors FinReal : Fin-sFuzz → FinEPMet and FinSing : FinEPMet → Fin-sFuzz form an adjunction with FinReal the left adjoint and FinSing the right adjoint.

Proof. The adjunction is evident by construction, but can be made more explicit as follows. Define a functor $F : \Delta \times I \to \mathrm{FinEPMet}$ by $F([n], [0, a)) = (\{x_1, x_2, \ldots, x_n\}, d_a)$, where

$$d_a(x_i, x_j) = \begin{cases} -\log(a) & \text{if } i \neq j, \\ 0 & \text{otherwise}. \end{cases}$$

Now FinSing can be defined in terms of $F$ as $\mathrm{FinSing}(Y) : ([n], [0, a)) \mapsto \hom_{\mathrm{FinEPMet}}(F([n], [0, a)), Y)$, where the face maps $d_i$ are given by pre-composition with $F d_i$, and similarly for degeneracy maps, at any given value of $a$. Furthermore post-composition with $F$ level-wise for each $a$ defines maps of fuzzy simplicial sets, making FinSing a functor. We now construct FinReal as the left Kan extension of $F$ along the Yoneda embedding $y : \Delta \times I \to \text{Fin-sFuzz}$; this results in a definition of FinReal at a fuzzy simplicial set $X$ as a colimit:

$$\mathrm{FinReal}(X) = \operatorname{colim}_{y([n],[0,a)) \to X} F([n], [0, a)).$$

Further, it follows from the Yoneda lemma that $\mathrm{FinReal}(\Delta^n_{<a}) \cong F([n], [0, a))$, and hence this definition as a left Kan extension agrees with Definition 7, and the definition of FinSing above agrees with that of Definition 8. To see that FinReal and FinSing are adjoint we note that

$$\hom_{\text{Fin-sFuzz}}(\Delta^n_{<a}, \mathrm{FinSing}(Y)) \cong \mathrm{FinSing}(Y)^n_{<a} = \hom_{\mathrm{FinEPMet}}(F([n], [0, a)), Y) \cong \hom_{\mathrm{FinEPMet}}(\mathrm{FinReal}(\Delta^n_{<a}), Y). \tag{4}$$

The first isomorphism follows from the Yoneda lemma, the equality is by construction, and the final isomorphism follows by another application of the Yoneda lemma. Since every simplicial set can be canonically expressed as a colimit of standard simplices and FinReal commutes with colimits (as it was defined via a colimit formula), it follows that FinReal is completely determined by its image on standard simplices. As a result the isomorphism of equation (4) extends to the required isomorphism demonstrating the adjunction.
Table 1: Runtime of several dimension reduction algorithms on various datasets.
Comparisons were performed against MulticoreTSNE as the current implementation of FIt-SNE does not support embedding into any dimension larger than 2.
In contrast to COIL100, on which PCA destroys much of the manifold structure
E Alpaydin and Fevzi Alimoglu. Pen-based recognition of handwritten digits data set. UCI Machine Learning Repository. Irvine: University of California, 4(2), 1998.

Frederik Otzen Bagger, Savvas Kinalis, and Nicolas Rapin. Bloodspot: a database of healthy and malignant haematopoiesis updated with purified and single cell mrna sequencing profiles. Nucleic Acids Research, 2018.

Michael Barr. Fuzzy set theory and topos theory. Canad. Math. Bull, 29(4):501-508, 1986.

Etienne Becht, Charles-Antoine Dutertre, Immanuel W.H. Kwok, Lai Guan Ng, Florent Ginhoux, and Evan W Newell. Evaluation of umap as an alternative to t-sne for single-cell data. bioRxiv, 2018.

Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in Neural Information Processing Systems, pages 585-591, 2002.

Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373-1396, 2003.

Aurélien Bellet, Amaury Habrard, and Marc Sebban. A survey on metric learning for feature vectors and structured data. arXiv preprint arXiv:1306.6709, 2013.

Kenneth Blomqvist, Samuel Kaski, and Markus Heinonen. Deep convolutional gaussian processes. arXiv preprint arXiv:1810.03052, 2018.

Tess Brodie, Elena Brenna, and Federica Sallusto. Omip-018: Chemokine receptor expression on human t helper cells. Cytometry Part A, 83(6):530-532, 2013.

Lars Buitinck, Gilles Louppe, Mathieu Blondel, Fabian Pedregosa, Andreas Mueller, Olivier Grisel, Vlad Niculae, Peter Prettenhofer, Alexandre Gramfort, Jaques Grobler, Robert Layton, Jake VanderPlas, Arnaud Joly, Brian Holt, and Gaël Varoquaux. API design for machine learning software: experiences from the scikit-learn project. In ECML PKDD Workshop: Languages for Data Mining and Machine Learning, pages 108-122, 2013.

John N Campbell, Evan Z Macosko, Henning Fenselau, Tune H Pers, Anna Lyubetskaya, Danielle Tenen, Melissa Goldman, Anne MJ Verstegen, Jon M Resch, Steven A McCarroll, et al. A molecular census of arcuate hypothalamus and median eminence cell types. Nature Neuroscience, 20(3):484, 2017.

Gunnar Carlsson and Facundo Mémoli. Classifying clustering schemes. Foundations of Computational Mathematics, 13(2):221-252, 2013.

Brian Clark, Genevieve Stein-O'Brien, Fion Shiau, Gabrielle Cannon, Emily Davis, Thomas Sherman, Fatemeh Rajaii, Rebecca James-Esposito, Richard Gronostajski, Elana Fertig, et al. Comprehensive analysis of retinal development at single cell resolution identifies NFI factors as essential for mitotic exit and specification of late-born cells. bioRxiv, page 378950, 2018.

Ronald R Coifman and Stéphane Lafon. Diffusion maps. Applied and Computational Harmonic Analysis, 21(1):5-30, 2006.

Alex Diaz-Papkovich, Luke Anderson-Trocme, and Simon Gravel. Revealing multi-scale population structure in large cohorts. bioRxiv, page 423632, 2018.

Wei Dong, Charikar Moses, and Kai Li. Efficient k-nearest neighbor graph construction for generic similarity measures. In Proceedings of the 20th International Conference on World Wide Web, WWW '11, pages 577-586, New York, NY, USA, 2011. ACM.

Carlos Escolano, Marta R Costa-jussà, and José A. R. Fonollosa. (Self-attentive) autoencoder-based universal language representation for machine translation. arXiv preprint arXiv:1810.06351, 2018.
Greg Friedman. Survey article: an elementary illustrated introduction to simplicial sets. Rocky Mountain Journal of Mathematics, 42(2):353-423, 2012.

Lukas Fuhrimann, Vahid Moosavi, Patrick Ole Ohlbrock, and Pierluigi Dacunto. Data-driven design: Exploring new structural forms using machine learning and graphic statics. arXiv preprint arXiv:1809.08660, 2018.

Benoit Gaujac, Ilya Feige, and David Barber. Gaussian mixture models with wasserstein distance. arXiv preprint arXiv:1806.04465, 2018.

Paul G Goerss and John F Jardine. Simplicial homotopy theory. Springer Science & Business Media, 2009.

Harold Hotelling. Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology, 24(6):417, 1933.

J. B. Kruskal. Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis. Psychometrika, 29(1):1-27, 1964.

Siu Kwan Lam, Antoine Pitrou, and Stanley Seibert. Numba: A llvm-based python jit compiler. In Proceedings of the Second Workshop on the LLVM Compiler Infrastructure in HPC, LLVM '15, pages 7:1-7:6, New York, NY, USA, 2015. ACM.

Yann LeCun and Corinna Cortes. The MNIST database of handwritten digits.

John A Lee and Michel Verleysen. Shift-invariant similarities circumvent distance concentration in stochastic neighbor embedding and variants. Procedia Computer Science, 4:538-547, 2011.

Xin Li, Ondrej E Dyck, Mark P Oxley, Andrew R Lupini, Leland McInnes, John Healy, Stephen Jesse, and Sergei V Kalinin. Manifold learning of four-dimensional scanning transmission electron microscopy. arXiv preprint arXiv:1811.00080, 2018.

M. Lichman. UCI machine learning repository, 2013.

George Linderman. FIt-SNE. https://github.com/KlugerLab/FIt-SNE, 2018.

George C Linderman, Manas Rachh, Jeremy G Hoskins, Stefan Steinerberger, and Yuval Kluger. Efficient algorithms for t-distributed stochastic neighborhood embedding. arXiv preprint arXiv:1712.09005, 2017.

Saunders Mac Lane. Categories for the working mathematician, volume 5. Springer Science & Business Media, 2013.

J Peter May. Simplicial objects in algebraic topology, volume 11. University of Chicago Press, 1992.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111-3119, 2013.

Sameer A. Nene, Shree K. Nayar, and Hiroshi Murase. Columbia object image library (COIL-20). Technical report, 1996.

Sameer A. Nene, Shree K. Nayar, and Hiroshi Murase. Columbia object image library (COIL-100). Technical report, 1996.

Karolyn A Oetjen, Katherine E Lindblad, Meghali Goswami, Gege Gui, Pradeep K Dagur, Catherine Lai, Laura W Dillon, J Philip McCoy, and Christopher S Hourigan. Human bone marrow assessment by single cell rna sequencing, mass cytometry and flow cytometry. bioRxiv, 2018.

Jong-Eun Park, Krzysztof Polanski, Kerstin Meyer, and Sarah A Teichmann. Fast batch alignment of single cell transcriptomes unifies multiple mouse cell atlases into an integrated landscape. bioRxiv, page 397042, 2018.
Jose Daniel Gallego Posada. Simplicial autoencoders. 2018.

Emily Riehl. A leisurely introduction to simplicial sets. Unpublished expository article available online at http://www.math.harvard.edu/~eriehl, 2011.

Emily Riehl. Category theory in context. Courier Dover Publications, 2017.

John W Sammon. A nonlinear mapping for data structure analysis. IEEE Transactions on Computers, 100(5):401-409, 1969.

Josef Spidlen, Karin Breuer, Chad Rosenberg, Nikesh Kotecha, and Ryan R Brinkman. Flowrepository: A resource of annotated flow cytometry datasets associated with peer-reviewed publications. Cytometry Part A, 81(9):727-731, 2012.

David I Spivak. Metric realization of fuzzy simplicial sets. Self published notes, 2012.

Jian Tang. LargeVis. https://github.com/lferry007/LargeVis, 2016.

Jian Tang, Jingzhou Liu, Ming Zhang, and Qiaozhu Mei. Visualizing large-scale and high-dimensional data. In Proceedings of the 25th International Conference on World Wide Web, pages 287-297. International World Wide Web Conferences Steering Committee, 2016.

Joshua B. Tenenbaum. Mapping a manifold of perceptual observations. In M. I. Jordan, M. J. Kearns, and S. A. Solla, editors, Advances in Neural Information Processing Systems 10, pages 682-688. MIT Press, 1998.

Joshua B Tenenbaum, Vin De Silva, and John C Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319-2323, 2000.

Dmitry Ulyanov. Multicore-TSNE. https://github.com/DmitryUlyanov/Multicore-TSNE, 2016.

Laurens van der Maaten. Accelerating t-sne using tree-based algorithms. Journal of Machine Learning Research, 15(1):3221-3245, 2014.

Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-sne. Journal of Machine Learning Research, 9(Nov):2579-2605, 2008.

Laurens van der Maaten and Geoffrey Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579-2605, 2008.

John Williamson. What do numbers look like? https://johnhw.github.io/umap_primes/index.md.html, 2018.

Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms, 2017.

Liu Yang and Rong Jin. Distance metric learning: A comprehensive survey. Michigan State University, 2(2):4, 2006.

Lotfi A Zadeh. Fuzzy sets. Information and Control, 8(3):338-353, 1965.
| [
"https://github.com/lmcinnes/umap,",
"https://github.com/jlmelville/uwot.",
"https://github.com/KlugerLab/FIt-SNE,",
"https://github.com/lferry007/LargeVis,",
"https://github.com/DmitryUlyanov/"
] |
[
"arXiv:physics/0008155v2 [physics.chem-ph] 26 Oct 2000 GRECP/MRD-CI calculations of the spin-orbit splitting in the ground state of Tl and of the spectroscopic properties of TlH. Typeset using REVT E X 1",
"arXiv:physics/0008155v2 [physics.chem-ph] 26 Oct 2000 GRECP/MRD-CI calculations of the spin-orbit splitting in the ground state of Tl and of the spectroscopic properties of TlH. Typeset using REVT E X 1"
] | [
"A V Titov ",
"N S Mosyagin ",
"A B Alekseyev ",
"R J Buenker ",
"\nTheoretische Chemie\nPetersburg Nuclear Physics Institute\nGatchina, St.-Petersburg district 188350RUSSIA\n",
"\nBergische Universität GH Wuppertal\nGaußstraße 20D-42097WuppertalGERMANY\n"
] | [
"Theoretische Chemie\nPetersburg Nuclear Physics Institute\nGatchina, St.-Petersburg district 188350RUSSIA",
"Bergische Universität GH Wuppertal\nGaußstraße 20D-42097WuppertalGERMANY"
] | [] | The generalized relativistic effective core potential (GRECP) approach is employed in the framework of multireference single-and double-excitation configuration interaction (MRD-CI) method to calculate the spin-orbit (SO) splitting in the 2 P o ground state of the Tl atom and spectroscopic constants for the 0 + ground state of TlH. The 21-electron GRECP for Tl is used and the outer core 5s and 5p pseudospinors are frozen with the help of the level shift technique. The spin-orbit selection scheme with respect to relativistic multireference states and the corresponding code are developed and applied in the calculations. In this procedure both correlation and spin-orbit interactions are taken into account. A [4,4,4,3,2] basis set is optimized for the Tl atom and employed in the TlH calculations. Very good agreement is found for the equilibrium distance, vibrational frequency, and dissociation energy of the TlH ground state (R e = 1.870Å, ω e = 1420 cm −1 , D e = 2.049 eV) as compared with the experimental data (R e = 1.868Å, ω e = 1391 cm −1 , D e = 2.06 eV).SHORT NAME: GRECP/MRD-CI calculations on Tl and TlH | null | [
"https://export.arxiv.org/pdf/physics/0008155v2.pdf"
] | 119,412,112 | physics/0008155 | e90f03bf4c64f3267e842789fd7ccdf506bf14b2 |
GRECP/MRD-CI calculations of the spin-orbit splitting in the ground state of Tl and of the spectroscopic properties of TlH
A V Titov
N S Mosyagin
A B Alekseyev
R J Buenker
Theoretische Chemie
Petersburg Nuclear Physics Institute
Gatchina, St.-Petersburg district 188350, RUSSIA
Bergische Universität GH Wuppertal
Gaußstraße 20, D-42097 Wuppertal, GERMANY
(March 31, 2022) FOR INDEXING: Relativistic Effective Core Potential; Configuration Interaction; Molecule with heavy atoms; Electronic structure calculation
The generalized relativistic effective core potential (GRECP) approach is employed in the framework of the multireference single- and double-excitation configuration interaction (MRD-CI) method to calculate the spin-orbit (SO) splitting in the $^2P^o$ ground state of the Tl atom and spectroscopic constants for the $0^+$ ground state of TlH. The 21-electron GRECP for Tl is used and the outer core 5s and 5p pseudospinors are frozen with the help of the level shift technique. The spin-orbit selection scheme with respect to relativistic multireference states and the corresponding code are developed and applied in the calculations. In this procedure both correlation and spin-orbit interactions are taken into account. A [4,4,4,3,2] basis set is optimized for the Tl atom and employed in the TlH calculations. Very good agreement is found for the equilibrium distance, vibrational frequency, and dissociation energy of the TlH ground state ($R_e$ = 1.870 Å, $\omega_e$ = 1420 cm$^{-1}$, $D_e$ = 2.049 eV) as compared with the experimental data ($R_e$ = 1.868 Å, $\omega_e$ = 1391 cm$^{-1}$, $D_e$ = 2.06 eV). SHORT NAME: GRECP/MRD-CI calculations on Tl and TlH
I. INTRODUCTION
During the last few years a large number of publications have dealt with calculations of the $^2P^o_{1/2} - {}^2P^o_{3/2}$ splitting in the ground state of the Tl atom and spectroscopic constants for the $0^+$ ground state of TlH. Such interest in these systems arises because of their relatively simple electronic structure in the valence region. This makes them very convenient objects for testing methods for the description of relativistic and correlation effects. We can mention some recent papers [1-7] in which the electronic structure of thallium was studied and papers [8-11] in which the calculation of spectroscopic constants for TlH was carried out. With the exception of the atomic RCC calculation by Eliav et al. [6,7] and the atomic CI/MBPT2 calculation by Dzuba et al. [2], the published results cannot be considered to be very accurate and reliable, however, primarily because of the rather small basis sets and the small numbers of correlated electrons.
In calculations of Tl and TlH with the use of the relativistic effective core potential (RECP) approximation [12], in which only 13 thallium electrons are treated explicitly (13e-RECPs), one more problem appears. The correlation of the outer core (OC) and valence (V) electrons, occupying the 5d and ns, np, nd (n = 6, 7, . . .) orbitals, respectively, cannot be satisfactorily described, mainly because the smoothed V-pseudoorbitals (pseudospinors) have the wrong behaviour in the OC region. One-electron functions $\varphi^{corr}_{x,k}(r)$, being some linear combinations of virtual orbitals, correlate to occupied orbitals $\varphi^{occ}_{x}$ (where x = c, v stands for the OC and V orbital indices) and are usually localized in the same space region as $\varphi^{occ}_{x}$. Therefore, the original "direct" Coulomb two-electron integrals describing the OC-V correlation of $\varphi^{occ}_{c}$ and $\varphi^{occ}_{v}$ can be well reproduced by those with the pseudoorbitals, despite their localization in different space regions. However, a two-electron integral describing the "exchange" part of the OC-V correlation,

$$\int d\mathbf{r}\, \varphi^{corr\,\dagger}_{c,k'}(\mathbf{r})\, \varphi^{occ}_{v}(\mathbf{r}) \int d\mathbf{r}'\, \varphi^{corr\,\dagger}_{v,k}(\mathbf{r}')\, \varphi^{occ}_{c}(\mathbf{r}')\, \frac{1}{|\mathbf{r} - \mathbf{r}'|}, \tag{1}$$
cannot be well reproduced because the V-pseudoorbitals are smoothed in the OC region where the OC-pseudoorbitals are localized (for more theoretical details, see Ref. [13]). The first RECPs for Tl with the 5s, 5p shells treated explicitly (21e-RECPs) for which this disadvantage of the earlier "semicore" RECPs was overcome were generated and tested in single-configurational calculations by Mosyagin et al. [14,15]. Some other inherent problems of the "nodeless" RECPs were also solved with the 21-electron Generalized RECP (21e-GRECP) version presented in Ref. [14,15]. In Ref. [15], for the case of the 21e-GRECP it was also shown that the 5s, 5p pseudospinors could be frozen while still providing significantly higher accuracy than 13e-RECPs because the valence and virtual ns and np (n = 6, 7, . . .) pseudoorbitals in the former case already have the proper nodal structure in the OC region.
II. THE GRECP OPERATOR IN THE SPIN-ORBIT REPRESENTATION
In most existing quantum-chemical codes for molecular calculations with RECPs (as well as in the MRD-CI code used in the present work) spin-orbit basis sets are used. In these versions the number of the two-electron integrals is substantially smaller than in the case of spinor basis sets providing the same level of correlation treatment. Spin-orbit basis sets are preferable in the calculations in which correlation effects give a higher contribution to the properties of interest than those of a relativistic nature. This is usually the case for valence and outermost core electrons, which mainly determine chemical and spectroscopic properties of molecules.
Together with the spin-orbit basis set, the GRECP for Tl should also be employed in the spin-orbit representation. Following Ref. [16,17], the components of the spin-averaged part of the GRECP operator called the averaged relativistic effective potentials (AREP) are written in the form [13,14]:
$$U^{AREP}_{n_v l}(r) = \frac{l+1}{2l+1} U_{n_v l+}(r) + \frac{l}{2l+1} U_{n_v l-}(r), \tag{2}$$

$$U^{AREP}_{n_c l}(r) = \frac{l+1}{2l+1} V_{n_c n_v l+}(r) + \frac{l}{2l+1} V_{n_c n_v l-}(r), \tag{3}$$

$$V_{n_c n_v l\pm}(r) = [U_{n_c l\pm}(r) - U_{n_v l\pm}(r)]\, P_{n_c l\pm}(r) + P_{n_c l\pm}(r)\, [U_{n_c l\pm}(r) - U_{n_v l\pm}(r)] - \sum_{n'_c} P_{n_c l\pm}(r) \left[ \frac{U_{n_c l\pm}(r) + U_{n'_c l\pm}(r)}{2} - U_{n_v l\pm}(r) \right] P_{n'_c l\pm}(r), \tag{4}$$
where $U_{nl\pm}(r)$ are the potentials generated for the $\tilde{\varphi}_{nl\pm}(r)$ pseudospinors by means of the Goddard scheme [18]; $n_x$ is the principal quantum number of an outercore ($n_c$), valence ($n_v$) or virtual ($n_a$) pseudospinor; $l$ and $j$ are angular and total electron momenta; $\pm$ stands for $j = l \pm 1/2$; $P_{n_c l\pm}(r)$ is the radial projector on the OC pseudospinors:
$$P_{n_c l\pm}(r) = \sum_{m} |n_c, l, \pm, m\rangle \langle n_c, l, \pm, m|. \tag{5}$$
Clearly, the AREP component of the GRECP may be used in calculations with nonrelativistic quantum-chemical codes in order to take account of spin-independent relativistic effects. The operator of the effective spin-orbit interaction can be derived following the expression for the spin-angular projector P l± from Ref. [17]:
$$P_{l\pm}(\Omega, \sigma) = \frac{1}{2l+1} \left[ \left( l + \frac{1}{2} \pm \frac{1}{2} \right) P_l(\Omega) \pm 2 P_l(\Omega)\, \vec{l} \cdot \vec{s}\, P_l(\Omega) \right]. \tag{6}$$
Its components, called the effective spin-orbit potentials (ESOP), can be written as [13,14]

$$\Delta U_{n_v l}(r) = U_{n_v l+}(r) - U_{n_v l-}(r), \qquad \Delta U_{n_c l}(r) = V_{n_c n_v l+}(r) - V_{n_c n_v l-}(r), \tag{7}$$

$$U^{ESOP}_{nl} = \frac{2 \Delta U_{nl}(r)}{2l+1} P_l\, \vec{l} \cdot \vec{s}, \tag{8}$$

$$P_l = \sum_{m=-l}^{l} |lm\rangle \langle lm|, \tag{9}$$
where $|lm\rangle \langle lm|$ is the projector on the spherical function $Y_{lm}$. Neglecting the difference between $U^{AREP}_{n_v L}$ and $U_{n_v L J}$ for virtual pseudospinors with $l > L$ (for theoretical details see Ref. [13]), one can write the GRECP operator $U$ as
$$U = U^{AREP}_{n_v L}(r) + \sum_{l=0}^{L-1} \left[ U^{AREP}_{n_v l}(r) - U^{AREP}_{n_v L}(r) \right] P_l + \sum_{n_c} \sum_{l=0}^{L} U^{AREP}_{n_c l}(r)\, P_l + \sum_{l=1}^{L} \left[ U^{ESOP}_{n_v l} + \sum_{n_c} U^{ESOP}_{n_c l} \right] P_l. \tag{11}$$
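As an illustration of these combinations, the following minimal Python sketch (hypothetical helper names, not the authors' code) evaluates the j-averaged potential of Eq. (2) and the ESOP radial factor entering Eqs. (7)-(8), for potentials tabulated on a radial grid:

def arep(u_plus, u_minus, l):
    """Spin-averaged potential: weights (l+1)/(2l+1) and l/(2l+1), Eq. (2)."""
    return ((l + 1) * u_plus + l * u_minus) / (2 * l + 1)

def esop_radial(u_plus, u_minus, l):
    """Radial factor 2*DeltaU/(2l+1) multiplying the P_l l.s projector, Eq. (8)."""
    return 2.0 * (u_plus - u_minus) / (2 * l + 1)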
Note that the nonlocal terms with the projectors on the most important correlation functions $\tilde{\varphi}^{corr}_{n_x l\pm; n_k l_k \pm}(r)$ (1) (where x = c, v) localized mainly in the OC and V regions and with the corresponding potentials $U^{corr}_{n_k l_k \pm}(r)$ can be taken into account in the considered expressions for the GRECP operator additionally to those with the OC projectors. Obviously, the non-local GRECP terms for the frozen OC pseudospinors can be omitted in the sum over $(n_c l)$ in Eq. (11).
We should emphasize that in spite of the rather complicated form of the above GRECP operator, the main computational effort in calculating matrix elements with the GRECP is caused by the standard radially-local operator, which is also a part of conventional RECP operators, and not by the non-local GRECP terms. Thus, the additional complications in calculations with GRECPs are negligible in comparison with treatments employing conventional semi-local RECPs if comparable gaussian expansions are used for the partial potentials. The more critical point is that the effort in the calculation and transformation of two-electron integrals is always substantially higher than that in the computation of RECP integrals for all known RECP versions (including GRECPs) when appropriately large basis sets are employed in the precise calculations.
III. FROZEN-CORE APPROXIMATION FOR THE OUTER-CORE SHELLS
To perform precise calculations of chemical and spectroscopic properties, correlations should be taken into account not only within the valence regions of heavy atoms and heavy-atom molecules but in the core regions and between the valence and core electrons as well. In practice, the goal is to achieve a given level of accuracy by correlating as small a number of electrons as possible, thus reducing the computational effort. However, as discussed in the Introduction, the accuracy of the RECPs generated for a given number of explicitly treated electrons cannot always satisfy the accuracy requirements expected from correlating all these electrons in the corresponding all-electron calculation. This is true, in particular, for calculations of Tl, having a 5d$^{10}$6s$^{2}$6p$^{1}$ leading configuration in the ground state, and its compounds.
To attain an accuracy level of 400 cm$^{-1}$ for the $^2P^o_{1/2} - {}^2P^o_{3/2}$ splitting in the ground state and for excitation energies to low-lying states of Tl and to take account of the core polarization, one should correlate at least 13 electrons, i.e. include the 5d shell. This is achieved in the present MRD-CI calculations with f and g basis functions describing mainly polarization of the 5d shell (for other recent results see, e.g., [1,3,4]). Some data from our 13e-CI calculations of the SO splitting in the ground state of Tl are collected in Table I in comparison with the 3e-CI results, which in our DF/CI (Dirac-Fock calculations followed by CI) and the GRECP/CI calculations have errors of about 600 cm$^{-1}$.
We also should mention the recent relativistic coupled-cluster (RCC) results of Landau et al. [7], in which 35 electrons are correlated and a decrease of close to 90 cm$^{-1}$ in the above-mentioned SO splitting is due to the Breit interaction. Note that this interaction is not yet taken into account in the RECPs considered in the present work.
Obviously, the 5d shell should also be explicitly treated in calculations of molecules containing Tl to take into account core relaxation and polarization effects with satisfactory accuracy. For these calculations it would be optimal to use the RECPs with 13 electrons of Tl treated explicitly (13e-RECPs) such as the RECP of Ross et al. [19] or our valence RECP version [15]. None of the known nodeless 13e-RECPs can provide the aforementioned accuracy, however. Although single-configurational tests [14,15] give errors of 100 cm$^{-1}$ or somewhat more for excitation energies to low-lying states, they are dramatically increased for 13e-RECPs if all 13 electrons are correlated. The reasons are discussed in the Introduction (one can also see the results of the 13e-RECP/MRD-CI calculations in Ref. [5] and of the 13e-PP/MRCI calculations in Ref. [4]).
To overcome this disadvantage, one should use RECPs with at least 21 electrons, e.g. the 21e-GRECP [14,15] and the 21e-PP [4] for Tl. The 5s and 5p pseudospinors can be treated as frozen, however, while still providing the aforementioned accuracy. The 5p orbitals have energies about four times higher and their average radii are 1.4 times smaller than those for the 5d orbitals. Moreover, their angular correlation is suppressed as compared with the 5d shell because the most important polarization functions (5d for the 5p orbitals and 5p for the 5s orbitals) are completely occupied in the lowest-lying states. Therefore, the 5s, 5p orbitals are substantially less active in chemical processes.
In order to freeze the 5s and 5p pseudospinors, one can apply the energy level shift technique [13]. Following Huzinaga et al. [20], one should add the matrix elements of the SCF field operators (the Coulomb and spin-dependent exchange terms) over these OC pseudospinors to the one-electron part of the Hamiltonian together with the level shift terms
$$\sum_{n_c^f, l, \pm} B_{n_c^f l\pm}\, P_{n_c^f l\pm}(r), \tag{12}$$
where $B_{n_c^f l\pm}$ is at least of order $|2\varepsilon_{n_c^f l\pm}|$ and $\varepsilon_{n_c^f l\pm}$ is the orbital energy of the OC pseudospinor $\tilde{\varphi}_{n_c^f l\pm}(r)$ to be frozen. Such nonlocal terms are needed in order to prevent collapse of the molecular orbitals to the frozen states (the 5s$_{1/2}$, 5p$_{1/2,3/2}$ pseudospinors for Tl). All terms with the frozen core pseudospinors described here (the Coulomb and exchange interactions, and the level shift operator) can easily be presented in spin-orbit form with the help of eq. (6), as was done above for the GRECP operator. More importantly, these OC pseudospinors can be frozen in calculations with spin-orbit basis sets and they can already be frozen at the stage of calculation of the one-electron matrix elements of the Hamiltonian, as implemented in the MOLGEP code [21]. Thus, any integrals with indices of the frozen spinors are completely excluded after the integral calculation step.
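In a finite basis this construction amounts to adding rank-one terms to the one-electron Hamiltonian. A minimal sketch follows (hypothetical names; it assumes each frozen pseudospinor is given as a normalized coefficient vector c in a basis with overlap matrix S, so that B|phi><phi| is represented by B (S c)(S c)^T):

import numpy as np

def add_level_shifts(h, overlap, frozen_coeffs, shifts):
    """h: one-electron matrix; columns of frozen_coeffs are frozen orbitals."""
    for c, b in zip(frozen_coeffs.T, shifts):
        sc = overlap @ c
        h = h + b * np.outer(sc, sc)   # pushes the frozen level up by about b
    return h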
In single-configurational calculations with the numerical HFJ code [15] we have seen that the SO splitting of the 5p shell increases the resulting SO splitting of the $^2P^o$ ground state by about 400 cm$^{-1}$, whereas the SO splitting of the 5d shell decreases the final SO splitting by almost the same value. Therefore, it is important to freeze the 5p$_{1/2}$ and 5p$_{3/2}$ (pseudo)spinors and not some averaged 5p (pseudo)orbitals if the SO interaction is to be taken into account in the 5d and valence shells.
In Ref. [4], the 21e-"energy-adjusted" pseudopotential (PP) having the features which have been emphasized [13-15,22] as inherent for GRECPs (different potentials for the 5p and 6p pseudospinors in the case of Tl) is generated and applied to the calculation of the SO splitting in Tl, with the core correlations described by the core polarization potential (CPP). Some average OC pseudoorbitals are frozen and the SO splitting of 7810 cm$^{-1}$ obtained in their 21e-PP/MRCI calculation is quite different from our result.
After applying the projection operator of eq. (6) to the level shift (12), Coulomb and exchange terms with the frozen core pseudospinors, the AREP and ESOP parts of the GRECP operator are to be modified to include these new contributions. This technique was successfully employed in our earlier calculations of the spin-rotational Hamiltonian parameters in the BaF and YbF molecules [23].
The freezing technique discussed above can be efficiently applied to those OC shells for which the spin-orbit interaction is clearly more important than the correlation and relaxation effects. If the latter effects are neglected entirely or taken into account within "correlated" GRECP versions [13], the corresponding OC pseudospinors can be frozen and the spin-orbit basis sets can be successfully used for other explicitly treated shells. This is true for the 5p 1/2,3/2 subshells in Tl, contrary to the case of the 5d 3/2,5/2 subshells. Freezing the OC pseudospinors allows one to optimize an atomic basis set only for the orbitals which are varied or correlated in subsequent calculations, thus avoiding the basis set optimization for the frozen states and reducing the number of the calculated and stored two-electron integrals. Otherwise, if the 5p shell should be correlated explicitly, a spinor basis set can be more appropriate than the spin-orbit one.
IV. THE MRD-CI METHOD
In the multireference single- and double-excitation CI approach [24], the ΛS-basis sets of many-electron spin-adapted (and space symmetry-adapted) functions (SAFs) are employed. This method makes use of configuration selection and perturbative energy extrapolation techniques [24] and employs the Table CI algorithm [25] for efficient handling of the various open-shell cases which arise in the Hamiltonian matrix elements. Some new features of the selection scheme used in this work are considered below. The effect of higher excitations in the CI treatment has been assessed by applying the generalized multireference analogue [27] of the Davidson correction [26] to the extrapolated T = 0 energies of each root.
After selecting the ΛS-sets of SAFs for a chosen threshold $T_i$ (i = 1, 2), they are collected together in accord with the relativistic double-group symmetry requirements, and a spin-orbit CI (SO-CI) calculation is performed with these SAFs to obtain the SO-roots ($\Psi^{SO,T_i}_I$) and their energies ($E^{SO,T_i}_I$) of interest in a considered double-group irreducible representation (irrep). Then the linear T = 0 correction is evaluated on the basis of the calculations with the $T_1$ and $T_2$ thresholds. Finally, the generalized Davidson (or full CI) correction is applied to each root of interest.
The calculation of the molecular spectroscopic constants begins with the fitting of the relativistic CI potential curves to polynomials, which are employed to construct appropriate Born-Oppenheimer nuclear motion Schrödinger equations; these are solved by the Dunham method with the help of the DUNHAM-SPECTR code of Mitin [28].
A. Features of the spin-orbit selection procedure

Let us define a Hamiltonian H for a molecule as
$$H = H^{(0)} + V^{corr} + H^{SO}, \tag{13}$$
where H (0) is an unperturbed spin-independent Hamiltonian, V corr is a two-electron operator describing correlations, and H SO is a one-electron spin-orbit operator (ESOP in our case). Let us choose an orthonormal basis set of SAFs {Φ (n)ΛS I } in the ΛS-coupling scheme (or "spin-orbit" basis set). In particular, these SAFs can be solutions of Hartree-Fock equations with a spin-averaged RECP for the molecule considered. The H (0) Hamiltonian is constructed to be diagonal in the given many-electron basis set:
$$H^{(0)} \Phi^{(n)\Lambda S}_I = E^{(n)\Lambda S}_I \Phi^{(n)\Lambda S}_I, \tag{14}$$
where n = 0, 1, . . . (see below the description of the indices in more detail). Additionally define
$H^{(0)}$ so that

$$\langle \Phi^{(n)\Lambda S}_I | H^{(0)} | \Phi^{(n)\Lambda S}_I \rangle \equiv \langle \Phi^{(n)\Lambda S}_I | H | \Phi^{(n)\Lambda S}_I \rangle \tag{15}$$
in order to exclude the first-order PT contributions to total energies of molecular states (this corresponds to the Epstein-Nesbet PT form). We will ignore the two-electron spin-dependent (Breit) interactions which ordinarily can be neglected when studying chemical and spectroscopic properties. Breit and other quantum electrodynamic (QED) effects are relatively large for lanthanides and actinides, but for the V and OC shells they can be efficiently represented by the one-electron j-dependent RECP terms.
Let us distinguish the following types of many-electron functions which are considered in a double-group symmetry:
• $\{\Phi^{(0)\Lambda S}_I, E^{(0)\Lambda S}_I\}_{I=0}^{N^{(0)\Lambda S}}$
are reference SAFs ("Mains") and their energies
$$E^{(n)\Lambda S}_I = \langle \Phi^{(n)\Lambda S}_I | H^{(0)} | \Phi^{(n)\Lambda S}_I \rangle \tag{16}$$
at n = 0 for those ΛS-irreps which are of interest for the final spin-orbit CI (SO-CI) calculation;
• $\{\Psi^{(0)\Lambda S}_I, E^{(0)\Lambda S}_I\}_{I=0}^{N^{(0)\Lambda S}}$
are some of the CI solutions ("ΛS-roots") and their energies
$$E^{(0)\Lambda S}_I = \langle \Psi^{(0)\Lambda S}_I | H^{(0)} + V^{corr} | \Psi^{(0)\Lambda S}_I \rangle \tag{17}$$
in the ΛS-irrep which diagonalize $(H^{(0)} + V^{corr})$ in the subspace of Mains only;
• $\{\Psi^{(0)SO}_I, E^{(0)SO}_I\}_{I=0}^{N^{(0)SO}}$
are some of the SO-CI solutions ("SO-roots" which are of interest) and their energies
$$E^{(0)SO}_I = \langle \Psi^{(0)SO}_I | H^{(0)} + V^{corr} + H^{SO} | \Psi^{(0)SO}_I \rangle \tag{18}$$
which diagonalize the complete H Hamiltonian in the subspace of all Mains collected from all the ΛS-irreps considered;
• $\{\Phi^{(1)\Lambda S}_I, E^{(1)\Lambda S}_I\}_{I=0}^{N^{(1)\Lambda S}}$
are the singly-excited SAFs (SE-SAFs) and their energies (16) at n = 1, i.e.
$$\Phi^{(1)\Lambda S}_I \in \{P^{\Lambda S} a^+_p a_q \Phi^{(0)\Lambda' S'}_J\} \setminus \{\Phi^{(0)\Lambda S}_K\} \quad \forall\, (p, q; J, K), \tag{19}$$
where $P^{\Lambda S} = |\Lambda S\rangle \langle \Lambda S|$ is a projector on the subspace of the ΛS-states, $a^+_p$ ($a_q$) are the creation (annihilation) operators of one-electron states (spin-orbitals) $\varphi_p$ ($\varphi_q$). The SE-SAFs can be automatically selected because of their relatively small number;
• $\{\Phi^{(2)\Lambda S}_I, E^{(2)\Lambda S}_I\}_{I=0}^{N^{(2)\Lambda S}}$ are the doubly-excited SAFs (DE-SAFs),

$$\Phi^{(2)\Lambda S}_I \in \{P^{\Lambda S} a^+_p a^+_q a_r a_s \Phi^{(0)\Lambda' S'}_J\} \setminus (\{\Phi^{(1)\Lambda S}_K\} \cup \{\Phi^{(0)\Lambda S}_L\}) \quad \forall\, (p, q, r, s; J, K, L), \tag{20}$$

and their energies (16) at n = 2; a SAF $\Phi^{(2)\Lambda S}_I$ should be selected in accordance with some selection criteria to be used in the final SO-CI calculation. In principle, triple and higher excited sets of SAFs can be similarly defined.
The correlation operator, V corr , has the symmetry of the molecule and, therefore, can be rewritten as
$$V^{corr} \equiv \sum_{\Lambda S} P^{\Lambda S} V^{corr} P^{\Lambda S}. \tag{21}$$
It normally gives the most important contribution through the second-order Brillouin-Wigner PT energy correction in the basis set of Φ (n)ΛS J (after appropriate redefinition of H (0) in the subspace of Mains, see Ref. [29,30]):
$$\sum_{n=1,2} \sum_J \frac{|\langle \Phi^{(n)\Lambda S}_J | V^{corr} | \Psi^{(0)\Lambda S}_0 \rangle|^2}{E^{\Lambda S}_0 - E^{(n)\Lambda S}_J} \tag{22}$$
for the non-degenerate ground state $\Psi^{\Lambda S}_0$ with the exact energy $E^{\Lambda S}_0$ in the ΛS-irrep (obviously, terms with $n \geq 3$ are automatically equal to zero because $V^{corr}$ is a two-electron operator). Similar expressions with the replacements $\Psi^{(0)\Lambda S}_0 \to \Psi^{(0)\Lambda S}_I$ and $E^{\Lambda S}_0 \to E^{\Lambda S}_I$ for n = 1, 2 are usually employed in the selection procedures for SAFs $\{\Phi^{(1,2)\Lambda S}_I\}$ based on the nonrelativistic $A_k$ and $B_k$ approximations (when $H^{SO}$ is not taken into account) [29,30] or on the multi-diagonalization scheme [24] for subsequent calculations of $\Psi^{\Lambda S}_I$. In spite of some differences between these selection schemes, they are not very essential for the final CI results if a high quality reference set (set of Mains) and a suitably small threshold are chosen.
For molecules with heavy and very heavy atoms, the H SO operator can give large contributions to the energy both in second and in higher PT orders if a non-optimal set of Mains, {Φ (0)ΛS I }, is chosen after an SCF calculation with the SO-averaged potentials (AREPs). The latter is the usual practice and the set of Mains generated in such a manner can be smaller than optimal for the case of large SO interaction. Therefore, not only second but third and maybe even higher PT order(s) can be important in the selection procedure for a "bad" set of the starting roots Ψ (0)SO I . This means that the off-diagonal matrix elements of H between secondary many-electron basis functions (SE-, DE-SAFs) may be introduced into the selection procedure because H SO is a substantially off-diagonal operator contrary to V corr :
$$\langle \Phi^{(n)\Lambda S}_I | H^{SO} | \Phi^{(n')\Lambda' S'}_J \rangle: \quad (\Lambda S) \text{ and } (\Lambda' S') \text{ can be different}, \quad n' \in \{n, |n \pm 1|\}. \tag{24}$$
In particular, $H^{SO}$ gives zero matrix elements between SAFs belonging to the same ΛS-irrep in the $D_{2h}$ or $C_{2v}$ symmetry groups. For simplicity, let us consider the selection scheme based on the $A_k$ approximation (22). In the nonrelativistic-type selection scheme, a SAF $\Phi^{(1,2)\Lambda S}_J$ is selected in a ΛS-irrep if

$$\frac{|\langle \Phi^{(1,2)\Lambda S}_J | V^{corr} | \Psi^{(0)\Lambda S}_I \rangle|^2}{E^{(1,2)\Lambda S}_J - E^{(0)\Lambda S}_I} \geq \delta E^{\Lambda S}_T, \tag{25}$$
where $I \leq N^{SO}$ and $\delta E^{\Lambda S}_T$ is a threshold criterion for the energy selection scheme in the ΛS-irrep. In the spin-orbit case, the reference states ($\Psi^{(0)\Lambda S}_I \to \Psi^{(0)SO}_I$, $E^{(0)\Lambda S}_I \to E^{(0)SO}_I$) and the perturbation ($V^{corr} \to V^{corr} + H^{SO}$) should be used in expression (25), so that
$$\frac{|\langle \Phi^{(1,2)\Lambda S}_J | V^{corr} + H^{SO} | \Psi^{(0)SO}_I \rangle|^2}{E^{(1,2)\Lambda S}_J - E^{(0)SO}_I} \geq \delta E^{SO}_T, \tag{26}$$
where $\delta E^{SO}_T$ is a selection threshold for $\Phi^{(1,2)\Lambda S}_J$ to be used in the subsequent SO-CI calculation.
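As an illustration, the test of eq. (26) can be sketched as follows (a hypothetical data layout, not the MRD-CI code; it assumes the candidate SAF energies lie above the reference root energies so that the denominator is positive):

def select_safs(h_rows, e_saf, e_roots, threshold):
    """h_rows[J][I] = <Phi_J | V_corr + H_SO | Psi_I>; keep J passing Eq. (26)."""
    selected = []
    for J, row in enumerate(h_rows):
        contrib = max(abs(x) ** 2 / (e_saf[J] - e_I) for x, e_I in zip(row, e_roots))
        if contrib >= threshold:
            selected.append(J)
    return selected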
In more detail, the matrix element in the PT numerator of the above formula can be rewritten as
$$|\langle \Phi^{(1,2)\Lambda S}_J | V^{corr} | \Psi^{(0)SO}_I \rangle|^2 \tag{27}$$
$$+\, |\langle \Phi^{(1)\Lambda S}_J | H^{SO} | \Psi^{(0)SO}_I \rangle|^2 \tag{28}$$
$$+\, 2\Re\big( \langle \Psi^{(0)SO}_I | V^{corr} | \Phi^{(1)\Lambda S}_J \rangle \langle \Phi^{(1)\Lambda S}_J | H^{SO} | \Psi^{(0)SO}_I \rangle \big) \tag{29}$$
by taking into account eq. (21) in the calculation of the matrix elements for $V^{corr}$, contrary to those for $H^{SO}$. In spite of mixing different ΛS-states due to $H^{SO}$, the number of non-zero matrix elements with $H^{SO}$ in eq. (29) is usually relatively small because the SO interaction is a one-electron operator (see eq. (24)) which is very localized compared with the long-range Coulomb interaction. Thus, one can see that the nonrelativistic-type selection due to $V^{corr}$ with respect to the projected roots $\{P^{\Lambda S} \Psi^{(0)SO}_I\}$ can be efficiently applied instead of eq. (26). It must be emphasized that contrary to the selection schemes in the nonrelativistic case, SE-SAFs should be generated with respect to the Mains from all the used Λ'S'-irreps. In a more simplified treatment, the automatic selection of SE-SAFs can be done with respect to a subset of the most important Mains, e.g. having largest CI-coefficients in the $\Psi^{(0)SO}_I$ roots. Next let us consider the terms from the third-order PT energy (PT-3) for SAFs $\{\Phi^{(1,2)\Lambda S}_J\}$ which can be essential for the SO selection procedure. Below we shall discuss only matrix elements in the PT numerators of the corresponding PT-3 terms because specific expressions for the energy denominators are not essential for our analysis and conclusions. For simplicity, we shall omit the terms conjugate to those considered.
The first two types of the PT-3 matrix elements are:
$$\langle \Psi^{(0)SO}_I | H^{SO} | \Phi^{(1)\Lambda S}_J \rangle \langle \Phi^{(1)\Lambda S}_J | H^{SO} | \Phi^{(1)\Lambda' S'}_K \rangle \langle \Phi^{(1)\Lambda' S'}_K | H^{SO} | \Psi^{(0)SO}_I \rangle, \tag{30}$$

$$\langle \Psi^{(0)SO}_I | H^{SO} | \Phi^{(1)\Lambda S}_J \rangle \langle \Phi^{(1)\Lambda S}_J | V^{corr} | \Phi^{(1)\Lambda S}_L \rangle \langle \Phi^{(1)\Lambda S}_L | H^{SO} | \Psi^{(0)SO}_I \rangle. \tag{31}$$
The first intermediate state in both terms is the SE-SAF $\Phi^{(1)\Lambda S}_J$. The following matrix element type
$$\langle \Psi^{(0)SO}_I | V^{corr} | \Phi^{(1,2)\Lambda S}_J \rangle \langle \Phi^{(1,2)\Lambda S}_J | H^{SO} | \Phi^{(1)\Lambda' S'}_K \rangle \langle \Phi^{(1)\Lambda' S'}_K | H^{SO} | \Psi^{(0)SO}_I \rangle. \tag{32}$$
can be used for the selection of those $\Phi^{(1,2)\Lambda S}_J$ SAFs which can be important for a subsequent SO-CI calculation. The following matrix element types contain a second-order perturbation in $V^{corr}$ and, therefore, we can suggest that in general they are less important for our consideration than the above terms:
$$\langle \Psi_I^{(0)SO} | V^{corr} | \Phi_J^{(1,2)\Lambda S} \rangle \langle \Phi_J^{(1,2)\Lambda S} | V^{corr} | \Phi_L^{(1)\Lambda S} \rangle \langle \Phi_L^{(1)\Lambda S} | H^{SO} | \Psi_I^{(0)SO} \rangle , \tag{33}$$
$$\langle \Psi_I^{(0)SO} | V^{corr} | \Phi_J^{(1,2)\Lambda S} \rangle \langle \Phi_J^{(1,2)\Lambda S} | H^{SO} | \Phi_K^{(1,2)\Lambda' S'} \rangle \langle \Phi_K^{(1,2)\Lambda' S'} | V^{corr} | \Psi_I^{(0)SO} \rangle . \tag{34}$$
These terms, together with the conjugate ones, can be used for the selection of $\Phi_J^{(1,2)\Lambda S}$ SAFs. The term

$$\langle \Psi_I^{(0)SO} | V^{corr} | \Phi_J^{(2)\Lambda S} \rangle \langle \Phi_J^{(2)\Lambda S} | H^{SO} | \Phi_K^{(2)\Lambda' S'} \rangle \langle \Phi_K^{(2)\Lambda' S'} | V^{corr} | \Psi_I^{(0)SO} \rangle \tag{35}$$
can be analyzed separately because it contains both of the intermediate states as DE-SAFs. In general, it is more difficult to take such terms into account in the selection procedure because of the large number of tested DE-SAFs. We should note, however, that when a tested DE-SAF $\Phi_J^{(2)\Lambda S}$ is fixed, the other intermediate states, $\{\Phi_K^{(2)\Lambda' S'}\}$, are those DE-SAFs which are only singly excited with respect to the tested one. Therefore, their number will not be very high.
For completeness, the matrix element type which is cubic in the V corr perturbation should be listed:
$$\langle \Psi_I^{(0)SO} | V^{corr} | \Phi_J^{(1,2)\Lambda S} \rangle \langle \Phi_J^{(1,2)\Lambda S} | V^{corr} | \Phi_K^{(1,2)\Lambda S} \rangle \langle \Phi_K^{(1,2)\Lambda S} | V^{corr} | \Psi_I^{(0)SO} \rangle . \tag{36}$$
This term is of nonrelativistic type and it is out of our particular interest because it does not contain the H SO perturbation. Again, we can separate the term
$$\langle \Psi_I^{(0)SO} | V^{corr} | \Phi_J^{(2)\Lambda S} \rangle \langle \Phi_J^{(2)\Lambda S} | V^{corr} | \Phi_K^{(2)\Lambda S} \rangle \langle \Phi_K^{(2)\Lambda S} | V^{corr} | \Psi_I^{(0)SO} \rangle \tag{37}$$
from the previous one only because the latter contains both the DE-SAF intermediate states.
We should emphasize that the terms containing SE- or DE-SAFs in the intermediate states of the PT-3 expressions are not taken into account in the $B_k$ and multi-diagonalization selection procedures, although these schemes do, in fact, include contributions beyond the second-order PT terms.
When analyzing the above PT-3 terms, it can be concluded that if one replaces the reference SO roots, $\Psi_I^{(0)SO}$, by roots $\Psi_I^{(0+1)SO}$ which diagonalize the complete Hamiltonian $H$ for the sets of both Mains and SE-SAFs taken together, and applies the selection criterion based on the second-order PT (26), then the main part of the above PT-3 terms will be taken into account in such a selection. An exception occurs for terms (35) and (37), but in general they are thought to be less important than the other third-order PT terms.
In a more sophisticated treatment, the reference $\Psi_I^{(0+1')SO}$ SO states can be generated when diagonalizing $H$ for the sets of Mains and those SE-SAFs ($\{\Phi_J^{(1')\Lambda S}\}$) which are automatically generated with respect to the most important subset of Mains ($\{\Phi_I^{(0')\Lambda S}\}$). The latter subset can be selected from a preliminary CI calculation for the set of Mains, e.g. as the configurations with the highest CI coefficients in $\Psi_I^{(0)SO}$, and so on. This is worthwhile in order to reduce the number of SAFs in the resulting reference states $\Psi_I^{(0+1')SO}$ rather than in $\Psi_I^{(0+1)SO}$, thus reducing the selection time, which can otherwise be very large. We should also note that the trial SE- and DE-SAFs which are tested in the above selection procedure are generated only for the set of Mains and not for the $\{\Phi_I^{(0+1')\Lambda S}\}$ set. Therefore, the number of tested configurations and the selection time are reasonably limited. If the number of configurations used in $\{\Phi_I^{(0+1')\Lambda S}\}$ is not high, one can extend the set of Mains by including the above subset of SE-SAFs, thus obviously enlarging the set of the consequently generated and tested SE- and DE-SAFs.

Again we should emphasize that it is not necessary to use the third-order PT or the suggested automatic selection of SE-SAFs in a selection procedure if a fairly good set of the reference roots $\Psi_I^{(0)SO}$ is used, i.e., if they provide good approximations to the required solutions $\Psi_I^{SO}$. In particular, if the $\{\Psi_I^{(0)SO}\}$ set is obtained from a preliminary series of SO-CI calculations of the studied states, this can be superfluous.
As an alternative to the above selection schemes with respect to the PT energy, the PT expressions for the CI coefficient of a trial SE- or DE-SAF can also be explored. Applying the above PT analysis to the case of the $\Psi_I^{(0+1')SO}$ reference state, a SAF $\Phi_J^{(1'',2)\Lambda S}$ is selected if its CI coefficient $C_J^{(1'',2)\Lambda S}$ satisfies the inequality $|C_J^{(1'',2)\Lambda S}| \geq C_{min}$, where $C_{min}$ is the selection threshold for the CI coefficients and

$$C_J^{(1'',2)\Lambda S} = \frac{\langle \Phi_J^{(1'',2)\Lambda S} | H | \Psi_I^{(0+1')SO} \rangle}{E_I^{(0+1')SO} - E_J^{(1'',2)\Lambda S}} \tag{38}$$
is the first-order PT value for the CI coefficient of a tested SAF which is not included in the subset of the $\Phi_J^{(1')\Lambda S}$ reference SE-SAFs. Such a means of selection can be preferable if the properties of primary interest cannot be calculated from potential energy curves or surfaces. Moreover, the PT selection with respect to both the energy and the CI coefficients can be applied simultaneously if the properties are of different nature.
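To make these criteria concrete, the following is a minimal Python sketch of the energy test of eq. (26) and the coefficient test of eq. (38). All names are illustrative assumptions and do not correspond to the actual MRD-CI or selection codes; the Hamiltonian matrix elements and diagonal energies are assumed to be supplied by the caller.

```python
# Hedged sketch of the PT-based selection tests; not the MRD-CI code.
# h_me    : <Phi_J | V_corr + H_SO | Psi_ref>, the coupling of a trial
#           SAF to the SO reference root (illustrative name)
# e_trial : diagonal energy E_J of the trial SAF
# e_ref   : energy of the reference root

def energy_test(h_me, e_trial, e_ref, de_threshold):
    """Second-order PT energy criterion, eq. (26)."""
    return abs(h_me) ** 2 / (e_trial - e_ref) >= de_threshold

def coefficient_test(h_me, e_trial, e_ref, c_min):
    """First-order PT CI-coefficient criterion, eq. (38)."""
    return abs(h_me / (e_ref - e_trial)) >= c_min

def select_safs(trial_safs, matrix_element, energy, e_ref, de_t, c_min):
    """Keep a trial SAF if it passes either test; both tests can be
    applied simultaneously when the target properties differ in nature."""
    kept = []
    for saf in trial_safs:
        h_me, e_j = matrix_element(saf), energy(saf)
        if energy_test(h_me, e_j, e_ref, de_t) or \
           coefficient_test(h_me, e_j, e_ref, c_min):
            kept.append(saf)
    return kept
```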
V. CALCULATIONS
In the CI calculations of Tl and TlH we used the MRD-CI package [24] combined with the SO selection codes based on the scheme described above. Our test calculations have shown that spin-orbit selection is very helpful for the preparation of appropriate sets of Mains and for reducing the effort in the final CI calculations with an optimal set of selected SAFs.
A. Spin-orbit splitting in the ground state of Tl
Calculations for the Tl atom were performed to optimize the basis set and the level-shift GRECP parameters for the 21e/8fs-GRECP, i.e. the 21-electron GRECP with 8 electrons occupying the frozen OC pseudospinors, $5s_{1/2}$ and $5p_{1/2,3/2}$. The quality of the generated basis set is analyzed by calculating the $^2P^o_{1/2} - {}^2P^o_{3/2}$ splitting for the ground state. Before discussing the present results, it is worthwhile to make some brief comments concerning the numerous values for the Tl($^2P^o$) spin-orbit splitting calculated and published in recent years. It is well known that this quantity calculated at the one-configuration Dirac-Fock level agrees very well, within 100 cm^-1, with the experimental value of 7793 cm^-1 [31]. However, such good agreement results from the fortuitous cancellation of a number of large errors caused by the DF approximation. The situation changes dramatically when even the three outermost $6s^2 6p^1$ electrons are correlated. Some Tl($^2P^o$) splitting values from our 3e-CI calculations employing different codes and basis sets are given in Table I and lie between 7130 and 7210 cm^-1. The corresponding 3e-CI results obtained by other groups after 1996 range from 6800 to 7800 cm^-1, and such a large spread cannot be considered satisfactory, because the ground state of Tl has a very simple configuration structure as compared to other heavy elements. We consider our calculated values of about 7200 cm^-1 for this splitting to be reliable for an approach in which the 5d spinors are frozen after the DF calculation of the nonrelativistically averaged $6s^2 6p^1$ configuration. Taking into account that a contribution of approximately -100 cm^-1 arises from the Breit terms, the deviation from the experimental value for the splitting is around 600-700 cm^-1 (an error of this size can be justified theoretically). As will be shown below, the computed value for the Tl($^2P^o$) splitting can be significantly improved if the 5d electrons are explicitly included in the calculations and the corresponding basis set contains functions with sufficiently high angular momenta.
The optimal basis set was selected in a series of MRD-CI calculations for Tl (with different sets of primitives and numbers of contracted s, p, d, f and g functions) to minimize the sum of energies for the ground 2 P 1/2 and 2 P 3/2 states. In these calculations, the SAFs were selected in the 2 B 1u , 2 B 2u , and 2 B 3u irreps of the D 2h group (nonrelativistic-type degenerate 2 P ground states belong to these irreps) because these doublets are strongly mixed by the SO interaction, resulting in the splitting of the ground 2 P state. We have found that two g functions should be added to the basis set, giving a contribution of about 9000 cm −1 to the 2 P o ground state total energies. The resulting [4,4,4,3,2] basis set and GRECP parameters for Tl can be found on http://www.qchem.pnpi.spb.ru.
For the [4,4,4,3,2] basis set we have also performed MRD-CI calculations including SAFs from the $^2A_u$ irrep and SAFs with quartet multiplicity ($^4B_{1u}$, $^4B_{2u}$, $^4B_{3u}$, and $^4A_u$). In our calculations with different basis sets, their contributions have decreased the SO splitting by about 170 cm^-1 and the total energy by about 2000 cm^-1. One can see from Table I that this decrease is mainly caused by the p- and d-components which arise from the reexpansion of the leading spinor configuration in terms of the spin-orbit configurations. For good accuracy we can recommend the inclusion of the $\Lambda'|S \pm 1|$-irreps for the calculation of states having leading configurations in $\Lambda S$-irreps.
In Table I some of our final MRD-CI results are collected together with the atomic relativistic coupled-cluster (RCC) results [6] obtained with a very large basis set. In these MRD-CI calculations, altogether 627 Mains in the three basic irreps and about 100 Mains in the five additional irreps were involved, and SE-SAFs were automatically generated for three Mains to prepare the reference states $\{\Psi_I^{(0+1')SO}\}_{I=1}^{3}$. Relatively small thresholds, $T_1 = 0.03$ and $T_2 = 0.01\,\mu E_h$, are used in the final runs with the [4,4,4,3,2] basis set (for the $T = 0$ threshold and full-CI extrapolations), thus selecting respectively about 190000 and 450000 SAFs altogether.
One can see that the best SO splitting calculated in the present work underestimates the experimental result [31] by about 400 cm^-1 (recall that an additional contribution of about 90 cm^-1 is due to the Breit interaction [7]). Analyzing our previous GRECP/RCC calculations of Hg [37], it can be concluded that this occurs due to the neglect of the OC-V correlations with the OC 5p and 4f shells, and to a lesser extent with 5s, rather than due to atomic basis set incompleteness, GRECP errors, or the restricted CI approximation. The OC-V correlation contribution to the total energy in Tl and Hg will have the same order of magnitude for respective pairs of correlated electrons (spinors).
We also studied the reliability of the linear T$\to$0 extrapolation procedure currently used in the MRD-CI code. In the final results of our MRD-CI calculations, the corresponding correction gives the largest contribution to the cumulative error, so this is a bottleneck of the present Tl and TlH calculations with their large numbers of Mains.
B. Spectroscopic constants of the ground state in TlH
The explicit treatment of the 5d electrons in precise TlH (TlX) calculations is necessary not only due to the strong correlation between these and the valence electrons of Tl, but also because of the substantial influence of relaxation-polarization effects in this shell on the bond formation. This cannot be taken into account very accurately by employing a polarization potential [32,33] in combination with, e.g., 3e-RECPs [14,15]. The influence of the other atom (X) in a TlX molecule on the 5p, 5s and 4f shells of Tl is significantly smaller and can be neglected if an accuracy of a few hundred wavenumbers for the excitation energies of low-lying states is sufficient. We therefore neglected their contributions in the calculation of the TlH spectroscopic constants.
In calculating the spectroscopic properties for the TlH ground state (Table II) we used the contracted [4,4,4,3,2] basis set for thallium discussed above and the [4,3,1] set for hydrogen (see http://www.qchem.pnpi.spb.ru), contracted from the primitive (6,3,1) Gaussian basis set of Dunning [38]. The SAFs were selected in the $^1A_1$, $^3B_1$, $^3B_2$ and $^3A_2$ $\Lambda S$-irreps of the $C_{2v}$ group, because the triplet states are most strongly admixed by the SO interaction to the nonrelativistic $^1A_1$ (or $^1\Sigma^+$ in $C_{\infty v}$) ground state, producing the relativistic $0^+$ ground state in the double group $C^*_{\infty v}$. We have performed three series of TlH calculations for 16 interatomic distances. In these runs, the $\Psi_0^{(0+1')SO}$ reference SO states are generated with the MRD-CI code by diagonalizing $H$ for the set of Mains and the SE-SAFs which are automatically selected with respect to the single-configuration SCF ground state (calculated with the SO-averaged GRECP), which gives a contribution of more than 90% to the final wave function.
The first run is used for preparing an optimal set of Mains for the second series of SO-CI calculations. Only the one SCF configuration which has the lowest energy in each $\Lambda S$-irrep is included in the subspace of $\Lambda S$ Mains; consequently, the SO reference state consists of these SCF configurations and the automatically selected SE-SAFs with respect to the SCF configuration from the $^1A_1$ irrep.
Those SAFs which had the highest CI coefficients in the first run were selected as Mains for the second run. As a result, 37 Mains in all irreps together are employed in the second run. Relatively small thresholds, $T_1 = 1.0$ and $T_2 = 0.1\,\mu E_h$, are used in the second run (for the T$\to$0 extrapolation [24]), thus causing about 20000 and 85000 SAFs to be selected in the $\Lambda S$-irreps altogether.
In the most computationally consuming third run (with the set of Mains consisting of the SAFs having the largest CI coefficients in the wave function from the second run), about 320 Mains are used altogether and the thresholds are set at T 1 =0.1 and T 2 =0.05 µE h . About 70000 and 130000 SAFs, respectively, were used in the ΛS-irreps altogether in the final SO-CI calculations.
One can see from Table II that the basis set superposition error (BSSE) (see [39] and references therein) must be taken into account for an accurate computation of spectroscopic constants. The BSSE was studied in Tl$^+$ ion calculations at the same interatomic distances as in TlH, and was also estimated in Tl$^-$ calculations for three distances, i.e. with a ghost H atom. The same molecular basis set as in TlH was used for both the Tl and H atoms. The contribution of the BSSE to the total energy is decisive for the $5d^{10}$ and $6s^2$ shells considered in the case of Tl$^+$, while its change due to the addition of the 6p electrons (which are bonding in TlH) can be considered relatively small, because the difference in BSSE between Tl$^+$ and Tl$^-$ is not significant in comparison with other errors. In the calculations of the spectroscopic properties with the counterpoise corrections (CPC), the calculated TlH points on the potential curve were corrected with the BSSE calculated for Tl$^+$, i.e. with the 5d and 6s shells taken into account.
One can see that after applying the T=0, FCI, and counterpoise corrections, the calculated properties are in very good agreement with the experimental data in both the second and third runs. The accuracy obtained is notably better than for other existing results for TlH (and not only those presented in Table II). We suggest, however, that the very good agreement of the calculated $D_e$ with the experimental value may be fortuitous, and the "real" (full CI) value may differ notably from the listed one because of the approximations made.
VI. RESUME
The SO splitting in the ground $^2P$ state of Tl is calculated by the MRD-CI method with the 21e-GRECP, with the $5d^{10}$, $6s^2$ and $6p^1$ electrons correlated and the $5s^2$ and $5p^6$ pseudospinors frozen in the framework of the level-shift technique. A [4,4,4,3,2] basis set is optimized for Tl, and an underestimation of about 400 cm^-1 is found for the SO splitting as compared with the experimental data.
Further improvement of the accuracy can be attained when correlations with the outer core 4f, 5p and 5s shells of Tl and Breit effects are taken into account. We expect that this can be efficiently done in the framework of the "correlated" 21e/8fs-GRECP version in which 13 electrons are treated explicitly as in the present calculation. The inclusion of h-type functions is also desirable, as has been demonstrated for Hg in Ref. [37].
Fourteen electrons are correlated in the calculation of spectroscopic constants for the 0 + ground state of TlH and very good agreement with the experimental data is found.
The developed spin-orbit selection scheme and code are demonstrated to be efficient when large sets of basis functions and reference configurations are required in high-precision electronic-structure calculations.
ACKNOWLEDGMENTS
This work was supported by the DFG/RFBR grant N 96-03-00069 and the RFBR grant N 99-03-33249. AVT is grateful to REHE program of the European Science Foundation for fellowship grants (NN 14-95 and 22-95) to visit the laboratory of one of us (RJB), where part of the work was done. We are thankful to K. Shulgina and T. Isaev (PNPI) for writing some codes used for automatic generation of Mains.
We are grateful to Dipl.-Ing. H.-P. Liebermann for the help in combining the MOLGEP and MRD-CI codes. We are also grateful to Dr. G. Hirsch (deceased) for his kind hospitality and invaluable help during visits to Wuppertal by AVT and NSM.
The main part of the present calculations was carried out at the computer center of the Bergische Universität GH Wuppertal. JECS codes developed by PNPI quantum chemistry group were used for remote control of the calculations.
TABLE II. GRECP/MRD-CI calculations of the spectroscopic constants for the ground state of TlH.

Method                                                                     R_e (Å)   ω_e (cm^-1)   D_e (eV)
SOCIEX: Tl [8,8,5,2] + H [4,3,1] (Rakowitz & Marian, 1997 [8])             1.86      1386          2.13
13e-RECP/SOCI: Tl [4,4,4,1] + H [4,2] (DiLabio & Christiansen, 1998 [9])   1.912     1341          1.908
13e-REP/KRCCSD(T): Tl [4,5,5,1] + H [3,2] (Lee et al., 1998 [10])          1.910     1360          2.02
21e-REP/KRCCSD(T): Tl [4,5,5,1] + H [3,2] (Han et al., 2000 [11])          1.877     --            2.00
21e/8fs-GRECP/14e-MRD-CI: Tl [4,4,4,3,2] + H [4,3,1] (present calculations):
  37 Mains, T=0.1                                                          1.858     1481          2.03
  ---"--- + CPC                                                            1.872     1446          1.984
  ---"--- + T=0 + FCI                                                      1.858     1453          2.10
  ---"--- + T=0 + FCI + CPC                                                1.872     1410          2.026
  320 Mains, T=0.05                                                        1.866     1408          2.23
  ---"--- + T=0 + FCI                                                      1.858     1449          2.124
  ---"--- + T=0 + FCI + CPC                                                1.870     1420          2.049
Experiment (Grundström & Valberg, 1938 [34])                               1.866 a   1390.7        2.06
Experiment (Urban et al., 1989 [36])                                       1.872 b   1391.3        --

a) Huber & Herzberg (1979) [35] have published the value 1.87 Å, which can be obtained from the rotational constant B_e.
b) This value is calculated by us from B_e.
REFERENCES

F. Rakowitz and C. M. Marian, Chem. Phys. Lett. 257, 105 (1996).
V. A. Dzuba, V. V. Flambaum, and M. G. Kozlov, Phys. Rev. A 54, 3948 (1996).
U. Wahlgren, M. Sjøvoll, H. Fagerli, O. Gropen, and B. Schimmelpfennig, Theor. Chim. Acc. 97, 324 (1997).
T. Leininger, A. Berning, A. Nicklass, H. Stoll, H.-J. Werner, and H.-J. Flad, Chem. Phys. 217, 19 (1997).
R. J. Buenker, A. B. Alekseyev, H.-P. Liebermann, R. Lingott, and G. Hirsch, J. Chem. Phys. 108, 3400 (1998).
E. Eliav, U. Kaldor, Y. Ishikawa, M. Seth, and P. Pyykkö, Phys. Rev. A 53, 3926 (1996).
A. Landau, E. Eliav, and U. Kaldor, poster presented at the European Research Conference "Relativistic Quantum Chemistry - Progress and Prospects", Acquafredda di Maratea, Italy, 10-15 April 1999.
F. Rakowitz and C. M. Marian, Chem. Phys. 225, 223 (1997).
G. A. DiLabio and P. A. Christiansen, J. Chem. Phys. 108, 7527 (1998).
H.-S. Lee, Y.-K. Han, M. C. Kim, C. Bae, and Y. S. Lee, Chem. Phys. Lett. 293, 97 (1998).
Y.-K. Han, C. Bae, S.-K. Son, and Y. S. Lee, J. Chem. Phys. 112, 2684 (2000).
K. Balasubramanian and K. S. Pitzer, Adv. Chem. Phys. 1, 287 (1987); W. C. Ermler, R. B. Ross, and P. A. Christiansen, Adv. Quant. Chem. 19, 139 (1988); K. Balasubramanian, Chem. Rev. 89, 1801 (1989); A. V. Titov and N. S. Mosyagin, Int. J. Quant. Chem. 71, 359 (1999).
N. S. Mosyagin, A. V. Titov, and Z. Latajka, Int. J. Quant. Chem. 63, 1107 (1997).
I. I. Tupitsyn, N. S. Mosyagin, and A. V. Titov, J. Chem. Phys. 103, 6548 (1995).
W. C. Ermler, Y. S. Lee, K. S. Pitzer, and N. W. Winter, J. Chem. Phys. 69, 976 (1978).
P. Hafner and W. H. E. Schwarz, Chem. Phys. Lett. 65, 537 (1979).
W. A. Goddard III, Phys. Rev. 174, 659 (1968).
R. B. Ross, J. M. Powers, T. Atashroo, W. C. Ermler, L. A. LaJohn, and P. A. Christiansen, J. Chem. Phys. 93, 6654 (1990).
V. Bonifacic and S. Huzinaga, J. Chem. Phys. 60, 2779 (1974); O. Gropen, S. Huzinaga, and A. D. McLean, J. Chem. Phys. 73, 402 (1980); S. Katsuki and S. Huzinaga, Chem. Phys. Lett. 147, 597 (1988); ibid. 152, 203 (1988); L. Seijo, Z. Barandiaran, and S. Huzinaga, Chem. Phys. Lett. 192, 217 (1992).
A. V. Titov, A. N. Petrov, A. I. Panin, and Yu. G. Khait, MOLGEP code for the calculation of matrix elements with the GRECP, St.-Petersburg, 1999.
A. V. Titov, A. O. Mitrushenkov, and I. I. Tupitsyn, Chem. Phys. Lett. 185, 330 (1991); N. S. Mosyagin, A. V. Titov, and A. V. Tulub, Phys. Rev. A 50, 2239 (1994); N. S. Mosyagin and A. V. Titov, E-print: http://xxx.lanl.gov/abs/physics/9808006; A. V. Titov and N. S. Mosyagin, Russian J. Phys. Chem., in press, E-print: http://xxx.lanl.gov/abs/physics/0008160; A. V. Titov and N. S. Mosyagin, E-print: http://xxx.lanl.gov/abs/physics/0008239.
M. G. Kozlov, A. V. Titov, N. S. Mosyagin, and P. V. Souchko, Phys. Rev. A 56, R3326 (1997); N. S. Mosyagin, M. G. Kozlov, and A. V. Titov, J. Phys. B 31, L763 (1998).
R. J. Buenker and S. D. Peyerimhoff, Theor. Chim. Acta 35, 33 (1974); ibid. 39, 217 (1975); R. J. Buenker, S. Peyerimhoff, and W. Butscher, Mol. Phys. 35, 771 (1978); A. B. Alekseyev, H.-P. Liebermann, I. Boustani, G. Hirsch, and R. J. Buenker, Chem. Phys. 173, 333 (1993); A. B. Alekseyev, R. J. Buenker, H.-P. Liebermann, and G. Hirsch, J. Chem. Phys. 100, 2989 (1994).
R. J. Buenker and R. A. Phillips, J. Molec. Struct. Theochem. 123, 291 (1977).
E. R. Davidson, in The World of Quantum Chemistry, edited by R. Daudel and B. Pullman (Reidel, Dordrecht, 1974), p. 17.
G. Hirsch, P. J. Bruna, S. D. Peyerimhoff, and R. J. Buenker, Chem. Phys. Lett. 52, 442 (1977).
A. V. Mitin, J. Comput. Chem. 19, 94 (1998).
Z. Gershgorn and I. Shavitt, Int. J. Quant. Chem. 2, 751 (1968).
I. Shavitt, in Modern Theoretical Chemistry, Vol. 3 (Methods of Electronic Structure Theory), edited by H. F. Schaefer III (Plenum Press, N.Y., 1977), p. 189.
C. E. Moore, Circ. Natl. Bur. Stand. (U.S.) 467 (1958).
W. Müller, W. J. Flesch, and W. Meyer, J. Chem. Phys. 80, 3297 (1984); W. Müller and W. Meyer, J. Chem. Phys. 80, 3311 (1984); P. Fuentealba, H. Preuss, H. Stoll, and L. von Szentpaly, Chem. Phys. Lett. 89, 418 (1982).
T. Leininger, A. Berning, A. Nicklass, H. Stoll, H.-J. Werner, and H.-J. Flad, Chem. Phys. 217, 19 (1997).
B. Grundström and P. Valberg, Z. Physik 108, 326 (1938).
H. P. Huber and G. Herzberg, Constants of Diatomic Molecules (Van Nostrand-Reinhold, New York, 1979).
R.-D. Urban, A. H. Bahnmaier, U. Magg, and H. Jones, Chem. Phys. Lett. 158, 443 (1989).
N. S. Mosyagin, M. G. Kozlov, and A. V. Titov, E-print: http://xxx.lanl.gov/abs/physics/9804013; N. S. Mosyagin, E. Eliav, A. V. Titov, and U. Kaldor, J. Phys. B 33, 667 (2000).
T. H. Dunning, Jr., J. Chem. Phys. 90, 1007 (1989).
B. Liu and A. D. McLean, J. Chem. Phys. 91, 2348 (1989); M. Gutowski, J. H. van Lenthe, J. Verbeek, F. B. van Duijneveldt, and G. Chałasiński, Chem. Phys. Lett. 124, 370 (1986).
TABLE I. Calculations of the spin-orbit splitting of the $^2P^o$ ground state in Tl (the [...]5d spinors are frozen from the SCF calculation of the nonrelativistically averaged [...]$6s^2 6p^1$ configuration).

Method | SO splitting in cm^-1
Spinor basis sets: [7,7,5], [7,7,5,3], [7,7,5,3,1]
Spin-orbit basis sets: [4,4,4], [4,4,4,3], [4,4,4,3,2]
Spinor basis set: [35,27,21,15,9,6,4]
Centre vortex structure in the presence of dynamical fermions

James C. Biddle, Waseem Kamleh, and Derek B. Leinweber

Centre for the Subatomic Structure of Matter, Department of Physics, The University of Adelaide, SA 5005, Australia

arXiv:2302.05897 (https://export.arxiv.org/pdf/2302.05897v1.pdf), doi:10.1103/physrevd.107.094507

Abstract: An analysis of the geometry and structure of centre vortices in the presence of dynamical fermions is performed. A variety of metrics are used to measure the matrix structure of the vortex-modified gauge fields. Visualisations of centre vortices are presented and percolating clusters are identified. The size of secondary vortex clusters is analysed, with substantial differences observed between the pure Yang-Mills and dynamical fermion case. Vortex fields are represented as directed graphs, with branching points acting as the vertices. This representation leads to a novel picture of vortex branching as a binomial process. These results elucidate the change in the centre vortex vacuum induced by the introduction of dynamical fermions.
I. INTRODUCTION
There is now a wealth of literature exploring the impact of centre vortices on pure Yang-Mills gauge theory. These results have consistently shown that centre vortices play an important role in the emergence of non-perturbative properties. However, there have also been consistent discrepancies between original and vortex-only calculations. Recent results [27,28] have for the first time considered centre vortices in the presence of dynamical fermions. These results demonstrated the dramatic effect dynamical fermions have on the behaviour of centre vortices. In contrast to prior pure Yang-Mills studies [19,21,22,25,29-31], the static quark potential can be fully recreated from centre vortices alone [27], and vortex removal results in complete suppression of the infrared Landau-gauge gluon propagator [28]. In light of these unexpected results, it is natural to seek a deeper understanding of these effects by directly analysing the structure of the vortices themselves.
In this work, we first look for changes in the bulk properties of the lattice configurations by analysing the norms and traces of the gauge links, as well as the values of the maximal centre gauge functional. Bulk discrepancies between pure-gauge and dynamical ensembles may suggest where the differences in vortex structure arise from.
We then expand upon the visualisation techniques developed in Ref. [32] to analyse the geometric structure of centre vortices. New developments allow us to split the vortex structure into individual disconnected clusters. From these clusters we may examine the degree of vortex percolation present in the vacuum.
In the supplemental material located at the end of this document, visualisations of these centre vortex clusters are presented as interactive 3D models embedded in the document. Instructions on viewing these models are also included therein. Figures with a corresponding interactive model that can be found in the supplemental material are marked as Interactive in the caption. Interactive models in the supplementary material are also referenced as Fig. S-x in the text. A selection of preset views that highlight regions of interest is also available.
Following cluster identification, we present a novel perspective that considers each cluster as a directed graph of vortex branching points, with the weight of each graph edge corresponding to the number of vortex plaquettes between branching points. This data structure enables us to develop quantitative measures of the size and shape of centre vortex clusters, facilitating a detailed comparison of vortex structure between pure-gauge and dynamical QCD.
This paper is structured as follows. In Sec. II we detail the centre vortex model and how centre vortices are identified on the lattice. We then present the analysis of the bulk gauge link properties in Sec. III. In Sec. IV our visualisation conventions are introduced. In Sec. V we discuss the cluster identification algorithm and subsequent findings. In Sec. VI we introduce the method by which vortex clusters can be converted to a graph, and discuss the analysis performed on these graphs. Finally, the findings of this work are summarised in Sec. VII.
II. CENTRE VORTICES
In QCD, centre vortices are regions of a gauge field that carry flux associated with $Z_3$, the centre of the $SU(3)$ gauge group. $Z_3$ consists of the three elements

$$Z_3 = \left\{ \exp\left( m\,\frac{2\pi i}{3} \right) I \;:\; m = -1,\, 0,\, +1 \right\} . \tag{1}$$
For the purposes of our discussion, m will be referred to as the centre charge of the vortex. On the lattice, thin centre vortices appear as closed sheets in four dimensions, or as closed lines on three dimensional slices of the lattice.
Centre vortices are identified on the lattice through a well-known procedure [32,33], briefly summarised here. First, the configurations are rotated to maximal centre gauge (MCG) by determining a gauge rotation, $\Omega(x)$, that maximises the functional [23,31,33]

$$\Phi = \frac{1}{V\, N_{dim}\, n_c^2} \sum_{x,\mu} \left| \mathrm{Tr}\, U_\mu^{\Omega}(x) \right|^2 . \tag{2}$$
This process brings each gauge link as close as possible to one of the elements of $Z_3$. Once the ensemble has been fixed to maximal centre gauge, each link is projected onto the nearest centre element, $U_\mu(x) \to Z_\mu(x)$, as determined by the phase of the trace of each link (a sketch of this step is given after the list below). Centre vortices are then identified by the location of non-trivial plaquettes $P_{\mu\nu} = \exp\left( m\,\frac{2\pi i}{3} \right) I$ in the $\mu$-$\nu$ plane with $m = \pm 1$. This process of centre projection defines the vortex-only ensemble, $Z_\mu(x)$. Using these identified vortices, we also construct the vortex-removed ensemble by computing $R_\mu(x) = Z_\mu^\dagger(x)\, U_\mu(x)$. Hence, this procedure results in three ensembles:

1. the original, untouched (UT) fields, $U_\mu(x)$;
2. the vortex-only (VO) fields, $Z_\mu(x)$;
3. the vortex-removed (VR) fields, $R_\mu(x)$.
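As an illustration of the projection step, the following NumPy sketch assigns a single SU(3) link to its nearest centre element by the phase of its trace and forms the corresponding vortex-removed link. The function and array conventions are assumptions for illustration; this is not the code used to generate the ensembles.

```python
import numpy as np

def centre_project(U):
    """Project an SU(3) link U (3x3 complex array) onto the nearest
    centre element Z and return (Z, R) with R = Z^dagger U.

    The nearest element is chosen from the phase of Tr U, i.e. the
    centre charge m such that exp(2 pi i m / 3) is closest in phase."""
    phase = np.angle(np.trace(U))
    m = int(np.round(3.0 * phase / (2.0 * np.pi)))  # nearest centre charge (mod 3)
    z = np.exp(2.0j * np.pi * m / 3.0)
    Z = z * np.eye(3, dtype=complex)
    R = Z.conj().T @ U                              # vortex-removed link
    return Z, R
```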
Visualisations of vortices are naturally constructed from the vortex-only ensembles, and as such the Z µ (x) fields will be of primary focus in this work. However, the effectiveness of vortex removal is also of great interest as it has been observed that the vortex removed ensembles also vary in behaviour depending on the presence or absence of dynamical fermions [27,28].
For this work, we continue the analysis performed in our previous work [27,28] and make use of three original (UT) ensembles. Each ensemble has dimensions 32 3 × 64 and is comprised of 200 lattice configurations. Two of the ensembles are (2 + 1) flavour dynamical ensembles from the PACS-CS collaboration [34]. We choose the heaviest and lightest pion mass ensembles, with masses of 701 MeV and 156 MeV respectively. This allows us to observe the greatest differentiation between the dynamical ensembles. The third ensemble is pure Yang-Mills, generated with the Iwasaki gauge action [35]. The lattice spacing is tuned to be similar to that of the PACS-CS ensembles. A summary of the lattice parameters is provided in Table I.
III. BULK PROPERTIES
In understanding the impact dynamical fermions have on the centre-vortex vacuum, it is natural to first look for bulk changes in the $SU(3)$ lattice gauge fields upon the introduction of dynamical fermions. The first measure we consider is the local contribution of each link to the MCG functional,

$$\phi_\mu(x) = \frac{1}{n_c^2} \left| \mathrm{Tr}\, U_\mu^{\Omega}(x) \right|^2 , \tag{3}$$
defined such that the total MCG functional given in Eq. (2) can be written as

$$\Phi = \frac{1}{V\, N_{dim}} \sum_{x,\mu} \phi_\mu(x) . \tag{4}$$
The distribution of $\phi_\mu(x)$ values is presented for the untouched ensembles in Fig. 1.
We observe that the pure gauge ensemble achieves typically larger values of $\phi_\mu(x)$, indicating that the links have been brought closer to the centre of $SU(3)$. The two dynamical ensembles follow each other rather closely, although the heavier pion mass appears to achieve slightly larger $\Phi$ values than its lighter counterpart. It should be noted, however, that larger values of $\phi_\mu(x)$ do not necessarily indicate that the MCG algorithm has performed better on these ensembles. As was determined in Refs. [19,36,37], there are a number of methods that can be used to increase the typical values of $\phi_\mu(x)$ obtained from maximal centre gauge. However, these methods do not necessarily improve the vortex-finding abilities of the procedure, and in some cases actually degrade the vortex-finding performance. As such, it should be understood that the results presented in Fig. 1 are simply showing a noticeable change in behaviour as we transition from pure gauge to dynamical ensembles, and not necessarily a worsening of vortex identification.
Next, we wish to compare the distribution of the trace phases, arg (Tr U µ (x)), from each ensemble both before and after fixing to maximal centre gauge. These results are presented in Fig. 2. As intended, the phases are tightly packed about the three centre values after fixing to maximal centre gauge. However, the pure-gauge results are distributed slightly closer to the centre elements than the dynamical ensembles.
In conjunction with the trace phases, we can also look at the magnitude of the traces, $|\mathrm{Tr}\, U_\mu(x)|$. These values are presented in Fig. 3. Note that a centre element will have $|\mathrm{Tr}\, U_\mu(x)| = 3$. MCG then clearly serves not only to bring the phases close to those of a centre element, but also the magnitudes. However, the effect on the magnitude is smaller than that on the phase. This suggests that there is still significant off-diagonal strength in the original ensembles after fixing to maximal centre gauge. Again, the pure gauge values are distributed closer to the centre value of 3 when compared with the dynamical results.

The next bulk measures we examine are two matrix norms designed to determine the residual off-diagonal strength present in the vortex-removed fields in MCG. The norms are

$$L_\mu(x) = \left[ \sum_{i,j} \left| U_\mu^{ij}(x) - \delta_{ij} \right|^2 \right]^{1/2} \tag{5}$$

and

$$M_\mu(x) = \left[ \sum_{\substack{i,j \\ i \neq j}} \left| U_\mu^{ij}(x) \right|^2 \right]^{1/2} . \tag{6}$$
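For concreteness, the per-link measures introduced above reduce to a few lines of NumPy; the conventions below are illustrative only.

```python
import numpy as np

def phi(U):
    """Local MCG functional value of Eq. (3): |Tr U|^2 / n_c^2."""
    return abs(np.trace(U)) ** 2 / 9.0

def l_norm(U):
    """L_mu(x) of Eq. (5): Frobenius distance of the link from the identity."""
    return np.linalg.norm(U - np.eye(3))

def m_norm(U):
    """M_mu(x) of Eq. (6): Frobenius norm of the off-diagonal part."""
    return np.linalg.norm(U - np.diag(np.diag(U)))
```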
We find for the untouched configurations that the results for both norms are identical across all ensembles, as shown in Figs. 4 and 5. However, after vortex removal we notice that differences appear in both norms. The results for L µ (x) and M µ (x) on the vortex removed ensembles are shown in Fig. 6 and Fig. 7 respectively.
We observe that the dynamical ensembles retain a greater proportion of their off-diagonal strength. This is interesting, as it has been shown in Ref. [28] that vortex removal results in a more significant loss of infrared strength in the Landau-gauge gluon propagator when dynamical fermions are present. This indicates that the residual strength as measured by our norms in MCG does not coincide with enhancement as measured via the Landau-gauge gluon propagator.
These measures indicate that there is a substantial difference in behaviour between the pure-gauge and dynamical ensembles when considering their MCG matrix substructure. Both the trace phases and magnitudes are further from the centre elements and the dynamical ensembles retain more off-diagonal strength. Here we see the change in behaviour after the introduction of dynamical fermions.
IV. VISUALISATIONS
Motivated by the difference in the bulk structure of the gauge fields in maximal centre gauge, we now wish to look more closely at the fine-grained structure of the vortex vacuum. We do this by extending the visualisation techniques first developed in Ref. [32]. Given that vortices are associated with non-trivial plaquettes, vortices themselves exist on the dual lattice. Hence, for a vortex-only ensemble we write the plaquette as [15,26]

$$P_{\mu\nu}(x) = \exp\left( \frac{\pi i}{3}\, \epsilon_{\mu\nu\kappa\lambda}\, m_{\kappa\lambda}(\bar{x}) \right) , \tag{7}$$

where $m_{\kappa\lambda}(\bar{x}) \in \{-1, 0, 1\}$ defines the directed vortex charge orthogonal to the plaquette and based at $\bar{x} = x + \frac{a}{2}(\hat{\mu} + \hat{\nu} - \hat{\kappa} - \hat{\lambda})$. Note also that $m_{\kappa\lambda}(\bar{x})$ is antisymmetric under index permutation, such that there is a natural association between the sign of $m$ and the vortex orientation.

To produce a 3D visualisation, one fixes the value of $\lambda$ in Eq. (7) to be the dimension upon which slices are taken. The remaining three dimensions comprise the slice, such that the plaquettes may now be written as

$$P_{ij}(x) = \exp\left( \frac{2\pi i}{3}\, \epsilon_{ijk}\, m_{k\lambda}(\bar{x}) \right) , \tag{8}$$
where the Latin indices enumerate the three dimensions orthogonal to the fixed $\lambda$. Using this definition, a vortex is rendered as a jet of length $a$, pointing in the $m_{k\lambda}(\bar{x})\,\hat{k}$ direction, that pierces the $P_{ij}(x)$ plaquette. For example, if we choose $\lambda = 4$, an $m = +1$ vortex identified by $P_{xy}(n)$ would be rendered in the $+\hat{z}$ direction. This rendering convention is illustrated in Fig. 8.
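As a sketch of this convention, the map from a pierced plaquette to the rendered jet can be written as follows, with the three slice axes labelled 0, 1, 2. This is an illustrative convention, not the visualisation code itself.

```python
def jet_direction(i, j, q):
    """For a plaquette P_ij carrying centre phase exp(2 pi i q / 3),
    q = +/-1, return (k, m): the pierced axis k and the signed charge
    m = q * epsilon_ijk along +k-hat, following Eq. (8)."""
    k = 3 - i - j                                     # the remaining axis
    eps = 1 if (i, j) in ((0, 1), (1, 2), (2, 0)) else -1
    return k, q * eps
```

For example, jet_direction(0, 1, +1) returns (2, +1): an m = +1 vortex identified by an x-y plaquette is rendered in the +z direction, as described above.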
A notable feature of SU (3) centre vortices is the presence of vortex branching. Due to the periodicity of the non-trivial centre phases in Z 3 , one unit of positive centre charge is equivalent to two units of negative centre charge. Hence, within a 3D slice a vortex line carrying m = +1 charge may branch into two m = −1 vortex lines. Note that this process is indistinguishable from three m = +1 vortex lines converging to the vacuum, as illustrated in Fig. 11. Recall that our visualisations illustrate the directed flow of m = +1 charge. This is why these branching points are also sometimes referred to as vortex monopoles and anti-monopoles in the literature [26]. This ambiguity in charge assignment has important ramifications for centre vortex topology, as discussed in Ref. [9].
For the purposes of this work, we will refer to intersections of three or five vortices as branching points. Intersections of four vortices occur at the intersection of vortex lines and do not constitute vortex branching. They are thus excluded from the branching point analysis. Finally, intersections of six vortices could arise from either vortex branching or the intersection of three vortex lines. As these situations are indistinguishable, for this work we will consider these points to be branching points. However, it must be noted that the occurrence of six-way branching points is so infrequent that this choice has an insignificant impact on branching point statistics.
A straightforward nomenclature for referring to branching points [26] is to define the branching genus $n_{cube}(\bar{x}\,|\,\hat{\mu})$. Here, $\hat{\mu}$ denotes the direction along which the lattice has been sliced and hence identifies the remaining three coordinates, $\hat{i}, \hat{j}, \hat{k}$, that describe the location within the 3D slice. Within the selected slice, we define $\bar{x}$ to denote the dual lattice site, $\bar{x} = x + \frac{a}{2}(\hat{i} + \hat{j} + \hat{k})$. $n_{cube}(\bar{x}\,|\,\hat{\mu})$ then counts the number of vortices piercing the elementary cube around $\bar{x}$. Thus, we have the following interpretation for the possible values of $n_{cube}(\bar{x}\,|\,\hat{\mu})$:

$$n_{cube}(\bar{x}\,|\,\hat{\mu}) = \begin{cases} 0 & \text{no vortex} \\ 2 & \text{regular vortex line} \\ 3, 5, 6 & \text{branching point} \\ 4 & \text{touching point} \end{cases} \tag{9}$$

The normalised distribution of values of $n_{cube}$ across the three ensembles is shown in Fig. 9. We observe that the distribution of the higher genus values decreases monotonically for all ensembles. The dynamical ensembles feature a greater probability of high-multiplicity branching points. This predicts a greater vortex density for these ensembles relative to the pure gauge case, as will be discussed in the next section.
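A minimal sketch of the classification of Eq. (9), assuming the piercing information for the six faces of an elementary cube has already been extracted, is:

```python
def classify_cube(pierced_faces):
    """Classify an elementary cube by its branching genus, Eq. (9).

    pierced_faces: six booleans, one per face of the cube, True if a
    centre vortex pierces that face."""
    n = sum(bool(p) for p in pierced_faces)
    if n in (3, 5, 6):
        return n, "branching point"
    labels = {0: "no vortex", 2: "vortex line", 4: "touching point"}
    return n, labels.get(n, "invalid")  # odd n < 3 violates flux conservation
```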
V. CLUSTER IDENTIFICATION
It is well known that for SU (2) gauge fields in the confining phase, percolation of centre vortices can be used as an order parameter for the transition from the confined phase to the deconfined phase [6,8]. At a glance, the visualisations constructed in Ref. [32] support this assessment, with a single large connected vortex cluster clearly visible in each visualisation and only a handful of separate smaller secondary clusters present. Studying the confinement phase transition at the critical temperature will be the subject of future work. However, it is of interest to build the necessary tools to perform such a study. This requires us to quantitatively understand the degree to which a vortex ensemble is dominated by a primary percolating cluster, as opposed to a collection of smaller secondary clusters. To do this, it is necessary to develop an algorithm that can trace these vortex lines and identify disconnected clusters.
Such an analysis is quite straightforward in SU (2), as SU (2) vortices do not permit branching points. This simplifies the algorithm, as each vortex cluster consists of a single line that may be followed until it arrives back at its starting location. In SU (3), vortex branching demands that the algorithm track multiple branching paths, and only terminates when there are no continuations for every path. We describe such an algorithm here.
The starting point for the algorithm is to have all vortices in a 3D slice stored along with their associated tip and base coordinates. With this setup, the algorithm proceeds as follows:
1. Choose an arbitrary vortex to start at. Mark it as visited and record it as belonging to an incomplete line segment.
5. Once there are no unvisited touching vortices, mark the segment as complete.
6. If all segments are complete, the cluster is complete. Record all vortices in all segments as belonging to this cluster. Return to step 1, selecting an unvisited vortex.
7. If there are no unvisited vortices, all clusters have been identified and the algorithm is complete.
This algorithm can then be applied to each 3D slice to isolate all independent vortex clusters.
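An equivalent way to phrase this search is as a connected-components problem on the dual lattice, where two jets belong to the same cluster whenever they share an endpoint. The breadth-first sketch below (with hypothetical coordinate conventions, and assuming periodic boundaries are already folded into the coordinates) is a simplified stand-in for the segment-based algorithm above.

```python
from collections import defaultdict, deque

def find_clusters(vortices):
    """Group vortex jets into connected clusters.

    vortices: list of (base, tip) pairs of hashable dual-lattice
    coordinates. Returns a list of clusters, each a list of indices
    into `vortices`."""
    at_point = defaultdict(list)              # endpoint -> touching jets
    for i, (base, tip) in enumerate(vortices):
        at_point[base].append(i)
        at_point[tip].append(i)

    visited, clusters = set(), []
    for start in range(len(vortices)):
        if start in visited:
            continue
        visited.add(start)
        queue, cluster = deque([start]), []
        while queue:
            i = queue.popleft()
            cluster.append(i)
            for end in vortices[i]:           # base and tip of jet i
                for j in at_point[end]:
                    if j not in visited:
                        visited.add(j)
                        queue.append(j)
        clusters.append(cluster)
    return clusters
```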
Employing this algorithm and our visualisation conventions defined in Sec. IV, the pure-gauge vortex vacuum on a single slice appears as in the top-left panel of Fig. 10. The interactive version of this visualisation may be found in Fig. S-19. As our investigation takes place at zero temperature on a large-volume lattice, the choice of slice direction does not impact most intrinsic measurements, and as such we choose to present plots obtained from slicing in the $\hat{x}$ direction. The only notable exception is the size of the percolating cluster, as it fills the 3D volume and is therefore smaller for $\hat{t}$ slices. The choice of $\hat{x}$ will be assumed for the remainder of this work unless stated otherwise. Numerical values presented in tables will be averaged across all slice dimensions, where applicable.
We observe that indeed the vacuum is dominated by a single primary percolating cluster, with an assortment of small secondary clusters also present. Branching points are readily observed within the visualisation, as can be seen in Fig. 11.

The dominance of a single vortex cluster is even more pronounced once it is removed, as shown in the right-hand panels of Fig. 10 for the pure-gauge (top) and dynamical-fermion (bottom) slices. Almost all the vortex matter is associated with the percolating cluster. However, if we focus on the dynamical-fermion secondary clusters in the bottom-right panel of Fig. 10, we see that the number of secondary clusters has increased substantially when compared to the pure gauge ensemble. Moreover, an increase in the complexity of the secondary structures through branching-point clusters is also evident.
These secondary clusters may also be explored in the interactive models given in Figs. S-21 and S-22 for the pure-gauge and dynamical-fermion cases. Several features are highlighted there in the "Views" menu, and these views are also available in the full vortex illustrations of Figs. S-19 and S-20.
To gauge the relative sizes of the primary and secondary clusters, we calculate the average total number of vortices per slice, $N_{slice}$, the average number of vortices associated with the primary cluster, $N_{primary}$, and the average number of vortices associated with a secondary cluster, $N_{secondary}$. $N_{slice}$, $N_{primary}$, and $N_{secondary}$ for all three ensembles are presented in Table II. Note that the spatial values are obtained by averaging across the three spatial dimensions acting as the slice dimension. When $\hat{t}$ is selected for slicing the four-dimensional volume, the spatial volume is half that obtained when a spatial direction is selected. As such, the percolating cluster values in the $\hat{t}$ column are expected to be half those in the spatial slicing column.
Interestingly, we observe that $N_{secondary}$ decreases in the presence of dynamical fermions, indicating that the secondary clusters are smaller on average. This is due to a proliferation of elementary plaquette vortex paths in dynamical fermion QCD, as illustrated in the bottom-right panel of Fig. 10.
We also see that N slice and N primary from the heavier quark-mass ensemble are larger than the values calculated on the light ensemble. This is likely a result of the fact that the heavier pion mass configurations have a slightly larger physical volume. We can determine if this is the case by considering the vortex density, ρ vortex .
The vortex density is calculated by considering the proportion of plaquettes that are pierced by a vortex, $P_{vortex}$. This is best calculated by first defining the indicator function

$$v_{\mu\nu}(x) = \begin{cases} 1, & P_{\mu\nu}(x) = \exp\left( \pm \frac{2\pi i}{3} \right) I \\ 0, & P_{\mu\nu}(x) = I \end{cases} \tag{10}$$

We then calculate the proportion of pierced plaquettes as
$$P_{vortex} = \frac{1}{6\,V} \sum_{\substack{\mu,\nu \\ \mu < \nu}} \sum_x v_{\mu\nu}(x) , \tag{11}$$
where the value 6 counts the number of plaquettes associated with site $x$ in four dimensions and $V = N_x N_y N_z N_t$ counts the number of sites in the sum over $x$. The physical density is then given by

$$\rho_{vortex} = \frac{P_{vortex}}{a^2} . \tag{12}$$
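Assuming the indicator function of Eq. (10) has been evaluated into an array, Eqs. (11) and (12) reduce to a simple mean; a sketch:

```python
import numpy as np

def vortex_density(v, a):
    """rho_vortex of Eqs. (11)-(12).

    v: boolean array with one entry per plaquette (mu < nu) and site,
       e.g. of shape (6, Nx, Ny, Nz, Nt); True if pierced.
    a: lattice spacing. Returns the density in units of a^-2."""
    return np.mean(v) / a ** 2     # np.mean(v) = P_vortex of Eq. (11)
```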
In the case where the vortex distribution is isotropic, the density derived in four dimensions is equal to the mean of the three-dimensional density when averaged over slices (such as in Fig. 10). We can decompose the lattice coordinates into a 1+3-dimensional notation, $x = (w, \bar{x}\,|\,\hat{\mu})$, with $w$ corresponding to the index in the slicing dimension $\hat{\mu}$ and $\bar{x}$ specifying the location within the corresponding hyperplane. Then the vortex density for slice $w$ along the dimension $\hat{\mu}$ is

$$P_3(w, \hat{\mu}) = \frac{1}{3\, V_3(\hat{\mu})} \sum_{\substack{i<j \\ i,j \neq \mu}} \sum_{\bar{x}} v_{ij}(w, \bar{x}\,|\,\hat{\mu}) , \tag{13}$$
where $v_{ij}(w, \bar{x}\,|\,\hat{\mu})$ is the restriction of the indicator function in Eq. (10) to the relevant slice, $V_3(\hat{\mu})$ is the corresponding 3-volume (e.g. $V_3(\hat{x}) = N_y N_z N_t$), and the division by 3 averages the number of plaquettes associated with each site in three dimensions.
Upon averaging over all $w$ slices in a given dimension and then averaging over the four slice directions, one finds the following for the mean density

$$\bar{P}_3 = \frac{1}{3\,V} \cdot \frac{1}{4} \sum_\mu \sum_{\substack{i<j \\ i,j \neq \mu}} \sum_{w, \bar{x}} v_{ij}(w, \bar{x}\,|\,\hat{\mu}) . \tag{14}$$
Noting that each plaquette has been counted twice in the sum over i, j and µ, one recovers P vortex of Eq. (11). Of course, in both cases, the physical density is governed by the area of the plaquette as in Eq. (12).
The vortex densities for the three ensembles are shown in Table III. We see that $\rho_{vortex}$ is indeed larger on the ensemble with the lightest pion mass, indicating a consistent trend of increasing vortex density as the physical pion mass is approached from above.
Another quantity of interest is the branching point density. This is obtained by considering the fraction of elementary cubes within each 3D slice that contain a branching point, $P_{branch}$. Again, this is best calculated by first considering the indicator function

$$b(\bar{x}\,|\,\hat{\mu}) = \begin{cases} 1, & n_{cube}(\bar{x}\,|\,\hat{\mu}) = 3, 5, 6 \\ 0, & \text{otherwise} \end{cases} \tag{15}$$
The branching point proportion is then given by

$$P_{branch} = \frac{1}{4\,V} \sum_\mu \sum_{\bar{x}} b(\bar{x}\,|\,\hat{\mu}) , \tag{16}$$
where $\mu$ sums over all four dimensions. As this density is defined as an average over 3D cubes, the associated physical density is

$$\rho_{branch} = \frac{P_{branch}}{a^3} . \tag{17}$$
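Analogously, given the branching genus over all slicing directions, Eqs. (16) and (17) amount to:

```python
import numpy as np

def branching_density(n_cube, a):
    """rho_branch of Eqs. (16)-(17).

    n_cube: integer array of branching genus values, one per elementary
    cube, accumulated over all four slicing directions.
    a: lattice spacing. Returns the density in units of a^-3."""
    return np.isin(n_cube, (3, 5, 6)).mean() / a ** 3
```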
The branching point density is shown in Table III. Here we observe that the branching point density follows the same trend as the vortex density, namely that it increases with decreasing dynamical quark mass.
To quantify the change in the behaviour of $N_{secondary}$ recorded in Table II, we count the number of clusters of a given size, averaged across slices and the ensemble. These results are shown in Fig. 12. There are a number of interesting features present here. Firstly, it is clear that it is not possible to have clusters containing fewer than four vortices, and that it is also not possible to have five vortices in a cluster. There is an interesting trend that the number of clusters containing an even number of vortices is higher than the number containing an odd number of vortices, especially at small cluster sizes. This results in the alternating comb pattern present in Fig. 12, and is a consequence of the fact that a branching point is necessary for a cluster to contain an odd number of vortices. Hence, this alternating pattern speaks to the presence of a 'cost' associated with a branching point, making clusters containing branching points less probable than those without. This effect is mitigated as the cluster size increases and the number of vortex arrangements leading to that cluster size increases.
Comparing the different ensembles, we find that the number of clusters at each size on the dynamical ensembles exceed almost all of the pure gauge clusters. However, if we normalise the histogram by the total number of clusters found in the ensemble, as shown in Fig. 13, we find that the pure gauge ensembles have a comparable or greater proportion of larger secondary clusters present, perhaps due to the low vortex density. We observe that the dynamical ensembles still retain a larger proportion of the smallest secondary clusters.
We can measure the size of a cluster by defining the cluster extent as the largest pairwise distance between vortices belonging to the same cluster, as done in Ref. [8]. The cluster extents are binned, and the content of each bin represents the average number of vortices in the associated cluster, relative to the total number of vortices in the ensemble. The cluster extents are normalised by the greatest distance on an $N_y \times N_z \times N_t$ slice of a periodic lattice,

$$L_{max} = \sqrt{ (N_y/2)^2 + (N_z/2)^2 + (N_t/2)^2 } . \tag{18}$$
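A direct sketch of this extent measure, quadratic in the cluster size and using the minimum-image convention to respect the periodic boundaries, is:

```python
import numpy as np
from itertools import combinations

def cluster_extent(coords, dims):
    """Largest pairwise distance between vortices in one cluster on a
    periodic lattice, normalised by L_max of Eq. (18).

    coords: list of integer 3-vectors (dual-lattice positions).
    dims:   lattice extents, e.g. (Ny, Nz, Nt)."""
    dims = np.asarray(dims, dtype=float)
    l_max = np.linalg.norm(dims / 2.0)
    extent = 0.0
    for p, q in combinations(coords, 2):
        d = np.abs(np.asarray(p, dtype=float) - np.asarray(q, dtype=float))
        d = np.minimum(d, dims - d)           # minimum-image convention
        extent = max(extent, np.linalg.norm(d))
    return extent / l_max
```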
The results of this analysis for our three ensembles are shown in Fig. 14.
The cluster extents shown in Fig. 14 clearly demonstrate that at zero temperature the SU (3) vortex vacuum is dominated by a single percolating vortex cluster, with only a minority of vortices comprising smaller secondary loops. It is expected that this situation will change as the temperature exceeds the critical temperature, as has been observed in SU (2) gauge theory [8]. We also observe that the pure gauge secondary clusters tend to be larger than their dynamical counterparts.
We find that the vortex and branching point densities significantly increase upon the introduction of dynamical fermions. However, relative to the total number of vortices present, the pure gauge sector contains a greater proportion of larger secondary clusters than the dynamical case. Aside from the primary vortex cluster, the dynamical vortex vacuum is dominated by an excess of very small secondary clusters. The visualisations reveal significant branching-point complexity in the large secondary clusters of the dynamical-fermion vortex vacuum. Several features are highlighted in the "Views" menu of the interactive figures provided in the supplemental material.
VI. BRANCHING POINT GRAPHS
The cluster analysis presented in Sec. V enables us to gain insight into the size of the primary and secondary vortex clusters. It is also of interest to study the relationship between branching points, as these structures are absent in SU(2), where much of the analysis of vortex structure has previously been performed. Furthermore, it is helpful to abstract the vortex clusters such that we need not be concerned with their precise 3D coordinates. To that end, we seek to represent vortex clusters as directed graphs, with branching points acting as vertices and the edges given by vortex lines, each edge weighted by the number of vortices in the line.
The algorithm to perform this graph construction starts with an identified vortex cluster as defined in Sec. V. First, for each vortex we evaluate whether it touches a point with n_cube(x|μ̂) ≥ 3 at its tip, base, both or neither. Each branching or touching point is also assigned a unique ID. The algorithm then proceeds as follows, with a Python sketch given after the list:
1. Find an untraversed vortex with a branching/touching point at its base. If no untraversed vortex can be found, then we are done. Otherwise, set the found vortex to be the current vortex and mark it as traversed. Set the current inter-branching point distance to 1 and record the ID of the branching/touching point at the base.

2. Check if the current vortex has a branching/touching point at its tip. If it does, create an edge between the saved branching/touching point ID and the ID of the branching/touching point at the tip, with weight equal to the current inter-branching point distance. Return to step 1.
3. Otherwise, find the vortex with its base touching the tip of the current vortex and mark it as traversed. Set the new vortex to be the current vortex and add 1 to the inter-branching point distance. Return to step 2.
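A sketch of this traversal is given below. It assumes each vortex is a hypothetical object carrying base/tip coordinates, the IDs of any branching or touching point at each end (base_point, tip_point, None if absent) and a traversed flag; these field names are illustrative assumptions rather than the code used in this work.

    def build_branch_graph(vortices):
        # Returns weighted edges (base_id, tip_id, n_vortices_in_line).
        edges = []
        while True:
            # Step 1: an untraversed vortex rooted at a branching/touching point.
            start = next((v for v in vortices
                          if not v.traversed and v.base_point is not None), None)
            if start is None:
                return edges
            start.traversed = True
            current, dist, base_id = start, 1, start.base_point
            while True:
                # Step 2: reaching another point completes a weighted edge.
                if current.tip_point is not None:
                    edges.append((base_id, current.tip_point, dist))
                    break
                # Step 3: follow the unique vortex continuing from the tip.
                current = next(v for v in vortices
                               if not v.traversed and v.base == current.tip)
                current.traversed = True
                dist += 1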
The resulting graph encodes the separations between all branching and touching points within a cluster without reference to the specific cluster geometry.
Applying this algorithm to the primary clusters shown in Fig. 10 for pure gauge and dynamical vacuum fields, we produce the graphs shown in Figs. 15 and 16, respectively. These visualisations clearly demonstrate the significant increase in vortices and branching points present on the dynamical configurations.
Utilising this new construction, we wish to determine a measure of the separation between connected branching points. A pair of branching points may be connected via multiple vortex lines, and these lines may also pass through touching points that we wish to exclude from the calculation. The presence of these touching points makes it impossible to devise a unique distance between two branching points, as this distance will depend on the manner in which the touching point is traversed, as shown in Fig. 17. Instead, we devise an algorithm for calculating the inter-branching point distance that enables a random selection of directions with which to traverse these touching point vertices. The algorithm proceeds as follows:

1. Randomly choose a branching point vertex with untraversed outgoing edges. Record the vertex as the first in a path. Set the current path length to 0. If there is no vertex with an untraversed outgoing edge then we are done.

2. Randomly choose an untraversed outgoing edge to follow to a new vertex. Mark the chosen edge as traversed, add the new vertex to the current path and add its length to the path length.

3. If this edge arrives at a branching point, store the path and the current path length and return to step 1.

4. If the edge arrives at a touching point, repeat from step 2 with the new vertex as the starting vertex.

The end result of this algorithm is a list of paths between branching points that may pass through touching points. However, not all edges will be traversed by this method, as the presence of touching points allows for cycles to emerge from these paths. Fortunately, due to conservation of vortex flux, any cycle emerging from a given path will return to that same path. Hence, to rectify the path lengths, we simply need to traverse all cycles on a given path and add their length to the existing length. This is done by performing a modified depth-first search on each vertex to traverse any cycles that were omitted from the above method. Pseudocode for this search on a single vertex is as follows:

    for edge in this_vertex.edges:
        if (edge is not traversed and edge is outgoing):
            path.length += edge.length
            mark edge as traversed
            recurse on the vertex at the end of edge

The path lengths now accurately represent the distance between branching points. This concludes our determination of the branching point separations. Note that because of the inherent ambiguities in the branching point graphs, the solution is not unique. We determine whether the impact of this randomness is significant in the ensemble average by choosing a single calculation of the distances as a reference, then repeating the distance calculation nine further times with different random seeds. We then use the Kolmogorov-Smirnov test [40] to determine the equality of the different distributions. We find that the test statistic for all ensembles is of order 10⁻⁵, with corresponding p-values consistent with 1. Thus we are satisfied that the variance in this distance measure is negligible in the ensemble average, and we are therefore justified in considering it a useful measure of branching point separation.
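The seed-dependence check described above is a standard two-sample test; a minimal sketch using SciPy, assuming the separations from the reference and repeated runs are stored in plain arrays:

    from scipy.stats import ks_2samp

    def seeds_consistent(dist_ref, dist_alt):
        # Two-sample Kolmogorov-Smirnov test between the branching point
        # separation distributions obtained with different random seeds.
        res = ks_2samp(dist_ref, dist_alt)
        return res.statistic, res.pvalue

A statistic of order 10⁻⁵ with a p-value near 1, as found here, indicates the two samples are statistically indistinguishable.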
The average separation, d, for each ensemble is presented in Table IV. The physical separation ∆ = a d is also determined. Here we see that there is a consistent trend of decreasing average separation with decreasing pion mass. This coincides with our determination of the branching point and vortex densities, as a higher density suggests a smaller separation between points.
We also present the average number of edges in the graphs, n_edges, and the average number of edges per node, n_edges/n_nodes, in Table IV as measures of the complexity and structure of the graphs. We observe that, as expected, the number of edges substantially increases upon the introduction of dynamical fermions. The number of edges per node is close to 1.5 for all ensembles, as the majority of edges emerge from a three-way branching point and terminate at another three-way branching point; since each edge has two endpoints, a graph built purely from three-way vertices has n_edges/n_nodes = 3/2. However, the number of edges per node is larger on the dynamical ensembles, likely due to the increase in vortex density resulting in a higher number of vortex intersections.
The distribution of branching point separations is shown in Fig. 18. The results are normalised by the total number of vortex paths considered, such that the histogram has unit area. Apart from an enhancement of the smallest branching point separations, the distances are exponentially distributed. This distribution is consistent with a constant branching probability, i.e. the probability of branching at the next link of a vortex chain is independent of the length of the vortex chain.
This supports a previous conjecture for the interpretation of vortex branching [26,31]: that a vortex can be considered to have some fixed rate of branching as it propagates through space-time. This interpretation allows for vortex branching on the lattice to be considered as a binomial random variable X with some probability of branching, q. Thus, the probability of branching after k lattice plaquettes is given by the geometric distribution
    P_k = q (1 − q)^(k−1). (19)
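The constant-rate picture is easy to verify numerically: simulating chains that branch with fixed probability q at each plaquette reproduces Eq. (19). A minimal sketch (q = 0.05 and the sample size are arbitrary illustrative choices):

    import numpy as np

    rng = np.random.default_rng(0)
    q, n = 0.05, 100_000
    k = rng.geometric(q, size=n)   # plaquettes until the first branching
    for kk in range(1, 6):
        # Sampled frequency versus P_k = q (1 - q)^(k - 1).
        print(kk, (k == kk).mean(), q * (1 - q) ** (kk - 1))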
Typically, one estimates the rate of a binomial random variable by evaluating q = 1/X̄, where X̄ = Σ_k k P_k. However, due to the deviations from linearity found at small separations in the log-distributions shown in Fig. 18, this measure fails to capture the true rate of branching. To account for this, we instead fit a linear function,
    f(k) = α − β k, (20)
to the log of the distribution of branching point separations for k > 3. The result of this fit for each ensemble is plotted in Fig. 18.
Of course, for a normalised distribution, α is constrained by β. However, the significant non-exponential behaviour for k ≤ 3 spoils the exponential normalisation constraint. Thus α is introduced to accommodate this, and we use β, which describes the k dependence, to determine the branching probability q.
The parameters of this fit are related to the log of the binomial rate,

    log(P_k) = log(q) − log(1 − q) + log(1 − q) k = α − β k. (21)

Equating the coefficients of the terms linear in k, we resolve the branching rate
    q = 1 − e^(−β). (22)
Note that for small β, q ≈ β. This rate can be converted to a physical quantity by considering the rate per unit length, λ = q/a. All fitted parameters are calculated on 200 bootstrap ensembles, with errors determined via the bootstrap variance.
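A sketch of the rate extraction, assuming counts is a hypothetical mapping from separation k to its normalised frequency; the bootstrap error analysis is omitted for brevity:

    import numpy as np

    def fitted_branching_rate(counts, a, kmin=4):
        ks = np.array([k for k in counts if k >= kmin and counts[k] > 0])
        logP = np.log([counts[k] for k in ks])
        slope, alpha = np.polyfit(ks, logP, 1)   # fit f(k) = alpha - beta k
        beta = -slope
        q = 1.0 - np.exp(-beta)                  # Eq. (22)
        return q, q / a                          # lambda = q / a, in fm^-1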
The rate described above can then be compared to the naive rate, q_naive, calculated by considering the number of cubes containing branching points divided by the number of cubes pierced by two or more vortices. Defining

    c(x|μ̂) = 1 if n_cube(x|μ̂) ≠ 0, and 0 otherwise, (23)
and recalling the branching point indicator defined in Eq. (15), we define the naive rate to be

    q_naive = [Σ_μ Σ_x b(x|μ̂)] / [Σ_μ Σ_x c(x|μ̂)]. (24)
The associated physical quantity is the rate per unit length, λ_naive = q_naive/a. The calculated rate parameters from both methods are shown in Table V. We observe that with both measures the physical branching rate increases as the physical pion mass is approached. We emphasise that only q contains the detailed information on the path geometry.
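The naive rate uses the same cube counts as the earlier density sketch; under the same hypothetical n_cube array:

    import numpy as np

    def naive_branching_rate(n_cube, a):
        b = np.isin(n_cube, (3, 5, 6))   # branching indicator, Eq. (15)
        c = n_cube != 0                  # pierced-cube indicator, Eq. (23)
        q_naive = b.sum() / c.sum()      # Eq. (24)
        return q_naive, q_naive / a      # lambda_naive in fm^-1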
The difference between the fitted and naive rates is an interesting finding. The naive rate will include the short-range non-exponential behaviour, inconsistent with a constant branching rate. At larger separations, vortex branching follows a constant rate. However, there are clearly short-range effects that result in clustering of branching points, which in turn necessitates the more sophisticated approach detailed above for q. These clustering effects appear to be amplified upon introduction of dynamical fermions. Whether this clustering radius is a physical effect or the result of finite lattice-spacing effects is an interesting avenue for future study.
It should be noted that whilst the distributions shown in Fig. 18 take into account all primary and secondary clusters, the results are minimally affected if the secondary clusters are removed due to the vast majority of branching points belonging to the primary cluster.
An interesting correlation we observe is that the ratio between the pure gauge and dynamical branching rates is similar to the corresponding ratio of the vortex-only string tensions calculated in Ref. [27]. The vortex density is naturally correlated with the branching rate. In SU (2) at least, it has been shown through simple combinatoric arguments that the Wilson loop area law and hence the string tension can be related to the density of percolating random vortices [41]. It seems reasonable to infer then that the correlation we observe between the branching rate and string tension ratios is not simply a coincidence but a reflection of the differing structure of the vortex fields in the pure gauge and dynamical sectors.
VII. CONCLUSION
In this work we have explored the impact of dynamical fermions on the centre-vortex structure of the vacuum ground-state fields.
Examining the bulk properties of the original gauge fields, we find that dynamical fermions lead to greater off-diagonal strength in the lattice gauge links. The presence of dynamical fermions gives rise to an increased abundance of centre vortices and branching points, as reflected by the increasing vortex and branching point densities as the physical pion mass is approached.
We construct cluster identification algorithms to identify independent vortex clusters and use this identification to construct visualisations of the vortex vacuum. These reveal that the vacuum is dominated by a single percolating cluster. Our results show that dynamical fermions lead to an abundance of smaller clusters as compared to their pure-gauge counterparts.
We employ a novel method of reducing vortex clusters to directed graphs, with vertices defined by branching points and edges connecting them weighted by the number of vortex links. Using this construction, we render the graphs to illustrate the radical change in the number of vortices and branching points after the introduction of dynamical fermions. We define a measure of branching point separation, and observe that the distribution of separations follows an approximate geometric distribution. We estimate the rate of this distribution and find that there is a tendency for branching points to cluster at small separations.
Understanding the role of dynamical quarks in the QCD vacuum continues to be an interesting area of study. The effect of matter fields on the vacuum phase structure has been explored elsewhere within the gauge-Higgs model [42][43][44][45]. The extension of these ideas to QCD may shed further light on the nature of confinement. In particular, investigations that further our understanding of string breaking in terms of QCD vacuum structure is desirable.
The findings of this paper illustrate the substantial impact dynamical fermions have on the geometry and structure of the centre vortex vacuum. These results add to the growing body of evidence [27, 28] for the effect of dynamical fermions on centre vortices as compared to the well-established pure gauge sector. The relationship between the vortex geometry analysed here and the shift in observable behaviour is still a subject of great interest. Future work is also intended to explore how this geometry changes in the finite temperature regime.

TABLE V. The naive and fitted branching rates, q_naive and q, and their physical counterparts λ_naive and λ, obtained through the methods described in the text. The fit parameter β is also presented. Only q and λ are associated with a constant branching probability.
FIG. 1. Distribution of the local maximal centre gauge functional, R_μ(x), as defined in Eq. (3).
FIG. 2. Distribution of trace phases before (top) and after (bottom) fixing to MCG. We plot the bins for the dynamical ensembles side-by-side as they are similar to one another, with the pure gauge results overlayed.

FIG. 3. Distribution of trace magnitudes before (top) and after (bottom) fixing to MCG.
FIG. 4. The L_μ(x) norm calculated prior to fixing to MCG.
FIG. 5. The M_μ(x) norm calculated prior to fixing to MCG.

FIG. 6. The L_μ(x) norm calculated on the VR ensembles.
FIG. 7. The M_μ(x) norm calculated on the VR ensembles.
FIG. 8. The spatial vortex plotting convention with λ = 4. An m = +1 vortex (left) identified by the plaquette P_xy(n) is rendered in the ẑ direction. An m = −1 vortex (right) identified by the same plaquette is rendered in the −ẑ direction.
FIG. 9. The distribution of branching point genera as defined in Eq. (9).
and in the interactive view 'Branching Points' in Fig. S-19. The transition to full QCD leads to a marked shift in the behaviour of the centre vortices, as can be seen from the vortex vacuum of the lightest pion mass ensemble shown in the bottom-left panel of Fig. 10. The interactive version of this visualisation may be found in Fig. S-20. The total number of vortices has increased significantly.
FIG. 10. (Top left) The centre vortex structure of a pure-gauge configuration. (Top right) The pure-gauge vortex vacuum as shown in the top-left panel with the primary percolating vortex cluster removed. (Bottom left) The centre-vortex structure of a 2+1 flavour dynamical-fermion configuration from the mπ = 156 MeV ensemble. (Bottom right) The dynamical vortex structure in the bottom-left panel with the primary percolating vortex cluster removed. Note the increased abundance of elementary vortex paths and the prevalence of branching points. In each panel, separate vortex clusters are rendered with different colours. These 3D models are generated with AVS scientific visualisation software [38]. Interactive in the supplemental material.

FIG. 11. A collection of branching points (red ovals), a touching point (green circle) and a secondary loop (red jets) as they appear in our visualisations. Each jet illustrates the flow of m = +1 centre charge.
FIG. 13. Proportion of clusters of a given size per slice, normalised by the total number of clusters in their respective ensemble.
FIG. 14. Histogram of the cluster extents relative to L_max for all three ensembles, as described in the text. It is clear that the vortex vacuum at zero temperature is dominated by a single percolating cluster, as can be seen by the dominance of the bin containing the clusters of maximal extent. Bin widths are 0.1 and are centred at the tick marks of the x-axis.
FIG. 15. The pure-gauge primary vortex cluster from the slice shown in the top-left panel of Fig. 10 rendered as a graph. Branching/touching points are the vertices and connecting vortex lines are the edges. Blue vertices indicate three-way branching points and orange vertices indicate four-way touching points. Visualisations were generated with the Pyvis visualisation package [39].
FIG. 16. The mπ = 156 MeV primary vortex cluster from the slice shown in the bottom-left panel of Fig. 10 rendered as a graph. Plotting conventions are as described in Fig. 15.
FIG. 17. An example of how the touching point T1 introduces ambiguity into the distance between branching points, B_i. B1 can connect to either B3 or B4, with B2 then connecting to B4 or B3 respectively. This would result in either distances of 4, 2 or 3, 3 being recorded by our algorithm, depending on the order of traversal.
) mπ = 156 MeV, log scale.
FIG. 18. Normalised branching point (BP) separations from all ensembles, along with the corresponding fit to f(k) given in Eq. (20).
FIG. S-21. The centre-vortex structure of the secondary loops identified from the pure-gauge configuration shown in Fig. S-19. (Click to activate.)

FIG. S-22. The centre-vortex structure of the secondary loops identified from the dynamical-fermion configuration shown in Fig. S-20. (Click to activate.)
TABLE I. A summary of the lattice ensembles used in this work [34].

    Type         a (fm)   β      κ_{u,d}    mπ (MeV)
    Pure gauge   0.100    2.58   -          -
    Dynamical    0.102    1.9    0.13700    701
    Dynamical    0.093    1.9    0.13781    156
TABLE II. The average number of vortices associated with: the total per 3D slice (N_slice), the primary cluster (N_primary), and a secondary cluster (N_secondary), as calculated on the three ensembles. Separate averages are listed for the slicing dimension μ̂ being temporal or spatial.

                               t̂           x̂, ŷ, ẑ
    Pure gauge   N_slice       1673(3)     3347(6)
                 N_primary     1638(3)     3277(6)
                 N_secondary   7.32(5)     7.40(3)
    701 MeV      N_slice       3651(4)     7302(8)
                 N_primary     3366(4)     6731(8)
                 N_secondary   5.047(5)    5.057(3)
    156 MeV      N_slice       3227(4)     6452(8)
                 N_primary     2964(4)     5926(9)
                 N_secondary   5.011(5)    5.018(3)
TABLE III. The vortex density as calculated on the three ensembles. The proportion of pierced plaquettes, P_vortex, the physical vortex density, ρ_vortex, the proportion of branching points, P_branch, and the physical branching point density, ρ_branch, are presented.

    Ensemble     P_vortex     ρ_vortex (fm⁻²)   P_branch     ρ_branch (fm⁻³)
    Pure gauge   0.01702(3)   1.702(3)          0.00249(1)   2.49(1)
    701 MeV      0.03714(4)   3.556(4)          0.00897(1)   8.41(1)
    156 MeV      0.03282(4)   3.770(5)          0.00753(1)   9.27(2)
TABLE IV. The average distance between branching points, d, the same distance in physical units, ∆, the average number of edges per graph, n_edges, the edge density, ρ_edges, and the average number of edges per node, n_edges/n_nodes.

    Ensemble     d          ∆ (fm)      n_edges   ρ_edges (fm⁻³)   n_edges/n_nodes
    Pure gauge   13.55(2)   1.355(2)    238(1)    4.14(1)          1.53849(8)
    701 MeV      7.691(4)   0.7860(4)   970(1)    15.84(2)         1.58667(6)
    156 MeV      8.082(5)   0.7541(5)   807(1)    17.32(3)         1.58332(7)
2. Considering the last vortex in each incomplete line segment, produce a list of all unvisited vortices touching this vortex (both base and tip, accounting for periodicity). Then mark them all as visited.

3. Append one of the found vortices to the current segment. For all others, begin a new segment.

4. If there are incomplete segments, repeat from step 2 for each incomplete segment.
ACKNOWLEDGEMENTS

We thank the PACS-CS Collaboration for making their 2+1 flavour configurations available via the International Lattice Data Grid (ILDG). This research was undertaken with the assistance of resources from the National Computational Infrastructure (NCI), provided through the National Computational Merit Allocation Scheme and supported by the Australian Government through Grant No. LE190100021 via the University of Adelaide Partner Share. This research is supported by the Australian Research Council through Grants No. DP190102215 and No. DP210103706. WK is supported by the Pawsey Supercomputing Centre through the Pawsey Centre for Extreme Scale Readiness (PaCER) program.
[1] G. 't Hooft, On the Phase Transition Towards Permanent Quark Confinement, Nucl. Phys. B 138, 1 (1978).
[2] G. 't Hooft, A Property of Electric and Magnetic Flux in Nonabelian Gauge Theories, Nucl. Phys. B 153, 141 (1979).
[3] L. Del Debbio, M. Faber, J. Greensite, and S. Olejnik, Center dominance and Z(2) vortices in SU(2) lattice gauge theory, Phys. Rev. D 55, 2298 (1997), arXiv:hep-lat/9610005.
[4] M. Faber, J. Greensite, and S. Olejnik, Casimir scaling from center vortices: Towards an understanding of the adjoint string tension, Phys. Rev. D 57, 2603 (1998), arXiv:hep-lat/9710039.
[5] L. Del Debbio, M. Faber, J. Giedt, J. Greensite, and S. Olejnik, Detection of center vortices in the lattice Yang-Mills vacuum, Phys. Rev. D 58, 094501 (1998), arXiv:hep-lat/9801027.
[6] R. Bertle, M. Faber, J. Greensite, and S. Olejnik, The structure of projected center vortices in lattice gauge theory, JHEP 03, 019 (1999), arXiv:hep-lat/9903023.
[7] M. Faber, J. Greensite, S. Olejnik, and D. Yamada, The vortex finding property of maximal center (and other) gauges, JHEP 12, 012 (1999), arXiv:hep-lat/9910033.
[8] M. Engelhardt, K. Langfeld, H. Reinhardt, and O. Tennert, Deconfinement in SU(2) Yang-Mills theory as a center vortex percolation transition, Phys. Rev. D 61, 054504 (2000), arXiv:hep-lat/9904004.
[9] M. Engelhardt and H. Reinhardt, Center projection vortices in continuum Yang-Mills theory, Nucl. Phys. B 567, 249 (2000), arXiv:hep-th/9907139.
[10] M. Engelhardt, Center vortex model for the infrared sector of Yang-Mills theory: Topological susceptibility, Nucl. Phys. B 585, 614 (2000), arXiv:hep-lat/0004013.
[11] R. Bertle, M. Faber, J. Greensite, and S. Olejnik, P vortices, gauge copies, and lattice size, JHEP 10, 007 (2000), arXiv:hep-lat/0007043.
[12] K. Langfeld, H. Reinhardt, and J. Gattnar, Gluon propagators and quark confinement, Nucl. Phys. B 621, 131 (2002), arXiv:hep-ph/0107141.
[13] J. Greensite, The confinement problem in lattice gauge theory, Prog. Part. Nucl. Phys. 51, 1 (2003), arXiv:hep-lat/0301023.
[14] F. Bruckmann and M. Engelhardt, Writhe of center vortices and topological charge: An explicit example, Phys. Rev. D 68, 105011 (2003), arXiv:hep-th/0307219.
[15] M. Engelhardt, M. Quandt, and H. Reinhardt, Center vortex model for the infrared sector of SU(3) Yang-Mills theory: Confinement and deconfinement, Nucl. Phys. B 685, 227 (2004), arXiv:hep-lat/0311029.
[16] P. Y. Boyko, V. G. Bornyakov, E. M. Ilgenfritz, A. V. Kovalenko, B. V. Martemyanov, M. Muller-Preussker, M. I. Polikarpov, and A. I. Veselov, Once more on the interrelation between Abelian monopoles and P-vortices in SU(2) LGT, Nucl. Phys. B 756, 71 (2006), arXiv:hep-lat/0607003.
[17] E.-M. Ilgenfritz, K. Koller, Y. Koma, G. Schierholz, T. Streuer, V. Weinberg, and M. Quandt, Localization of overlap modes and topological charge, vortices and monopoles in SU(3) LGT, PoS LATTICE2007, 311 (2007), arXiv:0710.2607 [hep-lat].
[18] V. G. Bornyakov, E. M. Ilgenfritz, B. V. Martemyanov, S. M. Morozov, M. Muller-Preussker, and A. I. Veselov, Interrelation between monopoles, vortices, topological charge and chiral symmetry breaking: Analysis using overlap fermions for SU(2), Phys. Rev. D 77, 074507 (2008), arXiv:0708.3335 [hep-lat].
[19] A. O'Cais, W. Kamleh, K. Langfeld, B. Lasscock, D. Leinweber, P. Moran, A. Sternbeck, and L. von Smekal, Preconditioning maximal center gauge with stout link smearing in SU(3), Phys. Rev. D 82, 114512 (2010), arXiv:0807.0264 [hep-lat].
[20] M. Engelhardt, Center vortex model for the infrared sector of SU(3) Yang-Mills theory: Topological susceptibility, Phys. Rev. D 83, 025015 (2011), arXiv:1008.4953 [hep-lat].
[21] P. O. Bowman, K. Langfeld, D. B. Leinweber, A. Sternbeck, L. von Smekal, and A. G. Williams, Role of center vortices in chiral symmetry breaking in SU(3) gauge theory, Phys. Rev. D 84, 034501 (2011), arXiv:1010.4624 [hep-lat].
[22] E.-A. O'Malley, W. Kamleh, D. Leinweber, and P. Moran, SU(3) centre vortices underpin confinement and dynamical chiral symmetry breaking, Phys. Rev. D 86, 054503 (2012), arXiv:1112.2490 [hep-lat].
[23] D. Trewartha, W. Kamleh, and D. Leinweber, Connection between center vortices and instantons through gauge-field smoothing, Phys. Rev. D 92, 074507 (2015), arXiv:1509.05518 [hep-lat].
[24] J. Greensite, Confinement from center vortices: A review of old and new results, EPJ Web Conf. 137, 01009 (2017), arXiv:1610.06221 [hep-lat].
[25] J. C. Biddle, W. Kamleh, and D. B. Leinweber, Gluon propagator on a center-vortex background, Phys. Rev. D 98, 094504 (2018), arXiv:1806.04305 [hep-lat].
[26] F. Spengler, M. Quandt, and H. Reinhardt, Branching of center vortices in SU(3) lattice gauge theory, Phys. Rev. D 98, 094508 (2018), arXiv:1810.04072 [hep-th].
[27] J. Biddle, W. Kamleh, and D. Leinweber, Static quark potential from centre vortices in the presence of dynamical fermions, (2022), arXiv:2206.00844 [hep-lat].
[28] J. Biddle, W. Kamleh, and D. Leinweber, Impact of dynamical fermions on the centre vortex gluon propagator, (2022), arXiv:2206.02320 [hep-lat].
[29] D. Trewartha, W. Kamleh, and D. Leinweber, Evidence that centre vortices underpin dynamical chiral symmetry breaking in SU(3) gauge theory, Phys. Lett. B 747, 373 (2015), arXiv:1502.06753 [hep-lat].
[30] D. Trewartha, W. Kamleh, and D. B. Leinweber, Centre vortex removal restores chiral symmetry, J. Phys. G 44, 125002 (2017), arXiv:1708.06789 [hep-lat].
[31] K. Langfeld, Vortex structures in pure SU(3) lattice gauge theory, Phys. Rev. D 69, 014503 (2004), arXiv:hep-lat/0307030.
[32] J. C. Biddle, W. Kamleh, and D. B. Leinweber, Visualization of center vortex structure, Phys. Rev. D 102, 034504 (2020), arXiv:1912.09531 [hep-lat].
[33] A. Montero, Study of SU(3) vortex-like configurations with a new maximal center gauge fixing method, Phys. Lett. B 467, 106 (1999), arXiv:hep-lat/9906010.
[34] S. Aoki et al. (PACS-CS), 2+1 flavor lattice QCD toward the physical point, Phys. Rev. D 79, 034503 (2009), arXiv:0807.1661 [hep-lat].
[35] Y. Iwasaki, Renormalization group analysis of lattice theories and improved lattice action. II. Four-dimensional non-Abelian SU(N) gauge model, (1983), arXiv:1111.7054 [hep-lat].
[36] T. G. Kovacs and E. T. Tomboulis, On P vortices and the Gribov problem, Phys. Lett. B 463, 104 (1999), arXiv:hep-lat/9905029.
[37] V. G. Bornyakov, D. A. Komarov, and M. I. Polikarpov, P vortices and drama of Gribov copies, Phys. Lett. B 497, 151 (2001), arXiv:hep-lat/0009035.
[40] J. L. Hodges, The significance probability of the Smirnov two-sample test, Arkiv för Matematik 3, 469 (1958).
[41] M. Engelhardt, K. Langfeld, H. Reinhardt, and O. Tennert, Interaction of confining vortices in SU(2) lattice gauge theory, Phys. Lett. B 431, 141 (1998), arXiv:hep-lat/9801030.
[42] J. Greensite and K. Matsuyama, Confinement criterion for gauge theories with matter fields, Phys. Rev. D 96, 094510 (2017), arXiv:1708.08979 [hep-lat].
[43] J. Greensite and K. Matsuyama, What symmetry is actually broken in the Higgs phase of a gauge-Higgs theory?, Phys. Rev. D 98, 074504 (2018), arXiv:1805.00985 [hep-th].
[44] J. Greensite and K. Matsuyama, Higgs phase as a spin glass and the transition between varieties of confinement, Phys. Rev. D 101, 054508 (2020), arXiv:2001.03068 [hep-th].
[45] J. Greensite and K. Matsuyama, Symmetry, confinement, and the Higgs phase, Symmetry 14, 177 (2022), arXiv:2112.06421 [hep-lat].
Supplemental Material

James C. Biddle, Waseem Kamleh, and Derek B. Leinweber
Centre for the Subatomic Structure of Matter, Department of Physics, The University of Adelaide, SA 5005, Australia

This supplementary document contains interactive 3D models embedded in the text, complementary to the static images presented in the main text. To interact with these models, it is necessary to open this document in Adobe Reader or Adobe Acrobat (requires version 9 or newer). Linux users may install Adobe Acroread version 9.4.1, or use a Windows emulator such as PlayOnLinux. 3D content must be enabled for the interactive content to be available, and for proper rendering it is necessary to enable double-sided rendering in the preferences menu. To activate the models, simply click on the image. To rotate the model, click and hold the left mouse button and move the mouse. Use the scroll wheel or shift-click to zoom. Some pre-set views of the model are also provided to highlight areas of interest. These can be accessed by right clicking and using the "Views" menu. To reset the model back to its original orientation and zoom, press the 'home' icon in the toolbar or change the view to 'Default view'.
FIG. S-19. The centre vortex structure of a ground-state vacuum field configuration in pure SU(3) gauge theory. The flow of +1 centre charge through the gauge field is illustrated by the jets (see main text for a description of the plotting conventions). Blue jets are used to illustrate the primary percolating vortex cluster, while other colours illustrate the secondary clusters. (Click to activate.)
FIG. S-20. The centre-vortex structure of a ground-state vacuum field configuration in dynamical 2+1 flavour QCD with mπ = 156 MeV. Symbols are as described in Fig. S-19. (Click to activate.)